How to efficiently log multiple data streams with TDMS


Ok, first off, I'll admit I am completely clueless when it comes to logging, TDMS in particular.  That said, I'm trying to work out the best way to log some data from an existing LabVIEW-based control system, so that users can later access that data in the event of catastrophic failure or other situations where they might want to see exactly what happened during a particular run.

 

I've got a total of between 6 and 12 data points that need to be stored (depending on how many sensors are on the system).  These are values being read from a cRIO control system.  They can all be set to Single data type, if necessary - even the one Boolean value I'm tracking is already being put through the "convert to 0,1" for graph display purposes.  The data is currently read at 100ms intervals for display, but I will be toying with the rate that I want to dump data to the disk - a little loss is OK, just need general trending for long term history.  I need to keep file sizes manageable, but informative enough to be useful later.

 

So, I am looking for advice on the best way to set this up.  It will need to be a file that can be read concurrently as it is being written, when necessary - one of the reasons I am looking at TDMS in the first place (it was recommended to me previously).  I also need an accurate Date/Time stamp that can be used when displaying the data graphically on a chart, so it can sync up with the external camera recordings to correlate just what happened and when.
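The read-while-writing requirement boils down to holding two references into the same file, one appending and one reading. A toy Python sketch of that pattern, with a plain text file standing in for the TDMS file (file name made up):

```python
import os
import tempfile

# Toy sketch of concurrent read-while-write, with a plain text file standing
# in for the TDMS file (TDMS itself supports this via two file references).
path = os.path.join(tempfile.mkdtemp(), "run.log")  # made-up file name

writer = open(path, "a")        # reference 1: keeps appending records
writer.write("tick 0\n")
writer.flush()                  # flush so the reader can see the data

with open(path) as reader:      # reference 2: opened while writing continues
    first = reader.readline()   # sees everything flushed so far

writer.write("tick 1\n")        # writing continues after the read
writer.flush()
writer.close()
```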

 

Are there specific pitfalls I should watch for?  Should I bundle all of the data points into an array for each storage tick, then decimate the array on the other end when reading?  I've dug through many of the examples, even found a few covering manual timestamp writing, but is there a preferred method that keeps file size minimized (or extraction simplified)?
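The bundle-then-decimate idea above can be sketched in plain Python (sensor count shrunk to 3 for readability; all values made up): if every storage tick's points are appended to one flat channel, reading one sensor back means taking every Nth element.

```python
# Toy sketch of bundle-then-decimate: all points from each storage tick go
# into one flat sequence, and each sensor is pulled back out by taking
# every Nth element on the read side.
NUM_SENSORS = 3
flat = []
for tick in range(4):                                   # four storage ticks
    scan = [tick * 10 + s for s in range(NUM_SENSORS)]  # fake readings
    flat.extend(scan)                                   # interleaved: s0, s1, s2, s0, ...

sensor_1 = flat[1::NUM_SENSORS]   # decimate: every 3rd element, offset 1
# sensor_1 is [1, 11, 21, 31]
```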

 

I definitely appreciate any help...  It's easy to get overwhelmed and confused by all of the various methods I am finding for handling TDMS files, and by trying to determine which method is right for me.

0 Kudos
Message 1 of 12
(3,166 Views)

1) The TDMS logging examples that ship with LabVIEW are a good starting point for learning what the TDMS nodes can do.

2) TDMS supports reading while logging. You can either open the same file separately for logging and for reading, or wire the same reference to both the TDMS Write and TDMS Read nodes; both work.

3) TDMS files organize data in a three-level hierarchy of objects: root, groups, and channels. So you do not have to pack all of your data into one array; you can log different values to different channels.
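A toy in-memory model of that hierarchy (plain Python dictionaries, not the real TDMS API; the group, channel, and property names here are made up) may make the layout concrete:

```python
# Toy model of the TDMS three-level hierarchy: root -> groups -> channels.
# Each sensor gets its own named channel instead of being packed into one
# array, so any channel can be read back independently later.
tdms_layout = {
    "properties": {"author": "control system"},    # root-level metadata
    "groups": {
        "Run Data": {                              # one group for this run
            "Pressure": [101.3, 101.4, 101.2],     # one channel per sensor
            "Temperature": [22.5, 22.6, 22.6],
            "Valve Open": [0.0, 1.0, 1.0],         # Boolean stored as 0/1
        },
    },
}

# Reading back one channel is a direct lookup - no decimation required.
pressure = tdms_layout["groups"]["Run Data"]["Pressure"]
```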

0 Kudos
Message 2 of 12
(3,151 Views)

Thanks for the reply, deppSu...

 

Do you know if there are any downsides to breaking data up into groups and such?  File sizes are a primary concern for me, and I did read that you can incorrectly organize your TDMS files and end up with a bunch of unnecessary header information, which dramatically increases the file size.  Do you know if breaking up my data into groups will end up costing in terms of file size?

0 Kudos
Message 3 of 12
(3,113 Views)

This is a good question and an advanced topic for TDMS. Basically, to minimize the metadata size in your TDMS file, you should log data with the same "data layout": write the same set of channels on each write, and the same raw size for each channel. In LabVIEW, you can achieve that by putting the TDMS Write inside a loop; you can also use the NI_MinimumBufferSize property or the TDMS Advanced API. You can find help and examples for all of these in LabVIEW.
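Some back-of-the-envelope arithmetic shows why a consistent layout matters. The per-channel metadata cost below is an assumed round number for illustration only, not the actual TDMS header size:

```python
# Assumed numbers, for illustration only: some metadata cost per channel
# whose description must be (re)written in a segment, 4 bytes per sample.
META_PER_CHANNEL = 28     # assumption, not the real TDMS header size
BYTES_PER_SAMPLE = 4      # 32-bit Single (SGL)
CHANNELS = 12
WRITES = 10_000           # e.g. one write per 200 ms for ~33 minutes

# Layout changes on every write: every segment re-describes all channels.
worst = WRITES * CHANNELS * (META_PER_CHANNEL + BYTES_PER_SAMPLE)

# Identical layout on every write: metadata written once, then raw data only.
best = CHANNELS * META_PER_CHANNEL + WRITES * CHANNELS * BYTES_PER_SAMPLE

# With these assumptions the inconsistent layout is roughly 8x larger.
```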

 

You can refer to the optimization section of http://www.ni.com/white-paper/5696/en/ if you want deeper insight.

0 Kudos
Message 4 of 12
(3,087 Views)

I need to bump this topic again...  I'll be honest, the TDMS examples and available help are completely letting me down here.

 

As I stated, I have up to 12 data values that I need to stream into a log file, so TDMS was suggested to me.  The fact that I can concurrently read a file being written to was a prime reason I chose this format.  And, "it's super easy" as I was told...

 

Here's the problem.  I have multiple data streams.  Streams that are not waveform data, but actual real-time data feedback from a control system - values being read from a cRIO control system into a host computer (which is where I want to log the data).  I also need to log an accurate timestamp with this data.  This data will be streamed to a log file in a loop that consistently writes a data set every 200ms (that may change, not exactly sure on the timing yet).

 

Every worthwhile example that I've found has assumed I'm just logging a single waveform, and the data formatting is totally different from what I need.  I've been flailing around with the code, trying to find a correct structure to write my data (put it all in an array, write individual points, etc.) and it is, quite honestly, giving me a headache.  And finding the correct way to apply the correct timestamp (the accurate date and time the data was collected) is so uncharacteristically obtuse and hard to track down...  This isn't even counting how to read the data back out of the file to display for later evaluation and/or troubleshooting...  Augh!

 

It's very disheartening when a colleague can throw everything I'm trying to do together in 12 minutes in the very limited SCADA user interface program he uses to monitor his PLCs...  Yet LabVIEW, the superior program I always brag about, is slowly driving me insane trying to do what seems like a relatively simple task like logging...

 

So, does anyone have any actual useful examples of logging multiple DIFFERENT data points (not waveforms) and timestamps into a TDMS file?  Or real suggestions for how to accomplish it, other than "go look at the examples" which I have done (and redone).  Unless, of course, you have an actual relevant example that won't bring up more questions than it answers for me, in which case I say "bring it on!"

 

Thanks for any help...  My poor overworked brain will be eternally grateful.

0 Kudos
Message 5 of 12
(2,983 Views)

What problems do you have when you just do it?  The simple way, save point by point.

0 Kudos
Message 6 of 12
(2,967 Views)

It's the organization of the whole thing.  Point by point makes perfect sense, but do I set up groups and channels for each of my 12 data streams, do it with 12 separate "write to TDMS" actions, or group the data together somehow?  How do I organize the data so that there's not a massive amount of useless overhead in the files?  There's so much info about streaming waveforms and DAQ signals straight into TDMS, but very little about the best way to log other types of values (and effectively get them back out).  

0 Kudos
Message 7 of 12
(2,962 Views)

I would suggest just doing the simple thing and see if it works.  TDMS already has buffering that you can hopefully adjust if there are issues.  

0 Kudos
Message 8 of 12
(2,953 Views)

You can put the TDMS Write in a loop, using a string as the channel name, or, if the channels are all the same data type, you can wire the 2-D data array directly to a single TDMS Write with an array of strings as the channel names. As for the timestamp, you should log it as a separate channel.
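In text form, that per-scan layout can be sketched like this (plain Python standing in for the TDMS Write call; channel names and values are made up). Each scan writes the same 12 channels plus a timestamp channel of the same length:

```python
# Sketch of "same layout every write": 12 value channels plus one timestamp
# channel, appended to in lockstep. The dict stands in for the TDMS file.
channel_names = [f"Sensor {i}" for i in range(12)]  # hypothetical names

log = {name: [] for name in channel_names}
log["Timestamp"] = []            # timestamp lives in its own channel

def write_scan(values, t):
    """One storage tick: 12 values plus the time they were sampled."""
    assert len(values) == len(channel_names)   # keep the layout identical
    for name, value in zip(channel_names, values):
        log[name].append(value)
    log["Timestamp"].append(t)

# Two scans 200 ms apart (values and times made up):
write_scan([float(i) for i in range(12)], 0.0)
write_scan([float(i) + 0.5 for i in range(12)], 0.2)
```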

0 Kudos
Message 9 of 12
(2,929 Views)

That's the part where I stumble, deppSu...  My data is being sampled at a set rate from the controller (100-200ms), and I want to log that directly to a file as I go.  So, it's a set of 12 individual data points for each scan, but those points are all the same type (just decimal numbers, can be whatever representation I choose).  I can throw them together easily into an array for storage, but that gives me a 1-D array, not the 2-D array that most examples I find for "multi-channel TDMS" seem to assume.  When I have tried to figure out writing the data as a 1-D array, I get weird errors, and it looks like my channel names or some other input does not match up.  When I finally get something to pass the error check and run, it all ends up as one long, jumbled set of data, not multiple channels...  As I said, frustrating.

 

I may try just individual writes for each data point today, each with their own channel name.  It seems horribly inefficient, and certainly will make for a very long code diagram to have 12 individual writes, but at least I can see if it works.  Until I can find an explanation for a better way to do it, that is.

 

Thanks to you guys for the suggestions and comments...  It's helping to at least sort a bit of how these files work in my brain.  Still welcoming any further suggestions or examples though!

0 Kudos
Message 10 of 12
(2,905 Views)