LabVIEW

What's the best way to handle simultaneous writes to the same TDMS file?

I have multiple parallel loops that are processing analog data (arrays of doubles) at differing rates. Each loop runs at a different frequency and handles a different amount of data. The processes are completely independent of each other, except that upon certain conditions (e.g. an exceedance or a user command) each loop may need to write its current data into a single TDMS file. Each loop's data is written into a dedicated group and channel hierarchy.

 

At times when more than one loop is writing to the single TDMS file, it appears that performance slows way down - by that I mean the user interface lags, queues start to fill up, and loop iteration times increase.

 

The most data I need to write at any time (total) is on the order of 40 MB/s, so I doubt I'm pushing the limits of TDMS (this project is being developed on a brand-new PXI chassis with a 4-drive striped RAID array). I think the problem lies in how I'm accessing the file.

 

I'm managing the file reference in a separate process and copying that single file reference into a local variable of each loop.

 

Perhaps I'm going about this the wrong way? Do I need multiple file references? What's the proper way to handle simultaneous asynchronous writes from multiple loops to a single TDMS file? Is there a good way other than splitting up the storage into separate files?

Message 1 of 11

I've had trouble with losing data when writing to TDMS from multiple threads.

 

I'd suggest a modified consumer that consumes from multiple threads and lets a single logger write the file.
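
 

Roughly what I have in mind, sketched in Python only because I can't drop a block diagram in here (the message layout and the function names are just placeholders, not any LabVIEW API):

    import queue
    import threading

    # one shared queue: every acquisition loop is a producer,
    # a single logger loop is the only thing that ever touches the TDMS file
    log_queue = queue.Queue()

    def acquisition_loop(group, channel, acquire):
        while True:
            data = acquire()                        # array of doubles from this loop
            if data is None:
                break
            log_queue.put((group, channel, data))   # producers only enqueue

    def logger_loop(write_tdms):
        # single writer: all disk I/O is serialized here, in arrival order
        while True:
            msg = log_queue.get()
            if msg is None:                         # sentinel to shut down
                break
            group, channel, data = msg
            write_tdms(group, channel, data)        # the only place the file ref is used

    threading.Thread(target=logger_loop, args=(print,), daemon=True).start()

The point is that only the logger ever holds the TDMS reference, so writes come out in a linear order and the acquisition loops never block on the disk.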

 

Let us know if you get the I/O rates you need.

 

Ben

Message 2 of 11

That's kind of what I was thinking - shoving the desired data, channel and group strings into a cluster then queueing it up for a dedicated writer loop elsewhere. That seems reasonable because it ensures that things are written in a linear fashion and that any wait won't block the current thread but of course it adds complexity. I was hoping for an elegant "in place" solution, but thanks for your reply.

Message 3 of 11

Hi pjrose,

 

Could you please let us know:

1) What version of LabVIEW are you using?

2) What type of PXIe chassis, controller, and RAID array are you using? Is it the NI PXIe-1075 chassis, NI PXIe-8133 controller, and NI HDD-8265 RAID?

 

Thank you,

Yongqing Ye

NI R&D

Message 4 of 11

@pjrose wrote:

That's kind of what I was thinking - shoving the desired data, channel and group strings into a cluster then queueing it up for a dedicated writer loop elsewhere. That seems reasonable because it ensures that things are written in a linear fashion and that any wait won't block the current thread but of course it adds complexity. I was hoping for an elegant "in place" solution, but thanks for your reply.


 

Slight mod for performance reasons.

 

Shoving the data into a cluster may invoke a buffer copy.

 

If you used a unique queue for each collector-to-logger transfer and defined the metadata at the time the queues are created, then each queue element could consist of simply the update data. Provided the wire does not fork, LabVIEW can re-use the buffer that contains the update for the transfer, saving you a data buffer and its associated CPU overhead.
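
 

In sketch form (Python standing in for G again, with made-up names), the difference is just where the group/channel strings live:

    import queue

    class ChannelQueue:
        """One queue per collector, with its metadata fixed when it is created."""
        def __init__(self, group, channel):
            self.group = group          # metadata decided at creation time
            self.channel = channel
            self.q = queue.Queue()      # elements are just the raw data arrays

    queues = [ChannelQueue("Engine", "RPM"), ChannelQueue("Engine", "EGT")]

    def logger_pass(write_tdms):
        # the logger already knows each queue's group/channel, so the element
        # it dequeues is only the update data - no cluster, no extra copy
        for cq in queues:
            try:
                data = cq.q.get_nowait()
            except queue.Empty:
                continue
            write_tdms(cq.group, cq.channel, data)

    queues[0].q.put([1.0, 2.0, 3.0])    # a producer enqueues only the array
    logger_pass(print)

In G the equivalent is one named queue per collector, obtained with the group/channel decided up front, and the data wire going straight into Enqueue Element without branching.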

 

If you are curious about writing from multiple threads to a single TDMS file, then you could shove the TDMS file ref into a DVR and then use in-place operations on the DVR to do your file writing. I suspect that will end up creating a single bottleneck, with all threads throttled by the disk I/O. Normal disk I/O would in most cases be slower than all of the other things you will be doing, but if you have fancy disk hardware with a large disk cache, etc... maybe not.
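
 

If it helps, here is that idea in sketch form (a plain lock standing in for the DVR/in-place structure; the names are made up and none of this is a LabVIEW call):

    import threading

    tdms_lock = threading.Lock()            # stands in for the DVR / in-place structure
    tdms_ref = open("data.bin", "ab")       # stands in for the single TDMS file reference

    def write_from_any_loop(payload: bytes):
        # every loop funnels through the same lock, so the writes are serialized,
        # but each loop now waits out the other loops' disk I/O
        with tdms_lock:
            tdms_ref.write(payload)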

 

Now if you run into trouble with what I suggested above, we can change things over to take advantage of the new TDMS streaming features built into DAQmx that support I/O directly to disk. In that case I would move toward having a single TDMS file for each of the I/O streams and let the NI stuff do the work. After the data is collected, the multiple TDMS files could be read and the data consolidated into a single final file.
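
 

The consolidation step at the end is straightforward. As a rough illustration, here is what it could look like in Python with the nptdms package (assuming that package and its TdmsFile/TdmsWriter API; in LabVIEW you would do the same loop with the standard TDMS read and write functions):

    from nptdms import TdmsFile, TdmsWriter, ChannelObject

    def consolidate(source_paths, dest_path):
        # read each per-stream file and append its channels to one final file
        with TdmsWriter(dest_path) as writer:
            for path in source_paths:
                tdms = TdmsFile.read(path)
                for group in tdms.groups():
                    for channel in group.channels():
                        writer.write_segment(
                            [ChannelObject(group.name, channel.name, channel[:],
                                           properties={})])

    consolidate(["stream1.tdms", "stream2.tdms"], "final.tdms")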

 

I am curious about what you end up using and the performance you realize, since I get these types of apps regularly and your feedback will help me out, along with others who will be treading the same path.

 

Trying to help,

 

Ben

 

PS: If Herbert replies contradicting what I suggested, listen to him.

 

Message 5 of 11

It's the 1065 chassis with the 8133 CPU/controller and the 8260 RAID controller.

Message 6 of 11

Then what version of LabVIEW are you using? If LV 2011, I would recommend using the TDMS Advanced API.

Message 7 of 11

Ben, I was wondering if you could elaborate on this 'metadata', 'unique queue', and 'wire forking' business... I'm not 100% sure what you're referring to there.

 

I've gotten it working to some extent by inserting a data type code as the first element before enqueuing to the common writer loop, but I'm not seeing drastic gains in performance as opposed to simply bundling everything together. It does seem to be working, but the UI still lags by several seconds.
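
 

For reference, the writer loop is doing essentially this (a Python sketch of the logic, not the actual VI; the type codes and names are just examples):

    import queue

    write_queue = queue.Queue()

    # the first element of each message is a type code telling the writer
    # how to unpack the rest (what I'm doing with the cluster in G)
    WAVEFORM, EVENT = 0, 1

    def writer_loop(write_tdms, log_event):
        while True:
            msg = write_queue.get()
            if msg is None:
                break                               # shutdown sentinel
            kind, payload = msg[0], msg[1:]
            if kind == WAVEFORM:
                group, channel, data = payload
                write_tdms(group, channel, data)
            elif kind == EVENT:
                log_event(payload[0])

    write_queue.put((WAVEFORM, "Engine", "RPM", [1.0, 2.0, 3.0]))
    write_queue.put(None)
    writer_loop(print, print)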

 

I read about using a DVR but I don't think that will help me here.

 

Did I mention that the data is being acquired and processed in RT via the hypervisor and transferred to Windows via shared memory? Not sure if it changes anything, but it definitely doesn't help the resource situation.

Message 8 of 11

I am using LabVIEW 2011. Could you let me know your thoughts on why the advanced TDMS functions will help here? I would like some direction in researching the implementation using those functions.

 

Thanks.

Message 9 of 11

Yes, sure!

 

First of all, the TDMS Advanced API is designed especially for high-performance streaming use cases. If you are using an NI PXIe-8133 and an NI RAID array, then the TDMS Advanced API is a perfect match if you really care about performance.

 

Second, compared with the TDMS Standard API, the Advanced API has the ability to write and read asynchronously, which the Standard API doesn't have. The TDMS Advanced API was introduced in LV 2010, and in LV 2011 we made some improvements to make it even faster; it can now achieve the full transfer rate of the underlying hardware.
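
 

The idea behind asynchronous writes, shown as a generic Python sketch (just the concept with a background worker, not the Advanced API itself): the call hands the buffer off and returns immediately, so your acquisition loop never stalls waiting for the disk.

    from concurrent.futures import ThreadPoolExecutor

    io_pool = ThreadPoolExecutor(max_workers=1)   # stands in for the asynchronous write engine

    def async_write(write_tdms, group, channel, data):
        # returns immediately; the actual disk write completes in the background
        return io_pool.submit(write_tdms, group, channel, data)

    future = async_write(print, "Engine", "RPM", [1.0, 2.0, 3.0])
    future.result()    # only wait here if you actually need to know it finished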

Message 10 of 11