TDMS logging from multiple concurrent tasks

I have an application with multiple DAQmx tasks running in parallel, continuously acquiring data at different rates (10 Hz, 100 Hz, 1 kHz, and possibly another kHz-range rate). I would like to log all of these to a file and am looking at the TDMS logging feature.  Can I share the same file across these tasks using the DAQmx 9.0 log and read TDMS functions?  Also, what happens to my tasks if there is a file error (e.g., the disk gets full)?
Paul Falkenstein
Coleman Technologies Inc.
CLA, CPI, AIA-Vision
Labview 4.0- 2013, RT, Vision, FPGA
Message 1 of 19

The DAQmx feature can only write one file per task.  The reason for this limitation is headers and synchronization.  For example, if one task wanted to write to "Group1" and another to "Group2", the TDMS file would have to contain a header between blocks of data to signify "Okay, now the data is for Group2."  Writing many of those headers to the file can slow down performance.  In addition, the tasks cannot write to the TDMS file at the same time: while one thread is writing data to the file, the other cannot, so they need to lock each other out.  This locking behavior would slow down performance as well.

 

Therefore, what we are left with is the way the DAQmx logging feature works: a single task writes to a single file.  It writes only one header at the beginning of the file and then raw binary data (2 bytes per sample, typically) for the entire length of the file.  The net benefit is really small files and really great performance.  However, you do end up with separate files for separate parallel tasks.  The only remedy (though not in your case) is to use channel expansion (that is, use one task for lots of devices/channels); the problem in your case is that the different rates prevent channel expansion.
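To get a feel for what that raw format means in practice, here is a rough back-of-the-envelope size estimate in Python. The header size and the example channel count and duration are illustrative assumptions, not values taken from DAQmx:

```python
# Rough size estimate for a DAQmx raw-logged TDMS file: one header at the
# start, then raw binary samples (2 bytes per sample, typically).
# header_bytes is a hypothetical placeholder, not a DAQmx-specified value.
def raw_log_size_bytes(channels, rate_hz, seconds,
                       bytes_per_sample=2, header_bytes=4096):
    """Approximate file size for one continuously logged task."""
    return header_bytes + channels * rate_hz * seconds * bytes_per_sample

# e.g. 40 channels at 1 kHz logged for one hour:
size = raw_log_size_bytes(40, 1000, 3600)
print(size / 1e6, "MB")  # about 288 MB of raw data plus the small header
```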

 

I probably gave you more of an explanation than you cared to hear about, but hopefully this makes sense.

 

As to your second question, if there were some file I/O error in the course of "Log and Read" mode (such as running out of disk space), the DAQmx Read call will report such an error.

 

Let me know if you have any additional questions.

Thanks,

Andy McRorie
NI R&D
Message 2 of 19

What is the best way to handle this then, make one file per task?

Can these files be consolidated at the end into one file with several groups?  TDMS seems like a nice method to save my data, since it is essentially a bunch (about 40-60 channels) of continuous waveforms.

Paul Falkenstein
Coleman Technologies Inc.
CLA, CPI, AIA-Vision
Labview 4.0- 2013, RT, Vision, FPGA
Message 3 of 19
Solution
Accepted by topic author falkpl

Hey Paul,

 

The best way to handle this would be to create one file per task.

 

The files can be merged simply by concatenating their binary content.  Attached is an example that combines the two files with LabVIEW Binary File I/O.
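In a text language, that approach amounts to nothing more than appending one file's bytes to the other; TDMS readers then see the groups from both files. A minimal Python sketch, with hypothetical file names:

```python
# Merge two TDMS files by concatenating their raw bytes, mirroring what the
# attached LabVIEW example does with Binary File I/O. Paths are hypothetical.
def concatenate_tdms(first_path, second_path, merged_path):
    with open(merged_path, "wb") as out:
        for path in (first_path, second_path):
            with open(path, "rb") as src:
                out.write(src.read())
```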

Thanks,

Andy McRorie
NI R&D
Message 4 of 19
Thanks a lot, this seems to work.
Paul Falkenstein
Coleman Technologies Inc.
CLA, CPI, AIA-Vision
Labview 4.0- 2013, RT, Vision, FPGA
Message 5 of 19

Does this mean that parallel writing to the same TDMS file also has reliability issues?  I have tried some experiments with parallel writing to different TDMS groups in the same file; sometimes it works, and sometimes the groups in the file are corrupted.

 

I assume this means that the parallel write should not be used???

 

I would prefer to be able to perform parallel writing, because it gives a much more user-friendly structuring of the data.

 

Any impact on this in LabVIEW 2010???

 

Carsten

Message 6 of 19

Also, it appears that the "mergeTDMSfiles" VI will not work for very large files, since the Build Array function is memory-resident.   Any way to get around this?

 

Carsten

Message 7 of 19

@CarstenPXI wrote:

[...] sometimes there is corruption different TDMS groups on the same file. 

 


I am very interested in finding out more about that. Could you post some code that I could use to reproduce the corruption?


@CarstenPXI wrote:
I assume this means that the parallel write should not be used???

No. Our APIs should either support that and create valid files in the process or return an error. Anything else would likely be a bug.


@CarstenPXI wrote:

Also, it appears that the "mergeTDMSfiles" VI will not work for very large files, since the Build Array function is memory-resident.   Any way to get around this?


Hope this helps:

 

[Inline image attachment]
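The same merge can be done without holding either file in memory by copying in fixed-size chunks, so memory use stays constant regardless of file size. A Python sketch of that streaming approach (paths and chunk size are illustrative, not from the attached example):

```python
import shutil

# Streaming merge: append each source file to the output in fixed-size
# chunks, so memory use is bounded by CHUNK rather than by the file sizes.
CHUNK = 1 << 20  # 1 MiB per read; an arbitrary but reasonable choice

def merge_tdms_streaming(source_paths, merged_path):
    with open(merged_path, "wb") as out:
        for path in source_paths:
            with open(path, "rb") as src:
                shutil.copyfileobj(src, out, CHUNK)
```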

 

Thanks,

Herbert

 

Message 8 of 19

@Herbert Engels wrote:

[...]

Thanks for that post, Herbert.

 

Just to add a case where we lose data...

 

Using multiple file Opens on a TDMS file does not work like every other file type I have seen, where if the file is already open you get a pointer to the existing resource.

 

In TDMS, a separate Open results in a separate reference, and any data written using one is not visible through the other.

 

So a single "Open" is a must with TDMS.
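Ben's rule can be sketched outside LabVIEW as well: open the file once and hand that single reference to every writer, serializing access with a lock. A hypothetical Python illustration (the group/channel arguments are placeholders to mimic TDMS-style naming, not a real TDMS API):

```python
import threading

# One shared open reference for all writers, with a lock so only one
# thread touches the underlying file at a time. This is an analogy for
# the "single Open" rule, not an actual TDMS implementation.
class SharedLogFile:
    def __init__(self, path):
        self._fh = open(path, "ab")        # opened exactly once
        self._lock = threading.Lock()

    def write(self, group, channel, payload: bytes):
        # group/channel are illustrative; here we just append the bytes.
        with self._lock:                   # serialize parallel writers
            self._fh.write(payload)

    def close(self):
        self._fh.close()
```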

 

Ben

Retired Senior Automation Systems Architect with Data Science Automation, LabVIEW Champion, Knight of NI, and Prepper
Message 9 of 19

 


@Ben wrote:
Using multiple file Opens on a TDMS file does not work like every other file type I have seen, where if the file is already open you get a pointer to the existing resource.

 

   In TDMS, a separate Open results in a separate reference, and any data written using one is not visible through the other.


What you're saying is that if you have two TDMS Opens in the same application and you write to one of them, the data you wrote through that reference cannot be read from the other one? If so, are you referring to data values or to properties?

In this use case, it does happen that data you write to one reference is temporarily invisible to the other one. You wouldn't be losing data, though, because as soon as the first reference flushes that data to disk, it is visible to the second one. This rarely applies to data values, but it happens with properties a lot. If immediate visibility for all open references is important to you, you can enforce it at any point by calling "TDMS Flush" on the reference you were writing to. Does that sound like it might fix this for you?
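The buffering behavior described here can be illustrated with plain files in Python: bytes written through one handle only become visible to a second, independently opened handle after the writer flushes its buffer. This is an analogy for the flush semantics, not the TDMS implementation itself:

```python
# Demonstrate write-buffer visibility: the reader handle sees nothing
# until the writer handle flushes, analogous to calling "TDMS Flush".
with open("flush_demo.bin", "wb") as writer, \
     open("flush_demo.bin", "rb") as reader:
    writer.write(b"samples")
    before = reader.read()   # empty: bytes still sit in the writer's buffer
    writer.flush()           # push buffered bytes to disk
    reader.seek(0)
    after = reader.read()    # now the bytes are visible to the reader
```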

 

If you're referring to 2 references being opened in two different processes, things are a bit different. In that case, you would actually need to close and re-open the second reference in order for the reading application to pick up the most recent changes. This is certainly not pretty by any means, but it should solve the problem for now.

 

Did I get this right or is the problem you were bringing up different from what I described?

 

Thanks,

Herbert

 

 

Message 10 of 19