
TDMS file to Matlab

I want to use the example from the following link.

 

https://forums.ni.com/t5/Example-Programs/Convert-TDMS-File-with-DAQmx-Raw-data-Using-LabVIEW/ta-p/3...

 

I am having an issue using this with larger files and was wondering whether there is a better/faster VI for the job. I have 12 channels of data in one TDMS file; the file is about 1.17 GB at its largest. I am running this VI on a PXIe-8133 with Windows 7 64-bit and 8 GB of RAM.

 

Does the data type change if I just do a DAQmx read and use the TDMS VIs to write the data instead of doing the TDMS write within DAQmx?  Is that data type still compatible with MATLAB?  Would I still be able to get fast enough processing without overrunning a queue buffer?  The application I am building reads 12 channels from six PXI-6363 boards (2 channels each) at 500 kHz.  It is a continuous acquisition for a maximum of 120 seconds.
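
For reference, here is a rough sketch of the kind of conversion I am after, written in Python with the third-party npTDMS and SciPy packages rather than LabVIEW (the paths and the channel-naming scheme are placeholders, not my actual setup):

```python
# Hypothetical conversion sketch: stream a TDMS file with npTDMS and
# save the channels to a .mat file for MATLAB. Paths are placeholders.
from nptdms import TdmsFile
from scipy.io import savemat

TDMS_PATH = "C:/data/acquisition.tdms"   # placeholder
MAT_PATH = "C:/data/acquisition.mat"     # placeholder

mat_vars = {}
# Streaming mode reads the metadata first instead of loading the whole file.
with TdmsFile.open(TDMS_PATH) as tdms:
    for group in tdms.groups():
        for channel in group.channels():
            # MATLAB variable names cannot contain spaces, so sanitize.
            key = f"{group.name}_{channel.name}".replace(" ", "_")
            mat_vars[key] = channel[:]   # reads this channel's data on demand

savemat(MAT_PATH, mat_vars)   # v5 .mat format; fine for ~0.5 GB per channel
```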

 

Thanks.

Message 1 of 7

From what I have read, the format is closed, so there is probably no alternative other than these NI blocks. But how big are the resulting files? Maybe your disk is also at its maximum write speed.

Message 2 of 7

When I try to run this conversion VI on the whole file, I get the "LabVIEW is out of memory" message.  I have upgraded to 8 GB, which I believe is all the PXIe-8133 can handle.

 

I have applied the patch that lets LabVIEW use 3 GB of memory.  I did that while I was working on the 8 GB upgrade, but it made the "Hardware Reserved" portion of memory very large, and I wasn't sure I wanted it to stay that way.  I was also in the middle of a Windows 7 64-bit upgrade that had to be redone after a blue screen, so I never went back to the 3 GB patch.

 

I think I am going to switch to a logging queue and see how the memory holds up during acquisition.  I assume the memory used by a queue falls under the same 2 GB umbrella that normal LabVIEW has to work with?
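
To illustrate the pattern I mean, here is a rough producer/consumer sketch, in Python rather than LabVIEW, with dummy blocks standing in for the DAQmx reads:

```python
# Illustrative producer/consumer logging sketch (a Python stand-in for a
# LabVIEW queue architecture). The bounded queue caps how much memory
# queued data can use: the producer blocks instead of growing the queue.
import queue
import threading

data_queue = queue.Queue(maxsize=64)   # bound chosen arbitrarily for the example

def acquire_blocks(n_blocks=100, block_size=1 << 20):
    """Stand-in for the DAQmx read loop: yields dummy 1 MiB blocks."""
    for _ in range(n_blocks):
        yield b"\x00" * block_size

def producer():
    for block in acquire_blocks():
        data_queue.put(block)          # blocks when the consumer falls behind
    data_queue.put(None)               # sentinel: acquisition finished

def consumer(path):
    with open(path, "ab") as f:
        while True:
            block = data_queue.get()
            if block is None:
                break
            f.write(block)             # stream to disk instead of holding in RAM

t_prod = threading.Thread(target=producer)
t_cons = threading.Thread(target=consumer, args=("log.bin",))
t_prod.start(); t_cons.start()
t_prod.join(); t_cons.join()
```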

 

Thanks

Message 3 of 7
  1. Save the file in chunks, not all at once, to save memory.
  2. Install the HDF library; there is an example in the examples folder showing how to save native MATLAB files. (See the sketch below.)
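
For example, a minimal sketch in Python with h5py (file and dataset names are placeholders, and zeros stand in for the real TDMS data; MATLAB reads plain HDF5 with h5read):

```python
# Chunked HDF5 write sketch: only one chunk of data is in memory at a
# time, and the result is readable from MATLAB with h5read().
import h5py
import numpy as np

N_SAMPLES = 60_000_000    # e.g. 120 s at 500 kHz
CHUNK = 1_000_000         # samples written per iteration

with h5py.File("converted.h5", "w") as f:
    dset = f.create_dataset("channel_0", shape=(N_SAMPLES,), dtype="f8")
    for start in range(0, N_SAMPLES, CHUNK):
        stop = min(start + CHUNK, N_SAMPLES)
        block = np.zeros(stop - start)   # placeholder for one chunk of real data
        dset[start:stop] = block

# In MATLAB: data = h5read('converted.h5', '/channel_0');
```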

 

mcduff

Message 4 of 7

@LennartM wrote:

From what I have read, the format is closed.


You haven't done much reading then, have you?  The file format is documented and available, but not necessarily "open," as that implies open source.  NI does provide an API for file I/O in LabVIEW, CVI, and a couple of other languages.  I think NI provided DLLs at one point too.  Oh, and years ago NI shared a pure G implementation of TDMS, but I think it was just an example of what can be done.

 

As for your specific usage, I'd look at whether the file is fragmented.

 

https://www.dmcinfo.com/latest-thinking/blog/id/205/labview-data-storage-tdms-performance-tweaking

https://devs.wiresmithtech.com/blog/tdms-fragmentation-cslug/

https://forums.ni.com/t5/Example-Programs/Avoiding-TDMS-Fragmentation/ta-p/3525628

 

Basically, because TDMS prioritizes streaming and fast logging, it sacrifices other things as a result.  This means we can log very fast with little overhead, but reading can be more difficult.  To help avoid this, you can periodically defragment files when you know you can, or make that part of your report generation process.  After a file has been defragmented it will be smaller and will load faster.

One sign of a badly fragmented file is a large tdms_index file.  This is the file that keeps track of where the data is in the main file.  For a 1 GB TDMS file, I'd guess the index file should be around 3 MB or less.  A larger index doesn't prove the file is fragmented, but it might be.

I'm a big fan of the file format and have been using it successfully in large applications for years.  That being said, I know the struggles of using it in these larger applications, and it isn't always intuitive, especially when you find yourself writing wrappers for existing functions.
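
As a quick sanity check of that rule of thumb, you can compare the two file sizes with a few lines of script (Python here purely for illustration; the path and the roughly 3-MB-per-GB threshold come from the guess above):

```python
# Compare the .tdms_index size to the .tdms size as a fragmentation hint.
import os

tdms_path = "C:/data/acquisition.tdms"   # placeholder path
index_path = tdms_path + "_index"        # index files are named <name>.tdms_index

tdms_mb = os.path.getsize(tdms_path) / 1e6
index_mb = os.path.getsize(index_path) / 1e6

print(f"data: {tdms_mb:.1f} MB, index: {index_mb:.1f} MB")
if index_mb / tdms_mb > 0.003:           # ~3 MB per GB, per the heuristic above
    print("Index is large relative to the data; the file may be fragmented.")
```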

Message 5 of 7

Hooovahh,

 

I tried to defragment the file I get from the TDMS logging done within DAQmx, and I get Error -2542.  I assume it doesn't make a difference whether you do it during the acquisition or after it is done; I definitely didn't want extra processing slowing down the acquisition.

 

Error -2542 occurred at an unidentified location

Possible reason(s):

LabVIEW: Files that contain unscaled data cannot be processed by the TDMS Defragment and TDMS Convert Format functions. Data can be scaled by reading it from the file using the TDMS Read function.

 

It appears that I have to do the original read operation on the TDMS file, the operation I am trying to speed up, before I can defrag the file.

 

The largest data file I think I will have to deal with is 1.17 GB.  The average disk write speed looks like about 35-50 MB/s for the LabVIEW processes.  After the read process and the defrag, the file grows to 5.15 GB.  I assume this is normal?

 

Thanks.

Message 6 of 7

@JoeWork wrote:

 

I tried to defragment the file I get from the TDMS logging done within DAQmx, and I get Error -2542.  I assume it doesn't make a difference whether you do it during the acquisition or after it is done; I definitely didn't want extra processing slowing down the acquisition.

 


This can be done a couple of ways.  The easiest is to wait until all logging is done and the file is closed, then perform the defrag on the file.  It is going to take a decent amount of time for large files.  There is a way to get progress on the defrag, but no option to cancel mid-defrag.  Search the Example Finder for TDMS Display Defragmentation Progress.vi to see this in action.
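
If you ever want to do the equivalent outside LabVIEW, one possible approach is to rewrite the file contiguously with the third-party npTDMS package (a sketch only: it loads all channel data into memory, so it suits files that fit in RAM, and the paths are placeholders):

```python
# Rewrite all channels into a single segment, producing a contiguous
# (effectively defragmented) copy of the file. This is not LabVIEW's
# TDMS Defragment function, just one way to get a similar result.
from nptdms import TdmsFile, TdmsWriter, ChannelObject

src = TdmsFile.read("fragmented.tdms")       # placeholder path

channels = [
    ChannelObject(group.name, chan.name, chan[:], properties=dict(chan.properties))
    for group in src.groups()
    for chan in group.channels()
]

with TdmsWriter("defragmented.tdms") as writer:
    # One segment holds the metadata once, then all raw data contiguously.
    writer.write_segment(channels)
```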

 

Other options (that add complexity) involve things like logging to a new file, defragmenting the old one, and then combining them when you either create a new file or the logging is done.  This makes the finalize process shorter, since you aren't defragmenting the whole file, just the portion written since the last file was created.  When not using the DAQmx logging functions, I've also done the logging in a separate loop, where I can periodically close, defragment, and reopen the file.  Requests to log data get put into a queue, and once the defrag and reopen are done, the loop works through everything waiting in the queue.
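
Very roughly, that logging loop looks like this (a Python stand-in for the LabVIEW loop; the defragment step is a placeholder, since the actual TDMS Defragment call exists only in LabVIEW):

```python
# Logging loop that periodically closes, defragments, and reopens the
# file. While the file is closed, log requests simply wait in the queue.
import queue

log_queue = queue.Queue()

def defragment(path):
    pass   # placeholder: in LabVIEW this would be the TDMS Defragment call

def logging_loop(path, defrag_every=1000):
    writes = 0
    f = open(path, "ab")
    while True:
        item = log_queue.get()
        if item is None:               # sentinel: logging finished
            break
        f.write(item)
        writes += 1
        if writes % defrag_every == 0:
            f.close()                  # close, defragment, reopen; queued
            defragment(path)           # requests accumulate meanwhile
            f = open(path, "ab")
    f.close()
```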

 


@JoeWork wrote:

 

Error -2542 occurred at an unidentified location

Possible reason(s):

LabVIEW: Files that contain unscaled data cannot be processed by the TDMS Defragment and TDMS Convert Format functions. Data can be scaled by reading it from the file using the TDMS Read function.

 

It appears that I have to do the original read operation on the TDMS file, the operation I am trying to speed up, before I can defrag the file.


I'm not familiar with that error during a defrag; it likely has something to do with the DAQmx logging, which I rarely use.  Sorry.

 


@JoeWork wrote:

 

 

The largest data file I think I will have to deal with is 1.17 GB.  The average disk write speed looks like about 35-50 MB/s for the LabVIEW processes.  After the read process and the defrag, the file grows to 5.15 GB.  I assume this is normal?


That is not normal; I'm not sure what is going on.  Do you have some code to show as an example?  Performing a read operation shouldn't change the size of the TDMS file, and a defrag should never make the file larger either.

Message 7 of 7