
Why is reading a TDMS file from LabVIEW so slow in Python?

My TDMS file, recorded by LabVIEW, is only around 170 MB.

When I try to open it using the npTDMS package:

from nptdms import TdmsFile

tdms_file = TdmsFile("path_to_file.tdms")

The data can be read, but it takes about a minute to run the single line above. Does that make sense? I thought a binary file should be read efficiently.
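For reference, here is roughly what I am running, with a metadata-only open added for comparison (just a sketch, assuming npTDMS 1.x; the path is a placeholder):

import time
from nptdms import TdmsFile

path = "path_to_file.tdms"

# Metadata-only open (npTDMS 1.x streaming mode): parses the segment
# headers but does not load the raw channel data into memory.
t0 = time.perf_counter()
with TdmsFile.open(path) as f:
    n_groups = len(f.groups())
print(f"metadata only: {time.perf_counter() - t0:.1f} s, {n_groups} groups")

# Full read: also loads all the channel data.
t0 = time.perf_counter()
tdms_file = TdmsFile.read(path)
print(f"full read: {time.perf_counter() - t0:.1f} s")

If even the metadata-only open takes most of the minute, the time seems to be going into parsing the file structure rather than into reading the data itself.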

 

Is there a way to optimize the file so that it can be read faster?

Message 1 of 4

Hi Anthony,

 

I'm no Python expert but that function just opens the TDMS file, which should be very fast.

 

But most often a TDMS file also comes with an index file: is there a chance that the function needs to (re)create this index file from scratch when opening the TDMS file?
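A quick way to check from the Python side whether such an index file is present next to the data file (just a standard-library sketch; the path is a placeholder):

from pathlib import Path

tdms_path = Path("path_to_file.tdms")
index_path = tdms_path.with_suffix(".tdms_index")

# LabVIEW normally writes a *.tdms_index file next to the *.tdms file.
if index_path.exists():
    size_mb = index_path.stat().st_size / 1e6
    print(f"index file found: {index_path.name}, {size_mb:.1f} MB")
else:
    print("no index file next to the TDMS file")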

Best regards,
GerdW


using LV2016/2019/2021 on Win10/11+cRIO, TestStand2016/2019
Message 2 of 4

When I've seen a TDMS file take forever to open, it has always been the result of a very fragmented TDMS file.  Is there an index file?  If so, how big is it in relation to the TDMS file?  The index should be very small, on the order of an index file under 1 MB for a 100 MB data log.  If you have an 80 MB index for a 100 MB data file, then your file was created and flushed in a way that was inefficient.  TDMS files always have fast writes, since they are intended to stream to disk, so writing can't take a long time, but that can come at the expense of reads taking longer than you'd like.
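If you want to check that quickly from Python, here is a small sketch (standard library only; the path is a placeholder) that compares the two sizes:

from pathlib import Path

tdms_path = Path("path_to_file.tdms")
index_path = tdms_path.with_suffix(".tdms_index")

data_mb = tdms_path.stat().st_size / 1e6
index_mb = index_path.stat().st_size / 1e6

# A healthy file has an index that is only a tiny fraction of the data
# size; an index almost as large as the data file means the file was
# written as a huge number of small segments (fragmented).
print(f"data:  {data_mb:.1f} MB")
print(f"index: {index_mb:.1f} MB ({100 * index_mb / data_mb:.1f}% of the data size)")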

 

If you do have a very fragmented file, run the TDMS Defrag function on it.  It might take a long time, but you should end up with a much smaller data file and a much smaller index file, and opening the new file should be pretty quick.  There is an option to get the progress of the defrag, and LabVIEW ships with an example in the Example Finder showing how to query it.
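If it is easier to do from Python than from LabVIEW, you can get roughly the same effect by reading the file with npTDMS and rewriting it with large, contiguous segments.  This is only a sketch (assuming npTDMS 1.x, that the data fits in memory, and placeholder paths; root and group properties are not copied), not the actual TDMS Defragment function:

from nptdms import TdmsFile, TdmsWriter, ChannelObject

src = "path_to_file.tdms"
dst = "path_to_file_defragmented.tdms"

original = TdmsFile.read(src)  # loads all channel data into memory

with TdmsWriter(dst) as writer:
    for group in original.groups():
        for channel in group.channels():
            # Write each channel's full data as one contiguous block
            # instead of the many tiny segments of the fragmented file.
            writer.write_segment([
                ChannelObject(group.name, channel.name, channel[:],
                              properties=channel.properties),
            ])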

 

As for how to avoid having to defrag in the future: you could perform periodic defrags as the file becomes fragmented during writing, but an alternative is to not flush to disk so often.  I'd be curious what the write code looks like, but it helps to avoid writing single points of data and instead write N samples at a time, or even a 2D array of M channels at once.
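The actual logging here is LabVIEW code, but just to illustrate the buffering idea in npTDMS terms (a sketch; the channel names, chunk size, and acquire_samples stand-in are made up): accumulate N samples and write them as one segment instead of writing every new point as its own segment.

import numpy as np
from nptdms import TdmsWriter, ChannelObject

CHUNK = 1_000  # samples to accumulate before each write

def acquire_samples(n):
    # Stand-in for the real acquisition; returns n new samples.
    return np.random.rand(n)

with TdmsWriter("log.tdms") as writer:
    buffer = []
    for _ in range(100):                       # acquisition loop
        buffer.append(acquire_samples(100))    # e.g. 100 samples per iteration
        if sum(len(b) for b in buffer) >= CHUNK:
            # One larger segment instead of many tiny ones keeps the
            # data file and its index from fragmenting.
            writer.write_segment([ChannelObject("Measurements", "Voltage",
                                                np.concatenate(buffer))])
            buffer = []
    if buffer:  # flush whatever is left at the end
        writer.write_segment([ChannelObject("Measurements", "Voltage",
                                            np.concatenate(buffer))])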

Message 3 of 4

Hi Hooovahh,

 

Thanks for your reply. I think you are right: the TDMS file is about the same size as the index file.

I tried to use the example code to defrag it, but the result is weird. The code is attached.

I edited the path to point to my file (6 MB TDMS and 6 MB index file), but the result is a 150 MB TDMS file and a 500 KB index.

When I try to open it, it is empty... 

 

I would like to prevent fragmentation in the first place. Attached is the code I use for saving the TDMS file.

I learned from the internet that we can add a buffer before saving, but I don't know how to do that.

 

Thanks

Message 4 of 4