
TDMS defrag error

I am currently streaming 7 different groups of single-channel waveforms and spectra (as arrays) to a TDMS file. (I am just simulating the data, so there is no DAQ involved.)

 

To optimize read performance, I defrag after writing.

 

During the defrag, memory use increases to about 267 MByte, after which I get a Visual C++ error and LabVIEW crashes.

 

 

The crash always occurs if the TDMS file is bigger than a few hundred MByte, on a Windows XP machine with 2 GByte of RAM.

 

I am running LabVIEW 2009. The problem occurs with both patch 2 and patch 3.

 

 

 

The problem still occurs if I flush before closing.

 

The problem also occurs if I run a separate defrag program after the program that writes the TDMS files, i.e., if I manually start the defrag program after the write program has stopped.

 

On a Windows 7 machine with 4 GByte of RAM, LabVIEW memory usage jumps in big steps during the defrag to about 375 MByte, and the machine takes a couple of minutes to finish a 6 GByte file. However, I have not been able to reproduce the crash on this machine.

 

Back on the 2 GByte XP machine, I have tried the following:

 

I was careful not to defrag an invalid or already-open TDMS file; I have read all the posts on that issue. I finally removed the defrag from the write program and ran it separately, and the crash still occurred. I am able to read the big files without problems using the TDMS reader.

 

I set a large buffer size for each of the channels (1M). This dropped the size of the .tdms_index file to about 1 kByte, versus some 50 MByte before, but the crash still occurred when trying to defrag (again, only on the 2 GByte XP machine).
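As an aside, the effect of buffering on the index file can be illustrated outside LabVIEW. Below is a minimal sketch using the third-party npTDMS Python library (the file, group, and channel names are made up for the example); each write call produces one TDMS segment, so batching many blocks into a single write keeps the .tdms_index file small, which is roughly what a large per-channel buffer does inside LabVIEW.

```python
import numpy as np
from nptdms import TdmsWriter, ChannelObject

CHUNK = 1024      # samples per simulated acquisition block (made-up size)
N_BLOCKS = 1000   # number of blocks to stream (made-up count)

# Unbuffered: one segment per write call -> one header/index entry per
# write -> a large .tdms_index file.
with TdmsWriter("fragmented.tdms") as writer:
    for _ in range(N_BLOCKS):
        block = np.random.rand(CHUNK)
        writer.write_segment([ChannelObject("group", "channel", block)])

# "Buffered": accumulate the blocks in memory and write them as one
# segment, mimicking a large per-channel write buffer.
blocks = [np.random.rand(CHUNK) for _ in range(N_BLOCKS)]
with TdmsWriter("buffered.tdms") as writer:
    writer.write_segment(
        [ChannelObject("group", "channel", np.concatenate(blocks))])
```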

 

 

The only other variable: I have about 4 GByte of free disk space on my XP machine, and half a terabyte on the Windows 7 machine.

 

My best guess is that the defrag VI can eat memory uncontrollably, resulting in the crash. My next best guess is that I am doing something stupid. I always assume that is the best guess, but after half a day on the issue, I have moved it to #2 😉

 

I can't post the VI publicly, but I will be glad to send it to Herbert.

 

Thanks,

 

Carsten

 

Message 1 of 6

Hi CarstenPXI,

 

We have also found that problem with TDMS Defragment, and the fix will ship with LabVIEW 2010. The problem is as you guessed: Defragment tries to allocate memory, and on some machines the allocation fails and causes the crash.


There are several workarounds: for example, use Defragment in LV 8.6 if the file is in the TDMS 1.0 format, or read each channel in the file individually and then write them all to a new file.

 

Please let me know if you have any urgent problems regarding this issue.

 

Thanks. 

Message 2 of 6

Thank you for your quick reply.

 

Can you confirm that the LabVIEW 8.6 defrag VI works properly, and the 9.0 one does not?

 

Secondly, of course we only need to defrag if read performance is too slow. We already know this is the case if we use smaller blocks. We will have to do some more read-performance testing with large write buffers, which produce very small index files.

 

It is an unfortunate situation: TDMS 2.0 should be much faster, but because we cannot reliably defrag, the read application may end up being slower. It may well be that we have to roll back to TDMS 1.0 and the 8.6 defrag and see whether we get acceptable performance. The question then is how that will affect write performance.

 

From what you write, is it correct that the defrag fix will not be in the February maintenance release (I assume that is already in manufacturing)? Is there any chance it will be in a patch 4, or do we have to wait until NI Week?

 

Carsten

Message 3 of 6

Oops, one more question:

 

I don't see why or how reading each channel individually and then writing them all to a new file will help. Could you please explain a bit more? Also, for large (multi-gigabyte) files, wouldn't this incur a big performance hit and double the disk footprint during the operation?

 

Carsten

Message 4 of 6

Firstly, I can confirm that LV 8.6 does not have this problem.

 

Secondly, let me explain what Defragment does. If you write data to a file multiple times, you will probably get a TDMS file with multiple segments and headers, so the .tdms_index file will be quite large. Defragment reorganizes the data in the TDMS file so that you end up with a single segment/header and a tiny .tdms_index file.
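One rough way to gauge fragmentation from the outside, without opening the file in LabVIEW at all (a hedged sketch, not an NI tool; the path is a placeholder), is to compare the size of the .tdms_index file to the size of the .tdms file itself:

```python
import os

tdms_path = "data.tdms"                             # placeholder path
index_size = os.path.getsize(tdms_path + "_index")  # the .tdms_index file
data_size = os.path.getsize(tdms_path)

# Many segment headers -> a large index relative to the data;
# a defragmented file has a single header and a near-zero ratio.
print(f"index/data size ratio: {index_size / data_size:.4%}")
```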

 

So, after you have written the file, you can make a VI that reads the channels from the file and writes the data into a new TDMS file. Writing only once will not produce multiple segments, so the newly written TDMS file is well organized. If you can't write all channels to the TDMS file at once, you can write them one channel at a time, provided there are not too many channels. Does this make sense?
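For readers following along outside LabVIEW, here is a minimal sketch of this copy-and-rewrite workaround using the third-party npTDMS Python library (the thread itself is about LabVIEW VIs; the file names are placeholders). Note that TdmsFile.read loads all channel data into memory, so for multi-gigabyte files you would read and write one channel at a time instead, as suggested above.

```python
from nptdms import TdmsFile, TdmsWriter, ChannelObject

src = TdmsFile.read("fragmented.tdms")  # loads all channel data into memory

# Collect every channel from every group, keeping its properties.
objects = [
    ChannelObject(group.name, channel.name, channel[:], channel.properties)
    for group in src.groups()
    for channel in group.channels()
]

# A single write call produces a single segment, so the new file is
# effectively defragmented and its .tdms_index file is tiny.
with TdmsWriter("defragmented.tdms") as writer:
    writer.write_segment(objects)
```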

 

I recommend this approach if you want to write a 2.0 TDMS file and benefit from TDMS high-speed streaming in LabVIEW 2009; it is a workaround that avoids Defragment but still produces a "defragmented" file. I'm afraid the fix will not make it into 2009 SP1.

Message 5 of 6

Our first approach will be to see whether using a large per-channel buffer gives us adequate read performance. I base this on the guess that a small index file means better read performance.

 

Carsten

Message 6 of 6