Memory increase while logging TDMS files

Hello everyone,

I know there have been many threads on this topic (and I have read them), but I still can't solve my problem.

 

The application I'm developing acquires data from a cDAQ, reading 48 analog channels at 100 kHz. Up to six tests have to run in parallel, dividing the 48 channels among the tests as specified by the user (e.g. test 1: channels 1 to 16; test 2: channels 17 to 22; test 3: channels 23 to 30, and so on). Once every second I have to perform some measurements on the newest chunk of data, compare them with user-specified settings and detect any triggers (e.g. "mean value of channel 13 greater than or equal to 0" could be a trigger setting). If a trigger fires, I have to store on disk the last 10 seconds of the subset of the 48 channels selected by the user for that specific test (worst case: all 48 channels, SGL data type, 100 kHz, 10 seconds = 48 * 100,000 * 10 * 4 bytes = 192 MB per trigger file). In parallel, the measurements themselves have to be saved into other files, which therefore grow slowly over time (but stay much smaller than the trigger files).
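Just to make the per-second logic concrete (the real code is a LabVIEW block diagram, so this is only a rough Python sketch of the idea; names like on_new_chunk are made up for illustration):

```python
import numpy as np
from collections import deque

FS = 100_000           # samples per second per channel
N_CHANNELS = 48
BUFFER_SECONDS = 10    # history that must be written when a trigger fires

# Rolling buffer of the last 10 one-second chunks; the oldest is dropped automatically.
history = deque(maxlen=BUFFER_SECONDS)

def on_new_chunk(chunk, channel=13, threshold=0.0):
    """chunk: float32 array of shape (N_CHANNELS, FS), read once per second."""
    history.append(chunk)
    # Example trigger rule: mean of one channel greater than or equal to a threshold.
    if np.mean(chunk[channel]) >= threshold:
        # Worst case: 48 ch * 100 kS/s * 10 s * 4 B (SGL) = 192 MB to hand to the logger.
        return np.concatenate(list(history), axis=1)
    return None
```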

 

Since both the measurement execution plus trigger detection and the data logging are quite time-consuming, I've split them into separate subVIs running in their own loops as queued state machines. I use queues to send data from the DAQ reading loop to the processing loop and from processing to logging, as well as to hold the latest 10 seconds of data in case a trigger fires. For data logging I use the TDMS library, as you can guess from the thread title.
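In textual pseudo-code the structure is roughly the following (again only a Python sketch; write_trigger_file is a hypothetical stand-in for the TDMS logging, and on_new_chunk is the trigger check from the sketch above):

```python
import queue
import threading

# Bounded queues: if a consumer falls behind, the producer blocks instead of
# letting the queue (and the memory) grow without limit.
daq_to_proc = queue.Queue(maxsize=20)
proc_to_log = queue.Queue(maxsize=20)

def processing_loop():
    while True:
        chunk = daq_to_proc.get()
        if chunk is None:                 # shutdown sentinel
            proc_to_log.put(None)
            break
        data = on_new_chunk(chunk)        # per-second measurements and trigger check
        if data is not None:
            proc_to_log.put(data)         # last 10 s to be written to disk

def logging_loop():
    while True:
        data = proc_to_log.get()
        if data is None:
            break
        write_trigger_file(data)          # hypothetical TDMS write

threading.Thread(target=processing_loop, daemon=True).start()
threading.Thread(target=logging_loop, daemon=True).start()
```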

 

I ran my application simulating a trigger every 5 minutes (I don't know how often triggers will happen in real usage) with a setting that creates two files of 48 MB each for every trigger. It ran for almost 4.5 hours, then a "Memory is full" error stopped any further saving: the RAM used by the application had grown to 2.8 GB (in that period, the total amount of data saved to disk was 4.8 GB).

 

To find the cause of this memory problem, I tried disabling everything in the logging VI that has to do with TDMS files (both writing and setting properties; no TDMS reading is used anywhere in the program). An hour ago I started this "log-less" executable, and its memory usage has now stabilized at around 520 MB.

Since all the rest of the code is identical, my conclusion is that writing TDMS files and/or setting their properties is what makes the memory grow so heavily.

 

Having read this, this, this, and this thread, and also this help page from NI, I tried the following (all already implemented before the 4.5-hour test mentioned above):

  • Storing the "slow TDMS" file data in a buffer (queue), periodically flushing it and grouping the data by path, group and channel in order to minimize the number of file accesses (see the sketch after this list);
  • Periodically resetting the logging VI: I send the exit command via queue, the logger's exit state calls Request Deallocation, and the VI then restarts immediately;
  • Taking care to set TDMS properties for the big files before writing them, so that I don't have to re-open big files just to set their properties;
  • Using the Advanced TDMS library to perform the writing operations (code attached).
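As a sketch of the first point above (plain Python, with write_tdms_segment as a hypothetical stand-in for the actual TDMS write operation):

```python
from collections import defaultdict

# pending[path][(group, channel)] -> values buffered in memory, not yet on disk
pending = defaultdict(lambda: defaultdict(list))

def buffer_measurement(path, group, channel, value):
    """Called once per second per measurement; nothing touches the disk here."""
    pending[path][(group, channel)].append(value)

def flush_pending(write_tdms_segment):
    """Called periodically; one file access per file per flush instead of one per sample.

    write_tdms_segment(path, group, channel, values) is a hypothetical stand-in
    for the real TDMS write call.
    """
    for path, channels in pending.items():
        for (group, channel), values in channels.items():
            write_tdms_segment(path, group, channel, values)
    pending.clear()
```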

 

Since this application will be used for tests lasting up to 5000 hours, I absolutely need previously written files to be completely released from memory.

The program is developed in LV 2012 SP1 and currently runs on a Windows 8.1 machine. If it helps, I can upgrade to LV 2016; my company has a maintenance license and we have just received it.

 

Sorry for the long post and for some comments left in Italian in the attached snippets; I hope someone can help me.

Message 1 of 2

I reproduced the "write algorithm for slow files" in an isolated VI (attached) and built two executables from it: one with the "open and close the file at every iteration" layout (EXE 1) and one with the "open the file once, iterate with the reference kept open, then close the file at the end" layout (EXE 2).
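In terms of plain file I/O (Python standing in for the TDMS open/write/close calls, just to illustrate the two layouts):

```python
# EXE 1 layout: open and close the file at every loop iteration.
def log_per_iteration(path, chunks):
    for chunk in chunks:
        with open(path, "ab") as f:       # open, write, close - once per iteration
            f.write(chunk)

# EXE 2 layout: open once, keep the reference, close at the end.
def log_keep_open(path, chunks):
    with open(path, "ab") as f:           # single open/close around the whole loop
        for chunk in chunks:
            f.write(chunk)
```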

Running the two applications for a few minutes was enough to see a big difference:

EXE 1: in almost 5 minutes the RAM usage increased from 28.3 MB to 47.3 MB, whereas the written file was only 240 kB.

EXE 2: in almost 8 minutes the RAM usage stayed constant at 28.3 MB (the output file was 192 kB).

 

However, when I ran the VI with the "EXE 1" layout in the development environment, the memory usage remained constant for 12 minutes.

 

So the same source code behaves differently in the development environment and as a built executable.

 

I'm getting even more confused...

Message 2 of 2