05-08-2012 11:13 PM
I know the "memory is full" problem has been discussed frequently in this forum, and I did read those posts, but I still couldn't solve my problem.
We have implemented a test function based on PXI5412 and 5122 modules. The program generates an arbitrary waveform to excite a DUT, acquires its response with the digitizer, and writes the acquired data into a TDMS file (~40 MB for each measurement point). The program works very well in single-point mode, that is, in discrete runs. However, when I try to run it continuously in a For loop to measure multiple (up to one thousand) points, the program always aborts midway, after ~10 GB of data have been recorded, with a "memory is full" error.
My computer has 4 GB of RAM and runs 32-bit Windows XP Professional. I checked the RAM usage when the "memory is full" error occurred; there was actually at least 2.5 GB unused. I've learned this is possibly because there is no contiguous free block in RAM large enough for the data operation. It has also been suggested to increase the virtual memory available to LabVIEW to 3 GB by modifying the boot.ini file, but after I did that the program always encounters another error, "Invalid TDMS file reference", so I couldn't write any data to a TDMS file at all.
I've attached the data-logging part of my code. Is there an alternative way to circumvent the memory problem, such as reusing the same memory block to buffer data between the digitizer memory and the hard disk?
05-09-2012 12:30 AM
Could you provide a VI file to reproduce this "memory full" issue?
There are several potential solutions:
1. Use 64-bit Windows 7 and 64-bit LabVIEW to make better use of your >= 4 GB of memory.
2. Write each point to a separate TDMS file. According to your "Labview code.jpg", you currently write all points to one TDMS file.
3. Use the "NI_MinimumBufferSize" property (http://zone.ni.com/reference/en-XX/help/371361F-01/lvhowto/setting_tdms_buffersize/) to buffer data before it is flushed to disk.
Without a VI that reproduces the "memory is full" issue, I cannot verify which solution is best for your scenario.
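To show the general idea behind option 3 (this is a conceptual Python sketch, not LabVIEW, and the file name and sizes are made up for illustration): streaming fixed-size chunks to disk keeps peak memory bounded by one chunk, instead of growing with the total amount of data, which is what a sensible TDMS buffer setting achieves on the LabVIEW side.

```python
# Conceptual sketch (Python, not LabVIEW): writing fixed-size chunks keeps
# peak memory bounded by one chunk, no matter how large the file grows.
# File name and sizes are illustrative only.
import os
import tempfile

CHUNK_SAMPLES = 1 << 16          # 64 Ki samples per write (stand-in for one record)
N_CHUNKS = 32                    # total data = 32 chunks

path = os.path.join(tempfile.gettempdir(), "stream_demo.bin")

with open(path, "wb") as f:
    chunk = bytes(2 * CHUNK_SAMPLES)   # one I16 record's worth of zero bytes
    for _ in range(N_CHUNKS):
        f.write(chunk)           # only one chunk is ever held in memory

size = os.path.getsize(path)
print(size == 2 * CHUNK_SAMPLES * N_CHUNKS)  # True: the full data set is on disk
os.remove(path)
```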
05-10-2012 12:17 AM
Thank you very much for your help. Because this LabVIEW program runs on the machine that controls another instrument, it is not convenient to switch to Windows 7 until I can rule out all compatibility issues. I have also thought about writing each point to a separate TDMS file, but a single file would be highly preferable for data organization and subsequent analysis.
Sorry, I can't post the VI here right now, as it contains several subVIs that rely on communication with the other instrument. I shall modify it so that you can run it on your PXI system. In the meantime, could you suggest a proper buffer setting for the third option so that I can try it first?
05-10-2012 01:15 AM - edited 05-10-2012 01:18 AM
I need to know your TDMS file details before I can suggest a proper buffer setting.
Current design: you write a 40 GB TDMS file containing 1000 groups (one group per point); each group contains 1000 channels, and each channel holds a 1-D I16 array of length 20M. (Note: this 40 GB TDMS file therefore contains 1000 x 1000 = 1M channels.)
Change: write 1-D I16 arrays instead of a 2-D I16 array, which reduces the channel count from 1M to 1K and thereby reduces memory usage.
New design: you write a 40 GB TDMS file containing 1000 groups (one group per point); each group contains 1 channel, and each channel holds a 1-D I16 array of length 20G (for each point, call "TDMS Write" 1000 times in a For Loop, writing a 1-D I16 array of length 20M each time). (Note: this 40 GB TDMS file contains 1000 x 1 = 1K channels.)
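In rough Python terms, scaled way down, the new design looks like the sketch below. This is only a model: the sizes are shrunk enormously, and `tdms_write` is a hypothetical stand-in for the LabVIEW "TDMS Write" node, not a real API. The point is that repeatedly appending 1-D records to the same channel name leaves one channel per group, instead of one channel per record.

```python
# Sketch of the proposed layout (sizes shrunk enormously; `tdms_write` is a
# hypothetical stand-in for the LabVIEW "TDMS Write" node, not a real API).
from collections import defaultdict

# Toy file model: {(group, channel): total samples appended to that channel}
tdms_file = defaultdict(int)

def tdms_write(group, channel, data_1d):
    """Appending to an existing channel creates no new channel object."""
    tdms_file[(group, channel)] += len(data_1d)

N_POINTS = 10          # stands in for 1000 groups (points)
N_RECORDS = 10         # stands in for 1000 "TDMS Write" calls per group
RECORD_LEN = 100       # stands in for a 20M-sample record

for point in range(N_POINTS):
    group = f"point_{point}"
    for _ in range(N_RECORDS):
        record = [0] * RECORD_LEN          # one 1-D I16 record
        tdms_write(group, "ch0", record)   # always the same channel name

print(len(tdms_file))  # one channel per group, not N_POINTS x N_RECORDS
```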
05-10-2012 08:01 AM
For a typical TDMS file in our application, one group (i.e., one point) contains tens to ~100 channels, which are generated from the PXI5122 in multi-record mode. The maximum size of each channel goes up to the memory limit of the 5122 (64 MB, single channel).
Following your suggestion, I've tried putting "TDMS Write" in a For loop, but still using 2-D I16 arrays. It turned out that I was able to record over 1000 groups into a ~40 GB TDMS file, and it seems the file size will ultimately be limited by the hard drive rather than by RAM. Please see the attached revised part of the code. One thing I noticed is that the trade-off for this change is ~0.8 s more time spent in each group loop (4 s vs. 3.2 s). This may not be a big problem, as we can coordinate the other instrument accordingly.
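For what it's worth, here is the back-of-the-envelope arithmetic with the numbers already quoted in this thread (no new measurements), showing the channel-count reduction and the total time cost of the observed slowdown:

```python
# Rough arithmetic using only the numbers quoted in this thread.
groups = 1000

# Channel count before vs. after the layout change:
channels_2d = groups * 1000        # one channel per record: 1M channels
channels_1d = groups * 1           # one channel per group: 1K channels
print(channels_2d // channels_1d)  # 1000x fewer channel objects to track

# Observed cost: each group loop took ~0.8 s longer (4 s vs. 3.2 s).
extra_total_min = groups * (4.0 - 3.2) / 60
print(round(extra_total_min, 1))   # roughly 13.3 extra minutes per full run
```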
Can you explain a little about what is going on with this change? I guess the memory usage is different, but I am not very clear on why.