04-18-2012 08:19 AM
I've written a VI that reads a PNRF data file (using LabVIEW's PNRF DataPlugin) and derives the channel data, channel names, and sampling rate. However, I'm limited to reading files that have no more than 8 channels with 2,370,000 readings in each channel. Anything more and LabVIEW gives me an out-of-memory error on a Windows 7 Pro 32-bit computer with 4 GB of RAM. According to Task Manager, LabVIEW errors out at a memory usage of 1,063,216 KB.
Is there a better way to read in larger PNRF files?
The VI looks like this:
Thanks,
Ron
04-18-2012 08:23 PM
Hi Ron,
Can you please put your PNRF data file on ftp://ftp.ni.com/incoming/ ? We want to investigate whether anything is wrong in the DataPlugin or LabVIEW. Thanks!
Best Regards,
Mavis
04-19-2012 12:30 AM
Well, 1 GB of memory use is pretty large for a 32-bit system. By default LabVIEW gets at most 2 GB of your 4 GB on 32-bit Windows, since the rest is reserved for the OS, so it will have to error out at some point. Depending on how large your data chunks are this can sometimes go up to 1.3 GB or so, but then it definitely stops. Since you read in 8 channels and each channel waveform requires a contiguous chunk of memory, you in fact acquire 8 chunks of 2,370,000 * 8 bytes. As soon as you start to put this into your "converted data" array, you tell LabVIEW to allocate another chunk of 8 * 8 * 2,370,000 bytes. And once you do something with that data, LabVIEW will repeatedly have to create copies of it.
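To put rough numbers on that (assuming 8-byte double-precision samples, as above):

2,370,000 samples * 8 bytes = roughly 19 MB per channel
8 channels * 19 MB = roughly 152 MB per full copy of the data

So even a handful of extra copies made during conversion and processing quickly pushes the process toward the 1 to 1.3 GB where a 32-bit LabVIEW process typically runs out of contiguous address space.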
The right way to process such large files is not to read them in all at once, but in smaller chunks. I have no idea if the PNRF Toolkit has an advanced function palette that supports reading single channels or repeatedly reading smaller time spans of the data, though.
04-19-2012 06:27 AM
Hello Mavis,
I don't have an FTP transfer program on my system, so I've attached the largest file.
Sorry, the forums rejected the file for exceeding their size limit. Here is a link to the file in the public folder of my Dropbox account:
http://dl.dropbox.com/u/2023272/Honda%20linear011.pnrf
Please let me know if there is anything else I can do to help.
Thanks,
Ron
04-19-2012 06:30 AM
Hello Rolf,
Yes, I've been looking for a way to read one channel at a time, and in smaller time chunks, but haven't found any obvious functions. Part of the problem is my lack of experience with this data conversion part of LabVIEW.
Thanks,
Ron
04-22-2012 10:21 PM
Hi Ron,
Thanks for uploading the PNRF file. With it we verified that the DataPlugin is fine and consumes a reasonable amount of memory.
Can you please try reading channel values chunk by chunk? You can use the index/count inputs of 'Read Data'.
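Roughly, chunked reading would look like this (a sketch only, not LabVIEW code; read_data, process, and total_samples are hypothetical placeholders, with read_data standing in for the 'Read Data' call and its index/count inputs, and the chunk size is just an example):

CHUNK = 100_000                                    # example chunk size
for start in range(0, total_samples, CHUNK):
    count = min(CHUNK, total_samples - start)
    block = read_data(index=start, count=count)    # reads this span of values from all channels
    process(block)                                 # use the chunk, then let it go

This way only one chunk of the file has to be in memory at a time.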
Hope this helps,
Mavis
04-24-2012 11:34 AM
Hello Mavis,
I created a read-by-chunk version. Hopefully I wrote it correctly. The VM size and Mem Usage (as reported by Windows Task Manager) are about the same as with the VI that doesn't read each data channel individually. In both VIs I have added the decimation function to reduce the amount of data processed, which has helped in reading larger files. I will be testing the limits of both VIs in the next few days.
Thanks,
Ron
04-24-2012 12:09 PM
Update: I cut the run time to about 1/4 and the memory usage to about 1/2 by changing the "output data channel" property of the "Read Data [Channel]" function to "array of waveforms".
04-24-2012 09:20 PM
Hi Ron,
Glad to hear that your time and memory usage improved a lot with the array of waveforms!
The index/count inputs of Read Data specify the start index and the number of values to read from all channels. You can refer to the Help for details.
But that might not help in your case, since all the values are concatenated later anyway.
Best Regards,
Mavis
04-25-2012 01:30 AM
@Mavis wrote:
Hi Ron,
Glad to hear that your time and memory usage improved a lot with the array of waveforms!
The index/count inputs of Read Data specify the start index and the number of values to read from all channels. You can refer to the Help for details.
But that might not help in your case, since all the values are concatenated later anyway.
Best Regards,
Mavis
Well, it would help if you do the decimation in the loop that reads the chunks for each channel. As it is now, the VI reads in all the data as one block per channel and then decimates it afterwards. Doing the same in two loops, one over the channels and the other over smaller individual blocks per channel, would reduce the maximum required memory considerably.
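In rough Python-style terms (a minimal sketch only, not the actual PNRF API; read_channel_chunk is a hypothetical per-channel read using the index/count inputs, and the chunk size and decimation factor are arbitrary examples):

import numpy as np

CHUNK = 100_000    # example chunk size, tune to the available memory
DECIMATE = 10      # example decimation factor

def reduce_channel(read_channel_chunk, total_samples):
    # read_channel_chunk(start, count) is a hypothetical stand-in for
    # 'Read Data [Channel]' with its index/count inputs.
    pieces = []
    for start in range(0, total_samples, CHUNK):
        count = min(CHUNK, total_samples - start)
        block = read_channel_chunk(start, count)   # only one chunk in memory at a time
        pieces.append(block[::DECIMATE])           # decimate before keeping anything
    return np.concatenate(pieces)                  # result is ~1/DECIMATE of the channel

The outer loop over the channels then just calls this once per channel, so the full-resolution data for a channel never has to exist in memory all at once.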