11-28-2017 02:37 AM - edited 11-28-2017 02:40 AM
Background: I am using a high-speed digital waveform generator card (6556) and a scope card (5154).
The scope card's sampling rate is 50 MHz, with a record length of 2500 and a trigger rate of 10 kHz, and I am using the "fetch more than available" property for uninterrupted real-time acquisition. So I am acquiring 25 M samples/sec, which I have to write to a binary file. I used a producer-consumer loop structure with queues for this.
Specifics about the file-write loop: I am writing to multiple files, updating the file name after a specified amount of data (every 80,000 records). On each write-loop iteration 100 records are written (i.e. 2500 × 100 sample points). Since the same number of data points is written every time, the execution time of each iteration should ideally be about the same, but I see jumps from roughly 15 ms (most iterations) to values sometimes greater than 50 or even 100 ms.
Problem: When I enable "file write", an error message ("Not enough memory for the operation") pops up after writing more than roughly 400,000 records, followed by another LabVIEW error (code 1074118653). If I don't enable the file write, the error does not occur, which means the acquisition itself is fine and the writing loop is causing the interruption.
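For reference, here is a rough Python stand-in for the structure described above (the LabVIEW block diagram can't be shown inline): a producer queues 2500 × 100 blocks, and a consumer writes them to binary files, rotating to a new file every 80,000 records. All names are placeholders, not elements of the actual VI.

import queue
import numpy as np

RECORDS_PER_WRITE = 100        # records per queue element, as in the post
SAMPLES_PER_RECORD = 2500
RECORDS_PER_FILE = 80_000      # rotate to a new file after this many records

def producer(q, n_blocks):
    for _ in range(n_blocks):
        block = np.empty((RECORDS_PER_WRITE, SAMPLES_PER_RECORD), np.float64)
        # ... fetch from the digitizer into 'block' here ...
        q.put(block)
    q.put(np.empty((0, 0)))    # empty array as the stop sentinel

def consumer(q):
    written, file_index = 0, 0
    f = open(f"data_{file_index}.bin", "wb")
    while True:
        block = q.get()
        if block.size == 0:                 # sentinel -> stop
            break
        block.tofile(f)
        written += RECORDS_PER_WRITE
        if written >= RECORDS_PER_FILE:     # start the next file
            f.close()
            file_index += 1
            written = 0
            f = open(f"data_{file_index}.bin", "wb")
    f.close()

Running the producer and consumer in two parallel threads connected by the queue mirrors the two loops in the VI.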
11-28-2017 03:06 AM
You ditch 99.9% of the data; you only write 1 of 800 sample groups. The 799 times you don't write to file you should build up a data array, which also means that the finishing empty-array iteration should result in a last save of whatever is cached.
/Y
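In Python-style pseudocode, the consumer-side change being suggested might look like the sketch below (the batch size and names are placeholders, not values from the thread): cache dequeued blocks and write them as one large chunk instead of many small ones, with a final save of whatever is still cached.

import numpy as np

def write_batched(blocks, f, batch=10):
    # 'blocks' is whatever the consumer dequeues; 'f' is an open binary file.
    cache = []
    for block in blocks:
        cache.append(block)
        if len(cache) >= batch:
            np.concatenate(cache).tofile(f)   # one large write instead of 'batch' small ones
            cache.clear()
    if cache:                                  # the "last save" of what is still cached
        np.concatenate(cache).tofile(f)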
11-28-2017 03:27 AM
Hi Rex,
what is the queue size when the error occurs?
Why do you set the file position to "end" each time?
To stop the consumer I would destroy the queue after the producer finishes and check the error out of the QueueRead in the consumer loop (instead of sending an empty array)…
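As a rough analog of that stop pattern outside LabVIEW (only a sketch; it assumes Python 3.13's Queue.shutdown and uses placeholder names): the producer shuts the queue down when it is finished, and the consumer keeps dequeuing until the read fails, instead of watching for an empty-array sentinel.

import queue

def consumer(q):
    while True:
        try:
            item = q.get()
        except queue.ShutDown:   # analogous to the error returned after the queue is released
            break
        handle(item)             # placeholder for the file write

def handle(item):
    pass

# Producer side, once acquisition has finished:
# q.shutdown()   # non-immediate shutdown lets the consumer drain what is still queued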
11-28-2017 04:16 AM
@Yamaeda wrote:
You ditch 99.9% of the data; you only write 1 of 800 sample groups. The 799 times you don't write to file you should build up a data array, which also means that the finishing empty-array iteration should result in a last save of whatever is cached.
/Y
I tried using a flush function after the write function so that the data is physically written in every iteration. I am still getting the same error.
11-28-2017 05:11 AM
@GerdW wrote:
Hi Rex,
what is the queue size when the error occurs?
Why do you set the file position to "end" each time?
To stop the consumer I would destroy the queue after the producer finishes and check the error out of the QueueRead in the consumer loop (instead of sending an empty array)…
Hi,
I have removed the "set file position". I was under the impression that appending wouldn't happen without it. I am sending an empty array because I do not want to lose the data that is still in the queue after I stop the producer; the consumer keeps saving data until it finds the empty array at the end.
I noticed that the number of elements in the queue grows to over 1300 (1300 2D arrays) when I see the jumps in the consumer-loop iteration time (the ones I mentioned earlier, from 15 ms to 50-100 ms in some iterations).
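That backlog alone roughly explains the out-of-memory error. With the sizes quoted above, a quick back-of-the-envelope check (a sketch, not taken from the thread) gives:

records_per_element = 100        # one queue element = a 2500 x 100 DBL array
samples_per_record = 2500
bytes_per_sample = 8             # DBL
element_mb = records_per_element * samples_per_record * bytes_per_sample / 1e6
print(element_mb)                # ~2 MB per queue element
print(1300 * element_mb / 1e3)   # ~2.6 GB backlog at 1300 elements

If this is 32-bit LabVIEW, roughly 2.6 GB of queued data is at or beyond what the process can address, so a steadily growing queue will eventually hit "Not enough memory" regardless of how the file write itself is done.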
11-28-2017 05:20 AM
@Rex_saint wrote:
@Yamaeda wrote:
You ditch 99.9% of the data; you only write 1 of 800 sample groups. The 799 times you don't write to file you should build up a data array, which also means that the finishing empty-array iteration should result in a last save of whatever is cached.
/Y
I tried using a flush function after the write function so that the data is physically written in every iteration. I am still getting the same error.
That had nothing to do with what I wrote. 🙂 Build an array in the consumer which you empty when you write to disk.
/Y
11-28-2017 06:41 AM
One tip: convert the DBL arrays to SGL BEFORE sending them via the queue. This halves the amount of data stored in the queue. Assuming, of course, you really do this in your final code.
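To illustrate the size difference with the array shape from this thread (a NumPy stand-in for the LabVIEW data types, nothing more):

import numpy as np

block_dbl = np.zeros((2500, 100), dtype=np.float64)   # DBL: 8 bytes per sample
block_sgl = block_dbl.astype(np.float32)              # SGL: 4 bytes per sample
print(block_dbl.nbytes, block_sgl.nbytes)             # 2000000 vs 1000000 bytes per queue element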
One point that might be worth mentioning...
If you pre-allocate your files (assuming you know how many traces you will be recording), 64-bit Windows actually does shadow caching of the files in memory. It's kind of like a free 64-bit memory space usable from 32-bit LabVIEW. If you can make use of this, you can essentially benefit from many GB of RAM which are otherwise unusable in 32-bit LabVIEW. For example, if you know you'll have 100 files, pre-allocate them by simply writing zeroes to them. Windows caches the most recently accessed files up to and including the full amount of RAM it has "free", so later writes and reads actually go to memory, not directly to disk, and the OS controls when the data is physically written. Closing the file may force a flush; I'm not sure. If so, you might want to delay doing that to get maximum throughput.
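A minimal sketch of that pre-allocation step, written in Python for readability (file names, count and size are placeholders; the 100-file count just echoes the example above):

CHUNK = b"\0" * (1 << 20)   # 1 MiB of zeroes

def preallocate(path, size_bytes):
    # Actually write zeroes (rather than creating a sparse file), as suggested above,
    # so the OS has real, cacheable file contents to work with.
    with open(path, "wb") as f:
        for _ in range(size_bytes // len(CHUNK)):
            f.write(CHUNK)
        f.write(b"\0" * (size_bytes % len(CHUNK)))

for i in range(100):                                  # "100 files" from the example
    preallocate(f"trace_{i}.bin", 10 * 1024 * 1024)   # placeholder size per file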
11-28-2017 10:18 PM
For example, if you know you'll have 100 files, pre-allocate them by simply writing zeroes to them. Windows caches the most recently accessed files up to and including the full amount of RAM it has "free", so later writes and reads actually go to memory, not directly to disk, and the OS controls when the data is physically written. Closing the file may force a flush; I'm not sure. If so, you might want to delay doing that to get maximum throughput.
Wow, that sounds promising. I am okay with delaying the flushing. I didn't quite get how to pre-allocate a file. Does it mean that I should create all the files outside the consumer loop, write zeros to them (how many?), not close them, and then perform the write operations later in the loop? Can you help me modify the VI below with respect to your idea? I have created a dummy VI that writes 400 chunks (2500 × 100 2D arrays) into one file. To check the performance, I display an array containing all loop iteration times greater than 30 ms; the smaller this array is, the better.
11-29-2017 12:55 AM
You just need to have written the files shortly before starting your acquisition. If your file needs to be 10 MB, then you write 10 M zero bytes...
That's all. Inside the loop, you just overwrite the file as usual (when opening, choose the right open method and start by putting the file write pointer at the start of the file).
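Continuing the earlier Python sketch (names and the block generator are placeholders, and the file is assumed to have been pre-allocated as described above), the in-loop usage would look roughly like this:

import numpy as np

def acquire_blocks(n=400):
    # Placeholder for the dequeued 2500 x 100 blocks (400 chunks, as in the dummy VI).
    for _ in range(n):
        yield np.zeros((2500, 100), dtype=np.float32)

with open("trace_0.bin", "r+b") as f:   # "r+b": overwrite in place without truncating the pre-allocated file
    f.seek(0)                           # write pointer at the start, as described above
    for block in acquire_blocks():
        block.tofile(f)                 # overwrites the zeroes; the OS decides when to flush to disk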