11-23-2017 01:21 AM
I am using a queue structure to save acquired data to a SGL file. I acquire a 2500x200 array chunk per loop iteration, which takes roughly 20 ms. The file-writing loop takes 20-22 ms per iteration, but sometimes it jumps to 500-1500 ms. I cannot figure out what is causing these jumps in the writing loop's iteration time. Keeping this loop close to 25 ms is very important for me: I'm acquiring with a high-speed digitizer, and long lags in writing will cause either data loss or exhaust the digitizer card's onboard memory.
Solved! Go to Solution.
11-23-2017 06:05 AM - edited 11-23-2017 06:06 AM
You're looking at continuously saving 80 MB per second (2500 x 200 x 4 bytes / 0.025 s). You might very well be running into the limits of the hardware.
Is this an HDD or an SSD? If you want absolute top-notch performance, you need a RAM disk (and lots of RAM).
If you want to optimize your current solution, I'd at least use the low-level file I/O functions. At the moment you're continuously opening, appending to (, flushing) and closing the same file, and that takes time. Open the file once and put the reference in a shift register.
And if you have enough CPU and development time, you might consider compressing the data in some way before writing it to disk.
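Since the original code is LabVIEW, here is a rough Python analogue of the two points above (file names and sizes are made up for illustration): the throughput arithmetic, and the "open once, keep the reference in a shift register, write in the loop" pattern instead of reopening the file every iteration.

```python
import os
import tempfile

# Throughput: a 2500 x 200 chunk of 4-byte SGL floats every 25 ms.
bytes_per_chunk = 2500 * 200 * 4           # 2,000,000 bytes per iteration
throughput = bytes_per_chunk / 0.025       # 80,000,000 bytes/s = 80 MB/s
assert throughput == 80_000_000

# Open-once pattern: keep the handle (the "reference in a shift register")
# alive across iterations instead of open/append/close per write.
path = os.path.join(tempfile.mkdtemp(), "acq.bin")   # hypothetical file
chunk = b"\x00" * 1024                     # stand-in for one data chunk
with open(path, "ab") as f:                # opened once, before the loop
    for _ in range(5):                     # consumer loop
        f.write(chunk)                     # low-level append, no reopen
assert os.path.getsize(path) == 5 * 1024
```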
11-23-2017 06:21 AM
Your biggest problem is that you are constantly opening and closing the file. This can take a long time. You need to create/open the file before your loop and close it after the loop. You can then use Write To Binary File to write your data inside of the loop.
11-23-2017 06:27 AM
1. Don't use a Directory value property; wire the value through, or at least use a local variable.
2. Don't use Write To Spreadsheet File. Instead, open the file outside the loop, write inside the loop, and close the file after the loop.
3. You're probably near the disk's capacity, but the jumps still shouldn't occur.
/Y
11-23-2017 08:38 AM
@Yamaeda wrote:
3. You're probably at the disk capacity, but the jumps shouldn't occur.
The jumping might be a virus scanner. Closing the file can trigger a scan, which costs a lot of disk bandwidth and some CPU as well. So:
4. Add file or directory exception to the virus scanner.
11-23-2017 08:47 AM
Somebody better versed can probably answer this, but as has been mentioned, do not use the Value property. It may cause the bottom loop to run in the UI thread; I'm not sure. My understanding is that the Value property forces a switch to the UI thread, and if that switch sits inside a case structure, the compiler runs the whole loop in the UI thread to avoid thread switching.
mcduff
11-23-2017 09:05 AM
@mcduff wrote:
Somebody better versed can probably answer this, but as has been mentioned, do not use the Value property. It may cause the bottom loop to run in the UI thread; I'm not sure. My understanding is that the Value property forces a switch to the UI thread, and if that switch sits inside a case structure, the compiler runs the whole loop in the UI thread to avoid thread switching.
It would not be there in my code, that's for sure.
The property node read will be forced to synchronise with the UI thread. That probably means it 'waits' for a 30-60 Hz refresh of the UI, AFAIK. Not ideal for sure, but one of the minor bottlenecks. It's also completely redundant, so a good candidate for refactoring.
On the subject of opening/writing/closing the file: while you're replacing that with a file reference in a shift register, the per-iteration file-size read should be done through that reference as well (Get Position).
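As a sketch of that last point in Python (a stand-in for the LabVIEW refnum and Get Position, with a hypothetical file name): track the write position from the open handle itself rather than re-querying the file system for the size each iteration.

```python
import os
import tempfile

# The open handle already knows the current write position, so there is
# no need for a separate file-size query per loop iteration.
path = os.path.join(tempfile.mkdtemp(), "size.bin")  # hypothetical file
with open(path, "wb") as f:
    for _ in range(3):                 # consumer loop
        f.write(b"x" * 100)
        position = f.tell()            # bytes written so far, from the handle
assert position == 300
assert os.path.getsize(path) == 300    # agrees with the file system
```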
11-24-2017 03:44 AM
Hi,
Thank you all for the replies. From what I understand, using the lower-level file I/O functions, avoiding the property node for the directory, and breaking the data into smaller chunks before writing should help the consumer loop's performance. I have made these changes, but I still see jumps in the consumer loop time: roughly 30 times out of 1200 iterations, the loop time exceeds 50 ms. I still don't properly understand why the write time increases on some iterations. Can I optimize the CPU usage of this code in any way to minimize these jumps?
11-24-2017 04:04 AM
No need to move the file pointer to the end: if you keep the reference open, writes continue from the current position. Opening and closing files takes time (that's why you shouldn't use Write To Spreadsheet File, which you've now removed), so I'd go with one file for the whole measurement unless it grows to several GB.
As for chunk size, I'd size it so you write to the file 1 to 10 times per second.
/Y
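The chunk-sizing advice above can be worked out numerically. This is just arithmetic under the thread's own figures (2500x200 SGL chunks every 25 ms, i.e. 80 MB/s), not a measured recommendation:

```python
# Each acquisition iteration produces 2500 x 200 x 4 bytes in 25 ms.
acq_chunk_bytes = 2500 * 200 * 4          # 2,000,000 bytes per 25 ms

# To hit 1-10 file writes per second, accumulate acquisition chunks
# until the buffer holds 0.1-1.0 s of data, then write once.
chunks_per_write_at_10hz = round(0.1 / 0.025)   # 4 chunks per write
chunks_per_write_at_1hz = round(1.0 / 0.025)    # 40 chunks per write

assert chunks_per_write_at_10hz * acq_chunk_bytes == 8_000_000    # 8 MB writes
assert chunks_per_write_at_1hz * acq_chunk_bytes == 80_000_000    # 80 MB writes
```

So writing 1-10 times per second here means buffering 4 to 40 acquisition chunks (8-80 MB) per file write, rather than writing every 25 ms.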
11-24-2017 06:35 AM
1. Your number of chunks should be in a shift register, not a front panel control that you update through a local variable. You can also reset this value inside the case structure where you create a new file. This would eliminate the Feedback Node and the Quotient & Remainder node, since you can just watch for the value to reach 400.
2. You should be able to wire your 2D array directly to Write To Binary File. If the format isn't quite right, use Reshape Array to make it a 1D array. This would eliminate the FOR loop.
3. Your jumps are likely due to the creation of the new file. Not much you can do there if that is a requirement.
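Point 2 above, flattening the chunk and doing a single binary write instead of a row-by-row FOR loop, looks roughly like this in Python (a sketch with a scaled-down array; 'f' is a 4-byte single-precision float, matching SGL):

```python
import io
from array import array

# Scaled-down stand-in for the 2500x200 SGL chunk from the question.
rows, cols = 25, 20
chunk_2d = [[0.0] * cols for _ in range(rows)]

# Flatten to 1D (the analogue of Reshape Array) and write the whole
# chunk in one call instead of one write per row.
flat = array('f', (v for row in chunk_2d for v in row))
buf = io.BytesIO()                      # stand-in for the open file
buf.write(flat.tobytes())               # single binary write
assert buf.getbuffer().nbytes == rows * cols * 4
```

One large write per chunk gives the OS and disk a chance to stream the data, where many small writes add per-call overhead.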