11-25-2017 01:04 AM
@crossrulz wrote:
1. Your number of chunks should be in a shift register, not a front panel item that you use a local variable to update. You can also reset this value inside of the case structure where you create a new file. This would eliminate the feedback node and Quotient & Remainder node since you can just watch for the value to be 400.
2. You should be able to directly wire up your 2D array to the Write Binary File. If the format isn't quite right, use Reshape Array to make it a 1D Array. This would eliminate the FOR loop.
3. Your jumps are likely due to the creation of the new file. Not much you can do there if that is a requirement.
Thank you, I have changed the number-of-chunks part. As for the 2D array, I have found that writing a 1D array multiple times is faster than writing the 2D array. The file is getting updated 3 times (for 1200 chunks), but there are as many as 30 jumps, and they occur in the middle of writing a file. I also tried not updating the file name and used the same file for writing all 1200 chunks, and the jumps are still there as the file size increases. I suspect those jumps arise because the file writing takes place over a long time.
I am still curious why the jumps occur when I am writing the same amount of data to the file every iteration. Please, someone enlighten me if this is something to do with CPU resource management. And if so, is there a way to optimize it to minimize the jumps, like maybe assigning a priority order when resources are requested for the file write?
11-25-2017 04:35 AM
Rex_saint wrote: I am still curious why the jumps occur when I am writing the same amount of data to the file every iteration. Please, someone enlighten me if this is something to do with CPU resource management. And if so, is there a way to optimize it to minimize the jumps, like maybe assigning a priority order when resources are requested for the file write?
Both Windows and the hard disk will cache that data. Windows will probably buffer the data until a complete block/sector can be written. You could try adding a file flush after the writes. This might even out the storage load, but it will be less efficient overall.
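In LabVIEW this would be the Flush File function placed after Write Binary File. As a hedged stand-in, the same idea in Python looks like this; the file name and chunk size are illustrative only:

```python
import os

# Sketch of the "flush after each write" idea (the LabVIEW equivalent is
# the Flush File function). Flushing after every write prevents the OS
# from batching many small writes into one large, jump-inducing disk
# operation, at the cost of overall throughput.
CHUNK = b"\x00" * 4096  # hypothetical chunk of acquired data

with open("data.bin", "wb") as f:
    for _ in range(10):
        f.write(CHUNK)
        f.flush()             # push the user-space buffer to the OS
        os.fsync(f.fileno())  # ask the OS to commit its cache to disk
```

Note the trade-off the post mentions: each `fsync` forces a round trip to the disk, so total throughput drops even though the latency spikes get smaller and more evenly spread.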
11-25-2017 08:54 AM
Try to write in multiples of the disk sector size; this gives a dramatic improvement.
See my post on that thread.
mcduff
11-28-2017 11:44 PM
@mcduff wrote:
Try to write in multiples of the disk sector size; this gives a dramatic improvement.
It does help dramatically. Thanks a lot.
11-29-2017 02:59 AM
@Rex_saint wrote:
@mcduff wrote:
Try to write in multiples of the disk sector size; this gives a dramatic improvement.
It does help dramatically. Thanks a lot.
As the OP, you have the power to mark the solution(s)! If you feel mcduff's (spot on) post (partially) solved your problem, consider marking it as the solution.
If there are other solutions, or if you find something yourself (and post it), you can always mark that as solution as well.
Marking solutions helps us all to make sense of those >1M posts in the forum.