Why does writing an SGL file sometimes take long: queue implementation


@crossrulz wrote:

1. Your number of chunks should be in a shift register, not a front panel item that you use a local variable to update.  You can also reset this value inside of the case structure where you create a new file.  This would eliminate the feedback node and Quotient & Remainder node since you can just watch for the value to be 400.

 

2. You should be able to directly wire up your 2D array to the Write Binary File.  If the format isn't quite right, use Reshape Array to make it a 1D Array.  This would eliminate the FOR loop.

 

3. Your jumps are likely to the creation of the new file.  Not much you can do there if that is a requirement.
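Since LabVIEW is graphical, here is only an illustrative Python sketch of the counter-and-rotate pattern described in point 1: keep the chunk count in a loop-local variable (the shift-register equivalent), watch for it to reach 400, and reset it where the new file is created. File names, chunk size, and the `chunks_per_file` parameter are assumptions for illustration, not part of the original VI.

```python
import struct

def write_chunks(chunks, prefix="data", chunks_per_file=400):
    """Roll to a new binary file every `chunks_per_file` chunks,
    mirroring the shift-register counter described above."""
    count = 0        # shift-register equivalent; no local variable needed
    file_index = 0
    f = None
    for chunk in chunks:
        if f is None:
            f = open(f"{prefix}_{file_index}.bin", "wb")
        f.write(struct.pack(f"<{len(chunk)}f", *chunk))  # SGL = 4-byte float
        count += 1
        if count == chunks_per_file:  # watch for the value to hit the limit
            f.close()                 # ...and reset inside the new-file branch
            f = None
            file_index += 1
            count = 0
    if f is not None:
        f.close()
        file_index += 1
    return file_index  # number of files created
```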


Thank you, I have changed the number-of-chunks part. As for the 2D array: I found that writing the 1D array multiple times is faster than writing the 2D array. The file is recreated 3 times (for 1200 chunks), but there are as many as 30 jumps, and they also occur in the middle of writing a file. I tried not updating the file name and wrote all 1200 chunks to the same file; the jumps are still there as the file size increases. I suspect the jumps arise because the file write takes a long time.

 

I am still curious why the jumps occur when I am writing the same amount of data to the file every iteration. Can someone tell me whether this has to do with CPU resource management? And if so, is there a way to minimize the jumps, for example by assigning a priority when resources are requested for the file write?

Message 11 of 15

@Rex_saint wrote:

I am still curious why the jumps occur when I am writing the same amount of data to the file every iteration. Can someone tell me whether this has to do with CPU resource management? And if so, is there a way to minimize the jumps, for example by assigning a priority when resources are requested for the file write?


Both Windows and the hard disk will cache that data. Windows will probably buffer the data until a complete block/sector can be written. You could try adding a file flush after each write. This might even out the storage load, but it will be less efficient overall.
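As a minimal sketch (in Python for illustration, since LabVIEW's Flush File primitive has no text form), this is what "flush after the write" amounts to: pushing the data past the user-space buffer and asking the OS to commit its cache to the device. The function name is hypothetical.

```python
import os

def write_and_flush(f, data: bytes):
    """Write one chunk and force it toward the disk immediately.
    flush() empties the user-space buffer into the OS cache;
    os.fsync() asks the OS to commit its cache to the device.
    This evens out the per-write load, but is slower overall
    than letting the OS batch writes into full blocks."""
    f.write(data)
    f.flush()              # user-space buffer -> OS cache
    os.fsync(f.fileno())   # OS cache -> disk
```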

Message 12 of 15
Solution
Accepted by topic author Rex_saint

Try to write in multiples of the disk sector size; this gives a dramatic improvement.

 

See my post in https://forums.ni.com/t5/LabVIEW/Can-i-optimize-write-of-large-data-array-into-text-file/td-p/335196...
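To make the idea concrete, here is an illustrative Python sketch (class name and the 4096-byte sector size are assumptions; query the actual drive if possible). Data is accumulated and only sector-multiple amounts are handed to the OS, so it never has to read-modify-write a partial sector; any leftover tail is written once at close.

```python
class SectorAlignedWriter:
    """Buffer incoming data and issue writes only in multiples of
    the sector size. The unaligned tail is written once, at close."""

    def __init__(self, path, sector=4096):  # 4096 is a common sector/cluster size
        self.f = open(path, "wb")
        self.sector = sector
        self.buf = bytearray()

    def write(self, data: bytes):
        self.buf += data
        aligned = (len(self.buf) // self.sector) * self.sector
        if aligned:
            self.f.write(self.buf[:aligned])  # sector-multiple portion only
            del self.buf[:aligned]            # keep the remainder buffered

    def close(self):
        if self.buf:
            self.f.write(self.buf)  # leftover tail, written once
        self.f.close()
```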

 

mcduff

Message 13 of 15

@mcduff wrote:

Try to write in multiples of the disk sector size; this gives a dramatic improvement.

 


It does help dramatically. Thanks a lot.

Message 14 of 15

@Rex_saint wrote:

@mcduff wrote:

Try to write in multiples of the disk sector size; this gives a dramatic improvement.


It does help dramatically. Thanks a lot.


As the OP, you have the power to mark the solution(s)! If you feel mcduff's (spot on) post (partially) solved your problem, consider marking it as the solution.

 

If there are other solutions, or if you find something yourself (and post it), you can always mark that as a solution as well.

 

Marking solutions helps us all make sense of the >1M posts in this forum.

 

Message 15 of 15