
Memory Management for Large Arrays

I assumed you were running into memory problems while saving as well, in which case writing line by line should reduce the memory footprint. Otherwise, your previous Write Spreadsheet File approach works fine.
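Since LabVIEW is graphical, here is a minimal Python sketch of the line-by-line idea (function and file names are hypothetical): each row is formatted and written inside the loop, so the full output text never has to exist in memory at once.

```python
import csv
import os
import tempfile

def write_rows_streaming(path, labels, values):
    """Format and write one row at a time; only the current row's
    text exists in memory, not the whole spreadsheet string."""
    with open(path, "w", newline="") as f:
        writer = csv.writer(f, delimiter="\t")
        for label, row in zip(labels, values):
            writer.writerow([label, *row])  # one row formatted, written, freed

out_path = os.path.join(tempfile.gettempdir(), "rows_demo.txt")
write_rows_streaming(out_path, ["a", "b"], [[1, 2], [3, 4]])
```

This mirrors calling Write Delimited Spreadsheet with "append" inside a loop, instead of building one giant string first.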

Attached in LV2013.

/Y

G# - Award winning reference based OOP for LV, for free! - Qestit VIPM GitHub

Qestit Systems
Certified-LabVIEW-Developer
Message 21 of 41
(1,205 Views)

Yes, I did run into memory problems while writing to file.

 

Ok, to make a more realistic test, I prepared an array with more than a million elements, and the corresponding values array of the same number of rows, and 4 columns.

 

In the file "File Write timer.vi", I am testing several methods to see the time differences.

 

Interestingly, the original approach, where I merge the two arrays column-wise, works well. I put the code you added into a separate case, "OT row-wise", but it doesn't work. I also created a third option that appends each row (formed by merging one element from the strings array with the corresponding row of the 2D values array) to the spreadsheet file.

 

The first option works and produced a spreadsheet file of more than 60 MB; the second option gave an error, and the third didn't write anything.

I also tried writing a variant to file, but I don't know what the output should look like, so that option is not completely implemented.

 

I would also like to understand how writing in chunks (as I mentioned in the previous post), following that example file's logic, could be implemented in my case.
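I haven't seen the internals of "GLV_StreamToDisk.vi", but the general chunking idea can be sketched in Python (all names here are hypothetical): format a fixed number of rows per pass and write each block before formatting the next, so memory use is bounded by the chunk size rather than the total row count.

```python
import os
import tempfile

def write_in_chunks(path, labels, values, chunk_size=10000):
    """Format and write chunk_size rows per pass; only one chunk's
    worth of formatted text is ever held in memory."""
    with open(path, "w") as f:
        for start in range(0, len(labels), chunk_size):
            stop = min(start + chunk_size, len(labels))
            block = "".join(
                labels[i] + "\t" + "\t".join(str(v) for v in values[i]) + "\n"
                for i in range(start, stop)
            )
            f.write(block)  # one chunk flushed, then its text is freed

chunk_path = os.path.join(tempfile.gettempdir(), "chunk_demo.txt")
write_in_chunks(chunk_path, ["r1", "r2", "r3"], [[1], [2], [3]], chunk_size=2)
```

Chunking trades a little bookkeeping for a tunable memory ceiling: larger chunks mean fewer, bigger writes; smaller chunks mean lower peak memory.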

 

Thanks ahead!

Vaibhav
0 Kudos
Message 22 of 41
(1,173 Views)

Don't do all three writes in parallel. Disk I/O is much more efficient if the operations occur serially. You branch the same data to several locations in parallel and format the same data three times, so the compiler must make copies. Your time measurements are also highly suspect, because the parallel operations step on each other's toes, especially since the subVI is not reentrant anyway. Not good.
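The serial-measurement point can be sketched in Python (function names and data are illustrative, not from the original VIs): each candidate write runs to completion before the next starts, so the measurements don't contend for the same disk.

```python
import os
import tempfile
import time

def timed(write_fn, path, data):
    """Run one write in isolation and return its elapsed time."""
    t0 = time.perf_counter()
    write_fn(path, data)
    return time.perf_counter() - t0

def write_joined(path, data):
    # Build the whole string first, then write once
    with open(path, "w") as f:
        f.write("\n".join(map(str, data)))

def write_lines(path, data):
    # Write line by line
    with open(path, "w") as f:
        for x in data:
            f.write(str(x) + "\n")

data = list(range(1000))
base = tempfile.gettempdir()
# Serial execution: one measurement finishes before the next begins
results = {
    fn.__name__: timed(fn, os.path.join(base, fn.__name__ + ".txt"), data)
    for fn in (write_joined, write_lines)
}
```

Run in parallel instead, and the two writes would share disk bandwidth and caches, making both numbers meaningless.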

 

What is the purpose of the complicated FOR loop on the left? Just use "build array" in concatenate mode instead.
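In list terms (a rough Python analog, since the actual diagram isn't shown here), the contrast is growing an array element by element in a loop versus concatenating the pieces in one step, which is what Build Array in concatenate mode does:

```python
# Stand-in for several source arrays feeding the loop
chunks = [[1, 2], [3, 4], [5, 6]]

# The complicated version: grow the result inside a loop
grown = []
for c in chunks:
    grown.extend(c)

# The Build Array (concatenate mode) analog: one-step concatenation
flat = [x for c in chunks for x in c]
```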

Message 23 of 41
(1,161 Views)

Agreed on all.

My bad!

 

In fact, I have always wondered about time measurements for parallel processes, because the processor is doing several tasks simultaneously. But I saw this method of time measurement in forum replies, so I thought it was the way to do it. I used to run the processes serially before.

Vaibhav
0 Kudos
Message 24 of 41
(1,154 Views)

Just use a FOR loop to do the different writes on the same data.

 

 

Message 25 of 41
(1,150 Views)

You can write the variant to a binary file and later read it back as a variant. I do that all the time; all attributes are preserved.
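A rough Python analog of this variant round trip (the dict and file name are made up for illustration) is serializing a structure to a binary file and reading it back intact, attributes included:

```python
import os
import pickle
import tempfile

# A dict standing in for a LabVIEW variant with attributes attached
record = {"values": [1.5, 2.5, 3.5], "attrs": {"units": "V", "rate": 1000}}

bin_path = os.path.join(tempfile.gettempdir(), "variant_demo.bin")
with open(bin_path, "wb") as f:
    pickle.dump(record, f)          # write the whole structure as binary
with open(bin_path, "rb") as f:
    restored = pickle.load(f)       # read it back, "attributes" intact
```

As in the LabVIEW case, the binary file is not human-readable; it is for faithful storage and later programmatic reading, not for viewing in a text editor.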

0 Kudos
Message 26 of 41
(1,142 Views)

@altenbach wrote:

Just use a FOR loop to do the different writes on the same data.

 

 



Again, very clever.

But I am doing them in sequence just to measure the time. But yes, a good technique.

I built that array with a FOR loop at the beginning for scalability: I only have to edit the value of N for both arrays, instead of adding inputs to Build Array, where I might miss one. But you are right, Build Array could be used as well. I was complicating things instead of finding simple solutions.

 

Ok, after changing to serial file writing, XS_2 still gives an error, and XS_3 (append to file = TRUE) took so long that I interrupted it midway and removed it altogether. In its place I added a "Binary" option as XS_3, choosing the extension .txt to start with. But the result isn't readable, so it's not going to serve the purpose.

The idea of storing to file is to use the data for analysis and documentation.

It also took more than triple (almost quadruple) the time of XS_1 (the simple array-merging technique that had given me the error in the past).

 

Vaibhav
0 Kudos
Message 27 of 41
(1,142 Views)

@Vaibhav wrote:

But I am doing them in sequence just to measure the time. But yes, a good technique.


You can easily place the timing sequence inside the FOR loop and auto-index at the right loop boundary. You'll get an array of times, one for each iteration.
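In Python terms (a minimal sketch with made-up task functions), this means starting the clock inside the loop so each iteration gets its own measurement; collecting the results into a list is the analog of the auto-indexed output tunnel:

```python
import time

def run_timed(tasks):
    """Time each task inside the loop; the returned list holds one
    elapsed time per iteration (the auto-indexing analog)."""
    times = []
    for task in tasks:
        t0 = time.perf_counter()   # clock starts fresh each iteration
        task()
        times.append(time.perf_counter() - t0)
    return times

elapsed = run_timed([lambda: sum(range(100000)) for _ in range(3)])
```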

Message 28 of 41
(1,127 Views)

Ok, I get it.

Attached the code. If it weren't for the error message from XS_2, the timing of the next iteration wouldn't have been so skewed; in the previous method using sequential frames, this didn't happen.

 

Still wondering what is wrong with the row-wise file writing option.

 

Do you have any idea about that "writing in chunk" method (as in the "GLV_StreamToDisk.vi")? and how can I incorporate it in my case?

 

Thanks for your time and efforts!

Vaibhav
0 Kudos
Message 29 of 41
(1,118 Views)

Your timing code is still pretty meaningless, because the elapsed time for the first iteration will include the elapsed time of the first loop, most of the other operations, as well as the variant loop. I still don't see the point of the complicated loop on the left.

0 Kudos
Message 30 of 41
(1,105 Views)