

Reading very large (193 GB) binary file

Solved!

@jgold47 wrote:

Wiring a cluster rather than the array of clusters, yes, I am able to open and read from the large file.  This will require other changes "downstream," but it might solve my problem.  (In response to another reply that popped up, yes, I had previously tried wiring values other than 1 to "count.")


Excellent! Then read 10^6 clusters at a time and analyze in chunks. (Working with single elements would probably be slow.)
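As a rough illustration (Python rather than LabVIEW, since G is graphical; the record layout and file name are made-up stand-ins for the actual cluster), the chunked-read pattern looks something like this, assuming the file holds whole records:

```python
import struct

RECORD = struct.Struct("<dI")   # hypothetical cluster: one DBL + one U32
CHUNK = 10**6                   # clusters per read, as suggested above

with open("huge.bin", "rb") as f:
    while buf := f.read(RECORD.size * CHUNK):     # one "count = 1e6" read
        for rec in RECORD.iter_unpack(buf):       # walk this chunk of records
            pass                                  # stand-in for the analysis
```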

/Y

Message 21 of 24

@jgold47 wrote:

Wiring a cluster rather than the array of clusters, yes, I am able to open and read from the large file.  This will require other changes "downstream," but it might solve my problem.  (In response to another reply that popped up, yes, I had previously tried wiring values other than 1 to "count.")


It may also solve a problem that you have yet to find!

 

I also had to handle a multi-gigabyte file, and the large amount of data presented a performance challenge just to process it all. I ended up using a chain of producer-consumer loops such that...

 

Loop 1

Looks for the CR in the text file and passes each record to loop 2.

 

Loop 2

Formats the data in the record as a cluster and passes it to loop 3.

 

Loop 3

Checks the timestamp to see if it is in the desired time range and, if so, passes the record to loop 4.

 

Loop 4

Applies the new set of readings to the GUI.

 

This let me put multiple CPUs to work and update the GUI on a regular schedule.
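For readers outside LabVIEW, the same four-stage pipeline can be sketched with queues and threads; everything here (the "time,value" record format, the time range, the file name) is invented for illustration, with a print standing in for the GUI loop:

```python
import queue, threading

q12, q23, q34 = (queue.Queue(maxsize=1000) for _ in range(3))
DONE = object()  # sentinel passed down the chain to shut each loop down

def loop1(path):
    # Loop 1: split the file into line-terminated records
    with open(path, "rb") as f:
        for line in f:
            q12.put(line)
    q12.put(DONE)

def loop2():
    # Loop 2: format each record as a "cluster" (a dict stands in here)
    while (rec := q12.get()) is not DONE:
        t, v = rec.split(b",")
        q23.put({"t": float(t), "v": float(v)})
    q23.put(DONE)

def loop3(t0, t1):
    # Loop 3: keep only records whose timestamp is in the desired range
    while (c := q23.get()) is not DONE:
        if t0 <= c["t"] <= t1:
            q34.put(c)
    q34.put(DONE)

def loop4():
    # Loop 4: the consumer end; print stands in for the GUI update
    while (c := q34.get()) is not DONE:
        print(c)

threads = [threading.Thread(target=loop1, args=("huge.txt",)),
           threading.Thread(target=loop2),
           threading.Thread(target=loop3, args=(0.0, 60.0)),
           threading.Thread(target=loop4)]
for t in threads:
    t.start()
for t in threads:
    t.join()
```

Bounded queues (maxsize) keep an upstream stage from racing ahead and filling memory, which mirrors how a fixed-size LabVIEW queue throttles a fast producer.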

 

So embrace the change!

 

It may be good.

 

Ben

Message 22 of 24
Solution accepted by topic author jgold47

Time to close this out, even though I still don't have an answer to the real question: Why does the file size matter?

 

I am able to open the very large binary file; the error occurs when I try to read it the first time. It is not the format of the file, as I can read the first "chunks" of data I saved if I use a utility to carve off a relatively small section from the front of the file. Thus it is the file size itself. For some reason, the binary read VI looks at the size of the file and says "no way," even though I'm only trying to read a manageable chunk at a time.

 

But I don't see this discussion as getting me much further, so as I say, time to close this out.

Message 23 of 24

Long since closed, but I ran into this thread while trying to read a rather small binary file: exactly 2 GB, to be precise. It immediately came back with an "Out of Memory" error.

Thanks and kudos to Kyle9730; he was onto it. It is (in my opinion) LV's array index type, which is I32 and can therefore address only 2 GB (= 0x8000 0000 = 2,147,483,648). So my guess is: LV tries to allocate the memory for the binary data ahead of time and throws this error instead of even attempting the read.

In my case, reading just U8s, it became quite obvious, whereas with jgold47's clusters the calculation was much harder to see.

So it is not the file size itself at all, but the array size you are trying to read with "Read from Binary File.vi".

In my snippet I overcame the problem by splitting up the file read and converting the data to U16 before continuing.

(Why I unzip in memory is a topic for a different thread.)
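A text-language sketch of that workaround (the chunk size and file name are illustrative, and it assumes the file has an even byte count so the U16 reinterpretation is valid):

```python
I32_MAX = 2**31 - 1          # largest count an I32 array index can express
CHUNK = 512 * 2**20          # read 512 MB at a time, far below that limit

with open("data.bin", "rb") as f:
    while chunk := f.read(CHUNK):
        u16 = memoryview(chunk).cast("H")   # reinterpret the bytes as U16s
        # ... process u16 here before reading the next piece ...
```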

Message 24 of 24