From Friday, April 19th (11:00 PM CDT) through Saturday, April 20th (2:00 PM CDT), 2024, ni.com will undergo system upgrades that may result in temporary service interruption.
We appreciate your patience as we improve our online experience.
12-14-2016 03:18 AM
@jgold47 wrote:
Wiring a cluster rather than the array of clusters, yes, I am able to open and read from the large file. This will require other changes "downstream," but it might solve my problem. (In response to another reply that popped up, yes, I had previously tried wiring values other than 1 to "count.")
Excellent! Then read 10^6 clusters at a time and analyze the chunks. (Working with single elements would probably be slow.)
/Y
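Since a LabVIEW diagram can't be pasted as text, here is a minimal Python sketch of the chunked-read idea above. The record layout (one DBL plus one U32) and the chunk size are assumptions for illustration only, standing in for whatever cluster the actual file contains:

```python
import struct

RECORD = struct.Struct("<dI")  # hypothetical record: one little-endian double + one uint32
CHUNK_RECORDS = 10**6          # read ~10^6 records per request, as suggested above

def read_in_chunks(path):
    """Yield lists of unpacked records, one bounded chunk at a time."""
    with open(path, "rb") as f:
        while True:
            data = f.read(RECORD.size * CHUNK_RECORDS)
            if not data:
                break
            # Drop any trailing partial record (shouldn't occur in a well-formed file)
            usable = len(data) - (len(data) % RECORD.size)
            yield list(RECORD.iter_unpack(data[:usable]))
```

Each chunk can then be analyzed and discarded before the next is read, so memory stays bounded regardless of total file size.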
12-14-2016 12:59 PM
@jgold47 wrote:
Wiring a cluster rather than the array of clusters, yes, I am able to open and read from the large file. This will require other changes "downstream," but it might solve my problem. (In response to another reply that popped up, yes, I had previously tried wiring values other than 1 to "count.")
It may also solve a problem that you have yet to find!
I also had to handle a multi-gigabyte file, and the large amount of data presented a performance challenge just processing it all. I ended up using a chain of producer-consumer loops such that...
Loop1
Looks for a CR in the text file and passes the record to Loop2
Loop2
Formats the data in the record as a cluster and passes it to Loop3
Loop3
Checks the timestamp to see if it is in the desired time range and, if so, passes the record to Loop4
Loop4
Applies the new set of readings to the GUI
This let me put multiple CPU cores to work and update the GUI on a regular schedule.
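A rough text-language sketch of the four-loop chain described above, using Python queues and threads in place of LabVIEW queue references. The stage functions and the comma-separated record format are illustrative assumptions, not the actual VI:

```python
import queue
import threading

DONE = object()  # sentinel that shuts each stage down in turn

def stage(inbox, outbox, work):
    """Generic pipeline stage: apply `work` to each item, forward non-None results."""
    while True:
        item = inbox.get()
        if item is DONE:
            if outbox is not None:
                outbox.put(DONE)
            break
        result = work(item)
        if result is not None and outbox is not None:
            outbox.put(result)

def run_pipeline(lines, t_min, t_max, display):
    q1, q2, q3 = queue.Queue(), queue.Queue(), queue.Queue()
    # Loop2: format each text record as a (timestamp, value) "cluster"
    parse = lambda line: tuple(float(x) for x in line.split(","))
    # Loop3: keep only records inside the desired time range
    in_range = lambda rec: rec if t_min <= rec[0] <= t_max else None
    stages = [
        threading.Thread(target=stage, args=(q1, q2, parse)),
        threading.Thread(target=stage, args=(q2, q3, in_range)),
        threading.Thread(target=stage, args=(q3, None, display)),  # Loop4: update GUI
    ]
    for t in stages:
        t.start()
    # Loop1: split the input at record boundaries and feed the chain
    for line in lines:
        q1.put(line)
    q1.put(DONE)
    for t in stages:
        t.join()
```

Because each stage runs independently, parsing, filtering, and display can proceed in parallel, which is the same benefit the LabVIEW producer-consumer chain provides.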
So embrace the change!
It may be good.
Ben
12-20-2016 04:50 PM
Time to close this out, even though I still don't have an answer to the real question: Why does the file size matter?
I am able to open the very large binary file; the error occurs when I try to read it the first time. It is not the format of the file, as I can read the first "chunks" of data I saved if I use a utility to carve off a relatively small section from the front of the file. Thus it is the file size itself. For some reason, the binary read VI looks at the size of the file and says "no way," even though I'm only trying to read a manageable chunk at a time.
But I don't see this discussion as getting me much further, so as I say, time to close this out.
05-20-2021 09:38 AM
Long since closed, but I ran into this thread while trying to read a rather small binary file: 2 GB exactly, to be precise. I immediately got an "Out of Memory" error.
Thanks and kudos to Kyle9730; he was onto it. It is (in my opinion) LabVIEW's array index type, which is I32 and can therefore address only 2 GB (0x8000 0000 = 2,147,483,648). So my guess is: LabVIEW tries to allocate the memory for the binary data ahead of time and throws this error instead of even attempting the read.
In my case, reading just U8 made this quite obvious, whereas jgold47's cluster made the calculation impossible.
So it is not the file size itself at all, but the size of the array you are trying to read with "Read from Binary File.vi".
In my snippet I overcame the problem by splitting up the file read and converting the data to U16 before continuing.
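A minimal Python sketch of that workaround: read the file in bounded chunks and convert each to U16, so no single request ever approaches the I32 index limit. The chunk size is an arbitrary example, and the file is assumed little-endian:

```python
import array
import sys

I32_MAX = 2**31 - 1             # 2,147,483,647 -- the signed 32-bit index limit
CHUNK_BYTES = 64 * 1024 * 1024  # 64 MiB per request instead of the whole file

def read_u16_chunks(path):
    """Yield the file's contents as successive arrays of U16 values."""
    with open(path, "rb") as f:
        while True:
            data = f.read(CHUNK_BYTES)
            if not data:
                break
            a = array.array("H")  # "H" = unsigned 16-bit
            a.frombytes(data[: len(data) - len(data) % 2])
            if sys.byteorder != "little":
                a.byteswap()      # file assumed little-endian
            yield a
```

Each yielded array is far below the 2^31-element ceiling, so the per-read allocation stays small no matter how large the file is.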
(Why unzipping in memory is a different thread's topic.)