
LabVIEW


How can I open large files?

The code in the screenshot really opens all data formats, but only small files. When the file becomes too large, LabVIEW says: "Not enough memory to complete this operation." But I have enough memory. This already happens with files of only a few megabytes (maybe a bit more than 10 MB).
I have 1 GB of memory, which should be enough, but when I try to open a small file of only 10 MB it tells me that I don't have enough memory.
OK, if I tried to open a file of more than 1 GB, for example a zip file, I could understand it, but the files are really not large.

Can somebody tell me what is wrong?

Thanks
Message 1 of 11
Hello,

This looks like the example I gave you.

Try reading the file in multiple reads, each time starting at the current read-pointer location.

Hint: do it in a For Loop; the file functions give you the required parameters.

Regards,
André (CLA, CLED)
Message 2 of 11
You might have 1GB of memory available in your system, but the problem is that it is scattered across many different places. You might have 5MB here, 250kB there, and so on. Just like C and many other programming languages, when LabVIEW allocates memory for an array, that memory must all be contiguous. So when you try to directly read a 1GB file into an array, you don't just need 1GB of memory, you need 1GB of contiguous memory! That's a really huge chunk and isn't likely to be available on a 32-bit operating system.

The best way to read your file is to take the advice the other poster gave and read the file in smaller chunks. You could, for instance, read 1000 1MB chunks or 10000 100kB chunks. Then you can store the data in a giant Queue, or perhaps a 1D array of clusters of 1D arrays.

Do you really need the entire 1GB file of data in memory at the same time? Can you not just extract a portion at a time?
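Since LabVIEW code is graphical, here is a rough text-language analogue of the chunked approach in Python (the function name and chunk size are illustrative placeholders, not anything from the original post). Each chunk is a separate small allocation, so no single huge contiguous block is ever requested:

```python
from collections import deque

def read_into_queue(path, chunk_size=1_000_000):
    """Read a large file as many small chunks instead of one huge array."""
    q = deque()  # stands in for the 'giant Queue' mentioned above
    with open(path, "rb") as f:
        while chunk := f.read(chunk_size):  # read pointer advances each read
            q.append(chunk)  # each chunk is its own small allocation
    return q
```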
Jarrod S.
National Instruments
Message 3 of 11
Hi André,

I think reading the file in multiple reads is the right way.

Does anybody have example code for me, or a screenshot?

Thanks

Message 4 of 11
Which part don't you understand?

Kind regards,
André (CLA, CLED)
Message 5 of 11
Sorry, but I don't know how to read the file in multiple reads using the read-pointer location. Is there a VI for it?

Thanks
Message 6 of 11
No, there is no single VI for it, at least not a standard LabVIEW VI.

You will have to do the following in a While Loop:
- Determine a good chunk size, e.g. 1 MB.
- Set the read pointer to the start of the file.
- Read a chunk of data and put it in a shift register.
- Set the read pointer to the current location (it will probably already be at position chunk size).
- Read the next chunk and append it to the previous one.
- Etc.

There are more possibilities; this is just one of many.
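Outside LabVIEW, the steps above can be sketched in Python, purely for illustration; `seek` and `tell` stand in for LabVIEW's file-position functions, and the function name and chunk size are placeholders:

```python
def read_with_pointer(path, chunk_size=1_000_000):
    """Read a file chunk by chunk, moving the read pointer explicitly."""
    data = bytearray()  # plays the role of the shift register
    pos = 0             # current read-pointer position
    with open(path, "rb") as f:
        while True:
            f.seek(pos)                 # set the read pointer
            chunk = f.read(chunk_size)  # read one chunk
            if not chunk:               # end of file reached
                break
            data += chunk               # append to the previously read data
            pos = f.tell()              # pointer now sits at the next chunk
    return bytes(data)
```

Note that appending everything into one growing buffer still ends up needing one contiguous block, so in LabVIEW you would store the chunks separately (queue, or array of clusters) as suggested earlier in the thread.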

Regards,
André (CLA, CLED)
Message 7 of 11
And how can I set the read pointer to the start of the file? Is there a VI to do this?
Thanks
Message 8 of 11
The read pointer starts by default at the beginning of a file. To explicitly set it somewhere else, use the Set File Position VI in the Advanced File IO palette.
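For comparison only, Python's `seek` plays the same role as Set File Position; this small sketch (the function name and arguments are illustrative, not from the original post) reads a few bytes starting at an arbitrary offset:

```python
def peek(path, offset, n):
    """Move the read pointer to `offset`, then read `n` bytes from there."""
    with open(path, "rb") as f:
        f.seek(offset)    # explicitly position the read pointer
        return f.read(n)  # read n bytes starting at that position
```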


Message Edited by Jarrod S. on 08-30-2007 11:03 AM

Jarrod S.
National Instruments
Message 9 of 11
You should also read the tutorial Managing Large Data Sets in LabVIEW.  Some things you will learn:
  1. Your current code is making several copies of your data.  The tutorial will teach you how to find and eliminate them.  The tutorial has not been updated for LabVIEW 8.5 yet, and there are several enhancements in LabVIEW 8.5; the updated version is posted below.  The code examples did not change.
  2. For best speed when reading from disk, use 65,000-byte chunks.
  3. Store your data in a single-element queue.  This gives you the best performance for a large array.
  4. Store your data as a set of arrays instead of one array (as mentioned above).  You can break it up into several single-element queues, or save it as an array of clusters, each cluster containing a sub-array of the data.  The cluster acts somewhat like a handle in C.
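Point 4 maps, loosely, onto keeping a list of small buffers instead of one big one. A hedged Python sketch (the 65,000-byte read size comes from point 2; the function names are made up for illustration) also shows how to index the chunked structure as if it were flat:

```python
DISK_CHUNK = 65_000  # read size suggested in point 2 above

def load_as_subarrays(path):
    """Store the file as many small arrays rather than one contiguous block."""
    subarrays = []
    with open(path, "rb") as f:
        while chunk := f.read(DISK_CHUNK):
            subarrays.append(chunk)  # each entry ~ one cluster's sub-array
    return subarrays

def byte_at(subarrays, i):
    """Index into the chunked structure as if it were one flat array."""
    return subarrays[i // DISK_CHUNK][i % DISK_CHUNK]
```

Every chunk except possibly the last has exactly `DISK_CHUNK` bytes, which is what makes the simple divide-and-modulo indexing in `byte_at` work.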
Let us know if you need more help.
Message 10 of 11