02-02-2005 08:56 AM
02-02-2005 09:08 AM
02-02-2005 09:23 AM
02-03-2005 09:36 AM
02-03-2005 10:24 AM
02-04-2005 09:55 AM
Maybe you can tell us a little bit about your project and why you absolutely need to keep all of this in memory. Hopefully we can come up with a better approach. 🙂
If you have LabVIEW 7.1, you should also study your code carefully with buffer allocations and memory usage displayed. Make your array 10x-100x smaller and run your VI. How does the memory footprint compare to what you expect? Count how many data copies there are.
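To do the arithmetic behind that check, here is a minimal sketch (Python, since LabVIEW is graphical): it estimates the memory footprint of a DBL array, including extra in-memory copies. The element count and copy count are made-up illustration values, not from the thread.

```python
BYTES_PER_DOUBLE = 8  # a LabVIEW DBL is 8 bytes per element

def footprint_mb(n_elements, n_copies=1):
    """Total megabytes used by n_copies of an n_elements DBL array."""
    return n_elements * BYTES_PER_DOUBLE * n_copies / 1024**2

# e.g. 100 million doubles is roughly 763 MB per copy, so two copies
# already exceed the ~1 GB practical limit mentioned later in this thread
print(footprint_mb(100_000_000))      # one copy
print(footprint_mb(100_000_000, 2))   # two copies
```

Scaling the test array 10x-100x down, running the VI, and multiplying the observed footprint back up tells you how many hidden copies your diagram makes.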
What is your OS (windows, mac, linux, etc)? What kind of computer is this? How much RAM do you have?
About the slow allocation (personally, I don't think several seconds is very slow; how fast do you expect this to be?): how is your virtual memory configured? Huge allocations like this often cause Windows to dynamically grow the swap file, which is a very slow operation, especially if the swap file is fragmented. To prevent this resizing, set the minimum swap file size to a very generous value (and possibly defragment your HD before doing so).
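If you want to see how long committing a large contiguous block actually takes on your machine, here is a rough sketch (Python, not LabVIEW; the 100 MB size is an illustration, scale it up to approach your real array size). A `bytearray` forces the pages to be committed and zero-filled, loosely analogous to LabVIEW initializing an array.

```python
import time

def timed_alloc(n_bytes):
    """Allocate and zero-fill n_bytes; return the buffer and elapsed seconds."""
    t0 = time.perf_counter()
    buf = bytearray(n_bytes)  # committing/zeroing is where the time goes
    return buf, time.perf_counter() - t0

buf, seconds = timed_alloc(100 * 1024**2)  # 100 MB for illustration
print(len(buf), seconds)
```

If the time jumps disproportionately once the size crosses your free physical RAM, swap-file growth is the likely culprit.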
02-05-2005 01:07 PM
02-07-2005 07:24 AM - last edited on 11-25-2025 12:28 PM by Content Cleaner
Mitra,
You can probably pull this off with a bit of patience. I have successfully allocated and used larger arrays (almost a gigabyte) on a machine with 512 MBytes of RAM. Check out the tutorial "Managing Large Data Sets in LabVIEW". It shows you how to handle such things and includes code to do almost exactly what you are attempting (see the memory browse example). It still won't be easy, but you should be able to do it, provided your analysis can be done in pieces (most can). Even though the limit for Windows is 2 GBytes/process, you will find a practical limit of about 1 GByte of total memory in LabVIEW. At that limit, you cannot hold even one extra copy of your data without running out of memory.
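The "analysis in pieces" idea can be sketched as follows (Python standing in for LabVIEW; the file path, chunk size, and running-max analysis are all illustrative assumptions, not the tutorial's actual code). The point is that only one chunk is ever resident, so the data set can be far larger than RAM.

```python
import struct

def running_max(path, chunk_elements=1_000_000):
    """Stream little-endian DBL values from a binary file, one chunk at a
    time, and return the maximum value seen without loading the whole set."""
    best = float("-inf")
    with open(path, "rb") as f:
        while True:
            raw = f.read(chunk_elements * 8)  # 8 bytes per DBL
            if not raw:
                return best
            values = struct.unpack(f"<{len(raw) // 8}d", raw)
            best = max(best, max(values))
```

Any analysis that reduces per-chunk results (max, mean, histogram, RMS) fits this pattern; only analyses that genuinely need random access to the whole array at once do not.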
02-07-2005 08:16 AM