
memory usage issues

I wrote the attached program in order to view and analyze some data sets that I had taken in a lab setting.  The data sets can be rather large, as I was recording at 1-10 kS/s on 6 channels and storing to a text file with the "Write to Spreadsheet" function, along with a time stamp for each sample.

 

There are some issues with the data sets, mainly that they contain extra zeros in columns I was not using.  I had previously written a program that would configure and acquire up to 20 channels, and to get things displaying properly, I filled the unused channels with zeros.  I will be changing how that records in the future, but the reality is that even if I use all 20 channels, I still want to be able to view the file in the attached program.

 

I was originally using the "Read From Spreadsheet" function, but on larger data sets I would run out of memory.  I wrote my own version that tries to gather the data line by line rather than as one chunk, but I think I missed the boat, because if I load a few sets of data, I run out of memory, crash LabVIEW, and corrupt the main VI.
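For reference, here is the line-by-line idea I was going for, sketched in Python rather than G (the file name and column layout below are just placeholders for my files):

```python
import numpy as np

# Sketch of the chunked-read idea: only one chunk of text is held in
# memory at a time, and each chunk is collapsed into a compact numeric
# block before the next one is read.
CHUNK_ROWS = 100_000

def read_spreadsheet_chunked(path, used_cols):
    blocks = []
    with open(path) as f:
        while True:
            rows = []
            for _ in range(CHUNK_ROWS):
                line = f.readline()
                if not line:
                    break
                fields = line.split('\t')
                rows.append([float(fields[c]) for c in used_cols])
            if not rows:
                break
            blocks.append(np.array(rows))  # free the text, keep the DBLs
    return np.concatenate(blocks)

# Placeholder name; column 0 is the time stamp, 1-6 the live channels.
data = read_spreadsheet_chunked('run01.txt', used_cols=range(7))
```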

 

It seems to do fine actually reading the file, although it does take a while.  But once it goes to the "Update" state and tries to process everything, it sucks up memory and never seems to give it back, even after the VI has been stopped.  I do get it all back when I close the VI.

 

I've read this (which was slightly over my head), and I have read this, which seems to have some good suggestions, but I am not sure what is causing my issue, so I am not sure which solutions to try.

 

BTW, the data file I have attached is one of the smallest ones that I have.  The problem obviously escalates with larger files.

 

Thanks in advance for any help eliminating my ignorance.

 

Message 1 of 7

I had no problems running it.

 

What OS version do you have?  I forgot to look at the LV version.  How much RAM do you have available?

 

As far as the code is concerned, I didn't see anything obvious.  Maybe the culprit is one of the routines you removed?  (I'm guessing you removed code, since there are cases in your state machine that do nothing.)



Message 2 of 7

Large datasets should never be stored in formatted text files. Use binary files instead.
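To make the difference concrete, here is a minimal sketch (Python/NumPy, since the idea is language-independent; the array shape is made up):

```python
import numpy as np

# Made-up stand-in for the acquired data: time stamp + 6 channels.
data = np.random.rand(100_000, 7)

# Text: every value is formatted on write and parsed on read.
np.savetxt('run01.txt', data, delimiter='\t')

# Binary: the raw 8-byte doubles go straight to disk and come back
# bit-exact, with essentially no CPU spent on conversion.
data.tofile('run01.bin')
back = np.fromfile('run01.bin').reshape(-1, 7)
assert np.array_equal(data, back)
```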

Message 3 of 7

You can also use the TDMS file format (which is a binary format, of course).  The drawback is that if you share these files with other applications, they would need to know the specific file format; but that would be true for any file with the data saved in raw binary.
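If the files ever do need to be read outside LabVIEW, TDMS readers exist for several languages.  A sketch using the third-party npTDMS package for Python (the group and channel names below are placeholders; whatever your logging VI used is stored as metadata in the file):

```python
from nptdms import TdmsFile   # third-party: pip install npTDMS

# TDMS stores group/channel names as metadata, so a reader can
# discover them instead of hard-coding a layout.
tdms = TdmsFile.read('run01.tdms')
for group in tdms.groups():
    for channel in group.channels():
        print(group.name, channel.name, len(channel))

pressure = tdms['Group Name']['Channel 0'][:]   # NumPy array of DBLs
```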

Message 4 of 7

First to Crossrulz,

 

I actually have not removed anything from the VI.  There is additional functionality I intend to add, mainly filtering my data into something smoother.  This is one of the reasons I went with such a high sample rate: I am measuring a 32 Hz pressure wave and wanted enough excess samples to do a good job filtering.  In the original VI some of this functionality had been added, but it was not being used when I corrupted the VI.  It is possible that I did something different when I re-wrote the main VI...  I haven't tried to totally duplicate the event...

 

I am running Windows XP on this machine, with LV 2011.  It isn't the best of machines, with only 2 GB of total RAM, 600 MB of which is typically free.  This could be the issue, I suppose.  Any tips on how to reduce memory usage here, to cope with this machine?

 

To altenbach,

 

What is the benefit of binary over text?  My train of thought had been to store the data in a format that I could open and read without LabVIEW if the need ever presented itself.  That, and the fact that I was fairly familiar with the Write to Spreadsheet File VI, is what made my choice.  Any specific tips on using binary files?

 

smercurio,

 

I will explore the TDMS format.  These files do not need to be shared with any application other than this one.

 

I do suppose that if I needed a text- or Excel-readable version, I could write a VI to convert the binary file to a formatted text file.

 

 

From these replies I am getting the feeling that it is the conversion from text to DBL that is using so much memory.  The time I actually corrupted the VI, I was testing some of these approaches and repeatedly opened my largest file.  It sounds like maybe, with my limited machine, I just didn't have time to de-allocate all the array copies before I tried loading another.  I guess that's good and bad...  At least I didn't do something blatantly stupid.

 

Message 5 of 7

@krwlz101 wrote:

What is the benefit of binary over text?  My train of thought had been to store the data in a format that I could open and read without LabVIEW if the need ever presented itself.  That, and the fact that I was fairly familiar with the Write to Spreadsheet File VI, is what made my choice.  Any specific tips on using binary files?


In a binary file you are guaranteed that every bit of the data is preserved and that the size per value is fixed (8 bytes for a DBL).  Data is transferred to disk with very little manipulation, and reading such a file back needs just as little.

 

Formatting a DBL into a decimally formatted string is expensive and lossy.  Using only 8 bytes per value would significantly cut your precision (remember, the decimal point, minus sign, exponent, column delimiter, and row delimiter all use up bytes).  The more decimal digits you want to preserve, the bigger the file gets.  Reading a decimally formatted file back into a DBL array is also very expensive: the raw string needs to be parsed, delimiters found, and the digits converted back into the binary representation (which, unless you stored enough digits, will not be exactly the same as what you had earlier).  These processes take orders of magnitude more CPU resources.  For a small file it does not matter, but for a big file it will be significant.
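A quick way to see both costs at once (sketched in Python; the value is arbitrary):

```python
import struct

x = 1.0 / 3.0

# Binary: always 8 bytes, and the round trip is bit-exact.
raw = struct.pack('d', x)
assert len(raw) == 8 and struct.unpack('d', raw)[0] == x

# Text trimmed to 8 characters loses precision on the way back.
s = '%.6f' % x                 # '0.333333' -- 8 bytes, like the binary
assert float(s) != x

# Text that survives the round trip needs ~17 significant digits,
# plus sign/point/exponent/delimiter characters per value.
print(repr(x), len(repr(x)))   # 0.3333333333333333  18
```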

 

Decimal formatted files are OK for small datasets. Large files are typically not for direct human consumption anyway.

 

Binary LabVIEW files are big-endian by default, and you should be able to write a small reader for them in almost any programming language.  If you really insist on formatted files for safekeeping, you could initially save them in binary and convert them to text using a batch process overnight. 😄
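For example, a sketch of such a reader in Python (this assumes the file is a flat run of big-endian DBLs, i.e. the "prepend array or string size?" input was set to false when writing, and that the column count is known):

```python
import numpy as np

N_COLS = 7   # assumption: time stamp + 6 channels per row

# '>f8' = big-endian 8-byte float, LabVIEW's default byte order.
flat = np.fromfile('run01.bin', dtype='>f8')
data = flat.reshape(-1, N_COLS)

# The overnight batch conversion to text, if a readable copy is wanted:
np.savetxt('run01.txt', data, delimiter='\t', fmt='%.17g')
```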

Message 6 of 7

Thanks, altenbach.  All of that makes a lot of sense.  Not coming from any programming background, I didn't really consider the orders-of-magnitude differences there.  And if I am writing a VI to read the files, clearly they are not being used directly by humans!

 

I'll do a re-write of the code that I take my data with, and may even write a VI to convert one or two data sets to binary, just to figure this out.


Many thanks to all.

Message 7 of 7