
LabVIEW: Memory is full when updating graph

I am using a Windows 7 32-bit OS with 8 GB of RAM.

I am using a cDAQ to capture data and send it through a queue to the main program, where I plot the data in a graph and set the tolerance and limits for comparing the results. For short cycle times there is no issue, but after 2 to 3 minutes the queue elements stop being emptied. RAM usage then climbs until a LabVIEW "memory is full" message appears. This happens in every case where the graph is updated.

do_what_you_like
Message 1 of 7

A 32-bit OS can only address 4 GB of RAM, and your application can use significantly less than that. Why do you even have 8 GB?

 

How many points are in the graph when you run out of memory? How are you plotting the data in the graph? Does the data grow without limit? Can you show us your code? For example, I don't understand how you "update" the graph in multiple cases. Are you using local variables?

Message 2 of 7

It is an industrial PC, and it was bought with a 32-bit OS.

In total, the data graph holds about 167,757 points in the array. I have attached an image of it. I am also using this data in other cases to calculate the tolerance and limits, and a graph is plotted for that as well. I have attached an image of the update case too. In that image, "temp actual data" is the data graph.

do_what_you_like
Message 3 of 7

You have about 2 MB of data in the graph, obtained by selecting four columns from several potentially much larger 2D arrays. How large are these? How large are the data structures in the cluster? Is it all real data, or are there NaNs? Are the array sizes fixed, or do the arrays constantly grow and shrink?

 

I assume that your graph indicator does not have 170k horizontal pixels, so some decimation would probably make more sense. How does the graph look? (Just a plain line or points? Fancy thick lines and point styles? Fills? Antialiasing?)
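LabVIEW code is graphical, so here is the decimation idea sketched in Python with NumPy instead; the function name and bucket scheme are made up for illustration. A min/max decimation keeps the extremes of each pixel-wide bucket, so spikes stay visible even though the graph receives far fewer points:

```python
import numpy as np

def decimate_minmax(y, width):
    """Reduce y to at most 2*width points, keeping the min and max of
    each pixel-sized bucket so peaks are not lost in the plot."""
    n = len(y)
    if n <= 2 * width:
        return y
    bucket = n // width                       # samples per horizontal pixel
    trimmed = y[:bucket * width].reshape(width, bucket)
    lo = trimmed.min(axis=1)
    hi = trimmed.max(axis=1)
    out = np.empty(2 * width)
    out[0::2] = lo                            # interleave min and max so the
    out[1::2] = hi                            # line still spans each bucket
    return out

y = np.sin(np.linspace(0, 100, 167_757))      # stand-in for the 167k-point array
print(len(decimate_minmax(y, 1000)))          # 2000 points is plenty for a graph
```

In LabVIEW the same thing would be a FOR loop over chunks of the array, feeding Array Min & Max per chunk, with only the decimated result wired to the graph terminal.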

 

It is difficult to analyze a truncated picture, but from what I can see, it's not pretty. We cannot give more specific advice without seeing the entire code. There are plenty of case structures, including a very big one, and we have no idea what's in the other cases. Why are the sensors arranged as columns in the original 2D arrays? Since column elements are not adjacent in memory, extracting a column is more expensive for the computer than extracting a row.
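The row-versus-column cost is easy to demonstrate outside LabVIEW; here is a rough Python/NumPy sketch (array sizes are invented). Both slices copy the same number of elements, but the column slice must gather strided memory while the row slice copies one contiguous block:

```python
import numpy as np
import timeit

samples, sensors = 100_000, 8
by_rows = np.zeros((sensors, samples))   # each sensor's trace is contiguous
by_cols = np.zeros((samples, sensors))   # sensors interleaved sample by sample

# copy one full 100k-element trace from each layout
t_row = timeit.timeit(lambda: by_rows[3].copy(), number=2000)
t_col = timeit.timeit(lambda: by_cols[:, 3].copy(), number=2000)
print(f"contiguous row copy: {t_row:.4f}s  strided column copy: {t_col:.4f}s")
```

LabVIEW 2D arrays are row-major like NumPy's default, so storing each sensor as a row makes Index Array (row) a cheap contiguous copy.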

 

Why are there so many Value property nodes? All of them could be replaced by local variables (or, better yet, by wiring to the terminal if the architecture were improved).

 

Let's, for example, look at that small FOR loop (bottom middle): Why are you autoindexing and wiring the array size to N at the same time? Why are you reading the same local variable N times, once per iteration? Wouldn't reading it once before the loop be sufficient? How many cursors are there? Do you know that property nodes execute top to bottom? Setting the active cursor needs to be on top; in your code, posX will use a stale cursor index from the previous call. Is that really what you want?

 

In summary, just scratching the surface shows a lot of questionable code, and there is no telling what else is outside the currently visible area, so all bets are off. How is the sensor data accumulated? I doubt that the graph is the core of the problem.

Message 4 of 7

The picture of part of your VI is frustrating to look at, so I won't.  Attach your VI so we can see everything, and really get an idea of what you are trying to do.  Are you trying to make a plot of all of the data points?  Have you thought about the fact that a display has on the order of 1000 pixels across, so it makes no sense to show 10,000 points?  Are you plotting data "as it arrives" (which suggests a Chart that shows, say, the last 1000 points that were acquired)?  Are you designing what I call a "Flexigraph" (there's a LabVIEW Blog that described this, and I've incorporated it in several of my routines), where you maintain, say, 5 sets of 1000 points that hold "the last 1000 points", "the last 10,000 points averaging 10 data points/plot point", "the last 100,000 points averaging 100 data points/plot point", etc.?
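The "Flexigraph" bookkeeping described above translates readily to a textual sketch; since LabVIEW diagrams can't be pasted here, this is a hypothetical Python version (class and field names invented), keeping several fixed-length views of the stream at different averaging factors:

```python
from collections import deque
import statistics

class FlexiGraph:
    """Keep several fixed-length views of a growing stream: the last
    N raw points, the last N 10-point averages, and so on."""
    def __init__(self, length=1000, factors=(1, 10, 100)):
        self.factors = factors
        # one bounded buffer per resolution; old points fall off the front
        self.views = {f: deque(maxlen=length) for f in factors}
        self.pending = {f: [] for f in factors}

    def add(self, x):
        for f in self.factors:
            self.pending[f].append(x)
            if len(self.pending[f]) == f:       # a full block: emit its average
                self.views[f].append(statistics.fmean(self.pending[f]))
                self.pending[f].clear()

g = FlexiGraph(length=1000, factors=(1, 10, 100))
for i in range(100_000):
    g.add(float(i))
print(len(g.views[1]), len(g.views[10]), len(g.views[100]))  # 1000 1000 1000
```

Whatever zoom level the user picks, the display never receives more than 1000 points, so graph updates stay cheap no matter how long the acquisition runs.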

 

Are you using a Producer/Consumer pattern to acquire the data in one loop and plot it in a parallel loop?
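For readers unfamiliar with the pattern, here is a minimal Python analogue (block sizes and names are placeholders, not the poster's code). The key detail for this thread is the *bounded* queue: when the consumer falls behind, the producer blocks instead of letting the backlog grow until memory is full:

```python
import queue
import threading

data_q = queue.Queue(maxsize=100)       # bounded: producer blocks rather than
                                        # letting the backlog eat all memory

def producer():
    for block in range(50):
        samples = [block] * 1000        # stand-in for one cDAQ read
        data_q.put(samples)             # blocks if the consumer lags behind
    data_q.put(None)                    # sentinel: acquisition finished

def consumer(plotted):
    while True:
        samples = data_q.get()
        if samples is None:
            break
        plotted.append(sum(samples) / len(samples))  # cheap summary "plot"

plotted = []
t1 = threading.Thread(target=producer)
t2 = threading.Thread(target=consumer, args=(plotted,))
t1.start(); t2.start(); t1.join(); t2.join()
print(len(plotted))                     # 50 blocks consumed
```

In LabVIEW terms, the bounded queue corresponds to wiring a maximum queue size into Obtain Queue, which gives the same back-pressure without hand-tuned waits.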

 

Sorry I didn't spend more time looking at your picture of part of your VI -- I might have been able to deduce the answers to some of these questions ...

 

Bob Schor

Message 5 of 7

Thank you all for your kind replies. The problem was that the producer was writing values to the consumer faster than the graph update could keep up. So I added a wait in the write case of the producer, and the issue was solved. I also changed the OS to Windows 7 64-bit. I am new to LabVIEW, and your suggestions will help me in development.

do_what_you_like
Message 6 of 7

@Dr_No wrote:

Thank you all for your kind replies. The problem was that the producer was writing values to the consumer faster than the graph update could keep up. So I added a wait in the write case of the producer, and the issue was solved. I also changed the OS to Windows 7 64-bit. I am new to LabVIEW, and your suggestions will help me in development.

Here are some suggestions:

  • The OS change to 64-bit is good.  However, resist installing 64-bit LabVIEW (unless you really are dealing with massive arrays of data) -- the 32-bit code base is more complete, and appears to be more robust.
  • Producer/Consumer is designed to "get the slow parts out of the Producer" and allow parallelism to work.  Typically, Producers "produce" periodically (such as a DAQ device delivering 1000 samples at 1 kHz, which happens once per second), while Consumers do time-consuming things like plot updates and file I/O.  You want the Consumer, on average, to be as fast as the Producer, which may mean you have to think about what it does.  For example, if I'm acquiring points at 1 kHz, I probably want to save them all to a file (a process that could take a second or two for file opening, but only a fraction of a second for file writing), but I probably don't want to display every point (the human eye can't see images updating at 1 kHz).  So you may want to decimate (or block-average) your data before plotting it -- the computation is fast, and the "slow" display updates only need to handle a fraction of the points, greatly speeding up the Consumer loop so it keeps pace with the Producer.
  • If you follow the previous suggestion, you will not need to "slow down" your Producer!
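The block-averaging step from the second bullet can be sketched in a few lines of Python with NumPy (the function name and block size are illustrative only). Averaging consecutive groups of samples turns a 1 kHz stream into a 100 Hz stream before it ever reaches the display:

```python
import numpy as np

def block_average(samples, block=10):
    """Average consecutive groups of `block` samples, so a 1 kHz stream
    becomes a 100 Hz stream before it reaches the display."""
    n = len(samples) // block * block            # drop the ragged tail
    return np.asarray(samples[:n]).reshape(-1, block).mean(axis=1)

raw = np.arange(1000, dtype=float)               # one second of 1 kHz data
avg = block_average(raw, block=10)
print(len(avg))                                  # 100 points to draw, not 1000
```

The same reshape-and-mean idea maps onto a LabVIEW FOR loop over subarrays feeding Mean.vi, placed in the Consumer just before the graph terminal.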

Bob Schor

Message 7 of 7