Hi, I am developing a device that reads data over TCP (I use ftp://ftp.ni.com/pub/devzone/epd/stm_2.0_installer.zip) from a cRIO and then displays it on 7 waveform charts (5 charts with a history length of 150000, and 2 with 600, i.e. one minute of measurements) and 2 XY graphs. After about 10 minutes of operation I get the error "Not enough memory to complete this operation". In the Windows Task Manager I can see RAM usage climb from roughly 45% up to 72%, and then the error pops up. But I do not know why. Help please.

LabVIEW 8.6
Windows Vista
3 GB RAM

P.S. Sorry for my bad English.
It's likely that instead of trying to plot 5 charts with 150000 data points, you're trying to plot 150000 charts with 5 data points! Try right-clicking on the chart - do you see a "Transpose Array" option? If it's checked, UNcheck it (if UNchecked, check it!)
The reason that option is not available is because it applies to a graph displaying a 2-D array. Since your graphs are displaying 1-D arrays and clusters, this option is not available.
It really sounds like you're running into memory limitations due to the large amount of data you are acquiring. You can reduce the amount of data you are dealing with, or look at ways to optimize your program to handle it. A good starting point is the LabVIEW Help page "Memory Management for Large Data Sets". I also recommend searching for 'large data sets' on the forums here or on ni.com.
Sorry it took so long to reply. I don't have LV 8.6, so I couldn't see your code. A possible solution for displaying this data would be to "reduce" it before plotting. Divide the raw data into, say, 10000 equal-sized sections, then plot the max & min of each section. The resulting plot is, to all appearances, the same as if you let LabVIEW plot the full data. You wouldn't be able to use the chart navigation features to zoom in on detail - but I don't know if you need to(?) The reason I picked 10000 for the number of segments is that, when "rendered" by LabVIEW, the data will be fitted into columns of pixels - probably fewer than 2000 columns. By supplying 10000 points, there is plenty of extra detail to render an accurate plot. I first used this technique around 1991 to display 1 Meg of data every 10 seconds on a Mac IIci running at (something like) 80 MHz via a "Tokamak" accelerator.
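Since LabVIEW block diagrams can't be pasted as text, here is a rough sketch of the min/max reduction idea above in Python/NumPy (names and section count are my own choices, not from any NI library): split the raw data into equal-sized sections and keep each section's min and max, so the plotted shape matches the full data at screen resolution.

```python
import numpy as np

def minmax_decimate(data, n_sections=10000):
    """Reduce `data` to 2*n_sections points (min and max per section)."""
    data = np.asarray(data)
    usable = (len(data) // n_sections) * n_sections  # drop the ragged tail
    sections = data[:usable].reshape(n_sections, -1)
    # Interleave min and max so peaks and troughs both survive plotting.
    reduced = np.empty(2 * n_sections, dtype=data.dtype)
    reduced[0::2] = sections.min(axis=1)
    reduced[1::2] = sections.max(axis=1)
    return reduced

# 1M samples shrink to 20000 plot points - plenty for ~2000 pixel columns.
signal = np.random.randn(1_000_000)
plot_points = minmax_decimate(signal)
print(len(plot_points))  # 20000
```

In LabVIEW you'd do the equivalent with Reshape Array plus Array Max & Min inside a For Loop, feeding the reduced array to the graph instead of the raw data.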
I faced the same problem in my project. The things I did to overcome it were:
1. Removed unnecessary array building in the code, i.e. kept auto-indexing and the use of Build Array to a minimum.
2. Plotted the data in I16 format instead of DBL.
3. Instead of plotting all the samples, plotted only 1 sample out of every 100 or 1000.
4. Try increasing the system's virtual memory. I got this tip from an NI expert, but I didn't try it in my application.
Rohith M P