LabVIEW


Improving code efficiency when dealing with large array sizes, plotting large 1D waveform graphs

OK I am faced with about 300 * 1 million rows and potentially more

Message 21 of 28

@blessedk wrote:
OK I am faced with about 300 * 1 million rows and potentially more

Let's assume these are DBL, so a single copy of the 2D array would take 1M x 300 x 8 bytes = 2.4 GB. You will have additional copies, e.g. for the graph indicator, so this seems ridiculously high. Also, remember that arrays are contiguous in memory.
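To put numbers on that estimate (a quick sketch in Python rather than G, since the arithmetic is the point; the shape and data type are taken from the posts above):

```python
# Memory for one copy of the 2D array discussed above:
# 300 channels x 1 million samples of DBL (8 bytes each).
channels = 300
samples = 1_000_000
bytes_per_dbl = 8

one_copy_gb = channels * samples * bytes_per_dbl / 1e9
print(f"one copy:  {one_copy_gb:.1f} GB")        # 2.4 GB

# Each extra copy (graph indicator, wire branch, ...) adds the same again.
for copies in (2, 3):
    print(f"{copies} copies: {copies * one_copy_gb:.1f} GB")
```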

 

Are you using 64-bit LabVIEW?

 

What you are trying to do reminds me of those pictures you can find on the internet... 😄

 

Message 22 of 28
Yes, I am using LabVIEW 64-bit. Can you elaborate a bit more on "arrays are contiguous in memory"?

Message 23 of 28

Arrays in memory cannot be fragmented: not only do you need a sufficient amount of free memory, that memory also needs to be available as one contiguous block.
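To illustrate the difference (a sketch in NumPy, not LabVIEW, but the point carries over):

```python
import numpy as np

# A single 2D array must live in one contiguous block of address space.
# For the data above that means one ~2.4 GB allocation in one piece:
#     big = np.zeros((300, 1_000_000), dtype=np.float64)   # ~2.4 GB
# That allocation can fail even when enough memory is free in total,
# if no single free block is large enough.

# Keeping each channel as its own 1D array uses the same total memory,
# but each allocation only needs ~8 MB of contiguity:
one_channel = np.zeros(1_000_000, dtype=np.float64)
print(one_channel.nbytes / 1e6, "MB per channel")   # 8.0 MB
```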

Message 24 of 28
Yamaeda,

I went back to your comment about "capturing zoom and scroll events" and noticed something very interesting (hopefully I understood it correctly). I imagine you mean that any desired point spacing can be achieved as long as you plot (or display) much smaller sections of the dataset at a time. Then, when you display everything, you basically show a decimated version.


Message 25 of 28
The challenge with that for a beginner like me is the implementation, but I would definitely like to learn that approach (hopefully I can, just like I managed to build a state machine for the first time in my life :))

Message 26 of 28

@blessedk wrote:
Yamaeda,

I went back to your comment about "capturing zoom and scroll events" and noticed something very interesting (hopefully I understood it correctly). I imagine you mean that any desired point spacing can be achieved as long as you plot (or display) much smaller sections of the dataset at a time. Then, when you display everything, you basically show a decimated version.

Exactly!

You basically decimate until you get 1 sample/pixel; the decimation factor can easily be calculated from the currently shown time range (or similar) and the sample rate.

If you e.g. sample at 10 kHz for a minute but have zoomed in on seconds 20-30, your first visible sample is the 200,000th, the visible range holds 100k samples, and if the graph is 1000 pixels wide the decimation factor should be 100k samples / 1000 pixels = 100.
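In code form, that calculation looks like this (a Python/NumPy sketch of the idea; the variable names are mine, the numbers are from the example above):

```python
import numpy as np

fs = 10_000          # sample rate, Hz (10 kHz as above)
t0, t1 = 20.0, 30.0  # zoomed-in time range, seconds
plot_width = 1000    # graph width in pixels

first = int(t0 * fs)                        # 200_000: first visible sample
visible = int((t1 - t0) * fs)               # 100_000 samples in view

decimation = max(1, visible // plot_width)  # 100 -> ~1 sample per pixel

data = np.sin(np.arange(60 * fs) / fs)      # stand-in for one minute of data
to_plot = data[first:first + visible:decimation]
print(decimation, len(to_plot))             # 100 1000
```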

 

/Y

G# - Award winning reference based OOP for LV, for free! - Qestit VIPM GitHub

Qestit Systems
Certified LabVIEW Developer
Message 27 of 28

I stumbled upon a blog post (http://culverson.com/tips-and-tricks/) that talks about making a flexible graph with multiple time scales, but with no "scrolling" backward in time (that is, you can see the last 5 minutes or 5 hours of data; the plots are "anchored" at the latest point, so you see the last 5 minutes or hours, not the first 5 minutes or hours).

 

I actually tried playing with this for a little demo project. It works surprisingly well and scales nicely. Suppose you are sampling at 1 kHz, your plot shows 500 points, and you sample for 5000 seconds (that's 5 million points). To keep the math simple, I'm going to use scale factors of 10. Plotting every point, you can see the last 500/1000 = 0.5 seconds of data (and it takes 500 points to show that). If you decimate (or average) 10 points at a time, your 500 "averaged" points will represent the last 5 seconds of data (for another 500 points). A third average-of-averages shows the last 50 seconds of data (again taking 500 points), a fourth shows the last 500 seconds, and a fifth and final one shows 5000 seconds, i.e. all of the data.

 

All of this takes 5 * 500 = 2500 points to save all the plots, rather less than 5 million points. [Of course, you want to keep all of the data, so you stream the 5 million points to disk while accumulating the 5 plot representations "on the fly", as suggested by the blog article.] It's actually quite amazing how well it works -- done right, it takes surprisingly few computational resources. Of course, I tried this on a 4-channel display, not 300 channels, but that's only 75 times more data ...
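For what it's worth, here is a minimal sketch of that cascade in Python (my own naming and structure, following the factor-of-10, 500-point example above rather than Culverson's actual code):

```python
from collections import deque

import numpy as np

N_POINTS = 500   # points per display buffer, as in the example
FACTOR = 10      # averaging factor between time scales
LEVELS = 5       # 0.5 s, 5 s, 50 s, 500 s, 5000 s at 1 kHz

# One fixed-length buffer per time scale: 5 * 500 = 2500 points total.
buffers = [deque(maxlen=N_POINTS) for _ in range(LEVELS)]
# Samples waiting to be averaged into the next (slower) level.
pending = [[] for _ in range(LEVELS)]

def add_sample(x):
    """Push one raw sample and update every time scale on the fly."""
    value = x
    for lvl in range(LEVELS):
        buffers[lvl].append(value)
        if lvl == LEVELS - 1:
            break                            # slowest scale: nothing above it
        pending[lvl].append(value)
        if len(pending[lvl]) < FACTOR:
            break
        value = sum(pending[lvl]) / FACTOR   # average of FACTOR points
        pending[lvl].clear()

# Stream 5000 s of 1 kHz data (5 million points); only 2500 are kept.
for x in np.sin(np.arange(5_000_000) / 1000.0):
    add_sample(float(x))
print([len(b) for b in buffers])             # [500, 500, 500, 500, 500]
```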

 

See if Culverson's ideas make sense to you and fit what you are trying to do.

 

Bob Schor

Message 28 of 28