Replacing a 2D array section of a 3D array

I have a pre-allocated 3D array. The pages represent different measurement channels. The rows are voltage and current. The columns are data points. So a size could be [10][2][10000] ([channels][I+V][data]).

 

A new array of data points arrives and I want to insert it into the buffer. Let's focus on one channel for now (the first one, page index 0). How can I insert this new 2D array into the 3D array at a specified location? Say each data array that arrives is 100 points: the first iteration must be inserted at row 0, column 0; the second data array at row 0, column 100, etc. But when I wire both the page and column index of the Replace Array Subset function, the function only accepts 1D arrays.

 

I have a version that works, but I don't think it's the most efficient, since it uses a For Loop instead of writing the data directly. The final version may have a lot of data (1M points) and a lot of channels (100+), so I'd like to be as efficient as possible. (In the future I'll use a DVR for the buffer, but that's not the issue now.)

Basjong53_2-1778236924456.png

Any suggestions? 

Maybe even a better data structure?
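For readers outside LabVIEW, the operation being asked for can be sketched in NumPy terms (shapes and names below are just illustrative, taken from the question): replace a 2D section of one page of a 3D array in place, without looping over rows.

```python
import numpy as np

# buffer[channel][row][col]: rows are I and V, columns are data points.
channels, rows, points = 10, 2, 10000
buffer = np.zeros((channels, rows, points))

def insert_chunk(buffer, chunk, channel, col_offset):
    """Replace a (rows x chunk_len) 2D section of one page in place."""
    chunk_len = chunk.shape[1]
    buffer[channel, :, col_offset:col_offset + chunk_len] = chunk

# First 100-point chunk for channel 0 goes at column 0,
# the second at column 100, and so on.
chunk = np.ones((2, 100))
insert_chunk(buffer, chunk, channel=0, col_offset=0)
insert_chunk(buffer, chunk * 2, channel=0, col_offset=100)
```

In LabVIEW itself, one common workaround is to index the 2D page out of the 3D array, call Replace Array Subset on that 2D array with the row and column indices, and replace the page back into the 3D array, ideally inside an In Place Element structure so the compiler can avoid a copy.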

Message 1 of 6
From a performance point of view, I would recommend ensuring that you write the data strictly sequentially (along the contiguous dimension). You can clearly feel the difference:
snippet.png
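The point about sequential writes can be illustrated outside LabVIEW as well. A minimal Python/NumPy sketch (array size arbitrary): filling a C-ordered array row by row writes contiguous memory, while filling it column by column strides across it, and the strided loop is typically several times slower even though both write the same number of elements.

```python
import numpy as np
import time

b = np.zeros((4000, 4000))   # C-ordered: rows are contiguous in memory

t0 = time.perf_counter()
for i in range(4000):
    b[i, :] = 1.0            # sequential: each write is one contiguous run
t_rows = time.perf_counter() - t0

t0 = time.perf_counter()
for j in range(4000):
    b[:, j] = 2.0            # strided: each element lands 32 kB apart
t_cols = time.perf_counter() - t0

# t_cols is usually noticeably larger than t_rows on typical hardware.
```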
Message 2 of 6

Hi Basjong,

 


@Basjong53 wrote:

I have a pre-allocated 3D array. The pages represent different measurement channels. The rows are voltage and current. The columns are data points. So a size could be [10][2][10000] ([channels][I+V][data]).

 

Maybe even a better data structure?


Another (but not immediately better) data structure:

  • 1D array of DVRs
  • each DVR contains a cluster of two 1D arrays (one array for "I", the other for "V"), as you currently use different columns for I/V…
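A rough non-LabVIEW sketch of this structure (class and field names are made up here): one independent per-channel buffer holding separate I and V arrays, standing in for the "1D array of DVRs, each a cluster of two 1D arrays". The key property is that appending to one channel never touches, and never forces a copy of, any other channel's data.

```python
import numpy as np
from dataclasses import dataclass

@dataclass
class ChannelBuffer:
    current: np.ndarray   # "I"
    voltage: np.ndarray   # "V"
    filled: int = 0       # next free index in both arrays

    def append(self, i_chunk, v_chunk):
        """Write one acquired chunk into the pre-allocated arrays."""
        n = len(i_chunk)
        self.current[self.filled:self.filled + n] = i_chunk
        self.voltage[self.filled:self.filled + n] = v_chunk
        self.filled += n

# One buffer per channel, pre-allocated to the expected size.
channels = [ChannelBuffer(np.zeros(10000), np.zeros(10000))
            for _ in range(10)]
channels[0].append(np.ones(100), np.full(100, 5.0))
```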

@Basjong53 wrote:

The final version may have a lot of data (1M points) and a lot of channels (100+), so I'd like to be as efficient as possible. (in the future, I'll use a DVR for the buffer, but that's not the issue now).


As a 3D array this would require 1M points × 8 B/point × 2 × 100 channels = 1600 MB: I guess you need 64-bit LabVIEW to handle this safely…

Breaking up the data in smaller chunks (like an array of DVRs) might help to manage this amount of data.

Do you really need to hold ALL the data in memory?

Can't you use a (TDMS?) file or a database to store/manage the data?
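As a hedged illustration of the keep-it-on-disk idea: the sketch below uses a NumPy memory-mapped file rather than TDMS (the file name and shape are made up, and the buffer is scaled down to 10,000 points per channel so the example file stays small). The full [channels][2][points] buffer lives on disk, and only the regions you actually touch are paged into RAM, so a viewer can open just the active channel.

```python
import numpy as np
import os
import tempfile

# Hypothetical on-disk buffer: 100 channels x (I, V) x 10,000 points.
path = os.path.join(tempfile.mkdtemp(), "buffer.dat")
buf = np.memmap(path, dtype=np.float64, mode="w+",
                shape=(100, 2, 10000))

buf[3, 0, 0:100] = np.ones(100)   # write one 100-point chunk of channel 3
buf.flush()                        # make sure it reaches the file

# A display task can reopen the file read-only and look at one channel
# without loading the other 99 channels into memory.
view = np.memmap(path, dtype=np.float64, mode="r",
                 shape=(100, 2, 10000))
```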

 


@Basjong53 wrote:

But when I wire both the page and column index of the replace array subset function, the function only accepts 1d arrays.


Yes, because you restrict the "entry point" to a certain page + column…

Best regards,
GerdW


using LV2016/2019/2021 on Win10/11+cRIO, TestStand2016/2019
Message 3 of 6

@GerdW wrote:

Hi Basjong,

 


@Basjong53 wrote:

I have a pre-allocated 3D array. The pages represent different measurement channels. The rows are voltage and current. The columns are data points. So a size could be [10][2][10000] ([channels][I+V][data]).

 

Maybe even a better data structure?


Another (but not immediately better) data structure:

  • 1D array of DVRs
  • each DVR contains a cluster of two 1D arrays (one array for "I", the other for "V"), as you currently use different columns for I/V…

A 1D array of a cluster of Voltage and Current is probably better. I tried to be clever by using a single array. 

Basjong53_0-1778240633201.png

 


@GerdW wrote:

@Basjong53 wrote:

The final version may have a lot of data (1M points) and a lot of channels (100+), so I'd like to be as efficient as possible. (in the future, I'll use a DVR for the buffer, but that's not the issue now).


As a 3D array this would require 1M points × 8 B/point × 2 × 100 channels = 1600 MB: I guess you need 64-bit LabVIEW to handle this safely…

Breaking up the data in smaller chunks (like an array of DVRs) might help to manage this amount of data.

Do you really need to hold ALL the data in memory?

Can't you use a (TDMS?) file or a database to store/manage the data?


Yeah, I should probably manage this better. Not all channels will be visualized at the same time, so I can keep just the active channels in memory and store all the others in a file.

Message 4 of 6

Hi Basjong,

 


@Basjong53 wrote:
A 1D array of a cluster of Voltage and Current is probably better. I tried to be clever by using a single array. 

You can still go with just one array (per cluster) once you combine voltage + current into a complex number 🙂
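Sketched in NumPy terms, the complex-number trick packs each (I, V) pair into one complex sample, so a channel becomes a single 1D array again. One caveat worth noting: a complex double still stores both values as float64, so this saves a cluster, not memory, and any downstream analysis has to know about the encoding.

```python
import numpy as np

i = np.array([0.1, 0.2, 0.3])   # currents
v = np.array([5.0, 5.1, 5.2])   # voltages

samples = i + 1j * v            # one complex 1D array per channel

# Both quantities come straight back out via .real / .imag.
currents = samples.real
voltages = samples.imag
```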

Best regards,
GerdW


using LV2016/2019/2021 on Win10/11+cRIO, TestStand2016/2019
Message 5 of 6

@Basjong53 wrote:

A 1D array of a cluster of Voltage and Current is probably better. I tried to be clever by using a single array. 

I agree.  Think in terms of the structure of your data.  You have two measurements (Voltage and Current) that are related to each other: they are acquired at the same time, they manifest different "aspects" of the data you are saving, and any analysis you do will likely require both to interpret the data.  

 

So if the basic "quantity of interest" is a cluster of V and I, the other two dimensions are "Channel Number" (how many simultaneously-acquired sets of V,I data you are handling) and Time itself.  Here, your DAQ device helps you with creating a logical data structure.  Because the DAQ device (probably) gives you N (= 10?) channels simultaneously, and for design reasons gives you a "chunk" of data (say, 1000 samples of your 10 channels) at a time, the "natural" way of saving these data is a 2-D array of "Total # Samples" (variable) by "Total # Channels" (fixed, 10 in this example).  The "rows" are individual samples of the 10 channels ("columns").  You are taking 1000 samples (of 10 channels) at a time, so these become the rows of your data matrix.  And within each "sample" (row) of each "channel" (column) there is a "pair" (cluster) of Voltage and Current readings.

 

Is it obvious/intuitive why you'd take 1000 samples at a time and save them in such large "chunks", rather than one sample at a time?
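This samples-by-channels layout can be sketched in NumPy with a structured dtype standing in for the LabVIEW cluster (field names and chunk sizes below are illustrative). The buffer grows by whole 1000-sample chunks, which amortizes the per-read overhead over many samples instead of paying it once per sample.

```python
import numpy as np

# Each element is a (Voltage, Current) pair -- the "cluster".
pair = np.dtype([("V", np.float64), ("I", np.float64)])
n_channels = 10

data = np.empty((0, n_channels), dtype=pair)   # rows = samples

def append_chunk(data, v_chunk, i_chunk):
    """Append one DAQ read of shape (1000, n_channels) per quantity."""
    chunk = np.empty(v_chunk.shape, dtype=pair)
    chunk["V"] = v_chunk
    chunk["I"] = i_chunk
    return np.concatenate([data, chunk])

# Two simulated reads of 1000 samples x 10 channels each.
data = append_chunk(data, np.ones((1000, 10)), np.zeros((1000, 10)))
data = append_chunk(data, np.ones((1000, 10)), np.zeros((1000, 10)))
```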

 

Bob Schor

Message 6 of 6