
LabVIEW


Complex and heavy data processing with LabVIEW

Solved!
Go to solution

Hi,

I have a MATLAB program that processes and analyzes data. The code runs about as fast as the time it takes for the data to record, so I would like to do that analysis in real time. The data is measured through another piece of software at the moment, but I intend to measure it using LabVIEW. The issue is that it's a lot of data and a lot of complex manipulation, and it all needs to be done very fast.

 

In short, what is the best way to do heavy data processing in LabVIEW at great speed?

 

Here is a brief explanation of what the code needs to do. It measures the data, about 250,000 points, at a 40 Hz rate. It then needs to analyze this data and compare it with the data previously measured. That involves Fourier transforms, convolutions, and other somewhat lighter mathematical manipulations on about 5,000,000 data points at a 40 Hz rate. I made it work in MATLAB post-experiment by loading all the data and then analyzing it, and it ran a bit faster than the 40 Hz rate.

 

I know LabVIEW can run MATLAB scripts, but it was my understanding that transferring the data from LabVIEW to MATLAB would be too slow. So, to any skilled programmers: how would you approach this problem?

 

Thank you.

RMT

0 Kudos
Message 1 of 8
(2,762 Views)

Well, just try it. 😄

 

Does this run on a PC or an embedded RT system?

 

Your description is too vague to give more targeted advice. Keep it all in wires and shift registers, and avoid data copies, constant array resizing, etc. I would be happy to go over it once you have a working VI that gives the correct result (no matter how slow).

 

(If you get the MASM toolkit, there are parallelized FFT routines that could speed things up even more, but just try with plain LabVIEW primitives first.)

Message 2 of 8
(2,742 Views)

Indeed, you have not given much description of your system.

Is there a way to do parallel processing? Can some of the data be processed separately?

If you cannot do parallel processing, prioritize a CPU with a higher clock frequency over one with more cores.

I do believe it is very feasible in LabVIEW. You might have to put the analysis part in a subVI that runs at a higher priority. This will ensure that you get the most processing power for that function.

But my biggest advice is that you need to choose the best architecture. LabVIEW can do it for sure, but the question is: can YOU do it in LabVIEW?

Benoit

Message 3 of 8
(2,729 Views)

It's on a PC. As for more detail, I'll try to give a more concrete example. Let's assume, on the side, I have a working VI that provides me with a 1000x500 single-precision array at a 40 Hz rate. That is the data I receive at each time step.

 

From there, I want to take this newly acquired array and the 9 previous ones to make a 1000x500x10 array. From there, I want to average those values, for example in this way:

for x = 1:100
    for y = 1:50
        for z = 1:9
            block = OldArray((x-1)*10+1 : x*10, (y-1)*10+1 : y*10, z:z+1);
            NewArray(x,y,z) = mean(block(:));
        end
    end
end

So this gives me, for each 10x10x2 block of my previous matrix, a single value in my new matrix. Another way to do this, which I found is much faster, is through convolution:

NewArray = convn( OldArray, ones(10,10,2)/(10*10*2) );  % convn, since conv only handles vectors
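The same block averaging can be done without explicit loops by reshaping so that each 10x10 tile gets its own axes. Here is an illustrative NumPy translation (the array contents are random stand-ins for the measured data; only the shapes come from the thread):

```python
import numpy as np

# Stand-in for the measured history: 10 frames of 1000x500 single-precision data.
old = np.random.rand(1000, 500, 10).astype(np.float32)

# Split each spatial axis into (tile index, within-tile index) and average
# over the within-tile axes: (1000,500,10) -> (100,10,50,10,10) -> (100,50,10).
tiles = old.reshape(100, 10, 50, 10, 10).mean(axis=(1, 3))

# The depth blocks overlap (z and z+1), so average adjacent slices by slicing
# rather than reshaping: (100,50,10) -> (100,50,9).
new = (tiles[:, :, :-1] + tiles[:, :, 1:]) / 2
```

This computes exactly the triple-loop result, one vectorized pass per axis group, which is the kind of "all elements at once" formulation the post is asking for.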

 

Then I'll create multiple arrays in a similar fashion before multiplying and dividing them element by element.

 

There is a lot of parallel computing that can be done, since each averaged value in the array is independent of the others. Also, I don't want to use for loops, since those would force me to compute one element at a time instead of all at once.

 

The actual process is a bit more complicated, but it's simply more of these very similar operations. Writing this in LabVIEW with the built-in functions seems a very arduous job. On the other hand, I fear that running a MATLAB script will make the whole process a lot slower, since I'll need to ship the data from LabVIEW to MATLAB and back to LabVIEW to display the result.

 

Thanks and I hope it clarifies a bit.

RMT

0 Kudos
Message 4 of 8
(2,690 Views)
Solution
Accepted by topic author RaphaelMT

So what do you have at the end? Just a 100x50 2D array that is calculated from the previous 10 1000x500 arrays? Do you also need to retain the original data?

 I would decimate the incoming 1000x500 array instantly to 100x50 by averaging each 10x10 tile, then do a running average of adjacent slices of the last 10 decimated arrays.

 

There are efficient ways to do running averages (see e.g. my 2016 NIWeek talk), and I am sure something could be done for your specific algorithm.
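The decimate-first idea with an incremental running average can be sketched roughly like this (an illustrative NumPy translation, not the talk's actual implementation; `decimate` and the shapes are taken from the thread):

```python
from collections import deque

import numpy as np


def decimate(frame):
    """Average each 10x10 tile of a 1000x500 frame down to 100x50."""
    return frame.reshape(100, 10, 50, 10).mean(axis=(1, 3))


history = deque(maxlen=10)           # the last 10 decimated frames
running_sum = np.zeros((100, 50))    # updated incrementally, O(1) work per frame


def on_new_frame(frame):
    """Ingest one acquired 1000x500 frame; return the current running average."""
    global running_sum
    small = decimate(frame)
    if len(history) == history.maxlen:
        running_sum -= history[0]    # subtract the frame about to drop out
    history.append(small)
    running_sum += small
    return running_sum / len(history)
```

Each new frame costs one decimation plus one add and one subtract on the small 100x50 array, instead of re-averaging all 10 frames every time. (With single-precision data it's worth recomputing the sum from scratch occasionally to limit floating-point drift.)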

 

Yes, using LabVIEW will most likely be significantly faster, and it should not be hard to implement.

Message 5 of 8
(2,677 Views)

I kinda follow but not completely.  Regardless, I don't have unique expertise in this kind of processing and others will have better detailed advice.  I'm mentioning just 1 thing at the outset because you said:


From there, I want to take this newly acquired array and the 9 previous ones to make a 1000x500x10 array. 

That is probably a step you should *NOT* take.  You'll be asking for 20 MB of contiguous memory space each time you do this.  And then filling it with values.   That's gonna be a lot of overhead, even though LabVIEW's smart enough that it likely will know to keep handing you the same 20 MB space, provided your code makes it clear that it would be data-safe to do so.
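The 20 MB figure follows directly from the array dimensions; just the arithmetic, for reference:

```python
# 1000 x 500 x 10 single-precision values, 4 bytes each
elements = 1000 * 500 * 10
megabytes = elements * 4 / 1e6
print(megabytes)  # -> 20.0 MB, rebuilt 40 times per second if copied each cycle
```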

 

Since your algorithm intends to average 10x10 blocks of the original 1000x500 array into single values, you should *definitely* be doing this part of the decimation *before* accumulating the history of 10 distinct arrays.  It further appears that you will actually average across 2 distinct arrays at a time.

 

So again, I don't claim to be the expert here, but the following simple approach seems much less wasteful of memory and data copying:

1. Initialize shift register to indicate "1st of 2".  Number, boolean, whatever.

2. Initialize another shift register to hold your decimated arrays.  (keep reading for definition)

3. Capture single 1000x500 array

4. Decimate down to 100x50 via averaging the 10x10 blocks

5. Store decimated array in shift register.  Update other shift register to indicate "2nd of 2".

6. repeat steps 3,4

7. Average current and previous decimated arrays.  Deliver as needed.

8. Update shift register to indicate "1st of 2".

9. keep repeating steps 3-8
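The shift registers in steps 1-9 map onto ordinary loop variables; a rough NumPy sketch of the same pattern (the `acquire` and `deliver` callbacks are placeholders for the acquisition and display code, not real APIs):

```python
import numpy as np


def decimate(frame):
    # Step 4: average each 10x10 block of the 1000x500 frame -> 100x50.
    return frame.reshape(100, 10, 50, 10).mean(axis=(1, 3))


def acquisition_loop(acquire, deliver, n_pairs):
    previous = None                          # plays the "1st of 2" shift register
    for _ in range(2 * n_pairs):
        small = decimate(acquire())          # steps 3-4
        if previous is None:
            previous = small                 # step 5: store, now "2nd of 2"
        else:
            deliver((previous + small) / 2)  # step 7: average the pair
            previous = None                  # step 8: back to "1st of 2"
```

Only the small 100x50 arrays ever live in the "shift registers", so the 20 MB history buffer never exists.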

 

 

-Kevin P

 

[EDIT: altenbach strikes again!   Many similar thoughts but he got there faster and more concisely, per usual.]

CAUTION! New LabVIEW adopters -- it's too late for me, but you *can* save yourself. The new subscription policy for LabVIEW puts NI's hand in your wallet for the rest of your working life. Are you sure you're *that* dedicated to LabVIEW? (Summary of my reasons in this post, part of a voluminous thread of mostly complaints starting here).
Message 6 of 8
(2,670 Views)

At the end I have one 100x50 2D array, and I'd like to display it in real time (showing the most recent one) and record it for later. The original data does not need to be retained, except the last 9 arrays needed to run the analysis for the most recent one.

 

Thanks, I'll look into that post.

0 Kudos
Message 7 of 8
(2,647 Views)

Here's a quick draft (LV 2015) showing:

 

  • decimating by averaging 10x10 tiles
  • averaging the last 10 decimated frames

 

Modify as needed. There are probably bugs. The loop time seems to match your requirements (including the random array generation!).

(Note that you get about the same loop rate even if you average the last 10 full 2D arrays without decimation.)
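Decimating first also doesn't change the final 100x50 result, because both the tile averaging and the frame averaging are linear and therefore commute. A quick NumPy check (array sizes taken from the thread):

```python
import numpy as np

rng = np.random.default_rng(0)
frames = rng.random((10, 1000, 500))   # stand-in for the last 10 acquired frames


def decimate(f):
    # Average each 10x10 tile: (1000,500) -> (100,50).
    return f.reshape(100, 10, 50, 10).mean(axis=(1, 3))


a = np.mean([decimate(f) for f in frames], axis=0)  # decimate each, then average
b = decimate(frames.mean(axis=0))                   # average first, then decimate
assert np.allclose(a, b)  # linear operations commute, so the order is irrelevant
```

So the choice between the two orderings is purely about memory traffic and speed, not correctness.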

Message 8 of 8
(2,634 Views)