Continuous decimation with uneven amount of samples

 

Hi! I am acquiring data from a cDAQ at a given rate. Before writing it I would like to decimate it – each channel with a different decimation factor. Since I acquire data over a long time, continuous decimation is required. In addition, I read all the data in the buffer – not just a predefined amount. I tried to create a memory-efficient algorithm for that, so I wanted to ask any of you to have a look at it. What could be improved?

It’s based on the Action Engine concept. When the VI is initialized, it creates storage for each channel holding the number of samples that can be skipped at the beginning of each incoming array.

When decimating, it skips those elements and decimates the rest, evaluating how many elements are redundant and should be skipped in the next iteration on that channel.
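In text form the idea is roughly this (a Python sketch only, since G is graphical; all the names are mine, not taken from the attached VIs):

```python
class ContinuousDecimator:
    """One decimation factor and one leftover-skip counter per channel."""

    def __init__(self, factors):
        self.factors = factors
        self.skip = [0] * len(factors)   # samples to drop at the next chunk start

    def process(self, chunks):
        """chunks: one newly read list per channel; lengths may differ."""
        out = []
        for ch, data in enumerate(chunks):
            f = self.factors[ch]
            if len(data) <= self.skip[ch]:
                # Chunk shorter than the leftover skip: output nothing,
                # carry the remaining skip into the next chunk.
                self.skip[ch] -= len(data)
                out.append([])
                continue
            rest = data[self.skip[ch]:]              # drop the redundant elements
            out.append(rest[::f])                    # keep every f-th sample
            self.skip[ch] = (f - len(rest) % f) % f  # redundant count for next time
        return out

d = ContinuousDecimator([2, 5])                      # ch0 by 2, ch1 by 5
print(d.process([list(range(7)), list(range(7))]))
# -> [[0, 2, 4, 6], [0, 5]]; the skip counters carry into the next call
```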

 

Main_TestDecim.vi – tests the algorithm on a given data set

Decim_Core.vi – the decimation algorithm plus removal of redundant elements

Decim_Cont.vi – the Action Engine

Could I ask some professional to have a look at it? Can I improve it in any way?

Thanks

 

LV 2011, Win7
Message 1 of 14

Hey Ceties,

 

I have taken a look at your code. It's really well developed – congratulations on your style! 🙂

 

There are a few things where I see room for improvement:

 

  • This Build Array function is inside a For Loop. Every time a new value is appended to the array, LabVIEW must reallocate the memory buffer and copy the entire array to a new location. This can cause execution time to grow with each loop iteration. Consider building those arrays with For Loops (auto-indexed tunnels) – see the sketch after this list!
  • This VI has debugging enabled, which can reduce performance slightly. Consider disabling debugging in the VI Properties >> Execution dialog box for this VI.
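The effect is easy to demonstrate outside LabVIEW, too. A rough Python/numpy analogy of the two wiring patterns (the variable names are mine):

```python
import numpy as np

n = 100_000

# "Build Array inside the loop": np.append reallocates and copies the
# whole array on every call, just like growing a LabVIEW array element
# by element.
grown = np.empty(0)
for i in range(n):
    grown = np.append(grown, i)      # O(current length) copy each time

# "Auto-indexed tunnel": one allocation, filled in a single pass.
indexed = np.arange(n, dtype=float)
```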

 

Have you tried the VI Analyzer? It can help you a lot with performance.

 

Have a nice day!

Matyas

Message 2 of 14

Hey Matyas, and thanks for your time. I cannot simply replace the Build Array with an auto-indexed tunnel on the For Loop, since the data has a different length for each decimation factor, so it would append zeros for channels with a bigger decimation factor. Or did I miss something?

Isn't there some other method to perform decimation than the one I developed?

Thanks!
LV 2011, Win7
Message 3 of 14

Hi Ceties!

 

I'll give you two options:

 

  • Upper part (solution A): I replaced the Build Array function with Insert Into Array. LabVIEW will only fill in a previously allocated array >> you can save memory.
  • Lower part (solution B): I minimized the code directly for this task. If you want to use this VI generally, it needs rework. It decimates all the samples in one go >> you will get arrays of equal length, padded with zeros.


I suggest you run the VI Analyzer in both cases to get further information about memory usage.

I only modified the main VI...

 

Have a nice day,

Matyas

Message 4 of 14

Hey!

 

1] Are you sure about Insert Into Array instead of Build Array? I know that Build Array is quite an optimized operation as long as you are appending to the end of the array.

2] I am aware of that optimization, but it won’t work for me since I have continuous decimation over multiple channels, and the fact that it adds zeros is also a problem – but thanks for trying anyway.

LV 2011, Win7
Message 5 of 14

Hey!

 

1.) What's the problem with Insert Into Array? You're continuously filling an array with new values. Isn't that just what you need?

Build Array constantly reorganizes your array in memory. When you start building not 3*100 but 3*1,000,000 elements, your PC will be buried under the load. Imagine having to move a 1D array of 1 MSamples from one memory location to another in a single loop iteration! It wouldn't be a honeymoon, even if it's a well-optimized function 🙂 Ideally, you build an array once and then fill it with elements: use Initialize Array / an auto-indexed For Loop / Build Array to create the array, then insert your data with Insert Into Array / Replace Array Subset / etc.
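As a Python analogy (the LabVIEW functions are named in the comments; the variable names are mine):

```python
import numpy as np

samples = np.random.rand(1000)

# Grow-as-you-go: each iteration copies everything accumulated so far.
buf = np.empty(0)
for x in samples:
    buf = np.append(buf, x)

# Allocate once, then overwrite in place ("Initialize Array" +
# "Replace Array Subset"): no copies after the single allocation.
buf = np.empty(len(samples))
for i, x in enumerate(samples):
    buf[i] = x
```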

2.) Agreed. I just wanted to give you an alternative solution. 🙂

 

Didn't you run the code? It gives you the same result with less CPU usage. Is it not what you were looking for?

 

Best,

Matyas

Message 6 of 14

Hi, and thanks again.

Now I see what you meant. I can use the For Loop to auto-index for each channel – I made it way too complicated – thanks for pointing it out.

 

Nevertheless, there is one thing wrong with your code – YOU HAVE TO ensure that the RESET of the AE occurs before the For Loop runs – that’s why its output has to be wired into the loop. Otherwise the execution order is not deterministic, and you can see this if you run the code multiple times.

 

 

If I replace your Insert Into Array with Build Array I get slightly better or equal results (in this case they both behave the same, allocating a chunk of memory and then copying it if the chunk is not big enough).

 

If I run the code 1M times:

Build Array: 117.004 ms

Insert Into Array: 119.942 ms

 

In addition, Build Array is much more readable in the code.
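For anyone who wants to reproduce the comparison outside LabVIEW, here is a quick Python sketch of the same benchmark idea (the timings above are from the attached VIs, not from this):

```python
import timeit

setup = "import numpy as np; a = np.arange(1000.0)"

# Append at the end vs. insert at the end: both must reallocate and
# copy the array, so the timings come out essentially identical.
print(timeit.timeit("np.append(a, 1.0)", setup, number=100_000))
print(timeit.timeit("np.insert(a, len(a), 1.0)", setup, number=100_000))
```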

LV 2011, Win7
Message 7 of 14

Hi Ceties,

 

Yes, it's true. There's no real difference in speed.

I caught the explanation at the wrong end. Our computers will not be limited in speed by using the Build Array function inside a loop. It's about memory allocation...

You will definitely be more memory efficient when not using those Build Array functions inside the loop.

 

Check this KB for more details:

http://digital.ni.com/public.nsf/websearch/771AC793114A5CB986256CAB00079F57?OpenDocument

 

From LabVIEW Help:

For example, if you see places where you are frequently increasing the size of an array or string using the Build Array or Concatenate Strings functions, you are generating copies of data.

http://zone.ni.com/reference/en-XX/help/371361B-01/lvconcepts/vi_memory_usage/

 

You have to make the decision:

  • If memory usage doesn't count, you can choose build array if it's more readable...
  • If memory usage counts, don't allocate arrays dynamically, even if it's faster with a good computer or a bit more readable...


I look forward to getting your opinion.

Matyas

Message 8 of 14
I am aware that by far the best way is to preallocate the array first and then just replace array subsets, but I cannot do that since I have a continuous measurement and I don't know how many samples I will acquire, i.e. how long the measurement will run.

I just wanted to point out that Build Array and Insert Into Array are totally equal in terms of speed AND memory allocation. Of course it would be best to preallocate the whole array, but for the reasons described above I cannot.
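(If it ever becomes a bottleneck, the usual compromise when the final size is unknown is chunked preallocation: grow the buffer geometrically and trim once at the end. Roughly, in Python – all the names are mine:)

```python
import numpy as np

class GrowingBuffer:
    """Doubles capacity when full, so appends cost O(1) amortized
    instead of one full copy per write."""

    def __init__(self, capacity=4096):
        self.data = np.empty(capacity)
        self.count = 0

    def append(self, samples):
        need = self.count + len(samples)
        if need > len(self.data):
            new_cap = len(self.data)
            while new_cap < need:
                new_cap *= 2                 # O(log n) reallocations overall
            grown = np.empty(new_cap)
            grown[:self.count] = self.data[:self.count]
            self.data = grown
        self.data[self.count:need] = samples
        self.count = need

    def snapshot(self):
        return self.data[:self.count]        # trim to the exact size
```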
LV 2011, Win7
Message 9 of 14

Hi,

 

Yes, you convinced me. If you later use this as a subVI for decimating a continuous measurement, it will make no difference in speed or memory. In the For Loops used there, you could preallocate the array, because you know the exact size of the array at the beginning.

If you take a deeper look at the LabVIEW Help link I attached, you'll see that we broke at least two rules of thumb: making copies of arrays and using a cluster of arrays. So what you can do now is get rid of that cluster >> you won't need either the Initialize Array or the Build Array functions.

 

Try to follow my second post and use separate arrays for all channels to store your decimated data!

Decimate in parallel and don't try to build one table for all the data >> then you won't have a 2D array filled with zeros to match sizes.
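In other words (a Python analogy; the values are made up for illustration):

```python
# One common 2D table forces padding:
#   [[0, 2, 4, 6, 8],
#    [0, 5, 0, 0, 0]]   <- zeros added only to match the longest row

# Separate storage per channel keeps each natural length:
channels = {
    "ch0": [0, 2, 4, 6, 8],   # decimated by 2
    "ch1": [0, 5],            # decimated by 5
}
```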

Is it possible in your application?

 

Bye,

Matyas

Message 10 of 14