Above I have cut out a piece of my code showing the portion that takes the Index Array output from channel 1.
Also attached is the program VI in the above reply. (In my program I am reading two channels, 0 and 1, from my cDAQ, and using Index Array to take elements out of the y-array for the moving average.)
So with every iteration of the loop you are interacting with DAQ hardware, converting between dynamic data, waveforms, and arrays, using two instances of a Bessel filter, doing two FFTs, updating three charts and four XY graphs, and you are still worrying that a simple "Index Array" is slowing you down? Even if you did several 1M-point running averages, it would probably be one of the fastest parts of the code.
Sorry, I cannot see how your DAQ is configured (I don't have DAQmx installed). How big are the datasets? How do you measure the loop rate?
When I plotted it on the waveform chart, the moving average response was just a little bit slower; that's what worried me, that my moving average might be slow.
I have set the cDAQ at 200 Hz, so 200 samples per second.
All parts in the loop run at the same speed. A running average always has a delayed response to quick input changes because it averages over the past N-point history. It cannot look into the future. Is that what you mean?
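To see that this lag is inherent and not a performance problem, here is a small Python sketch (not the poster's LabVIEW code; the function name and data are invented for the illustration) of an N-point moving average trailing a step input:

```python
def moving_average(data, n):
    """Average each sample with up to n-1 previous samples."""
    out = []
    window = []
    for x in data:
        window.append(x)
        if len(window) > n:
            window.pop(0)           # keep only the last n points
        out.append(sum(window) / len(window))
    return out

signal = [0.0] * 5 + [1.0] * 10     # step from 0 to 1 at sample 5
smoothed = moving_average(signal, 4)
# The smoothed trace reaches 1.0 only at sample 8, three samples
# after the step: the window must fill with "new" values first.
```

The delay is simply the window length, regardless of how fast the code itself executes.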
Yeah, maybe it's the quick data fluctuations that are delaying it. I am new to LabVIEW, so I was making sure my code does the moving average efficiently.
In my program I intend to read two channels simultaneously and then filter the data to eliminate the 23 Hz frequency that I am observing during the road test. And since there are fluctuations, I want to smooth out the data coming from the Bessel filter. So could you please just give my program a glance to make sure I am doing what I intend to do?
Thank you for your suggestions.
When you have extremely messy code (as you do) with data acquisition functions operating at who-knows-what speed, lots of things going on in a single loop, and the basic question is "Why is Array Indexing so Slow?", it helps to write some simple simulation code that takes as much of the variability out of the program as possible and lets you concentrate on what is going on.
Your question is about speed, and how additional processing slows you down. Write a "speedy data generator" using the Random Number Generator in the Numeric functions palette. If you expect your DAQ device to give you 1000 points at 1 kHz, and (since you'll be doing Waveform operations on them) you want the output as a Waveform (not Dynamic Data, shudder), you can simulate that this way:
Now you can put this in a loop and see how rapidly the loop runs (it should loop once/second). Now start adding other functions in, such as filters, FFTs, plots, writing to disk, etc., and observe their effect on your loop speed.
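For readers who want the same experiment outside LabVIEW, here is a rough Python equivalent of that simulation (the function name and parameters are invented for the sketch): a paced random generator stands in for the DAQ read, and the measured loop period shows how much headroom remains for extra processing:

```python
import random
import time

def acquire_simulated(n=1000, rate_hz=1000.0):
    """Stand-in for the DAQ read: n random points, paced so the
    call takes n/rate_hz seconds, just like real hardware would."""
    time.sleep(n / rate_hz)
    return [random.random() for _ in range(n)]

start = time.time()
iterations = 3
for _ in range(iterations):
    data = acquire_simulated()
    # insert filters, FFTs, chart updates, file writes here and
    # watch the measured period creep above one second
period = (time.time() - start) / iterations
print(f"loop period is about {period:.2f} s")
```

With no processing added, the period sits right at one second, because the paced acquisition dominates the loop, which is exactly the point of the exercise.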
So I presume you understand the principle of Data Flow. Given that the above piece of code takes a second to run, after which it produces the data that filters, FFTs, etc. require in order to "do their thing", would you expect that the total loop time should be something like "1 second + Processing Time for Filters, FFTs, etc."? Does it make sense that this might be > 1 second?
LabVIEW has a solution to this dilemma, something called the Producer/Consumer Design Pattern. It involves taking the one second's worth of data and "exporting" it (via a Queue) to a separate loop that exists to process (or "consume") the data. The above Snippet, if put in a loop, will run once a second, but 99.999% of the time will be spent doing the 1-second "wait". That means that the CPU has 0.99999 seconds "free" for a second (Consumer) loop to run and do filtering, FFTs, and all kinds of things.
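As a sketch of the pattern in Python rather than G (queue.Queue plays the role of the LabVIEW Queue, the timings are scaled down, and all names are invented for the example):

```python
import queue
import random
import threading
import time

q = queue.Queue()          # plays the role of the LabVIEW Queue
results = []

def producer():
    """Acquire only: one chunk per 'second', then enqueue and move on."""
    for _ in range(5):
        time.sleep(0.1)    # scaled-down stand-in for the 1 s acquisition
        q.put([random.random() for _ in range(100)])
    q.put(None)            # sentinel: tells the consumer we are done

def consumer():
    """Process each chunk at its own pace without stalling the producer."""
    while True:
        chunk = q.get()
        if chunk is None:
            break
        results.append(sum(chunk) / len(chunk))  # stand-in for filters/FFTs

threading.Thread(target=producer).start()
consumer()
print(f"processed {len(results)} chunks")
```

The key design point is the same in both languages: the acquisition loop never waits on the processing, and the Queue decouples their timing.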
There are examples of Producer/Consumer Designs in LabVIEW, and on the Web, and in this Forum. Do some "simulations" (for which you don't need hardware, incidentally), and convince yourself that if you know what you are doing, you can do it really nicely in LabVIEW.
Thanks for your suggestions.
Well I had used producer consumer loop before too for writing data in file.
So does that mean using producer consumer loop ensures no loss of data?
Producer/Consumer does not guarantee that you won't miss a data point, but it can go a long way toward "almost" guaranteeing it. Let's divide data acquisition into two pieces -- acquire data and do something with the data. The sequence has to be Acquire, then Process (unless you have a way to process data before you acquire it, quite a trick, even for LabVIEW). Generally, data acquisition occurs at regular intervals (for example, if acquiring 1000 points at a time at 1 kHz, every second you need to be ready to take the next sample). In principle, if any loop iteration takes 1.1 seconds, you miss a sample.
Now, suppose the second part, "do something with the data", is occasionally "slow" -- on average it takes 0.9 seconds, but sometimes it takes 1.3 seconds. Oops, missing data.
But if you are using Producer/Consumer, the Producer is just doing "Acquire" and probably taking only a few milliseconds. The data then goes into a Queue, which can "expand" to allow some processing to take extra time, but not have any impact on the Acquire process. Ideally, you want to process the data at the same rate you acquire it, but if it is slower, that only means that the Queue gets longer. As long as you have enough memory for the Queue to expand ...
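A minimal Python sketch of that buffering behavior (again using queue.Queue as a stand-in for the LabVIEW Queue; the numbers are made up): the producer enqueues a burst without waiting, the backlog grows, and the slow consumer still receives every chunk in order:

```python
import queue

q = queue.Queue()            # unbounded by default, so it can "expand"

for i in range(10):          # fast producer: 10 chunks, never blocks
    q.put(i)

backlog = q.qsize()          # 10 chunks held in memory, no data lost

drained = []
while not q.empty():         # the slow consumer catches up later
    drained.append(q.get())

print(backlog, drained)
```

The trade-off is exactly the one stated above: a lagging consumer costs memory (a longer Queue), not data.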
But generally, Producer/Consumer Design is used precisely to prevent any data being lost.
I tried to check the 200-point moving average VI you uploaded in the earlier post, but it has a missing delta-t subVI. Can you upload your delta-t subVI too, so I can check your program as well?