
DOWNSAMPLING AFTER OR BEFORE QUEUE

Hi,

 

I want to downsample my acquisition (fixed at around 1.6 kHz) to 100 Hz. I have one loop for acquisition and one loop for data logging, connected through a queue. My question is: should I downsample the waveform (using Resample Waveforms.vi) before or after enqueuing the element?
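In case it helps, here is roughly what I mean, written as a Python sketch (my real code is a LabVIEW VI, so this is only a text-language analogue; the 160-sample block size is an assumption, not something I have fixed yet):

```python
import queue
import threading

import numpy as np

FS_IN = 1600    # acquisition rate, Hz
FS_OUT = 100    # target logging rate, Hz
BLOCK = 160     # samples per read, i.e. one block every 100 ms (assumed)

q = queue.Queue()

def acquisition_loop(n_blocks=10):
    """Producer: read a block, then enqueue it."""
    for _ in range(n_blocks):
        block = np.random.randn(BLOCK)  # stand-in for the DAQ read
        # Option A: downsample BEFORE enqueuing (160 -> 10 samples per block)
        block = block.reshape(-1, FS_IN // FS_OUT).mean(axis=1)
        q.put(block)
    q.put(None)  # sentinel: acquisition finished

def logging_loop():
    """Consumer: dequeue and log (Option B would downsample here instead)."""
    while (block := q.get()) is not None:
        print(f"logging {block.size} samples")

threading.Thread(target=acquisition_loop).start()
logging_loop()
```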

Thanks!

Message 1 of 6

That depends on the rest of the code and the hardware... (for example: how the data is read, the block size per read, how often you empty the queue, ...)

You trade task processing time (cycle time*) against internal memory... but with your stated numbers, I assume the OS shifts more data onto the network in the meantime 😄

 

If you use continuous acquisition with DAQmx, the driver has its own data buffer, so if you read blocks of 160 values (every 100 ms) or more, I would place the downsampling before the enqueue...

 

*) With DAQmx and continuous acquisition, the cycle time is 'fixed' by the sample rate and the block size per read. Reading the data is fast, and so is decimation in this case; no problem.
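As a rough back-of-the-envelope check in Python (the 160-sample block and 16:1 factor are the assumed numbers from above; the real decimation would happen inside the VI):

```python
import time

import numpy as np

BLOCK = 160    # samples per 100 ms read at 1.6 kHz
FACTOR = 16    # 1.6 kHz -> 100 Hz

block = np.random.randn(BLOCK)

t0 = time.perf_counter()
decimated = block[::FACTOR]  # simple decimation: keep every 16th sample
dt = time.perf_counter() - t0

print(f"{BLOCK} -> {decimated.size} samples in {dt * 1e6:.1f} us "
      f"(cycle budget: 100,000 us)")
```

On any modern PC the decimation takes on the order of microseconds against a 100 ms cycle budget, so doing it in the acquisition loop costs essentially nothing.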

 

Greetings from Germany
Henrik

LV since v3.1

“ground” is a convenient fantasy

'˙˙˙˙uıɐƃɐ lɐıp puɐ °06 ǝuoɥd ɹnoʎ uɹnʇ ǝsɐǝld 'ʎɹɐuıƃɐɯı sı pǝlɐıp ǝʌɐɥ noʎ ɹǝqɯnu ǝɥʇ'


Message 2 of 6

Thank you, Henrik_Volkers, for the explanation.

Message 3 of 6

And you can always use two queues:

1) Acquire and put on queue A

2) Get from queue A, downsample, and put on queue B

3) Get from queue B, do the rest

 

That might seem like the worst of both worlds (CPU and memory), but on an FPGA it might make sense if neither 1+2 nor 2+3 will fit in a single-cycle Timed Loop. On a PC it can let the program make the most of the CPU (since 1, 2, and 3 execute in parallel), at the expense of memory (and code complexity).
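In a text language the three-stage version looks roughly like this (a minimal Python sketch, with threads standing in for the parallel loops; the block size and 16:1 factor are carried over from the question, and the sentinel scheme is just one way to shut the pipeline down):

```python
import queue
import threading

import numpy as np

qa, qb = queue.Queue(), queue.Queue()

def acquire(n_blocks=5):
    """Stage 1: acquire and put on queue A."""
    for _ in range(n_blocks):
        qa.put(np.random.randn(160))  # stand-in for the DAQ read
    qa.put(None)                      # sentinel: no more data

def downsample():
    """Stage 2: get from queue A, downsample, put on queue B."""
    while (block := qa.get()) is not None:
        qb.put(block[::16])           # 1.6 kHz -> 100 Hz by decimation
    qb.put(None)

def consume():
    """Stage 3: get from queue B and do the rest."""
    while (block := qb.get()) is not None:
        print(f"processing {block.size} samples")

for stage in (acquire, downsample):
    threading.Thread(target=stage).start()
consume()
```

All three stages run concurrently, which is what lets the pipeline trade memory (and code complexity) for CPU utilization as described above.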

Message 4 of 6

I do not know if there is a cut-and-dried right answer, but there are many things to consider.

The two big factors seem to be memory and CPU, and how those two resources are used for the data that drives LabVIEW programs.

 

Rather than try to describe some rules of thumb, I will instead speak to some of the more demanding applications I have had the privilege to develop.

 

Project #1

Demux a digital stereo microphone signal

The microphones could operate in multiple modes to allow for high resolution and also power saving. Using the best CPU that money could buy, I was able to acquire an HSDIO signal at 800 MHz but could at best only record the data to disk. Any delay at all along the way resulted in not being able to get the data out of the HSDIO board fast enough to prevent overwrites. I had to resort to reducing the sample rate to 400 MHz.

 

Buried in that pile of info were two signals coming in on a single DI line that represented the left and right channels of the microphone. One was clocked by the rising edge and the other by the falling edge of a clock signal. Since the data from both microphones had to be sampled at the same time, analyzed, and displayed "live", I could not set up an acquisition using the clock: an acquisition has to be either rising-edge or falling-edge, but not both at the same time. That ruled out using the hardware to do the job, and I was faced with doing it in code.
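For illustration only, here is one way such a dual-edge demux can be done in software (a toy Python sketch; the short arrays are made-up stand-ins for the oversampled capture, nothing like the actual 800 MHz data):

```python
import numpy as np

# Oversampled digital line plus its clock (made-up example data)
clk  = np.array([0, 1, 1, 0, 0, 1, 1, 0, 0, 1, 1, 0])
data = np.array([1, 1, 0, 0, 0, 0, 1, 1, 1, 1, 0, 0])

edges   = np.diff(clk)
rising  = np.where(edges == 1)[0] + 1   # sample index just after each 0 -> 1
falling = np.where(edges == -1)[0] + 1  # sample index just after each 1 -> 0

left  = data[rising]   # channel clocked on the rising edges
right = data[falling]  # channel clocked on the falling edges
print(left, right)     # [1 0 1] [0 1 0]
```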

 

Now, to complicate the game, the devices had to be tested as they transitioned seamlessly from low-power to high-resolution mode (i.e., from one clock rate to another), to measure how long it took them to change states.

 

Oh Bother!

 

So what I did was...

 

I used queues to transfer the raw HSDIO data away from the acquisition buffer, to keep the incoming data from backing up. If the data in a wire does not fork (a fork can mean a data copy, costing memory and CPU) before being presented to an "enqueue" function, the queue can operate "in place": only a handle or some such is transferred from where the data was enqueued to where it is dequeued.
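The same "move a handle, not the data" idea shows up in text languages too; for example, Python's Queue hands over an object reference rather than copying the buffer (a small illustration of the concept, not of LabVIEW's internals):

```python
import queue

import numpy as np

q = queue.Queue()
block = np.zeros(1_000_000)  # a large acquisition block

q.put(block)                 # enqueues a reference to the array, not a copy
out = q.get()

print(out is block)          # True: the same buffer came out the other side
```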

 

So far so good: the acquisition can keep rolling.

 

The next step was to separate the left and right channels.

 

The data being enqueued from the hardware was processed sample by sample to reduce the 400 MHz data down to about 200 kHz, and the information from the two channels was separated. The two binary-encoded audio signals were then passed off via two additional queues, one each for the left and right channels.

 

Two additional loops accepted the binary-encoded audio signals and converted them to 20 kHz analog signals. After the conversion they were passed to another set of queues to be processed for noise analysis, FFTs, etc.

 

It kept all of the available processing power very busy, but by passing the data via queues in a "bucket brigade" scheme, the application worked and was able to keep up with the data.

 

So if there is a summary to this story, it is this: use queues to move data quickly, and reduce the amount of data being processed as near to the source as possible.

 

In the case of this thread, that would mean: downsample early, and do not waste resources moving fluff around.

 

Just my 2 cents,

 

Ben 

 

Retired Senior Automation Systems Architect with Data Science Automation
LabVIEW Champion, Knight of NI, and Prepper
Message 5 of 6

Hi guys,

Thanks, that's really interesting. I had never thought of using multiple queues; I'll probably give it a try here to process the data. Thanks again!

Message 6 of 6