LabVIEW


Decimate digital signal out from NI 9229

 

We used an NI 9229 module to acquire an analog signal for 10 seconds and convert it to digital at a sampling rate of 2000 Samples/s.

 

The application is seismological; the usual rates are 100 Hz and 200 Hz (or, more precisely, samples per second, sps). However, the minimum rate of the digitizer we use (NI 9229) is 1600 S/s. Thus, the sampled data has to be decimated down to the required rate.

 

I tried to solve this problem by averaging the data down to the required rate. In the example above, to get from 2000 S/s to 100 S/s, every 20 samples are represented by one sample.
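
For reference, a minimal NumPy sketch of that block-averaging step (the file name and the one-column data layout are assumptions for illustration, not taken from the attached VIs):

```python
import numpy as np

fs_in, fs_out = 2000, 100             # acquired rate and target rate (S/s)
factor = fs_in // fs_out              # 20 samples averaged into 1

signal = np.loadtxt("record.txt")     # assumed: one sample per line
n = (len(signal) // factor) * factor  # trim so the length divides evenly
averaged = signal[:n].reshape(-1, factor).mean(axis=1)
```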

 

However, because of the large differences in signal amplitude, the averaging method introduces a large error in the amplitude of the decimated signal.

 

So, how can the signal be decimated to low sampling rates (< 1000 S/s) while keeping the error in the decimated signal as small as possible?

 

Please download the attached files for more information.

 

 

Message 1 of 5

Things might be a little different in seismology, but I fear you don't understand what you are doing.  You use the terms "averaging" and "decimating" as though they were the same thing, which they are not.  "Decimation" means to replace your sample of N points with a smaller sample where you take "one point every m points" ("decimate" literally means "take every tenth point").  Thus if your data were 1, 5, 5, 5, 5, 1, 5, 5, 5, 5, ... (repeated in that pattern forever), a "decimated" version (taking every 10th point) would yield either 1, 1, 1, ... or 5, 5, 5, ... .  If you replaced the series by averaging every 10 points, you'd get 4.2, 4.2, 4.2 no matter where you started averaging.
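
To make the distinction concrete, here is a small NumPy sketch of the example above (Python is used only for illustration, since the code in this thread is LabVIEW):

```python
import numpy as np

# The repeating pattern 1, 5, 5, 5, 5, 1, 5, 5, 5, 5, ... (100 points here).
x = np.tile([1.0, 5.0, 5.0, 5.0, 5.0], 20)

every_tenth_a = x[0::10]                        # decimation from the 1st point -> 1, 1, 1, ...
every_tenth_b = x[2::10]                        # decimation from the 3rd point -> 5, 5, 5, ...
block_average = x.reshape(-1, 10).mean(axis=1)  # averaging every 10 points     -> 4.2, 4.2, ...
```

In practice, resampling routines (for example scipy.signal.decimate) low-pass filter the data before keeping every m-th point, which avoids both the aliasing of naive decimation and the amplitude distortion of plain averaging.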

 

What do you know about the expected frequency range of the data you are sampling?  You need to sample at a frequency at least two times the highest frequency you expect in your signal.  Are you doing this?

 

If you don't understand signals, especially digitized signals, and don't understand how to process them (including ways to sample "continuous" signals with digitizing instrumentation), you should learn about this before writing the software to process the data.  In particular, "decimation" does not seem to be appropriate in this case.

 

Bob Schor

 

 

 

 

Message 2 of 5

Thanks a lot, Bob, for your detailed explanation. I really need some time to understand decimation in depth, particularly in the seismology field. So, what solution would you suggest for my problem?

 

 

Message 3 of 5

@Emad_NRIAG wrote:

 So, what solution would you suggest for my problem?


First, get a good understanding of the raw signal you are trying to acquire.  I'm assuming you have seismographic data coming from some device.  Am I correct in thinking this is an analog signal, maybe in the range of ±1 to ±10 volts?  Do you know the expected highest frequency of the signals coming out of this instrument?  [It may have an analog filter somewhere inside, and the specifications may say something like "Frequency range 0.01 – 4000 Hz"].

 

Why is the frequency range important?  Well, if you want to acquire analog data using digital means, you need to sample the data at some sampling frequency.  In order to represent an analog sinusoid at some frequency "f", you need to sample with a frequency of at least 2f.  Indeed, a reasonable practice is to sample at, say, 10 times the highest frequency of interest to you, and to put an analog low-pass filter on your signal before you sample, one that removes all the frequencies above half your sampling frequency (if you don't, they will be aliased into your samples as "noise").
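
As a quick illustration of why the factor of two matters (the tone frequencies below are made-up example values, not taken from the attached data): a 300 Hz sine sampled at only 400 S/s produces exactly the same sample values as a 100 Hz sine, so out-of-band energy folds down and shows up as a false low-frequency component.

```python
import numpy as np

fs = 400.0                            # sampling rate, below 2 x 300 Hz
t = np.arange(0, 1, 1 / fs)           # one second of sample instants
high = np.sin(2 * np.pi * 300 * t)    # 300 Hz tone, above the 200 Hz Nyquist limit
alias = np.sin(2 * np.pi * 100 * t)   # 100 Hz tone
print(np.allclose(high, -alias))      # True: the samples coincide (up to a sign flip)
```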

 

Now ask yourself what the lowest frequency of interest is in your signal.  This determines the length of time that you need to (continuously) sample your signal.  Why?  You need to capture at least one period of your signal in your sample.  So if you were interested in frequencies down to 0.1 Hz, you would need to sample for at least 10 seconds.

 

So how many samples do you need to take?  If Fs is the Sampling Frequency, and Ts is the Sampling Time (the time you are continuously acquiring data), then Ns, the Number of Samples, is simply Fs * Ts.

 

For a signal with a Frequency range of, say, 0.1 – 2000 Hz, you would need to sample for 10 seconds at a minimum of 4000 Hz (twice the highest frequency of interest), or 40,000 points.
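
In code form (Python used just for illustration), with the numbers from this example:

```python
f_high = 2000.0     # highest frequency of interest (Hz)
f_low = 0.1         # lowest frequency of interest (Hz)

fs = 2 * f_high     # minimum sampling frequency: 4000.0 S/s
ts = 1 / f_low      # minimum sampling time:      10.0 s
ns = int(fs * ts)   # number of samples:          40000
```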

 

This should be easily accomplished with your NI 9229.  But, please, do not use the Dreaded DAQ Assistant (and especially do not use its Evil Twin, the Dynamic Data Wire, which looks like a wire made of black checkerboard).  Do a Web Search for "Learn 10 Functions in NI-DAQmx and Handle 80 Percent of your Data Acquisition Applications".  Learn about MAX, how to create an A/D Task, how to set up a loop to continuously sample at 4 kHz, 1000 samples at a time, and how to run your loop for 10 seconds (meaning 40 times).  At some point, you may want to learn about LabVIEW's Waveform data type and figure out how to combine the 40 Waveforms you'd get from your 40 loop iterations and arrange them into a single Waveform, all ready for all kinds of interesting analyses using LabVIEW Waveform functions ...
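
Bob's advice refers to the LabVIEW NI-DAQmx VIs.  Purely as a rough text sketch of the same loop structure, here is an approximate equivalent using the nidaqmx Python API; the channel name, module slot, and exact rate are assumptions you would replace for your own hardware.

```python
import nidaqmx
from nidaqmx.constants import AcquisitionType

FS = 4000          # requested sampling rate (S/s); assumed value
CHUNK = 1000       # samples read per loop iteration
DURATION_S = 10    # total record length (s)

with nidaqmx.Task() as task:
    # "cDAQ1Mod1/ai0" is a hypothetical channel name; check yours in MAX.
    task.ai_channels.add_ai_voltage_chan("cDAQ1Mod1/ai0")
    task.timing.cfg_samp_clk_timing(FS, sample_mode=AcquisitionType.CONTINUOUS,
                                    samps_per_chan=CHUNK)
    # The 9229 supports only discrete rates, so DAQmx may coerce FS;
    # task.timing.samp_clk_rate reports the rate actually used.

    record = []
    for _ in range((FS * DURATION_S) // CHUNK):   # 40 iterations of 1000 samples
        record.extend(task.read(number_of_samples_per_channel=CHUNK))
```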

 

Bob Schor

 

 

 

Message 4 of 5

Many thanks for your clear and illuminating explanation.

 

I'll try to carry out this task, certainly taking your suggestions into account.

 

Many thanks again for your cooperation.

Message 5 of 5