
Decimating a Signal While Keeping The Same Signal Length?

Hi, 

 

I would like to know if it is actually possible to do the above. 

 

I'm currently working on wavelet-based multiresolution analysis using LabVIEW Student Edition, which unfortunately doesn't include all the toolkits and Express VIs. Attached is a VI I have created (Daub4.vi), along with screen captures showing the results of the decimation.

 

What I'm trying to do is decompose the signal over 8 levels, halving the sampling rate at each level using Decimate 1D Array. However, when it comes to keeping the signal length the same, I'm not sure whether I should manipulate the dt value or simply use the waveform graph's scaling factor to do the job. Any advice would be welcome.
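In text form, the decomposition loop I have in mind looks roughly like this (a hypothetical Python sketch, not the attached VI; the signal values and rate are placeholders):

```python
import numpy as np

fs = 2048.0                    # example sampling rate, as in my data
dt = 1.0 / fs
signal = np.arange(2048.0)     # placeholder data: 1 s at 2048 Hz

levels = []
x, step = signal, dt
for _ in range(8):
    x = x[::2]                 # keep every 2nd sample (Decimate 1D Array, factor 2)
    step *= 2                  # doubling dt keeps the total time duration the same
    levels.append((x, step))

# Each level has half the samples of the previous one, but because dt doubles,
# len(x) * step stays at 1 s for every level.
```

Since both the sample count and the decimation factor are powers of two here, the duration works out exactly at every level.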

 

Also, another problem I have is pretty weird:

 

I'm trying to extract part of a very large ECG signal (1 second = 64 samples, sampling rate = 2048 Hz; I have about 10 minutes of data), selecting from sample 40960 to sample 65536. However, on output, the waveform graph only shows up to 65535 instead of 65536. Is this normal?

 

Thanks,

Derren

Message 1 of 4

Derren,

 

Exactly what do you mean by dividing the sample rate while keeping the signal length the same? Do you want the number of samples to be the same or the time duration of the total signal to be the same? Since you did not post the top level VI, I cannot tell what you are doing with the data.

 

Also, your last comment does not make sense: "1 second = 64 samples, sampling rate = 2048Hz". If you have 64 samples in one second, that is a sampling rate of 64 Hz, not 2048 Hz. Neither rate produces 65536 samples in ten minutes: 60 s/min * 10 min * 64 Hz = 38400 samples, and at 2048 Hz you get 1228800 samples. By today's standards those are not particularly large datasets.

 

In LabVIEW arrays are indexed from zero so the last element in a 65536 element array has index 65535.
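The same convention holds in most languages; a minimal illustration in Python:

```python
# Zero-based indexing: an array with 65536 elements has indices 0 through 65535,
# so a graph whose x-axis ends at 65535 is showing all 65536 samples.
data = list(range(65536))
count = len(data)        # 65536 elements in total...
last_index = count - 1   # ...but the last one sits at index 65535
```

So the graph ending at 65535 is not dropping a sample; it is labeling the last element by its index.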

 

Lynn

Message 2 of 4

Hi Lynn, 

 

What I meant was to keep the time duration of the total signal the same; sorry for the confusion.

What I'm trying to do with the data is to decompose it, then using the 3rd, 4th and 5th level of the decomposition, construct an algorithm for peak detection of the signal.

 

That last comment was a brain fart, excuse that. :/

I meant a sampling rate of 2048 Hz, with 64 samples per channel.

 

As for the 65536 samples (32 seconds, that is), it's just part of a data set that is 10 minutes long; again, sorry for the confusion.

 

Thanks,

Derren

Message 3 of 4

Derren,

 

OK. That makes more sense.

 

Consider the time at which each sample in the original data set is taken. The first is at t0, index 0. The second sample is taken at t0 + dt, index 1. The third is at t0 + 2*dt, index 2. After decimation by a factor of 2, the reduced array has the same first element as the original data, and its second element is the third sample of the original data set. The time at which that sample was taken has not changed; it is still t0 + 2*dt. So simply multiplying dt by the decimation factor and keeping the same t0 gives you the correct times of all the samples. Round-off errors can creep in, but this should not be an issue for what you are doing; by using integer powers of two for both the sample rate and the decimation factors, the calculations may even be exact.
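You can check this numerically. A small Python sketch (illustrative values only; 2048 Hz taken from the thread) confirms that the timestamps of the kept samples are exactly what you get by scaling dt:

```python
import numpy as np

# Original signal: n samples starting at t0, spaced dt apart.
t0, dt, n = 0.0, 1.0 / 2048, 16
factor = 2

times = t0 + dt * np.arange(n)       # time of each original sample
kept = times[::factor]               # decimation keeps every 2nd sample

# Rebuild the timeline using the same t0 but dt multiplied by the factor:
rescaled = t0 + (dt * factor) * np.arange(n // factor)

# Both timelines match, sample for sample.
```

This is why adjusting dt (rather than rescaling the graph axis) is the cleaner fix: the waveform still carries the true acquisition times.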

 

Lynn

Message 4 of 4