Why are the amplitude values in the power spectrum dependent on total sampling time?

When I perform power spectrum analysis on my measurements, I have noticed that the spectrum changes depending on the total sampling time. I am using linear RMS averaging in the spectrum computation. The spectrum graph attached to this message illustrates my problem: it shows a measurement sampled at 40 kHz. The record length of each package of data sent to the spectrum calculation was increased from 40,000 samples (a 1 s measurement) to 320,000 samples (an 8 s measurement) per package, and the spectra of 10 such packages were averaged. I expected only a better frequency resolution, but I got different amplitudes as well. Moreover, the amplitudes decreased systematically as I increased the total sampling time. I have checked my hardware, the averaging method, the effect of the sampling rate, etc., but I could not find the reason for this systematic change. I then made one long measurement, divided it into packages of different sizes, and calculated the spectrum for each package size. I got the same tendency.

Now my question is whether this phenomenon is related to the computation algorithm, whether it is a known problem, and how I can avoid it.
Message 1 of 3
The total power of your signal, as represented by the power spectrum, is distributed over a number of 'bins' that are equally spaced in frequency. If your signal is a wideband noise signal, say white noise, each bin will contain the same amount of energy (on average). If you double the total number of bins (i.e. reduce the bin bandwidth by a factor of 2), the power in each bin will therefore be only half of what it was, so the total power of your signal, that is the sum of all bin powers, stays unchanged.
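You can see this effect outside LabVIEW too. Here is a minimal NumPy sketch (a stand-in for the Power Spectrum VI, with an illustrative `power_spectrum` helper, not NI's implementation): for white noise, an 8 s record spreads the same total power over 8× as many bins as a 1 s record, so each bin's level drops by about a factor of 8 while the sum over all bins stays put.

```python
import numpy as np

rng = np.random.default_rng(0)
fs = 40000  # sample rate in Hz, matching the 40 kHz in the question

def power_spectrum(x):
    """One-sided power spectrum whose sum over all bins equals the mean square of x."""
    n = len(x)
    ps = np.abs(np.fft.rfft(x)) ** 2 / n ** 2
    ps[1:-1] *= 2  # fold in negative frequencies (DC and Nyquist occur once)
    return ps

noise = rng.standard_normal(8 * fs)   # 8 s of unit-variance white noise

ps_1s = power_spectrum(noise[:fs])    # 1 s record -> 1 Hz bins
ps_8s = power_spectrum(noise)         # 8 s record -> 0.125 Hz bins

total_1s = ps_1s.sum()                # ~1.0 (the noise variance), by Parseval
total_8s = ps_8s.sum()                # ~1.0 as well: total power is unchanged
ratio = ps_1s[1:].mean() / ps_8s[1:].mean()  # ~8: each narrower bin holds 1/8 the power
```

This is exactly the systematic amplitude drop described in the question: the per-bin level scales with the bin bandwidth, not with the signal.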

Now the bandwidth of each bin (in Hz) is equal to the inverse of the length of your time-domain record (in seconds). If you change the sample rate or the number of samples of your signal, you are changing the record length, and this affects the bandwidth of your bins (and the energy level in each bin).
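The bin bandwidth = 1/T relationship can be checked directly from the FFT frequency axis (again sketched in NumPy rather than LabVIEW):

```python
import numpy as np

fs = 40000  # sample rate from the question, in Hz
for n in (40000, 320000):                 # the 1 s and 8 s record lengths
    T = n / fs                            # record length in seconds
    df = np.fft.rfftfreq(n, d=1 / fs)[1]  # spacing between FFT frequency bins
    print(f"{n} samples: T = {T:.0f} s, bin bandwidth = {df:.3f} Hz (= 1/T)")
```

So going from a 1 s record to an 8 s record narrows each bin from 1 Hz to 0.125 Hz, which is where the factor-of-8 amplitude change comes from.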

If you want the spectrum to be more or less unaffected by the record length, you can use the Power Spectral Density (PSD) VI instead of the Power Spectrum VI. The PSD normalizes your spectrum to a constant bin bandwidth of 1 Hz, and this should fix your 'problem'.
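The normalization the PSD performs is simply a division of each bin's power by the bin bandwidth. A minimal NumPy sketch of that idea (an illustrative `psd` helper, not NI's VI): after dividing by fs/N, the per-bin level is in units of power per Hz and no longer depends on the record length.

```python
import numpy as np

rng = np.random.default_rng(1)
fs = 40000  # sample rate in Hz

def psd(x, fs):
    """One-sided PSD: the power spectrum divided by the bin bandwidth fs/len(x)."""
    n = len(x)
    ps = np.abs(np.fft.rfft(x)) ** 2 / n ** 2
    ps[1:-1] *= 2           # fold in negative frequencies
    return ps / (fs / n)    # normalize to power per Hz

noise = rng.standard_normal(8 * fs)        # 8 s of unit-variance white noise

level_1s = psd(noise[:fs], fs)[1:].mean()  # 1 s record
level_8s = psd(noise, fs)[1:].mean()       # 8 s record
# Both levels are ~ variance / (fs/2): the same, independent of record length.
```

With this scaling, longer records refine the frequency resolution without shifting the level, which is the behavior the original poster expected.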

Hope these explanations helped.
Message 2 of 3
Thank you for the help. I have tried the Power Spectral Density instead of the Power Spectrum. Now all the spectra collapse onto a single curve.
Message 3 of 3