I am facing a strange problem here. I am using an NI PCI-6123 DAQ card. I apply an analog voltage signal of some frequency f1 from a function generator (FG), acquire the digitized voltage data using the C APIs (following the "DAQmx ANSI C" examples, e.g. "Cont Acq-Int Clk-Anlg Start"), and take the FFT of this data. I see a frequency shift in the acquired data, i.e. the peak appears at a frequency different from the one transmitted by the FG.
I tried MATLAB's fft as well as FFTW in C++ (with the FFT size chosen to get 1 Hz resolution), but the results are the same. The higher the sampling frequency, the larger the frequency shift. E.g. with a 492 kS/s sampling rate, a 100 kHz signal, and 492 k FFT points, the peak appears at around 98 kHz instead of 100 kHz. If I reduce the sampling rate to, say, 400 kS/s, the peak appears around 99.8 kHz, but there is still some shift.
However, when I acquire data in LabVIEW SignalExpress using the same setup and plot the power spectrum, I get the peak at the right frequency.
I have checked my MATLAB and C++ FFT codes many times on simulated data and found no problem there.
Any help is greatly appreciated.
I would like to confirm that I'm understanding you correctly. When you are using LabVIEW SignalExpress you get the behavior you expect. However, when using C and MATLAB you do not, right? But when you simulate the signal, C and MATLAB work correctly too; the problem only appears when you give C and MATLAB a signal from your device.
Please correct me if I'm not understanding this correctly.
Thanks for the reply. Yes, you are absolutely right. When I apply the C++ and MATLAB FFTs to data captured by the device using the NI C routines, I get a frequency shift. The higher the sampling rate, the larger the shift. However, if the data is captured and plotted by LabVIEW SignalExpress, it shows the correct frequency content.
Also, if I generate the data myself (simulated data) and apply the C++ and MATLAB FFTs, the results are correct.
Note: I have observed the same behavior on different NI cards, e.g. the PCI-6123 and PCIe-6363.
Hard to say for sure, but it sounds like it *could* be driven by the fact that *actual* sample rates sometimes need to differ from *requested* sampling rates. This happens because sample rates are derived by dividing a fixed clock by an integer, so only specific discrete sample rates are actually possible.
What points me in this direction is that the higher the requested sample rate, the smaller the integer divisor gets, leading to larger discrete steps in sample rate, and a higher likelihood of a bigger % error between requested and actual rates.
If you do your FFT based on an assumption that the *requested* sample rate was used, it would end up looking like a frequency shift.
There's a way to query the task for its *actual* sample rate. I know how to do it with the LabVIEW API, but not in C++. I'm sure it must be there somewhere, though.
Thanks for the help. I have studied some NI literature on the difference between the actual sample rate and the desired sample rate. I think the onboard clock frequency of the PCI-6123 card is 20 MHz. So if I set my sample rate to e.g. 492010 S/s per channel (desired sample rate), the actual sample rate I get will be:
actual sample rate = round(20e6/(round(20e6/492010)))
Kindly correct me if i am wrong.
Note: I could not find any C API that queries the task for its *actual* sample rate. Can anyone help me in this regard, please?
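For what it's worth, the DAQmx C API does appear to expose this through DAQmxGetSampClkRate, which returns the coerced rate once the task's timing has been configured. A minimal sketch (error checking omitted; the 492010.0 rate and buffer size are just the example values from this thread):

```c
#include <NIDAQmx.h>

/* After configuring timing, ask the driver what rate it actually chose. */
float64 query_actual_rate(TaskHandle task)
{
    float64 actualRate = 0.0;
    DAQmxGetSampClkRate(task, &actualRate);  /* coerced sample rate */
    return actualRate;
}

/* Usage sketch:
 *   DAQmxCfgSampClkTiming(task, "", 492010.0, DAQmx_Val_Rising,
 *                         DAQmx_Val_ContSamps, 1000000);
 *   float64 fs = query_actual_rate(task);
 *   // use fs, not 492010.0, to scale the FFT frequency axis
 */
```

Scaling the FFT bins by this returned rate instead of the requested one should make the peak land where SignalExpress shows it.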
Thanks & Regards
Yeah, that's the right idea for calculating an achievable actual sample rate. I don't think the outer round() should be there, though, and I'm not sure which rounding method DAQmx uses where you have the inner round(). It might round to the nearest integer, or it might always round up or always down. It will be consistent; I just don't know offhand which way it does things.