FPGA DDS and DAQmx frequency accuracy

Solved!

Hello.

I have a question about the frequency accuracy that I can expect from my DDS signal generation code. 

 

In short, I am generating a signal using DDS signal generation code on an FPGA and then recording the resulting data using DAQmx code.  After recording the data for a long period of time, I Fourier transform the time-domain data and look at the frequency spectrum.  I find that the frequency peaks are a few parts per million off from where I expect them to be.  I am curious whether this is the best frequency accuracy I can expect or whether I am doing something wrong.
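(Edit: for anyone following along, here is a rough numpy sketch of the kind of analysis I'm doing. The sample rate, record length, and the deliberately injected 3 ppm offset are all made up for illustration; the real acquisition runs for many hours at the PXI-4462's rates.)

```python
import numpy as np

# Hypothetical parameters: 1 kS/s for one hour, tone deliberately 3 ppm high.
fs = 1000.0
t = np.arange(0, 3600, 1 / fs)
f_true = 22.0 * (1 + 3e-6)
x = np.sin(2 * np.pi * f_true * t)

# FFT of the Hann-windowed record; bin spacing is fs/N ~ 278 uHz here.
spec = np.abs(np.fft.rfft(x * np.hanning(len(x))))
freqs = np.fft.rfftfreq(len(x), 1 / fs)
k = np.argmax(spec)

# Parabolic interpolation of the log magnitude around the peak bin
# refines the estimate to a small fraction of a bin.
a, b, c = np.log(spec[k - 1 : k + 2])
delta = 0.5 * (a - c) / (a - 2 * b + c)
f_est = freqs[k] + delta * fs / len(x)

ppm = (f_est / 22.0 - 1) * 1e6
print(f"estimated {f_est:.9f} Hz, offset {ppm:.2f} ppm")
```

With a long enough record the interpolated peak resolves offsets well below the raw bin spacing, which is why a few-ppm shift at 22 Hz is clearly visible in our spectra.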

 

I have attached all of the code for completeness, but I believe that most of it is unnecessary for this discussion.  I inherited this code, and we are updating and changing its features.  I am aware that there are unnecessary and inefficient parts in the code. 

 

The “FPGA” code is run on a PXI-7852R FPGA.  I believe the most relevant part of the code for this discussion is the central section in the middle near the label “COMP X”.  We use a phase accumulator to step through some lookup tables in order to generate an analog output (A02 near the bottom) that is the sum of two sine waves at different frequencies.  There are some subVIs that are not attached.  I do not believe they are relevant, but I am happy to send them if necessary.
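(Edit: to make the phase-accumulator scheme concrete for readers without the attached code, here is a software sketch. The accumulator width, lookup-table size, and the use of the 40 MHz onboard clock as the update rate are my assumptions for illustration, not the actual word widths in our FPGA VI.)

```python
import numpy as np

ACC_BITS = 48                # assumed accumulator width
LUT_BITS = 12                # assumed lookup-table address width
f_clk = 40e6                 # PXI-7852R onboard clock, assumed update rate

# Sine lookup table, one full cycle.
lut = np.sin(2 * np.pi * np.arange(2**LUT_BITS) / 2**LUT_BITS)

def dds_samples(f_out, n):
    """Step a phase accumulator and index the sine LUT, DDS-style."""
    # Tuning word: per-tick phase increment, rounded to an integer.
    tuning = round(f_out / f_clk * 2**ACC_BITS)
    phase = (tuning * np.arange(n)) % 2**ACC_BITS
    # Top LUT_BITS bits of the accumulator address the table.
    return lut[phase >> (ACC_BITS - LUT_BITS)]

# The synthesized frequency differs from the request only by the rounding
# of the tuning word; the step size is f_clk / 2**ACC_BITS.
tuning = round(22.0 / f_clk * 2**ACC_BITS)
f_actual = tuning * f_clk / 2**ACC_BITS
print(f"resolution {f_clk / 2**ACC_BITS:.3e} Hz, actual {f_actual:.9f} Hz")
```

With these assumed widths the tuning-word rounding error at 22 Hz is far below a part per million, which is part of why I doubt the DDS itself explains the discrepancy.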

 

The “Host” code is run on a PXI-8101 Embedded Controller.  It allows us to change parameters and scale variables without recompiling.

 

The data is acquired using the “DAQmx” code.  We use a PXI-4462 Dynamic Signal Analyzer.  The FPGA, embedded controller, and signal analyzer are all connected to the same PXI-1044 chassis.  The chassis has an external 10 MHz atomic clock as its input into the back-panel 10 MHz Ref In BNC.

Here is the issue:  I set one of the sine components in COMP X to a frequency of 22 Hz.  I recorded the output signal with the DAQmx code for many hours and then took the Fourier transform of that signal.  The “22 Hz” frequency peak was off by a few tens of microhertz.  (See attached.)  The size of the discrepancy seems to be proportional to the frequency.  For example, if my other sine frequency is at 66 Hz, the “66 Hz” peak is off from 66 Hz by three times as much as the “22 Hz” peak is off.

 

I can think of a few possible causes for this discrepancy.  First, it may be that this is as good an accuracy as I can expect given my parameters and hardware.  Second, there may be an issue with my code that is throwing off the frequency.  (Note:  I have an error monitor on the phase accumulator, and the loop seems to be running in the appropriate number of ticks.)  Third, I may not have all of my hardware properly referenced to a common clock.  I feel that this may be the most likely.

 

As I understand it, all of the clocks should be synchronized to the chassis clock (which is controlled by the external clock).  The attached screenshots show that I have attempted to synchronize the FPGA with PXI_Clk10.  I have tried the base and top-level clock settings using PXI_Clk10 as shown in the attached.  I’ve also tried using the default onboard clock for the base and top-level clocks.

I would appreciate any insight that you could share about this issue.


Thank you.

 

(Possibly) Relevant Information

LabVIEW 2013

PXI-7852R FPGA

PXI-8101 Embedded Controller

PXI-1044 Chassis with external 10 MHz clock input into back panel 10 MHz Ref In BNC

PXI-4462 Dynamic Signal Analyzer

Windows 7

Message 1 of 9

Here are the screenshots.

Message 2 of 9

And one more screenshot.

Message 3 of 9

 

Unless your timebase is at an integer multiple of 11 Hz, you will not get an integer number of samples per cycle at 22 or 66 Hz.  Regardless of the size of the phase accumulator, there will be some error.  Do you see the same kind of error behavior at 10 and 30 Hz?

 

Lynn

Message 4 of 9

Thanks for your reply, Lynn.  I appreciate you taking the time to help.

 

Unfortunately, checking at other frequencies requires an overnight run, so I haven't done that yet.  I will look into it.

 

In the meantime, is there a way to estimate what the error will be if the timebase is not an integer multiple of the frequency?

 

Thanks again.

Message 5 of 9

I would have to spend some time thinking about the details to give you a comprehensive answer.

 

The basic process is to calculate the exact number of samples which would be generated in a perfect world.  Then round to integers in a way which is consistent with the register and word sizes in your devices.  Then go back and calculate the actual frequencies produced by the finite representations of the data.

 

For example, suppose the sampling frequency is 10 MHz and you are assuming that it is exact, that is, you are neglecting any errors due to the reference clock.  Also suppose that the frequency or period is represented by a 16-bit integer.  For 22 Hz the period is 0.04545454545... seconds.  That requires 454,545.45... periods of the 10 MHz clock.  Round that to 454545 as the nearest integer.  Then the period is 454545 cycles of the 10 MHz clock, or 0.0454545 seconds.  The frequency of that signal is 22.000022... Hz.
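The arithmetic above can be reproduced in a few lines (Python here just as a calculator; the actual representation in the hardware may differ):

```python
# Quantize the 22 Hz period to an integer number of 10 MHz clock ticks
# and see what frequency actually results.
f_clk = 10e6
f_req = 22.0

ticks = round(f_clk / f_req)     # 454545.45... rounds to 454545
f_act = f_clk / ticks            # frequency actually produced
err_ppm = (f_act / f_req - 1) * 1e6

print(ticks, f_act, err_ppm)     # about 1 ppm high at 22 Hz
```

The same calculation at any requested frequency bounds the rounding error at roughly half a clock tick per period.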

 

The DDS may use a more sophisticated algorithm to more closely approximate the desired frequency.

 

Lynn

Message 6 of 9

Thanks again for the response, Lynn.

 

Thank you for the tip.  We have looked into this, and the frequency discrepancy that we are seeing is much larger than the error expected from rounding.

 

The FPGA frequency output looks stable relative to the external 10 MHz chassis clock (at least to a level well below our frequency discrepancy).  I am now thinking that the discrepancy could come from the data acquisition code.  Perhaps the sample clock in our DAQmx code for the PXI-4462 Dynamic Signal Analyzer is referencing a different clock.

 

Do you think this could be the source of our error?  Or do you have any other suggestions on things to check?

 

Thanks.

Message 7 of 9

What is the sample rate used on the PXI-4462? The "standard" rates of 204.8 kS/s and binary divisions of that will result in the same kinds of resolution issues. 

 

Because you will not get an integer number of samples per cycle of your 22 Hz signal, the measurements will show spectral leakage, which could account for the errors.
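A quick numerical illustration of the point (my own sketch, using 1600 S/s, which is 204.8 kS/s divided by 128, and a short power-of-two record; the original poster's multi-hour records have much finer bin spacing):

```python
import numpy as np

fs = 1600.0          # 204.8 kS/s / 128, one of the "binary" rates
N = 2**15            # power-of-two record length: 20.48 s of data
t = np.arange(N) / fs
x = np.sin(2 * np.pi * 22.0 * t)

cycles = 22.0 * N / fs            # 450.56 cycles: not an integer
spec = np.abs(np.fft.rfft(x))     # rectangular window: worst-case leakage
k = np.argmax(spec)
f_peak = k * fs / N               # nearest-bin frequency estimate

print(cycles, f_peak)             # peak lands ~21.5 mHz away from 22 Hz
```

Since 22 Hz falls between bins, the energy smears into neighboring bins and a naive peak readout is biased by up to half a bin; windowing and interpolation reduce, but do not eliminate, this effect.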

 

Lynn

Message 8 of 9
Solution
Accepted by topic author w4frawerfr

Thanks again, Lynn.  I appreciate you taking the time to help.

 

I believe that we have found the solution to our problem.  I had not correctly chosen the reference clock for the sample clock of the DAQmx data acquisition.

 

The problem was in how we were acquiring the data in the DAQmx LabVIEW program.  The NI card that we were using for acquisition, the PXI-4462 Dynamic Signal Analyzer, was running off the wrong clock.  The 4462's sample clock was referencing the 4462's onboard clock.  I had assumed that this clock was synced to the chassis 10 MHz reference clock, PXI_Clk10 (which in our case is overridden by the atomic clock), but I was wrong.

 

In the VI "DAQmx Task ExampleDanWriteToSpread4ch.vi" there was a problem.  I have a screenshot of the code below.  Here's the change I made.  First, double-click to open the subVI circled in red in the screenshot.  This opens the subVI seen in the second screenshot.  In the original code, the "task out" of the Sample Clock was wired to the "task/channels in" of the Start Trigger (labeled "Start None"), and the "error out" of the Sample Clock was wired to the "error in" of the Start Trigger.  This original configuration is indicated by the blue arrows.

I added the parts circled in red below: two DAQmx Timing property nodes.  The first is necessary to reference the sample clock to the chassis/atomic reference clock.  The second isn't necessary; it just reads back the actual sampling rate of the DAQ.  You can left-click on an element (white background) to change which property you're controlling or reading.  The reference clock is under More->Reference Clock->Source.  (Now that I think about it, you could probably put both of those in one DAQmx Timing node.  I'll change that now, but I'm not going to update the screenshot.  It'll look the same, just with both elements under the same yellow header.)

The important thing is that the reference clock source must be PXI_Clk10.  The "Dev1" (that is, device 1) doesn't really matter; all of the devices should be tied to the same PXI_Clk10, so any "Dev" number should be fine.
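(Edit: for anyone doing this from text-based code instead of LabVIEW, I believe the equivalent settings in the Python nidaqmx package look roughly like the fragment below. The channel name and rate are placeholders, and I have only verified the LabVIEW property node on our hardware, so treat this as a sketch.)

```python
import nidaqmx
from nidaqmx.constants import AcquisitionType

with nidaqmx.Task() as task:
    task.ai_channels.add_ai_voltage_chan("Dev1/ai0")  # placeholder channel
    task.timing.cfg_samp_clk_timing(
        rate=1000.0, sample_mode=AcquisitionType.CONTINUOUS
    )
    # Reference the sample clock to the chassis backplane 10 MHz clock,
    # mirroring the DAQmx Timing property node described above.
    task.timing.ref_clk_src = "PXI_Clk10"
    task.timing.ref_clk_rate = 10e6
    print(task.timing.samp_clk_rate)  # read back the coerced actual rate
```

This is a hardware configuration fragment and obviously won't run without a DAQmx device installed.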

 

Once I did this, my frequency accuracy seemed to be spot on (to the level I'm measuring it).  The frequency measurements seemed accurate no matter what sampling rate I chose, whether or not the clock frequency was an integer multiple of the sampling rate.

Message 9 of 9