High-Speed Digitizers


5114 Frequency Accuracy

The 25 ppm represents the accuracy of the acquired samples. As a whole, it can be thought of as quantization error, or digitization error. This doesn't take into account the accuracy of the algorithm used. While I'm not aware of how to calculate the accuracy of the algorithms, I do know how to calculate the accuracy of the measurements based on a known input frequency.

 

Absolute Error:

err = |Measured_Freq - Actual_Freq| / Actual_Freq

err_in_ppm = err / 1E-6
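
If you'd rather check this outside LabVIEW, the formula is a one-liner in any language; here is a minimal Python sketch (the function name and the 70 kHz example values are mine):

    def freq_error_ppm(measured_hz, actual_hz):
        """Absolute frequency error, |measured - actual| / actual, in ppm."""
        err = abs(measured_hz - actual_hz) / actual_hz
        return err / 1e-6

    # Example: a 70 kHz input reported as 70.00175 kHz is off by 25 ppm.
    print(freq_error_ppm(70001.75, 70000.0))  # 25.0 (approximately)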

 

The following article lists all the sources of error when using Single Tone Measurement.vi and gives an example of how to use it and calculate the error using the formula above: http://www.ni.com/example/30792/en/

 

You can use this example to see what effect sample rate, noise, etc. have on the accuracy of the frequency measurement algorithm.

 

I hope this helps.

-Nathan

Systems Engineer
SISU
Message 11 of 19

So the 25 ppm is 25 ppm of the measured signal? i.e., 70 kHz * 25 ppm = 1.75 Hz? I don't use LabVIEW, so the reference you give is useless.

Message 12 of 19

http://www.ni.com/pdf/manuals/374179e.pdf

 

If you look at the specifications, it's 25 ppm of the timebase, i.e., timebase accuracy. Since the sample clock is derived from this timebase, it has the same accuracy. So using the PXI-5114 at the max sample rate, you have 250 MHz * 25 ppm = 6,250 Hz. This is the maximum amount the sample clock will vary from the ideal 250 MHz. If you are sampling at 70 kHz, the actual sample rate will be somewhere between 69.99825 kHz and 70.00175 kHz (70 kHz +/- 1.75 Hz).
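
A quick Python sketch of that arithmetic (the function is mine; only the 25 ppm figure comes from the spec):

    TIMEBASE_PPM = 25e-6  # +/-25 ppm timebase accuracy, per the 5114 spec sheet

    def clock_range(nominal_hz, ppm=TIMEBASE_PPM):
        """Worst-case low/high actual clock for a nominal rate and ppm accuracy."""
        delta = nominal_hz * ppm
        return nominal_hz - delta, nominal_hz + delta

    print(clock_range(250e6))  # (249993750.0, 250006250.0) -> +/-6250 Hz
    print(clock_range(70e3))   # (69998.25, 70001.75)       -> +/-1.75 Hz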

 

Now notice we have not talked about the actual signal yet. This variability in the sample clock means the measured signal can be slightly distorted once digitized. To make the effect visible, let's grossly exaggerate the clock error: suppose software assumes a 70 kHz sample rate, so each sample is assigned a dt of ~14.3 µs, but the clock actually ran at 68.25 kHz consistently during the entire acquisition, making the true dt ~14.7 µs. Because each real 14.7 µs interval is interpreted as 14.3 µs in software, the digitized signal is compressed in time, and its reported frequency comes out higher than its true frequency.

 

Now if our measured signal has a period of 10 samples, then at the actual sample clock rate of 68.25 kHz its true frequency is 68.25 kHz / 10 = 6.825 kHz. In software, the same 10-sample period appears to be 70 kHz / 10 = 7 kHz, and that is what the frequency measurement will return, assuming the algorithm itself is perfectly accurate.
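
In Python, the same arithmetic (keeping the deliberately exaggerated 68.25 kHz clock for illustration):

    assumed_fs = 70e3    # Hz, the sample rate software believes it has
    actual_fs = 68.25e3  # Hz, exaggerated clock error for illustration only
    period_in_samples = 10

    true_freq = actual_fs / period_in_samples       # 6825.0 Hz
    reported_freq = assumed_fs / period_in_samples  # 7000.0 Hz
    print(true_freq, reported_freq)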

 

So, as you can see, the accuracy of the measured signal depends on the accuracy of BOTH the sample clock (given in the specifications) and the algorithm used to determine the frequency. Since we can expect the sample clock to have random jitter (which was not the case in the idealized example above), averaging many periods at high resolution will help us determine the true frequency of our signal.

 

I've built an executable out of the example you said wasn't helpful, though you will still need the LabVIEW Run-Time Engine to run it (attached).

 

-Nathan

Systems Engineer
SISU
Message 13 of 19

Thank you for the very detailed explanation. If I understand this correctly, lowering the sample rate improves the accuracy.

I got your program to run, and it seems to verify this. So my conclusion is: if I lower the sample rate to 1 MHz, my worst-case error drops to 1 MHz * 25 ppm = 25 Hz.

Message 14 of 19

You are correct: by reducing the sample rate, you reduce the absolute error (in Hz) introduced by the reference timebase. If you're using a PXI system, you could also use a timing and sync card, which has a more accurate clock that can serve as your reference timebase instead.

http://sine.ni.com/nips/cds/view/p/lang/en/nid/13332

 

On the algorithm side, and with the example program, there are other factors that affect the absolute accuracy of the frequency measurement. One of the biggest factors when using Single Tone Measurement.vi (used in the example program) is the number of periods included in the calculation: the more periods there are, the better the average that is calculated.
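
I can't show the internals of Single Tone Measurement.vi, but a toy Python simulation of a different estimator (counting periods between zero crossings, with made-up timing jitter on each crossing) shows the same trend, that spanning more periods tightens the estimate:

    import random
    random.seed(0)

    f_true = 7e3     # Hz, the signal we are trying to measure
    jitter = 50e-9   # s of timing error per crossing (an assumed, made-up value)

    def estimate(n_periods):
        """Estimate frequency from the first and last crossing times."""
        t_first = random.gauss(0.0, jitter)
        t_last = n_periods / f_true + random.gauss(0.0, jitter)
        return n_periods / (t_last - t_first)

    for n in (1, 10, 100):
        err_ppm = abs(estimate(n) - f_true) / f_true / 1e-6
        print(n, "periods ->", round(err_ppm, 1), "ppm")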

 

One word of caution when reducing the sample rate: your signal resolution will also be affected. For example, going from 100 points per period to 10 points per period, the time-domain signal will not look as smooth. This can also affect the frequency calculation algorithms, especially if they need to interpolate where the zero crossings are; interpolation is most accurate with points that are just barely above and below zero.
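
For what it's worth, here is the generic linear-interpolation idea in Python (this is not NI's actual algorithm, just an illustration of why near-zero points help):

    def interp_zero_crossing(t0, y0, t1, y1):
        """Estimate the zero-crossing time by drawing a straight line
        between two samples that straddle zero."""
        return t0 + (t1 - t0) * (-y0) / (y1 - y0)

    # Samples just barely below/above zero pin the crossing down well...
    print(interp_zero_crossing(0.0, -0.01, 1.0, 0.01))  # 0.5
    # ...while widely spaced samples of a curved waveform can miss it,
    # since the straight-line assumption gets worse away from zero.
    print(interp_zero_crossing(0.0, -0.9, 1.0, 0.3))    # 0.75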

 

I hope this helps!

-Nathan 

Systems Engineer
SISU
Message 15 of 19

Great help! Thanks!

Message 16 of 19

A follow-on question to the +/-25 ppm timebase accuracy: I understand that if I am sampling at 100 MHz my error will be 2500 kHz. Does this also apply to time measurements? Like 1/2500 kHz = 400 ns error? I'm measuring rise time on a signal and need to calculate the error. Is this the answer?

 

Thanks.

 

Barry

Message 17 of 19

If you're attempting to measure error on the rise time of a signal, then timebase accuracy is only a small part of the error. Other sources include jitter and AC accuracy. The largest source of error for rise time measurements actually occurs when the rise time of the signal you're measuring approaches the rise time of the digitizer itself.

 

So, the point I want to make is this: we found above that accuracy is dramatically improved through averaging, and averaging will reduce or eliminate the error due to jitter and timebase accuracy.

 

So, to answer your question: 100 MHz actually gives you a timebase error range of 2.5 kHz (100 MHz * 25 ppm), so your clock will be somewhere between 99.9975 and 100.0025 MHz. As you can see, simply converting that error range to a time (your 400 ns) is not the correct approach. You need to compare the difference in period between a 100 MHz and a 100.0025 MHz clock.

 

A 100 MHz clock has a 10 ns period, and at the maximum ppm error a 100.0025 MHz clock has a period of ~9.99975 ns. So the maximum time offset you will see due to timebase accuracy between two adjacent samples is about 0.25 ps (10 ns * 25 ppm).
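
The same numbers in a few lines of Python:

    nominal = 100e6                # Hz
    worst = nominal * (1 + 25e-6)  # 100.0025 MHz, max ppm error

    dt_nominal = 1 / nominal       # 10 ns
    dt_worst = 1 / worst           # ~9.99975 ns
    offset_ps = (dt_nominal - dt_worst) * 1e12
    print(offset_ps)               # ~0.25 ps between adjacent samples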

 

In conclusion, while the timebase frequency can be off by up to 25 ppm, it contributes very little error between adjacent samples. For a rising-edge measurement, you should be more concerned with the other sources of error.

 

I hope this helps!

-Nathan

Systems Engineer
SISU
Message 18 of 19

I think there is still some confusion on timebase accuracy. The timebase error depends only on the length of the measurement interval. If the timebase clock is high by 25 ppm, then the total sample time will be short by 0.0025%. The timing error is fixed for each period and accumulates over the entire measurement interval: if the sample rate is higher, the error per period is smaller; if the sample rate is lower, the error per period is larger.

This can be demonstrated in the time domain. At 100 MHz, the period is 10 ns, with an error of 25 ppm * 10 ns = 0.00025 ns. At 10 MHz, the period is 100 ns, with an error of 25 ppm * 100 ns = 0.0025 ns. But it takes 10 times more samples at the 100 MHz rate to cover the same measurement interval as at 10 MHz, so 10 * 0.00025 ns = 0.0025 ns, the same total error.
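
A short Python check of that claim, comparing the accumulated timebase error at two sample rates over the same measurement interval:

    ppm = 25e-6
    interval = 1e-3  # a 1 ms measurement interval (arbitrary choice)

    for fs in (10e6, 100e6):  # 10 MHz and 100 MHz sample rates
        n_samples = interval * fs
        err_per_period = ppm / fs  # seconds of error per sample period
        total = n_samples * err_per_period
        print(fs, total)           # 2.5e-8 s (25 ns) either way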

 

The higher sample rate does improve accuracy in a different way. Instead of the zero crossing landing somewhere in the middle of a 100 ns window, it falls within only a 10 ns window at the high rate, so the worst-case placement error is less than +/-10 ns instead of +/-100 ns. The software algorithms should also work much better with more data points. I noticed the pulse width measurement algorithm uses digital hysteresis to find crossing points. I wonder if any of these algorithms are published anyplace by NI.
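
Since NI hasn't published the algorithm, here is only a guess at what hysteresis-based crossing detection might look like, in Python (the thresholds and data below are made up):

    def rising_crossings(samples, threshold=0.0, hysteresis=0.1):
        """Report sample indices of rising crossings. The detector only
        re-arms after the signal falls below (threshold - hysteresis), so
        noise chattering around the threshold is counted once, not twice."""
        armed = False
        hits = []
        for i, y in enumerate(samples):
            if armed and y > threshold:
                hits.append(i)
                armed = False
            elif not armed and y < threshold - hysteresis:
                armed = True
        return hits

    noisy_edge = [-0.5, -0.2, 0.05, -0.02, 0.3, 0.5, -0.5, 0.4]
    print(rising_crossings(noisy_edge))  # [2, 7]: the -0.02 dip doesn't re-arm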

 

In summary, the total measurement error is (ppm * interval measured) + (some factor * sample period). I suspect no one knows how to determine this second factor theoretically. I think oscilloscope manufacturers just make a lot of measurements to determine it, but NI doesn't know in advance which digitizers (or which manufacturer's) will be used.

 

John Anderson

Message 19 of 19