05-14-2014 01:42 PM
I am trying to determine the timing accuracy of a measured time interval using the 6255 DAQ with an A/D differential signal on 1 channel. The sampling rate is 1.25 MS/sec. The specifications state the Timing Accuracy as 50 ppm of sample rate. Since the frequency generator base clock accuracy is also 50 ppm, I interpret this to mean the sample period of 0.8 us is also accurate to 50 ppm. Since my requirement is to measure 80 us or 100 samples, the error is (measurement * 50 ppm) = 80 * 0.00005 = 0.004 us. This is consistent with Oscilloscope accuracy specifications.
This is fine, but it does not address any quantization error in a single sample. Oscilloscopes usually specify a factor multiplied by the sample period. For example, a typical oscilloscope specification is (ppm * measurement) + (factor * sample interval), where sample interval is 1/sample rate. The factor usually ranges from 0.06 to 0.2 and depends on trigger errors, jitter, software algorithms, etc. This factor term is usually the dominant error in the equation. For example, if I used a factor of 0.1, the error is (0.1 * 0.8 us) = 0.08 us, which is significantly larger than 0.004 us. Oscilloscopes use much higher sampling rates, so this is usually not a problem.
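A quick sketch of that two-term error model in Python (the function name, defaults, and structure are mine for illustration, not from any scope datasheet):

```python
def scope_timing_error(measurement_s, sample_rate_hz, ppm=50.0, factor=0.1):
    """Timing-error model of the form (ppm * measurement) + (factor * sample interval)."""
    clock_term = measurement_s * ppm * 1e-6   # clock-accuracy contribution
    quant_term = factor / sample_rate_hz      # factor * (1 / sample rate)
    return clock_term + quant_term

# 80 us measurement at 1.25 MS/s: 0.004 us clock term + 0.08 us factor term
err_s = scope_timing_error(80e-6, 1.25e6)
```

With these numbers the factor term (0.08 us) dominates the clock term (0.004 us) by a factor of 20, which is the point being made above.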
The current software algorithm for finding the leading edges is a simple zero crossing from minus to plus, then subtracting the corresponding times to get the measurement interval. Please let me know if what I have stated is correct, and whether you have any insight into the second factor term. I believe the total accuracy error is understated. Some of these issues were previously touched on in the 5114 Frequency Accuracy discussion.
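For reference, the simple minus-to-plus zero-crossing algorithm described above might look like this (a sketch; the sample values are made up, and the crossing time is taken as the first sample at or above zero, with no interpolation):

```python
import numpy as np

def rising_edge_times(samples, dt):
    """Times of minus-to-plus zero crossings, taken as the first sample at or
    above zero (no interpolation, matching the simple algorithm described)."""
    s = np.asarray(samples, dtype=float)
    idx = np.where((s[:-1] < 0) & (s[1:] >= 0))[0] + 1
    return idx * dt

# two rising edges; the measured interval is the difference of their times
t = rising_edge_times([-1.0, -0.5, 0.2, 1.0, 1.0, -1.0, -0.2, 0.3], 0.8e-6)
interval = t[1] - t[0]   # 5.6 us - 1.6 us = 4.0 us
```

Because the crossing is snapped to a sample instant, this method quantizes each edge time to the sample period, which is exactly the error source discussed later in the thread.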
John Anderson
05-15-2014 09:53 PM
John,
I wanted to let you know I am currently researching your question, but want to dig just a little deeper before I offer an answer. I should be able to provide more insight tomorrow.
Thanks for your patience.
05-16-2014 04:17 PM
As you indicate, the 6255 is spec'd as having a timing accuracy of 50 ppm of the sample rate. With a 1.25 MHz sample rate, that corresponds to a potential deviation of 62.5 Hz. I'm not entirely certain why you reference the frequency generator accuracy, as this does not affect the AI timing. Could you elaborate?
In terms of your quantization error concern, I think we may be comparing apples and oranges to some extent. You note the scope specification includes software algorithm and trigger errors. These are not included in the timing accuracy, but they are also not necessarily inherent to the card.
The trigger error can depend on several factors, including the type of trigger used and the time base used for that trigger. The manual explains it in much more detail: http://www.ni.com/pdf/manuals/371291h.pdf
The software error is highly variable depending on the configuration you are using and thus not really something we can specify.
The 5114 thread you mention is very much a different animal which discusses one of our digitizers with our NISCOPE software. That has much more defined characteristics which we can more definitively quantify.
05-20-2014 01:39 PM
I used Figure 9-1 of the user's manual, where the 80 MHz oscillator drives all other internal clocks. I assumed this included the frequency generator. In any case, the accuracy is stated as 50 ppm. I have read the specifications in detail, but that part was not clear.
The signal being digitized is a RS485 @ 113600 Baud. Since the data bits vary, I use the start and stop bits as a timing interval to determine the actual frequency. This measurement interval needs to be accurate enough to meet a 1% tolerance with a 4:1 TAR.
John Anderson
05-22-2014 07:59 AM (edited 05-22-2014 08:00 AM)
When you say 4:1 TAR, what you mean is that the measuring device is 4x more accurate than the device being measured, correct? If so, I would need to know the timing accuracy of the RS485 device before I could answer that question.
Also, what is your 1% tolerance relative to? With respect to the "real world," the 50ppm spec indicates 0.005% accuracy. Are you referencing the accuracy of the 6255 timing relative to the RS485 timing accuracy?
Edited for clarity
05-22-2014 11:36 AM
Here are the details:
The frequency accuracy is 113600 +/- 1%, or +/- 1136 Hz. Everything is converted to the time domain as follows:
LSL:       1 / 112464 = 8.8917342438 µs
MEAN:      1 / 113600 = 8.8028169014 µs
USL:       1 / 114736 = 8.7156602984 µs
LSL-MEAN:               0.0889173424 µs
MEAN-USL:               0.08715660298 µs
ST:                     0.08715660298 µs
Notice the tolerances are not symmetrical in the time domain. Using the smallest tolerance limit for the worst case means the measurement tolerance limit is 0.08715660298 us. The nominal measurement time interval is 10 bits * 8.8028169014 us = 88.028169014 us. Applying just the 50 ppm to the time interval gives 0.00005 * 88.028169014 = 0.0044014084507 us. This gives an accuracy ratio of 0.08715660298 / 0.0044014084507 = 19.8. This is fine, except the sampling rate is not used anywhere in the calculations. I know that increasing the sampling rate increases the timing accuracy for oscilloscopes, and I expect the same here. I don't see how the 6255 accuracy can be 50 ppm of the sampling rate when that would decrease the accuracy at higher sampling rates, when it should be increasing.
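The time-domain tolerance math above can be checked with a short script (variable names are mine; the numbers are from the table):

```python
baud = 113600
tol = 0.01                                       # +/- 1%
mean_t = 1e6 / baud                              # 8.8028169014 us
lsl_t = 1e6 / (baud * (1 - tol))                 # 8.8917342438 us (112464 Hz)
usl_t = 1e6 / (baud * (1 + tol))                 # 8.7156602984 us (114736 Hz)
worst_tol = min(lsl_t - mean_t, mean_t - usl_t)  # 0.08715660298 us (the ST row)
interval = 10 * mean_t                           # 88.028169014 us for 10 bits
clock_err = 50e-6 * interval                     # 0.0044014 us at 50 ppm
ratio = worst_tol / clock_err                    # ~19.8
```

The asymmetry arises because 1/f is nonlinear: an equal +/- 1% in frequency maps to unequal deviations in period, so the USL side (0.0871566 us) is the binding tolerance.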
John Anderson
05-23-2014 01:34 PM
I think for the best explanation, we need to step back for a second and clear up a few possible misconceptions that I have perpetuated.
First, you had previously mentioned quantization error in an oscilloscope. When I responded to that question, I failed to think about the terminology you had used and just looked at your formula. Whenever I've seen it used, quantization error refers to the accuracy of the signal in terms of voltage, not frequency. See here: http://en.wikipedia.org/wiki/Quantization_(signal_processing) If I am incorrect in that interpretation let me know.
In terms of sample clock accuracy being inversely proportional to sample speed, we've been discussing it in the frequency domain but I have been thinking about it in the time domain. As you noted above, the actual time period is 1/frequency. So the timing accuracy of your measurement actually goes up as the error frequency increases.
Now for the sake of discourse and future reference, let's look at the graph below to illustrate why the frequency domain and sample rate errors are directly proportional.
The yellow waveform represents an arbitrary base clock; let's say it's 12 MHz measured from rising edge to rising edge and has up to 100 ppm of possible jitter (1200 Hz). That corresponds to a true sampling rate of 11998800 Hz to 12001200 Hz, or periods of 8.332500008e-8 s to 8.334166750e-8 s. That's +/- 8.33371e-12 s of possible error.
The red waveform represents a derived clock at 4 MHz. Since that's 3 cycles of the base clock per cycle of the sample clock, we must add all of the potential errors, yielding +/- 2.500113e-11 s of possible error. Since the base period is now 2.5e-7 s, that yields a range of 2.4997499887e-7 s to 2.5002500113e-7 s. This is 3999600 Hz to 4000400 Hz, a difference of 800 Hz (+/- 400 Hz), or about 100 ppm. Given rounding errors, we'll call that good.
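The proportionality argument can be sketched numerically (same 12 MHz base clock, 4 MHz derived clock, and 100 ppm figure as in the example; variable names are mine):

```python
base_f = 12e6
jitter_ppm = 100e-6
base_t = 1.0 / base_f                 # ~8.3333e-8 s base period
base_t_err = base_t * jitter_ppm      # ~8.333e-12 s of error per base cycle
cycles = 3                            # 12 MHz / 4 MHz = 3 base cycles per tick
derived_t = cycles * base_t           # 2.5e-7 s derived period
derived_t_err = cycles * base_t_err   # errors accumulate: ~2.5e-11 s
derived_ppm = derived_t_err / derived_t * 1e6   # still ~100 ppm
```

Because both the period and its error scale by the same factor of 3, the relative (ppm) error is unchanged; that is why a ppm spec applies equally to any derived clock.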
Now, these have all been arbitrary numbers. With regards to the question being posed:
Sampling at 1.25 MHz, we have a frequency deviation of +/- 62.5 Hz. This gives us a period range of 7.999600e-7 s to 8.000400e-7 s. With a little more math, we have an error of +/- 4e-11 s; that's 40 picoseconds, or 0.00004 microseconds. That's about 2179 times smaller than your shortest tolerance.
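That per-period error and margin can be reproduced directly (a quick check against the numbers in this thread, not an NI specification):

```python
f = 1.25e6
ppm = 50e-6
t_hi = 1.0 / (f * (1 - ppm))            # longest possible sample period
t_lo = 1.0 / (f * (1 + ppm))            # shortest possible sample period
per_period_err = (t_hi - t_lo) / 2      # ~4e-11 s = 40 ps per 0.8 us period
shortest_tol = 0.08715660298e-6         # worst-case tolerance from the table
margin = shortest_tol / per_period_err  # ~2179x
```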
05-27-2014 12:58 PM
The classic example of quantization error shows the difference between the actual sine wave and the digitized version with delta bars in the vertical range. But what is rarely shown is the effect of errors in the time domain. The specified time of measurement may be slightly off, thus affecting the voltage measurement position. Since the signal is sampled in both the time and voltage domains, the sample interval "quantizes" any time measurements, although this term is rarely used. I also believe you used the term jitter incorrectly. Jitter is a random timing error for each sample, usually around 10 ns, and it averages itself out over a longer interval. The ppm refers to the cumulative error of all samples (e.g., 62.5 samples for 1.25 MS total). This means the error for 1 sample (0.8 us) is 40 picoseconds.
The calculation for the 40 picosecond error is correct, but you did not state for how many periods. As stated above, 1 period is also 0.00005 * 0.8 = 0.00004 us, or 40 picoseconds. So the 50 ppm applies to both the sample rate and the period, since they are the inverse of each other. So I believe the calculation 0.00005 * 88.028169014 = 0.0044014084507 us is correct, but the interval does not fall on a sample boundary. Since signals are transitioning from +3.7 to -3.7 V (or vice versa), with the zero-voltage crossing being the time measurement point, I can only sample at points 110 and 111 (88.0 and 88.8 us) and try to determine the zero crossing. If I use 88.0 us as my reading, the error would be 0.028169014 us. This would make the accuracy ratio 0.08715660298 / (0.0044014084507 + 0.028169014) = 2.7, which is unacceptable.
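The degraded accuracy ratio from snapping the edge to the nearest sample below it can be reproduced in a few lines (numbers taken from the posts above; a sketch, not vendor-verified math):

```python
dt = 0.8e-6                            # 1.25 MS/s sample period
interval = 88.028169014e-6             # nominal 10-bit measurement interval
clock_err = 50e-6 * interval           # 0.0044014 us from the 50 ppm clock spec
quant_err = interval - 110 * dt        # 0.028169 us if sample 110 is used as-is
tol = 0.08715660298e-6                 # worst-case tolerance limit
ratio = tol / (clock_err + quant_err)  # ~2.7, versus 19.8 with clock error alone
```

The quantization term dwarfs the clock term here, which is why the sample rate, not the 50 ppm spec, ends up limiting the measurement.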
I am also assuming the initial trigger using 0 V for the leading edge of the start bit has no error. This is not true, since there is about 7 mV of noise on the signal. This could introduce a timing error in the nanosecond range, along with the jitter. And I also need to make measurements well past the trigger reference for other bytes. So the problem is how to determine what the measurement is between samples, and how accurate it will be. I know sine(x)/x interpolation and using the slope between two points are popular. You mentioned the NISCOPE software, which I have looked at for several other systems, but I have not found any accuracy specifications associated with the software. Using the measurement period, sample rate, voltage accuracy, and clock accuracy, the overall timing accuracy should be able to be determined. Oscilloscope manufacturers have been doing this for years.
One other problem is the lack of samples on the rising/falling edge of the waveform. If I consistently had two samples there, the slope and zero crossing could be determined fairly accurately. Of course, increasing the sample rate to 25 MHz would increase the number of samples by a factor of 20. That may give me sufficient accuracy without interpolation.
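For reference, the two-sample slope idea is just linear interpolation of the zero crossing; a minimal sketch with made-up sample values (the function and numbers are illustrative, not from this thread):

```python
def zero_crossing_time(t0, v0, t1, v1):
    """Linear interpolation of the 0 V crossing between two samples that
    straddle zero (v0 < 0 <= v1)."""
    return t0 - v0 * (t1 - t0) / (v1 - v0)

# samples at 88.0 us (-0.5 V) and 88.8 us (+3.2 V); crossing lands between them
t_cross = zero_crossing_time(88.0e-6, -0.5, 88.8e-6, 3.2)
```

The interpolation error then depends on how straight the edge actually is between the two samples, plus the voltage noise on each sample, rather than on the full 0.8 us sample period.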
John Anderson
05-28-2014 10:41 AM
You bring up a good point. I was focusing on the accuracy of the card and didn't actually confirm the suitability of the card for what you are trying to measure. The accuracy discussion is really irrelevant because, as you note, the minimum sample period of 0.8 microseconds prevents this card from being a viable solution.
It seems as if what we're really trying to do here is use an analog input as a counter of sorts. In that case there is an alternate definition of quantization error that I didn't consider, because it's a function of the system and not inherent to the card itself. Our full definition in that context is here: http://zone.ni.com/reference/enXX/help/370466V01/mxcncpts/quanterror/
Following that document, if you are aiming for 1% error relative to your baud rate, you will need a sampling frequency of at least 11.36 MS/s. We have quite a few cards that would fulfill this requirement, and a full selection of our digitizers can be found here: http://www.ni.com/digitizers/
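Assuming the 1% figure means that one sample period may be at most 1% of a bit period (my reading of the linked document, not a quote from it), the minimum rate works out as:

```python
baud = 113600
max_quant_err = 0.01           # allow 1% of a bit period as quantization error
fs_min = baud / max_quant_err  # 11.36e6 S/s = 11.36 MS/s minimum sample rate
```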
If you would like more information on any of these cards or more details on which one would be best suited for your application, just let me know.
If line noise is not a consideration, I can think of a couple of "quick and dirty" options that would allow you to use your current card, but I doubt they would work for you. I'll just throw out the following for anyone else who may reference this thread in the future.
Assuming you're operating on the standard 10 V differential and have no common-mode voltage, the first thing that comes to mind is connecting the common to digital ground, the rx+ line to a standard digital input, and then using a pull-up resistor or R/C combination to connect the rx- input to a digital line as well. A little software work to combine them and you'd have your signal. Or, alternatively, you could read only the rx+ line and assume that the rx- line is functioning appropriately.
As far as the definition of jitter, I'll agree to disagree.
05-28-2014 08:08 PM
The quantization article is for a frequency counter and missed clock edges. I know exactly which sample intervals to look at for a rising or falling edge between two samples. If it is not there, then I have a failure, but I could search for it. Still, not having at least two samples between the 10% and 90% points prevents me from determining the actual zero crossing. I can increase the tolerance limits to 5% as one option and have a valid test.
We already tried using two channels, one for each line, and it doesn't work. The sample rate has to be much lower, and the large voltage transitions do not settle in time. The noise can just be averaged out over 100 or so samples.
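For what it's worth, the averaging claim follows the usual 1/sqrt(N) reduction for uncorrelated noise; a small simulation (assuming the ~7 mV noise is roughly Gaussian, which is my assumption, not a measurement):

```python
import numpy as np

rng = np.random.default_rng(0)
noise_v = rng.normal(0.0, 0.007, 100)  # ~7 mV RMS noise, 100 samples
avg_err = abs(noise_v.mean())          # expected ~0.7 mV (7 mV / sqrt(100))
```

If the noise has a correlated component (e.g., power-line pickup), averaging helps less than 1/sqrt(N) suggests.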
The definition of jitter from NI's Measurement Fundamentals:
The short-term variations of a digital signal's significant instants from their ideal positions in time.
Thanks for the help.
John Anderson