I just received the calibration data for my PXI-6551 from National Instruments/Dynamic Technology Inc., which lists the channel skew in picoseconds for each of the 20 digital I/O channels.
Is it possible to compensate using the calibrated channel skew? In the past I assumed 'perfect channels'. Suppose channel 0 (Chip Select in SPI) is 110 ps while channel 3 (SCLK, the serial clock) is determined to be -170 ps after calibration by Dynamic Technology.
Can I assume that the skew between CS and SCLK is 110 – (-170) = 280ps and use this value in my measurements?
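Spelled out as a quick script (just a sketch; the channel numbers and skew values are the ones from my report):

```python
# Calibrated per-channel skews from the report, in picoseconds.
# Sign convention assumed: positive = signal emerges later than nominal.
skew_ps = {
    0: 110.0,   # channel 0: CS (chip select)
    3: -170.0,  # channel 3: SCLK (serial clock)
}

# Relative skew between two channels is the difference of their calibrated values.
cs_vs_sclk_ps = skew_ps[0] - skew_ps[3]
print(cs_vs_sclk_ps)  # 280.0
```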
I'm currently assuming that applying these numbers will further improve the accuracy of my generation, and possibly acquisition, tasks.
How are you using the knowledge of the skew in your measurements? If the calibration report lists the skew of each channel relative to a particular reference, I would take those values to be accurate as far as per-channel skew goes. If adding that additional time to your measurements gives you better accuracy, you can certainly implement this, but I am having a hard time grasping the use case where this would give you better accuracy.
I think this is possible during a setup- or hold-time type of measurement (e.g., clock and data). The calibrated channel skew (if reasonably stable over time and temperature) could be added to the value determined in software.
If I knew the spec in ppm/°C for the channel skews, as well as how the folks at NI HSDIO expect them to change with time, we could use this information.
You can set the data delay with the niHSDIO Configure Data Position Delay VI, which accepts delays in fractions of your sample clock period. This can be used to compensate for the channel skew you are seeing on your calibration report. For the 6551, however, you can only set one value for all channels, so you can only compensate for an average skew across all channels, or try to shift a few channels in the right direction relative to the predetermined skew.
As for how much this will affect your measurement: a 100 ps period corresponds to about 10 GHz, and the max rate for the HSDIO board is 50 MHz, so the delay is only about 1/200th of a sample period. Typically this does not adversely affect generation/acquisition enough to impact performance.
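As a back-of-the-envelope check, the arithmetic above works out like this (illustrative numbers from this thread only):

```python
# Skew expressed as a fraction of one sample period at the board's max rate.
sample_rate_hz = 50e6               # max sample rate of the 6551
t_sample_s = 1.0 / sample_rate_hz   # 20 ns sample period
skew_s = 100e-12                    # 100 ps calibrated skew

fraction = skew_s / t_sample_s
print(fraction)  # 0.005, i.e. 1/200 of a sample period

# Finest data-position-delay step mentioned later in this thread: 1/256 of Tsample,
# so a 100 ps skew is barely more than one delay step.
step = 1.0 / 256
print(step)  # 0.00390625
```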
I plan to compensate post data collection. Basically, disregarding the 'data position delay' feature (which I'm already using in my code to get a resolution of 1/256 of Tsample),
if two 'in phase' pulses are launched simultaneously (they employ the same clock and timebase) along two channels which have an "as-left" calibrated channel skew of 100ps, they will emerge with a relative delay of 100ps.
If this is done multiple times and averaged, the jitter-induced error (inherent to the Data Position Delay feature) has much less of an impact on the final result, while the "as-left" calibrated channel skew of 100 ps will persist and can be compensated post data collection. The trick is knowing how this 100 ps channel skew, as determined by Dynamic Technology, varies with time and temperature.
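A minimal sketch of what I mean by post-collection compensation, using hypothetical repeated delay measurements and the 100 ps "as-left" value from the report:

```python
import statistics

# Hypothetical repeated delay measurements between two channels, in ps.
# The scatter represents jitter; the persistent offset is the channel skew.
measured_ps = [101.0, 98.5, 100.4, 99.2, 100.9]

calibrated_skew_ps = 100.0  # "as-left" value from the calibration report

# Averaging suppresses the jitter; subtracting the calibrated skew then
# removes the persistent offset after the data has been collected.
corrected_ps = statistics.mean(measured_ps) - calibrated_skew_ps
print(round(corrected_ps, 2))  # 0.0
```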
Like the PXI-4461 spec of 5 to 15 ppm/°C, is there something similar for the PXI-6551?
Also, in National Instruments' experience, does this change with time? If so, what is the ppm per sqrt(1000 hrs), to use PXI-4461 jargon?
The calibration report will tell you how far two channels were skewed... at room temperature, using the standard cable and the calibration test station. There will be drift with temperature and time, and variation with cabling, that will affect the "actual" skew your DUT sees; these are all lumped into the guaranteed specification we publish. The relative skew should be consistent in the short term under controlled conditions, but if you're relying on those values for long-term accuracy to guarantee any sort of specification, I would warn against that. Additionally, if you're using a longer cable or have any switching or insertion delay onto a test card, the skew at your DUT will be different. Keep in mind that the published guaranteed specification takes calibration, time, temperature, and the standard cable into account; the calibration report is just a snapshot used to compensate for time drift and to verify the unit.
If you want better accuracy than published, you would need to measure the skew at your DUT under the conditions your test runs (temperature, data rate, etc). This would have to be something you monitored and managed since we can only guarantee the published specification.
Now if you're just doing some sanity checks on the benchtop and not trying to set a specification based on the 6551, then using the calibration report will give you a good approximation. But again, if you're talking about minimizing test accuracy as a component of a specification, I would warn against that unless you can verify your deviation from the spec using a more accurate device (i.e., measure the skew at the DUT as a station calibration step, perhaps). Again, though, NI can't guarantee a skew other than what is published in the most recent datasheet.
Regarding your drift question, we recommend a 2-year calibration/verification interval but do not have a specified drift for skew.
Thanks Ryan M.
That's a great explanation! Let us accept this as the solution.