Counter uncertainty on PCI 62221

Hello,

 

I would like to calculate the uncertainty of a measurement using a counter (pulse weight or period measurement).

I'm reading N samples and summing them at the end to get my total test time. I suppose this means my uncertainty must be high (the per-sample uncertainty times N samples ...).

This is why I'm looking for some information I couldn't find on the NI website.

 

1) The counter resolution is 32 bits, but what is its input voltage range? I would like to know the minimum detectable timing change.

2) How can I know the accuracy of the measured time as a percentage? 50 ppm on an 80 MHz clock, OK, but there must be a relation to the measured time too, no?

3) What is the clock's drift over time? (Calibration will be necessary, but it's just to get an idea.)

4) I also saw a digital filter in my channel settings, but I don't know how it's supposed to work or what impact it can have on the uncertainty of my measurement.

 

Thank you all in advance 🙂

 

Message 1 of 9

1.  The counter's 32 bits aren't like an A/D converter's bits -- you don't use them to figure out resolution over a voltage range.  They're just the size of the count register and tell you how high you can count before rollover.

  The minimum measurable timing change will generally be 2 periods of the timebase used for the measurement.  For a 62xx M-series board, the fastest timebase is 80 MHz so the minimum measurement interval will be 25 nanosec.
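A quick sanity check on that arithmetic (a minimal sketch; the 80 MHz figure is the M-series timebase quoted above):

```python
# Minimum measurable interval = 2 periods of the measurement timebase.
timebase_hz = 80e6              # fastest timebase on a 62xx M-series board
period_s = 1.0 / timebase_hz    # one clock period: 12.5 ns
min_interval_s = 2 * period_s   # two periods: 25 ns
print(min_interval_s)           # 2.5e-08
```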

 

2.  50 ppm is already a %-like unitless spec.  If your measurement takes a second, the accuracy would be 50 millionths of a second.   If 10 seconds, then 500 millionths, etc.  There's likely a small temperature-dependent component to the accuracy too, but generally the 50 ppm scales pretty straightforwardly.
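To make the scaling concrete, here's a minimal sketch of that calculation (the function name is mine for illustration, not from any NI library):

```python
def timebase_error_s(measured_time_s, ppm=50.0):
    """Worst-case timebase contribution to the error; it scales with the measured time."""
    return measured_time_s * ppm / 1e6

print(timebase_error_s(1.0))    # 5e-05  -> 50 microseconds over 1 second
print(timebase_error_s(10.0))   # 0.0005 -> 500 microseconds over 10 seconds
```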

 

3.  No way of knowing without a reference standard.  It's spec'ed to within 50 ppm, but where it lands within that +/- 50 ppm is unknown.

 

4. Can't make a blanket statement for all possible filter settings and all kinds of measurements.    It might have *no* effect, but it depends on what you're measuring and how you're filtering.

 

 

-Kevin P

ALERT! LabVIEW's subscription-only policy coming to an end (finally!). Permanent license pricing remains WIP. Tread carefully.
Message 2 of 9

Thank you for your answer Kevin_Price.

 

4. If it can help, this is the filter in LabVIEW:

 

Capture.PNG

 

I didn't find much information in the "Help".

Message 3 of 9

Details will be in your device user manual; I assume it's actually a PCI-6221.

 

A quick glance suggests that using a digital input filter introduces jitter to your measurements on the order of 25 nanosec.  I'm unsure how that distributes about the nominal correct value (one-sided?  uniform distribution?  other?).  I suspect it's not specifically cumulative b/c it probably relates to relative phasing of input edges and the board's filter clock.  But if the distribution of measurements isn't uniform about the nominal correct value, then you'd see a net effect that *looks like* cumulative error.

 

 

-Kevin P

Message 4 of 9

That helps a lot, thank you.

But after reading the manual I'm still not sure about one point.

In pulse weight measurement, do the "errors" accumulate with each measurement (if I read 50 samples, does it mean I will have 50 times my error), or is there only one error for the total?

 

 

Message 5 of 9

Educated guess:  the digital filter makes incoming edges get synced with an internal filter clock.  The net effect is pretty much like *quantizing* the measurement according to the period of the filter clock.  I'd venture that it isn't qualitatively different from the quantization error you get from the regular internal clock, just that you now quantize at 25 nsec instead of 12.5.

 

Again, this is an educated guess; happy to welcome corrections from anyone more authoritative.

 

 

-Kevin P

Message 6 of 9

By "pulse weight" do you mean "pulse width"?

 

If so, you probably do not want to filter, and you want to use the best clock available, and you'll have an average uncertainty of no less than sqrt(2)*clock period.  
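In numbers, for the 80 MHz timebase that works out like this (a minimal sketch of the arithmetic; the sqrt(2) comes from combining the independent start-edge and stop-edge quantizations in quadrature):

```python
import math

clock_hz = 80e6
period_ns = 1e9 / clock_hz                    # 12.5 ns per clock tick
avg_uncertainty_ns = math.sqrt(2) * period_ns # start and stop edges each quantized
print(round(avg_uncertainty_ns, 1))           # 17.7
```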

Message 7 of 9

You can find my counter settings below, if it helps to understand my measurement.

Yes I mean pulse width 🙂

init counter.PNG

 

read counter.PNG
Here it's just a period measurement, but I have another example with a pulse-width measurement.

I'm using the 80 MHz clock.

So you mean I will have 17.7 ns of uncertainty? The only point I need to know now (and maybe it's because I don't completely get the measurement's operating principle) is whether I get this uncertainty for each period I measured (so it's non-negligible) or just once for all of them (totally negligible in my measurement).

Message 8 of 9

You just get the number of 80 MHz edges that occurred between the rising and falling edges.  The average uncertainty is therefore just the quantization error from the clock.  How that adds up depends on what you do with that information.
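To illustrate how the per-sample quantization can add up when you sum N measurements, here's a toy simulation (the 1 ms nominal pulse and the truncate-to-whole-clock-periods model are my assumptions for illustration, not the board's documented behavior):

```python
import math
import random

clock_hz = 80e6
T = 1.0 / clock_hz   # 12.5 ns quantization step
random.seed(0)

N = 50
total_true = 0.0
total_meas = 0.0
for _ in range(N):
    # Hypothetical pulse width jittering around a 1 ms nominal value.
    w = 1e-3 + random.uniform(-T, T)
    total_true += w
    # Model: the counter reports a whole number of clock periods (truncation).
    total_meas += math.floor(w / T) * T

total_err = total_true - total_meas
# Each sample contributes less than one clock period of error, so the
# summed error is bounded by N * T in the worst case -- the error budget
# grows with the number of samples, not a single fixed amount.
print(0.0 <= total_err <= N * T)    # True
```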

Message 9 of 9