
CO: Requested vs. actual frequency

Solved!

Hi,

 

I am using a PCIe-6353 DAQ card and am trying to figure out what the 'allowed' counter output frequencies are. I found some threads about it, but I am not sure I understood everything correctly. This article comes pretty close:

https://knowledge.ni.com/KnowledgeArticleDetails?id=kA00Z000000P83OSAS&l=en-US

 

As far as I understand, the CO can only generate pulse periods that are integer multiples of its internal timebase. For the PCIe-6353 that should be 100 MHz (I think; the datasheet lists different values for "Sample clock timebase"), i.e., a 10 ns tick. So let's say I want to generate a clock at 140 kHz (period 7142.86 ns). With a timebase of 10 ns, an integer number of ticks is therefore not possible. Following the linked article, the device will actually output the closest frequency that can be represented by an integer number of ticks, in this case 139.86 kHz or 140.056 kHz, depending on whether the tick count is rounded up or down.
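
(To make the arithmetic concrete, here is a minimal plain-Python sketch of that tick quantization, using just the numbers from my example:)

    timebase_hz = 100e6      # 100 MHz counter timebase -> 10 ns per tick
    requested_hz = 140e3     # requested output frequency

    ideal_ticks = timebase_hz / requested_hz   # 714.2857... ticks, not an integer
    ticks_down = int(ideal_ticks)              # 714 ticks
    ticks_up = ticks_down + 1                  # 715 ticks

    print(timebase_hz / ticks_down)   # ~140056.02 Hz ("140.056 kHz")
    print(timebase_hz / ticks_up)     # ~139860.14 Hz ("139.86 kHz")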

 

My question is:

1)  Did I understand it correctly?

2) The article above says one can use the DAQmx timing property node "SampClk.Rate" to read the actual frequency. When I do that with a simulated device, I get exactly 140 kHz, which should not be possible (see above). Is that because the device is simulated?

3) Isn't it better to calculate an allowed frequency myself and use that as the input? This way I know exactly what is actually happening. Imagine I have another external device where I can only synchronize the start, not the clocks themselves. In that case it will make a difference if I set the NI card to 140 kHz (actually running at 140.056 kHz) while the other device runs at exactly 140 kHz. If I just set the NI card to a value it can follow exactly, the synchronization should be better (assuming the other device can actually run at exactly 140 kHz).

 

Thank you

Message 1 of 9
Solution (accepted by Flumen)

1. Yes, you understood correctly.

 

2. Here, the DAQmx Timing property 'SampClk.Rate' reports 0 on a simulated X-series device.  Querying the DAQmx Channel property 'CO.Pulse.Freq' instead gives me the expected 140.056 kHz.  All this is on DAQmx 18.5.
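
(For illustration, the same query as a minimal Python/nidaqmx sketch; the device name "Dev1" is an assumption, and in LabVIEW you'd read the equivalent DAQmx Channel property node:)

    import nidaqmx
    from nidaqmx.constants import AcquisitionType

    with nidaqmx.Task() as task:
        ch = task.co_channels.add_co_pulse_chan_freq("Dev1/ctr0",
                                                     freq=140e3, duty_cycle=0.5)
        task.timing.cfg_implicit_timing(sample_mode=AcquisitionType.CONTINUOUS)

        # CO.Pulse.Freq reports the coerced (achievable) value, not the request;
        # if it still shows the request, committing or starting the task first
        # should force the coercion.
        print(ch.co_pulse_freq)    # expect ~140056.02 Hz with the 100 MHz timebase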

 

3. No matter what you pre-calculate, you won't change the underlying quantization effect.  Only *integer* divisors will be possible.  I'd say it's worth finding out whether the other device actually operates at 140 kHz or whether it has a similar quantization effect.

 

4. Even if both devices were *trying* to run at the exact same freq, they would still have an accuracy tolerance that would prevent perfect sync.

 

5. I'd normally try to use the external device's clock as a sample clock for the 6353, but you've implied that isn't feasible.

 

6. The other main option I'd try is to have one device capture / timestamp the clock signals from the other.   Ultimately you need to identify one master time reference, not two.

   It's like what they say about starting quarterbacks -- if you think you have two, you actually have none.

 

 

-Kevin P

CAUTION! New LabVIEW adopters -- it's too late for me, but you *can* save yourself. The new subscription policy for LabVIEW puts NI's hand in your wallet for the rest of your working life. Are you sure you're *that* dedicated to LabVIEW? (Summary of my reasons in this post, part of a voluminous thread of mostly complaints starting here).
Message 2 of 9

Hi Kevin,

 

Thanks for your quick reply. Querying the 'CO.Pulse.Freq' does indeed do the trick.

There is no way of synchronizing the clocks of the two devices, but I want to at least define the exact same frequency and use the same start trigger. I will try to find out what the timebase of my other device is.

 

Thank you

Message 3 of 9

There is another point I wanted to mention. I found a strange behavior of the requested vs. output frequency.

As you can see in the screenshot below, the CO sometimes switches to another frequency (the next increment of the 100 MHz / 10 ns timebase), even though the input exactly matches the output frequency observed for another requested value.

 

Maybe the DAQmx function rounds in a slightly different way than LabVIEW itself? This does not seem very deterministic to me and poses a problem.

Let's say I want to generate 100 output pulses at a higher frequency 1 (ctr0) and then define a lower frequency 2, e.g., 1/5 of frequency 1 (chosen so that the number of ticks is an integer, of course), which would produce 20 pulses over the same time. If the CO function changes one of the frequencies even though it is an achievable value, the synchronization will no longer be perfect.

 

The only workaround I can see is to specify the number of ticks directly.

Is that going to solve the issue, or will I run into the same or other issues? I quickly checked and it looked good, meaning the number of input ticks was equal to the number of output ticks, but I might not have seen the case where it breaks down.

 

Message 4 of 9

I think the main problem is that you're looking for *exactness* down to a greater # of significant figures than a DBL can actually represent.  From memory, I believe that a DBL can carry between 15 and 16 significant figures in decimal representation (as derived from the # of bits in the low-level binary representation).   What shows up after that when you display ~20 significant digits is probably, strictly speaking, undefined (even if consistent on a given platform).

 

For your experiment: *truncate* your frequency input at the 15th significant figure (6 before the decimal, 9 after).  Then I'll bet your actual freq will match, but again, only to the 15 significant figures a DBL can represent.
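
(A quick plain-Python check of the same DBL behavior, since Python floats are the same IEEE 754 doubles LabVIEW uses:)

    freq = 100e6 / 714       # the coerced frequency for a 140 kHz request
    print(repr(freq))        # shortest decimal string that round-trips the stored double
    print(f"{freq:.9f}")     # 6 digits before the decimal + 9 after = 15 significant figures
                             # (note: this rounds, rather than truncates, the last digit)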

 

It appears that when DAQmx tries to calculate a divisor for pulse frequency, it will round *down* to the next lowest integer, making the actual freq >= your requested freq.  Thus, it may be worth your while to do your *own* calculations to define your pulse parameters in terms of timebase (100 MHz is the default) and integer #'s of Ticks. 

 

When you define pulse params in Ticks, there's no funny business.  DAQmx will simply use what you give it (except that it will usually require a value >= 2 Ticks).
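
(As an illustration, here's a minimal Python/nidaqmx sketch of a Ticks-based definition; "Dev1" and the timebase terminal name are assumptions, and in LabVIEW the equivalent is the "CO Pulse Generation - Ticks" instance of DAQmx Create Channel:)

    import nidaqmx
    from nidaqmx.constants import AcquisitionType

    with nidaqmx.Task() as task:
        task.co_channels.add_co_pulse_chan_ticks(
            "Dev1/ctr0",
            source_terminal="/Dev1/100MHzTimebase",   # 10 ns per tick
            high_ticks=357,                           # 357 + 357 = 714 ticks -> 140.056 kHz
            low_ticks=357)
        task.timing.cfg_implicit_timing(sample_mode=AcquisitionType.CONTINUOUS)
        task.start()
        # ...pulse train runs until the task is stopped/closed (here, when the
        # with-block exits)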

 

 

-Kevin P

Message 5 of 9

@Flumen wrote:

[original question quoted in full; see Message 1 above]

I had the same doubt as well.


Message 6 of 9

What I still do not understand is why the CO sometimes uses the exact frequency as provided and sometimes changes it, even though the input frequency can be represented exactly (see my example screenshot above).

 

Instead, I changed the implementation to use ticks in order to get around this issue, the same way you proposed. This seems to work and there are no surprises, at least with the simulated device.

Using ticks also makes generating multiples of a frequency much more reliable and correct. Instead of fighting rounding issues, I can just define CO_1 as, e.g., 600 ticks total (200 high, 400 low, 1 tick = 10 ns, so 6 us total) with 200 repetitions (retriggerable), and CO_2 as 200x that, i.e., 120,000 ticks total (1.2 ms). Triggering CO_1 with CO_2 as its input should then result in perfect synchronization and look like a continuous pulse train.
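
(In Python/nidaqmx terms, roughly what I mean is the sketch below; the actual implementation is in LabVIEW, and names like "Dev1", the timebase terminal, and "Ctr1InternalOutput" are placeholders. CO_2's 50% duty cycle is just an assumption for the example.)

    import nidaqmx
    from nidaqmx.constants import AcquisitionType

    co2 = nidaqmx.Task()                 # the slow 1.2 ms "frame" counter
    co2.co_channels.add_co_pulse_chan_ticks(
        "Dev1/ctr1", source_terminal="/Dev1/100MHzTimebase",
        high_ticks=60000, low_ticks=60000)          # 120,000 ticks = 1.2 ms period
    co2.timing.cfg_implicit_timing(sample_mode=AcquisitionType.CONTINUOUS)

    co1 = nidaqmx.Task()                 # the fast 6 us pulses, 200 per trigger
    co1.co_channels.add_co_pulse_chan_ticks(
        "Dev1/ctr0", source_terminal="/Dev1/100MHzTimebase",
        high_ticks=200, low_ticks=400)              # 600 ticks = 6 us per pulse
    co1.timing.cfg_implicit_timing(
        sample_mode=AcquisitionType.FINITE, samps_per_chan=200)
    co1.triggers.start_trigger.cfg_dig_edge_start_trig("/Dev1/Ctr1InternalOutput")
    co1.triggers.start_trigger.retriggerable = True  # re-arm for every CO_2 edge

    co1.start()                          # arm the retriggerable burst first
    co2.start()                          # then start the frame clock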

 

Thanks!

 

Message 7 of 9

OK, let's say the main issue is solved by defining pulse params in units of integer Ticks.  However, just to follow up a little:

 

What I still do not understand is why the CO sometimes uses the exact frequency as provided and sometimes changes it, even though the input frequency can be represented exactly (see my example screenshot above).


I tried to explain this earlier, but apparently not thoroughly enough.  Search here for more discussions about IEEE floating point.  The limited precision available in floating point representation causes consternation around here pretty regularly.  Here are some key takeaways:

 

1. The digits displayed on your GUI indicator or control might *not* be a *perfect* representation of the value held in the 8 byte IEEE format used to store a DBL.  When you ask an indicator to show more digits than the DBL can represent, the trailing ones are *not* reliable.  I don't know where in the display process those last digits get "invented", but they do not really exist in the DBL value stored in memory.

 

2. This must work the other way too.  When you enter more digits into a control than a DBL can actually represent, LabVIEW must somehow choose a nearby value it *can* represent in 8 byte IEEE format.  I don't know the exact process for how LabVIEW interprets the number you enter when it's not a legal IEEE DBL value, but it's not entirely surprising to me that you get the behavior you're talking about.

    First you query the actual frequency, then display it in a GUI indicator out to more digits than are actually present.  Some of those trailing digits are meaningless.  You then enter those same digits (including the meaningless ones) into a GUI control.  Now that entry, a mix of meaningful and meaningless digits, must again be interpreted to be stored as an IEEE 8 byte DBL.

    I can't blame you for expecting that the conversion process would be reversible, but I'm also not surprised that it turns out not to be.
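
(A small plain-Python illustration of that round trip, since Python floats are the same 8-byte IEEE doubles:)

    actual = 100e6 / 714         # coerced frequency as stored in a DBL
    print(repr(actual))          # the digits an indicator can meaningfully show

    # Re-entering the value with more digits than a DBL can hold still lands on
    # some nearby representable double; here it maps back to the same stored value:
    reentered = 140056.0224089635854341
    print(reentered == actual)   # True: the extra digits are below the DBL's resolution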

 

Hope that helps some.

 

 

-Kevin P

Message 8 of 9

Hi Kevin,

 

Thanks for the insight. I understand now why that happens and am pretty happy with using ticks. At least I know exactly what is actually happening.

 

Thanks!

Message 9 of 9