LabVIEW


Is setting the property "AIConv.Rate" essentially the same thing as setting the timing for the task?

Solved!

Hey fancy folk,

 

TL;DR: The main question is in the title.

 

Background:

I am looking through some old code (LV 7.1) that I have inherited, and I'm trying to make sure I understand the tasks being created. The tower has a PXI and SCXI chassis, where the SCXI is controlled via an "S8 PXI 6052E" card (I believe).  The test itself uses a quad encoder running into an FPGA to generate a clock, and that clock is passed to PXI_TRIG4.  Basically, we are doing order analysis based on the quad encoder to make sure that we're sampling at the same position every time.  

 

There are 3 tasks being created in the test code: an SCXI task, a quad enc task, and an edge counter task. The SCXI task is clocked off of "S8 PXI6052E"/PXI_TRIG4, while the quad and edge counter are clocked off of "6602/PXI_TRIG2".  Doing some digging, I see that when the tasks are created, "S8/ai/SamplingClock" is connected to "6602/PXI_TRIG2". Continuing the digging, I see that the code sets the SCXI task's "AIConv.Rate", via a property node, to 333000.

 

From what I've read, the "AIConv.Rate" is the rate at which to clock the A-to-D converter.  To me, that sounds a lot like the sampling rate.  Reading some white papers, it sounds like the "AIConv.Rate" is typically determined based on the card's max sampling rate, number of channels, and sample type (multiplexed/simultaneous), which suggests the "AIConv.Rate" is handled automatically and I shouldn't have to mess with it.  But since the code does, I wanted to make sure I understood what is actually happening "behind the scenes".
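For reference, the LabVIEW property node in the old code maps to the AI Convert Rate timing property in DAQmx. A rough equivalent in NI's nidaqmx Python API might look like the sketch below — the device name, channel range, and estimated clock rate are placeholders for illustration, not the actual names from this system:

```python
import nidaqmx
from nidaqmx.constants import AcquisitionType

# Sketch only: "SCXIDev", the channel range, and the trigger terminal
# are placeholder names, not the real hardware identifiers.
with nidaqmx.Task() as task:
    task.ai_channels.add_ai_voltage_chan("SCXIDev/ai0:7")

    # External sample clock arriving on a PXI trigger line
    # (analogous to the FPGA-generated clock on PXI_TRIG4).
    task.timing.cfg_samp_clk_timing(
        rate=10000,  # estimated max rate of the external clock
        source="/SCXIDev/PXI_Trig4",
        sample_mode=AcquisitionType.CONTINUOUS,
    )

    # Explicitly set the inter-channel convert (A/D) clock rate --
    # the equivalent of writing 333000 to the AIConv.Rate property node.
    task.timing.ai_conv_rate = 333000.0
```

Since this is a hardware-configuration fragment, it only runs against an actual DAQmx device.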

 

Is it safe to assume that the 333 kHz for the "AIConv.Rate" is a 333 kHz clock that is sent to PXI_TRIG2, meaning that the quad enc and edge counter tasks are set to a sampling rate of 333 kHz?  

 

Thanks,

Matt

Attention new LV users: NI has transitioned from being able to purchase LV outright to a subscription-based model. Just a warning, because LV now has a yearly subscription associated with it.
Message 1 of 6

No, not quite.

 

On a multiplexing device, a multichannel AI task will use both a Sample Clock *and* a usually-behind-the-scenes Convert Clock.  Your 333 kHz Convert Clock controls the rate at which multiplexing selections and A/D conversions are done from one channel to the next.   This is *not* the same thing as the sample clock!

 

The majority of applications don't need to configure the Convert Rate explicitly.  By default, DAQmx spreads the multiplexing across the sample interval as much as possible.  There's a little bit of overhead though, so your 333 kHz Convert Clock may only be valid for sample rates up to ~100 kHz, not 111 kHz.
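To make that concrete with some back-of-the-envelope arithmetic (the 3-channel count below is purely an assumption for illustration; the thread doesn't state the channel count):

```python
# Ideal ceiling: one 333 kHz convert clock shared across N multiplexed channels.
convert_rate = 333_000  # Hz
n_channels = 3          # assumed channel count, for illustration only

ideal_max_sample_rate = convert_rate / n_channels
print(ideal_max_sample_rate)  # 111000.0 Hz per channel, with zero overhead

# Each conversion also carries a bit of settling/padding overhead, so the
# usable per-channel sample rate lands somewhat lower -- hence "~100 kHz".
```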

 

One possible reason for configuring it explicitly is to make sure that the channel-to-channel A/D timing skew remains constant over a range of variable overall sample rates.  Of course, many of those situations would also be fine if you simply *queried* for the Convert Rate and then used that value in the post-processing routines that need it.
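A quick sketch of why a fixed convert rate pins down the channel-to-channel skew:

```python
# The skew between adjacent channels is set by the convert clock alone.
convert_rate = 333_000  # Hz
skew_adjacent_channels_us = 1e6 / convert_rate
print(f"{skew_adjacent_channels_us:.3f} us")  # ~3.003 us between adjacent channels

# The overall sample rate never appears in this expression, which is the
# point: hold the convert rate fixed and the skew stays constant even as
# the sample rate varies.
```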

    I don't know your app & system to declare whether the explicit setting of the Convert Clock Rate is necessary, unnecessary but helpful, irrelevant, or potentially harmful.  Faster multiplexing increases the risk of "ghosting" effects, so if 333 kHz is very much faster than necessary, it's possible it could be making things worse.

 

The overall signal routing you describe sounds reasonable as long as your app starts the quad and edge counter tasks *before* starting the SCXI AI task.  The counter tasks are getting their sample clock from the AI task, so you want them to be running and ready for it when the AI task actually starts.   Trig2 and Trig4 will have the same frequency so all your tasks will run at the same sample rate.

 

 

-Kevin P

CAUTION! New LabVIEW adopters -- it's too late for me, but you *can* save yourself. The new subscription policy for LabVIEW puts NI's hand in your wallet for the rest of your working life. Are you sure you're *that* dedicated to LabVIEW? (Summary of my reasons in this post, part of a voluminous thread of mostly complaints starting here).
Message 2 of 6

Kevin,

 

Thanks for your response.  

 

That makes sense that the convert clock isn't the sample clock.  With the documentation, I wasn't sure if the sample clock derived the convert clock or what.  The wording was a bit ambiguous, but also, I get it's hard writing descriptive stuff for nuanced things.

 

Edit2:

So if the convert rate is faster than the sample rate, is the card just continually updating whatever is in the (assumed) buffer (or whatever the "hold until told to pass the data" spot is called), so that when the sample clock triggers, it dumps the A-to-D buffer to the sampling buffer?

 

New question: how would the convert clock work with order analysis? I've done some order analysis stuff before, and I always thought it was kind of weird to set the rate for a task that isn't being clocked off of a clock.

 


One possible reason for configuring it explicitly is to make sure that the channel-to-channel A/D timing skew remains constant over a range of variable overall sample rates.


I would guess this is why we are setting the convert rate since we are sampling based on position and the user can set a variable speed for the test. We are clocking the SCXI AI task off of an encoder that is 5000 pulse/ch (so 20k in quad) and routed into an FPGA to generate the clock from the quad.  The test typically runs at 25 RPM, so my sampling rate should be around 8333 samples/s (20k samp/rev * 25 rev/min * min/60s => samp/s).  
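Spelling out that unit conversion with the numbers straight from the post:

```python
pulses_per_rev = 5000               # encoder pulses per channel
counts_per_rev = 4 * pulses_per_rev # 20k counts/rev in quadrature decoding
rpm = 25                            # typical test speed

# samp/rev * rev/min * (1 min / 60 s) => samp/s
sample_rate = counts_per_rev * rpm / 60
print(sample_rate)  # ~8333.3 samples/s
```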

 

Additional question about the ai/SampleClock pin: the quad and edge counter tasks are clocked off of the SCXI AI task's ai/SampleClock.  If that clock is being derived from the quad-to-CLK of the FPGA, then is it a "straight pass-through" from the FPGA to the ai/SampleClock?  The current route from the FPGA is to PXI_TRIG4, then PXI_TRIG4 goes to the clock for the SCXI AI task.

 

EDIT:

It just seems weird to me that the quad and edge counter tasks are clocked off of PXI_TRIG2 and not PXI_TRIG4.  Maybe that was the original designer's workaround to solve that problem?  To boil it down, I don't understand what the ai/SampleClock is in this situation.  In a time-based measurement, it would be the rate (I believe), and with position-based sampling, I think it would be the encoder.  I know I read that if you have too many things reading the PXI_TRIG/RTSI lines, you can load them down and get bad data.  I also remember reading that you shouldn't use them as clocks because there is some time delay...  But they are super convenient for easily clocking everything together... 

 

You are correct that the quad and edge count tasks are started before the SCXI AI task.  Basically, everything in the code sets its start trigger to the SCXI AI task, so once that starts, everything else starts as well. 

 

Thanks,

Matt  

Message 3 of 6
Solution
Accepted by topic author Matt_AM

I'll do some of my answering inline in blue.

 


@Matt_AM wrote:

So if the convert rate is faster than the sample rate, is the card just continually updating whatever is in the (assumed) buffer (or whatever the "hold until told to pass the data" spot is called), so that when the sample clock triggers, it dumps the A-to-D buffer to the sampling buffer?

 

The device will have a smallish on-board FIFO.  A/D conversion values first get written into this FIFO.  Soon after, they will be delivered from the device FIFO up to the task buffer that resides in PC memory.  Then they'll be delivered to you when you call DAQmx Read, which retrieves them from the task buffer.  I don't know the exact rules for how and when data gets moved from FIFO to task buffer though.  It isn't necessarily once per full sample of all channels in the task.

 

New question: how would the convert clock work with order analysis? I've done some order analysis stuff before, and I always thought it was kind of weird to set the rate for a task that isn't being clocked off of a clock.

 

I have only passing familiarity with order analysis.  But I'd say that the convert clock would only affect phase characterization, not amplitude.  And it could be complicated if the sample rate varies while the convert clock remains fixed because the fixed convert clock time interval would then represent a variable phase offset within the sample.

    As to defining the sample rate, two things:

- in the call to DAQmx Timing, make a reasonable estimate of the actual max sample rate

- in your analysis, convert to the units "per revolution" instead of "per second".  Your quad encoder gives you 20k samples/rev.


EDIT:

It just seems weird to me that the quad and edge counter tasks are clocked off of PXI_TRIG2 and not PXI_TRIG4.  Maybe that was the original designer's workaround to solve that problem?  To boil it down, I don't understand what the ai/SampleClock is in this situation. 

 

PXI_TRIG4 shows pulses whenever your FPGA is powered and your encoder has motion.  But PXI_TRIG2 will only be passing them along during the time when the AI task is actively sampling.

 

You are correct that the quad and edge count tasks are started before the SCXI AI task.  Basically, everything in the code sets its start trigger to the SCXI AI task, so once that starts, everything else starts as well. 

 

By sequencing the starts that way, your sampling is in sync for all tasks.  If the AI were started first and then the encoder tasks were each started sometime later, their first (and then all subsequent) samples would be offset from one another, i.e., NOT in sync.

 

-Kevin P

 

Message 4 of 6

My responses will be in red to differentiate my new replies from the quoted questions and your responses.

 


@Kevin_Price wrote:

I'll do some of my answering inline in blue.

 


@Matt_AM wrote:

So if the convert rate is faster than the sample rate, is the card just continually updating whatever is in the (assumed) buffer (or whatever the "hold until told to pass the data" spot is called), so that when the sample clock triggers, it dumps the A-to-D buffer to the sampling buffer?

 

The device will have a smallish on-board FIFO.  A/D conversion values first get written into this FIFO.  Soon after, they will be delivered from the device FIFO up to the task buffer that resides in PC memory.  Then they'll be delivered to you when you call DAQmx Read, which retrieves them from the task buffer.  I don't know the exact rules for how and when data gets moved from FIFO to task buffer though.  It isn't necessarily once per full sample of all channels in the task.

 

Is the Convert Clock the one that writes the values to the on-board FIFO, or is that the sample clock?  I would assume that the sample clock takes the A/D data (set by the convert rate) and stores it in the FIFO.  I understand that you don't know the exact rules, but I figured I'd ask.  Worst case is an "I'm not sure" answer.

 

New question: how would the convert clock work with order analysis? I've done some order analysis stuff before, and I always thought it was kind of weird to set the rate for a task that isn't being clocked off of a clock.

 

I have only passing familiarity with order analysis.  But I'd say that the convert clock would only affect phase characterization, not amplitude.  And it could be complicated if the sample rate varies while the convert clock remains fixed because the fixed convert clock time interval would then represent a variable phase offset within the sample.

    As to defining the sample rate, two things:

- in the call to DAQmx Timing, make a reasonable estimate of the actual max sample rate

- in your analysis, convert to the units "per revolution" instead of "per second".  Your quad encoder gives you 20k samples/rev.

 

Regarding samp/rev vs samp/s: the 8333 samp/s is my attempt to convey "this is how fast we are sampling in the time domain," so I believe the 333 kHz convert clock is fine, although faster than required.  When doing the actual analysis, we use samp/rev and then convert to the frequency domain.  


EDIT:

It just seems weird to me that the quad and edge counter tasks are clocked off of PXI_TRIG2 and not PXI_TRIG4.  Maybe that was the original designer's workaround to solve that problem?  To boil it down, I don't understand what the ai/SampleClock is in this situation. 

 

PXI_TRIG4 shows pulses whenever your FPGA is powered and your encoder has motion.  But PXI_TRIG2 will only be passing them along during the time when the AI task is actively sampling.

 

Awesome, that is what I was thinking would happen, I just did a poor job of explaining it.  Thank you for the clarification!

 

You are correct that the quad and edge count tasks are started before the SCXI AI task.  Basically, everything in the code sets its start trigger to the SCXI AI task, so once that starts, everything else starts as well. 

 

By sequencing the starts that way, your sampling is in sync for all tasks.  If the AI were started first and then the encoder tasks were each started sometime later, their first (and then all subsequent) samples would be offset from one another, i.e., NOT in sync.

 

Correct.  If I am doing stuff that needs to be in sync, I will clock everything off of a generated "faux clock" and start the faux clock last, after all my other tasks have been started and are waiting for the clock.  Follow-up question regarding the way I typically sync stuff: since I am starting my clock last, is there any reason I need to have the same start trigger?  I would assume no, and that a start trigger would be more appropriate for things at different sampling rates (say, microphones sampling at 48 kHz and a torque reading at 1 kHz starting at the same time).
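For what it's worth, the "faux clock" pattern described above can be sketched roughly like this in NI's nidaqmx Python API — the device, counter, and channel names are placeholders, not real hardware from this system:

```python
import nidaqmx
from nidaqmx.constants import AcquisitionType

# Placeholder names throughout; sketch of the shared-clock pattern only.
clk_task = nidaqmx.Task()
clk_task.co_channels.add_co_pulse_chan_freq("Dev1/ctr0", freq=1000.0)
clk_task.timing.cfg_implicit_timing(sample_mode=AcquisitionType.CONTINUOUS)

ai_task = nidaqmx.Task()
ai_task.ai_channels.add_ai_voltage_chan("Dev1/ai0:3")
ai_task.timing.cfg_samp_clk_timing(
    rate=1000.0,
    source="/Dev1/Ctr0InternalOutput",  # all tasks clock off the counter output
    sample_mode=AcquisitionType.CONTINUOUS,
)

ai_task.start()   # armed and waiting for clock edges
clk_task.start()  # start the shared "faux clock" LAST; no start trigger needed
```

Since this is a hardware-configuration fragment, it only runs against actual DAQmx devices.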

 

-Kevin P


Thanks for the information, I really appreciate it,

 

Matt

Message 5 of 6

@Matt_AM wrote:

Is the Convert Clock the one that writes the values to the on-board FIFO, or is that the sample clock?  I would assume that the sample clock takes the A/D data (set by the convert rate) and stores it in the FIFO.  I understand that you don't know the exact rules, but I figured I'd ask.  Worst case is an "I'm not sure" answer.

First answer: I'm not sure.  I would *suspect* that there's a single super-low-level A/D register and that each individual A/D conversion gets copied into the FIFO immediately.  After that, I kinda suspect that values only get delivered up to the PC and task buffer in units of full samples, i.e., one conversion value per channel in the task list.   I don't think I've ever tested this, though it'd be possible to test out by configuring to use a counter output as the convert clock and then controlling the counter to generate not enough pulses for all the channels.   Though it'd probably be a better test if this didn't happen until after a few full samples had been delivered to the task buffer.

 

 


@Matt_AM wrote:

Correct.  If I am doing stuff that needs to be in sync, I will clock everything off of a generated "faux clock" and start the faux clock last, after all my other tasks have been started and are waiting for the clock.  Follow-up question regarding the way I typically sync stuff: since I am starting my clock last, is there any reason I need to have the same start trigger?  I would assume no, and that a start trigger would be more appropriate for things at different sampling rates (say, microphones sampling at 48 kHz and a torque reading at 1 kHz starting at the same time).

You nailed it.  That's been a pet soapbox of mine around here for a long time.  Contrary to some of the examples and tutorials, in many apps sync does not require any trigger at all (nor any explicit work with reference clocks or sync pulses).  A shared sample clock can be sufficient, and when syncing across separate devices, it can be *preferable* to using a trigger alone.

    Triggering becomes necessary when sample clocks can't be shared, either due to different sample rate needs, or use of devices (such as those with Delta-Sigma converters) that don't allow for the direct use of an external sample clock.  Even then however, triggering alone can leave you vulnerable to timing skew caused by accuracy tolerances in DAQ device timebases.  One little factoid I keep in my head is that the 50 ppm spec on many devices corresponds to a skew of 3 msec per minute of acquisition.  In some apps this is a LOT and needs to be addressed, in others it doesn't really matter.
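Checking that factoid with the numbers in it:

```python
# Worst-case skew between two free-running timebases, each off by 50 ppm.
timebase_accuracy_ppm = 50
acquisition_s = 60  # one minute of acquisition

worst_case_skew_ms = acquisition_s * timebase_accuracy_ppm * 1e-6 * 1e3
print(worst_case_skew_ms)  # 3.0 ms of possible skew per minute
```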

 

 

-Kevin P

Message 6 of 6