Multifunction DAQ


Settling time and channel scanning

I would like to confirm the precise point at which the multiplexer in a DAQCard switches to the next channel to be sampled.

For example, when performing a scan of a group of channels, is the multiplexer switched to the next channel as soon as the preceding channel has been sampled? If so, increasing the sample interval would allow more time for the voltage being measured to settle.

Also, what about functions that don't allow the sample interval to be specified explicitly (e.g. AI_VRead_Scan)? What sample interval do they use?

Finally, when initiating a sample of a single channel, how long before the first sample is taken is the multiplexer set to route through the channel to be sampled?


With thanks

Jamie Fraser
Message 1 of 6
Jamie;

All valid questions. Most National Instruments DAQ devices have two main clocks: the scan clock and the channel clock.

The scan clock synchronizes the beginning of each scan, and the channel clock synchronizes the precise moment at which the multiplexer switches to a new address and acquires the next channel in the channel list.

If you don't specify the channel clock frequency, NI-DAQ makes that decision for you based on the number of channels and the chosen sample rate. Sometimes that does not leave the instrumentation amplifier enough time to settle properly, causing the channel readings to "interfere" with each other.

To have full control over when the multiplexer switches, you need to set the channel clock as well as the scan clock. Also, keep in mind that the multiplexer switches on the rising edge of the channel clock.

If you configure just one channel in the channel list, the scan clock and the channel clock coincide, and the scan clock is effectively the only clock in your system.
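
To make the two clocks concrete, here is a minimal sketch using the Traditional NI-DAQ C API. Treat the timebase codes and intervals as illustrative (timebase code 1 is assumed here to select the 1 us timebase); check the function reference for your device:

/* Sketch: scan channels 0-3 with an explicit channel (sample) clock and
   scan clock. Traditional NI-DAQ C API; error handling trimmed. */
#include "nidaq.h"

#define N_CHANS 4
#define N_SCANS 1000

i16 scanWithExplicitClocks(i16 device)
{
    static i16 chanVector[N_CHANS] = {0, 1, 2, 3};
    static i16 gainVector[N_CHANS] = {1, 1, 1, 1};
    static i16 buffer[N_CHANS * N_SCANS];
    i16 status;

    status = SCAN_Setup(device, N_CHANS, chanVector, gainVector);
    if (status != 0)
        return status;

    /* With the 1 us timebase assumed above, the channel clock advances the
       multiplexer every 200 us, and the scan clock starts a new pass
       through the channel list every 1000 us. */
    return SCAN_Start(device, buffer, (u32)(N_CHANS * N_SCANS),
                      1, 200,    /* sample (channel clock) timebase, interval */
                      1, 1000);  /* scan clock timebase, interval */
}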

Hope this helps.
Filipe A.
Applications Engineer
National Instruments
Message 2 of 6
Filipe,

Thanks for your note, but I'm still confused about this.

I understand that a scan rate and sample rate can be set independently using SCAN_Start(). For example, if four channels are being scanned as 1,2,3,4...1,2,3,4..., then the time from one pass through 1,2,3,4 to the next is the scan period, and the time between 1 and 2, 2 and 3, and 3 and 4 is the sample period.

However, I am unsure about the point between samples at which the multiplexer switches. For example, if the scan proceeds as follows:

1, multiplexer switch, 2, multiplexer switch, 3, multiplexer switch, 4, ... multiplexer switch ... 1, etc.

what is the timing relationship between these actions?

Can you add any extra info please?

With Thanks

Jamie Fraser
Message 3 of 6

Our Traditional NI-DAQ app, written in C++ against the C NI-DAQ API and using E Series cards like the 6071E, increases the sampling interval (the time between channels within a single multiplexed scan) to maximize the settling time between channels, subject to the number of channels, our desired per-channel sampling rate, and the card's bandwidth. We determined the card's bandwidth empirically when the app started: we attempted to set a very high scan rate and then, if an error was returned, in effect used a binary search to find the maximum scan rate that didn't return an error.

Now we need to rewrite this code to use DAQmx. In DAQmx, how do I change the sampling interval?

Also, in Traditional NI-DAQ we sometimes found that we couldn't increase the sampling interval to as large a value as was implied by the current scan rate, number of channels, and the card's bandwidth, so we had to sit in a loop calling SCAN_Start, checking for error -10092, and decrementing the sampling interval until the card started without error (see the sketch below). Is this still a problem with M Series cards?

Last but not least, there was no way for software to query what settling time was needed for a given settling error. Tables were given in the E Series documentation, but there was no way to get this information at run time, so we had to hard-code the rules for how much settling time each model of card required.
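For reference, our Traditional NI-DAQ workaround looked roughly like this (a sketch with illustrative names; -10092 is the rate error we checked for, and timebase code 1 is assumed to select the 1 us timebase):

/* Sketch of the legacy workaround: back the sampling interval off until
   SCAN_Start accepts it. Traditional NI-DAQ C API; SCAN_Setup is assumed
   to have been called already. */
#include "nidaq.h"

i16 startWithMaxSettling(i16 device, i16 *buffer, u32 count,
                         u16 sampInterval, u16 scanInterval)
{
    i16 status;

    for (;;) {
        status = SCAN_Start(device, buffer, count,
                            1, sampInterval, 1, scanInterval);
        if (status != -10092 || sampInterval == 0)
            return status;  /* success, a different error, or out of room */
        --sampInterval;     /* shave the interchannel delay and retry */
    }
}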

I know that the M Series cards have greatly improved settling-time characteristics, but we still want to maximize settling time whenever we use fewer channels than the card's bandwidth can support at our sampling rate. Also, we will still have to support the old E Series cards. I've looked through the DAQmx C Reference Help and found very little about settling time, aside from a lot of material about debouncing switches and relays. The one thing I did find was this:

"NI-DAQmx adds in an additional 10 microseconds per channel settling time to compensate for most potential system settling constraints"

In a typical situation, we would run 16 channels of a 6071E at 40 kHz per channel, for an aggregate rate of 640 kS/s (the card's maximum is 1.25 MS/s). That gave us 1.5625 us between channels in a multiplexed scan (40 kHz per channel = 25 us per scan; 25 us / 16 channels = 1.5625 us), which gave us adequate "settling quality" for our application. But if the minimum settling time in DAQmx is 10 us, obviously we wouldn't be able to scan as many channels at the same rate. In the above example, one multiplexed scan of 16 channels would take at least 16 * 10 us = 160 us, for a per-channel sampling rate of only 6.25 kHz, which is unusably slow.

How do we modify the settling time in DAQmx, using the C API? Will we be able to duplicate the example above (scanning 16 channels of a 6071E at a per-channel sampling rate of 40 kHz) in DAQmx? Any other advice you can give about scanning multiple channels at rates near the bandwidth limits of the card would be greatly appreciated.

Thanks in advance for any assistance you can provide.

Larry

Message 4 of 6
Hi Larry,

Fortunately, the default behavior is exactly what you are looking for. 

In your post you mentioned that you want the maximum possible sample clock rate for the number of channels in your task, and that you then want to adjust the convert clock (the interchannel delay) so the conversions are spaced evenly. However, if you're at the maximum rate for the board, it has no choice but to sample at even intervals between channels, and that even spacing already maximizes the interchannel delay, because the convert clock is running at its limit.

Fortunately, in DAQmx it's VERY easy to determine the maximum sample clock rate. Just use this function:

int32 __CFUNC DAQmxGetSampClkMaxRate(TaskHandle taskHandle, float64 *data);

Then it's really simple: pass the value returned in *data as the rate when you configure the sample clock, and you're done. It works the same for E and M Series cards, and you no longer have to play a guessing game about the maximum rate; you simply query it and program the task to use it.
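
For example, here is a minimal sketch of that flow (DAQmx C API; the channel list "Dev1/ai0:15", the +/-10 V range, and the buffer size are placeholders, and error handling is trimmed):

/* Sketch: query the task's maximum sample clock rate and configure the
   timing with it. */
#include <NIDAQmx.h>

int32 configureAtMaxRate(TaskHandle *outTask)
{
    TaskHandle task = 0;
    float64    maxRate = 0.0;
    int32      err;

    err = DAQmxCreateTask("", &task);
    if (err) return err;

    /* "Dev1/ai0:15" is a placeholder channel list. */
    err = DAQmxCreateAIVoltageChan(task, "Dev1/ai0:15", "",
                                   DAQmx_Val_Cfg_Default, -10.0, 10.0,
                                   DAQmx_Val_Volts, NULL);
    if (err) return err;

    /* The driver reports the fastest per-channel sample clock rate it can
       sustain for the channels currently in the task. */
    err = DAQmxGetSampClkMaxRate(task, &maxRate);
    if (err) return err;

    err = DAQmxCfgSampClkTiming(task, NULL, maxRate, DAQmx_Val_Rising,
                                DAQmx_Val_ContSamps, 10000);
    if (err) return err;

    *outTask = task;
    return 0;
}

Because the queried rate already accounts for the channels in the task, this replaces the empirical binary search you described.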

And DAQmx doesn't make newer boards run slower: with a board like the NI 6115 you would still get rates of up to 10 MS/s whether you use DAQmx or Traditional NI-DAQ.

If you really want to change the AI Convert rate, you can simply determine the number of channels in your task and multiply by your desired per-channel sampling rate; that product is the slowest AI Convert Clock rate that still completes each scan, which maximizes the settling time. You can then set the AI Convert Clock rate with this function:

int32 __CFUNC DAQmxSetAIConvRate(TaskHandle taskHandle, float64 data);
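
For example (a sketch; perChannelRate stands for the sample clock rate you configure elsewhere, and error handling is trimmed):

/* Sketch: spread conversions evenly across each scan to maximize
   settling time. */
#include <NIDAQmx.h>

int32 maximizeSettlingTime(TaskHandle task, float64 perChannelRate)
{
    uInt32 nChans = 0;
    int32  err = DAQmxGetTaskNumChans(task, &nChans);
    if (err) return err;

    /* The slowest convert clock that still completes every scan is
       channels-per-scan times scans-per-second; running at exactly that
       rate spreads the conversions evenly across the scan interval. */
    return DAQmxSetAIConvRate(task, (float64)nChans * perChannelRate);
}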

Regards,
Message 5 of 6
Perfect. Thanks for the quick reply!

Larry

Message 6 of 6