Our Traditional NI-DAQ app is written in C++ against the C NI-DAQ APIs and uses E Series cards such as the 6071E. It increases the sampling interval (the time between channels in a single multiplexed scan) to maximize the settling time between channels, subject to the number of channels, our desired per-channel sampling rate, and the card's bandwidth. We determined the card's bandwidth empirically at startup: we attempted to set a very high scan rate and, if an error was returned, used what amounted to a binary search to find the maximum scan rate that didn't return an error.

Now we need to rewrite this code to use DAQmx. In DAQmx, how do I change the sampling interval?

Also, in Traditional NI-DAQ we sometimes found that we couldn't increase the sampling interval to as large a value as the current scan rate, number of channels, and card bandwidth implied, so we had to sit in a loop calling SCAN_Start, checking for error -10092, and decrementing the sampling interval until the card started without error (roughly the sketch below). Is this still a problem with M Series cards?

Last but not least, there was no way for software to query what settling time was needed for a given settling error: tables were given in the E Series documentation, but that information wasn't available at run time, so we had to hard-code the rules for how much settling time each model of card required.
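For reference, our loop looked roughly like this (a sketch from memory: the SCAN_Start prototype and the -10092 scanRateError code are as I recall them from the Traditional NI-DAQ Function Reference, timebase -3 is the E Series 20 MHz / 50 ns timebase, and everything else is illustrative):

    /* Decrement the sample interval until SCAN_Start accepts it. */
    i16 startWithMaxSettling(i16 device, i16 *buffer, u32 count,
                             i16 scanTB, u16 scanInterval, u16 maxSampInterval)
    {
        u16 sampInterval = maxSampInterval; /* largest interval implied by rate,
                                               channel count, and bandwidth */
        i16 err;
        while ((err = SCAN_Start(device, buffer, count, -3, sampInterval,
                                 scanTB, scanInterval)) == -10092
               && sampInterval > 1)
            --sampInterval;   /* card refused: give up a little settling time, retry */
        return err;           /* 0 on success, or the last error returned */
    }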
I know that the M Series cards have greatly improved settling-time characteristics, but we still want to maximize settling time whenever we use fewer channels than the card's bandwidth could support. Also, we will still have to support the old E Series cards. I've looked through the DAQmx C Reference Help and found very little about settling time, aside from a lot of material about debouncing switches and relays. The one thing I did find was this:
"NI-DAQmx adds in an additional 10 microseconds per channel settling time to compensate for most potential system settling constraints"
In a typical situation, we would run 16 channels of a 6071E at 40 kHz per channel, for an aggregate rate of 640 kS/s (against a card bandwidth of 1.25 MS/s). That gave us 1.5625 us between channels in a multiplexed scan (40 kHz per channel = a 25 us scan period; 25 us / 16 channels = 1.5625 us), which gave us adequate "settling quality" for our application. But if the minimum settling time in DAQmx is 10 us, obviously we wouldn't be able to scan as many channels at the same rate: one multiplexed scan of 16 channels would take at least 16 * 10 us = 160 us, for a per-channel sampling rate of only 6.25 kHz, which is unusably slow.
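In code form, the timing arithmetic above looks like this (values from our 6071E case):

    double perChannelRate = 40e3;                         /* 40 kHz per channel         */
    int    numChannels    = 16;
    double scanPeriod     = 1.0 / perChannelRate;         /* 25 us per multiplexed scan */
    double interChannel   = scanPeriod / numChannels;     /* 1.5625 us between channels */
    double aggregateRate  = perChannelRate * numChannels; /* 640 kS/s, under 1.25 MS/s  */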
How do we modify the settling time in DAQmx using the C APIs? Will we be able to duplicate the example above (scanning 16 channels of a 6071E at a per-channel rate of 40 kHz) in DAQmx? Any other advice about scanning multiple channels at rates near the bandwidth limits of the card would be greatly appreciated.
Thanks in advance for any assistance you can provide.
The two functions you need are:

int32 __CFUNC DAQmxGetSampClkMaxRate(TaskHandle taskHandle, float64 *data);
int32 __CFUNC DAQmxSetAIConvRate(TaskHandle taskHandle, float64 data);

DAQmxSetAIConvRate sets the rate of the AI convert clock, the clock that steps the multiplexer from channel to channel, so its reciprocal is the "sampling interval" you controlled in Traditional NI-DAQ. The 10 us per channel you quoted is only what DAQmx adds when it chooses the convert rate for you by default; once you set the convert rate explicitly, you get exactly the interval you asked for. Then it's really simple: query DAQmxGetSampClkMaxRate and pass the returned *data as the rate when you configure the clock. It works the same for E and M Series cards, and you no longer have to play a guessing game of what the max rate is. You simply query it and program the card to use that rate.
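Here is a minimal sketch putting it together for your 16-channel, 40 kHz case. Assumptions on my part: device name "Dev1", a +/-10 V range, and error handling stripped down to one check macro; adjust all of that for your system.

    #include <stdio.h>
    #include <NIDAQmx.h>

    #define CHK(call) do { int32 e = (call); if (e < 0) { \
        char msg[2048]; DAQmxGetExtendedErrorInfo(msg, sizeof msg); \
        fprintf(stderr, "DAQmx error %d: %s\n", (int)e, msg); return 1; } } while (0)

    int main(void)
    {
        TaskHandle    task = 0;
        const float64 perChannelRate = 40000.0;  /* 40 kHz per channel */
        const int32   numChannels    = 16;
        float64       maxSampClk     = 0.0;

        CHK(DAQmxCreateTask("", &task));
        CHK(DAQmxCreateAIVoltageChan(task, "Dev1/ai0:15", "",
                                     DAQmx_Val_Cfg_Default, -10.0, 10.0,
                                     DAQmx_Val_Volts, NULL));
        CHK(DAQmxCfgSampClkTiming(task, "", perChannelRate, DAQmx_Val_Rising,
                                  DAQmx_Val_ContSamps, 10000));

        /* No more binary search: ask the driver for the limit. */
        CHK(DAQmxGetSampClkMaxRate(task, &maxSampClk));
        printf("Max sample clock rate for this task: %g Hz\n", maxSampClk);

        /* Spread the 16 conversions evenly across the 25 us scan period:
           16 * 40 kHz = 640 kHz convert clock, i.e. 1.5625 us between channels.
           Setting this explicitly replaces the default "+10 us per channel". */
        CHK(DAQmxSetAIConvRate(task, perChannelRate * numChannels));

        CHK(DAQmxStartTask(task));
        /* ... read with DAQmxReadAnalogF64() in your acquisition loop ... */
        CHK(DAQmxStopTask(task));
        DAQmxClearTask(task);
        return 0;
    }

Note that numChannels * perChannelRate is the slowest convert clock that still completes a full scan per sample clock tick, so it gives the longest possible settling time per channel, which is exactly what your Traditional NI-DAQ code was doing. (If your driver version has it, DAQmxGetAIConvMaxRate reports the fastest supported convert rate directly, so you can check both ends of the range at run time instead of hard-coding per-model rules.)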