
Low-latency single-point analog input

I am trying to use an X-series PCIe-6320 DAQ for the feedback path of a control loop, using the DAQmx API in C++.

The controller is expected to run on a standard Windows 7/10 PC with flexible configurations, so the timing is not all known ahead of time, but it is expected to run at up to 20 kHz. It's not a real-time system, but on average we would like to achieve that rate. At 20 kHz the loop period is only 50 µs, so to leave time for the rest of the loop, the latency of the DAQ readout must be around 20 µs or less. I've been having trouble achieving that with DAQmx version 18.6.

All I am trying to get is a single sample on a single channel as fast as possible when the software is ready for it. If there is any buffering going on in the background, I would expect it to work in "overwrite" mode, so that's what I tried to set up.

With the obvious "on demand" approach I get 54 µs per read in a 4-channel configuration. I know some of the settings below are not relevant in this configuration, but I've been experimenting with a lot of them, so I think it's better to post everything:

// Create the AI channel(s); chanName can name one or more physical channels.
DAQmxCreateAIVoltageChan(taskHandle,
                         chanName.c_str(), "",
                         DAQmx_Val_Cfg_Default,
                         // Hardcode +/- 10V range.
                         -10.0, 10.0, DAQmx_Val_Volts, nullptr);
DAQmxGetSampClkMaxRate(taskHandle, &sampleRate);
// Real-time attributes: don't report missed samples; demote late-conversion
// errors to warnings.
DAQmxSetRealTimeReportMissedSamp(taskHandle, false);
DAQmxSetRealTimeConvLateErrorsToWarnings(taskHandle, true);
// Busy-poll for samples instead of sleeping between checks; the sleep time
// only applies in sleep wait mode.
DAQmxSetReadWaitMode(taskHandle, DAQmx_Val_Poll);
DAQmxSetReadSleepTime(taskHandle, 0.0);
// Allow the driver to overwrite unread samples in the buffer.
DAQmxSetReadOverWrite(taskHandle, DAQmx_Val_OverwriteUnreadSamps);
// Software-timed acquisition: each read triggers a conversion on demand.
DAQmxSetSampTimingType(taskHandle, DAQmx_Val_OnDemand);
DAQmxSetSampClkRate(taskHandle, sampleRate);
DAQmxSetSampClkActiveEdge(taskHandle, DAQmx_Val_Rising);
DAQmxSetBufInputBufSize(taskHandle, 0);  // no input buffer
DAQmxStartTask(taskHandle);

...
// Per read: select the channel of interest, then grab a single scalar sample.
DAQmxSetReadChannelsToRead(taskHandle, d->m_channelNames[ch].c_str());
DAQmxReadAnalogScalarF64(taskHandle, TIMEOUT_S, &v, nullptr);
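For what it's worth, per-read latency can be timed around the scalar read with a simple std::chrono sketch like the one below (a placeholder harness, not necessarily how the 54 µs figure above was obtained):

#include <chrono>

// Time a single scalar read; steady_clock is monotonic, so the interval
// is unaffected by wall-clock adjustments.
auto t0 = std::chrono::steady_clock::now();
DAQmxReadAnalogScalarF64(taskHandle, TIMEOUT_S, &v, nullptr);
auto t1 = std::chrono::steady_clock::now();
double readMicros =
    std::chrono::duration<double, std::micro>(t1 - t0).count();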

In other threads I found the recommendation to instead set up a buffered acquisition and always read the last sample, using the added code below. However, this way I always get 1 ms latency! Strangely, if I request a sample earlier than the last (say, an offset of -5 instead of -1), the read time drops drastically.

// Buffered variant: position each read relative to the most recently
// acquired sample instead of the stream's current read position.
DAQmxSetReadReadAllAvailSamp(taskHandle, false);
DAQmxSetReadRelativeTo(taskHandle, DAQmx_Val_MostRecentSamp);
DAQmxSetReadOffset(taskHandle, -1);
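For reference, here is the whole buffered variant assembled into one self-contained snippet, as I understand it. This is only a sketch: "Dev1/ai0", the 1000-sample buffer, and the 1 s timeout are placeholders, and error checking is omitted.

#include <NIDAQmx.h>
#include <cstdio>

int main()
{
    TaskHandle taskHandle = nullptr;
    float64 sampleRate = 0.0, v = 0.0;

    DAQmxCreateTask("", &taskHandle);
    DAQmxCreateAIVoltageChan(taskHandle, "Dev1/ai0", "",
                             DAQmx_Val_Cfg_Default,
                             -10.0, 10.0, DAQmx_Val_Volts, nullptr);
    DAQmxGetSampClkMaxRate(taskHandle, &sampleRate);

    // Hardware-timed continuous acquisition; the device samples on its own
    // sample clock regardless of when the software reads.
    DAQmxCfgSampClkTiming(taskHandle, "", sampleRate, DAQmx_Val_Rising,
                          DAQmx_Val_ContSamps, 1000);

    // Let the driver overwrite unread samples so the task keeps running
    // even if the software stops reading for a while.
    DAQmxSetReadOverWrite(taskHandle, DAQmx_Val_OverwriteUnreadSamps);

    // Poll instead of sleeping while waiting for a sample.
    DAQmxSetReadWaitMode(taskHandle, DAQmx_Val_Poll);

    // Position reads at the most recently acquired sample.
    DAQmxSetReadRelativeTo(taskHandle, DAQmx_Val_MostRecentSamp);
    DAQmxSetReadOffset(taskHandle, -1);

    DAQmxStartTask(taskHandle);

    // Each read grabs the latest sample already sitting in the buffer.
    DAQmxReadAnalogScalarF64(taskHandle, 1.0, &v, nullptr);
    std::printf("latest sample: %f V\n", v);

    DAQmxClearTask(taskHandle);
    return 0;
}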

For any other configuration where I try to set it up to acquire in the background and just grab the latest sample, I can get lower latencies, but I always get an error if the software stops reading for a period of time. Nothing like DAQmx_Val_OverwriteUnreadSamps or DAQmx_Val_IgnoreOverruns seems to leave it totally free-running. The buffering API is also opaque enough that, at this point, I am not confident I am actually reading the latest sample in anything but the two configurations above.

Another part of the problem is that the latency seems to scale with the number of channels, so I have to select the channel up front when creating the task. This is surprising because this is supposed to be a simultaneous-acquisition device, and the extra data transfer for one sample per channel is certainly insignificant. Using DAQmxReadAnalogF64 vs. DAQmxSetReadChannelsToRead + DAQmxReadAnalogScalarF64 doesn't seem to make any difference here, as in the sketch below.
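For completeness, the DAQmxReadAnalogF64 variant I compared against looks roughly like this (a sketch; assumes the same 4-channel task as above and picks the channel in software afterward):

// Read one sample from every channel in the task in a single call.
float64 data[4];
int32 sampsRead = 0;
DAQmxReadAnalogF64(taskHandle,
                   1,                        // 1 sample per channel
                   TIMEOUT_S,
                   DAQmx_Val_GroupByChannel, // data[i] holds channel i
                   data, 4, &sampsRead, nullptr);
float64 v = data[ch];                        // select the channel afterward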

How can I get a single-point read as fast as possible with DAQmx?

Is there a lower-level API I can use to achieve this with this card?

Message 1 of 2

I only use LabVIEW, and I tinkered just a little bit using a similar 6341 board.

For me, the technique of reading already-buffered samples (using a negative value for the offset) seemed to be fastest, with loop rates in the 30-35 microsec range for a single-channel task.  The speed was pretty insensitive to the # of samples I read each iteration, but was dramatically sensitive to the # of channels in the task.  I got similar results for either 1 or 2 channels, then a big slowdown with 3 or more (in the 1 millisec neighborhood).

I also tried the normal method of reading *next* samples in a lossless stream.  The best I saw there was around 50-55 microsec average with 20 kHz sampling, reading 1 sample per loop iteration.  This result was pretty insensitive to the # of channels in the task.

There's a subtle growing problem with this result -- the 1 sample I kept reading would get staler and staler because my loop rate couldn't *quite* keep up with my sample rate.
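In DAQmx C-API terms, that lossless-stream variant should be roughly the following (a sketch translated loosely from my LabVIEW setup, untested in C; assumes a continuous task like the one above):

// Default lossless stream: the read position advances through the buffer,
// so each read blocks until the *next* unread sample arrives.
DAQmxSetReadRelativeTo(taskHandle, DAQmx_Val_CurrReadPos);
DAQmxSetReadOffset(taskHandle, 0);
while (running) {
    float64 v = 0.0;
    DAQmxReadAnalogScalarF64(taskHandle, 1.0, &v, nullptr);
    // ... control computation on v ...
}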

Running at 10 kHz in lossless mode worked much more consistently.  There I could keep up with the sample rate and had a *max* time interval of 100 microsec.  (With a longer run, there'd probably be occasional spikes in this interval, though.)

So, all that aside now, are you really controlling something that benefits from a control loop rate that's approximately 20 kHz but variable?   A lot of people posting here looking for fast loop rates end up actually needing fast *sample* rates and slower but pretty consistent control loop rates.

-Kevin P

Message 2 of 2