05-04-2017 04:18 PM
I'm using an NI 6259 and writing a real-time C application on Linux (NI-DAQmx Base 15.0 for Linux) to sample a single AI signal, perform a computation, and generate an AO signal based on the result. We had been running at 1024 Hz without much problem, but recently we have been trying to increase our rate to 2048 Hz and even 4096 Hz. Previously we were simply calling DAQmxBaseReadAnalogF64() and DAQmxBaseWriteAnalogF64() at 1024 Hz without configuring any timing via DAQmxBaseCfgSampClkTiming. To run at a higher rate, I thought I should call DAQmxBaseCfgSampClkTiming with "OnboardClock", a sample rate of 2048/4096, and DAQmx_Val_ContSamps. However, I start missing deadlines while running at 4096 Hz. My question is whether I'm simply approaching the limit of what is achievable, or whether there is something I'm doing wrong (or not doing) that would let me read/write at a higher rate.
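For reference, here's a stripped-down sketch of how I'm setting up the hardware-timed AI task (error checking removed; "Dev1/ai0", the voltage range, and the short 10-iteration loop are placeholders for what my real code does):

    #include "NIDAQmxBase.h"

    int main(void)
    {
        TaskHandle aiTask = 0;
        float64    sample = 0.0;
        int32      nRead  = 0;

        /* Hardware-timed, continuous AI on one channel. */
        DAQmxBaseCreateTask("ai", &aiTask);
        DAQmxBaseCreateAIVoltageChan(aiTask, "Dev1/ai0", "", DAQmx_Val_RSE,
                                     -10.0, 10.0, DAQmx_Val_Volts, NULL);
        DAQmxBaseCfgSampClkTiming(aiTask, "OnboardClock", 4096.0,
                                  DAQmx_Val_Rising, DAQmx_Val_ContSamps, 1000);
        DAQmxBaseStartTask(aiTask);

        for (int i = 0; i < 10; i++) {
            /* One sample per control cycle, 1 s timeout. */
            DAQmxBaseReadAnalogF64(aiTask, 1, 1.0, DAQmx_Val_GroupByChannel,
                                   &sample, 1, &nRead, NULL);
            /* ...computation and AO write happen here... */
        }

        DAQmxBaseStopTask(aiTask);
        DAQmxBaseClearTask(aiTask);
        return 0;
    }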
On a related note, I wanted to try DAQmx_Val_HWTimedSinglePoint, but my compiler doesn't recognize that constant. Has it been phased out?
One more question: How does the buffer work? I initially set my "sample rate" to 10000. I figured that running my sampling at a higher rate than my actual program wouldn't hurt anything. However, I noticed that at some point every call to DAQmxBaseReadAnalogF64() returned the same value. I'm assuming I filled up my buffer? How can I remedy this?
05-05-2017 08:23 AM
What you describe sounds like a real-time control application, in which case you probably do *not* want a buffered AO task. The choice for AI is less clear-cut. For AO, either HW-timed single point (if you can find out how to make it work) or software-timed on-demand mode is better for keeping latency small and regular.
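In DAQmx Base terms, on-demand AO just means never calling DAQmxBaseCfgSampClkTiming on the AO task and writing one sample at a time with autoStart enabled. A rough, untested sketch (I barely touch C these days, so treat it accordingly; "Dev1/ao0" and the dummy 0.5 V output are placeholders):

    #include "NIDAQmxBase.h"

    int main(void)
    {
        TaskHandle aoTask  = 0;
        float64    out     = 0.0;
        int32      written = 0;

        /* Software-timed (on-demand) AO: no sample clock is configured, so
           each write updates the D/A immediately instead of queueing into a
           buffer. */
        DAQmxBaseCreateTask("ao", &aoTask);
        DAQmxBaseCreateAOVoltageChan(aoTask, "Dev1/ao0", "", -10.0, 10.0,
                                     DAQmx_Val_Volts, NULL);
        DAQmxBaseStartTask(aoTask);

        for (int i = 0; i < 10; i++) {
            out = 0.5;  /* stand-in for your control computation */
            /* numSampsPerChan = 1, autoStart = 1, timeout = 1 s */
            DAQmxBaseWriteAnalogF64(aoTask, 1, 1, 1.0, DAQmx_Val_GroupByChannel,
                                    &out, &written, NULL);
        }

        DAQmxBaseStopTask(aoTask);
        DAQmxBaseClearTask(aoTask);
        return 0;
    }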
I would tend to expect it to be possible to run at 4 kHz, but I'm really only a LabVIEW / Windows guy these days. I have some past dabblings with Linux and with Real-Time, but not expertise. I'm not sure if the DAQmx Base driver is less efficient than the full-up DAQmx I'm more familiar with. And without seeing code, there's no way to identify possible inefficiencies there. (And I'm personally unlikely to be any help there anyway, as I've barely used C in two decades.)
Quick overview of how buffering works:
AI: when you read from a buffered AI task, you will be retrieving samples ordered from oldest to freshest. If you don't read all available samples (typically specified with the magic # -1), you will leave some behind in the buffer until the next read. Either way, the next read starts where you left off so you can retrieve a lossless stream of data. However, if you fail to empty the buffer fast enough, it will need to overwrite data you haven't retrieved yet. This will produce a buffer overflow error, which is not recoverable.
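As a concrete illustration of the "read everything" pattern (a sketch, untested; it assumes aiTask is a running buffered AI task and that a 4096-sample scratch array is big enough for one loop iteration):

    #include "NIDAQmxBase.h"

    /* Drain everything currently in the AI buffer and return the freshest
       sample.  Passing -1 as numSampsPerChan requests all available samples. */
    static float64 read_freshest(TaskHandle aiTask)
    {
        float64 buf[4096];
        int32   nRead = 0;

        DAQmxBaseReadAnalogF64(aiTask, -1, 1.0, DAQmx_Val_GroupByChannel,
                               buf, 4096, &nRead, NULL);
        return (nRead > 0) ? buf[nRead - 1] : 0.0;
    }

For control purposes you'd typically act on only the freshest sample, but reading them all is what keeps the buffer from overflowing.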
AO: when you write to a buffered AO task, the samples you write go to the *back* of the line in the AO buffer. They won't be generated as actual signals until everything in front of them gets its turn first. This is why there's always latency with buffered AO. If you fail to feed the buffer fast enough, the line will run empty and there will be nothing for the D/A to generate. This will produce a buffer underflow error, also not recoverable.
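To put a number on the latency: at a 2048 S/s update rate, 512 samples queued in front of yours means your new sample won't reach the D/A for 512/2048 = 0.25 s. The producer side of a buffered AO task looks roughly like this (a sketch; fill_chunk() is a hypothetical stand-in for whatever computes your next block of samples):

    #include "NIDAQmxBase.h"

    #define CHUNK 256  /* 256 samples at 2048 S/s = 125 ms of signal per write */

    /* Hypothetical waveform generator supplied by your application. */
    extern void fill_chunk(float64 *buf, int32 n);

    /* Keep a buffered AO task fed; aoTask is assumed to be a running AO task
       configured for continuous samples at 2048 S/s. */
    static void keep_fed(TaskHandle aoTask)
    {
        float64 chunk[CHUNK];
        int32   written = 0;

        for (;;) {
            fill_chunk(chunk, CHUNK);
            /* Blocks until the buffer has room; if the device drains the
               buffer before we get back here, you get the underflow error
               described above. */
            DAQmxBaseWriteAnalogF64(aoTask, CHUNK, 0, 10.0,
                                    DAQmx_Val_GroupByChannel, chunk,
                                    &written, NULL);
        }
    }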
-Kevin P
10-19-2017 02:50 PM - edited 10-19-2017 03:06 PM
Bump: Can someone more familiar with NI-DAQmx Base on Linux offer some input? I'll simplify the question: we need to read from and write to the card at 2048 Hz. What settings should we use to do this most efficiently?
Thanks.