Hi, this is my first post here, so if I am asking the question in the wrong place, please let me know.
I am trying to control an NI USB-6221 from C++ in Microsoft Visual Studio 2019 using the NI-DAQmx ANSI C library. Usually everything works fine, but when I try to read data from an analog input at the maximum possible sampling frequency (250 kS/s in my case) for more than a second in continuous mode, it gives me the following error:
"The application is not able to keep up with the hardware acquisition. Increasing the buffer size, reading the data more frequently, or specifying a fixed number of samples to read instead of reading all available samples might correct the problem."
This error occurs when I use the DAQmxReadAnalogF64() function to read data in the callback that fires when the required number of samples is available. I tried increasing the buffer size, but it did not help. I do not understand why this error is happening, because in every other situation the code works fine.
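For context, a stripped-down version of the callback looks roughly like this (the names and sizes are illustrative, not my exact code):

```c
// Sketch of the Every-N-Samples callback where the error is raised.
// Names, sizes, and the single-channel assumption are illustrative.
#include <NIDAQmx.h>
#include <stdio.h>

#define SAMPLES_PER_CALLBACK 625000

static float64 data[SAMPLES_PER_CALLBACK];

int32 CVICALLBACK EveryNCallback(TaskHandle taskHandle,
                                 int32 everyNsamplesEventType,
                                 uInt32 nSamples, void *callbackData)
{
    int32 read = 0;
    // The "not able to keep up with the hardware acquisition" error
    // comes back from this read call
    int32 err = DAQmxReadAnalogF64(taskHandle, SAMPLES_PER_CALLBACK, 10.0,
                                   DAQmx_Val_GroupByChannel, data,
                                   SAMPLES_PER_CALLBACK, &read, NULL);
    if (err < 0) {
        char errBuf[2048];
        DAQmxGetExtendedErrorInfo(errBuf, 2048);
        printf("DAQmx Error: %s\n", errBuf);
    }
    return 0;
}
```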
What IS your buffer size? Are you sure (see this link)?
What IS your "required number of samples" to fire off the callback function?
It'd help if you'd post the parts of the code where you configure the task and read from it. I don't personally use any of the text APIs for DAQmx, but others do, and I'll probably be able to follow along pretty well anyway.
The third suggestion in the error text (specify a fixed number of samples to read) isn't always good advice; it depends on how the app is structured. There are plenty of situations where reading all available samples is the better option, and yours might be one of them.
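For what it's worth (I'm going from the DAQmx C reference here, since I don't use the text APIs myself), "read all available samples" in the C API means passing DAQmx_Val_Auto (-1) as the samples-per-channel argument; a sketch:

```c
#include <NIDAQmx.h>

// Sketch: read whatever is currently in the task buffer of a continuous
// task. DAQmx_Val_Auto (-1) asks DAQmxReadAnalogF64 to return everything
// available instead of waiting for a fixed count. maxSamps is simply the
// capacity of the caller's array.
int32 readAllAvailable(TaskHandle task, float64 *data, uInt32 maxSamps)
{
    int32 sampsRead = 0;
    DAQmxReadAnalogF64(task, DAQmx_Val_Auto, 10.0,
                       DAQmx_Val_GroupByChannel, data,
                       maxSamps, &sampsRead, NULL);
    return sampsRead;  // samples per channel actually returned
}
```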
The code is part of a much larger program, so posting it wouldn't make sense unless I posted the entire thing, and that would be too long.
The buffer size is around 625,000 samples of type double (this is just for testing; in reality it can be much higher), and the required number of samples to fire off the callback function is the same. I am trying to get data at the maximum sampling frequency (250 kS/s) of my board, which is an NI USB-6221.
I was able to fix the problem by using the DAQmxCfgInputBuffer() function to explicitly assign the required buffer size. My program flow is now as follows:
Step 1: Create Task
Step 2: Create AI channels; assign channel names, terminal configuration, range, etc.
Step 3: Assign sampling rate, sample mode, and samples per channel
Step 4: Create buffer
Step 5: Assign callback function
Step 6: Start Task
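In code, the six steps above look roughly like this (the device name "Dev1/ai0" and the sizes are illustrative, and error checking is omitted for brevity; every DAQmx call returns a status that should be checked):

```c
#include <NIDAQmx.h>

#define SAMPLE_RATE    250000.0   // max AI rate of the USB-6221
#define BUFFER_SIZE    625000     // explicit input buffer (Step 4)
#define CALLBACK_SAMPS 625000     // samples before the callback fires

int32 CVICALLBACK EveryNCallback(TaskHandle taskHandle, int32 eventType,
                                 uInt32 nSamples, void *callbackData);

int32 configureTask(TaskHandle *task)
{
    DAQmxCreateTask("", task);                                  // Step 1
    DAQmxCreateAIVoltageChan(*task, "Dev1/ai0", "aiChannel",    // Step 2
                             DAQmx_Val_Cfg_Default, -10.0, 10.0,
                             DAQmx_Val_Volts, NULL);
    DAQmxCfgSampClkTiming(*task, "", SAMPLE_RATE,               // Step 3
                          DAQmx_Val_Rising, DAQmx_Val_ContSamps,
                          CALLBACK_SAMPS);
    DAQmxCfgInputBuffer(*task, BUFFER_SIZE);                    // Step 4
    DAQmxRegisterEveryNSamplesEvent(*task,                      // Step 5
                                    DAQmx_Val_Acquired_Into_Buffer,
                                    CALLBACK_SAMPS, 0,
                                    EveryNCallback, NULL);
    return DAQmxStartTask(*task);                               // Step 6
}
```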
The documentation for controlling National Instruments DAQ boards from ANSI C is very limited, so it is quite difficult to figure out what to do and where to do it. It would be nice to find some C/C++ documentation with better examples, so that I could get a general idea of how to configure things.
The buffer size is around 625,000 samples of type double (this is just for testing; in reality it can be much higher), and the required number of samples to fire off the callback function is the same. I am trying to get data at the maximum sampling frequency (250 kS/s) of my board, which is an NI USB-6221.
Ding ding ding! We have a winner, folks!
That's the problem right there: waiting until the buffer is *entirely* full before firing off the callback. DAQmx is busy in the background delivering data to the buffer at 250 kHz. Once your buffer is completely full, the *very next* sample from the board will assert the error you're struggling with. In that instant, your software has to be alerted, fire off the callback function, and use the driver to start retrieving data out of the buffer, all in time to make room for the new samples that want to arrive.
Key tip: DAQmx manages your task buffer as though it's circular. It keeps track of where you left off with your previous read so you can string together a series of reads and be certain you're getting a contiguous stream of samples with no repetitions. You don't have to manage it from the app side by waiting for the buffer to fill and then reading it all at once.
If you set your callback anywhere from, say, 25k to 100k samples (while leaving the buffer at 625k), things ought to run smoothly. A rule of thumb that tends to work well across a wide range of apps is to aim for about 10 reads per second. With data coming in at 250 kS/s, you'd pull it out at the same rate with 10 reads per second of 25k samples each.