01-23-2024 07:04 PM
Hi,
I'm using the NI cDAQ-9184.
I set up a task for encoder measurement in NI MAX and use it from C++ code in the Visual Studio 2019 environment.
In my code, I command the equipment to move and then read the resulting angle from the DAQ.
The equipment (4 channels) is given a drive command of 5 degrees on each channel,
and after 300 ms the code reads the value from the DAQ device through the DAQmxReadCounterF64() function.
Recently I considered changing the Sample Mode and ran into a problem while testing: when I changed it from 1 Sample (On Demand) to Continuous Samples, a timing error occurred during the test.
Please refer to the information below and advise me on what I did wrong.
Thanks for reading.
Summary (see MeasureDAQ() in the code below):

Actual Action                          | Expected Value | Measured Value (1st read) | Measured Value (2nd read)
Equipment Ch1: 5 degree drive (0->5)   | 5 degrees      | 0 degrees                 | 5 degrees
Equipment Ch2: 5 degree drive (0->5)   | 5 degrees      | 0 degrees                 | 5 degrees
Equipment Ch3: 5 degree drive (0->5)   | 5 degrees      | 0 degrees                 | 5 degrees
Equipment Ch4: 5 degree drive (0->5)   | 5 degrees      | 0 degrees                 | 5 degrees
Equipment Ch1: 5 degree drive (5->10)  | 10 degrees     | 5 degrees                 | 10 degrees
Equipment Ch2: 5 degree drive (5->10)  | 10 degrees     | 5 degrees                 | 10 degrees
Equipment Ch3: 5 degree drive (5->10)  | 10 degrees     | 5 degrees                 | 10 degrees
Equipment Ch4: 5 degree drive (5->10)  | 10 degrees     | 5 degrees                 | 10 degrees
...
Equipment Ch1: 5 degree drive (25->30) | 30 degrees     | 25 degrees                | 30 degrees
Equipment Ch2: 5 degree drive (25->30) | 30 degrees     | 25 degrees                | 30 degrees
Equipment Ch3: 5 degree drive (25->30) | 30 degrees     | 25 degrees                | 30 degrees
Equipment Ch4: 5 degree drive (25->30) | 30 degrees     | 25 degrees                | 30 degrees
My Code
void Encoder::TaskOn()
{
    string handle_name = "encoder";
    auto load_status  = DAQmxLoadTask(handle_name.c_str(), &task_handle_);  // load the task created in NI MAX
    auto start_status = DAQmxStartTask(task_handle_);

    // ---- code added for Continuous Samples mode ----
    DAQmxSetReadOverWrite(task_handle_, DAQmx_Val_OverwriteUnreadSamps);  // allow unread samples to be overwritten
    DAQmxSetReadRelativeTo(task_handle_, DAQmx_Val_MostRecentSamp);       // position reads relative to the most recent sample
}

void Encoder::MeasureDAQ()
{
    data_.clear();
    float64 measure[80] = { 0, };
    int32 read = 0;

    // ---- original code: 1 Sample (On Demand) mode, 20 samples per channel, 20 * 4 = 80 total ----
    // auto readcount = DAQmxReadCounterF64(task_handle_, 20, 10.0, measure, 80, &read, NULL);

    // ---- revised code: Continuous Samples mode ----
    DAQmxSetReadOffset(task_handle_, -1);  // set the offset to -1 to read from the last acquired sample

    // First read: the timing does not match what I expected. Instead of the data at the
    // desired point in time, I get the data from just before that point.
    auto readcount = DAQmxReadCounterF64(task_handle_, 20, 10.0, measure, 80, &read, NULL);

    // Second read: if I read twice, I get the data I want at the time I want.
    readcount = DAQmxReadCounterF64(task_handle_, 20, 10.0, measure, 80, &read, NULL);

    data_ = vector<double>(measure, measure + sizeof(measure) / sizeof(measure[0]));
}
01-24-2024 06:48 AM
It's probably not the code, at least probably not the part you've focused on. Setting the offset to (-1) is a valid way to retrieve the most recently sampled value in the task buffer.
So that leaves other possibilities, most prominently the method you use to sync the drive motions you command and the encoder measurements you acquire. Sometimes sync mainly involves the configuration of the tasks along with some care about the sequence of starting them. But in the case of motion systems, there's also system response time to consider. If you take an encoder measurement immediately after issuing a motion command, your system may not have had enough time to move all the way to its destination, perhaps not even enough to register on your low-res encoder.
I don't know the VC++ syntax as I only program DAQmx in LabVIEW. But there, I would need to set both the offset *AND* another property named "RelativeTo" which I would set to "most recent sample". Otherwise, the default value would be "Current Read Position". I don't know what to expect from 2 consecutive reads with Offset=(-1) and RelativeTo=Current Read Position, which is what your code appears to be doing.
So overall, there are several candidates that need further investigation. But Offset=(-1) doesn't need to be one of them.
-Kevin P
01-24-2024 06:27 PM - edited 01-24-2024 06:31 PM
Thanks for your reply, Kevin.
First, let me talk about the RelativeTo property. In my code I do set the RelativeTo property; the code below is the first of the two functions I posted.
void Encoder::TaskOn()
{
    string handle_name = "encoder";
    auto load_status  = DAQmxLoadTask(handle_name.c_str(), &task_handle_);
    auto start_status = DAQmxStartTask(task_handle_);

    // ---- code added for Continuous Samples mode ----
    DAQmxSetReadOverWrite(task_handle_, DAQmx_Val_OverwriteUnreadSamps);
    DAQmxSetReadRelativeTo(task_handle_, DAQmx_Val_MostRecentSamp);  /* <<<<=== RelativeTo property */
}
As the first function shows, the overwrite and Most Recent Sample settings are configured through DAQmxSetReadOverWrite() and DAQmxSetReadRelativeTo().
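Putting the two functions together, the read-side configuration is effectively the following (a consolidated sketch of my code, error checking omitted):

DAQmxSetReadOverWrite(task_handle_, DAQmx_Val_OverwriteUnreadSamps);  // keep overwriting unread samples
DAQmxSetReadRelativeTo(task_handle_, DAQmx_Val_MostRecentSamp);       // position reads relative to the newest sample
DAQmxSetReadOffset(task_handle_, -1);                                 // back up one sample from the newest position

float64 measure[80] = { 0, };   // 20 samples x 4 channels
int32   read = 0;
DAQmxReadCounterF64(task_handle_, 20, 10.0, measure, 80, &read, NULL);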
I will consider configuring synchronization as you said.
I have a few questions.
1. I set the encoder task to Continuous Samples mode through NI MAX.
Then, when I call LoadTask() and StartTask() in my code, the task measures continuously at the sampling rate I configured (1000 Hz). Is that right?
2. My code sets the OverWrite property, so unread data in the buffer is overwritten, correct?
3. To summarize: after StartTask() is called, the encoder values are measured continuously and unread data is overwritten in the buffer.
Is that right?
※ The motor drive command is sent after a delay of at least 2.5 seconds after StartTask().
4. The sequence of my system is as follows.
(1) Motor drive command
(2) 300ms delay
(3) Encoder measurement request
(4) After a 200 ms delay, repeat (1)-(4) (56 iterations in total)
In the above operation, is there anything I need to consider regarding synchronization?
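In code, the sequence is roughly the following (a simplified sketch; DriveMotor5Deg() is a placeholder for our equipment's drive command and encoder_ is the Encoder object from the code above):

#include <chrono>
#include <thread>

void DriveMotor5Deg();   // placeholder: our equipment's +5 degree drive command (all 4 channels)

void RunSequence(Encoder& encoder_)
{
    for (int step = 0; step < 56; ++step)                               // repeat 56 times
    {
        DriveMotor5Deg();                                               // (1) motor drive command
        std::this_thread::sleep_for(std::chrono::milliseconds(300));    // (2) 300 ms delay
        encoder_.MeasureDAQ();                                          // (3) encoder measurement request
        std::this_thread::sleep_for(std::chrono::milliseconds(200));    // (4) 200 ms delay, then repeat
    }
}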
Thanks for reading.
01-25-2024 01:34 PM
Sorry, I replied from a low-res laptop screen and didn't scroll up to see where you did indeed set up "RelativeTo". On to your latest questions:
1. Yes with an asterisk. Many NI DAQ devices don't support the internal generation of a sample clock for counter tasks. Dunno why, it's just been that way for a very long time.
Are you counting on the device to generate the 1000 Hz sample clock for you? That won't work for a lot of devices. Or are you pointing to a PFI pin to which you're feeding a 1000 Hz clock? That should work for most devices I know about.
I would think you'd be able to test your task in MAX to make sure about the clock.
2. Yes, I think the OverWrite setting should let the task keep overwriting its buffer without throwing an error. The LabVIEW API only has a single similar setting to do that too.
3. Yes, sounds right to me.
4. The main "gotcha" I see is the magic # for motion delay set at 300 ms. It would be better to check whether the motion is complete. Does the drive offer such a signal or communication message?
Having said that, the consistent success you get from 2 reads in rapid succession tells me that it's very unlikely you can blame a motion that's almost-but-not-quite-done when you make the first read.
<time passes>
Ok, I went and looked up NI help for the C++ read function. Now I *DO* see a flaw, but not one that explains your observations. Not to me at least.
The flaw is that you're requesting 20 samples, starting with the 1 most recent past sample and spanning until the next 19 future ones. Granted, waiting for another 19 samples at 1000 Hz isn't a *long* wait (assuming you really DO have a 1000 Hz clock), but it seems like a flawed approach overall. If willing to wait for new samples, you may as well have stuck with your original unclocked on-demand task.
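I can't vouch for the exact C syntax (I'm a LabVIEW guy), but grabbing only the single most recent sample per channel would look roughly like this instead:

/* Rough sketch: read just 1 sample per channel so the call returns the newest past sample
   immediately instead of waiting for 19 more future samples. */
DAQmxSetReadRelativeTo(task_handle_, DAQmx_Val_MostRecentSamp);
DAQmxSetReadOffset(task_handle_, -1);   /* back up 1 sample so the read returns the newest acquired value */

float64 latest[4] = { 0 };              /* 1 sample x 4 channels */
int32   read = 0;
DAQmxReadCounterF64(task_handle_, 1, 10.0, latest, 4, &read, NULL);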
I also can't see how you get from the 20 measured values in your 80 element array to the single value you report on your output screen. But whatever the method, I still don't have any clear theories about how it could lead to your consistent success with the double read.
-Kevin P
01-25-2024 06:44 PM
Kevin, thank you so much for teaching me so much. I learned a lot from your answer.
1. Many NI DAQ devices don't support the internal generation of a sample clock for counter tasks.
--> I didn't know that. I had simply left the clock type set to the internal clock.
If that's the case, the timing can't be considered accurate.
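If I switch to feeding an external 1000 Hz clock into a PFI terminal, I understand the timing call in code would look roughly like this (a sketch; the terminal name is only an assumed example for my chassis):

// Sketch: clock the counter task from an external 1000 Hz clock on a PFI terminal.
// "/cDAQ9184/PFI0" is an assumed terminal name; the real one depends on the chassis/module.
DAQmxCfgSampClkTiming(task_handle_,
                      "/cDAQ9184/PFI0",      // external sample clock source (placeholder)
                      1000.0,                // expected clock rate in Hz
                      DAQmx_Val_Rising,      // sample on rising edges
                      DAQmx_Val_ContSamps,   // continuous sampling
                      1000);                 // minimum buffer size hint (samples per channel)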
2. Yes, I think the OverWrite setting should let the task keep overwriting its buffer without throwing an error. The LabVIEW API only has a single similar setting to do that too.
3. Yes, sounds right to me.
--> Thank you for checking!!!!
4. The main "gotcha" I see is the magic # for motion delay set at 300 ms.
--> In my system, a response message is received when a 5-degree drive command is given to the equipment, but that doesn't mean the motion is complete. It is only an ACK for the command; no further message is sent once the device finishes the 5-degree move.
Having said that, the consistent success you get from 2 reads in rapid succession tells me that it's very unlikely you can blame a motion that's almost-but-not-quite-done when you make the first read.
--> In NI MAX there is a timing section in the task settings, with [Sampling Mode], [Samples Per Channel To Acquire], and [Rate (Hz)].
In Continuous Mode, the "Samples Per Channel To Acquire" value is used as the buffer size. Is that right?
- I had set this value to less than 1000; I think it was probably 100. Could this be a problem?
The reason I set it that way was that at a rate of 1000 Hz each sample is collected every 1 ms, and since there are 4 channels of 20 samples each (80 samples in total), I thought 100 would be enough.
--> If that is a problem, note one thing I left out of the code posted above:
when I tested Continuous Mode, I also set the buffer to about 4000 using DAQmxSetBufInputBufSize() in the code. Was that okay?
5. If willing to wait for new samples, you may as well have stuck with your original unclocked on-demand task.
--> The reason I considered Continuous Samples mode was that a delay occurred when using the 1 Sample (On Demand) mode (20 samples per channel per request, 20 * 4 = 80 total).
A delay started appearing at some point, and I suspect one of the settings.
I think the delay appeared after I enabled the TDMS file logging option.
During my Continuous Mode test, TDMS File Logging was enabled and Continuous Mode did not work properly, which made me suspicious of it.
Lastly, I'm curious whether the TDMS Logging option has any effect on performance.
Thanks for reading
01-30-2024 10:03 AM
1-3 are pretty well covered I think.
4. If the drive isn't helping you know when motion stops, maybe you should instead keep monitoring your encoder data to determine that for yourself. I used to do this kind of live monitoring on signals pretty often, setting up fairly simple "stability criteria" based on a linear fit. The slope needs to be sufficiently close to 0, the R^2 correlation term needs to be "pretty close" to 1 to demonstrate a well-behaved non-erratic resting-place behavior, and the mean and/or median value needs to be sufficiently close to the target in cases where there *is* a known target.
I would recommend you continue with RelativeTo=MostRecentSample, but read 30 samples with an offset of -30. (Note: keep in mind that you can't do your first read until the task has had time to accumulate at least 30 samples). Do this in a loop where you also keep checking your stability criteria. Also have some kind of bail-out criteria so you don't get stuck in the loop forever. Add a small delay to the loop such as maybe 50-100 msec.
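I can only sketch this in rough C++ (treat it as pseudocode from a LabVIEW guy and double-check it). It shows the idea for a single channel, with the residual scatter of the fit standing in for the R^2 term, and the thresholds are made-up numbers you would tune for your encoder resolution and target tolerance:

#include <NIDAQmx.h>
#include <chrono>
#include <cmath>
#include <cstddef>
#include <thread>

// Settled-motion check on the most recent n samples of one channel.
// Fits y = a + b*x (x = sample index) and requires a near-zero slope, small
// residual scatter (used here in place of the R^2 term), and a mean close to
// the commanded target. Thresholds are placeholders to tune for your system.
static bool IsSettled(const float64* y, std::size_t n, double target)
{
    double sx = 0, sy = 0, sxx = 0, sxy = 0;
    for (std::size_t i = 0; i < n; ++i) {
        sx += i;  sy += y[i];  sxx += double(i) * i;  sxy += double(i) * y[i];
    }
    const double mean  = sy / n;
    const double slope = (n * sxy - sx * sy) / (n * sxx - sx * sx);
    const double icept = mean - slope * (sx / n);

    double ss_res = 0;                           // scatter of the residuals about the fit line
    for (std::size_t i = 0; i < n; ++i) {
        const double r = y[i] - (icept + slope * i);
        ss_res += r * r;
    }
    const double rms = std::sqrt(ss_res / n);

    return std::fabs(slope) < 0.01 &&            // essentially flat over the window
           rms              < 0.05 &&            // quiet, non-erratic resting behavior
           std::fabs(mean - target) < 0.2;       // close to the commanded angle
}

// Poll the 30 most recent samples until the motion looks settled or we bail out.
// NOTE: sized for a single-counter task; a 4-channel task would need a 120-element
// buffer and a per-channel check.
static bool WaitForSettle(TaskHandle task, double target)
{
    DAQmxSetReadRelativeTo(task, DAQmx_Val_MostRecentSamp);
    DAQmxSetReadOffset(task, -30);               // back up 30 samples from the newest one

    float64 buf[30] = { 0 };
    int32   read = 0;
    for (int tries = 0; tries < 100; ++tries) {  // bail-out so we never loop forever
        int32 status = DAQmxReadCounterF64(task, 30, 10.0, buf, 30, &read, NULL);
        if (status >= 0 && read == 30 && IsSettled(buf, 30, target))
            return true;
        std::this_thread::sleep_for(std::chrono::milliseconds(75)); // small delay between checks
    }
    return false;
}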
In continuous sampling mode, "samples per channel to acquire" lets you suggest a *minimum* buffer size, but DAQmx may override you with a bigger one. This is fine since continuous sampling really only needs the buffer to be "big enough", the actual size is otherwise pretty irrelevant. (My habit is to specify a 5-10 second buffer as a minimum size.)
When you call the function that explicitly sets buffer size, it's my understanding that DAQmx will obey the size you ask for and not override you.
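Again in rough C-API terms (double-check against the C reference help), the two knobs look something like this with your task_handle_; the clock terminal name is a placeholder:

/* 1) In continuous mode the last argument of the timing call is only a *minimum*
      buffer-size suggestion; DAQmx may allocate a larger buffer.
      5000 samples = a 5 second buffer at 1000 Hz. */
DAQmxCfgSampClkTiming(task_handle_, "/cDAQ9184/PFI0", 1000.0,
                      DAQmx_Val_Rising, DAQmx_Val_ContSamps, 5000);

/* 2) Explicitly setting the input buffer size overrides the automatic sizing. */
DAQmxSetBufInputBufSize(task_handle_, 4000);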
5. TDMS logging is very efficient as a rule. I doubt that was causing your delay. I'm not sure whether TDMS logging plays nicely with the plan to read discontinuous chunks of recent data though. I suspect it probably does but have never tried out such a combo.
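For what it's worth, if you want the driver itself to handle the TDMS logging, the C API call is something like the following (file path and group name are placeholders; verify the constants against the C reference help):

/* Sketch: driver-level TDMS logging. DAQmx_Val_LogAndRead streams samples to the
   TDMS file while still allowing the normal read calls. */
DAQmxConfigureLogging(task_handle_, "C:\\data\\encoder.tdms",
                      DAQmx_Val_LogAndRead, "EncoderData", DAQmx_Val_OpenOrCreate);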
-Kevin P
02-06-2024 04:23 PM
Sorry for the late reply.
Kevin, you cleared up some of my doubts.
It appears that TDMS logging combined with the discontinuous reads may be what is affecting our system.
The problem of having to read the data twice in continuous mode to get the value at the desired time is not a priority for us right now, although I am still curious about it.
I'll let you know if I find out anything later.
Thank you and have a nice day always!!