Multifunction DAQ


Unable to achieve low latency analog read/write with NI DAQmx Python with erroneous AO

Hi,

 

I am trying to implement low latency (or real-time) analog input/output with NI DAQmx Python API. My hardware is NI 9252 (AI) and NI 9263 (AO), with chassis being cDAQ-9174.

 

It is written for a robotic control application where low latency is the priority (i.e., only the newest single data point is useful). I need to write an analog command and immediately read the analog feedback input from the robot.

 

I understand that a USB-based DAQ is better suited to net throughput than to low latency (as discussed in this thread), but I am trying to optimize my code with software-timed analog I/O. Currently a single AO write operation takes around 0.014 s, which is the bottleneck.
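One way to reproduce that per-write measurement (a minimal sketch; the exact timing code isn't shown here, and the command values are placeholders) is:

    import time
    import nidaqmx

    with nidaqmx.Task() as task_ao:
        # On-demand (software-timed) AO task, same channels as in the code below
        task_ao.ao_channels.add_ao_voltage_chan('cDAQ1Mod3/ao0:3')
        task_ao.start()

        t0 = time.perf_counter()
        task_ao.write([1, 1, 1, 1])      # one value per channel
        print(f'single AO write took {time.perf_counter() - t0:.4f} s')   # ~0.014 s on my setup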

 

More puzzlingly, the following code finishes in 0.002 s but does NOT write and read correctly (it starts reading before the write has actually finished, and the read data is noisy and incorrect):

 

    import nidaqmx
    from nidaqmx.constants import AcquisitionType, READ_ALL_AVAILABLE

    # sample_rate is defined elsewhere in the application
    with nidaqmx.Task() as task_ai, nidaqmx.Task() as task_ao:
        # Create AI voltage channels and configure the sample clock
        task_ai.ai_channels.add_ai_voltage_chan('cDAQ1Mod1/ai1:2')
        task_ai.timing.cfg_samp_clk_timing(sample_rate, sample_mode=AcquisitionType.CONTINUOUS,
                                           samps_per_chan=sample_rate*10)
        # Create AO channels (software-timed / on-demand, no sample clock configured)
        task_ao.ao_channels.add_ao_voltage_chan('cDAQ1Mod3/ao0:3')

        task_ai.start()
        task_ao.start()

        task_ao.write([1, 1, 1, 1])
        task_ao.wait_until_done()

        voltage_list = task_ai.read(number_of_samples_per_channel=READ_ALL_AVAILABLE)

 

 

With some weird tweaks, the following code does write and read correctly, but takes much longer to complete (0.02 s):

    import time  # plus the same nidaqmx imports as above

    with nidaqmx.Task() as task_ai, nidaqmx.Task() as task_ao:
        # Create AI voltage channels and configure the sample clock
        task_ai.ai_channels.add_ai_voltage_chan('cDAQ1Mod1/ai1:2')
        task_ai.timing.cfg_samp_clk_timing(sample_rate, sample_mode=AcquisitionType.CONTINUOUS,
                                           samps_per_chan=sample_rate*10)
        # Create AO channels (software-timed / on-demand)
        task_ao.ao_channels.add_ao_voltage_chan('cDAQ1Mod3/ao0:3')

        task_ai.start()
        task_ao.start()

        task_ao.write([1, 1, 1, 1])
        task_ao.wait_until_done()
        time.sleep(0.00001)  # need to add a sleep here to ensure it writes correctly

        voltage_list = task_ai.read(number_of_samples_per_channel=READ_ALL_AVAILABLE)
        voltage_list = task_ai.read(number_of_samples_per_channel=READ_ALL_AVAILABLE)  # need to read again

Note that the sleep above actually sleeps much longer than 0.00001 s (around 0.013 s instead), which truly baffles me; also, two analog reads are required to get correct readings.
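A quick way to check whether the oversleep comes from time.sleep() itself (OS timer granularity) rather than from DAQmx is to time it with no DAQ call involved at all:

    import time

    t0 = time.perf_counter()
    time.sleep(0.00001)                  # request 10 microseconds
    elapsed = time.perf_counter() - t0
    # On Windows this often comes back in the 1-15 ms range, depending on the
    # Python version and the system timer resolution.
    print(f'requested 0.00001 s, actually slept {elapsed:.5f} s')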

 

 

It appears there are some intricacies that I am not capturing, and any ideas or suggestions pointing me in the right direction are appreciated. Thanks!

 

 

Message 1 of 4

Your AI module uses a Delta-Sigma converter -- it has internal filtering that delays the input signal.  If you look up the specs you'll find something about "filter delay" or "input delay".  The delay is usually dominated by a certain number of samples' worth of delay from a digital filtering stage -- thus the *time* delay is very dependent on the sample *rate*.
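If you want a feel for the scale, the arithmetic is just samples-of-delay divided by sample rate (hypothetical numbers below -- look up your module's actual spec):

    # Hypothetical numbers, for illustration only -- check the NI 9252 specs for
    # the actual "input delay" / "filter delay" figure.
    filter_delay_samples = 40            # digital filter delay, in samples
    sample_rate = 5000                   # S/s
    delay_ms = filter_delay_samples / sample_rate * 1e3
    print(f'{delay_ms:.1f} ms of input delay at {sample_rate} S/s')   # 8.0 ms here; halve the rate and it doubles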

 

That *seems* likely to be at least part of the cause for your observations.  But I also have to admit that it seems weird that the AI Read function wouldn't wait until the signal has propagated all the way through the filter stage before returning data to you.

 

So that leads me to speculate blindly that there's *also* some kind of lag or delay in your system's response to your AO stimulus.  If your first AI Read happens before your system has fully responded, you might end up with wrong-looking AI data too.

 

 

-Kevin P

Message 2 of 4

Hi,

 

Thanks a lot for your comprehensive reply!

 

I took a look at the filter configuration and noticed that for a step input the Delta-Sigma converter only leads to one delayed sample point, which should be OK (and indeed it is related to the sample rate).

 

My system will have some delay, but what I observed is that even a single AO write takes 0.013 s to complete (equivalent to 77 Hz), while the AO module is supposed to run at 100 kS/s/ch. Also, there seem to be some issues with the NI DAQmx Python API: when I added a sleep call the behavior changed drastically (as shown in my original post).

 

Thanks a lot if you could help me on this!

Message 3 of 4

I can't be much help on the particulars of Python or the text-language set of DAQmx API functions.  Here's a little info that might help some, and also may frustrate some.

 

1. The only way to get AO running at 100 kS/s/ch is to configure a hardware clock and a decent-sized task buffer.  There will be additional buffers for the USB transfer handler and the device's onboard FIFO.  All these add up to the strong likelihood of pretty considerable latency between writing new data to the task and seeing it show up as a real-world signal.

 

2. Your AI task is already hardware-clocked with a buffer.  If you combine your choice to "READ_ALL_AVAILABLE" with as fast a software loop as you can manage, that'll help minimize latency on the input side.

 

3. In general, low latency control loops should run in unbuffered, on-demand mode (there's a rough sketch after this list).  This makes them subject to the vagaries of software timing when running a regular PC OS like Windows.   It will also drastically limit the maximum effective input sample rate and output update rate.  I mean *drastically*, like easily 2 orders of magnitude.

 

4. AI starts capturing and buffering data in the background as soon as you start the task, before you get around to reading it.

 

5. Your AO task is started *after* the AI task.   And it's in unclocked unbuffered on-demand mode, so it doesn't change the output signal until you call the Write function.   (For clocked & buffered AO, you would need to write to your buffer first *before* starting the task.  Then it would start generating in the background, much like described for AI.)

 

6. The combo of #4 and #5 explains why your AI data shows things that happened *before* you sent any signals out AO.

 

7. For AO in on-demand mode, there's no reason to "wait_until_done()".  I'd comment out that line b/c the method used for waiting may contribute to your latency.

 

8.  Can't speak to anything about Python execution speed or sleep() functions.  Not sure where you start and stop the time keeping to get 0.013 sec, but I'm not shocked to hear that kind of figure if it includes any of the task config code.
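
 

To make #2, #3, and #7 a bit more concrete, here's a rough, untested sketch of the kind of loop I mean -- the AI task stays hardware-clocked and buffered, the AO task stays unclocked/on-demand, there's no wait_until_done(), and only the newest sample per channel is kept.  Channel names are copied from your post; sample_rate, the loop count, and the command values are just placeholders.

    import time
    import nidaqmx
    from nidaqmx.constants import AcquisitionType, READ_ALL_AVAILABLE

    sample_rate = 10000   # placeholder -- whatever rate the application actually needs

    with nidaqmx.Task() as task_ai, nidaqmx.Task() as task_ao:
        task_ai.ai_channels.add_ai_voltage_chan('cDAQ1Mod1/ai1:2')
        task_ai.timing.cfg_samp_clk_timing(sample_rate, sample_mode=AcquisitionType.CONTINUOUS,
                                           samps_per_chan=sample_rate*10)
        task_ao.ao_channels.add_ao_voltage_chan('cDAQ1Mod3/ao0:3')   # stays software-timed / on-demand

        task_ai.start()
        task_ao.start()

        for _ in range(100):                     # stand-in for the robot control loop
            t0 = time.perf_counter()
            task_ao.write([1, 1, 1, 1])          # on-demand write, no wait_until_done()
            # Drain whatever the AI buffer has accumulated and keep only the newest point per channel
            voltages = task_ai.read(number_of_samples_per_channel=READ_ALL_AVAILABLE)
            newest = [ch[-1] for ch in voltages if ch]
            print(f'loop iteration: {time.perf_counter() - t0:.4f} s, newest samples: {newest}')

 

And for contrast with #1 and #5, clocked & buffered AO looks more like this -- write the whole buffer first, then start, after which generation runs in the background (again just a sketch with placeholder rate and data):

    import nidaqmx
    from nidaqmx.constants import AcquisitionType

    ao_rate = 100000                                   # 100 kS/s/ch, the module's rated rate
    data = [[1.0]*1000, [0.0]*1000, [0.5]*1000, [0.25]*1000]   # one list of samples per channel

    with nidaqmx.Task() as task_ao:
        task_ao.ao_channels.add_ao_voltage_chan('cDAQ1Mod3/ao0:3')
        task_ao.timing.cfg_samp_clk_timing(ao_rate, sample_mode=AcquisitionType.CONTINUOUS,
                                           samps_per_chan=1000)
        task_ao.write(data)                            # fill the task buffer *before* starting
        task_ao.start()                                # generation now runs in the background
        # from the task buffer, USB transfer pipeline, and onboard FIFO -- hence the latency in #1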

 

 

-Kevin P

Message 4 of 4