
Python NI-DAQmx API ADCTimingMode

Solved!

I am using Python 3.7.4 with the NI-DAQmx Python API to read thermocouples from an NI 9213 module in a cDAQ-9188 chassis.  While reading all 16 channels I can only get one read of each channel per second.  The 9213 is specified at 75 S/s aggregate, so I think I should be able to get closer to 75/16 ≈ 4.7 reads per second.  I think that if I could change the ADC timing mode to high speed rather than high resolution I could get a faster rate, but I cannot figure out how to implement that change.  I find the documentation for the NI-DAQmx Python API unclear (https://nidaqmx-python.readthedocs.io/en/latest/ai_channel.html#) and I cannot find any examples covering the ADC timing mode.

 

Attached is a simple Python script showing the problem I am having.  I have gotten the script to run without throwing errors at lines 9 or 10, but changing between HIGH_SPEED and HIGH_RESOLUTION does not seem to have any effect.  I am unsure whether I am using the ADCTimingMode call correctly or whether there is a different problem.

 

Does anyone know how to implement changing the ADC timing mode, or have a better idea of how I could speed up the acquisition?  Alternatively, I have been trying to figure out how to use the stream readers but have not gotten far with that.  Thank you.
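
The core of what I am attempting looks roughly like the sketch below (this is not the exact attached script, and the physical channel name is a placeholder for my hardware):

import time
import nidaqmx
from nidaqmx.constants import ADCTimingMode, ThermocoupleType

with nidaqmx.Task() as DAQtask:
    # Placeholder physical channel string for the NI 9213 in my cDAQ-9188
    chan = DAQtask.ai_channels.add_ai_thrmcpl_chan(
        "cDAQ9188Mod1/ai0:15", thermocouple_type=ThermocoupleType.K)

    # This line runs without errors, but switching between HIGH_SPEED and
    # HIGH_RESOLUTION does not seem to change the rate I actually observe.
    chan.ai_adc_timing_mode = ADCTimingMode.HIGH_SPEED

    start = time.time()
    for _ in range(10):
        data = DAQtask.read()   # software-timed read of all 16 channels
    print("seconds per read:", (time.time() - start) / 10)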

Message 1 of 9

I really don't know the text API, but I'm pretty sure that at least *part* of the problem is that you haven't configured your task to use a sample clock for timing, thus the timing properties you set aren't yet being used.

 

It looks like you'd need to use something under nidaqmx.task.timing, probably including a call to cfg_samp_clk_timing.  However, I don't know how the "rate" parameter of that call and the attempt to set the HIGH_SPEED property are going to interact on your device.  My inclination would be to try to set the rate directly so that hopefully there's no need to set the more vague HIGH_SPEED property.
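
For what it's worth, here's an untested sketch of the kind of thing I mean (I don't know the python API well, and the channel name is just a placeholder):

import nidaqmx
from nidaqmx.constants import AcquisitionType, ThermocoupleType

with nidaqmx.Task() as task:
    task.ai_channels.add_ai_thrmcpl_chan(
        "cDAQ9188Mod1/ai0:15", thermocouple_type=ThermocoupleType.K)
    # Ask for a hardware sample clock; "rate" is the per-channel rate you request,
    # and the driver may coerce it to what the device can actually do.
    task.timing.cfg_samp_clk_timing(
        rate=4.7, sample_mode=AcquisitionType.CONTINUOUS)
    data = task.read()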

 

 

-Kevin P

CAUTION! New LabVIEW adopters -- it's too late for me, but you *can* save yourself. The new subscription policy for LabVIEW puts NI's hand in your wallet for the rest of your working life. Are you sure you're *that* dedicated to LabVIEW? (Summary of my reasons in this post, part of a voluminous thread of mostly complaints starting here).
Message 2 of 9

I have tried using cfg_samp_clk_timing (see attached).  That gives me data at the rate I ask for, but the data is repeated.  For instance, if I ask for data at 4 Hz it will return a value every 1/4 second, but the first channel will look like:

24.018
24.018
24.018
24.018
24.102
24.102
24.102
24.065...

 

It is as if it is sampling at the requested rate, but the NI 9213 is not actually updating the values being sampled any faster than 1 Hz.

Message 3 of 9

I'm out of ideas I can justify; now it's just a matter of trying things and observing.  I've never used that particular temperature device and have no experience to draw on.

 

What happens when you request various sample rates with just 1 channel in your task?   The spec sheet suggests a max possible sample rate of up to 100 Hz for a single channel.  Can you get that?

 

I think it might be helpful to focus on single channel behavior and get it to make sense first, and *then* move on to multi-channel.

 

 

-Kevin P

Message 4 of 9

The behavior with 1 channel instead of all 16 is the same, but with a higher rate.  With 1 channel I can get a little better than 4 Hz, which is way below the spec sheet value.

 

If I make the task in MAX and set the ADC Timing Mode to High Resolution, then run the task in Python, it behaves the same way as in my TC_read.py script.  However, if I use MAX to set the ADC Timing Mode to High Speed, I can get rates of at least 75 Hz to work over all 16 channels when I run the task in Python; see TC_read_MAX.py attached.

 

I can do it this way, but it is cumbersome and I would really like to set up the task programmatically.

 

Since I can get the higher rates when I set the ADC timing mode in MAX, I believe I either have a syntax problem with setting the ADC timing mode in Python, or there is a problem with the way the NI-DAQmx Python API handles ai_adc_timing_mode.

Message 5 of 9

I had only focused on the config and setup previously; now I notice that the dt you calculate is actually your loop iteration time.  The resulting iteration rate is often *not* equal to the hardware sample rate.

 

When you set up a task with a sample clock, there will also be a task buffer.  The driver moves data from the device into the buffer according to the hardware sample rate.  In order to keep up and prevent a buffer overflow, your app will need to read from the buffer at the same average rate.  But you can do this in different ways.   

 

Suppose the task is set for a 75 Hz sample rate.  You could try to read 1 sample at a time at 75 Hz, or 5 samples at a time at 15 Hz, or 25 samples at a time at 3 Hz.   Etc.   Any of these choices will read from the buffer at an average rate of 75 samples per sec.

 

In your code you call "DAQtask.read()" with no arguments to specify how many samples to read.  I really don't know what the default behavior of that function will be.  I see several other functions in the API (particularly under nidaqmx.task.in_stream and nidaqmx.stream_readers) that would let you specify the # samples more explicitly.
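
Purely as an untested sketch (the parameter name is my reading of the online docs, not something I've run), reading a fixed number of samples per loop iteration would look roughly like:

# Read 25 samples per channel on each loop iteration.
# At a 75 S/s hardware rate this should iterate roughly 3 times per second.
for _ in range(30):
    data = DAQtask.read(number_of_samples_per_channel=25)
    # "data" is a list with one sub-list of 25 samples for each channel
    print([channel_samples[-1] for channel_samples in data])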

 

For further diagnosis there should also be a way to query the task for the hardware sample rate, but I have no familiarity with the syntax.
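
If the python API mirrors the property I'm thinking of, something like this (again untested) should report the rate the hardware was actually coerced to:

# Query the task's sample clock rate after configuring timing
print("hardware sample rate:", DAQtask.timing.samp_clk_rate)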

 

All that being said, you seem to have gotten a much faster *iteration* rate with the predefined MAX task than when you tried to configure it explicitly.  While it seems most likely this is due to a faster hardware sample rate, it's also possible that your iteration rate changed because the MAX task configuration had implications for the behavior of the no-argument "read()" function.

 

Hopefully someone else joins in that knows more about your particular device and the python DAQmx API.

 

 

-Kevin P

Message 6 of 9
Solution
Accepted by topic author kb122333

I know this is late, but I just ran into this problem and am posting the fix for anyone in the future who might need it.  The problem is that this function was never exposed in nidaqmx.task.timing, though it is present in the C API.

 

Copy and paste the code below at the end of the following file (inside the "Timing" class):

c:/Users/<user>/AppData/Local/Programs/Python/Python37-32/lib/site-packages/nidaqmx/_task_modules/timing.py

def adc_sample_high_speed(
        self, channel="", sample_mode=14712):
    """
    Set the task's ADC timing mode to High Speed
    (14712 is the DAQmx value for High Speed timing).
    """
    cfunc = lib_importer.windll.DAQmxSetAIADCTimingMode
    if cfunc.argtypes is None:
        with cfunc.arglock:
            if cfunc.argtypes is None:
                cfunc.argtypes = [
                    lib_importer.task_handle, ctypes_byte_str,
                    ctypes.c_int]

    # Call the C entry point on this task's handle and raise on any DAQmx error
    error_code = cfunc(
        self._handle, channel, sample_mode)
    check_for_error(error_code)

 

You can now run the following function in your code to set the device to High Speed sampling:

DAQtask.timing.adc_sample_high_speed()
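
For context, this is roughly how I use it end-to-end (untested as pasted here, and the physical channel name is a placeholder for your own hardware):

import nidaqmx
from nidaqmx.constants import AcquisitionType, ThermocoupleType

with nidaqmx.Task() as DAQtask:
    DAQtask.ai_channels.add_ai_thrmcpl_chan(
        "cDAQ9188Mod1/ai0:15", thermocouple_type=ThermocoupleType.K)
    DAQtask.timing.adc_sample_high_speed()        # the patched-in call above
    DAQtask.timing.cfg_samp_clk_timing(
        rate=75.0, sample_mode=AcquisitionType.CONTINUOUS)
    data = DAQtask.read(number_of_samples_per_channel=75)   # about 1 s of data
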
Message 7 of 9

Gave this a go and it works nicely!  I went from a 0.5 s sample time to the 0.01 s sample time defined in the datasheet for the NI 9219 I am using.

Message 8 of 9

I am being forced to upgrade to Python 3.11, which comes with a completely different timing module.

 

Is it possible someone who knows C can help me out here?

Message 9 of 9