Changing DAQmx range while task is running

Introduction

I have a system with an analog input board and an analog output board: the NI PCI-6225 and NI PCI-6738, respectively. They have multiple channels that should be controlled by the user in software. For example, the user sets a sine wave output on channel 0 of the output board, and the input board measures the voltage and current response on channels 0 and 1.

All channels should be independent of each other, so while one channel is outputting its sine wave (which could take a minute), another channel shouldn't have to wait to start its measurement.

 

Setup

I've developed a wrapper to handle all of this, and that's not my question. My question is about setting the range of the input board. To fully utilize the functionality of the expensive NI boards, I want some autoranging capability. In my sine wave example, that means I fetch the voltage/current every 100 ms, and the current channel (voltage across a sensing resistance) could set its range based on the previous measurement.
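As a rough sketch of the per-interval range decision (in Python; the range list and headroom factor are assumptions for illustration, not taken from the 6225 spec):

```python
# Rough autoranging sketch. The available ranges and the headroom
# factor are assumptions for illustration, not from a datasheet.
RANGES_V = [0.2, 1.0, 5.0, 10.0]  # candidate bipolar AI ranges, in volts

def next_range(last_peak_abs_v, headroom=1.2):
    """Pick the smallest range that still covers the last measured
    peak (absolute value, in volts) with some headroom against clipping."""
    target = last_peak_abs_v * headroom
    for r in RANGES_V:
        if r >= target:
            return r
    return RANGES_V[-1]  # signal exceeds every range: use the widest
```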

 

Problem

The problem is that you cannot set the AI.Max and AI.Min properties while the task is running. I'd have to stop the task, set the range, and restart the task. But the output and input are synchronized to each other, since the input has to start measuring once a voltage has been set. Stopping and starting a task takes ~30 ms, which messes with my measurement.

 

I don't have a good idea how to solve this, and I'm hoping that an expert here can point me in the right direction. I've attached a (very) rough prototype of my implementation. To try the program, you can simulate the boards mentioned above and add an RTSI cable connecting the two simulated boards.

Message 1 of 11
Sorry, I think you've already found your answer: you need to stop and start the tasks to adjust the range. For what it's worth, most instruments I use that have autoranging capability also pause briefly when adjusting the range.

Depending on how long your measurement takes, you might just re-take a measurement with a different range if you detect the signal is small enough that a narrower range would give more precision, or too large to fit in your current range.
Message 2 of 11

Unfortunately, due to the hardware architecture of DAQ boards, the range cannot be changed while the task is running. Put simply, a bunch of relays or PGA gain bits have to be reconfigured, and the hardware was not designed to do that on the fly.

 

You can refer to the DAQmx manual for which settings can be changed while a task is running.

 

If you want to do this, you can add your own programmable gain amplifier before connecting the signal to the DAQ AI, and configure its gain as and when required without affecting the AI task.

 

Real-time autoranging is pretty complex, and optimizing it depends on the type of signal. One easy alternative is to use two or more AI channels to sample the same signal at different voltage ranges; you compare the measurements and, based on which one is not clipped, choose the one with better resolution in software.
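The software comparison step is then trivial; here is a minimal Python sketch of picking the unclipped reading (the clipping threshold is an assumed parameter):

```python
def pick_best(coarse_v, fine_v, fine_range_v, clip_frac=0.98):
    """Given one signal sampled on a wide range (coarse_v) and on a
    narrow range (fine_v, with full scale fine_range_v), return the
    narrow-range reading unless it sits at/near full scale (clipped)."""
    if abs(fine_v) < clip_frac * fine_range_v:
        return fine_v   # inside the narrow range: better resolution
    return coarse_v     # narrow channel clipped: fall back to wide
```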

 

Edit: This is one more reason to go for high-dynamic-range ADCs/DACs and avoid range changes entirely, as is typical in sound and vibration applications. Those converters are typically 24-bit delta-sigma types.

-Santhosh
Semiconductor Validation & Production Test
Soliton Technologies
NI CLD, CTD
LabVIEW + TestStand + TestStand Semiconductor Module (2013 - 2020)
NI STS for Mixed signal and RF
Message 3 of 11

Dangit, santo beat me to it.

 

Your boards have lots of AIs. Unless you're using all of them, just duplicate each measurement, each copy with a different range, then pick the better one in software. You choose which is "better" by looking at the largest-range measurement and selecting the range that measurement falls into. Then you actually record the measurement from that sub-range.

Message 4 of 11

The NI-6225 has a muxed input, so you cannot have individual gains for each channel. You would need a board with simultaneous sampling.

Message 5 of 11

You sure? I've done mixed-gain sampling on a few cards, most recently the USB-6363, which isn't a simultaneous-sampling card either. It seemed to work just fine. I *did* see some ghosting at the top end of the sample rates (1 or 2 MS/s total) when it switched gains, but other than that it worked fine.

 

This very old thread claimed it was possible back in 2005: https://forums.ni.com/t5/Multifunction-DAQ/Different-input-ranges-on-multiple-channels-in-DAQmx/td-p...

 

You just have to create your task using multiple Create Virtual Channel nodes and assign each one a different gain value.
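For readers using the Python DAQmx API, the equivalent is one task with several `add_ai_voltage_chan` calls, each with its own min/max. A configuration sketch (device and channel names are placeholders; this needs real or simulated hardware to actually run):

```python
import nidaqmx
from nidaqmx.constants import AcquisitionType

# One task, per-channel ranges: +-10 V on ai0, +-0.2 V on ai1.
# "Dev1" is a placeholder device name.
with nidaqmx.Task() as task:
    task.ai_channels.add_ai_voltage_chan("Dev1/ai0", min_val=-10.0, max_val=10.0)
    task.ai_channels.add_ai_voltage_chan("Dev1/ai1", min_val=-0.2, max_val=0.2)
    task.timing.cfg_samp_clk_timing(1000.0,
                                    sample_mode=AcquisitionType.CONTINUOUS)
    data = task.read(number_of_samples_per_channel=100)
```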

Message 6 of 11

@BertMcMahan wrote:

You sure? I've done mixed-gain sampling on a few cards, most recently the USB-6363, which isn't a simultaneous-sampling card either. It seemed to work just fine. I *did* see some ghosting at the top end of the sample rates (1 or 2 MS/s total) when it switched gains, but other than that it worked fine.

 

This very old thread claimed it was possible back in 2005: https://forums.ni.com/t5/Multifunction-DAQ/Different-input-ranges-on-multiple-channels-in-DAQmx/td-p...

 

You just have to create your task using multiple Create Virtual Channel nodes and assign each one a different gain value.


Good catch, I did not know this. I thought muxed cards could not change the gain of each channel fast enough, since they share the same ADC. Thanks!

Message 7 of 11

@santo_13 wrote:

Unfortunately, due to the hardware architecture of DAQ boards, the range cannot be changed while the task is running. Put simply, a bunch of relays or PGA gain bits have to be reconfigured, and the hardware was not designed to do that on the fly.

 

You can refer to the DAQmx manual for which settings can be changed while a task is running.

 

If you want to do this, you can add your own programmable gain amplifier before connecting the signal to the DAQ AI, and configure its gain as and when required without affecting the AI task.

 

Real-time autoranging is pretty complex, and optimizing it depends on the type of signal. One easy alternative is to use two or more AI channels to sample the same signal at different voltage ranges; you compare the measurements and, based on which one is not clipped, choose the one with better resolution in software.

 

Edit: This is one more reason to go for high-dynamic-range ADCs/DACs and avoid range changes entirely, as is typical in sound and vibration applications. Those converters are typically 24-bit delta-sigma types.


Thank you all for the replies. Unfortunately, I do have to use all 80 available channels, so I cannot duplicate any channels.

 

I've accepted that I have to stop and start the task to change the range; there's no way around it. I just need to know the best practice for setting up my architecture. I have to send an array of points to the analog output, and the analog input should measure the voltage and current response from one voltage to the next. My idea was to sync the clocks of both devices and use the CurrentWritePosition of the analog output as a marker for where to read the analog input data, since it is synchronized with CurrentReadPosition. But stopping and starting the input task messes with the CurrentReadPosition.

 

Do you have recommendations for a better architecture?

Message 8 of 11

Can you stop and restart the generation as well? Then you can just share clocks the whole time, so you know they're synchronized. You might also want to look into your synchronization itself: since you're generating a sample and reading one at the same time, it's hard to know which source voltage you're looking at. You may read just before the AO updates, or maybe right after it updates; either way, your system may not be stable.

Message 9 of 11

Adding to BertMcMahan's comments:

 

I often like to drive *both* AO and AI with a counter output, configuring AO to update on the *leading* edge and AI to sample on the *falling* edge of the pulse.  I can then choose a pulse width that accommodates system response and settling time.

 

Pro Tip:  when AI includes multiple channels, you may need to *also* consider controlling the convert clock.  I *think* the default behavior is to spread the conversions across the whole sample interval as much as possible.  So channels toward the bottom of your channel list might take sample #N a little while after AO issues sample #N+1.   To combat that, you can take control of the convert clock and set it to a higher rate, which allows all channels to be converted before the next AO sample.
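As a back-of-the-envelope check (Python; the burst fraction is an assumed design margin, not a DAQmx parameter), the convert clock must run at least N_channels times the sample rate, and faster still if you want all conversions bunched early in the interval:

```python
def convert_clock_rate(sample_rate_hz, n_channels, burst_fraction=0.5):
    """Convert-clock rate needed so that all n_channels conversions for
    sample #N finish within burst_fraction of one sample period,
    i.e. well before AO issues sample #N+1."""
    return n_channels * sample_rate_hz / burst_fraction
```

For example, 80 channels at 1 kS/s per channel, converted within the first half of each sample period, would need a 160 kHz convert clock.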

 

 

-Kevin P

CAUTION! New LabVIEW adopters -- it's too late for me, but you *can* save yourself. The new subscription policy for LabVIEW puts NI's hand in your wallet for the rest of your working life. Are you sure you're *that* dedicated to LabVIEW?
Message 10 of 11