
LabVIEW


Offset issue with Expansion module

I am using NI 9264 and NI 9229 expansion modules in an NI-9151 cRIO expansion chassis connected to an NI-7854R FPGA target on connector 2. I have a 16-bit analog input and output on connector 0, which works as expected. I am reading the data as raw values instead of calibrated values and then converting to voltage. When I apply 10 V on the AO (multiplying by 3276.7), I measure around 10.5 V, and on the AI, when I apply 10 V, I read only around 9.5 V (raw value / 139800.125). Is there anything I am missing here, or is there any configuration/calibration that needs to be done to fix this? Any help is highly appreciated.

-----

The best solution is the one you find it by yourself
Message 1 of 7

Can someone help me understand why this is happening?

Message 2 of 7

The manual for the 9264 has the following (very curious) information:

   Output range

     Nominal    ±10 V
     Minimum    ±10.35 V
     Typical    ±10.5 V
     Maximum    ±10.65 V

 

It would appear, therefore, that your device is working "as designed", producing the "typical" output voltage.  I'm unsure why this module is deliberately designed to produce more voltage than the "nominal" output (except, perhaps, for cost considerations).  One thing this does let you do is get the "nominal" voltage without requiring amplifiers, either by (a) building a simple voltage divider yourself (two resistors) or (b) "self-calibrating" each channel: figure out what 16-bit input value produces exactly 10.0 V, then pre-scale the values you send to the device to convert its output from the "Typical" (or, better, "Actual") output range to the nominal range.
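The pre-scaling idea in (b) is just a ratio applied before the raw code is written. Here is a minimal sketch of the arithmetic in Python (for illustration only; the real conversion would live in the LabVIEW FPGA code, and the 10.5 V figure is the "typical" value from this thread, which you would replace with your channel's measured full scale):

```python
NOMINAL_FS = 10.0    # volts the full-scale code is *supposed* to produce
MEASURED_FS = 10.5   # volts the channel was *measured* to produce (assumption)

def volts_to_code(volts, measured_fs=MEASURED_FS):
    """Convert a desired output voltage to a pre-scaled 16-bit raw code."""
    # The nominal conversion would be volts * 32767 / 10.0; using the
    # measured full scale instead compensates the channel's gain error.
    code = round(volts * 32767.0 / measured_fs)
    # Clamp to the valid signed 16-bit range.
    return max(-32768, min(32767, code))

# Asking for 10.0 V now writes a smaller code, so the over-ranging DAC
# lands on 10.0 V instead of 10.5 V.
print(volts_to_code(10.0))  # 31207 instead of the nominal 32767
```

Each channel would get its own `MEASURED_FS`, since the actual full scale can differ channel to channel.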

 

Note that the on-line documentation I found (with the above table) doesn't mention being able to "trim" the Output Range to make it "Nominal", but perhaps such an adjustment does, in fact, exist.  Take a look.

 

Bob Schor

Message 3 of 7

Thanks Bob_Schor,

 

I have manually adjusted the 16-bit value to get the expected voltage. I understand there is a maximum of 10.65 V, but will this cause a consistent drift in the voltage? Here is what I am getting: for 5 V I get 5.25 V, and for 10 V I get 10.5 V, which suggests a proportional (gain) error rather than a fixed offset. I will explore more and post if I find any other useful information on this.
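A quick check on the two readings above (using Python just to show the arithmetic): both are consistent with one multiplicative gain factor, while a constant additive offset would not fit both points.

```python
# (requested, measured) pairs reported in this thread
pairs = [(5.0, 5.25), (10.0, 10.5)]

gains = [measured / requested for requested, measured in pairs]
offsets = [measured - requested for requested, measured in pairs]

# The ratio is 1.05 at both points, so one gain factor explains the data;
# the additive differences (0.25 V and 0.5 V) do not agree, so it is not
# a fixed offset.
print(gains)    # [1.05, 1.05]
print(offsets)  # [0.25, 0.5]
```

That 1.05 factor matches the "typical" ±10.5 V output range from the manual, so a single per-channel scale factor should correct it across the whole range.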

Message 4 of 7

I think the reason for the values in the manual that Bob Schor posted is to guarantee that, despite errors and drift from temperature or aging, the device will always be able to produce a signal of exactly 10 volts. If the full-scale range were designed to be exactly 10.000 V but some error caused a particular device to have an actual range of 9.973 V, that specific device could not generate a 10 V signal. For someone characterizing D/A converters on a production line, this is extremely important. The device is designed in such a way that complete coverage of the nominal range is guaranteed. The tradeoff is that calibration is required to account for the actual range of a particular device. Another tradeoff is that slightly fewer than the maximum possible number of binary codes will be used over the nominal range.
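The "slightly fewer binary codes" tradeoff is easy to quantify. A back-of-envelope calculation (Python, using the "typical" ±10.5 V actual range from the manual as the assumed full scale):

```python
# A 16-bit DAC has 65536 codes spread over its *actual* range.
# If the actual range is +/-10.5 V, only the codes that fall inside the
# nominal +/-10 V range are usable for nominal-range signals.
total_codes = 2 ** 16
usable = round(total_codes * 10.0 / 10.5)

print(usable)                 # 62415 of 65536 codes
print(total_codes - usable)   # 3121 codes (~4.8%) sit outside +/-10 V
```

So the guaranteed coverage costs only about 5% of the resolution, which is usually a good trade against a device that might fall short of full scale.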

 

Lynn

 

Message 5 of 7

To paraphrase Lynn's response, you can either scale your DAC using hardware (resistors, trim-pots) or software (multiply by a scale factor).  It just occurred to me as I was writing this that you might be able to use MAX and DAQmx to set up the (software) scaling right in MAX.  I've not used this feature, but you can tell MAX that you want to save "scaled values" (for example, instead of saving a 16-bit number, you would save a Dbl "Volts" that ranges from (nominally) -10.0 to 10.0).  I've not done this myself, but the Help in MAX (look for Scales or Custom Scaling) should show you how to do this.  Of course, you'll want to experiment and see if this works!  If we assume that your DAC is linear, you should be able to come up with the scaling equation by simply measuring the two voltages corresponding to 32767 (nominal +10) and -32767 (or -32768, nominal -10).  Remember, however, if you do this, you'll need to be sure your code now sends "volts" (Dbls in the range -10 to +10) to the DAC instead of I16 values.
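The two-point scaling equation described above can be sketched as follows (Python, for illustration; function and variable names are my own, and the ±10.5 V measurements are the example values from this thread, to be replaced with your DMM readings):

```python
def two_point_scale(code_lo, v_lo, code_hi, v_hi):
    """Fit volts = gain * code + offset from two measured points."""
    gain = (v_hi - v_lo) / (code_hi - code_lo)
    offset = v_lo - gain * code_lo
    return gain, offset

def volts_to_code(volts, gain, offset):
    """Invert the linear fit: find the raw code that produces `volts`."""
    return round((volts - offset) / gain)

# Write the extreme codes, measure the actual outputs (here: the ~5%
# gain error reported in this thread), then fit the line once per channel.
gain, offset = two_point_scale(-32767, -10.5, 32767, 10.5)

# Now a request for 10.0 V maps to the corrected raw code.
print(volts_to_code(10.0, gain, offset))  # 31207
```

Since this device shows a pure gain error, the fitted offset comes out to zero, but measuring both endpoints also catches any zero-code offset the channel might have.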

 

BS

Message 6 of 7

Thanks Lynn and Bob_Schor for the responses.

 


@Bob_Schor wrote:

It just occurred to me as I was writing this that you might be able to use MAX and DAQmx to set up the (software) scaling right in MAX.  I've not used this feature, but you can tell MAX that you want to save "scaled values" (for example, instead of saving a 16-bit number, you would save a Dbl "Volts" that ranges from (nominally) -10.0 to 10.0).  I've not done this myself, but the Help in MAX (look for Scales or Custom Scaling) should show you how to do this.  Of course, you want to "experiment" and see if this works!  If we assume that your DAC is linear, you should be able to come up with the Scaling Equation by simply measuring the two voltages corresponding to 32767, nominal +10, and -32767 (or 32768), nominal -10.  Remember, however, if you do this, you'll need to be sure your code now sends "volts" (Dbls in the range -10 to +10) to the DAC instead of I16 values.

 

BS


Since I am using the FPGA card with RT and a cRIO expansion module, I cannot set the scaling for my data acquisition in MAX (as far as I know). I am planning to perform a calibration to remove this scaling issue.

Message 7 of 7