Multifunction DAQ

How to get raw ADC values from DAQ board?

Hi,
 
I am using two NI PCI-6254 DAQ boards with LabVIEW and NI-DAQmx in a data-logging application that reads 64 analog input channels and writes them to files on my hard drive.  I have noticed that the output of the Analog Sample NChan NSamp VI is an 8-byte floating-point (DBL) type.  I understand this results from the VI converting the raw ADC counts produced by the ADCs on the DAQ board into real numbers that fit the input range I specified earlier.  I would really rather LabVIEW skip the conversion step and just give me the raw ADC counts.  That way, the processor has less work to do and I can store only 2 bytes per sample rather than 8.  (I will be generating a ton of data in this application and would like to keep the log files as small as possible.)  Is there an easy way to turn off the automatic scaling in LabVIEW and have the Analog Sample VI give me 16-bit integer (I16) data instead?
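To make the goal concrete, here is roughly what I am after, sketched against the NI-DAQmx C API.  I am working in LabVIEW, so take this as my best understanding rather than tested code -- the device name, channel count, and rate are placeholders for my setup:

/* Sketch: read unscaled I16 counts with the NI-DAQmx C API instead of
 * scaled DBL values.  Error checking omitted for brevity. */
#include <stdio.h>
#include <NIDAQmx.h>

#define SAMPS_PER_CHAN 1000
#define NUM_CHANS      64

int main(void)
{
    TaskHandle task = 0;
    static int16 data[NUM_CHANS * SAMPS_PER_CHAN];  /* 2 bytes per sample */
    int32 read = 0;

    DAQmxCreateTask("", &task);
    /* Same +/-2 V input range I set in LabVIEW */
    DAQmxCreateAIVoltageChan(task, "Dev1/ai0:63", "",
                             DAQmx_Val_Cfg_Default, -2.0, 2.0,
                             DAQmx_Val_Volts, NULL);
    DAQmxCfgSampClkTiming(task, "", 10000.0, DAQmx_Val_Rising,
                          DAQmx_Val_FiniteSamps, SAMPS_PER_CHAN);
    DAQmxStartTask(task);

    /* Returns the raw ADC codes; no scaling to floating point */
    DAQmxReadBinaryI16(task, SAMPS_PER_CHAN, 10.0,
                       DAQmx_Val_GroupByChannel, data,
                       NUM_CHANS * SAMPS_PER_CHAN, &read, NULL);

    printf("Read %ld samples per channel; first code = %d\n",
           (long)read, (int)data[0]);
    DAQmxClearTask(task);
    return 0;
}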
 
And on a related note, I am a little confused about what exactly LabVIEW does when it scales the ADC values into a real number range with units of volts.  Earlier today, I generated a 20 Hz sine wave with amplitude 1.4 V on Dev1/ai0 and tried to record 5 seconds' worth of measurements.  I set the input high and low limits to +2 V and -2 V in LabVIEW using one of the NI-DAQmx VIs and recorded the DBL values that were produced.  When I read back my data samples, they looked like:
 
      0  0  0  0  1  1  1  1  1  2  2  2  2  2  2  2  2  2  1  1  1  1  1  0  0  0  0  -1  -1  -1  -1  -1  -2  -2  -2  -2  -2  -2  -2  -2  -2  -1  -1  -1  -1  -1  0  0  0  0 ...
 
In other words, the values written to the data file came only from the set {-2, -1, 0, 1, 2} and not from the continuous range [-2, 2].  What could be causing this?  I read somewhere on the NI support site that some DAQ boards have the ability to not report a specified number of least significant bits.  That is, if you have 16-bit ADCs but are measuring a signal that has 10 bits of noise in it, you can have the DAQ board report only the 6 most significant bits.  Could that be what's going on here?  I haven't messed with these settings on my board.  (Actually, I didn't even know they were there.)  Or could it have to do with the way LabVIEW does its scaling?  How can I just get the raw 16 bits of binary data from the ADCs, without any automatic scaling or dropping of LSBs?
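For scale: if I really were getting all 16 bits over the +/-2 V range, one ADC code would be about 61 uV, so a 1.4 V sine should produce thousands of distinct readings, not five.  Here is the back-of-the-envelope check I did (plain C, assuming an ideal 16-bit converter), which also shows how casting the scaled DBLs to an integer collapses them to a handful of values much like my file shows:

/* Back-of-the-envelope: LSB size on a +/-2 V, 16-bit range, and the
 * effect of converting scaled DBL readings to an integer type. */
#include <stdio.h>
#include <math.h>

#ifndef M_PI
#define M_PI 3.14159265358979323846
#endif

int main(void)
{
    double lsb = (2.0 - (-2.0)) / 65536.0;   /* volts per ADC code */
    printf("1 LSB on a +/-2 V range: %.1f uV\n", lsb * 1e6);

    /* One cycle of a 1.4 V sine: full DBL value vs. integer cast */
    for (int i = 0; i < 8; i++) {
        double v = 1.4 * sin(2.0 * M_PI * i / 8.0);
        printf("DBL: %+.6f V   as int: %+d\n", v, (int)v);
    }
    return 0;
}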
 
Thank you in advance for any help.
 
-- Robert
 
PS - I appear to have posted part of this message 2 or 3 times.  When I was about to type out the line with all the 0's, 1's, 2's, -1's, and -2's, I hit Tab to indent.  The cursor didn't tab over, so I hit the space bar a few times to move the cursor.  Apparently, when I hit Tab, the focus moved to the submit button, and when I hit the space bar repeatedly, the button got pressed repeatedly, submitting the post 2 or 3 times.  I am sorry about this -- I really hate message board clutter.  If you can delete the other two posts (they're the incomplete ones, identical to the first half of this one), please do.  Thanks!
  

Message Edited by rtnewsome on 03-06-2006 02:08 AM

Message 1 of 2

The data type from DAQmx Read can easily be changed. Do you see the description of the instance type below the function? The default is "Analog DBL 1Chan 1Samp", but you can click the selector arrow on the right side and change this to Unscaled if you wish.
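If you do log unscaled data, you will eventually want volts back.  One way (a sketch -- check that your DAQmx version exposes this property, and the channel name here is just an example) is to read the device scaling coefficients, save them alongside the raw file, and apply them offline:

/* Recover volts from raw I16 codes using the device's polynomial
 * scaling coefficients (M-series).  Sketch; error checking omitted. */
#include <stdio.h>
#include <NIDAQmx.h>

void print_volts(TaskHandle task, const int16 *codes, int n)
{
    float64 c[4] = {0};
    DAQmxGetAIDevScalingCoeff(task, "Dev1/ai0", c, 4);

    for (int i = 0; i < n; i++) {
        double x = (double)codes[i];
        /* volts = c0 + c1*x + c2*x^2 + c3*x^3 */
        double v = c[0] + x * (c[1] + x * (c[2] + x * c[3]));
        printf("%d -> %.6f V\n", (int)codes[i], v);
    }
}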

The absence of data to the right of the decimal point is probably because you've converted to an integer type either before or during the file write. If you probe the output of DAQmx Read directly, what kind of readings do you get? And what are you using to write to the file?
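And if the culprit is an integer conversion in your file path, note that you can still hit your 2-bytes-per-sample goal by writing the unscaled I16 buffer straight to a binary file, something like this (plain C sketch, error handling trimmed):

/* Append raw I16 samples to a binary log file: 2 bytes per sample,
 * no text formatting and no conversion step. */
#include <stdio.h>

int log_raw(const char *path, const short *data, size_t nsamps)
{
    FILE *fp = fopen(path, "ab");    /* append, binary mode */
    if (!fp)
        return -1;
    size_t written = fwrite(data, sizeof(short), nsamps, fp);
    fclose(fp);
    return written == nsamps ? 0 : -1;   /* 0 on success */
}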

Message 2 of 2