06-14-2007 06:37 PM
06-18-2007 09:21 AM
int32 DAQmxReadBinaryI16 (TaskHandle taskHandle, int32 numSampsPerChan, float64 timeout, bool32 fillMode, int16 readArray[], uInt32 arraySizeInSamps, int32 *sampsPerChanRead, bool32 *reserved);
Reads multiple unscaled, signed 16-bit integer samples from a task that contains one or more analog input channels.
06-19-2007 11:18 AM
Hi jrolston,
Since you are using .NET with NI-DAQmx, you definitely want to stick to
the .NET classes rather than dropping down to the C API.
I'm not sure exactly which method is best for you, so I will describe
both options. The NI-DAQmx API provides both "raw"
and "unscaled" analog read functions.
Unscaled reads return data in the native format of the device, read directly
from the device or buffer without any scaling. In .NET, if this is what you
want, you need to use the AnalogUnscaledReader class instead of the AnalogMultiChannelReader class. Check out this forum
post as well.
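In case it helps, a minimal sketch of the unscaled approach might look like the following. The device name, rate, and sample count are placeholders, and the exact overloads may vary slightly with your Measurement Studio version:

using NationalInstruments.DAQmx;

class UnscaledReadExample
{
    static void Main()
    {
        // Placeholder channel name and timing; adjust for your hardware.
        using (Task task = new Task())
        {
            task.AIChannels.CreateVoltageChannel(
                "Dev1/ai0", "", AITerminalConfiguration.Differential,
                -10.0, 10.0, AIVoltageUnits.Volts);
            task.Timing.ConfigureSampleClock(
                "", 10000.0, SampleClockActiveEdge.Rising,
                SampleQuantityMode.FiniteSamples, 1000);

            // The unscaled reader hands back the device's native 16-bit codes.
            AnalogUnscaledReader reader = new AnalogUnscaledReader(task.Stream);
            short[,] data = reader.ReadInt16(1000);   // [channel, sample]
        }
    }
}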
Raw reads return data in both the native format and the native organization of the device,
read directly from the device or buffer without scaling or reordering. In
.NET, if you want to perform raw I/O, you call the raw I/O methods on
the DaqStream class directly rather than using a reader or writer class.
Refer to the Reading and Writing with the NI-DAQmx .NET Library help
topic in the NI-DAQmx .NET documentation (Start >> All Programs
>> National Instruments >> NI-DAQ >> NI-DAQmx .NET Framework
2.0 Help, then navigate to NI Measurement Studio >> NI Measurement
Studio .NET Class Library >> Using the Measurement Studio .NET Class
Libraries >> Using the Measurement Studio DAQmx .NET Library).
Best Regards,
06-19-2007 02:57 PM
Hi, and thanks for the responses. I stumbled across AnalogUnscaledReader and started using it. I also found this post, which explains how to convert the raw values to scaled voltages (it turns out the encoding is not, as I thought, linear). That post discusses LabVIEW, but in C# you read the DeviceScalingCoefficients property of an AIChannel, which (on a PCI-6259) returns a 4-element array of doubles, one for each scaling coefficient. I also noticed that these vary considerably between cards (I have four PCI-6259s, all with different coefficients); I assume the coefficients are set when the card is calibrated.
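Roughly, my conversion looks like this (just a sketch; the helper name is mine, and I'm assuming the four coefficients apply as an ordinary polynomial, c0 + c1*x + c2*x^2 + c3*x^3, to each raw code x):

// Sketch: apply the device scaling coefficients (lowest order first)
// to turn one raw 16-bit code into a scaled voltage.
static double ScaleSample(short raw, double[] coeffs)
{
    double scaled = 0.0;
    double term = 1.0;            // raw^0, raw^1, raw^2, ...
    for (int i = 0; i < coeffs.Length; i++)
    {
        scaled += coeffs[i] * term;
        term *= raw;
    }
    return scaled;
}

// Usage (names are mine):
//   double[] coeffs = myChannel.DeviceScalingCoefficients;
//   double volts = ScaleSample(rawData[0, 0], coeffs);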
In any case, writing the 16-bit values to disk (vs. the 64-bit double values) saves a lot of time and space.
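For anyone curious, dumping the samples is about this simple (a sketch; SaveRaw and the short[,] layout are my own choices, matching what AnalogUnscaledReader.ReadInt16 returns):

using System.IO;

// Sketch: stream the raw 16-bit codes straight to disk, two bytes per sample,
// instead of converting to 8-byte doubles first.
static void SaveRaw(string path, short[,] data)
{
    using (BinaryWriter writer = new BinaryWriter(File.Open(path, FileMode.Create)))
    {
        for (int ch = 0; ch < data.GetLength(0); ch++)
            for (int s = 0; s < data.GetLength(1); s++)
                writer.Write(data[ch, s]);
    }
}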