Multifunction DAQ


C/C++ Traditional NIDAQ to NIDAQmx AI_VRead Equivalency Issue


I am updating a program from the Traditional (Legacy) NI-DAQ driver to NI-DAQmx. I have run into a limitation with NI-DAQmx that the Traditional driver handled much better.

 

The program requires a 50 ms loop. Using Traditional NI-DAQ, calling the AI_VRead function for each channel returns analog input readings for all of the required samples in under 1 ms (so very fast).

 

Example code:

 

for(int i = 0; i < channels; i++){

    AI_VRead(device, i, gain[i], &voltage[i]); //Reads all within <1ms

}

 

I would like to know if there is a better solution with NI-DAQmx than what I have come up with so far (preferably one as easy as the Traditional NI-DAQ way). My attempts are described below. My conclusion is that the Traditional driver allows a one-time, immediate read of an analog channel directly from the hardware, while NI-DAQmx does not; please correct me if this is wrong, because I would like to keep the same structure as the legacy program I am working with.
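For context, the closest DAQmx equivalent to the AI_VRead pattern that I can see is an on-demand (software-timed) task created once at startup and read inside the loop. This is only a sketch: the channel string "Dev1/ai0:3", the 4-channel count, and the loop bound are placeholders, and error checking is omitted.

#include <NIDAQmx.h>

int main(void)
{
    TaskHandle task = 0;
    float64    voltages[4];   /* one value per channel; 4 channels assumed */
    int32      read = 0;

    /* Create and start the task ONCE, outside the 50 ms loop. */
    DAQmxCreateTask("", &task);
    DAQmxCreateAIVoltageChan(task, "Dev1/ai0:3", "", DAQmx_Val_Cfg_Default,
                             -10.0, 10.0, DAQmx_Val_Volts, NULL);
    /* No DAQmxCfgSampClkTiming call: the task stays software-timed
       (on-demand), so each read takes one immediate sample per channel. */
    DAQmxStartTask(task);

    for (int i = 0; i < 100; i++) {   /* placeholder loop bound */
        DAQmxReadAnalogF64(task, 1, 10.0, DAQmx_Val_GroupByChannel,
                           voltages, 4, &read, NULL);
        /* voltages[0..3] now hold one fresh reading per channel */
    }

    DAQmxStopTask(task);
    DAQmxClearTask(task);
    return 0;
}

Because the task is created and started only once, each read call carries none of the task setup/teardown cost.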

 

Methods I tried:

 

  1. Calling a finite read (DAQmxReadAnalogF64) with the task created and destroyed on every call. This method is too slow, taking roughly 100 ms per call. My understanding is that this is too slow because the TaskHandle and other objects have to be created and destroyed within that time window.

 

Example Code:

void ReadAnalogSample()

{

    int32       error = 0;
    TaskHandle  taskHandle = 0;
    int32       read = 0;
    char        errBuff[2048] = {'\0'};
    float64     data[samples];

 

    DAQmxErrChk (DAQmxCreateTask("",&taskHandle));

    DAQmxErrChk (DAQmxCreateAIVoltageChan(taskHandle,deviceChannels,

        "",DAQmx_Val_Cfg_Default,-10.0,10.0,DAQmx_Val_Volts,NULL));

    DAQmxErrChk (DAQmxCfgSampClkTiming(taskHandle,"",1000.0,

        DAQmx_Val_Rising,DAQmx_Val_FiniteSamps,samples));

    DAQmxErrChk (DAQmxStartTask(taskHandle));

    DAQmxErrChk (DAQmxReadAnalogF64(taskHandle,samples,10.0,

        DAQmx_Val_GroupByChannel,data,samples,&read,NULL));

Error:

    if( DAQmxFailed(error) )

        DAQmxGetExtendedErrorInfo(errBuff,2048);

    if( taskHandle != 0 ) {

        DAQmxStopTask(taskHandle);

        DAQmxClearTask(taskHandle);  // teardown on every call is what costs the ~100 ms

    }

}

 

2. Using a continuous loop with a callback function. The callback approach requires three functions: the setup function, the Every-N-samples callback, and the done callback (this is explained in the NI examples, and I set it up to match my hardware and channels). While these functions work and can sample at the correct rate, the DAQ device seems to need a full second to buffer the data, which causes delayed data and forces me to extract the buffered data in my main loop with a counter.
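What I set up roughly follows the NI callback example pattern below. This is a sketch, not my exact code: "Dev1/ai0", the 1000 Hz rate, and the 20-sample callback interval are illustrative, and error checking is omitted.

#include <NIDAQmx.h>

int32 CVICALLBACK EveryNCallback(TaskHandle taskHandle, int32 everyNsamplesEventType,
                                 uInt32 nSamples, void *callbackData)
{
    float64 data[20];
    int32   read = 0;
    /* Fires whenever 20 new samples have been acquired into the buffer. */
    DAQmxReadAnalogF64(taskHandle, 20, 10.0, DAQmx_Val_GroupByChannel,
                       data, 20, &read, NULL);
    return 0;
}

int32 CVICALLBACK DoneCallback(TaskHandle taskHandle, int32 status, void *callbackData)
{
    /* Called when the task finishes or errors out. */
    return 0;
}

void SetupContinuousTask(void)
{
    TaskHandle task = 0;
    DAQmxCreateTask("", &task);
    DAQmxCreateAIVoltageChan(task, "Dev1/ai0", "", DAQmx_Val_Cfg_Default,
                             -10.0, 10.0, DAQmx_Val_Volts, NULL);
    DAQmxCfgSampClkTiming(task, "", 1000.0, DAQmx_Val_Rising,
                          DAQmx_Val_ContSamps, 1000);
    DAQmxRegisterEveryNSamplesEvent(task, DAQmx_Val_Acquired_Into_Buffer,
                                    20, 0, EveryNCallback, NULL);
    DAQmxRegisterDoneEvent(task, 0, DoneCallback, NULL);
    DAQmxStartTask(task);
}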

Solution
Accepted by topic author rabatah

I didn't respond at first because I'm not fully equipped to answer.  For starters, I've really only programmed DAQ devices using LabVIEW, not text languages.  But I *am* among the relatively few around here who have a hazy fading memory of traditional NI-DAQ so I'll see what I can do.

 

Now some thoughts, no particular order.

 

- my recollection is that certain kinds of software-timed, immediate-mode interactions with DAQ boards *did* execute quicker under traditional NI-DAQ.  High-speed streaming on the other hand has greatly improved in the DAQmx era, though I'm not sure how much credit to give to the driver vs. advances in DAQ hardware and PC bus architectures (ISA-->PCI-->PCIe).

 

- there is very likely a way to use DAQmx to accomplish what you need.  It might not be a direct literal translation from the original code though, some things need to be set up and approached a little differently.

 

- I don't know particulars about callbacks in text languages.  I *do* know the DAQmx driver is much more smart and friendly in some key ways that might make you not *need* a callback.

    A very typical construct I use in LabVIEW is to let DAQmx become my loop timer.  It appears that you want to read data from a 1000 Hz acquisition task every 50 msec.  In LabVIEW, I would configure and start my task before entering my loop.  Inside the loop I would request 20 samples (50 msec worth) per iteration.  DAQmx manages the timing such that it will block until those 20 samples are available and then return and deliver them to me.  Every 50 msec, I get another 20 samples in another iteration of the loop.

   Under traditional NI-DAQ, this construct would have burned CPU and blocked other code from access to other DAQ tasks.  DAQmx solved both those issues, enabling such a simple construct to be used for effective loop timing.
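In the C API, that loop-timer construct might look roughly like this. It is a sketch under assumptions: "Dev1/ai0" and the loop bound are placeholders, and error checking is omitted for brevity.

#include <NIDAQmx.h>

int main(void)
{
    TaskHandle task = 0;
    float64    data[20];
    int32      read = 0;

    /* Configure and start the task ONCE, before the loop. */
    DAQmxCreateTask("", &task);
    DAQmxCreateAIVoltageChan(task, "Dev1/ai0", "", DAQmx_Val_Cfg_Default,
                             -10.0, 10.0, DAQmx_Val_Volts, NULL);
    /* 1000 Hz continuous acquisition; the hardware keeps the clock. */
    DAQmxCfgSampClkTiming(task, "", 1000.0, DAQmx_Val_Rising,
                          DAQmx_Val_ContSamps, 1000);
    DAQmxStartTask(task);

    for (int i = 0; i < 100; i++) {   /* placeholder loop bound */
        /* Blocks until 20 samples (50 ms worth) are available, so the
           read call itself paces the loop at 50 ms per iteration. */
        DAQmxReadAnalogF64(task, 20, 10.0, DAQmx_Val_GroupByChannel,
                           data, 20, &read, NULL);
        /* process data here */
    }

    DAQmxStopTask(task);
    DAQmxClearTask(task);
    return 0;
}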

 

- the 1 full second delay you see with your callback is *probably* due to something in your code and unnecessary.  Perhaps there's a default value you aren't overriding somewhere?  Under LabVIEW, there's a roughly equivalent mechanism known as DAQmx Events and I've not known them to show appreciable overhead like that.

 

 

-Kevin P

ALERT! LabVIEW's subscription-only policy coming to an end (finally!). Permanent license pricing remains WIP. Tread carefully.

Hello Kevin,

 

Thank you for the reply. As a start, I will keep working on this and get back to you. I have some responses below that you may have better insight on than I do. I kept your comments prefixed with a - mark and put mine starting with a * mark.

 

- my recollection is that certain kinds of software-timed, immediate-mode interactions with DAQ boards *did* execute quicker under traditional NI-DAQ. High-speed streaming on the other hand has greatly improved in the DAQmx era, though I'm not sure how much credit to give to the driver vs. advances in DAQ hardware and PC bus architectures (ISA-->PCI-->PCIe).

 

*The card I am using is a PCI-6033E.

 

- there is very likely a way to use DAQmx to accomplish what you need. It might not be a direct literal translation from the original code though, some things need to be set up and approached a little differently.

 

*Yes, I figured as much. So far I understand that there will need to be a DAQmx task-creation call and a task-start call, as well as multiple configuration calls to set the correct parameters on the DAQ device.

 

- A very typical construct I use in LabVIEW is to let DAQmx become my loop timer. It appears that you want to read data from a 1000 Hz acquisition task every 50 msec. In LabVIEW, I would configure and start my task before entering my loop. Inside the loop I would request 20 samples (50 msec worth) per iteration. DAQmx manages the timing such that it will block until those 20 samples are available and then return and deliver them to me. Every 50 msec, I get another 20 samples in another iteration of the loop.

 

*My guess is that I set up the task wrong and need to change the sample rate from 1000 Hz to a different value, as well as configure the buffer correctly. Since I am looking for a sample every ~50 ms, my frequency would be 20 Hz, so I will start from there. I have a few ideas on how to accomplish what I am looking for, so once I figure it out I will get back to you with the code. If I get stuck I will ask some more questions.


I have migrated my NI PXI-5922 (2 analog inputs, 1 analog trigger input, and 2 digital trigger inputs on the AUX DIN connector) to the PXIe NI-1090. I have come to realize that this system does not support analog triggering from NI-DAQmx C++ programming, while everything works fine in InstrumentStudio. I can capture the signals at the input without triggering, but in my application I need to trigger. Here is the program I wrote to test the triggering capability of the NI system. All drivers and software have been updated.

 

#include <iostream>
#include <string>
#include <NIDAQmx.h>

void checkAnalogTriggerSupport(const std::string& deviceName) {
    int32 error = 0;
    bool32 data = 0;
    char errBuff[2048] = { '\0' };

    // Call DAQmxGetDevAnlgTrigSupported to check for analog trigger support
    error = DAQmxGetDevAnlgTrigSupported(deviceName.c_str(), &data);

    if (DAQmxFailed(error)) {
        // Retrieve extended error information
        DAQmxGetExtendedErrorInfo(errBuff, 2048);
        std::cerr << "DAQmx Error: " << errBuff << std::endl;
    } else if (data) {
        std::cout << "Device " << deviceName << " supports analog triggering." << std::endl;
    } else {
        std::cout << "Device " << deviceName << " does not support analog triggering." << std::endl;
    }
}

int main() {
    std::string deviceName = "Dev1"; // The actual device name

    checkAnalogTriggerSupport(deviceName);

    return 0;
}

 

This is the result

 

DAQmx Error: Device does not support this property.

Device: Dev1
Property: DAQmx_Dev_AnlgTrigSupported

Status Code: -200197

 

The other question is: what is the name of the BNC trigger input on the device that InstrumentStudio uses? It shows Dev1/TRIG, but when I use that string, the driver says it is not valid.
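One way to find the exact terminal string the driver expects may be to enumerate the device's terminals. This is a sketch assuming the "Dev1" name from above; terminal names typically come back fully qualified, e.g. "/Dev1/PFI0".

#include <iostream>
#include <NIDAQmx.h>

int main() {
    // Buffer for the comma-separated list of terminal names.
    char terminals[4096] = { '\0' };

    int32 error = DAQmxGetDevTerminals("Dev1", terminals, sizeof(terminals));
    if (DAQmxFailed(error)) {
        char errBuff[2048] = { '\0' };
        DAQmxGetExtendedErrorInfo(errBuff, 2048);
        std::cerr << "DAQmx Error: " << errBuff << std::endl;
        return 1;
    }

    std::cout << "Terminals on Dev1:\n" << terminals << std::endl;
    return 0;
}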

Any ideas?
