LabVIEW 2011 and DAQmx 9.4 AI Much Slower Than Old Traditional DAQ

I have an application that I originally wrote back in 2000 using LV 5.0 and Traditional NI-DAQ in conjunction with a PCI-MIO-16E-1 DAQ card. This application measures the L/R time constant of the windings of a DC brushless motor. Via some switching transistors controlled by the DAQ card's digital I/O, I apply a voltage to the motor coils. An analog current-sense circuit measures the current rise in the motor coil, and the voltage from the current sense is digitized at 1.25 MHz using the PCI-MIO-16E-1. This process is repeated for each of the 6 possible phase energization states in 1-degree increments over 90 degrees of mechanical rotation. Over the past 11 years this application has run well on a Dell GX-240 3 GHz single-core machine with 512 MB of memory.

 

Recently, we decided to update this code to DAQmx and LV 2011 to modernize it to current standards. I installed NI-DAQ 9.4 and converted all the DAQ calls in the code to DAQmx. I also revised the code to pre-allocate array space and replace array elements in place to streamline memory usage. Overall I expected at least equal performance to the old Traditional DAQ. What I found is that the original test time over 90 degrees of rotation, about 19 minutes, has now more than doubled to around 43 minutes. The L/R rise-time measurements are the same as before, but the time between phase-pair combinations has increased dramatically. It seems to be taking considerably longer for NI-DAQ to hand off the measurement data, which consists of 6000 individual data points. The acquisition is done using the DAQmx triggered AI from the DAQmx examples, using an analog trigger from the input channel. Once the data is captured it is low-pass filtered and the time to the first time constant is calculated. The code then captures another rise time for the next phase combination. One other change I made was to replace the PCI-MIO-16E-1 DAQ card with an M Series 6251. This had no effect on the speed at which the overall program executed.
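In pseudo-C terms (the actual code is a LabVIEW VI based on the shipping DAQmx triggered-AI example; the channel name, trigger level, and error handling here are just placeholders), each phase measurement does roughly this:

    #include <NIDAQmx.h>

    /* One rise-time capture: 6000 points at 1.25 MS/s, started by an
     * analog edge trigger on the current-sense input. Note the task is
     * created and torn down on every call, i.e. once per phase step.
     * Error checking omitted for brevity. */
    int32 acquire_rise(float64 data[], int32 nsamps)
    {
        TaskHandle task = 0;
        int32 read = 0;

        DAQmxCreateTask("", &task);
        DAQmxCreateAIVoltageChan(task, "Dev1/ai0", "", DAQmx_Val_RSE,
                                 -10.0, 10.0, DAQmx_Val_Volts, NULL);
        DAQmxCfgSampClkTiming(task, NULL, 1.25e6, DAQmx_Val_Rising,
                              DAQmx_Val_FiniteSamps, nsamps);
        DAQmxCfgAnlgEdgeStartTrig(task, "Dev1/ai0",
                                  DAQmx_Val_RisingSlope, 0.5);
        DAQmxStartTask(task);
        DAQmxReadAnalogF64(task, nsamps, 10.0, DAQmx_Val_GroupByChannel,
                           data, (uInt32)nsamps, &read, NULL);
        DAQmxClearTask(task);  /* task torn down every iteration */
        return read;
    }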

 

My question is: has anyone else noticed a significant impact on the time required to hand off arrays of data between NI-DAQ and the calling LabVIEW program? This 2x increase in test time is unacceptable.


Hello Johnfr

 

It is hard to say; it would depend on the quality of the programming, the OS, and the PC hardware. In this case, you can open the main VI of your application, go to Tools->Profile->Performance and Memory, and try to find out which part of the code is taking so long.

 

Regards

 

Mart G


Hi johnfr,

 

Depending on how often your program commits the task, the analog trigger commit time might be a large part of the slowdown you're seeing. When you commit an M Series task that uses an analog trigger, DAQmx delays for 250 ms to allow the outputs of the trigger PWMs to settle. To avoid incurring this cost on every iteration, you should commit the task up front, as in this example:

 

Acquire & Graph Voltage-Internal Clock-Hardware Trigger Restarts
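In DAQmx C API terms, the pattern from that example looks roughly like this (the channel, rate, and trigger parameters below are placeholders, not your actual settings):

    /* Configure and commit the task once, then start/stop inside the
     * loop so the 250 ms analog-trigger settling delay is paid only
     * once up front instead of on every measurement. */
    TaskHandle task = 0;
    float64 data[6000];
    int32 read = 0;

    DAQmxCreateTask("", &task);
    DAQmxCreateAIVoltageChan(task, "Dev1/ai0", "", DAQmx_Val_RSE,
                             -10.0, 10.0, DAQmx_Val_Volts, NULL);
    DAQmxCfgSampClkTiming(task, NULL, 1.25e6, DAQmx_Val_Rising,
                          DAQmx_Val_FiniteSamps, 6000);
    DAQmxCfgAnlgEdgeStartTrig(task, "Dev1/ai0",
                              DAQmx_Val_RisingSlope, 0.5);
    DAQmxTaskControl(task, DAQmx_Val_Task_Commit);  /* PWMs settle here */

    for (int phase = 0; phase < 6; phase++) {
        DAQmxStartTask(task);   /* fast: task is already committed */
        DAQmxReadAnalogF64(task, 6000, 10.0, DAQmx_Val_GroupByChannel,
                           data, 6000, &read, NULL);
        DAQmxStopTask(task);    /* returns to the committed state */
        /* ... filter, compute L/R time constant, switch phases ... */
    }
    DAQmxClearTask(task);

The key point is that the task never drops below the committed state between measurements, so the trigger circuitry is reconfigured only once.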

 

However, if the PCI-MIO-16E-1 displays the same amount of slowdown, it might be due to something else. The PCI-MIO-16E-1 has trigger DACs, which settle more quickly than the PCI-6251's trigger PWMs. Still, paying attention to how often your program creates, reserves, and commits the task is important with E Series too.

 

I don't think time spent waiting in DAQmx DLLs shows up in the VI profiler. To characterize where your program is spending its time, you'll probably get better results using the Elapsed Time Express VI or the Tick Count (ms) function.
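For example, something like this (sketched in C with a POSIX monotonic clock; in LabVIEW, wire Tick Count (ms) before and after the suspect call and subtract):

    #include <stdio.h>
    #include <time.h>

    /* Millisecond timestamp, analogous to Tick Count (ms) */
    static double now_ms(void)
    {
        struct timespec ts;
        clock_gettime(CLOCK_MONOTONIC, &ts);
        return ts.tv_sec * 1000.0 + ts.tv_nsec / 1e6;
    }

    #define TIMED(label, stmt) do {                          \
        double t0 = now_ms();                                \
        stmt;                                                \
        printf("%-20s %8.2f ms\n", (label), now_ms() - t0);  \
    } while (0)

    /* Usage: TIMED("start", DAQmxStartTask(task)); */

Bracketing each DAQmx call this way will show whether the time is going into start/commit, the read itself, or your analysis code.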

 

Brad

---
Brad Keryan
NI R&D