LabVIEW


How to speed up for loop

I am new to LabVIEW programming.  I created a very simple VI that toggles one digital line in a For Loop as the step input to a stepper motor driver.  I am running this VI on a myDAQ USB module.  I set the loop to execute 1000 times; it currently takes about 6.5 seconds to complete, or about 6.5 ms per iteration.  Attached is the VI.

Am I missing something big here?  The For Loop is very simple but is running very slowly.  I was hoping to toggle the digital output as fast as every 0.1 ms, so I am way off.

Thanks

Message 1 of 10

Next time, please attach the VI file itself rather than an image. We cannot debug an image.

It takes some time for the host PC to send USB packets to the device and wait for the response, so 6.5 ms per iteration is reasonable for a USB-based device. This is the limitation of software-timed tasks.

You should use hardware timing. There are two ways:

1. Use the analog output. The myDAQ supports an update rate of 200 kS/s. See the shipping example <LabVIEW>\examples\DAQmx\Analog Output\Voltage - Finite Output.vi

2. Use a counter to generate a sample clock for the digital task. I am not sure whether this is supported on the myDAQ, but it is worth a try. See How To Use Counters To Generate a Sample Clock For a DAQmx Digital I/O Task

In both cases, the limitation of about 5 ms per DAQmx VI call still applies. If you are going to output a sample every 0.1 ms (10 kHz), make sure you write multiple data points per DAQmx Write call. For example, you can send 100 data points per DAQmx Write every 10 ms.
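The buffering arithmetic above can be sketched in a few lines of Python. This is only an illustration of the chunking math; `daqmx_write` is a hypothetical stand-in for the real DAQmx Write VI, not an actual API call.

```python
# Sketch: amortizing per-call overhead by writing many samples per call.
# Only the buffering arithmetic is the point here; sending the blocks to
# hardware would be done by DAQmx Write.

SAMPLE_RATE_HZ = 10_000        # 0.1 ms per sample, as requested
SAMPLES_PER_WRITE = 100        # one write every 100 / 10_000 s = 10 ms
TOTAL_SAMPLES = 1000           # e.g. 500 full high/low toggles

def make_toggle_pattern(n):
    """Alternating True/False samples, one toggle per sample."""
    return [i % 2 == 0 for i in range(n)]

def chunk(samples, size):
    """Split the pattern into fixed-size blocks, one per Write call."""
    return [samples[i:i + size] for i in range(0, len(samples), size)]

pattern = make_toggle_pattern(TOTAL_SAMPLES)
blocks = chunk(pattern, SAMPLES_PER_WRITE)

# 1000 samples in blocks of 100 -> 10 calls instead of 1000, so the
# ~5 ms per-call overhead is paid 10 times rather than 1000 times.
print(len(blocks))       # 10
print(len(blocks[0]))    # 100
```
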

-------------------------------------------------------
Control Lead | Intelline Inc
Message 2 of 10

You can embed images directly, no need for PDFs.

 

I can't see any code that measures the execution time of the loop, so your stopwatch measurement includes configuring the task and killing it. There is no way to tell how long one iteration (two digital writes) actually takes. We can't even tell if you disabled debugging.

 

And yes, as has been mentioned, software-timed single points are not fast, whatever "fast" means.
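The measurement point applies in any language. Here is a minimal pure-Python sketch of where the timestamps belong; `configure_task`, `toggle_line`, and `cleanup_task` are hypothetical stand-ins for DAQmx setup, the per-iteration write, and teardown.

```python
import time

# Hypothetical stand-ins for task setup, the per-iteration write, and
# teardown; the point is that only the loop sits between the timestamps.
def configure_task():   time.sleep(0.01)   # NOT part of loop time
def toggle_line(state): pass               # one software-timed write
def cleanup_task():     time.sleep(0.01)   # NOT part of loop time either

ITERATIONS = 1000

configure_task()
t0 = time.perf_counter()          # start timing AFTER configuration
for i in range(ITERATIONS):
    toggle_line(i % 2 == 0)
t1 = time.perf_counter()          # stop timing BEFORE teardown
cleanup_task()

per_iteration_ms = (t1 - t0) * 1000 / ITERATIONS
print(f"{per_iteration_ms:.4f} ms per iteration")
```

Timing the whole program from the outside, as with a stopwatch, folds the two `sleep` calls (task creation and teardown) into the loop estimate.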

Message 3 of 10

Thanks for the feedback.  I am surprised that the digital I/O is so slow.

I have two DAQmx Write VI calls inside the For Loop, and I used the Digital 1D Bool 1Chan 1Samp instance.

Should I be using the Digital 1D U8 1Chan NSamp instance?  It sounds like I can send a 1D array with a number of samples for each call to DAQmx Write().  Is this a possible way to go to speed things up?

One suggestion was to use the digital-to-analog converter and send a waveform to the channel.  Is this really a valid approach to simulate basic TTL level digital IO?  Seems like a hack that really stretches it.

Creating a sample clock from a counter sounds promising.  I looked at the example and it seems like it will be a bit complicated to implement, but I will give it a try.  Any suggestions welcome.

 

barney99_0-1683052579881.png

 

 

Message 4 of 10

Should I be using the Digital 1D U8 1Chan NSamp instance?  It sounds like I can send a 1D array with a number of samples for each call to DAQmx Write().  Is this a possible way to go to speed things up?

You need sample clock timing to use the NSamp instance. With the default on-demand timing, you can only write one sample per DAQmx Write call. To use sample clock timing for digital output, you need a counter to generate the clock.

 

One suggestion was to use the digital-to-analog converter and send a waveform to the channel.  Is this really a valid approach to simulate basic TTL level digital IO?  Seems like a hack that really stretches it.

Similarly, the idea is to use a sample clock in a hardware-timed task. An AO task has built-in sample clock timing, so you can configure the AO task to generate its own sample clock without using a counter.

Only the high-end X Series devices have a dedicated sample clock timing engine for digital tasks.
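The AO approach boils down to building a 0 V / 5 V buffer once and letting the device's sample clock play it out. A minimal Python sketch of the buffer construction, with the hardware call itself left as a comment (the `nidaqmx` snippet and the `"myDAQ1/ao0"` channel name are assumptions, not tested code):

```python
# Build a 0 V / 5 V "square wave" buffer for a hardware-timed AO task.
# The buffer is handed to DAQmx Write once; the device's own sample
# clock then outputs it at the configured rate, with no per-sample
# USB round trip.

SAMPLE_RATE_HZ = 10_000          # 0.1 ms per sample
SAMPLES_PER_HALF_PERIOD = 1      # toggle every sample
N_SAMPLES = 1000

def square_wave(n, samples_per_half, high=5.0, low=0.0):
    """TTL-like waveform: `samples_per_half` highs, then lows, repeating."""
    return [high if (i // samples_per_half) % 2 == 0 else low
            for i in range(n)]

waveform = square_wave(N_SAMPLES, SAMPLES_PER_HALF_PERIOD)

# With real hardware (using the nidaqmx Python package, roughly):
#   task.ao_channels.add_ao_voltage_chan("myDAQ1/ao0")
#   task.timing.cfg_samp_clk_timing(SAMPLE_RATE_HZ, samps_per_chan=N_SAMPLES)
#   task.write(waveform, auto_start=True)

print(waveform[:4])   # [5.0, 0.0, 5.0, 0.0]
```
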

-------------------------------------------------------
Control Lead | Intelline Inc
Message 5 of 10

@barney99 wrote:

Thanks for the feedback.  I am surprised that the digital IO is so slow.

It's not really slow.  It's the gathering of the information that is.

 

Edit: Or maybe that's what you meant.  Oops.

Bill
CLD
(Mid-Level minion.)
My support system ensures that I don't look totally incompetent.
Proud to say that I've progressed beyond knowing just enough to be dangerous. I now know enough to know that I have no clue about anything at all.
Humble author of the CLAD Nugget.
Message 6 of 10

@barney99 wrote:

Thanks for the feedback.  I am surprised that the digital IO is so slow.

 


The delay comes from the communication latency of the USB packets. With a PCI/PXI-based device, the execution time is below 1 ms.

-------------------------------------------------------
Control Lead | Intelline Inc
Message 7 of 10

Can we take a step back and let you explain what kind of stepper motor controller you are using and what signals it expects (minimum pulse length, etc.)? How is it connected to your USB DAQ device? Can you look at the digital pin with a scope? What are the frequency and jitter?

What toggle speed do you expect? (Your VI currently has absolutely no timing and does two digital I/Os per iteration. Obviously, there is no way to define any specific speed with this setup.)

You also only toggle one single Boolean, so why are you writing an entire array if only one element changes?

 

 

 

Message 8 of 10

For the stepper motors I've controlled, I've let a counter do the signal generation: just set a frequency and a duty cycle.
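The counter approach reduces to two numbers the hardware needs: high and low tick counts derived from the frequency and duty cycle. A small Python sketch of that arithmetic, assuming a 100 MHz counter timebase (the timebase value and the `nidaqmx` snippet in the comment are assumptions):

```python
# Counter pulse generation: the counter plays out high/low tick counts
# derived from the requested frequency and duty cycle.
TIMEBASE_HZ = 100_000_000   # assumed counter timebase
FREQ_HZ = 10_000            # desired pulse frequency
DUTY = 0.5                  # fraction of the period spent high

period_ticks = TIMEBASE_HZ // FREQ_HZ
high_ticks = round(period_ticks * DUTY)
low_ticks = period_ticks - high_ticks
print(period_ticks, high_ticks, low_ticks)   # 10000 5000 5000

# With the nidaqmx Python package this maps to roughly:
#   task.co_channels.add_co_pulse_chan_freq("Dev1/ctr0",
#                                           freq=FREQ_HZ, duty_cycle=DUTY)
```

The driver does this conversion for you; the point is that the pulse train then runs entirely in hardware, with no per-pulse software call.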

G# - Award winning reference based OOP for LV, for free! - Qestit VIPM GitHub

Qestit Systems
Certified-LabVIEW-Developer
Message 9 of 10

Thanks, all, for the replies.  I have used LabWindows, but I am new to LabVIEW.

 

Regarding toggle speed and purpose, I only need to create a falling edge, so HIGH to LOW.  You are correct that I am only changing one Boolean.  I am writing the entire array because I am new to LabVIEW and I am following along with example programs to get started.  Can you suggest how I would write just one channel?  Would this increase the For Loop speed?  If so, by how much?

 

The For Loop is currently running without any timing restrictions because I want to see how fast it will operate.  I timed the overall execution time for 1000 loops, which provides a good estimate of loop execution speed.  The only timing function I am aware of is the Wait (ms) VI, which waits a specified number of milliseconds.  Since the For Loop already operates without any timing constraints and takes on the order of 6 ms per loop, I can't see how adding a wait is going to speed things up.  But then again, maybe I am just not seeing how this works.  How can I add timing to the loop to make it run faster?

 

It has been suggested that the slow speed is due to the latency of the USB packets.  This seems puzzling, because I think I could send this data (16 bits per loop) over RS-232 at a 115,200 bit/s baud rate much faster than what I am observing with this For Loop.

 

So there seem to be some different issues going on under the hood of LabVIEW.  If I move forward, what would be my best option for success: should I try writing a 0-5 V "square wave" out on the AO (digital-to-analog output)?  Or should I try to generate a sample clock for the digital I/O task?

 

Thanks again for the observations and help.

Message 10 of 10