Improving my execution speed - LabVIEW RT?

I am working on a system using an NI PCIe-6351 and an Adlink PCIe-9842.  The NI card is used to trigger and the Adlink card is used as a digitizer.  I want to be able to trigger, collect data, and perform some computations on the data at a repetition rate of ~1 kHz.  Currently, I am running in LV 2010 on Windows 7 and cannot exceed ~500 loops/sec.  I have also noticed that the loop rate is inconsistent.

 

Will my program loop faster in LabVIEW RT?

If I have drivers to use the Adlink card in regular LabVIEW, will it also run in LabVIEW RT?

 

Message 1 of 6

Can you share your code, or at least be more specific about what you are doing?  Do you need to process each sample individually, or do you simply need to process 1000 samples every second?  If it's the latter, you may be able to do it more efficiently by processing several samples at once.
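
To illustrate the per-sample versus per-block distinction in text form (the thread's actual code is LabVIEW, so this is only a rough NumPy sketch with made-up per-sample work):

```python
import numpy as np

rng = np.random.default_rng()

def process_per_sample(samples):
    """Handle each sample in its own loop iteration (slow)."""
    results = []
    for s in samples:
        results.append(s * s)      # stand-in for per-sample work
    return results

def process_block(samples):
    """Handle a whole block of samples in one vectorized call (fast)."""
    return samples * samples       # same work, done on the array at once

block = rng.standard_normal(1000)  # ~one second of data at a 1 kHz rate
assert np.allclose(process_per_sample(block), process_block(block))
```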

 

It is possible but not definite that your loop will run faster in RT.  The timing of the loop should be more consistent in RT.  I don't know about the drivers issue; I've never used any Adlink equipment.

Message 2 of 6

I have attached our code.

 

Here is what happens on each loop of the program:  The NI card triggers on an analog edge and sends two pulses (one from each counter).  One pulse tells the Adlink card to start recording.  The Adlink card reads 1400 points at 200 MS/s.  This data is passed to the data analysis loop, where we use a custom fitting algorithm to fit the data to an exponential.  The decay constant for this exponential is recorded and it's on to the next loop.
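
The attached VIs contain the actual fitting code; as a rough text-only illustration of the step described here (fit a 1400-point, 200 MS/s record to an exponential and keep the decay constant), a sketch using SciPy's generic curve_fit in place of the poster's custom algorithm might look like this:

```python
import numpy as np
from scipy.optimize import curve_fit

def exp_decay(t, amplitude, tau, offset):
    """Single-exponential model: A * exp(-t / tau) + C."""
    return amplitude * np.exp(-t / tau) + offset

# 1400 points at 200 MS/s -> 5 ns sample spacing, 7 us record
t = np.arange(1400) * 5e-9

def fit_decay_constant(record):
    """Fit one digitized record and return the decay constant tau (seconds)."""
    guess = (record.max() - record.min(), 1e-6, record.min())
    popt, _ = curve_fit(exp_decay, t, record, p0=guess)
    return popt[1]

# Synthetic data standing in for one Adlink record
record = exp_decay(t, 1.0, 2e-6, 0.1) \
         + 0.01 * np.random.default_rng().standard_normal(t.size)
tau = fit_decay_constant(record)
```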

 

I have tried a variety of data analysis algorithms.  With the simplest and most efficient one possible, the program will loop at ~550 Hz.  To achieve this, I had to hide all graphs and all but one indicator.  We know that MathScript nodes are slower, and have since stopped using them.

 

Any suggestions on making things go faster would be greatly appreciated.

Message 3 of 6

At first glance your code looks reasonably clean and is sufficiently complex that I can't quickly identify any definite problems.  I'm not too surprised that hiding the graphs helps - if I'm not mistaken, you would otherwise be trying to update the graph every time you acquire data, which would be over 500 Hz (much faster than your monitor can display or your eye can see).

 

Have you run your code with the profiler?  It will slow down the overall execution but will show you which subVIs are consuming the most time, and that can help target your optimization efforts.

 

You might gain something by moving some of your loops into separate subVIs and having them run in different execution systems.

 

Why are you using notifiers instead of queues to transfer data between loops?  Queues may be slightly more efficient because when you enqueue an element in one loop and dequeue it in another, it's the same block of memory - no copy required.  A notifier acts more like previewing a queue, where getting the notification may require a copy of the data so that the original can remain in the notifier.  This isn't an issue for simple data types such as an enumeration but can be an issue for larger ones such as your data array.
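
As a rough text analogue of the queue-based producer/consumer handoff described here (Python's queue.Queue standing in for a LabVIEW queue; the copy-avoidance detail is specific to LabVIEW's implementation and is not captured by this sketch):

```python
import queue
import threading
import numpy as np

data_q = queue.Queue()

def acquisition_loop(n_records):
    """Producer: enqueue each acquired record for the analysis loop."""
    for _ in range(n_records):
        record = np.random.default_rng().standard_normal(1400)  # stand-in for a DAQ read
        data_q.put(record)
    data_q.put(None)                  # sentinel: tell the consumer to stop

def analysis_loop():
    """Consumer: dequeue records and process them as they arrive."""
    while True:
        record = data_q.get()
        if record is None:
            break
        _ = record.mean()             # stand-in for the exponential fit

producer = threading.Thread(target=acquisition_loop, args=(100,))
consumer = threading.Thread(target=analysis_loop)
producer.start(); consumer.start()
producer.join(); consumer.join()
```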

 

Check how fast you can actually generate the trigger signal and acquire data (take out the data processing and either time your code or put a scope on the trigger line).  It may be that the time to set up the digital output and then stop it is a limiting factor.  If so, will your sampling setup allow you to generate a continuous pulse train on the trigger line at the desired frequency?  Then you would only need to start the DAQ task once.
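
If the sampling scheme did allow a free-running trigger, a continuous pulse train on one of the 6351's counters could be set up once, roughly like this sketch using the nidaqmx Python API (the device/counter name and the 1 kHz rate are placeholders, not values from the attached code):

```python
import time
import nidaqmx
from nidaqmx.constants import AcquisitionType

# Generate a continuous 1 kHz pulse train on counter 0 of the 6351.
with nidaqmx.Task() as pulse_task:
    pulse_task.co_channels.add_co_pulse_chan_freq(
        "Dev1/ctr0", freq=1000.0, duty_cycle=0.5)
    pulse_task.timing.cfg_implicit_timing(
        sample_mode=AcquisitionType.CONTINUOUS)
    pulse_task.start()    # started once; no per-iteration start/stop
    time.sleep(10)        # pulse train free-runs while acquisition proceeds
    pulse_task.stop()
```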

 

You could also check how fast your sampling loop runs on its own by collecting and logging many samples (write to a file if necessary), then processing them separately.  This should help you determine where you're hitting your limit.

 

Of course, another useful quick check is to look at your CPU usage while the code is running.  If you're only at about 10%, then you're likely limited by the way you're using the hardware.  If you're at 90%, then it's probably the data processing.

Message 4 of 6

Thanks for this advice.

 

We looked at the code with the profiler.  In terms of the average time per loop iteration, by far the most time (6.4 ms) was spent in DAQmx Start Task.vi.  The next closest was the VI the Adlink card uses to collect data, at 0.2 ms per loop.  Everything else showed 0 for average time per loop.

 

We switched over to using queues.  This gained us a few percent in speed.

 

You recommended, "You might gain something by moving some of your loops into separate subVIs and having them run in different execution systems."  Can you elaborate on what this means?  What do you mean by an execution system?

 

Unfortunately, we cannot control exactly when we will trigger.  The data of interest comes pseudo-randomly, and we must wait to trigger on the interesting part of the data.  I can only adjust the average frequency at which the data comes.  If there were a way to start the DAQ task only once and still have it trigger in the same fashion it does now, that might be helpful.

 

We think we may be CPU limited.  We are currently using an i3 @ 3.2 GHz.  This is only a dual-core processor, and we have something like six loops in our program.  There is a 2nd-gen i7 @ 3.8 GHz on the way to us.

 

Thanks again!

Message 5 of 6

@Blee66 wrote:

 

You recommended, "You might gain something by moving some of your loops into separate subVIs and having them run in different execution systems."  Can you elaborate on what this means?  What do you mean by an execution system?


If you look at the Execution category in VI Properties (File->VI Properties) you'll see that you can set the Preferred Execution System.  You could turn some of your loops into subVIs and then assign them different execution systems; I don't know whether this will help your code.  Disabling debugging, in that same Execution category, will also improve execution speed.

 


@Blee66 wrote:

Unfortunately, we cannot control exactly when we will trigger.  The data of interest comes pseudo-randomly, and we must wait to trigger on the interesting part of the data.  I can only adjust the average frequency at which the data comes.  If there were a way to start the DAQ task only once and still have it trigger in the same fashion it does now, that might be helpful.


I'm not an expert in the details of DAQ timing, but it looks to me like your existing code already sets up the DAQ task to be retriggerable.  As far as I can tell from the documentation, that means it will run each time the trigger condition is met until you execute DAQmx Stop Task.  I don't think you need to make any changes to your code except to execute DAQmx Start Task and DAQmx Stop Task only once per acquisition sequence.  Have you already tried this?
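
For readers without the attached VIs, the configure-once, retriggerable arrangement described here looks roughly like the following sketch in the nidaqmx Python API (the original is LabVIEW code; the counter name, pulse widths, trigger source, and trigger level below are placeholders):

```python
import time
import nidaqmx
from nidaqmx.constants import AcquisitionType, Slope

with nidaqmx.Task() as pulse_task:
    # One pulse from counter 0 each time the analog trigger condition is met.
    pulse_task.co_channels.add_co_pulse_chan_time(
        "Dev1/ctr0", low_time=1e-6, high_time=1e-6)
    pulse_task.timing.cfg_implicit_timing(
        sample_mode=AcquisitionType.FINITE, samps_per_chan=1)
    trig = pulse_task.triggers.start_trigger
    trig.cfg_anlg_edge_start_trig("APFI0", trigger_slope=Slope.RISING,
                                  trigger_level=0.5)
    trig.retriggerable = True   # rearm automatically after every pulse

    pulse_task.start()          # DAQmx Start Task executed once, up front
    time.sleep(10)              # pulses fire on each analog edge with no further calls
    pulse_task.stop()           # DAQmx Stop Task executed once, at the end
```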

Message 6 of 6