LabVIEW


How can I achieve consistent, high-frequency motor loop timing

I am trying to control a piezoelectric motor using an analog +/- 10 V command signal and a PID controller implemented in LabVIEW 8. The program is essentially a timed loop set to run at a specific frequency using one of the hardware clocks on an NI PCI-6221, and I set the frequency to 1 kHz. The actual loop execution time (determined by comparing subsequent values of the 'Global End Time' from the timed loop) is 4-5 ms for two consecutive iterations, followed by a third that takes 11-12 ms.

I understand that the loop will not actually execute at 1 kHz, since the computation within the loop gives it a finite execution time. However, I'm uncertain why every third iteration takes so much longer. This long time step is causing some serious stability problems, particularly when I try to move to higher-frequency oscillations.

I've invested a considerable amount of time and all available funds into getting this to work. I'd hate to have to shell out more money that I don't really have for a hardware controller (e.g. a Galil), on top of the time required to learn the relevant language. Any help in getting the timing consistent would be greatly appreciated.
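To be concrete about how I'm measuring this: I diff consecutive 'Global End Time' values from the timed loop. A rough text-language sketch of that check (the real code is a LabVIEW timed loop clocked by the 6221; this Python version is only to illustrate the measurement):

```python
# Illustration only: the actual loop is a LabVIEW Timed Loop on the PCI-6221 clock.
# This just shows the measurement: the period is the difference of consecutive end times.
import time

periods = []
prev_end = None

for i in range(3000):
    # ... analog read, PID calculation, and analog write would happen here ...
    end = time.perf_counter()              # analogous to the loop's 'Global End Time'
    if prev_end is not None:
        periods.append(end - prev_end)     # observed loop period for this iteration
    prev_end = end

print("mean period: %.3f ms, worst period: %.3f ms"
      % (1e3 * sum(periods) / len(periods), 1e3 * max(periods)))
```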

Thanks

Matt Reilly

Message 1 of 10
Matt,
From your post it's not clear to me whether you are using

(1) LabVIEW 8.0 with your own PID algorithm implementation

or

(2) the LabVIEW Real-Time 8.0 module with our PID Toolkit

for your control application (see the help page on control application programming).
 
If you are using just LabVIEW, there are inherent risks in trying to run a 1 kHz control loop on a non-deterministic operating system (Windows, for example).  If you require determinism, the LabVIEW Real-Time Module is certainly recommended.  If that is not an option, you must identify the portion of your code that is causing the undesired delay.  Each portion of your code should take a fairly consistent amount of time, and there are a couple of ways to test this.  You can use the VI Profiler in LabVIEW to help.  You can also time individual portions of your code to determine how long they take to run on average, using the shipping example in the NI Example Finder called "Timing Template (data dep).vi".
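If it helps, the idea behind both the VI Profiler and the Timing Template is simply to bracket each portion of the code with timestamps and compare the averages. In text form (not LabVIEW; the stages below are placeholders):

```python
# Sketch of timing individual portions of a loop iteration; the shipping example
# "Timing Template (data dep).vi" does the same thing on the block diagram.
import time

def timed(label, func, *args):
    """Run one portion of the code and report how long it took."""
    t0 = time.perf_counter()
    result = func(*args)
    print(f"{label}: {(time.perf_counter() - t0) * 1e3:.3f} ms")
    return result

# Placeholder stages of one control-loop iteration:
samples = timed("analog read", lambda: [0.0] * 10)
command = timed("PID update", lambda x: sum(x) / len(x), samples)
timed("analog write", lambda v: None, command)
```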
Message 2 of 10
Mark,

We do not have access to the PID Toolkit or the Real-Time Module and really don't have the funds to pursue them. Thanks for the pointers on checking the execution time.  Hopefully I can figure out where I'm hitting the snag by going through the code in this manner.

Thanks again
Matt
Message 3 of 10
Matt,

Are you doing any array or string manipulations inside your loop that cause memory reallocation? Build Array and auto-indexing are the usual culprits.

Do you have display updates or file writing going on inside the loop? These involve calls to the OS and may take more time than you'd like.

There are ways around many of these bottlenecks, but getting consistent, reliable 1 kHz loop timing from a desktop OS is going to be difficult.
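To give a feel for the Build Array cost, here is a rough text-language comparison of growing an array every iteration versus writing into one allocated up front (sizes are arbitrary; the LabVIEW equivalent of the fast version is Initialize Array outside the loop plus Replace Array Subset inside):

```python
# Rough illustration of the reallocation cost. Growing by concatenation copies the
# whole array each iteration (like Build Array); the preallocated version does not.
import time

N = 20_000

t0 = time.perf_counter()
grown = []
for i in range(N):
    grown = grown + [float(i)]        # new allocation + full copy every iteration
t_grow = time.perf_counter() - t0

t0 = time.perf_counter()
prealloc = [0.0] * N                  # allocate once, outside the loop
for i in range(N):
    prealloc[i] = float(i)            # replace in place, no reallocation
t_replace = time.perf_counter() - t0

print(f"grow each iteration: {t_grow:.3f} s, preallocated replace: {t_replace:.3f} s")
```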

If you can post your code or a simplified version which demonstrates the problem, someone will likely be able to help.

Lynn
Message 4 of 10
Message 5 of 10
Unclebump,

Thanks for the article.  The computer I'll be running the final product on is unfortunately much slower than the one I'm testing with.  Any little bit of optimization will help!

Mark,

That VI profiling tool was just the thing I needed.  Apparently my analog read/write tasks were poorly configured; I set all of their parameters when I first started the program and forgot about them.  I switched the read from a finite acquisition of N = 10 samples at 15 kHz to a continuous acquisition at 50 kHz, and now the whole loop runs very smoothly at 1 kHz.  Thanks for the help!
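For anyone who finds this later, the change was purely in how the analog-input task is configured. Roughly, in text form (my actual program uses the DAQmx VIs in LabVIEW; this uses the nidaqmx Python package to stand in for them, and the device and channel names are just placeholders):

```python
# Placeholder sketch of the task-configuration change. Device/channel names are made up.
import nidaqmx
from nidaqmx.constants import AcquisitionType

with nidaqmx.Task() as ai:
    ai.ai_channels.add_ai_voltage_chan("Dev1/ai0", min_val=-10.0, max_val=10.0)

    # Old setup: a finite read of 10 samples at 15 kHz, effectively restarted
    # every loop iteration.
    # ai.timing.cfg_samp_clk_timing(15000, sample_mode=AcquisitionType.FINITE,
    #                               samps_per_chan=10)

    # New setup: one continuously running task at 50 kHz; each control-loop
    # iteration just pulls the next 50 samples (1 ms worth) from the buffer.
    ai.timing.cfg_samp_clk_timing(50000, sample_mode=AcquisitionType.CONTINUOUS,
                                  samps_per_chan=50000)
    ai.start()
    for _ in range(1000):
        data = ai.read(number_of_samples_per_channel=50)
        # ... PID update and analog output write go here ...
```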

Matt
Message 6 of 10
Lynn,

I was building an array to record my data: each iteration produces a 5-element record, and I appended it as a new row on every cycle once I told the program to start recording.  I have the actual file writing outside the loop (i.e. the file is written only after the program is stopped) to improve the loop time, but this approach causes things to slow down after a while.  The alternative I'm currently trying is to fix the size of the array and dump the data to disk once the array is full; however, that seems to cause a considerable stall in the loop whenever it writes the data.  Is there a better way to do this, short of initializing a huge array and inserting the data into it on the fly?  That seems like it would eat a lot of memory and give me minimal flexibility.

Thanks
Matt
Message 7 of 10
Matt,

Initializing an array and using Replace Array Subset is the fastest approach because it does not cause any memory reallocations. If you do the file writing in a parallel loop, you should be able to avoid the slowdown. Make sure both loops have a Wait function (even set to 0 milliseconds in the acquisition loop). When the array gets full, write a copy to a queue and replace the full array with the preinitialized one. You will have two arrays' worth of memory allocated, but unless your arrays are tens or hundreds of MB, memory should not be a problem. If the memory is there, what else is it going to be used for while the acquisition is running, playing solitaire?
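In rough text-language form, the two-loop arrangement looks like this (in LabVIEW it is two parallel while loops sharing a queue refnum; the block size and file name here are arbitrary examples):

```python
# Rough analogue of the producer/consumer pattern: the acquisition loop reuses a
# preallocated block and only enqueues copies; a parallel loop does the slow file I/O.
import queue
import threading

BLOCK_ROWS = 1000                     # 5-element records per block (arbitrary)
data_q = queue.Queue()

def writer_loop():
    """Consumer loop: dequeue full blocks and write them to disk."""
    with open("run_data.txt", "w") as f:
        while True:
            block = data_q.get()
            if block is None:         # sentinel: acquisition has finished
                break
            for row in block:
                f.write("\t".join(f"{x:.6f}" for x in row) + "\n")

consumer = threading.Thread(target=writer_loop)
consumer.start()

# Producer (stand-in for the 1 kHz control loop): replace rows in place, and when
# the block is full, enqueue a copy and keep reusing the same preallocated block.
block = [[0.0] * 5 for _ in range(BLOCK_ROWS)]
row = 0
for i in range(5000):
    block[row] = [float(i)] * 5       # like Replace Array Subset: no growth
    row += 1
    if row == BLOCK_ROWS:
        data_q.put([r[:] for r in block])   # hand a copy to the writer loop
        row = 0

data_q.put(None)                      # tell the writer loop to shut down
consumer.join()
```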

There is a white paper on handling large data sets on the DevZone. It discusses such matters in detail.

Lynn
Message 8 of 10

I'd recommend using a Timed Loop for your main control loop.  Timed Loops run at a particularly high execution priority, higher than the parallel loop you'll set up to accumulate and perhaps save your data.  Even a Timed Loop won't give you perfectly consistent 1 ms loop intervals, but the deviations tend to be smaller and less frequent.

Another thought is to use the DAQ sampling clock as a Timing Source for the Timed Loop.  I don't think it improves determinism, but it might give you a more precise measure of your actual loop interval times.  Perhaps the actual loop time can be taken into account in your control algorithm?  That should make it a bit less prone to instability.
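For instance, if your PID terms currently assume a fixed 1 ms step, you could feed them the measured interval instead. A rough text sketch (gains and the timing source are placeholders, not your actual values):

```python
# Sketch of a dt-aware PID update: use the measured loop interval rather than
# assuming a fixed 1 ms step. Gains and the timing source are placeholders.
import time

KP, KI, KD = 1.0, 0.5, 0.01           # example gains only

integral = 0.0
prev_error = 0.0
prev_time = time.perf_counter()

def pid_update(setpoint, measurement):
    """One PID step using the actual elapsed time since the previous step."""
    global integral, prev_error, prev_time
    now = time.perf_counter()
    dt = now - prev_time              # measured interval (e.g. from the Timed Loop)
    prev_time = now

    error = setpoint - measurement
    integral += error * dt
    derivative = (error - prev_error) / dt if dt > 0 else 0.0
    prev_error = error
    return KP * error + KI * integral + KD * derivative

command = pid_update(1.0, 0.0)        # example call with placeholder values
```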

I would have suggested having the main Timed Loop write data directly to a queue, but johnsold's idea may well be faster.  It sounds like it would be.

-Kevin P.

Message 9 of 10

Thanks for the suggestions, guys.  I had previously been using the "Write to Spreadsheet File" VI.  Using the VI profiling tool, I saw that this VI actually opens, writes to, and then closes the file every time through the loop, which was slowing things down considerably.  I pulled out just the parts I needed from that VI and trimmed things down.  Now it records my data every time the loop executes and still runs at very close to 1 ms per iteration.
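The gist of the change, in rough text form (my real code is just the internals of the Write to Spreadsheet File VI with the open and close moved outside the loop; the file name and record contents below are placeholders):

```python
# Sketch of the fix: open the file once before the loop, append one row per
# iteration, and close once afterward. File name and record values are placeholders.
import time

with open("motor_log.txt", "w") as f:          # opened once, before the loop
    for i in range(5000):                      # stand-in for the 1 kHz control loop
        record = [time.perf_counter(), 0.0, 0.0, 0.0, 0.0]    # 5-element data record
        f.write("\t".join(f"{x:.6f}" for x in record) + "\n")  # append one row
# closed once, after the loop ends (no open/write/close every iteration)
```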

Thanks for the help everyone.  I can post my program here if anyone is interested.

Thanks

Matt

Message 10 of 10