From Friday, April 19th (11:00 PM CDT) through Saturday, April 20th (2:00 PM CDT), 2024, ni.com will undergo system upgrades that may result in temporary service interruption.
We appreciate your patience as we improve our online experience.
07-15-2009 05:49 PM
I have a working VI which takes over 50 hours to complete, and I obviously want to optimize it for speed. However, when I use the profiler, it lists the total run time as approximately one tenth of the actual time. I read somewhere that LabVIEW may not take memory operations (allocations/deallocations?) into account in its profiling, but then I have no way of telling which sub-VI takes the most time, including memory operations.
Anyone have any advice on how to approach this problem? I could come up with some method of adding up the total run time for suspect sub-VIs, but I'm concerned I might do it in such a way that I significantly add to the running time of those sub-VIs, especially considering some of them run thousands or millions of times.
07-16-2009 03:35 AM
Hm, do you use any external code like ActiveX, .NET or DLLs? Is the application waiting for user interaction?
Norbert
07-16-2009 09:27 AM
07-16-2009 09:30 AM
Odd... the profiler logs the times the VIs are "running", so if there are "missing times", the VIs are idle in between. This happens if, e.g., dialogs wait for input or if external code is executed.
So, if the total time is less than the overall execution time, there must be time slots where (all) VIs are waiting for something...
Norbert
07-16-2009 09:47 AM
07-16-2009 09:54 AM
Sure, you can benchmark around the motion VIs. But if you are using motion VIs, I assume that you have motion hardware in your system as well.
Because motion is not arbitrarily fast, you can only identify parts which can probably be optimized. This requires testing, because motion will get quite unstable if you try to move something too fast...
hope this helps,
Norbert
07-16-2009 10:14 AM
Of course I intend to test the hardware, but I also want to be sure that it is, in fact, the motion hardware that accounts for the "missing" time.
So how can I time the motion VIs? Manually add timing code in each VI?
07-16-2009 10:20 AM
If you need reliable information about the biggest software execution times, I suggest inserting (timing) benchmarking elements around every major part of your application.
Example:
The application has motion, vision, file IO and DAQ, and works like a sequencer controlling those four parts.
Just take a "Tick Count (ms)" before entering one of the four parts in your sequencer. After returning to the sequencer, take another tick count and subtract the first from the current one. Now you have the elapsed time for that part. Do the same for the other parts. So maybe you find out:
Motion 45%
Vision 30%
DAQ 15%
File IO 10%
So motion is a good place to dig deeper in order to optimize execution times... so repeat the approach for the major parts of motion...
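Since LabVIEW is graphical, the tick-count wiring can't be quoted as text, but the same bracketing idea can be sketched in Python. The four stage functions and their sleep durations below are made-up placeholders standing in for the motion/vision/DAQ/file-IO parts, not anything from the actual application:

```python
import time

# Hypothetical stand-ins for the four sequencer stages.
def motion():  time.sleep(0.045)
def vision():  time.sleep(0.030)
def daq():     time.sleep(0.015)
def file_io(): time.sleep(0.010)

stages = {"Motion": motion, "Vision": vision, "DAQ": daq, "File IO": file_io}
elapsed = {}

for name, stage in stages.items():
    start = time.perf_counter()                   # analogue of the first Tick Count (ms)
    stage()
    elapsed[name] = time.perf_counter() - start   # second tick minus the first

total = sum(elapsed.values())
for name, t in elapsed.items():
    print(f"{name}: {t * 1000:.0f} ms ({100 * t / total:.0f}%)")
```

The percentages printed at the end correspond to the 45/30/15/10 breakdown above: whichever stage dominates the total is where to repeat the measurement one level deeper.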
hope this helps,
Norbert
07-16-2009 10:39 AM
That's kind of what I was thinking, except it's more like Motion->Data Acquisition->Data Analysis->Repeat Many Times->.........->Finally, File Output
The potential problem is that the motion VIs run more times than the data acquisition and analysis VIs, so I thought that adding benchmarking might skew the results. But maybe it's safe to assume that the time required for benchmarking is insignificant...
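That assumption can be sanity-checked by timing the timestamp read itself. This Python sketch is only an analogue (LabVIEW's "Tick Count (ms)" node will have its own, likely similarly small, cost); it measures the average overhead of one timer call, which can then be compared against how long one iteration of the benchmarked code actually takes:

```python
import time

N = 1_000_000
start = time.perf_counter()
for _ in range(N):
    _t = time.perf_counter()   # one timestamp read per "benchmark point"
overhead = (time.perf_counter() - start) / N  # seconds per timestamp read

print(f"~{overhead * 1e9:.0f} ns per timestamp read")
```

If the per-read overhead is on the order of nanoseconds to microseconds while each motion call takes milliseconds, even millions of benchmarked iterations add only a negligible fraction to the total run time.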