Timed Loop Overhead Issues (Massive CPU Usage)


Using 10 Timed Loops on RT with a 1 MHz clock and a dt of 1000 (1000 microseconds, or 1 ms), all of them accessing an FGV (an array of doubles) by index, causes a massive CPU increase. If I launch only 1 Timed Loop, the result is a very small change in CPU. Can you please explain why there is a non-linear relationship between CPU usage and how many Timed Loops I launch, and what I can do to eliminate this?

 

NOTE:
The same thing happens on the PC, so it's not just an RT issue.

Message 1 of 10

Do you have some sample code that shows the problem?

Message 2 of 10

I have code that is proprietary and I cannot share it, unfortunately. However, I can describe what is happening and, if need be, create a basic example of what I am doing that I would be able to share.

 

What I am doing:

For all intents and purposes, I have one "Main" VI which launches 10 re-entrant sub-VIs. Each sub-VI has a Timed Loop configured with a 1 kHz clock on the PC (or a 1 MHz clock on the RT), and dt is set to 1 on the PC and 1000 on the RT. This is so I can execute code cyclically at a 1 ms period (1000 Hz). I also have an FGV, which stores an array of values. This FGV's VI execution priority is also set to subroutine to reduce the overhead of calling it. Each Timed Loop accesses the FGV 1-2 times per cycle (so 1-2 times per ms, times 1000 cycles per second, times 10 "modules").
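For readers who want a textual picture of this setup: it is essentially ten periodic workers all funneling through one serialized shared store. A minimal Python sketch of the pattern (counts, names, and the FGV API are invented for illustration; a non-reentrant FGV behaves roughly like data behind a single lock):

```python
import threading

class FunctionalGlobal:
    """Rough analog of a non-reentrant FGV: a shared array behind one lock."""
    def __init__(self, size):
        self._lock = threading.Lock()   # non-reentrancy: one caller at a time
        self._data = [0.0] * size

    def read(self, index):
        with self._lock:                # every caller serializes here
            return self._data[index]

    def write(self, index, value):
        with self._lock:
            self._data[index] = value

fgv = FunctionalGlobal(2000)
cycles_done = [0] * 10

def module(module_id, cycles=1000):
    # Stand-in for one Timed Loop body: 1-2 FGV accesses per 1 ms cycle.
    for i in range(cycles):
        fgv.write(i % 2000, float(i))
        fgv.read(i % 2000)
        cycles_done[module_id] += 1

threads = [threading.Thread(target=module, args=(m,)) for m in range(10)]
for t in threads:
    t.start()
for t in threads:
    t.join()
print(sum(cycles_done))  # → 10000 (10 modules x 1000 cycles)
```

Every access from every module passes through the same lock, which is the shared resource the rest of this thread turns out to be about.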

 

When I launch only 1 module, there are zero problems and the CPU increase is marginal, but when I launch another and another and another, the CPU seems to grow exponentially.

Message 3 of 10

What happens if you don't set the FGV to subroutine priority but leave it at normal priority? (See also)

 

Your FGV is the critical section. How much code is in it? Are all data structures fixed size or do e.g. arrays constantly grow or shrink? How big are the data structures? What else is in the timed loops?


Message 4 of 10

@altenbach wrote:

What happens if you don't set the FGV to subroutine priority but leave it at normal priority? (See also)

 

Your FGV is the critical section. How much code is in it? Are all data structures fixed size or do e.g. arrays constantly grow or shrink? How big are the data structures? What else is in the timed loops?


The FGV has a constant size that is established at the launch of the Main VI. It typically has under 2000 elements, and the FGV has basically no code inside, other than obviously storing the values and indexing them when they are accessed by the Timed Loop.

 

If you remove the subroutine status, the CPU skyrockets because of the overhead of calling the sub-VI, but that is a really great point. I think another thing that might be happening is infighting between the Timed Loops for control of the CPU, making each one take longer to execute; maybe that, coupled with the fact that the FGV is a subroutine, takes a lot longer.

Message 5 of 10

Let's get this straight:

 

You have an FGV with "basically no code inside," and you will not help us help you by posting the code?

 

LabVIEW is not the problem.  <Yes, the magic 8-ball cannot even begin to guess>

Message 6 of 10

The code inside is an array of values (they can be strings, doubles, variants, etc.); this is not the part I am unable to share. It's just one specific bottom portion; my application is significantly larger.

 

When the FGV is accessed, it returns the values inside, so the code itself does not take a significant amount of time. It has been benchmarked separately.

 

So my question is whether it is the concurrent access to a subroutine FGV that may be causing the issue, or the fact that I have 10 Timed Loops running at 1000 Hz, competing for CPU resources and therefore slowing each other down.

 

LabVIEW assigns one thread per Timed Loop, but depending on how you configure them, they may interrupt each other. My main theory is that the number of Timed Loops, and in particular all of them accessing the same subroutine FGV, may be leading to this sharp increase in CPU.
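The contention theory can be made concrete with a worst-case back-of-envelope estimate: before each of its FGV calls, a loop may have to wait for one call from every other loop, so total time spent in and around the critical section grows roughly with the square of the loop count, not linearly. A toy calculation (every number here is an assumption for illustration, not a measurement):

```python
# Worst-case time spent in/waiting on a serialized critical section per cycle.
time_per_call_us = 5.0   # assumed cost of one FGV call while holding the lock
calls_per_cycle = 2      # each Timed Loop touches the FGV 1-2 times per cycle

results = {}
for n_loops in (1, 2, 5, 10):
    # Worst case: each access also waits for one call from every other loop,
    # so a single access can cost up to n_loops * time_per_call_us.
    worst_case_per_call_us = time_per_call_us * n_loops
    results[n_loops] = n_loops * calls_per_cycle * worst_case_per_call_us

for n, busy_us in results.items():
    print(n, "loops ->", busy_us, "us of a 1000 us period")
```

With these assumed numbers, 1 loop uses 10 µs of each 1 ms period, but 10 loops can consume the entire 1000 µs in the worst case — a plausible shape for the "exponential-looking" growth described above.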

Message 7 of 10

1. What hardware are you using?  If a cRIO, you could potentially move some of your loops to the FPGA.

2. Do you really need Timed Loops?  Most applications only have 1 or 2 time-critical loops.  If your loops are not absolutely time critical, change them to normal WHILE loops, as this will reduce overhead.

3. Could you combine the loops into 1 loop and just use shift registers to hold your data?  The idea here is to reduce the number of loops by combining loops with the same timing.

4. Are you constantly changing the data stored in the FGV?  If not, you could try a normal Global Variable, as these tend to have less overhead in my experience.  Another option is a shared variable set up as an RT FIFO.
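Suggestion 3 — folding loops that share a period into one loop — can be sketched textually as well. A minimal Python analog (the module count, state, and `service_module` body are invented; the LabVIEW shift registers become ordinary per-module state):

```python
import time

# Per-module state lives in one place (the shift-register analog).
states = [0.0] * 10

def service_module(module_id, state):
    """Placeholder for one module's per-cycle work."""
    return state + 1.0

# One loop at the common 1 ms period instead of ten competing Timed Loops.
for cycle in range(3):                 # would be `while True:` in practice
    for m in range(len(states)):
        states[m] = service_module(m, states[m])
    time.sleep(0.001)                  # 1 ms period (coarse on a desktop OS)
```

Because all ten modules now run in one loop, there is no cross-loop contention for the shared data at all; the trade-off is that one slow module delays the others in the same cycle.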


Message 8 of 10

The subroutine priority may very well be part of the problem.

 

This sounds fairly similar to an RT issue I had to deal with some years back.  I thought I'd posted about it here before, but my searches didn't turn anything up.  Sit back, time for what Ben calls a "sea story":

 

 

In my case, we also had many separate modules of code that interacted via shared access to a big FGV that was essentially a "tag engine".  Our problem arose from the resource contention resolution mechanism LabVIEW RT had in place when a time-critical vi tries to call a subvi (such as our FGV) which is presently in use by another process.  The rules are such that the time-critical vi gets to interrupt and grab access, but the process of doing it takes some extra overhead. 

 

For us, this showed up as loop timing jitter.  We were normally running at 1 or 2 kHz, but each time this kind of contention was resolved there would be a ~2 msec step function of extra time in the loop iteration.

 

We had a large code base that was heavily reliant on this core FGV, so it was important to try to solve the problem without changing the API.  Here's what worked for me:

 

- I actually set the FGV to be reentrant w/ pre-allocated clones.  (Yeah, I know, keep reading, you'll see...)

- Internally, I changed the storage mechanism.  Instead of arrays held directly in shift registers, I stored the data in single-element queues and held the queue refs in those same shift registers.   These were named queues so that every instance of the FGV held a unique ref to the same queue.

- Why it worked: I no longer caused blocking at the attempt to gain access to the FGV itself.  Instead, I relied on the blocking mechanism built into single-element queues.  This mechanism behaved much more consistently.  It was slightly less efficient on average, but I got rid of the spikes in timing jitter.

- Note: this solution no longer gave precedence to the timing critical loop.  It would have to wait in line for access to the single-element queue just like any other process.  But statistically speaking, it would never have to wait for long.

- This method had a side benefit of giving me access to debugging info related to queue depth, etc. that helped me characterize how frequently 1 or 2 or 3 processes had to wait in line for access.  As I recall, 1 was pretty rare, 2 was extraordinarily rare, and 3 was never observed during testing.
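Kevin's single-element-queue pattern has a direct analog in other languages: a bounded, size-1 blocking queue that holds the data itself, so whoever has dequeued the element implicitly holds exclusive access until it is put back. A minimal Python sketch (the class and its API are invented for illustration, not Kevin's actual code):

```python
import queue

class QueueBackedFGV:
    """Storage lives in a single-element queue: dequeue = acquire, enqueue = release."""
    def __init__(self, size):
        self._q = queue.Queue(maxsize=1)
        self._q.put([0.0] * size)        # the one and only element

    def read(self, index):
        data = self._q.get()             # blocks until no other caller holds the data
        try:
            return data[index]
        finally:
            self._q.put(data)            # always put it back, even on error

    def write(self, index, value):
        data = self._q.get()
        try:
            data[index] = value
        finally:
            self._q.put(data)

fgv = QueueBackedFGV(2000)
fgv.write(7, 3.14)
print(fgv.read(7))    # → 3.14
```

As in Kevin's version, callers wait in line FIFO-style with no priority boost for the time-critical loop, but the wait is short and consistent rather than spiky.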

 

 

-Kevin P

Message 9 of 10

@Kevin_Price wrote:

The subroutine priority may very well be part of the problem. [...]


See, that's very interesting, because my application has a very similar setup. However, I wonder whether changing this to single-element queues defeats the purpose of Timed Loop execution. Isn't the point to guarantee 1 ms execution, for example, or whatever you set? So if you have 10 Timed Loops with various priorities, or even the same priority, wouldn't that single-element queue keep the Timed Loop priority structure from being honored? Or do you mean that by implementing the FGV in the manner you mentioned, the application wouldn't compete for those resources the same way, so even though the Timed Loops ignore their priority structure, the overall gain is good enough that everything executes in time?

Message 10 of 10