LabVIEW


Determining Timed Loop Rate based upon Worst Case Performance

I have developed a monitoring application that can monitor the state of up to 64 battery rechargers. These rechargers behave predictably while recharging, in that they go through the same set of states in the same order. The logic used to determine the current state, or to handle state transitions, differs per state, so performance differs per state as well. Under normal operation the rechargers will be in different states at any given time, and there are endless possible combinations of operations in flight depending on the state of each individual charger, which makes it hard to quantify performance in a repeatable way.

 

However, I think it is safe to assume that the worst case, in terms of performance, is for all chargers to be doing the exact same thing at the exact same time, since they would all be executing their most expensive operations simultaneously. Running such a worst-case test (or simulation) and recording the maximum iteration time of my timed loop during the test should yield the minimum period I should expect to sample at, such that I never reach 100% CPU.
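Since a Timed Loop diagram cannot be shown as text here, below is a minimal Python sketch of the measurement idea: run the worst-case scenario, time each pass over all 64 devices, and keep the maximum. The service_charger routine is a hypothetical stand-in for the per-charger state logic; in LabVIEW the Timed Loop's timing terminals can report similar per-iteration information directly.

```python
import time

NUM_DEVICES = 64

def service_charger(device_id):
    """Hypothetical placeholder for the per-charger state logic."""
    pass

max_iter_s = 0.0
for _ in range(10_000):                      # length of the worst-case run
    t0 = time.perf_counter()
    for dev in range(NUM_DEVICES):           # all 64 chargers doing the same work
        service_charger(dev)
    dt = time.perf_counter() - t0
    max_iter_s = max(max_iter_s, dt)         # track the worst iteration seen

print(f"worst-case iteration time: {max_iter_s * 1000:.3f} ms")
```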

 

For the sake of argument, let's say I run my worst-case test and determine that the maximum iteration time was 64 ms, or 1 ms per device. While this would be the minimum period, I would want the actual loop period to be somewhat longer to allow some margin for error. Rather than pick some sample rate that is arbitrarily slower than my worst case, I want to approach this scientifically so I can get the highest possible sample rate while still minimizing the likelihood of late iterations in my timed loop. What is a good way to determine the set sample rate relative to the worst case?
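One possible way to turn the worst-case data into a period, sketched in Python under the assumption that you log the individual iteration times rather than only the maximum: take the observed worst case, add a margin of a few standard deviations of the measured jitter, and check the resulting worst-case CPU utilization. The sample values and the K = 3 margin below are illustrative only.

```python
import math
import statistics

# Iteration times in ms collected during the worst-case run (example values,
# not real measurements from this system).
samples_ms = [58.2, 60.1, 64.0, 59.7, 61.3]

worst_ms  = max(samples_ms)
jitter_ms = statistics.stdev(samples_ms)

K = 3                                            # margin in standard deviations (assumption)
period_ms = math.ceil(worst_ms + K * jitter_ms)  # candidate Timed Loop period

utilization = worst_ms / period_ms               # worst-case CPU share of this loop
print(f"period = {period_ms} ms, worst-case utilization = {utilization:.0%}")
```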

Message 1 of 3

It may be important to mention that the hardware for this application will always be the same, so the fact that iteration time is tied to the CPU is a known and expected factor. The hardware will always be a cFP-2200 with a 400 MHz CPU. If we ever use anything other than this, the CPU will be faster, not slower.

Message 2 of 3

Hi kgolden,

 

The answer is going to depend on the consequences of your loop taking too long to execute. Generally, though, you should leave extra time in high-priority loops so that other operations can execute; letting a high-priority loop consume 100% of the CPU should be reserved for cases where it is absolutely necessary, i.e., truly critical situations.
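As a rough illustration of that headroom idea (a sketch only; the 70% utilization target below is an assumption, not an NI recommendation): pick the shortest period that keeps the loop's worst-case CPU share below your chosen ceiling, leaving the remainder for lower-priority code.

```python
import math

def period_for_headroom(worst_case_ms, max_utilization=0.7):
    """Smallest whole-millisecond period that keeps this loop's worst-case
    CPU share at or below max_utilization (0.7 leaves ~30% for other code)."""
    return math.ceil(worst_case_ms / max_utilization)

print(period_for_headroom(64))   # 64 ms worst case at 70% utilization -> 92 ms period
```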

 

<Brian A | Applications Engineering | National Instruments> 

Message 3 of 3