Cross-posted to LAVA: https://lavag.org/topic/19920-bug-in-ni-pid-vi/
In NI_PID_pid.lvlib:PID.vi, integrator anti-windup appears to be implemented by rewriting the accumulated error so that the sum of the proportional and integral action does not exceed the output limits, but that causes this:
There are two things here that I don't expect, both of which I believe are caused by the anti-windup implementation. The first is that the controller is initially saturated high but backs off, despite being far from the setpoint and having a high proportional gain. The second is that upon lowering the setpoint, the controller output goes negative, despite the process variable always having been below the setpoint (and no derivative gain).
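To make that second symptom concrete, here is a minimal Python sketch of the scheme as I understand it (my own reconstruction with made-up names, not the actual contents of PID.vi): the accumulator is rewritten every iteration so that P + I stays inside the output limits, which is what lets the output swing negative after a setpoint decrease even though the error never changed sign.

```python
# Hypothetical reconstruction of the anti-windup scheme described above,
# NOT the actual code inside NI_PID_pid.lvlib:PID.vi. All names are mine.

def make_pid(kp, ki, dt, out_min, out_max):
    acc = 0.0  # integral accumulator

    def update(setpoint, pv):
        nonlocal acc
        err = setpoint - pv
        p = kp * err
        acc += ki * err * dt
        # Back-calculation with an effective Kb of 1: rewrite the
        # accumulator so that p + acc stays inside the output limits.
        acc = min(max(acc, out_min - p), out_max - p)
        return min(max(p + acc, out_min), out_max)

    return update

pid = make_pid(kp=10.0, ki=0.01, dt=0.1, out_min=-1.0, out_max=1.0)

# PV stuck well below the setpoint: the controller saturates high, but
# the accumulator is immediately forced down to out_max - p = -9.
for _ in range(50):
    u = pid(setpoint=1.0, pv=0.0)

# Lower the setpoint (PV still below it): p drops to 0.5, and the
# rewritten accumulator drags the output all the way to out_min even
# though the error is still positive.
u = pid(setpoint=0.05, pv=0.0)
```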
The integral gain in the above example was so low that it doesn't have much effect on this timescale. Setting the integral time to 0 disables both integral action and anti-windup in PID.vi, and this is what happens (this response is close to what I would expect to see on a short timescale with a low integral gain, minus the slowly accumulating integral action):
Again, I hope I'm just overlooking something - it wouldn't be the first time.
infinitenothing over on LAVA referred me to some documentation backing up that this is a standard anti-windup method. It just seems potentially problematic to me compared to conditionally accumulating the error (i.e., don't accumulate positive error if the controller is saturated high, or negative error if it's saturated low). Does anyone know why back-calculation might be preferred?
This random person on Stack Overflow says that being able to set the back-calculation coefficient is important:
" Back Calculation highly depends on the back calculation coefficient
Kb. If you don't know how to actually calculate the parameter
Kb don't use back-calculation. "
I have no idea whether this is true or not, but the NI PID.vi effectively has a constant Kb of 1.
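For reference, the textbook back-calculation loop with a tunable Kb looks roughly like this (a sketch under my own naming, not PID.vi's code): the clipped portion of the output is fed back into the integrator, and Kb sets how aggressively the integrator unwinds during saturation.

```python
# Sketch of textbook back-calculation with a tunable coefficient Kb
# (my own names and structure, not NI's). The amount clipped from the
# output, u_sat - u, is fed back into the integrator scaled by Kb.

def pid_backcalc_step(state, setpoint, pv, kp, ki, kb, dt, out_min, out_max):
    err = setpoint - pv
    u = kp * err + state["acc"]
    u_sat = min(max(u, out_min), out_max)
    # kb = 0 is a plain (windup-prone) integrator; larger kb unwinds faster
    state["acc"] += (ki * err + kb * (u_sat - u)) * dt
    return u_sat

state = {"acc": 0.0}
for _ in range(100):  # deep saturation: setpoint 10, PV 0, limits +/-1
    u = pid_backcalc_step(state, 10.0, 0.0, kp=1.0, ki=1.0, kb=5.0,
                          dt=0.1, out_min=-1.0, out_max=1.0)
# with kb > 0 the accumulator settles at a bounded value instead of
# growing by ki*err*dt on every iteration
```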
Yes, your assertions above are correct. The chosen PID algorithm used for anti-windup does have the behavior you demonstrated in your first post, and to avoid it you have to properly tune the integral gain. This behavior will improve considerably if you use smaller gains.
We investigated other techniques in the past (using back-calculation and saturation), and those techniques also have side effects, like increasing the number of parameters to tune, or strange behaviors depending on the parameters used. So the secret is to actually tune the PID to avoid this effect.
And the reason we did not change it is that the feedback from several customers and technical leaders was that "it is better to deal with the devil you know than to deal with an unknown one".
Now, since it is LabVIEW, you are welcome to create your own anti-windup scheme and define the behavior you'd like to have. One option is to limit the output of the integral action, so you would never see the inversion you observed.
Thanks for the response. I agree that the integral gain was low in my demo - that was to make the behavior clear, but I think the behavior will still happen regardless of the integral gain if the proportional gain is high enough and you change the setpoint in the "wrong" direction while the controller is saturated.
It seems to me that clamping, the second method listed here: https://www.mathworks.com/help/simulink/examples/anti-windup-control-using-a-pid-controller.html, would achieve the same thing without this side effect and no extra parameters. Am I missing a case where clamping would have potentially unwanted behavior?
Interestingly enough, that was the chosen method for the new floating-point implementation of the FPGA PID, and we will probably use it for other PIDs:
However, you need to keep in mind that there is always a side effect. For example, for this situation (large setpoint changes), we compared the existing PID algorithm with the "clamp" version of it.
Here is the existing algorithm's behavior:
And here is the "clamp" algorithm:
For small changes, though, there were no differences. That is the challenge of changing anything inside a PID algorithm: any change could modify existing applications and would make upgrading to newer versions of LabVIEW more challenging.
On the other hand, we know those limitations, and for new versions of the PID we are using a more "known" version of this algorithm, while keeping the previous VIs with the existing behavior for the sake of upgrades.
One final note: I do agree that the behavior is there if you change the gains, but for the example above, with proper tuning, you can use this behavior to your advantage by avoiding overshoot in saturation without needing to change the linear part. So the secret is always to tune and validate for all situations: start-up, setpoint changes, disturbance rejection, with and without saturation of the error.
Thanks for the reply and the examples. I don't think what you tested was clamping (conditional integration), though, but rather limits on the accumulator. I don't think clamping would produce such dramatic overshoot compared to what's currently implemented. I tried out this implementation:
And got this result:
Compared to the PID that ships with LabVIEW:
Now that I've thought about it, I realize that what's currently implemented effectively resets the accumulator during saturation, whereas clamping does not, but it seems like that may or may not be desirable depending on your system.
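That difference can be shown in a few lines (assumed forms of both schemes, my own sketch rather than the shipping VI's code): during a long saturated stretch, the back-calculation style pins the accumulator at out_max - P, effectively resetting it, while clamping simply holds it where it was.

```python
# Compact comparison (assumed forms of both schemes, not the shipping
# VI's code) of what each does to the accumulator while the output is
# saturated high with a large positive error.

def step_backcalc(acc, err, kp, ki, dt, lo, hi):
    p = kp * err
    acc += ki * err * dt
    acc = min(max(acc, lo - p), hi - p)  # rewrite acc: "reset" style
    return acc, min(max(p + acc, lo), hi)

def step_clamp(acc, err, kp, ki, dt, lo, hi):
    p = kp * err
    u = p + acc
    sat = min(max(u, lo), hi)
    if not (u != sat and (u - sat) * err > 0):  # hold acc when winding up
        acc += ki * err * dt
    return acc, sat

acc_b = acc_c = 0.0
for _ in range(100):  # saturated high throughout, err = 10
    acc_b, _ = step_backcalc(acc_b, 10.0, 1.0, 0.1, 0.1, -1.0, 1.0)
    acc_c, _ = step_clamp(acc_c, 10.0, 1.0, 0.1, 0.1, -1.0, 1.0)
# back-calculation pinned acc_b at hi - p = -9, while clamping held
# acc_c at its pre-saturation value of 0
```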