I have seen the arguments. State space models are superior. PID is all you really need, so why mess with state space? And so on and on. If you have no system model, and are not likely to get one, there is little choice and you need a heuristic controller, such as PID or a fuzzy rule-based controller. For the case where you have a rather good understanding of the system, including a state model, but PID controllers are sufficient for an implementation, it seems a shame that the two schemes are incompatible... or are they?
Check out the following. I have never seen an analysis of this sort in print elsewhere. It may be old business. If this is the case I apologize in advance.
Assume that you have a state-space model of your system, with state x, a single-variable control applied through system input u, continuous state matrix a, input coupling matrix b, and observation matrix c to observe the relevant system output y.

// System model
x' = a x + b u
y  = c x
We will consider the input u to consist of two parts: a setpoint driving term s, and a feedback term v. The input terms in variable u are treated as separate terms initially, with one input coupling matrix bf for the feedback v, and another one bs for the setpoint s. For purposes of simulation, we might also include a third input variable and coupling matrix to represent a class of disturbances. This temporary separation will make it a little easier to think about the setpoint and feedback signals separately; the notation can be unified later.
Looking ahead, we know that the PID control rule will need to compute the difference between the observed output variable and the setpoint, so an additional setpoint coupling vector d (not yet defined) is reserved to make the setpoint variable visible in the observation equation. In the reorganized form, u denotes the input applied through the feedback channel; the computed PID feedback v will later be substituted into it.

// Reorganized system model
x' = a x + bf u + bs s
y  = c x + d s
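To make the notation concrete, the reorganized model can be evaluated numerically. This is a NumPy sketch using an invented 2-state plant; none of these numbers come from the article.

```python
import numpy as np

# Invented 2-state plant, purely for illustration of the notation.
a  = np.array([[ 0.0,  1.0],
               [-1.0, -0.2]])   # continuous state matrix
bf = np.array([[0.0], [1.0]])   # coupling for the feedback-channel input
bs = np.array([[0.0], [0.5]])   # coupling for the setpoint input
c  = np.array([[1.0, 0.0]])     # observation row
d  = np.array([[-1.0]])         # setpoint coupling in the observation

x = np.array([[0.3], [0.0]])    # current state
u, s = 0.1, 1.0                 # feedback-channel input and setpoint

xdot = a @ x + bf * u + bs * s  # x' = a x + bf u + bs s
y    = c @ x + d * s            # y  = c x + d s
```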
A PID controller (in parallel form) applies three control rules to perform its computations. Each of these rules observes variables dependent on the setpoint and system state.
Proportional feedback responds in proportion to the difference between the desired output (setpoint) and the observed system output. The current values of the observed variable y and the setpoint s are needed. A proportional feedback rule is:

// proportional feedback
v = -kp (y - s) = -kp (c x - s)

kp is the proportional gain setting of the PID controller.
The derivative part of a PID controller is somewhat fictional. A real PID controller observes changes in its signals and from these estimates what the derivative must be.
If you have an exact model of your PID controller internals, it should be possible to use it directly; but most likely you do not. The formulation here uses the derivative of the observed output variable rather than the derivative of the tracking error. Since the tracking error is the difference between the output variable and the setpoint, the two kinds of derivative are the same for a constant setpoint level. You might not have a good way to model setpoint changes, and the setpoint signal might be non-differentiable. (PID controllers are famous for giving the system a severe jolt through the derivative feedback term when the setpoint level is changed quickly.) Omitting the direct coupling to the setpoint variable eliminates this problem.
Even after this adjustment, the derivative estimate remains sensitive to high frequency noise. Lowpass filtering is typically applied to limit bandwidth. These details might be unspecified for your PID control equipment. Even if you don't know the exact processing that your PID controller equipment uses, you know that it should reasonably (though not perfectly) track the derivatives that appear in your system model.
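One common (though by no means universal) realization of such a band-limited derivative estimate is a finite difference smoothed by a first-order lowpass filter. This is a sketch; the smoothing coefficient alpha is an invented example value.

```python
import numpy as np

def filtered_derivative(y, dt, alpha=0.2):
    """Estimate y' from samples: a finite difference smoothed by a
    first-order lowpass filter (coefficient alpha in (0, 1])."""
    d_est = 0.0
    estimates = []
    for k in range(1, len(y)):
        raw = (y[k] - y[k-1]) / dt       # raw finite difference
        d_est += alpha * (raw - d_est)   # first-order lowpass update
        estimates.append(d_est)
    return np.array(estimates)

# On a clean ramp of slope 2, the filtered estimate settles at 2;
# on noisy data the same filter trades lag for noise rejection.
t = np.arange(0.0, 10.0, 0.1)
d = filtered_derivative(2.0 * t, dt=0.1)
```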
After all of these disclaimers, the model must now set up its derivative estimate, using information available in the state-space model. Ignoring the s term in the observation equation for reasons just discussed, the derivative of the output is

y  = c x
y' = c x' = c a x + c bf u + c bs s
Then the derivative feedback rule will have the form

// derivative feedback
v = -kd y' = -kd (c a x + c bf u + c bs s)

kd is the derivative gain setting of the PID controller. This expression is not quite right as it stands. After substituting the feedback term v into the system input variable u, the feedback variable appears on both sides of the expression.

// derivative feedback
v = -kd (c a x + c bf v + c bs s)
An algebraic reduction can combine the two v terms. Define the algebraic factor Kg and use it to simplify the derivative feedback expression above.

Kg = 1 / [1 + kd c bf]
v  = -kd Kg (c a x + c bs s)
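A quick scalar check (all numbers invented) confirms that the Kg reduction actually solves the implicit feedback expression.

```python
# Scalar illustration: verify that v = -kd Kg (c a x + c bs s)
# satisfies the implicit form v = -kd (c a x + c bf v + c bs s).
kd     = 0.8
c_a_x  = 1.5    # stands in for the c a x term
c_bf   = 0.4    # direct feedback-to-output coupling c bf
c_bs_s = -0.3   # stands in for the c bs s term

Kg = 1.0 / (1.0 + kd * c_bf)
v  = -kd * Kg * (c_a_x + c_bs_s)

# Substitute v back into the implicit expression; the residual
# should vanish if the algebra above is right.
residual = v + kd * (c_a_x + c_bf * v + c_bs_s)
```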
For many and possibly most systems, the feedback does not couple directly into the output variable, and for this common case the c bf product evaluates to zero. The Kg term then reduces to a value of 1 and can be ignored.
The PID controller integrates the difference between the output y and the setpoint level s over time. Augment the system equations with an additional artificial state variable z to represent the integral state. Include this as an extra row in the state equations.

z' = y - s = c x - s
The integral feedback rule is then

// integral feedback
v = -ki z

ki is the integral gain setting.
The system model, augmented with the additional PID integrator, is now as follows.
X = | x |     U = | v |
    | z |         | s |

A = | a  0 |   B = | bf  bs |
    | c  0 |       | 0   -1 |

// Augmented system model in original variables
x' = a x + 0 z + bf v + bs s
z' = c x + 0 z + 0 v  - 1 s

// Augmented system model
X' = A X + B U
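The block structure can be verified numerically: build A and B, then check that X' = A X + B U reproduces the componentwise equations. The 2-state plant below is invented for illustration.

```python
import numpy as np

# Invented 2-state plant; only the block structure matters here.
rng = np.random.default_rng(0)
a  = np.array([[0.0, 1.0], [-1.0, -0.2]])
bf = np.array([[0.0], [1.0]])
bs = np.array([[0.0], [0.5]])
c  = np.array([[1.0, 0.0]])

A = np.block([[a, np.zeros((2, 1))],    # x' row: a x + 0 z
              [c, np.zeros((1, 1))]])   # z' row: c x + 0 z
B = np.block([[bf, bs],                                # x' row: bf v + bs s
              [np.zeros((1, 1)), -np.ones((1, 1))]])   # z' row: 0 v - 1 s

x = rng.standard_normal((2, 1))
z = rng.standard_normal((1, 1))
v, s = 0.7, -0.2
X = np.vstack([x, z])
U = np.array([[v], [s]])

lhs = A @ X + B @ U                        # augmented form X' = A X + B U
rhs = np.vstack([a @ x + bf * v + bs * s,  # original x' equation
                 c @ x - s])               # integrator z' equation
```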
We now have everything we need, but we must collect the observed variables before applying the PID control rule.
yp = c x - s                // proportional error
yi = z                      // integral of proportional error
yd = Kg c a x + Kg c bs s   // output derivative
y  = c x                    // the original output variable
The expanded observation equations can be reorganized as a matrix expression with separate state- and setpoint-related terms.
Y = | yp |    C = | c       0 |    D = | -1      |
    | yi |        | 0       1 |        | 0       |
    | yd |        | Kg c a  0 |        | Kg c bs |
    | y  |        | c       0 |        | 0       |

Y = C X + D s
Now the PID feedback can be computed. PID feedback is a weighted sum of the P, I and D control rules, with adjustable gain parameters:

v = -( kp yp + ki yi + kd yd )
Define the gain vector Kpid as follows, and then the PID computations can be represented in matrix form.
Kpid = [ kp ki kd 0 ]
v = -Kpid Y = -Kpid ( C X + D s )
The complete augmented model is as follows:
states
X = | x |
    | z |

inputs
U = | v |
    | s |

state equations
X' = A X + B U

A = | a  0 |   B = | bf  bs |
    | c  0 |       | 0   -1 |

observation equations
Kg = 1 / [1 + kd c bf]
Y  = C X + D U

C = | c       0 |    D = | 0   -1      |
    | 0       1 |        | 0    0      |
    | Kg c a  0 |        | 0   Kg c bs |
    | c       0 |        | 0    0      |

PID feedback rule
v = -Kpid Y
Kpid = [ kp ki kd 0 ]
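All of the blocks above can be collected into a single assembly routine. This is a hedged NumPy sketch, not the author's code: the helper name augment_pid is invented, and the 2-state plant it is applied to is made up for illustration (only the gain values match the ones quoted later in the article).

```python
import numpy as np

def augment_pid(a, bf, bs, c, kp, ki, kd):
    """Assemble the augmented (A, B, C, D, Kpid) blocks derived above
    for a single-input, single-output plant."""
    n = a.shape[0]
    Kg = 1.0 / (1.0 + kd * (c @ bf).item())   # derivative reduction factor
    A = np.block([[a, np.zeros((n, 1))],
                  [c, np.zeros((1, 1))]])
    B = np.block([[bf, bs],
                  [np.zeros((1, 1)), -np.ones((1, 1))]])
    C = np.block([[c,                np.zeros((1, 1))],   # yp row
                  [np.zeros((1, n)), np.ones((1, 1))],    # yi row
                  [Kg * (c @ a),     np.zeros((1, 1))],   # yd row
                  [c,                np.zeros((1, 1))]])  # y row
    D = np.array([[0.0, -1.0],
                  [0.0,  0.0],
                  [0.0,  Kg * (c @ bs).item()],
                  [0.0,  0.0]])
    Kpid = np.array([[kp, ki, kd, 0.0]])
    return A, B, C, D, Kpid

# Invented 2-state plant, with the gain values quoted later in the text.
a  = np.array([[0.0, 1.0], [-1.0, -0.2]])
bf = np.array([[0.0], [1.0]])
bs = np.array([[0.0], [0.5]])
c  = np.array([[1.0, 0.0]])
A, B, C, D, Kpid = augment_pid(a, bf, bs, c, kp=2.0, ki=-0.5, kd=-15.0)
```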
We have just obtained a state space model of a system under PID control. This is a matter of notation, not control theory. Because it is clear that this represents PID control, and that it is a state space representation, there is no theoretical need to choose between PID and linear control theory. That choice might need to be made because of other practical restrictions: the model is too difficult to identify, the practical difficulties of deploying a state-space controller, and so on.
A hypothetical system is constructed deliberately to be extremely difficult for a PID controller, so that the simulation always has something to show regardless of the gain settings. The problem is to cancel out observed displacements in one of the variables by controlling an input that drives another variable. The two variables change out of phase; consequently, the PID controller might need "negative feedback gains" of the sort that would drive ordinary systems straight to instability.
The desired disturbance level is 0, so the setpoint variable is 0 and the bs terms are otherwise unused. For this simulation, the bs vector is used artificially to insert a simulated disturbance.
Here is the original system model, with the third state variable observed for feedback, while inputs drive the fourth state variable.
sysF = ...
  [  0.0     0.0    1.0    0.0;  ...
     0.0     0.0    0.0    1.0;  ...
    -0.052   0.047 -0.01   0.0;  ...
     0.047  -0.052  0.0   -0.04 ];

sysB = [ 0.0;  0.0;  0.0;  0.01 ];

obsC = [ 0.0,  0.0,  1.0,  0.0 ];
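As a quick sanity check on this plant (a NumPy sketch rather than the original MATLAB), the open-loop eigenvalues can be inspected; they should be stable but only lightly damped, matching the description above.

```python
import numpy as np

# The example plant matrix, as given in the article.
sysF = np.array([[ 0.0,    0.0,    1.0,   0.0 ],
                 [ 0.0,    0.0,    0.0,   1.0 ],
                 [-0.052,  0.047, -0.01,  0.0 ],
                 [ 0.047, -0.052,  0.0,  -0.04]])

eigs = np.linalg.eigvals(sysF)
worst = max(eigs.real)   # real part of the slowest-decaying mode
```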
There is a PID controller, separately modeled, with gains

kp = 2.0;  ki = -0.50;  kd = -15.0;
This is simulated with time step delT = 0.5 through 250 steps, using trapezoidal rule integration for the system state model and rectangular rule integration for the PID integral term. The following plot shows state 3, which we would like to regulate to 0. The green trace is without feedback control, and the blue trace is with PID control.
So now the problem is reformulated to include the PID controller within the state space model. Because the feedback drives one variable, but the output observes a different variable, there is no direct coupling into the derivative term. The Kg term reduces to 1.0 and can be omitted. Here are the augmented equations:
sVar  = 1.0;
setpt = 0.0;

sysF = [ ...
    0.0     0.0    1.0    0.0    0.0;  ...
    0.0     0.0    0.0    1.0    0.0;  ...
   -0.052   0.047 -0.01   0.0    0.0;  ...
    0.047  -0.052  0.0   -0.04   0.0;  ...
    0.0     0.0    1.0    0.0    0.0 ];

sysBset = [ 0.0;  0.0;  0.0;  0.01;  1.0 ];
sysBfb  = [ 0.0;  0.0;  0.0;  0.01;  0.0 ];

obsC = [ ...
    0.0     0.0    1.0    0.0    0.0;  ...
    0.0     0.0    0.0    0.0    1.0;  ...
   -0.052   0.047 -0.01   0.0    0.0;  ...
    0.0     0.0    1.0    0.0    0.0 ];

obsD = [ -1.0;  0.0;  0.0;  0.0 ];

pidK = [ 2.0  -0.50  -15.0  0.0 ];
Here is the simulation for the augmented system, recording the state trajectory for later inspection.
sVar = 0;
for i = 2:steps
    % Current state and observed variables
    xstate = hist(:,i-1);
    yobs = obsC * xstate + obsD * sVar;

    % Feedback law applied to current output
    fb(i) = -pidK * yobs;

    % Predictor step (rectangular rule)
    deriv = sysF * xstate + sysBset * sVar + sysBfb * fb(i);
    xproj = xstate + deriv*delT;
    yobs  = obsC * xproj + obsD * sVar;

    % Corrector step (trapezoidal rule)
    dproj = sysF * xproj + sysBset * sVar + sysBfb * fb(i);
    xstate = xstate + 0.5*(deriv+dproj)*delT;
    hist(:,i) = xstate;
end
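For readers without MATLAB, here is a NumPy transcription of the same predictor-corrector loop (a sketch, not the author's code). The initial displacement of state 3 is an assumption standing in for the disturbance insertion, whose exact setup is not shown in the article.

```python
import numpy as np

# Augmented system matrices, as listed above.
sysF = np.array([[ 0.0,    0.0,    1.0,   0.0,   0.0],
                 [ 0.0,    0.0,    0.0,   1.0,   0.0],
                 [-0.052,  0.047, -0.01,  0.0,   0.0],
                 [ 0.047, -0.052,  0.0,  -0.04,  0.0],
                 [ 0.0,    0.0,    1.0,   0.0,   0.0]])
sysBset = np.array([[0.0], [0.0], [0.0], [0.01], [1.0]])
sysBfb  = np.array([[0.0], [0.0], [0.0], [0.01], [0.0]])
obsC = np.array([[ 0.0,    0.0,    1.0,   0.0,   0.0],
                 [ 0.0,    0.0,    0.0,   0.0,   1.0],
                 [-0.052,  0.047, -0.01,  0.0,   0.0],
                 [ 0.0,    0.0,    1.0,   0.0,   0.0]])
obsD = np.array([[-1.0], [0.0], [0.0], [0.0]])
pidK = np.array([[2.0, -0.50, -15.0, 0.0]])

delT, steps = 0.5, 250
sVar = 0.0
hist = np.zeros((5, steps))
hist[2, 0] = 1.0   # assumed initial displacement of state 3
fb = np.zeros(steps)

for i in range(1, steps):
    xstate = hist[:, i-1:i]
    yobs = obsC @ xstate + obsD * sVar      # observed variables
    fb[i] = -(pidK @ yobs).item()           # PID feedback law
    # Predictor step (rectangular rule)
    deriv = sysF @ xstate + sysBset * sVar + sysBfb * fb[i]
    xproj = xstate + deriv * delT
    # Corrector step (trapezoidal rule)
    dproj = sysF @ xproj + sysBset * sVar + sysBfb * fb[i]
    hist[:, i:i+1] = xstate + 0.5 * (deriv + dproj) * delT
```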
Here is the result of this simulation.
At first glance, it is a match for the previous simulation. It has captured the essential behaviors of the PID-controlled system. However, the results of the two simulations do not match exactly, and we should not expect them to, because of the differences in the internal representations of the derivative feedback.
Okay, that's the idea. What I don't know is... how well does this work in practice?
Site: Larry's Barely Operating Site, http://home.earthlink.net/~ltrammell
Created: Nov 24, 2002   Revised: Dec 15, 2010
Status: Experimental
Contact: NOSPAM ltrammell At earthlink DOT net NOSPAM
Related: (none)
Restrictions: This information is public