State space (controls)

In control engineering, a state space representation is a mathematical model of a physical system as a set of input, output and state variables related by first-order differential equations. To abstract from the number of inputs, outputs and states, the variables are expressed as vectors and the differential and algebraic equations are written in matrix form. The state space representation (also known as the "time-domain approach") provides a convenient and compact way to model and analyze systems with multiple inputs and outputs. With <math>n<math> inputs and <math>m<math> outputs, we would otherwise have to write down <math>m \times n<math> Laplace transforms, one for each input-output pair, to encode all the information about a system. Unlike the frequency-domain approach, the use of the state space representation is not limited to systems with linear components and zero initial conditions. "State space" refers to the space whose axes are the state variables. The state of the system can be represented as a vector within that space.


State Variables

[Image: Typical state space model]

The internal state variables are the smallest possible subset of system variables that can represent the entire state of the system at any given time. State variables must be linearly independent; a state variable cannot be a linear combination of other state variables. The minimum number of state variables required to represent a given system, <math>p<math>, is usually equal to the order of the system's defining differential equation. If the system is represented in transfer function form, the minimum number of state variables is equal to the order of the transfer function's denominator after it has been reduced to a proper fraction. In electronic systems, the number of state variables is the same as the number of energy storage elements in the circuit (capacitors and inductors).

Linear systems

The state space representation of a system with <math>n<math> inputs, <math>m<math> outputs and <math>p<math> state variables is written in the following form:

<math>\dot{\mathbf{x}}(t) = A \mathbf{x}(t) + B \mathbf{u}(t)<math>
<math>\mathbf{y}(t) = C \mathbf{x}(t) + D \mathbf{u}(t)<math>

where

<math>\operatorname{dim}[A] = p \times p<math>, <math>\operatorname{dim}[B] = p \times n<math>, <math>\operatorname{dim}[C] = m \times p<math>, <math>\operatorname{dim}[D] = m \times n<math>, <math>\dot{\mathbf{x}}(t) \equiv {d\mathbf{x}(t) \over dt}<math>.

x is the "state vector", y is the "output vector", u is the "input (or control) vector", A is the "state matrix", B is the "input matrix", C is the "output matrix", and D is the "feedthrough (or feedforward) matrix". For simplicity, <math>D<math> is often chosen to be the zero matrix, i.e. the system is chosen not to have direct feedthrough.

There are other forms of the state-space model.

Continuous time-invariant:
<math>\dot{\mathbf{x}}(t) = A \mathbf{x}(t) + B \mathbf{u}(t)<math>
<math>\mathbf{y}(t) = C \mathbf{x}(t) + D \mathbf{u}(t)<math>

Continuous time-variant:
<math>\dot{\mathbf{x}}(t) = \mathbf{A}(t) \mathbf{x}(t) + \mathbf{B}(t) \mathbf{u}(t)<math>
<math>\mathbf{y}(t) = \mathbf{C}(t) \mathbf{x}(t) + \mathbf{D}(t) \mathbf{u}(t)<math>

Discrete time-invariant:
<math>\mathbf{x}(k+1) = A \mathbf{x}(k) + B \mathbf{u}(k)<math>
<math>\mathbf{y}(k) = C \mathbf{x}(k) + D \mathbf{u}(k)<math>

Discrete time-variant:
<math>\mathbf{x}(k+1) = \mathbf{A}(k) \mathbf{x}(k) + \mathbf{B}(k) \mathbf{u}(k)<math>
<math>\mathbf{y}(k) = \mathbf{C}(k) \mathbf{x}(k) + \mathbf{D}(k) \mathbf{u}(k)<math>

Laplace domain of continuous time-invariant:
<math>s \mathbf{X}(s) = A \mathbf{X}(s) + B \mathbf{U}(s)<math>
<math>\mathbf{Y}(s) = C \mathbf{X}(s) + D \mathbf{U}(s)<math>

Z-domain of discrete time-invariant:
<math>z \mathbf{X}(z) = A \mathbf{X}(z) + B \mathbf{U}(z)<math>
<math>\mathbf{Y}(z) = C \mathbf{X}(z) + D \mathbf{U}(z)<math>
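
The discrete time-invariant form is a plain recursion, so it can be stepped forward directly. A minimal sketch in Python/NumPy, assuming made-up matrices and a constant input:

import numpy as np

# Made-up matrices for a discrete time-invariant model
A = np.array([[0.9, 0.1],
              [0.0, 0.8]])
B = np.array([[0.0],
              [1.0]])
C = np.array([[1.0, 0.0]])
D = np.array([[0.0]])

x = np.zeros((2, 1))           # initial state x(0)
u = np.array([[1.0]])          # constant input u(k)
for k in range(50):
    y = C @ x + D @ u          # output equation y(k)
    x = A @ x + B @ u          # state update x(k+1)
print(x.ravel(), y.ravel())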

Stability

The stability of a time-invariant state-space model can most easily be determined by looking at the system's transfer function in factored form. It will then look something like this:

<math> \textbf{G}(s) = k \frac{ (s - z_{1})(s - z_{2})(s - z_{3})
                             }{ (s - p_{1})(s - p_{2})(s - p_{3})(s - p_{4})
                               }<math>

The denominator of the transfer function is equal to the characteristic polynomial, found by taking the determinant of <math>sI - A<math>:

<math>\lambda(s) = |sI - A|<math>.

The roots of this polynomial (the eigenvalues of <math>A<math>) are the poles of the system's transfer function. These poles can be used to analyze whether the system is asymptotically stable or marginally stable. An alternative approach to determining stability, which does not involve calculating eigenvalues, is to analyze the system's Lyapunov stability. The zeros found in the numerator of <math>\textbf{G}(s)<math> can similarly be used to determine whether the system is minimum phase.

The system may still be input-output stable (see BIBO stable) even though it is not internally stable. This may be the case if unstable poles are canceled out by zeros.
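
In practice the eigenvalues of A are usually computed numerically rather than read off a factored transfer function. A small sketch in Python/NumPy, using an arbitrary example matrix:

import numpy as np

A = np.array([[0.0, 1.0],
              [-2.0, -3.0]])   # arbitrary example state matrix

eigenvalues = np.linalg.eigvals(A)
print(eigenvalues)
# A continuous time-invariant model is asymptotically stable when every
# eigenvalue of A has a strictly negative real part.
print(np.all(eigenvalues.real < 0))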

Controllability and Observability

A continuous time-invariant state-space model with <math>p<math> state variables is controllable if and only if

<math>\operatorname{rank}\begin{bmatrix}B& AB& \cdots& A^{p-1}B\end{bmatrix} = p<math>

A continuous time-invariant state-space model is observable if and only if

<math>\operatorname{rank}\begin{bmatrix}C\\ CA\\ \vdots\\ CA^{p-1}\end{bmatrix} = p<math>

See Controllability and Observability for information about the implications of controllability and observability. (Rank is the number of linearly independent rows in a matrix.)
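
Both rank tests are straightforward to carry out numerically. The following Python/NumPy sketch uses two ad-hoc helper functions (they are illustrative, not part of any particular library) and arbitrary example matrices:

import numpy as np

def controllability_matrix(A, B):
    # [B, AB, A^2 B, ..., A^(p-1) B]
    p = A.shape[0]
    return np.hstack([np.linalg.matrix_power(A, i) @ B for i in range(p)])

def observability_matrix(A, C):
    # [C; CA; CA^2; ...; CA^(p-1)]
    p = A.shape[0]
    return np.vstack([C @ np.linalg.matrix_power(A, i) for i in range(p)])

A = np.array([[0.0, 1.0],
              [-2.0, -3.0]])
B = np.array([[0.0],
              [1.0]])
C = np.array([[1.0, 0.0]])

p = A.shape[0]
print(np.linalg.matrix_rank(controllability_matrix(A, B)) == p)  # controllable?
print(np.linalg.matrix_rank(observability_matrix(A, C)) == p)    # observable?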

Transfer Function

The "transfer function" of a continuous time-invariant state-space model can be derived in the following way

<math>\dot{\mathbf{x}}(t) = A \mathbf{x}(t) + B \mathbf{u}(t)<math>

which, after taking the Laplace transform (assuming zero initial conditions), yields

<math>s\mathbf{X}(s) = A \mathbf{X}(s) + B \mathbf{U}(s)<math>
<math>(s\mathbf{I} - A)\mathbf{X}(s) = B\mathbf{U}(s)<math>
<math>\mathbf{X}(s) = (s\mathbf{I} - A)^{-1}B\mathbf{U}(s)<math>

this is substituted for <math>\mathbf{X}(s)<math> in the output equation

<math>\mathbf{Y}(s) = C\mathbf{X}(s) + D\mathbf{U}(s)<math>
<math>\mathbf{Y}(s) = C((s\mathbf{I} - A)^{-1}B\mathbf{U}(s)) + D\mathbf{U}(s)<math>

which results in our final transfer function

<math>\mathbf{Y}(s) = \mathbf{G}(s)\mathbf{U}(s)<math>
<math>\mathbf{G}(s) = C(s\mathbf{I} - A)^{-1}B + D<math>

Clearly <math>\mathbf{G}(s)<math> must have dimension <math>m \times n<math>, and thus contains a total of <math>m n<math> elements. So for each input there are <math>m<math> transfer functions, one for each output. This is why the state-space representation can easily be the preferred choice for multiple-input, multiple-output (MIMO) systems.
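
The formula for <math>\mathbf{G}(s)<math> can be evaluated directly at any complex frequency. A small Python/NumPy sketch, again with arbitrary example matrices:

import numpy as np

A = np.array([[0.0, 1.0],
              [-2.0, -3.0]])
B = np.array([[0.0],
              [1.0]])
C = np.array([[1.0, 0.0]])
D = np.array([[0.0]])

def G(s):
    # G(s) = C (sI - A)^{-1} B + D evaluated at one complex frequency s
    p = A.shape[0]
    return C @ np.linalg.solve(s * np.eye(p) - A, B) + D

print(G(2.0j))   # frequency response at omega = 2 rad/s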

Canonical Realizations

Any given transfer function which is strictly proper can easily be transformed into state-space form with the following approach:

Given a transfer function, expand it to reveal all coefficients in both the numerator and denominator. This should result in the following form:

<math> \textbf{G}(s) = \frac{n_{1}s^{3} + n_{2}s^{2} + n_{3}s + n_{4}}{s^{4} + d_{1}s^{3} + d_{2}s^{2} + d_{3}s + d_{4}}<math>.

The coefficients can now be inserted directly into the state-space model by the following approach:

<math>\dot{\textbf{x}}(t) = \begin{bmatrix}
                              -d_{1}& -d_{2}& -d_{3}& -d_{4}\\
                               1&      0&      0&      0\\
                               0&      1&      0&      0\\
                               0&      0&      1&      0
                            \end{bmatrix}\textbf{x}(t) + 
                            \begin{bmatrix} 1\\ 0\\ 0\\ 0\\ \end{bmatrix}\textbf{u}(t)<math>
<math> \textbf{y}(t) = \begin{bmatrix} n_{1}& n_{2}& n_{3}& n_{4} \end{bmatrix}\textbf{x}(t)<math>.

This state-space realization is called controllable canonical form because the resulting model is guaranteed to be controllable.
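
As a sketch, the controllable canonical matrices can be assembled mechanically from the coefficients. The following Python/NumPy function is a hypothetical helper, assuming a monic denominator and a numerator padded to the same number of coefficients as there are states:

import numpy as np

def controllable_canonical(num, den):
    # num = [n1, ..., np] (length p), den = [1, d1, ..., dp] (monic, strictly proper)
    d = np.asarray(den, dtype=float)[1:]
    n = np.asarray(num, dtype=float)
    p = len(d)
    A = np.zeros((p, p))
    A[0, :] = -d                   # first row holds -d1 ... -dp
    A[1:, :-1] = np.eye(p - 1)     # shifted identity underneath
    B = np.zeros((p, 1))
    B[0, 0] = 1.0
    C = n.reshape(1, p)
    return A, B, C

A, B, C = controllable_canonical([1.0, 2.0, 3.0, 4.0],       # n1 ... n4
                                 [1.0, 5.0, 6.0, 7.0, 8.0])  # 1, d1 ... d4
print(A, B, C, sep="\n")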

The transfer function coefficients can also be used to construct another type of canonical form

<math>\dot{\textbf{x}}(t) = \begin{bmatrix}
                              -d_{1}&   1&  0&  0\\
                              -d_{2}&   0&  1&  0\\
                              -d_{3}&   0&  0&  1\\
                              -d_{4}&   0&  0&  0
                            \end{bmatrix}\textbf{x}(t) + 
                            \begin{bmatrix} n_{1}\\ n_{2}\\ n_{3}\\ n_{4} \end{bmatrix}\textbf{u}(t)<math>
<math> \textbf{y}(t) = \begin{bmatrix} 1& 0& 0& 0 \end{bmatrix}\textbf{x}(t)<math>.

This state-space realization is called observable canonical form because the resulting model is guaranteed to be observable.

Proper transfer functions

Transfer functions which are only proper (and not strictly proper) can also be realized quite easily. The trick here is to separate the transfer function into two parts: a strictly proper part and a constant.

<math> \textbf{G}(s) = \textbf{G}_{SP}(s) + \textbf{G}(\infty)<math>

The strictly proper transfer function can then be transformed into a canonical state space realization using the techniques shown above. The state space realization of the constant is trivially <math>\textbf{y}(t) = \textbf{G}(\infty)\textbf{u}(t)<math>. Together we then get a state space realization with matrices A, B, and C determined by the strictly proper part, and matrix D determined by the constant.


Here is an example to illustrate:

<math> \textbf{G}(s) = \frac{s^{2} + 3s + 3}{s^{2} + 2s + 1}
                     = \frac{s + 2}{s^{2} + 2s + 1} + 1<math>

which yields the following controllable realization

<math>\dot{\textbf{x}}(t) = \begin{bmatrix}
                              -2& -1\\
                               1&      0\\
                            \end{bmatrix}\textbf{x}(t) + 
                            \begin{bmatrix} 1\\ 0\end{bmatrix}\textbf{u}(t)<math>
<math> \textbf{y}(t) = \begin{bmatrix} 1& 2\end{bmatrix}\textbf{x}(t) + \begin{bmatrix} 1\end{bmatrix}\textbf{u}(t)<math>

Notice how the output also depends directly on the input. This is due to the <math>\textbf{G}(\infty)<math> constant in the transfer function.
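
The split into a strictly proper part and a constant is just polynomial long division of the numerator by the denominator. A quick check of the example above in Python/NumPy:

import numpy as np

num = [1.0, 3.0, 3.0]   # s^2 + 3s + 3
den = [1.0, 2.0, 1.0]   # s^2 + 2s + 1

# Polynomial long division: G(s) = quotient + remainder / den
quotient, remainder = np.polydiv(num, den)
print(quotient)    # expected [1.]     -> G(infinity) = 1
print(remainder)   # expected [1. 2.]  -> strictly proper part (s + 2)/(s^2 + 2s + 1)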

Feedback

[Image: Typical state space model with feedback]

A common method for feedback is to multiply the output by a matrix K and set this as the input to the system: <math>\mathbf{u}(t) = K \mathbf{y}(t)<math>. Since the values of K are unrestricted, they can easily be negated to give negative feedback; the negative sign in the common notation is purely conventional, and its absence has no impact on the end results.

<math>\dot{\mathbf{x}}(t) = A \mathbf{x}(t) + B \mathbf{u}(t)<math>
<math>\mathbf{y}(t) = C \mathbf{x}(t) + D \mathbf{u}(t)<math>

becomes

<math>\dot{\mathbf{x}}(t) = A \mathbf{x}(t) + B K \mathbf{y}(t)<math>
<math>\mathbf{y}(t) = C \mathbf{x}(t) + D K \mathbf{y}(t)<math>

solving the output equation for <math>\mathbf{y}(t)<math> and substituting in the state equation results in

<math>\dot{\mathbf{x}}(t) = \left(A + B K \left(I - D K\right)^{-1} C \right) \mathbf{x}(t)<math>
<math>\mathbf{y}(t) = \left(I - D K\right)^{-1} C \mathbf{x}(t)<math>

The advantage of this is that the eigenvalues of the closed-loop system, i.e. of <math>\left(A + B K \left(I - D K\right)^{-1} C \right)<math>, can be controlled by setting K appropriately. This assumes that the open-loop system is controllable, or at least that the unstable eigenvalues of A can be made stable through an appropriate choice of K.

One fairly common simplification to this system is removing D and setting C to identity, which reduces the equations to

<math>\dot{\mathbf{x}}(t) = \left(A + B K \right) \mathbf{x}(t)<math>
<math>\mathbf{y}(t) = \mathbf{x}(t)<math>

This reduces the necessary eigendecomposition to just <math>A + B K<math>.
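
A small numerical illustration of this simplified case (C is the identity, D is zero): the gain below is hand-picked purely for illustration, whereas a real design would compute K by pole placement or a similar method.

import numpy as np

A = np.array([[0.0, 1.0],
              [5.0, 0.0]])       # open loop is unstable (one eigenvalue is +sqrt(5))
B = np.array([[0.0],
              [1.0]])

K = np.array([[-9.0, -4.0]])     # hand-picked gain, chosen only for illustration

print(np.linalg.eigvals(A))          # open-loop eigenvalues
print(np.linalg.eigvals(A + B @ K))  # closed-loop eigenvalues of A + BK (both at -2)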

Feedback with setpoint input

[Image: State feedback with set point]

In addition to feedback, an input, <math>r(t)<math>, can be added such that <math>\mathbf{u}(t) = -K \mathbf{y}(t) + \mathbf{r}(t)<math>.

<math>\dot{\mathbf{x}}(t) = A \mathbf{x}(t) + B \mathbf{u}(t)<math>
<math>\mathbf{y}(t) = C \mathbf{x}(t) + D \mathbf{u}(t)<math>

becomes

<math>\dot{\mathbf{x}}(t) = A \mathbf{x}(t) - B K \mathbf{y}(t) + B \mathbf{r}(t)<math>
<math>\mathbf{y}(t) = C \mathbf{x}(t) - D K \mathbf{y}(t) + D \mathbf{r}(t)<math>

solving the output equation for <math>\mathbf{y}(t)<math> and substituting in the state equation results in

<math>\dot{\mathbf{x}}(t) = \left(A - B K \left(I + D K\right)^{-1} C \right) \mathbf{x}(t) + B \left(I - K \left(I + D K\right)^{-1}D \right) \mathbf{r}(t)<math>
<math>\mathbf{y}(t) = \left(I + D K\right)^{-1} C \mathbf{x}(t) + \left(I + D K\right)^{-1} D \mathbf{r}(t)<math>

One fairly common simplification to this system is removing D, which reduces the equations to

<math>\dot{\mathbf{x}}(t) = \left(A - B K \right) \mathbf{x}(t) + B \mathbf{r}(t)<math>
<math>\mathbf{y}(t) = C \mathbf{x}(t)<math>
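
A minimal Python/NumPy sketch simulating this simplified closed loop with forward-Euler integration; the matrices and the gain K are made-up example values, and note that with plain state feedback the output does not in general converge to r without additional scaling of the reference:

import numpy as np

A = np.array([[0.0, 1.0],
              [-2.0, -3.0]])
B = np.array([[0.0],
              [1.0]])
C = np.array([[1.0, 0.0]])
K = np.array([[3.0, 1.0]])     # illustrative state-feedback gain

dt, T = 0.01, 10.0
x = np.zeros((2, 1))
r = np.array([[1.0]])          # constant set point
for _ in range(int(T / dt)):
    x_dot = (A - B @ K) @ x + B @ r   # closed-loop state equation
    x = x + dt * x_dot                # forward-Euler step
print((C @ x).ravel())                # output after T seconds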

Moving object example

A classic linear system is the one-dimensional motion of an object. Applying Newton's laws of motion to an object moving horizontally on a plane and attached to a wall with a spring gives

<math>m \ddot{y} = u - k_1 \dot{y} - k_2 y<math>

where

  • <math>y<math> is position; <math>\dot y<math> is velocity; <math>\ddot{y}<math> is acceleration
  • u is an applied force
  • <math>k_1<math> is the viscous friction coefficient
  • <math>k_2<math> is the spring constant
  • m is the mass

The state equation would then become

<math>\left[ \begin{matrix} \mathbf{\dot{x_1}}(t) \\ \mathbf{\dot{x_2}}(t) \end{matrix} \right] = \left[ \begin{matrix} 0 & 1 \\ -\frac{k_2}{m} & -\frac{k_1}{m} \end{matrix} \right] \left[ \begin{matrix} \mathbf{x_1}(t) \\ \mathbf{x_2}(t) \end{matrix} \right] + \left[ \begin{matrix} 0 \\ \frac{1}{m} \end{matrix} \right] \mathbf{u}(t)<math>
<math>\mathbf{y}(t) = \left[ \begin{matrix} 1 & 0 \end{matrix} \right] \left[ \begin{matrix} \mathbf{x_1}(t) \\ \mathbf{x_2}(t) \end{matrix} \right]<math>

where

  • <math>x_1<math> is the position of the object
  • <math>\dot{x_1} = x_2<math> is the velocity of the object
  • <math>\ddot{x_1} = \dot{x_2}<math> is the acceleration of the object
  • the output <math>\mathbf{y}(t)<math> is the position of the object

The controllability test is then

<math>\left[ \begin{matrix} B & AB \end{matrix} \right] = \left[ \begin{matrix} \left[ \begin{matrix} 0 \\ \frac{1}{m} \end{matrix} \right] & \left[ \begin{matrix} 0 & 1 \\ -\frac{k_2}{m} & -\frac{k_1}{m} \end{matrix} \right] \left[ \begin{matrix} 0 \\ \frac{1}{m} \end{matrix} \right] \end{matrix} \right] = \left[ \begin{matrix} 0 & \frac{1}{m} \\ \frac{1}{m} & -\frac{k_1}{m^2} \end{matrix} \right]<math>

which has full rank for all <math>k_1<math> and nonzero <math>m<math>.

The observability test is then

<math>\left[ \begin{matrix} C \\ CA \end{matrix} \right] = \left[ \begin{matrix} \left[ \begin{matrix} 1 & 0 \end{matrix} \right] \\ \left[ \begin{matrix} 1 & 0 \end{matrix} \right] \left[ \begin{matrix} 0 & 1 \\ -\frac{k_2}{m} & -\frac{k_1}{m} \end{matrix} \right] \end{matrix} \right] = \left[ \begin{matrix} 1 & 0 \\ 0 & 1 \end{matrix} \right]<math>

which also has full rank. Ergo, this system is both controllable and observable.
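
Both rank checks for this example are easy to reproduce numerically; a quick sketch with arbitrary parameter values:

import numpy as np

m, k1, k2 = 1.0, 0.5, 2.0      # arbitrary mass, friction and spring values

A = np.array([[0.0, 1.0],
              [-k2 / m, -k1 / m]])
B = np.array([[0.0],
              [1.0 / m]])
C = np.array([[1.0, 0.0]])

ctrb = np.hstack([B, A @ B])   # [B, AB]
obsv = np.vstack([C, C @ A])   # [C; CA]
print(np.linalg.matrix_rank(ctrb))   # 2 -> controllable
print(np.linalg.matrix_rank(obsv))   # 2 -> observable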

Nonlinear systems

The more general form of a state space model can be written as two functions.

<math>\mathbf{\dot{x}}(t) = \mathbf{f}(t, x, u)<math>
<math>\mathbf{y}(t) = \mathbf{h}(t, x, u)<math>

The first is the state equation and the latter is the output equation. If the function f is a linear combination of states and inputs then the equations can be written in matrix notation like above. The u argument to the functions can be dropped if the system is unforced (i.e., it has no inputs).

Pendulum example

A classic nonlinear system is a simple unforced pendulum

<math>ml\ddot\theta = -mg\sin\theta - kl\dot\theta<math>

where

  • <math>\theta<math> is the angle of the pendulum with respect to the direction of gravity
  • m is the mass of the pendulum (pendulum rod's mass is assumed to be zero)
  • g is the gravitational acceleration
  • k is the coefficient of friction at the pivot point
  • l is the radius of the pendulum (to the center of gravity of the mass m)

The state equations are then

<math>\dot{x_1} = x_2<math>
<math>\dot{x_2} = - \frac{g}{l}\sin{x_1} - \frac{k}{m}x_2<math>

where

  • <math>x_1 = \theta<math> is the angle of the pendulum
  • <math>\dot{x_1} = x_2<math> is the rotational velocity of the pendulum
  • <math>\ddot{x_1} = \dot{x_2}<math> is the rotational acceleration of the pendulum

Instead, the state equation can be written in the general form

<math>\dot{x} = \left( \begin{matrix} \dot{x_1} \\ \dot{x_2} \end{matrix} \right) = \mathbf{f}(t, x) = \left( \begin{matrix} x_2 \\ - \frac{g}{l}\sin{x_1} - \frac{k}{m}x_2 \end{matrix} \right)<math>

The equilibrium (stationary) points of a system are those where <math>\dot{x} = 0<math>, and so the equilibrium points of the pendulum are those that satisfy

<math>\left( \begin{matrix} x_1 \\ x_2 \end{matrix} \right) = \left( \begin{matrix} n\pi \\ 0 \end{matrix} \right)<math>

for integers n.
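
A short Python sketch integrating this nonlinear state equation with forward Euler; the parameter values are arbitrary, and a production simulation would use a proper ODE solver:

import numpy as np

g, l, k, m = 9.81, 1.0, 0.2, 1.0   # arbitrary parameter values

def f(t, x):
    # Nonlinear pendulum state equation, x = [theta, theta_dot]
    return np.array([x[1],
                     -(g / l) * np.sin(x[0]) - (k / m) * x[1]])

dt, T = 0.001, 10.0
x = np.array([np.pi / 4, 0.0])     # released from 45 degrees at rest
for step in range(int(T / dt)):
    x = x + dt * f(step * dt, x)   # forward-Euler step
print(x)                           # decays toward the equilibrium (0, 0)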
