Kalman filter


The Kalman filter is an efficient recursive filter which estimates the state of a dynamic system from a series of incomplete and noisy measurements. An example application is providing accurate, continuously updated information about the position and velocity of an object given only a sequence of observations of its position, each of which includes some error. It is used in a wide range of engineering applications, from radar to computer vision, and is an important topic in control theory and control systems engineering.

The filter is named after its inventor, Rudolf E. Kalman, though Peter Swerling actually developed a similar algorithm earlier. Stanley Schmidt is generally credited with developing the first implementation of a Kalman filter: during a visit by Kalman to the NASA Ames Research Center, Schmidt saw the applicability of Kalman's ideas to the problem of trajectory estimation for the Apollo program, leading to the filter's incorporation in the Apollo navigation computer. The filter was developed in papers by Swerling (1958), Kalman (1960), and Kalman and Bucy (1961).

A wide variety of Kalman filters have now been developed, from Kalman's original formulation, now called the simple Kalman filter, to Schmidt's extended filter, the information filter, and a variety of square-root filters developed by Bierman, Thornton and many others. Perhaps the most commonly used type of Kalman filter is the phase-locked loop, now ubiquitous in radios, computers, and nearly any other type of video or communications equipment.


Underlying dynamic system model

Kalman filters are based on linear algebra and the hidden Markov model. The underlying dynamical system is modelled as a Markov chain built on linear operators perturbed by Gaussian noise. The state of the system is represented as a vector of real numbers. At each discrete time increment, a linear operator is applied to the state to generate the new state, with some noise mixed in, and optionally some information from the controls on the system if they are known. Then, another linear operator mixed with more noise generates the visible outputs from the hidden state.

In order to use the Kalman filter to estimate the internal state of a process given only a sequence of noisy observations, one must model the process in accordance with the framework of the Kalman filter. This means specifying the matrices Fk, Hk, Qk, Rk, and sometimes Bk for each time-step k as described below.

[Image: Model underlying the Kalman filter. Circles are vectors, squares are matrices, and stars represent Gaussian noise with the associated covariance matrix at the lower right.]

The Kalman filter model assumes the true state at time k is evolved from the state at (k-1) according to

<math> \textbf{x}_{k} = \textbf{F}_{k} \textbf{x}_{k-1} + \textbf{B}_{k}\textbf{u}_{k} + \textbf{w}_{k} <math>


where

  • Fk is the state transition model which is applied to the previous state xk-1;
  • Bk is the control-input model which is applied to the control vector uk;
  • wk is the process noise which is assumed to be drawn from a zero mean multivariate normal distribution with covariance Qk.
<math>\textbf{w}_{k} \sim N(0, \textbf{Q}_k) <math>

At time k an observation (or measurement) zk of the true state xk is made according to

<math>\textbf{z}_{k} = \textbf{H}_{k} \textbf{x}_{k} + \textbf{v}_{k}<math>

where Hk is the observation model which maps the true state space into the observed space, and vk is the observation noise, which is assumed to be zero-mean Gaussian white noise with covariance Rk.

<math>\textbf{v}_{k} \sim N(0, \textbf{R}_k) <math>

The initial state and the noise vectors at each step {x0, w1, ..., wk, v1, ..., vk} are all assumed to be mutually independent.

Many real dynamical systems do not exactly fit this model; however, because the Kalman filter is designed to operate in the presence of noise, an approximate fit is often good enough for the filter to be very useful. Variations on the Kalman filter described below allow richer and more sophisticated models.

The Kalman filter

The Kalman filter is a recursive estimator. This means that only the estimated state from the previous time step and the current measurement are needed to compute the estimate for the current state. In contrast to batch estimation techniques, no history of observations or estimates is required. The Kalman filter is unusual in being purely a time-domain filter; most filters (for example, a low-pass filter) are formulated in the frequency domain and then transformed back to the time domain for implementation.

The state of the filter is represented by two variables:

  • <math>\hat{\textbf{x}}_{k|k}<math>, the estimate of the state at time k;
  • <math>\textbf{P}_{k|k}<math>, the error covariance matrix (a measure of the estimated accuracy of the state estimate).

The Kalman filter has two distinct phases: Predict and Update. The predict phase uses the estimate from the previous timestep to produce an estimate of the current state. In the update phase measurement information from the current timestep is used to refine this prediction to arrive at a new, (hopefully) more accurate estimate.


Predict

<math>\hat{\textbf{x}}_{k|k-1} = \textbf{F}_{k}\hat{\textbf{x}}_{k-1|k-1} + \textbf{B}_{k} \textbf{u}_{k}<math> (predicted state)
<math>\textbf{P}_{k|k-1} = \textbf{F}_{k} \textbf{P}_{k-1|k-1} \textbf{F}_{k}^{T} + \textbf{Q}_{k} <math> (predicted estimate covariance)


Update

<math>\tilde{\textbf{y}}_{k} = \textbf{z}_{k} - \textbf{H}_{k}\hat{\textbf{x}}_{k|k-1} <math> (innovation or measurement residual)
<math>\textbf{S}_{k} = \textbf{H}_{k}\textbf{P}_{k|k-1}\textbf{H}_{k}^{T} + \textbf{R}_{k}<math> (innovation (or residual) covariance)
<math>\textbf{K}_{k} = \textbf{P}_{k|k-1}\textbf{H}_{k}^{T}\textbf{S}_{k}^{-1} <math> (Kalman gain)
<math>\hat{\textbf{x}}_{k|k} = \hat{\textbf{x}}_{k|k-1} + \textbf{K}_{k}\tilde{\textbf{y}}_{k} <math> (updated state estimate)
<math>\textbf{P}_{k|k} = (I - \textbf{K}_{k} \textbf{H}_{k}) \textbf{P}_{k|k-1}<math> (updated estimate covariance)
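These equations translate directly into code. The following is a minimal sketch in Python with numpy; the function names and calling conventions are illustrative, not taken from any particular library.

    import numpy as np

    def kf_predict(x, P, F, Q, B=None, u=None):
        # predicted state estimate and predicted estimate covariance
        x = F @ x + (B @ u if B is not None else 0)
        P = F @ P @ F.T + Q
        return x, P

    def kf_update(x, P, z, H, R):
        y = z - H @ x                        # innovation
        S = H @ P @ H.T + R                  # innovation covariance
        K = P @ H.T @ np.linalg.inv(S)       # Kalman gain
        x = x + K @ y                        # updated state estimate
        P = (np.eye(len(x)) - K @ H) @ P     # updated estimate covariance
        return x, P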


Invariants

If the model is accurate, and the values for <math>\hat{\textbf{x}}_{0|0}<math> and <math>\textbf{P}_{0|0}<math> accurately reflect the distribution of the initial state values, then the following invariants are preserved: all estimates have mean error zero

  • <math>\textrm{E}[\textbf{x}_k - \hat{\textbf{x}}_{k|k}] = \textrm{E}[\textbf{x}_k - \hat{\textbf{x}}_{k|k-1}] = 0<math>
  • <math>\textrm{E}[\tilde{\textbf{y}}_k] = 0<math>

and covariance matrices accurately reflect the covariance of estimates

  • <math>\textbf{P}_{k|k} = \textrm{cov}(\textbf{x}_k - \hat{\textbf{x}}_{k|k})<math>
  • <math>\textbf{P}_{k|k-1} = \textrm{cov}(\textbf{x}_k - \hat{\textbf{x}}_{k|k-1})<math>
  • <math>\textbf{S}_{k} = \textrm{cov}(\tilde{\textbf{y}}_k)<math>

Note that where <math>\textrm{E}[\textbf{a}] = 0<math>, <math>\textrm{cov}(\textbf{a}) = \textrm{E}[\textbf{a}\textbf{a}^T]<math>.


Example

Consider a truck on perfectly frictionless, infinitely long straight rails. Initially the truck is stationary at position 0, but it is buffeted this way and that by random acceleration. We measure the position of the truck every Δt seconds, but these measurements are imprecise; we want to maintain a model of where the truck is and what its velocity is. We show here how we derive the model from which we create our Kalman filter.

There are no controls on the truck, so we ignore Bk and uk. Since F, H, R and Q are constant, their time indices have been dropped.

The position and velocity of a point particle are described by the linear state space

<math>\textbf{x}_{k} = \begin{bmatrix} x \\ \dot{x} \end{bmatrix} <math>

where <math>\dot{x}<math> is the velocity, that is, the derivative of position.

We assume that between the (k − 1)th and kth timestep the particle undergoes a constant acceleration of ak that is normally distributed, with mean 0 and standard deviation σa. From Newton's laws of motion we conclude that

<math>\textbf{x}_{k} = \textbf{F} \textbf{x}_{k-1} + \textbf{G}a_{k}<math>


where

<math>\textbf{F} = \begin{bmatrix} 1 & \Delta t \\ 0 & 1 \end{bmatrix}<math>

and

<math>\textbf{G} = \begin{bmatrix} \frac{\Delta t^{2}}{2} \\ \Delta t \end{bmatrix} <math>

We find that

<math> \textbf{Q} = \textrm{cov}(\textbf{G}a) = \textrm{E}[(\textbf{G}a)(\textbf{G}a)^{T}] = \textbf{G} \textrm{E}[a^2] \textbf{G}^{T} = \textbf{G}[\sigma_a^2]\textbf{G}^{T} = \sigma_a^2 \textbf{G}\textbf{G}^{T}<math> (since σa is a scalar).
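Carrying out the multiplication GGT gives the process noise covariance explicitly:

<math>\textbf{Q} = \sigma_a^2 \begin{bmatrix} \frac{\Delta t^{4}}{4} & \frac{\Delta t^{3}}{2} \\ \frac{\Delta t^{3}}{2} & \Delta t^{2} \end{bmatrix}<math>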

At each time step, a noisy measurement of the true position of the particle is made. Let us suppose the noise is also normally distributed, with mean 0 and standard deviation σz.

<math>\textbf{z}_{k} = \textbf{H x}_{k} + \textbf{v}_{k}<math>


<math>\textbf{H} = \begin{bmatrix} 1 & 0 \end{bmatrix} <math>


<math>\textbf{R} = \textrm{E}[\textbf{v}_k \textbf{v}_k^{T}] = \begin{bmatrix} \sigma_z^2 \end{bmatrix} <math>

We know the initial starting state of the truck with perfect precision, so we initialise

<math>\hat{\textbf{x}}_{0|0} = \begin{bmatrix} 0 \\ 0 \end{bmatrix} <math>

and to tell the filter that we know the initial state with perfect precision, we give it a zero covariance matrix:

<math>\textbf{P}_{0|0} = \begin{bmatrix} 0 & 0 \\ 0 & 0 \end{bmatrix} <math>

If the initial position and velocity are not known perfectly, the covariance matrix should be initialised with a suitably large number, say B, on its diagonal.

<math>\textbf{P}_{0|0} = \begin{bmatrix} B & 0 \\ 0 & B \end{bmatrix} <math>

The filter will then prefer the information from the first measurements over the information already in the model.
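As a concrete end-to-end illustration, the truck model can be simulated and filtered using the kf_predict and kf_update sketches from above. All numerical values here (Δt, σa, σz, the number of steps) are arbitrary choices for the example, not from the source.

    import numpy as np

    np.random.seed(0)
    dt, sigma_a, sigma_z = 1.0, 0.2, 3.0     # arbitrary example values

    F = np.array([[1.0, dt], [0.0, 1.0]])
    G = np.array([[dt**2 / 2.0], [dt]])
    Q = sigma_a**2 * (G @ G.T)
    H = np.array([[1.0, 0.0]])
    R = np.array([[sigma_z**2]])

    x_true = np.zeros((2, 1))                # stationary at position 0
    x_est = np.zeros((2, 1))
    P = np.zeros((2, 2))                     # perfect initial knowledge

    for k in range(50):
        a = np.random.normal(0.0, sigma_a)   # random buffeting acceleration
        x_true = F @ x_true + G * a          # true dynamics
        z = H @ x_true + np.random.normal(0.0, sigma_z, (1, 1))
        x_est, P = kf_predict(x_est, P, F, Q)
        x_est, P = kf_update(x_est, P, z, H, R)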


Deriving the posterior estimate covariance matrix

Starting with our invariant on the error covariance Pk|k as above

<math>\textbf{P}_{k|k} = \textrm{cov}(\textbf{x}_{k} - \hat{\textbf{x}}_{k|k})<math>

substitute in the definition of <math>\hat{\textbf{x}}_{k|k}<math>

<math>\textbf{P}_{k|k} = \textrm{cov}(\textbf{x}_{k} - (\hat{\textbf{x}}_{k|k-1} + \textbf{K}_k\tilde{\textbf{y}}_{k}))<math>

and substitute <math>\tilde{\textbf{y}}_k<math>

<math>\textbf{P}_{k|k} = \textrm{cov}(\textbf{x}_{k} - (\hat{\textbf{x}}_{k|k-1} + \textbf{K}_k(\textbf{z}_k - \textbf{H}_k\hat{\textbf{x}}_{k|k-1})))<math>

and <math>\textbf{z}_{k}<math>

<math>\textbf{P}_{k|k} = \textrm{cov}(\textbf{x}_{k} - (\hat{\textbf{x}}_{k|k-1} + \textbf{K}_k(\textbf{H}_k\textbf{x}_k + \textbf{v}_k - \textbf{H}_k\hat{\textbf{x}}_{k|k-1})))<math>

and by collecting the error vectors we get

<math>\textbf{P}_{k|k} = \textrm{cov}((I - \textbf{K}_k \textbf{H}_{k})(\textbf{x}_k - \hat{\textbf{x}}_{k|k-1}) - \textbf{K}_k \textbf{v}_k )<math>

Since the measurement error vk is uncorrelated with the other terms, this becomes

<math>\textbf{P}_{k|k} = \textrm{cov}((I - \textbf{K}_k \textbf{H}_{k})(\textbf{x}_k - \hat{\textbf{x}}_{k|k-1})) + \textrm{cov}(\textbf{K}_k \textbf{v}_k )<math>

by the properties of vector covariance this becomes

<math>\textbf{P}_{k|k} = (I - \textbf{K}_k \textbf{H}_{k})\textrm{cov}(\textbf{x}_k - \hat{\textbf{x}}_{k|k-1})(I - \textbf{K}_k \textbf{H}_{k})^{T} + \textbf{K}_k\textrm{cov}(\textbf{v}_k )\textbf{K}_k^{T}<math>

which, using our invariant on Pk|k-1 and the definition of Rk becomes

<math>\textbf{P}_{k|k} = (I - \textbf{K}_k \textbf{H}_{k}) \textbf{P}_{k|k-1} (I - \textbf{K}_k \textbf{H}_{k})^T + \textbf{K}_k \textbf{R}_k \textbf{K}_k^T <math>

This formula is valid for any value of Kk. It turns out that if Kk is the optimal Kalman gain, it can be simplified further as shown below.

Kalman gain derivation

The Kalman filter is a minimum mean-square error estimator. The error in the posterior state estimation is

<math>\textbf{x}_{k} - \hat{\textbf{x}}_{k|k}<math>

We seek to minimize the expected value of the square of the magnitude of this vector, <math>\textrm{E}[|\textbf{x}_{k} - \hat{\textbf{x}}_{k|k}|^2]<math>. This is equivalent to minimizing the trace of the posterior estimate covariance matrix Pk|k. By expanding out the terms in the equation above and collecting, we get:

<math> \textbf{P}_{k|k} = \textbf{P}_{k|k-1} - \textbf{K}_k \textbf{H}_k \textbf{P}_{k|k-1} - \textbf{P}_{k|k-1} \textbf{H}_k^T \textbf{K}_k^T + \textbf{K}_k (\textbf{H}_k \textbf{P}_{k|k-1} \textbf{H}_k^T + \textbf{R}_k) \textbf{K}_k^T<math>
<math> \textbf{P}_{k|k} = \textbf{P}_{k|k-1} - \textbf{K}_k \textbf{H}_k \textbf{P}_{k|k-1} - \textbf{P}_{k|k-1} \textbf{H}_k^T \textbf{K}_k^T + \textbf{K}_k \textbf{S}_k\textbf{K}_k^T<math>

The trace is minimized when the matrix derivative is zero:

<math>\frac{d \; \textrm{tr}(\textbf{P}_{k|k})}{d \;\textbf{K}_k} = -2 (\textbf{H}_k \textbf{P}_{k|k-1})^T + 2 \textbf{K}_k \textbf{S}_k = 0<math>

Solving this for Kk yields the Kalman gain:

<math>\textbf{K}_k \textbf{S}_k = (\textbf{H}_k \textbf{P}_{k|k-1})^T = \textbf{P}_{k|k-1} \textbf{H}_k^T<math>
<math> \textbf{K}_{k} = \textbf{P}_{k|k-1} \textbf{H}_k^T \textbf{S}_k^{-1}<math>

This gain, which is known as the optimal Kalman gain, is the one that yields MMSE estimates when used.
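For instance, in the one-dimensional case with <math>\textbf{H}_k = 1<math> (a direct but noisy measurement of a scalar state), the optimal gain reduces to

<math>\textbf{K}_k = \frac{\textbf{P}_{k|k-1}}{\textbf{P}_{k|k-1} + \textbf{R}_k}<math>

so the filter weights the measurement heavily when the prediction is uncertain relative to the measurement noise, and lightly when the measurement is comparatively noisy.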

Simplification of the posterior error covariance formula

The formula used to calculate the posterior error covariance can be simplified when the Kalman gain equals the optimal value derived above. Multiplying both sides of our Kalman gain formula on the right by SkKkT, it follows that

<math>\textbf{K}_k \textbf{S}_k \textbf{K}_k^T = \textbf{P}_{k|k-1} \textbf{H}_k^T \textbf{K}_k^T<math>

Referring back to our expanded formula for the posterior error covariance,

<math> \textbf{P}_{k|k} = \textbf{P}_{k|k-1} - \textbf{K}_k \textbf{H}_k \textbf{P}_{k|k-1} - \textbf{P}_{k|k-1} \textbf{H}_k^T \textbf{K}_k^T + \textbf{K}_k \textbf{S}_k \textbf{K}_k^T<math>

we find the last two terms cancel out, giving

<math> \textbf{P}_{k|k} = \textbf{P}_{k|k-1} - \textbf{K}_k \textbf{H}_k \textbf{P}_{k|k-1} = (I - \textbf{K}_{k} \textbf{H}_{k}) \textbf{P}_{k|k-1}<math>.

This formula is computationally cheaper and thus nearly always used in practice, but it is only correct for the optimal gain. If arithmetic precision is unusually low, causing problems with numerical stability, or if a non-optimal Kalman gain is deliberately used, this simplification cannot be applied; the posterior error covariance formula as derived above must be used.
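A small sketch of the two covariance-update variants in Python (numpy; the function name is illustrative). The general expression derived above is often called the Joseph form:

    import numpy as np

    def update_covariance(P_pred, K, H, R, joseph=True):
        I = np.eye(P_pred.shape[0])
        if joseph:
            # valid for any gain; preserves symmetry and positive
            # semi-definiteness better in low-precision arithmetic
            A = I - K @ H
            return A @ P_pred @ A.T + K @ R @ K.T
        # cheaper simplified form; correct only for the optimal gain
        return (I - K @ H) @ P_pred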

Relationship to recursive Bayesian estimation

The true state is assumed to be an unobserved Markov process, and the measurements are the observed states of a hidden Markov model.

[Image: Hidden Markov model]

Because of the Markov assumption, the true state is conditionally independent of all earlier states given the immediately previous state.

<math>p(\textbf{x}_k|\textbf{x}_0,...,\textbf{x}_{k-1}) = p(\textbf{x}_k|\textbf{x}_{k-1})<math>

Similarly, the measurement at the k-th timestep is dependent only upon the current state and is conditionally independent of all other states given the current state.

<math>p(\textbf{z}_k|\textbf{x}_0,...,\textbf{x}_{k}) = p(\textbf{z}_k|\textbf{x}_{k} )<math>

Using these assumptions the probability distribution over all states of the HMM can be written simply as:

<math>p(\textbf{x}_0,...,\textbf{x}_k,\textbf{z}_1,...,\textbf{z}_k) = p(\textbf{x}_0)\prod_{i=1}^k p(\textbf{z}_i|\textbf{x}_i)p(\textbf{x}_i|\textbf{x}_{i-1})<math>

However, when the Kalman filter is used to estimate the state x, the probability distribution of interest is that associated with the current state conditioned on the measurements up to the current timestep. (This is achieved by marginalising out the previous states and dividing by the probability of the measurement set.)

This leads to the predict and update steps of the Kalman filter written probabilistically. The probability distribution associated with the predicted state is the product of the probability distribution associated with the transition from the (k - 1)th timestep to the kth and the probability distribution associated with the previous state, with the true state at (k - 1) integrated out.

<math> p(\textbf{x}_k|\textbf{Z}_{k-1}) = \int p(\textbf{x}_k | \textbf{x}_{k-1}) p(\textbf{x}_{k-1} | \textbf{Z}_{k-1} ) \, d\textbf{x}_{k-1} <math>

The measurement set up to time t is

<math> \textbf{Z}_{t} = \left \{ \textbf{z}_{1},...,\textbf{z}_{t} \right \} <math>

The probability distribution of the update is proportional to the product of the measurement likelihood and the predicted state.

<math> p(\textbf{x}_k|\textbf{Z}_{k}) = \frac{p(\textbf{z}_k|\textbf{x}_k) p(\textbf{x}_k|\textbf{Z}_{k-1})}{p(\textbf{z}_k|\textbf{Z}_{k-1})} <math>

The denominator

<math>p(\textbf{z}_k|\textbf{Z}_{k-1}) = \int p(\textbf{z}_k|\textbf{x}_k) p(\textbf{x}_k|\textbf{Z}_{k-1}) d\textbf{x}_k<math>

is an unimportant normalisation term.

The remaining probability density functions are

<math> p(\textbf{x}_k | \textbf{x}_{k-1}) = N(\textbf{x}_k; \textbf{F}_k\textbf{x}_{k-1}, \textbf{Q}_k)<math>
<math> p(\textbf{z}_k|\textbf{x}_k) = N(\textbf{z}_k; \textbf{H}_{k}\textbf{x}_k, \textbf{R}_k) <math>
<math> p(\textbf{x}_{k-1}|\textbf{Z}_{k-1}) = N(\textbf{x}_{k-1}; \hat{\textbf{x}}_{k-1|k-1}, \textbf{P}_{k-1|k-1})<math>

Note that the PDF at the previous timestep is inductively assumed to be Gaussian, with mean and covariance given by the estimated state and covariance. This is justified because, as an optimal estimator, the Kalman filter makes best use of the measurements; therefore the PDF for <math>\mathbf{x}_k<math> given the measurements <math>\mathbf{Z}_k<math> is the Kalman filter estimate.

Information filter

In the information filter, or inverse covariance filter, the estimated covariance and estimated state are replaced by the information matrix and information vector respectively.

<math>\textbf{Y}_{k|k} \equiv \textbf{P}_{k|k}^{-1} <math>
<math>\hat{\textbf{y}}_{k|k} \equiv \textbf{P}_{k|k}^{-1}\hat{\textbf{x}}_{k|k} <math>

Similarly the predicted covariance and state have equivalent information forms,

<math>\textbf{Y}_{k|k-1} \equiv \textbf{P}_{k|k-1}^{-1} <math>
<math>\hat{\textbf{y}}_{k|k-1} \equiv \textbf{P}_{k|k-1}^{-1}\hat{\textbf{x}}_{k|k-1} <math>

as have the measurement covariance and measurement vector.

<math>\textbf{I}_{k} \equiv \textbf{H}_{k}^{T} \textbf{R}_{k}^{-1} \textbf{H}_{k} <math>
<math>\textbf{i}_{k} \equiv \textbf{H}_{k}^{T} \textbf{R}_{k}^{-1} \textbf{z}_{k} <math>

The information update now becomes a trivial sum.

<math>\textbf{Y}_{k|k} = \textbf{Y}_{k|k-1} + \textbf{I}_{k}<math>
<math>\hat{\textbf{y}}_{k|k} = \hat{\textbf{y}}_{k|k-1} + \textbf{i}_{k}<math>

The main advantage of the information filter is that N measurements can be filtered at each timestep simply by summing their information matrices and vectors.

<math>\textbf{Y}_{k|k} = \textbf{Y}_{k|k-1} + \sum_{j=1}^N \textbf{I}_{k,j}<math>
<math>\hat{\textbf{y}}_{k|k} = \hat{\textbf{y}}_{k|k-1} + \sum_{j=1}^N \textbf{i}_{k,j}<math>
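A sketch of this multi-measurement fusion in Python (numpy); the (z, H, R) triples and the function name are illustrative:

    import numpy as np

    def info_update(Y_pred, y_pred, measurements):
        # fuse N measurements by summing their information contributions
        Y, y = Y_pred.copy(), y_pred.copy()
        for z, H, R in measurements:
            R_inv = np.linalg.inv(R)
            Y += H.T @ R_inv @ H      # I_k (information matrix term)
            y += H.T @ R_inv @ z      # i_k (information vector term)
        return Y, y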

To predict with the information filter, the information matrix and vector can be converted back to their state-space equivalents, or alternatively the information-space prediction below can be used.

<math>\textbf{M}_{k} = [\textbf{F}_{k}^{-1}]^{T} \textbf{Y}_{k|k} \textbf{F}_{k}^{-1} <math>
<math>\textbf{C}_{k} = \textbf{M}_{k} [\textbf{M}_{k}+\textbf{Q}_{k}^{-1}]^{-1}<math>
<math>\textbf{L}_{k} = I - \textbf{C}_{k} <math>
<math>\textbf{Y}_{k|k-1} = \textbf{L}_{k} \textbf{M}_{k} \textbf{L}_{k}^{T} + \textbf{C}_{k} \textbf{Q}_{k}^{-1} \textbf{C}_{k}^{T}<math>
<math>\hat{\textbf{y}}_{k|k-1} = \textbf{L}_{k} [\textbf{F}_{k}^{-1}]^{T}\hat{\textbf{y}}_{k|k} <math>

Note that if F and Q are time invariant these values can be cached. Note also that F and Q need to be invertible.
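The information-space prediction above, written as a Python sketch (again assuming invertible F and Q; names are illustrative):

    import numpy as np

    def info_predict(Y, y, F, Q):
        F_inv = np.linalg.inv(F)
        Q_inv = np.linalg.inv(Q)
        M = F_inv.T @ Y @ F_inv
        C = M @ np.linalg.inv(M + Q_inv)
        Lk = np.eye(M.shape[0]) - C
        Y_pred = Lk @ M @ Lk.T + C @ Q_inv @ C.T
        y_pred = Lk @ F_inv.T @ y
        return Y_pred, y_pred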

Non-linear filters

The basic Kalman filter is limited to a linear assumption. However, most non-trivial systems are non-linear. The non-linearity can be associated with the process model, with the observation model, or with both.

Extended Kalman filter

In the extended Kalman filter (EKF), the state transition and observation models need not be linear functions of the state but may instead be (differentiable) non-linear functions.

<math>\textbf{x}_{k} = f(\textbf{x}_{k-1}, \textbf{u}_{k}, \textbf{w}_{k})<math>
<math>\textbf{z}_{k} = h(\textbf{x}_{k}, \textbf{v}_{k})<math>

The function f can be used to compute the predicted state from the previous estimate and similarly the function h can be used to compute the predicted measurement from the predicted state. However f and h cannot be applied to the covariance directly. Instead a matrix of partial derivatives (the Jacobian) is computed.

At each timestep the Jacobian is evaluated with current predicted states. These matrices can be used in the Kalman filter equations. This process essentially linearises the non-linear function around the current estimate.

This results in the following extended Kalman filter equations:


Predict

<math>\hat{\textbf{x}}_{k|k-1} = f(\hat{\textbf{x}}_{k-1|k-1}, \textbf{u}_{k}, 0)<math>
<math> \textbf{P}_{k|k-1} = \textbf{F}_{k} \textbf{P}_{k-1|k-1} \textbf{F}_{k}^{T} + \textbf{Q}_{k} <math>

where the state transition and observation matrices are defined to be the following Jacobians

<math> \textbf{F}_{k} = \left . \frac{\partial f}{\partial \textbf{x} } \right \vert _{\hat{\textbf{x}}_{k-1|k-1},\textbf{u}_{k}} <math>
<math> \textbf{H}_{k} = \left . \frac{\partial h}{\partial \textbf{x} } \right \vert _{\hat{\textbf{x}}_{k|k-1}} <math>


Update

<math>\tilde{\textbf{y}}_{k} = \textbf{z}_{k} - h(\hat{\textbf{x}}_{k|k-1}, 0)<math>
<math>\textbf{S}_{k} = \textbf{H}_{k}\textbf{P}_{k|k-1}\textbf{H}_{k}^{T} + \textbf{R}_{k}<math>
<math>\textbf{K}_{k} = \textbf{P}_{k|k-1}\textbf{H}_{k}^{T}\textbf{S}_{k}^{-1} <math>
<math>\hat{\textbf{x}}_{k|k} = \hat{\textbf{x}}_{k|k-1} + \textbf{K}_{k}\tilde{\textbf{y}}_{k} <math>
<math> \textbf{P}_{k|k} = (I - \textbf{K}_{k} \textbf{H}_{k}) \textbf{P}_{k|k-1} <math>
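As an illustrative sketch only (the scalar system here is invented for the example, not from the source), one EKF step in Python, where the Jacobians reduce to ordinary derivatives:

    import numpy as np

    # hypothetical scalar model: x_k = sin(x_{k-1}) + w_k,  z_k = x_k**2 + v_k
    f = lambda x: np.sin(x)
    h = lambda x: x**2

    def ekf_step(x, P, z, Q, R):
        F = np.cos(x)                 # df/dx evaluated at x_{k-1|k-1}
        x_pred = f(x)                 # predicted state
        P_pred = F * P * F + Q        # predicted covariance
        H = 2.0 * x_pred              # dh/dx evaluated at x_{k|k-1}
        S = H * P_pred * H + R        # innovation covariance
        K = P_pred * H / S            # Kalman gain
        x_new = x_pred + K * (z - h(x_pred))
        P_new = (1.0 - K * H) * P_pred
        return x_new, P_new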

Unscented Kalman filter

The extended Kalman filter gives particularly poor performance on highly non-linear functions because only the mean is propagated through the non-linearity. The unscented Kalman filter (UKF) [JU97] uses a deterministic sampling technique to pick a minimal set of sample points (called sigma points) around the mean. These sigma points are then propagated through the non-linear functions and the covariance of the estimate is then recovered. The result is a filter which more accurately captures the true mean and covariance. (This can be verified using Monte Carlo sampling or through a Taylor series expansion of the posterior statistics.) In addition, this technique removes the requirement to analytically calculate Jacobians, which for complex functions can be a difficult task in itself.


As with the EKF, the UKF prediction can be used independently from the UKF update, in combination with a linear (or indeed EKF) update, or vice versa.

Predict

The estimated state and covariance are augmented with the mean and covariance of the process noise.

<math> \textbf{x}_{k-1|k-1}^{a} = [ \hat{\textbf{x}}_{k-1|k-1}^{T} \quad E[\textbf{w}_{k}^{T}] \ ]^{T} <math>
<math> \textbf{P}_{k-1|k-1}^{a} = \begin{bmatrix} & \textbf{P}_{k-1|k-1} & & 0 & \\ & 0 & &\textbf{Q}_{k} & \end{bmatrix} <math>

A set of 2L+1 sigma points is derived from the augmented state and covariance where L is the dimension of the augmented state.

<math>\chi_{k-1|k-1}^{0} = \textbf{x}_{k-1|k-1}^{a} <math>
<math>\chi_{k-1|k-1}^{i} = \textbf{x}_{k-1|k-1}^{a} + \left ( \sqrt{ (L + \lambda) \textbf{P}_{k-1|k-1}^{a} } \right )_{i}, \qquad i = 1 \ldots L <math>
<math>\chi_{k-1|k-1}^{i} = \textbf{x}_{k-1|k-1}^{a} - \left ( \sqrt{ (L + \lambda) \textbf{P}_{k-1|k-1}^{a} } \right )_{i-L}, \qquad i = L+1 \ldots 2L <math>

The sigma points are propagated through the transition function f.

<math>\chi_{k|k-1}^{i} = f(\chi_{k-1|k-1}^{i}) \quad i = 0..2L <math>

The weighted sigma points are recombined to produce the predicted state and covariance.

<math>\hat{\textbf{x}}_{k|k-1} = \sum_{i=0}^{2L} W_{s}^{i} \chi_{k|k-1}^{i} <math>
<math>\textbf{P}_{k|k-1} = \sum_{i=0}^{2L} W_{c}^{i}\ [\chi_{k|k-1}^{i} - \hat{\textbf{x}}_{k|k-1}] [\chi_{k|k-1}^{i} - \hat{\textbf{x}}_{k|k-1}]^{T} <math>

where the weights for the state and covariance are given by:

<math>W_{s}^{0} = \frac{\lambda}{L+\lambda}<math>
<math>W_{c}^{0} = \frac{\lambda}{L+\lambda} + (1 - \alpha^2 + \beta)<math>
<math>W_{s}^{i} = W_{c}^{i} = \frac{1}{2(L+\lambda)}<math>
<math>\lambda = \alpha^2 (L+\kappa) - L \,\! <math>

Typical values for <math>\alpha<math>, <math>\beta<math>, and <math>\kappa<math> are <math>10^{-3}<math>, 2 and 0 respectively. (These values should suffice for most purposes.)
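A sketch of the sigma-point and weight construction in Python (numpy/scipy; the function name is illustrative). The rows of a Cholesky factor serve as one valid choice of matrix square root:

    import numpy as np
    from scipy.linalg import cholesky

    def sigma_points(x, P, alpha=1e-3, beta=2.0, kappa=0.0):
        L = x.shape[0]
        lam = alpha**2 * (L + kappa) - L
        S = cholesky((L + lam) * P)   # upper-triangular: S.T @ S = (L+lam) P
        chi = [x] + [x + S[i] for i in range(L)] + [x - S[i] for i in range(L)]
        Ws = np.full(2 * L + 1, 1.0 / (2.0 * (L + lam)))
        Wc = Ws.copy()
        Ws[0] = lam / (L + lam)
        Wc[0] = lam / (L + lam) + (1.0 - alpha**2 + beta)
        return np.array(chi), Ws, Wc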


Update

The predicted state and covariance are augmented as before, except now with the mean and covariance of the measurement noise.

<math> \textbf{x}_{k|k-1}^{a} = [ \hat{\textbf{x}}_{k|k-1}^{T} \quad E[\textbf{v}_{k}^{T}] \ ]^{T} <math>
<math> \textbf{P}_{k|k-1}^{a} = \begin{bmatrix} & \textbf{P}_{k|k-1} & & 0 & \\ & 0 & &\textbf{R}_{k} & \end{bmatrix} <math>

As before, a set of 2L+1 sigma points is derived from the augmented state and covariance where L is the dimension of the augmented state.

<math>\chi_{k|k-1}^{0} = \textbf{x}_{k|k-1}^{a} <math>
<math>\chi_{k|k-1}^{i} = \textbf{x}_{k|k-1}^{a} + \left ( \sqrt{ (L + \lambda) \textbf{P}_{k|k-1}^{a} } \right )_{i}, \qquad i = 1 \ldots L <math>
<math>\chi_{k|k-1}^{i} = \textbf{x}_{k|k-1}^{a} - \left ( \sqrt{ (L + \lambda) \textbf{P}_{k|k-1}^{a} } \right )_{i-L}, \qquad i = L+1 \ldots 2L <math>

Alternatively, if the UKF prediction has been used, the sigma points themselves can be augmented along the following lines:

<math> \chi_{k|k-1} := [ \chi_{k|k-1} \quad E[\textbf{v}_{k}^{T}] \ ]^{T} \pm \sqrt{ (L + \lambda) \textbf{R}_{k}^{a} }<math>


<math> \textbf{R}_{k}^{a} = \begin{bmatrix} & 0 & & 0 & \\ & 0 & &\textbf{R}_{k} & \end{bmatrix} <math>

The sigma points are projected through the observation function h.

<math>\gamma_{k}^{i} = h(\chi_{k|k-1}^{i}) \quad i = 0..2L <math>

The weighted sigma points are recombined to produce the predicted measurement and predicted measurement covariance.

<math>\hat{\textbf{z}}_{k} = \sum_{i=0}^{2L} W_{s}^{i} \gamma_{k}^{i} <math>
<math>\textbf{P}_{z_{k}z_{k}} = \sum_{i=0}^{2L} W_{c}^{i}\ [\gamma_{k}^{i} - \hat{\textbf{z}}_{k}] [\gamma_{k}^{i} - \hat{\textbf{z}}_{k}]^{T} <math>

The state-measurement cross-covariance matrix,

<math>\textbf{P}_{x_{k}z_{k}} = \sum_{i=0}^{2L} W_{c}^{i}\ [\chi_{k|k-1}^{i} - \hat{\textbf{x}}_{k|k-1}] [\gamma_{k}^{i} - \hat{\textbf{z}}_{k}]^{T} <math>

is used to compute the UKF Kalman gain.

<math>K_{k} = \textbf{P}_{x_{k}z_{k}} \textbf{P}_{z_{k}z_{k}}^{-1}<math>

As with the Kalman filter, the updated state is the predicted state plus the innovation weighted by the Kalman gain,

<math>\hat{\textbf{x}}_{k|k} = \hat{\textbf{x}}_{k|k-1} + K_{k}( \textbf{z}_{k} - \hat{\textbf{z}}_{k} )<math>

And the updated covariance is the predicted covariance, minus the predicted measurement covariance, weighted by the Kalman gain.

<math>\textbf{P}_{k|k} = \textbf{P}_{k|k-1} - K_{k} \textbf{P}_{z_{k}z_{k}} K_{k}^{T} <math>
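Putting the update together as a Python sketch (assuming the measurement noise was folded in through the augmentation above, and that chi and gamma hold the sigma points row-wise; names are illustrative):

    import numpy as np

    def ukf_update(chi, x_pred, P_pred, gamma, Ws, Wc, z):
        z_pred = Ws @ gamma                  # predicted measurement
        dz = gamma - z_pred
        dx = chi - x_pred
        Pzz = (Wc * dz.T) @ dz               # predicted measurement covariance
        Pxz = (Wc * dx.T) @ dz               # state-measurement cross covariance
        K = Pxz @ np.linalg.inv(Pzz)         # UKF Kalman gain
        x_new = x_pred + K @ (z - z_pred)
        P_new = P_pred - K @ Pzz @ K.T
        return x_new, P_new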


References


  • Kalman, R. E. A New Approach to Linear Filtering and Prediction Problems, Transactions of the ASME - Journal of Basic Engineering, Vol. 82, pp. 35-45 (1960)
  • Kalman, R. E. and Bucy, R. S. New Results in Linear Filtering and Prediction Theory, Transactions of the ASME - Journal of Basic Engineering, Vol. 83, pp. 95-107 (1961)
  • [JU97] Julier, Simon J. and Jeffrey K. Uhlmann. A New Extension of the Kalman Filter to Nonlinear Systems. In Proceedings of AeroSense: The 11th International Symposium on Aerospace/Defense Sensing, Simulation and Controls, SPIE, 1997
