
The pointlessness of 1D Kalman filters with constant variances

Thursday, July 27th, 2017 at 12:45 pm

Before I dampened and broke my brand new computer by keeping it overnight in the tent, I was trying some simulations of Kalman filters derived from open source implementations, in order to get a handle on the overly complex mathematical formulations of this technology in, say, the one dimensional case.

It appears that the one dimensional Kalman filter is a worthless beast that merely obscures a trivial exponential filter behind its formalism.

Now I had mistakenly imagined that the Kalman filter was adaptive, in that it ran with an error bar around the estimate and combined it with the error bar around the observation to make a new estimate and error bar that would be wide when the observation and estimate were far apart, and progressively narrower as they agreed. That way it could filter out noise when the measurements were aligned, but still be able to track changes when they diverged.

This does not happen, because the value of the error bar/variance/covariance is not affected by the value of the observation at all, as can be seen from the standard equations of the Kalman filter:

    x, P       # state estimate, state estimate covariance
    F          # state transition model
    Q          # process noise covariance
    B, u       # control-input model, control input
x = Fx + Bu    # estimate/prediction
P = FPF^T + Q  # predicted covariance
    H          # observation model (state -> observation transform)
    R          # observation noise covariance
y = z - Hx     # z=observation, y=innovation/error
S = R + HPH^T  # pre-fit covariance
K = PH^TS^-1   # kalman gain
x = x + Ky     # new state estimate
P = (I - KH)P  # new state covariance (I = identity)

This is not obvious until you delete all the equations that don’t pertain to the computation of P:

P = FPF^T + Q  # predicted covariance
S = R + HPH^T  # pre-fit covariance
K = PH^TS^-1   # kalman gain
P = (I - KH)P  # new state covariance
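
None of these surviving lines mentions the observation z, so the covariance P, and hence the gain K, follows a trajectory that is fixed before any data arrives. To see it concretely, here is one predict/update cycle as a minimal numpy sketch (my own transcription of the equations above, not any particular library’s code; the example matrices are made up), where two wildly different observations produce identical covariances:

import numpy as np

def kalman_step(x, P, z, F, Q, H, R):
    # predict
    x = F @ x
    P = F @ P @ F.T + Q
    # update: the observation z enters only through the innovation y
    y = z - H @ x
    S = R + H @ P @ H.T                    # pre-fit covariance (no z)
    K = P @ H.T @ np.linalg.inv(S)         # kalman gain (no z)
    x = x + K @ y                          # new state estimate (uses z via y)
    P = (np.eye(P.shape[0]) - K @ H) @ P   # new covariance (no z anywhere)
    return x, P

x0, P0 = np.zeros(2), np.eye(2)
F = np.array([[1.0, 1.0], [0.0, 1.0]])
Q = 0.01 * np.eye(2)
H = np.array([[1.0, 0.0]])
R = np.array([[0.05]])
_, P1 = kalman_step(x0, P0, np.array([5.0]), F, Q, H, R)
_, P2 = kalman_step(x0, P0, np.array([-999.0]), F, Q, H, R)
print(np.allclose(P1, P2))   # True: wildly different z, same P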

Let’s confirm this with the source code from XCSoar, which filters the pressure readings. The call to the update function for a pressure reading is:

kalman_filter.Update(pressure.GetHectoPascal(), 0.05);

And this Update function can be simplified to:

KalmanFilter1d::Update(
   z_abs,      # observation
   var_z_abs,  # constant (0.05)
   dt          # constant
)
{
  x_abs_ += x_vel_ * dt;

  dt2 = Square(dt);
  dt3 = dt * dt2;
  dt4 = Square(dt2);
  p_abs_abs_ += 2 * dt * p_abs_vel_ + dt2 * p_vel_vel_ + var_x_accel_ * dt4 / 4;
  p_abs_vel_ += dt * p_vel_vel_ + var_x_accel_ * dt3 / 2;
  p_vel_vel_ += var_x_accel_ * dt2;

  y = z_abs - x_abs_;
  s_inv = 1 / (p_abs_abs_ + var_z_abs);
  k_abs = p_abs_abs_ * s_inv;
  k_vel = p_abs_vel_ * s_inv;

  x_abs_ += k_abs * y;
  x_vel_ += k_vel * y;

  # the covariance update does not depend on z_abs
  p_vel_vel_ -= p_abs_vel_ * k_vel;
  p_abs_vel_ -= p_abs_vel_ * k_abs;
  p_abs_abs_ -= p_abs_abs_ * k_abs;
}

This is a Kalman filter that updates a hidden variable, the velocity, so it gave a lively curve as it tracked my fake data in my simulations (on my now waterlogged computer, so no images).
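
Since those simulations drowned with the computer, here is a rough Python port of the simplified update above (my own transcription, with a made-up var_x_accel_ value) that you can run instead. Feeding it two completely different observation streams yields exactly the same gain, because nothing in the covariance arithmetic ever sees the data:

import random

def make_filter(var_x_accel_=0.3, var_z_abs=0.05, dt=1.0):
    # state and covariance, seeded arbitrarily
    x_abs_, x_vel_ = 0.0, 0.0
    p_abs_abs_, p_abs_vel_, p_vel_vel_ = 1.0, 0.0, 1.0

    def update(z_abs):
        nonlocal x_abs_, x_vel_, p_abs_abs_, p_abs_vel_, p_vel_vel_
        x_abs_ += x_vel_ * dt
        dt2 = dt * dt
        p_abs_abs_ += 2 * dt * p_abs_vel_ + dt2 * p_vel_vel_ + var_x_accel_ * dt2 * dt2 / 4
        p_abs_vel_ += dt * p_vel_vel_ + var_x_accel_ * dt * dt2 / 2
        p_vel_vel_ += var_x_accel_ * dt2
        y = z_abs - x_abs_
        s_inv = 1.0 / (p_abs_abs_ + var_z_abs)
        k_abs = p_abs_abs_ * s_inv
        k_vel = p_abs_vel_ * s_inv
        x_abs_ += k_abs * y
        x_vel_ += k_vel * y
        p_vel_vel_ -= p_abs_vel_ * k_vel
        p_abs_vel_ -= p_abs_vel_ * k_abs
        p_abs_abs_ -= p_abs_abs_ * k_abs
        return k_abs

    return update

for seed in (1, 2):
    random.seed(seed)
    update = make_filter()
    gains = [update(1013.0 + random.gauss(0, 5)) for _ in range(50)]
    print(round(gains[-1], 6))   # same converged gain for both streams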

But there are even simpler one-dimensional Kalman filter implementations, given in this fine Stack Overflow question:

Let’s start with the proper general equations for the 1D case:

# prediction
x[k] = a * x[k - 1]
p[k] = a * p[k - 1] * a + q
# update
y = z - h * x[k]
kg = p * h / (h * p * h + r)
x[k] = x[k] + kg * y
p[k] = (1 - kg * h) * p[k]


All of the parameters of the model (a, q, r, h) could in principle also have an index k and change as the system evolves. But in simple cases they can all be taken as constant.

Yes, and when you do that, you get the following sequential update for the p[k] value as k increments step by step:

p[k] = a * p[k - 1] * a + q
kg = p[k] * h / (h * p[k] * h + r)
p[k] = (1 - kg * h) * p[k]
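
None of those three lines involves the observation z, so the whole p sequence is fixed in advance. Iterating it with constant parameters shows the behaviour directly (a sketch with made-up values for a, q, r and h):

a, q, r, h = 1.0, 0.01, 0.5, 1.0

for p0 in (0.001, 100.0):             # two very different seeds
    p = p0
    for k in range(200):
        p = a * p * a + q             # prediction
        kg = p * h / (h * p * h + r)  # gain
        p = (1 - kg * h) * p          # update
    print(p0, "->", round(p, 8))      # both converge to the same fixed p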

Normally you seed p with a number and it oscillates a bit, as if it were doing something useful, before converging to a fixed value of p which is the solution to:

p = (1 - ((apa + q)h / (h(apa + q)h + r))h)(apa + q)

from which the value of:

lambda = ph/(hph + r)

becomes your boring exponential filter constant (with p at its converged prediction-step value and h = 1), where:

x_filtered = x_filtered + lambda * (x - x_filtered)
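
Here is a sketch checking that equivalence with the same made-up constants (a = h = 1, so the state is a plain level): once p has converged, the Kalman recursion and the exponential filter produce identical outputs.

import random

a, q, r, h = 1.0, 0.01, 0.5, 1.0

# run the p recursion to convergence and read off the steady-state gain
p, lam = 1.0, 0.0
for _ in range(500):
    p = a * p * a + q                 # prediction
    lam = p * h / (h * p * h + r)     # lambda, taken at the prediction-step p
    p = (1 - lam * h) * p             # update

random.seed(0)
zs = [random.gauss(20.0, 1.0) for _ in range(1000)]

x = xf = zs[0]
for z in zs:
    x = a * x                         # prediction (a = 1, so a no-op)
    x = x + lam * (z - h * x)         # converged kalman update
    xf = xf + lam * (z - xf)          # boring exponential filter
print(round(x, 9), round(xf, 9))      # identical numbers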

The contributor says that if the filter is adapting too much (filtering too little), then either q is too large or r is too small. By inspection, reducing q or increasing r reduces the value of lambda, which you might as well do directly without being bamboozled by all this complexity.
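
A quick check of that, with the same made-up constants and the p recursion wrapped in a function:

def steady_lambda(a, q, r, h, iters=500):
    # iterate the p recursion to convergence and return the gain
    p, lam = 1.0, 0.0
    for _ in range(iters):
        p = a * p * a + q
        lam = p * h / (h * p * h + r)
        p = (1 - lam * h) * p
    return lam

print(steady_lambda(1.0, 0.01, 0.5, 1.0))   # baseline
print(steady_lambda(1.0, 0.001, 0.5, 1.0))  # smaller q -> smaller lambda
print(steady_lambda(1.0, 0.01, 5.0, 1.0))   # bigger r -> smaller lambda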

Basically, unless your error values/covariance inputs are changing, the Kalman filter does nothing interesting. It may be relevant in the classical case of radar triangulation, where the absolute errors genuinely depend on the proximity of the target to the radar; but when you’ve got solid state sensors making local measurements of temperature and pressure (ie not multiple sensors measuring a remote property, such as IR temperature readings of a common object at a distance), the variances stay constant and the filter collapses into the exponential filter above.

Even if the Kalman filter worked in the magical way I’d thought it did (being able to estimate the process error from the measured values themselves), it still couldn’t be the right answer, as it only looks at past data.

My problem is that I’m trying to recreate past events (and define the mathematics that goes with them) from high frequency records of the sensor measurements. Therefore I should be doing some form of iterative model and curve fitting, not wasting effort on this one-way-time, one-pass data technology.

But this is hard, likely-to-fail work, with no off-the-shelf convincing-looking mathematics to learn and implement.
