Robotics Fundamentals - Extended Kalman Filter (EKF)

Kalman Filter Framework

Posted by Rico's Nerd Cluster on March 15, 2024

Welcome To EKF - A Diff Drive Example

EKF (Extended Kalman Filter) is widely applied in robotics to smooth out sensor noise and get a better estimate of a car's position.

Now let’s take a look at a diff (differential) drive example.

Imagine our world has two cones at known locations, $c_1$ and $c_2$, and a diff drive robot with wheel encoders and a cone detector.

  • A wheel encoder tells us how far a wheel has travelled within a known time window, which gives each wheel's speed in m/s.
  • The cone detector can detect the range $d$ and bearing $\beta$ of each cone at any given time.

The goal is to estimate the pose $[x, y, \theta]$ of the vehicle.


Mathematical Formulation

Wheel Encoder

The wheel encoder is able to give us a noisy estimate of the linear velocity $v$ and angular velocity $\omega$ of the robot:

$$v = \frac{l + r}{2}, \quad \omega = \frac{r - l}{c}, \quad R = \frac{v}{\omega} = \frac{c(l + r)}{2(r - l)}$$

Where:

  • $v$ is the linear velocity of the robot's center in m/s
  • $\omega$ is the angular velocity of the robot's rotation in rad/s
  • $l, r$ are the left and right wheel speeds over the time window $\Delta t$, in m/s
  • $c$ is the distance between the two wheels (wheel base)

One can also notice that $v = \omega R$ in this case!
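
To make this concrete, here is a minimal Python sketch of the encoder-to-velocity conversion. It assumes `l` and `r` are already the per-wheel speeds in m/s and `c` is the wheel base in meters; the function name `encoder_to_twist` and the sanity-check numbers are my own, illustrative choices.

```python
import math

def encoder_to_twist(l: float, r: float, c: float):
    """Convert left/right wheel speeds (m/s) into the robot's linear
    velocity v (m/s) and angular velocity w (rad/s)."""
    v = (l + r) / 2.0   # speed of the robot's center
    w = (r - l) / c     # positive w means counter-clockwise rotation
    return v, w

# Sanity check: with turning radius R = v / w, we recover v = w * R.
v, w = encoder_to_twist(l=0.9, r=1.1, c=0.5)
R = v / w
assert math.isclose(v, w * R)
```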

Conventionally, we use $x$ to represent our state vector:

$$x = \begin{bmatrix} x \\ y \\ \theta \end{bmatrix}$$

And we deem the linear and angular velocities as the control input $u$:

$$u = \begin{bmatrix} v \\ \omega \end{bmatrix}$$

Motion Model

In discrete time, over a step from $t$ to $t + \Delta t$, we assume the robot follows a circular motion; that is, it has constant linear and angular velocity.


Accordingly,

$$\dot{x} = v\cos(\theta), \quad \dot{y} = v\sin(\theta), \quad \dot{\theta} = \omega$$

If we integrate the above derivatives, we get the motion update. This motion update is equivalent to getting the new coordinates on a circle. In the real world, we also have Gaussian noise $\eta$ from the control update.

$$\begin{aligned}
x_{t+1} &= x_t - \frac{v}{\omega}\sin(\theta_t) + \frac{v}{\omega}\sin(\theta_t + \omega \Delta t) + \eta_x, \\
y_{t+1} &= y_t + \frac{v}{\omega}\cos(\theta_t) - \frac{v}{\omega}\cos(\theta_t + \omega \Delta t) + \eta_y, \\
\theta_{t+1} &= \theta_t + \omega \Delta t + \eta_\theta.
\end{aligned}$$

In short, $x_{t+1} = f(x_t, u, \eta)$.
  • The process noise (on the control input) has a covariance matrix $Q$:

$$Q = \begin{bmatrix} \mathrm{var}(v) & 0 \\ 0 & \mathrm{var}(\omega) \end{bmatrix}$$
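
As a sketch, the (noise-free) motion update could look like the following. The `abs(w) < 1e-9` guard is an addition not in the derivation above: it handles the straight-line limit, where $v/\omega$ is undefined as $\omega \to 0$.

```python
import numpy as np

def motion_model(x: np.ndarray, u: tuple, dt: float) -> np.ndarray:
    """Noise-free circular-arc motion update x_{t+1} = f(x_t, u)."""
    px, py, theta = x
    v, w = u
    if abs(w) < 1e-9:  # straight-line limit: v/w blows up as w -> 0
        return np.array([px + v * np.cos(theta) * dt,
                         py + v * np.sin(theta) * dt,
                         theta])
    return np.array([
        px - (v / w) * np.sin(theta) + (v / w) * np.sin(theta + w * dt),
        py + (v / w) * np.cos(theta) - (v / w) * np.cos(theta + w * dt),
        theta + w * dt,
    ])

# One 0.1 s step at v = 1 m/s, w = 0.5 rad/s starting from the origin.
print(motion_model(np.array([0.0, 0.0, 0.0]), (1.0, 0.5), 0.1))
```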

However, to do Kalman filtering, we must be able to compute the Kalman gain and covariance matrix using a linear form:

$$x_{t+1} = F_t x_t + B_t u_t + \eta$$

Because of nonlinear terms like $\frac{v}{\omega}\sin(\theta)$, we can't write the above in a linear form directly. However, we can use a Taylor expansion to linearize it. One tricky thing is that the linearization is around an estimate, so the EKF framework is formulated about the deviation from the true value.

  1. We first get a prediction $x_{t+1}$ using the full non-linear update model above,
  2. Define the deviation between the true state and the last state estimate:

$$\delta x := x_{t, true} - \bar{x}_t$$

  3. Linearize about $\bar{x}_t$ using the deviation $\delta x$:

$$x_{t+1} = f(\bar{x}_t, u_t), \qquad x_{t+1, true} = f(\bar{x}_t + \delta x, u_t) \approx x_{t+1} + J_x (x_{t, true} - \bar{x}_t) = J_x x_{t, true} + b$$

where $b = x_{t+1} - J_x \bar{x}_t$ collects the constant terms.

So $x_{t+1, true} = J_x x_{t, true} + b$ is linear w.r.t. $x_{t, true}$! This is the fundamental difference from ESKF, where a linear system is built on the error $\delta x = x_{t+1, true} - x_{t+1}$.

  • This is because our motion model estimates a single state estimate $x_{t+1}$ and still linearizes around $\bar{x}_t$, whereas ESKF estimates the error $\delta x = x_{t+1, true} - x_{t+1}$.

$J_x$ is the Jacobian of $f$ with respect to $x$:

$$x_{t+1, true} \approx \begin{bmatrix} 1 & 0 & -\frac{v}{\omega}\cos(\theta_t) + \frac{v}{\omega}\cos(\theta_t + \omega\Delta t) \\ 0 & 1 & -\frac{v}{\omega}\sin(\theta_t) + \frac{v}{\omega}\sin(\theta_t + \omega\Delta t) \\ 0 & 0 & 1 \end{bmatrix} x_{t, true} + b$$

To be consistent with most other literature, we use $F_t$ to represent this Jacobian (a.k.a. the prediction Jacobian):

$$F_t = J_x = \begin{bmatrix} 1 & 0 & -\frac{v}{\omega}\cos(\theta_t) + \frac{v}{\omega}\cos(\theta_t + \omega\Delta t) \\ 0 & 1 & -\frac{v}{\omega}\sin(\theta_t) + \frac{v}{\omega}\sin(\theta_t + \omega\Delta t) \\ 0 & 0 & 1 \end{bmatrix}$$
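
A direct translation of $F_t$ into code, plus a finite-difference check against the nonlinear model (a sketch; the function names are illustrative, and only the $\theta$ column is checked since $x$ and $y$ enter $f$ linearly):

```python
import numpy as np

def prediction_jacobian(x: np.ndarray, u: tuple, dt: float) -> np.ndarray:
    """F_t = df/dx evaluated at the current state estimate (3x3)."""
    theta = x[2]
    v, w = u
    return np.array([
        [1.0, 0.0, -(v / w) * np.cos(theta) + (v / w) * np.cos(theta + w * dt)],
        [0.0, 1.0, -(v / w) * np.sin(theta) + (v / w) * np.sin(theta + w * dt)],
        [0.0, 0.0, 1.0],
    ])

# Numerically differentiate the nonlinear model f w.r.t. theta and
# compare with the analytic third column of F_t.
f = lambda x, v, w, dt: np.array([
    x[0] - (v / w) * np.sin(x[2]) + (v / w) * np.sin(x[2] + w * dt),
    x[1] + (v / w) * np.cos(x[2]) - (v / w) * np.cos(x[2] + w * dt),
    x[2] + w * dt,
])
x, u, dt, eps = np.array([0.0, 0.0, 0.3]), (1.0, 0.5), 0.1, 1e-6
col = (f(x + [0, 0, eps], *u, dt) - f(x - [0, 0, eps], *u, dt)) / (2 * eps)
assert np.allclose(col, prediction_jacobian(x, u, dt)[:, 2])
```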

Observation Model

We assume that we know the position $[c_x, c_y]$ of each landmark. A cone appears as a range $d$ and bearing $\beta$ in our observation. Note that in real life, this observation is subject to observation noise:

$$\begin{aligned}
d &= \sqrt{(c_x - x)^2 + (c_y - y)^2} + \eta_d \\
\beta &= \mathrm{atan2}(c_y - y, \; c_x - x) - \theta + \eta_\beta
\end{aligned}$$
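
A sketch of this observation model for a single cone. Wrapping the bearing to $[-\pi, \pi)$ is my addition; it keeps innovations well-behaved near $\pm\pi$.

```python
import numpy as np

def observation_model(x: np.ndarray, cone: tuple) -> np.ndarray:
    """h(x): predicted range d and bearing beta of a cone at a known position."""
    px, py, theta = x
    cx, cy = cone
    d = np.hypot(cx - px, cy - py)
    beta = np.arctan2(cy - py, cx - px) - theta
    beta = (beta + np.pi) % (2.0 * np.pi) - np.pi  # wrap to [-pi, pi)
    return np.array([d, beta])

# A cone at (2, 1) seen from the origin while facing +x.
print(observation_model(np.array([0.0, 0.0, 0.0]), (2.0, 1.0)))
```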

Similarly, we think that the deviation of the predicted observation from the real observation comes from the deviation between the state estimate $x_{t+1}$ and the true state $x_{t+1, true}$. Using a Taylor expansion to linearize the observation model:

$$\begin{aligned}
h(x_{t+1, true}) &\approx h(x_{t+1}) + \left.\frac{\partial h}{\partial x}\right|_{x = x_{t+1}} (x_{t+1, true} - x_{t+1}) = h(x_{t+1}) + H (x_{t+1, true} - x_{t+1}) \\
\Rightarrow \; z_t - h(x_{t+1}) &= h(x_{t+1, true}) - h(x_{t+1}) = H (x_{t+1, true} - x_{t+1})
\end{aligned}$$

  • $h(x_{t+1, true}) := z_t$ is the real observation

Linearizing the above gives us the observation Jacobian $H$:

$$H = \left.\frac{\partial h}{\partial x}\right|_{x = x_{t+1}} = \begin{bmatrix} \frac{x - c_x}{d} & \frac{y - c_y}{d} & 0 \\ \frac{c_y - y}{d^2} & \frac{x - c_x}{d^2} & -1 \end{bmatrix}$$
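
And the corresponding observation Jacobian as code (a sketch under the same conventions as above):

```python
import numpy as np

def observation_jacobian(x: np.ndarray, cone: tuple) -> np.ndarray:
    """H = dh/dx evaluated at the predicted state (2x3)."""
    px, py, _ = x
    cx, cy = cone
    d2 = (cx - px) ** 2 + (cy - py) ** 2   # squared range
    d = np.sqrt(d2)
    return np.array([
        [(px - cx) / d,  (py - cy) / d,  0.0],
        [(cy - py) / d2, (px - cx) / d2, -1.0],
    ])
```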

This is another difference from ESKF, where

$$H_{ESKF} = \frac{\partial h}{\partial \delta x} = \frac{\partial h}{\partial x} \frac{\partial x}{\partial \delta x}$$

  • This is because our observation model is still linearized on $x$ (at $x = x_{t+1}$), whereas ESKF's observation model views the entire $h(x_{t+1, true}) - h(x_{t+1})$ as caused by the error $\delta x$.

Filtering Process

In EKF, we are implicitly applying Kalman filtering on the deviation $\delta x$ from the true state values. By applying Kalman filtering, we can minimize the error covariance of the deviation given the observation.

  • Prediction is:

$$x_{t+1} = f(\bar{x}_t, u_t) = \begin{bmatrix} x_t - \frac{v}{\omega}\sin(\theta_t) + \frac{v}{\omega}\sin(\theta_t + \omega\Delta t) \\ y_t + \frac{v}{\omega}\cos(\theta_t) - \frac{v}{\omega}\cos(\theta_t + \omega\Delta t) \\ \theta_t + \omega\Delta t \end{bmatrix}, \qquad P_{t+1} = F_t P_t F_t^T + Q$$
  • Update is:

$$\begin{aligned}
S &= H P_{t+1} H^T + R \\
K &= P_{t+1} H^T S^{-1} \\
\bar{x}_{t+1} &= x_{t+1} + K(z_t - h(x_{t+1})) \\
\bar{P}_{t+1} &= (I - K H) P_{t+1}
\end{aligned}$$
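
Putting the pieces together, below is a minimal, self-contained sketch of one EKF iteration. Assumptions worth flagging: `Q` is treated here as a 3×3 process-noise covariance already expressed in state space (the 2×2 control-noise form above would first be mapped into state space), `R` is the 2×2 measurement-noise covariance, a single cone measurement is fused per step, and all numeric values in the usage lines are illustrative.

```python
import numpy as np

def ekf_step(x, P, u, z, cone, Q, R, dt):
    """One EKF iteration: predict with the motion model, then update
    with a single cone's (range, bearing) measurement z."""
    v, w = u
    px, py, theta = x

    # --- Predict: nonlinear motion model and its Jacobian F_t ---
    x_pred = np.array([
        px - (v / w) * np.sin(theta) + (v / w) * np.sin(theta + w * dt),
        py + (v / w) * np.cos(theta) - (v / w) * np.cos(theta + w * dt),
        theta + w * dt,
    ])
    F = np.array([
        [1.0, 0.0, -(v / w) * np.cos(theta) + (v / w) * np.cos(theta + w * dt)],
        [0.0, 1.0, -(v / w) * np.sin(theta) + (v / w) * np.sin(theta + w * dt)],
        [0.0, 0.0, 1.0],
    ])
    P_pred = F @ P @ F.T + Q

    # --- Update: innovation, Kalman gain, corrected state ---
    cx, cy = cone
    d2 = (cx - x_pred[0]) ** 2 + (cy - x_pred[1]) ** 2
    d = np.sqrt(d2)
    h = np.array([d, np.arctan2(cy - x_pred[1], cx - x_pred[0]) - x_pred[2]])
    H = np.array([
        [(x_pred[0] - cx) / d,  (x_pred[1] - cy) / d,  0.0],
        [(cy - x_pred[1]) / d2, (x_pred[0] - cx) / d2, -1.0],
    ])
    S = H @ P_pred @ H.T + R                    # innovation covariance
    K = P_pred @ H.T @ np.linalg.inv(S)         # Kalman gain
    innov = z - h
    innov[1] = (innov[1] + np.pi) % (2 * np.pi) - np.pi  # wrap bearing residual
    x_new = x_pred + K @ innov
    P_new = (np.eye(3) - K @ H) @ P_pred
    return x_new, P_new

# Illustrative usage: one step with made-up noise levels and a cone at (2, 1).
x0 = np.array([0.0, 0.0, 0.0])
P0 = np.eye(3) * 0.1
Q = np.diag([0.01, 0.01, 0.005])
R = np.diag([0.1, 0.02])
z = np.array([2.2, 0.45])  # a (range, bearing) reading of the cone
x1, P1 = ekf_step(x0, P0, u=(1.0, 0.5), z=z, cone=(2.0, 1.0), Q=Q, R=R, dt=0.1)
print(x1)
```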
