I understand that one may use a particle filter to solve the filtering problem (estimating the hidden state of a system which can be described as a Hidden Markov Model).

If I have a system where I have the following Hidden Markov Model:

The hidden state of my system $X_k = \begin{bmatrix} x_k \\ v_k \end{bmatrix}$ is governed by

$$ \begin{bmatrix} x_k \\ v_k \end{bmatrix} = g\left(\begin{bmatrix} x_{k-1} \\ v_{k-1} \end{bmatrix}\right) + W_{k-1} $$

where the function $g$ is known and the probability distribution from which $W_{k-1}$ is drawn is known.

The observations of my system take the form

$$ z_k = h\left(\begin{bmatrix} x_{k} \\ v_{k} \end{bmatrix}\right) + V_k $$

where the function $h$ is known and the probability distribution from which $V_{k}$ is drawn is known.
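For concreteness, this state-space model can be simulated with placeholder choices of $g$, $h$, and the noise covariances — all of which are purely illustrative assumptions, not part of my actual system:

```python
import numpy as np

rng = np.random.default_rng(0)

def g(state, dt=0.1):
    # Illustrative transition (an assumption): constant-velocity dynamics
    x, v = state
    return np.array([x + dt * v, v])

def h(state):
    # Illustrative observation map (an assumption): observe the state directly
    return state

# Noise covariances, assumed known as stated above
Q = np.diag([1e-4, 1e-3])   # process noise covariance for W_k
R = np.diag([0.05, 0.05])   # observation noise covariance for V_k

def simulate(N, x0=np.array([0.0, 1.0])):
    """Generate hidden states X_{0:N} and observations z_{1:N}."""
    states, obs = [x0], []
    for _ in range(N):
        w = rng.multivariate_normal(np.zeros(2), Q)
        states.append(g(states[-1]) + w)
        v = rng.multivariate_normal(np.zeros(2), R)
        obs.append(h(states[-1]) + v)
    return np.array(states), np.array(obs)

states, obs = simulate(50)
```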

Suppose I am trying to write some code to perform an SIR (Sequential Importance Resampling) particle filter to extract the hidden states $x_{0:N}$ and $v_{0:N}$ given a series of observations $z_{0:N}$.

How do I calculate the importance weight to use in the update step (prior to normalisation)?

The unnormalised importance weight of particle $i$ at time step $k$ should take the following form:

$$ \hat{\omega}^{(i)}_k = \omega^{(i)}_{k-1} \dfrac{p(z_k \mid X^{(i)}_k)\, p(X^{(i)}_k \mid X^{(i)}_{k-1})}{\pi(X^{(i)}_k \mid X^{(i)}_{0:k-1}, z_{0:k-1})} $$

How is this calculated in practice when one wants to perform this in code?

I have a working implementation of the SIR filter in Python where, for the update step, the function $h$ is simply multiplication by 1 and $V_k$ is drawn from a Gaussian distribution with a known variance. In practice this means I effectively have noisy measurements of $x$ and $v$, where the noise on those measurements is white Gaussian noise.

I then use an update function where the unnormalised importance weight for each particle is simply the product of the reciprocals of the distances of the particle's state from the observation, i.e. I use the following:

$$ \hat{\omega}^{(i)}_k = \omega^{(i)}_{k-1} \dfrac{1}{x^{(i)}_k - z_k[0]} \dfrac{1}{v^{(i)}_k - z_k[1]} $$

This simple form works when the variance of the Gaussian noise is low but not when the variance increases. How should I go about calculating the importance weight in practice for this system?


#### Best Answer

When you write $$ \hat{\omega}^{(i)}_k = \omega^{(i)}_{k-1} \dfrac{p(z_k \mid X^{(i)}_k)\, p(X^{(i)}_k \mid X^{(i)}_{k-1})}{\pi(X^{(i)}_k \mid X^{(i)}_{0:k-1}, z_{0:k-1})} $$ you're looking at a ratio of densities (on the right-hand side). When you write your state transition and observation equations, the functions $h$ and $g$ are transformations, not densities. So you need to be more explicit about what kind of additive noise you have with the $V_k$s and the $W_k$s. You need $p(z_k \mid X^{(i)}_k)$, the observation density; $p(X^{(i)}_k \mid X^{(i)}_{k-1})$, your state transition density; and $\pi(X^{(i)}_k \mid X^{(i)}_{0:k-1}, z_{0:k-1})$, your importance/proposal/instrumental density.
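With the common bootstrap choice, where the proposal is the transition density itself, $\pi(X_k \mid X_{0:k-1}, z_{0:k-1}) = p(X_k \mid X_{k-1})$, the transition terms cancel and the unnormalised weight reduces to $\hat{\omega}^{(i)}_k = \omega^{(i)}_{k-1}\, p(z_k \mid X^{(i)}_k)$. A minimal sketch of that update step (the function names here are illustrative, and the Gaussian observation density is an assumption):

```python
import numpy as np
from scipy.stats import multivariate_normal

def bootstrap_update(particles, weights, z, h, R):
    """One SIR weight update using the bootstrap proposal.

    particles : (n, d) array of states already propagated through
                the transition density p(X_k | X_{k-1})
    weights   : (n,) previous normalised weights
    z         : observed vector z_k
    h         : observation function
    R         : observation noise covariance (V_k ~ N(0, R))
    """
    # Unnormalised weight: previous weight times observation likelihood
    likelihood = np.array(
        [multivariate_normal.pdf(z, mean=h(p), cov=R) for p in particles]
    )
    w = weights * likelihood
    return w / w.sum()  # normalise
```

With any other proposal you would additionally evaluate the transition and proposal densities for each particle and use the full ratio above.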

> I have a working implementation of the SIR filter in Python where, for the update step, the function $h$ is simply multiplication by 1 and $V_k$ is drawn from a Gaussian distribution with a known variance.

If I understand this correctly, then you have the observation equation $$ z_k = \begin{bmatrix} x_{k} \\ v_{k} \end{bmatrix} + V_k $$ which means $z_k \mid x_k, v_k$ is normally distributed with mean $$ \begin{bmatrix} x_{k} \\ v_{k} \end{bmatrix} $$ and variance-covariance matrix $\Sigma = \sigma^2 \mathbf{I}$. So you need to use this density as your weight (not reciprocal distances), and you still have to choose the other two densities.
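Concretely, for this identity-observation, Gaussian-noise case, the reciprocal-distance factors should be replaced by the Gaussian observation density evaluated at each particle; working in log-space also avoids underflow when the likelihoods are small. A sketch under the stated assumptions, with `var_x` and `var_v` the known noise variances (independent noise on each component is an assumption):

```python
import numpy as np

def update_weights(log_w_prev, particles, z, var_x, var_v):
    """Log-domain weight update for z_k = [x_k, v_k] + V_k,
    with independent Gaussian noise of known variances."""
    dx = particles[:, 0] - z[0]
    dv = particles[:, 1] - z[1]
    # log N(z | particle, diag(var_x, var_v)); constants kept for clarity
    log_lik = (
        -0.5 * dx**2 / var_x - 0.5 * np.log(2 * np.pi * var_x)
        - 0.5 * dv**2 / var_v - 0.5 * np.log(2 * np.pi * var_v)
    )
    log_w = log_w_prev + log_lik
    # Normalise via the log-sum-exp trick for numerical stability
    log_w -= log_w.max()
    w = np.exp(log_w)
    return w / w.sum()
```

Unlike the reciprocal-distance form, this weight stays well-behaved as the noise variance grows: a particle far from the observation is penalised smoothly by the Gaussian tail rather than blowing up when a particle happens to land near the observation.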