Typically the Kalman filter, like other time-series forecasting methods, uses a single-step prediction and update cycle.
For example, let us say I have sensor data collected every 1 ms.
Let $z$ denote the measurement and $x$ the true state.
That is, at $t = 100$ ms I have $z_0, z_1, z_2, \dots, z_{100}$.
Typically, in the prediction step we predict $x_{101}$, and in the next timestep we update the state estimate once we have the new measurement $z_{101}$.
But what if I need to predict $x_{110}$ at $t = 100$ ms?
My initial idea was to use 10 ms as the timestep.
At $t = 100$ ms we would then have $z_0, z_{10}, z_{20}, \dots, z_{100}$ and could predict $x_{110}$ in a single step. But this essentially throws away most of the sensor data.
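To make the idea concrete, here is a rough sketch of what I mean (the constant-velocity transition matrix and the random measurement stream are just placeholders for illustration):

```python
import numpy as np

# Purely illustrative: 1 ms sensor stream up to t = 100 ms.
z_1ms = np.random.randn(101)            # stand-in for z_0, ..., z_100

# Naive idea: treat 10 ms as the filter timestep, so only every 10th sample is used.
z_10ms = z_1ms[::10]                    # z_0, z_10, ..., z_100 -> only 11 of 101 samples
dt = 10.0                               # timestep in ms
B = np.array([[1.0, dt],                # placeholder constant-velocity transition
              [0.0, 1.0]])
# ... run the usual predict/update cycle on z_10ms with this model ...
# One further prediction step from t = 100 ms then lands exactly on t = 110 ms,
# but 90% of the measurements never enter the filter.
```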
Is there a better way to approach this problem in general?
Best Answer
In the state space model $$ z_t = A x_t + \epsilon_t, \qquad x_t = B x_{t-1} + \nu_t $$ where the errors are independent and identically distributed, the usual one-step prediction is $$ \hat{z}_{t+1|t} = E(z_{t+1}\mid z_t) = E(A x_{t+1} + \epsilon_{t+1}\mid z_t) = A\,E(x_{t+1}\mid z_t) = A\hat{x}_{t+1|t} $$ where $\hat{x}_{t+1|t} = E(x_{t+1}\mid z_t) = B\,E(x_t\mid z_t) = B\hat{x}_{t|t}$. If you want the $t+h$ prediction, you simply do the same thing and write it in terms of the filtered estimate $\hat{x}_{t|t}$: $$ \hat{z}_{t+h|t} = E(z_{t+h}\mid z_t) = A\,E(x_{t+h}\mid z_t) = A\hat{x}_{t+h|t} = A B^{h} \hat{x}_{t|t}. $$
Thus, if you want the state prediction $\hat{x}_{110|100} = E(x_{110}\mid z_{100})$ you'd use $B^{10}\hat{x}_{100|100}$, and the corresponding measurement prediction is $\hat{z}_{110|100} = A B^{10}\hat{x}_{100|100}$. Since both are computed from the filtered estimate $\hat{x}_{100|100}$, the filter still runs an update for every 1 ms measurement and no sensor data is discarded.
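A minimal numerical sketch of this in plain NumPy (the constant-velocity model, noise covariances, and the random measurement stream below are placeholder assumptions, using the asker's 1 ms sampling):

```python
import numpy as np

# Placeholder constant-velocity model at the native 1 ms rate.
dt = 1.0                                   # ms
B = np.array([[1.0, dt], [0.0, 1.0]])      # state transition (x_t = B x_{t-1} + nu_t)
A = np.array([[1.0, 0.0]])                 # measurement model (z_t = A x_t + eps_t)
Q = 1e-4 * np.eye(2)                       # process noise covariance (assumed)
R = np.array([[1.0]])                      # measurement noise covariance (assumed)

# Run the ordinary 1 ms predict/update cycle on *all* measurements up to t = 100 ms,
# so no sensor data is discarded.
x_hat, P = np.zeros(2), np.eye(2)
z_all = np.random.randn(101, 1)            # stand-in for z_0, ..., z_100
for z in z_all:
    # predict
    x_pred, P_pred = B @ x_hat, B @ P @ B.T + Q
    # update with the new measurement
    S = A @ P_pred @ A.T + R
    K = P_pred @ A.T @ np.linalg.inv(S)
    x_hat = x_pred + K @ (z - A @ x_pred)
    P = (np.eye(2) - K @ A) @ P_pred

# h-step-ahead prediction: iterate the prediction step, i.e. apply B^h.
h = 10
x_110 = np.linalg.matrix_power(B, h) @ x_hat   # \hat{x}_{110|100} = B^10 \hat{x}_{100|100}
z_110 = A @ x_110                              # \hat{z}_{110|100} = A B^10 \hat{x}_{100|100}
```

If you also need the uncertainty of that forecast, the covariance propagates the same way, just without measurement updates: $P_{t+h|t} = B^{h} P_{t|t} (B^{h})^\top + \sum_{j=0}^{h-1} B^{j} Q (B^{j})^\top$.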
Similar Posts:
- Solved – kalman filter multiple observations per time step
- Solved – Multi-target Tracking: calculate the association gate from Kalman filter
- Solved – Non-overlapping state and measurement covariances in Kalman Filter
- Solved – RNN vs Kalman filter : learning the underlying dynamics