
In the seminal paper "Rao-Blackwellised Particle Filtering for Dynamic Bayesian Networks" by A. Doucet et al., a sequential Monte Carlo filter (particle filter) is proposed that exploits a linear substructure $x^L_k$ in a Markov process $x_k = (x^L_k, x^N_k)$. By marginalizing out this linear structure, the filter can be split into two parts: a non-linear part handled by a particle filter, and a linear part handled by a Kalman filter (conditioned on the non-linear part $x^N_k$).
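Concretely, if I state the marginalization in my own words, it rests on the factorization
$$p(x^N_{1:k}, x^L_{1:k} \mid y_{1:k}) = p(x^L_{1:k} \mid x^N_{1:k}, y_{1:k})\, p(x^N_{1:k} \mid y_{1:k}),$$
where the first factor is Gaussian (a conditionally linear model) and can be handled in closed form by a Kalman filter, so only the second factor needs to be approximated with particles.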

I understand the marginalization part (the described filter is sometimes also called a marginalized filter). My intuition for why it is called a Rao-Blackwellized Particle Filter (RBPF) is that the Gaussian parameters are a sufficient statistic for the underlying linear process, so by the Rao-Blackwell theorem an estimator conditioned on these parameters performs at least as well as the plain sampling estimator.

The Rao-Blackwell estimator is defined as $\delta_1(X) = E(\delta(X) \mid T(X))$. In this context I would guess that $\delta(X)$ is the Monte Carlo estimator, $\delta_1(X)$ the RBPF, and $T(X)$ the Gaussian parametrization. My problem is that I don't see where this is actually applied in the paper.
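To make the variance-reduction intuition concrete, here is a toy sketch of Rao-Blackwellization outside the filtering context (my own example, not from the paper): to estimate $E[Y]$ with $X \sim N(0,1)$ and $Y \mid X \sim N(X,1)$, the conditioned estimator averages $E[Y \mid X] = X$ instead of raw samples of $Y$, and its variance is half that of the crude estimator.

```python
import numpy as np

rng = np.random.default_rng(0)

def crude(n):
    # delta(X, Y): plain Monte Carlo estimate of E[Y]
    x = rng.normal(0.0, 1.0, n)
    y = rng.normal(x, 1.0)        # Y | X ~ N(X, 1)
    return y.mean()

def rao_blackwellized(n):
    # delta_1(X) = E(delta | X): here E[Y | X] = X is known in closed
    # form, so we average the conditional expectation instead of Y itself
    x = rng.normal(0.0, 1.0, n)
    return x.mean()

# compare the variance of the two estimators over many repetitions:
# Var(crude) ~ (1 + 1)/100, Var(RB) ~ 1/100, so conditioning halves it
reps = 2000
v_crude = np.var([crude(100) for _ in range(reps)])
v_rb = np.var([rao_blackwellized(100) for _ in range(reps)])
```

Running this, `v_rb` comes out close to half of `v_crude`, which is exactly the "performs at least as well" guarantee of the theorem in a case where the gap is strict.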

So why is this called a Rao-Blackwellized Particle Filter, and where does the Rao-Blackwellization actually happen?


1 Answer

In $\widehat{I^1}$, $\mathbb{E}[f]$ is estimated by a plain Monte Carlo average over all variables. In $\widehat{I^2}$, the conditional expectation is computed exactly, and only the remaining (non-linear) variables are sampled. Replacing a sampled quantity by its exact conditional expectation is precisely the Rao-Blackwellization step.

Later in the paper the expectation is computed using Kalman filters.
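To illustrate what that looks like as an algorithm, here is a minimal sketch of a marginalized filter on an assumed scalar toy model (the model, constants, and function names below are my own choices, not the ones from the paper). Each particle carries a sample of the non-linear state together with the Kalman mean and variance of the linear state; the particle weights use the marginal likelihood in which the linear state has been integrated out analytically.

```python
import numpy as np

rng = np.random.default_rng(1)

# Assumed scalar toy model (my own, not the model from the paper):
#   x^N_k = sin(x^N_{k-1}) + v_k,  v_k ~ N(0, Qn)   (non-linear part)
#   x^L_k = A x^L_{k-1} + w_k,     w_k ~ N(0, Ql)   (linear part)
#   y_k   = x^N_k + x^L_k + e_k,   e_k ~ N(0, R)
Qn, Ql, R, A = 0.1, 0.1, 0.5, 0.9

def simulate(T):
    xn = xl = 0.0
    ys = []
    for _ in range(T):
        xn = np.sin(xn) + rng.normal(0.0, np.sqrt(Qn))
        xl = A * xl + rng.normal(0.0, np.sqrt(Ql))
        ys.append(xn + xl + rng.normal(0.0, np.sqrt(R)))
    return np.array(ys)

def rbpf(ys, N=500):
    xn = rng.normal(0.0, 1.0, N)   # particles for the non-linear state
    m = np.zeros(N)                # Kalman mean of x^L, one per particle
    P = np.ones(N)                 # Kalman variance of x^L, one per particle
    est = []
    for y in ys:
        # particle-filter step: sample the non-linear transition
        xn = np.sin(xn) + rng.normal(0.0, np.sqrt(Qn), N)
        # Kalman prediction of the linear state, conditioned on each particle
        m_pred = A * m
        P_pred = A * A * P + Ql
        # weights use the marginal likelihood p(y | x^N): the linear state
        # is integrated out analytically -- this is the Rao-Blackwell step
        S = P_pred + R                        # innovation variance
        w = np.exp(-0.5 * (y - xn - m_pred) ** 2 / S) / np.sqrt(S)
        w /= w.sum()
        # Kalman update of the linear state, one filter per particle
        K = P_pred / S
        m = m_pred + K * (y - xn - m_pred)
        P = (1.0 - K) * P_pred
        est.append(np.sum(w * (xn + m)))      # posterior mean of x^N + x^L
        # multinomial resampling
        idx = rng.choice(N, size=N, p=w)
        xn, m, P = xn[idx], m[idx], P[idx]
    return np.array(est)

ys = simulate(50)
est = rbpf(ys)
```

Only the non-linear state is represented by samples; the linear state is never sampled at all, which is where the variance reduction promised by the Rao-Blackwell theorem enters.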

