Third edition of Artificial Intelligence: foundations of computational agents, Cambridge University Press, 2023 is now available (including the full text).

### 6.5.3 Algorithms for Monitoring and Smoothing

You can use any standard belief-network algorithms, such as VE or particle filtering, to carry out monitoring or smoothing. However, you can take advantage of the fact that time moves forward and that you are getting observations in time and are interested in the state at the current time.

In **belief monitoring** or **filtering**, an agent computes the probability of
the current state given the history of observations. In terms of the
HMM of Figure 6.14, for each *i*, the agent wants to
compute *P(S _{i}|o_{0},...,o_{i})*, which is the distribution over the
state at time *i* given the particular observations *o _{0},...,o_{i}*.
This can easily be done using VE:

Suppose the agent has computed the previous belief based on the
observations received up until time *i-1*. That is, it has a factor representing
*P(S _{i-1}|o_{0},...,o_{i-1})*. Note that this is just a factor on
*S _{i-1}*. To compute the next belief, it multiplies this by
*P(S _{i}|S_{i-1})*, sums out *S _{i-1}*, multiplies this by the factor
*P(o _{i}|S_{i})*, and normalizes.
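This update step can be sketched directly in code. The sketch below assumes the belief, transition model, and observation factor are given as dictionaries; the names `filter_step`, `trans`, and `obs_lik` are illustrative, not from the text.

```python
# One filtering (belief-monitoring) step, written as explicit VE operations.
# belief[s]     = P(S_{i-1}=s | o_0,...,o_{i-1})
# trans[s][s2]  = P(S_i=s2 | S_{i-1}=s)
# obs_lik[s]    = P(o_i | S_i=s)
def filter_step(belief, trans, obs_lik):
    states = belief.keys()
    # Multiply by P(S_i | S_{i-1}) and sum out S_{i-1}.
    predicted = {s2: sum(belief[s] * trans[s][s2] for s in states)
                 for s2 in states}
    # Multiply by the observation factor P(o_i | S_i).
    unnorm = {s: predicted[s] * obs_lik[s] for s in states}
    # Normalize so the result is a probability distribution over S_i.
    z = sum(unnorm.values())
    return {s: p / z for s, p in unnorm.items()}
```

Repeated calls to such a function, one per time step, carry the belief forward as observations arrive.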

Multiplying a factor on *S _{i-1}* by the factor
*P(S _{i}|S_{i-1})* and summing out *S _{i-1}* is
**matrix multiplication**. Multiplying the result by
*P(o _{i}|S_{i})* is called the
**dot product**. Matrix multiplication and dot product are simple instances of VE.
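Assuming a two-state model with made-up numbers, the same update can be sketched in matrix form with NumPy:

```python
import numpy as np

# Illustrative two-state HMM; the numbers are made up, not from the text.
b = np.array([0.5, 0.5])        # belief: P(S_{i-1} | o_0,...,o_{i-1})
T = np.array([[0.9, 0.1],       # transition model: T[s, s2] = P(S_i=s2 | S_{i-1}=s)
              [0.2, 0.8]])
o = np.array([0.7, 0.1])        # observation factor: P(o_i | S_i)

predicted = b @ T               # matrix multiplication sums out S_{i-1}
unnorm = predicted * o          # elementwise product with P(o_i | S_i)
belief = unnorm / unnorm.sum()  # normalize to get P(S_i | o_0,...,o_i)
```

Writing the update this way makes the cost explicit: each step is quadratic in the number of states.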

**Example 6.30:** Consider the domain of Example 6.28. An observation of a door involves multiplying the probability of each location *L* by *P(door|Loc=L)* and renormalizing. A move right involves, for each state, doing a forward simulation of the move-right action in that state, weighted by the probability of being in that state.

For many problems the state space is too big for exact inference, and for these domains particle filtering is often very effective. With temporal models, resampling typically occurs at every time step: once the evidence has been observed and the posterior probabilities of the samples have been computed, the samples can be resampled.
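One such step can be sketched as follows. Here `sample_next` (a sampler for the transition model) and `obs_lik` (the observation likelihood) are illustrative stand-ins for whatever model the agent has.

```python
import random

# One particle-filtering step: forward simulate, weight by the
# observation, then resample in proportion to the weights.
# sample_next(s) samples from P(S_i | S_{i-1}=s);
# obs_lik(s) returns P(o_i | S_i=s).
def particle_filter_step(particles, sample_next, obs_lik):
    # Forward simulate each particle through the transition model.
    proposals = [sample_next(s) for s in particles]
    # Weight each sample by the probability of the observation.
    weights = [obs_lik(s) for s in proposals]
    # Resample with probability proportional to the weights.
    return random.choices(proposals, weights=weights, k=len(particles))
```

The returned list approximates the filtering distribution: states consistent with the observation appear more often after resampling.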

**Smoothing** is the problem of computing the
probability distribution of a state variable in an HMM
given past and future observations. The use of future observations can
make for more accurate predictions. Given a new observation, it is
possible to update all previous state estimates with one sweep
through the states using VE; see Exercise 6.11.
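One way to organize such a sweep is the standard forward-backward computation; the NumPy sketch below uses illustrative numbers and is not the particular algorithm of Exercise 6.11.

```python
import numpy as np

# Smoothing by a forward sweep and a backward sweep (forward-backward).
# prior[s]  = P(S_0=s);  T[s, s2] = P(S_{i+1}=s2 | S_i=s);
# O[i, s]   = P(o_i | S_i=s) for each of the n observed time steps.
def smooth(prior, T, O):
    n, k = O.shape
    fwd = np.zeros((n, k))          # fwd[i] proportional to P(S_i, o_0..o_i)
    f = prior * O[0]
    fwd[0] = f
    for i in range(1, n):
        f = (f @ T) * O[i]          # the same update as filtering
        fwd[i] = f
    bwd = np.ones((n, k))           # bwd[i] proportional to P(o_{i+1}..o_{n-1} | S_i)
    for i in range(n - 2, -1, -1):
        bwd[i] = T @ (O[i + 1] * bwd[i + 1])
    post = fwd * bwd                # combine past and future evidence
    return post / post.sum(axis=1, keepdims=True)   # P(S_i | o_0..o_{n-1})
```

Each row of the result is the smoothed distribution for one time step, so the whole sweep costs only a constant factor more than filtering alone.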