12.5 Decision Processes

Recall that the planning horizon is how far ahead an agent considers when planning. The decision networks of Section 12.3.1 were for finite-stage, partially observable domains. This section considers indefinite-horizon and infinite-horizon problems.

Often an agent must reason about an ongoing process or it does not know how many actions it will be required to do. These are called infinite-horizon problems when the process may go on forever or indefinite-horizon problems when the agent will eventually stop, but it does not know when it will stop.

For ongoing processes, it may not make sense to consider only the utility at the end, because the agent may never get to the end. Instead, an agent can receive a sequence of rewards. Rewards provide a way to factor utility through time, by having a reward for each time, and accumulating the rewards to determine utility. Rewards can incorporate action costs in addition to any prizes or penalties that may be awarded. Negative rewards are called punishments. Indefinite-horizon problems can be modeled using a stopping state. A stopping state or absorbing state is a state in which all actions have no effect; that is, when the agent is in that state, all actions immediately return to that state with a zero reward. Goal achievement can be modeled by having a reward for entering such a stopping state.

A Markov decision process can be seen as a Markov chain augmented with actions and rewards or as a decision network extended in time. At each stage, the agent decides which action to perform; the reward and the resulting state depend on both the previous state and the action performed.

Unless noted, assume a stationary model, where the state transitions and the rewards do not depend on the time.

A Markov decision process, or MDP, consists of

  • S, a set of states of the world

  • A, a set of actions

  • P : S × S × A → [0,1], which specifies the dynamics. This is written as P(s′ ∣ s, a), the probability of the agent transitioning into state s′ given that the agent is in state s and does action a. Thus

    ∀s ∈ S  ∀a ∈ A   ∑_{s′∈S} P(s′ ∣ s, a) = 1.
  • R : S × A × S → ℝ, where R(s,a,s′), the reward function, gives the expected immediate reward from doing action a and transitioning to state s′ from state s. Sometimes it is convenient to use R(s,a), the expected value of doing a in state s, which is R(s,a) = ∑_{s′} R(s,a,s′) P(s′ ∣ s, a).

A finite part of a Markov decision process can be depicted using a decision network as in Figure 12.15.

Figure 12.15: Decision network representing a finite part of an MDP
Example 12.29.

Suppose Sam wanted to make an informed decision about whether to party or relax over the weekend. Sam prefers to party, but is worried about getting sick. Such a problem can be modeled as an MDP with two states, healthy and sick, and two actions, relax and party. Thus

S = {healthy, sick}
A = {relax, party}.

Based on experience, Sam estimates that the dynamics P(s′ ∣ s, a) is given by

S A Probability of s′ = healthy
healthy relax 0.95
healthy party 0.7
sick relax 0.5
sick party 0.1

So, if Sam is healthy and parties, there is a 30% chance of becoming sick. If Sam is healthy and relaxes, Sam will more likely remain healthy. If Sam is sick and relaxes, there is a 50% chance of getting better. If Sam is sick and parties, there is only a 10% chance of becoming healthy.

Sam estimates the rewards to be the following, irrespective of the resulting state:

S A Reward
healthy relax 7
healthy party 10
sick relax 0
sick party 2

Thus, Sam always enjoys partying more than relaxing. However, Sam feels much better overall when healthy, and partying results in being sick more than relaxing does.

The problem is to determine what Sam should do each weekend.
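To make Example 12.29 concrete, the following is a minimal Python sketch (the names are ours, not the text's; the numbers are exactly Sam's estimated dynamics and rewards above). The later sketches in this section reuse this encoding.

STATES = ["healthy", "sick"]
ACTIONS = ["relax", "party"]

# Probability that the next state is "healthy", indexed by (state, action);
# the probability of "sick" is the complement.
P_HEALTHY = {("healthy", "relax"): 0.95, ("healthy", "party"): 0.7,
             ("sick", "relax"): 0.5,     ("sick", "party"): 0.1}

# Reward for doing an action in a state, irrespective of the resulting state.
R = {("healthy", "relax"): 7, ("healthy", "party"): 10,
     ("sick", "relax"): 0,    ("sick", "party"): 2}

def P(s_next, s, a):
    """P(s' | s, a) for the two-state MDP."""
    p = P_HEALTHY[(s, a)]
    return p if s_next == "healthy" else 1 - p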

Example 12.30.

A grid world is an idealization of a robot in an environment. At each time, the robot is at some location and can move to neighboring locations, collecting rewards and punishments. Suppose that the actions are stochastic, so that there is a probability distribution over the resulting states given the action and the state.

Figure 12.16: The grid world of Example 12.30

Figure 12.16 shows a 10×10 grid world, where the robot can choose one of four actions: up, down, left, or right. If the agent carries out one of these actions, it has a 0.7 chance of going one step in the desired direction and a 0.1 chance of going one step in any of the other three directions. If it bumps into the outside wall (i.e., the location computed is outside the grid), there is a penalty of 1 (i.e., a reward of −1) and the agent does not actually move. There are four rewarding states (apart from the walls), one worth +10 (at position (9,8); 9 across and 8 down), one worth +3 (at position (8,3)), one worth −5 (at position (4,5)), and one worth −10 (at position (4,8)). In each of these states, the agent gets the reward after it carries out an action in that state, not when it enters the state. When the agent reaches one of the states with positive reward (either +3 or +10), no matter what action it performs, at the next step it is flung, at random, to one of the four corners of the grid world.

Note that the reward in this example depends on both the initial state and the final state. The agent bumps into the wall, and so receives a reward of −1, if and only if it remains in the same state. Knowing just the initial state and the action, or just the final state and the action, does not provide enough information to infer the reward.
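The stochastic dynamics of this grid world can be captured in a short simulator. The sketch below samples a transition and a reward; the coordinate convention (x across, y down, both 0–9) and the treatment of the wall penalty as added to any state reward are our reading of the description above, not code from the text.

import random

SIZE = 10
MOVES = {"up": (0, -1), "down": (0, 1), "left": (-1, 0), "right": (1, 0)}
REWARD_STATES = {(9, 8): 10, (8, 3): 3, (4, 5): -5, (4, 8): -10}
CORNERS = [(0, 0), (0, SIZE - 1), (SIZE - 1, 0), (SIZE - 1, SIZE - 1)]

def step(state, action):
    """Sample (next_state, reward) for the grid world of Example 12.30."""
    reward = REWARD_STATES.get(state, 0)   # reward for acting in this state
    if state in ((9, 8), (8, 3)):          # positive-reward states: flung to a corner
        return random.choice(CORNERS), reward
    # 0.7 chance of the intended direction, 0.1 for each of the other three
    directions = [MOVES[action]] + [d for d in MOVES.values() if d != MOVES[action]]
    dx, dy = random.choices(directions, weights=[0.7, 0.1, 0.1, 0.1])[0]
    x, y = state
    nx, ny = x + dx, y + dy
    if not (0 <= nx < SIZE and 0 <= ny < SIZE):   # bumped into the outside wall
        return state, reward - 1
    return (nx, ny), reward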

As with decision networks, the designer also has to consider what information is available to the agent when it decides what to do. There are two variations:

  • In a fully observable Markov decision process (MDP), the agent gets to observe the current state when deciding what to do.

  • A partially observable Markov decision process (POMDP) is a combination of an MDP and a hidden Markov model (HMM). At each time, the agent gets to make some (ambiguous and possibly noisy) observations that depend on the state. The agent only has access to the history of rewards, observations, and previous actions when making a decision. It cannot directly observe the current state.

Rewards

To decide what to do, the agent compares different sequences of rewards. The most common way to do this is to convert a sequence of rewards into a number called the return, the cumulative reward, or the value. This is a number that specifies the utility to an agent of the current and future rewards. To compute the return, the agent combines the current reward with other rewards in the future. Suppose the agent receives the sequence of rewards

r₁, r₂, r₃, r₄, ….

Three common reward criteria are used to combine rewards into a value V:

Total reward

V = ∑_{i=1}^∞ rᵢ. In this case, the value is the sum of all of the rewards. This works when you can guarantee that the sum is finite; but if the sum is infinite, it does not give any opportunity to compare which sequence of rewards is preferable. For example, a sequence of 1 rewards has the same total as a sequence of 100 rewards (both are infinite). One case where the total reward is finite is when there are stopping states and the agent always has a non-zero probability of eventually entering a stopping state.

Average reward

V = lim_{n→∞} (r₁ + ⋯ + rₙ)/n. In this case, the agent’s value is the average of its rewards, averaged over each time period. As long as the rewards are finite, this value will also be finite. However, whenever the total reward is finite, the average reward is zero, and so the average reward will fail to allow the agent to choose among different actions that each have a zero average reward. Under this criterion, the only thing that matters is where the agent ends up. Any finite sequence of bad actions does not affect the limit. For example, receiving 1,000,000 followed by rewards of 1 has the same average reward as receiving 0 followed by rewards of 1 (they both have an average reward of 1).

Discounted reward

V = r₁ + γr₂ + γ²r₃ + ⋯ + γⁱ⁻¹rᵢ + ⋯, where γ, the discount factor, is a number in the range 0 ≤ γ < 1. Under this criterion, future rewards are worth less than the current reward. If γ were 1, this would be the same as the total reward. When γ = 0, the agent ignores all future rewards. Having 0 ≤ γ < 1 guarantees that, whenever the rewards are finite, the total value will also be finite.

The discounted reward can be rewritten as

V = ∑_{i=1}^∞ γⁱ⁻¹ rᵢ
  = r₁ + γr₂ + γ²r₃ + ⋯ + γⁱ⁻¹rᵢ + ⋯
  = r₁ + γ(r₂ + γ(r₃ + ⋯)).

Suppose Vₖ is the reward accumulated from time k:

Vₖ = rₖ + γ(rₖ₊₁ + γ(rₖ₊₂ + ⋯))
   = rₖ + γVₖ₊₁.

To understand the properties of Vₖ, suppose S = 1 + γ + γ² + γ³ + ⋯; then S = 1 + γS. Solving for S gives S = 1/(1−γ). Thus, with the discounted reward, the value of all of the future is at most 1/(1−γ) times as much as the maximum reward and at least 1/(1−γ) times as much as the minimum reward. Therefore, the eternity of time from now only has a finite value compared with the immediate reward, unlike the average reward, in which the immediate reward is dominated by the cumulative reward for the eternity of time.
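As a quick illustration (a sketch of ours, not from the text), the recurrence Vₖ = rₖ + γVₖ₊₁ gives a right-to-left way to compute the discounted return of a finite reward sequence, and the 1/(1−γ) bound can be checked numerically:

def discounted_return(rewards, gamma):
    """r1 + gamma*r2 + gamma^2*r3 + ..., computed using V_k = r_k + gamma*V_{k+1}."""
    value = 0.0
    for r in reversed(rewards):
        value = r + gamma * value
    return value

# With gamma = 0.8, a constant reward of 1 per step is worth at most 1/(1 - 0.8) = 5;
# a long finite prefix approaches that bound.
print(discounted_return([1] * 100, 0.8))   # approximately 5.0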

In economics, γ is related to the interest rate: getting $1 now is equivalent to getting $(1+i) in one year, where i is the interest rate. You could also interpret the discount factor in terms of survival: γ can be seen as the probability that the agent keeps going for another step.

The rest of this chapter considers discounted rewards, referred to as the return or value.

 

Foundations of Discounting

You might wonder whether there is something special about discounted rewards, or whether they are just an arbitrary way to define utility over time. Utility follows from the axioms of Section 12.1.1. Discounting can be proved to follow from a set of assumptions, as follows.

With an infinite sequence of outcomes o₁, o₂, o₃, …, the following assumptions hold

  • the first time period matters, so there exist o₁, o₂, o₃, … and o₁′ such that

    o₁, o₂, o₃, … ≻ o₁′, o₂, o₃, …
  • a form of additive independence, where preferences for the first two time periods do not depend on the future:

    x₁, x₂, o₃, o₄, … ⪰ y₁, y₂, o₃, o₄, …
         if and only if
    x₁, x₂, o₃′, o₄′, … ⪰ y₁, y₂, o₃′, o₄′, …
  • time stationarity, where if the first outcome is the same, preference depends on the remainder:

    o₁, o₂, o₃, … ⪰ o₁, o₂′, o₃′, …
         if and only if
    o₂, o₃, … ⪰ o₂′, o₃′, …
  • some extra technical conditions specifying that the agent is only concerned about finite subspaces of infinite time

if and only if there exists a discount factor γ and a function r such that

utility(o₁, o₂, o₃, …) = ∑ᵢ γⁱ⁻¹ r(oᵢ)

where r does not depend on the time, only the outcome. These are quite strong assumptions, for example, disallowing complements and substitutes. It is standard to engineer the rewards to make them true.

 

12.5.1 Policies

In a fully-observable Markov decision process, the agent gets to observe its current state before deciding which action to carry out. For now, assume that the Markov decision process is fully observable. A policy for an MDP specifies what the agent should do as a function of the state it is in. A stationary policy is a function π:SA. In a non-stationary policy the action is a function of the state and the time; we assume policies are stationary.

Given a reward criterion, a policy has an expected return, often referred to as the expected value, for every state. Let V^π(s) be the expected value of following π in state s. This is the utility that an agent in state s following policy π receives, on average. Policy π* is an optimal policy if, for every policy π and state s, V^{π*}(s) ≥ V^π(s). That is, an optimal policy is at least as good as any other policy for every state.

Example 12.31.

For Example 12.29, with two states and two actions, there are 2² = 4 policies:

  • Always relax.

  • Always party.

  • Relax if healthy and party if sick.

  • Party if healthy and relax if sick.

The total reward for each of these policies is infinite, because the agent never stops and cannot keep receiving a reward of 0 forever. Determining the average reward is left as an exercise (Exercise 12.15). How to compute the discounted reward is discussed in the next section.

Example 12.32.

In the grid-world MDP of Example 12.30, there are 100 states and 4 actions, therefore there are 4¹⁰⁰ ≈ 10⁶⁰ stationary policies. Each policy specifies an action for each state.

For infinite-horizon problems, a stationary MDP always has an optimal stationary policy. However, for finite-stage problems, a non-stationary policy might be better than all stationary policies. For example, if the agent had to stop at time n, for the last decision in some state, the agent would act to get the largest immediate reward without considering the future actions, but for earlier decisions it may decide to get a lower reward immediately to obtain a larger reward later.

Value of a Policy

Consider how to compute the expected value of a policy under the discounted reward criterion, for a given discount factor γ. The value is defined in terms of two interrelated functions:

  • V^π(s) is the expected value for an agent that is in state s and following policy π.

  • Q^π(s,a) is the expected value for an agent that is starting in state s, then doing action a, then following policy π. This is called the Q-value of policy π.

Q^π and V^π are defined recursively in terms of each other. If the agent is in state s, performs action a, and arrives in state s′, it gets the immediate reward of R(s,a,s′) plus the discounted future return, γV^π(s′). When the agent is planning it does not know the actual resulting state, so it uses the expected value, averaged over the possible resulting states:

Q^π(s,a) = ∑_{s′} P(s′ ∣ s, a) (R(s,a,s′) + γV^π(s′))
         = R(s,a) + γ ∑_{s′} P(s′ ∣ s, a) V^π(s′)      (12.2)

where R(s,a) = ∑_{s′} P(s′ ∣ s, a) R(s,a,s′).

V^π(s) is obtained by doing the action specified by π and then following π:

V^π(s) = Q^π(s, π(s)).
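These two equations suggest a simple way to estimate V^π: start with an arbitrary value function and repeatedly apply V(s) := Q^π(s, π(s)). The following sketch (our own code, reusing the encoding sketched for Example 12.29) does this iteratively; the linear equations can also be solved directly, as discussed under policy iteration below.

STATES = ["healthy", "sick"]
P_HEALTHY = {("healthy", "relax"): 0.95, ("healthy", "party"): 0.7,
             ("sick", "relax"): 0.5,     ("sick", "party"): 0.1}
R = {("healthy", "relax"): 7, ("healthy", "party"): 10,
     ("sick", "relax"): 0,    ("sick", "party"): 2}

def P(s_next, s, a):
    p = P_HEALTHY[(s, a)]
    return p if s_next == "healthy" else 1 - p

def evaluate_policy(pi, gamma, iterations=1000):
    """Iteratively apply V(s) := R(s, pi(s)) + gamma * sum_{s'} P(s'|s,pi(s)) V(s')."""
    V = {s: 0.0 for s in STATES}
    for _ in range(iterations):
        V = {s: R[(s, pi[s])]
                + gamma * sum(P(s2, s, pi[s]) * V[s2] for s2 in STATES)
             for s in STATES}
    return V

# Value of "party when healthy, relax when sick" with gamma = 0.8:
print(evaluate_policy({"healthy": "party", "sick": "relax"}, 0.8))
# approaches {'healthy': 35.7..., 'sick': 23.8...}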

Value of an Optimal Policy

Let Q*(s,a), where s is a state and a is an action, be the expected value of doing a in state s and then following the optimal policy. Let V*(s), where s is a state, be the expected value of following an optimal policy from state s.

Q* can be defined analogously to Q^π:

Q*(s,a) = ∑_{s′} P(s′ ∣ s, a) (R(s,a,s′) + γV*(s′))
        = R(s,a) + γ ∑_{s′} P(s′ ∣ s, a) V*(s′).

V*(s) is obtained by performing the action that gives the best value in each state:

V*(s) = max_a Q*(s,a).

An optimal policy π* is one of the policies that gives the best value for each state:

π*(s) = argmax_a Q*(s,a)

where argmax_a Q*(s,a) is a function of state s, and its value is an action a that results in the maximum value of Q*(s,a).

12.5.2 Value Iteration

Value iteration is a method of computing an optimal policy for an MDP and its value.

Value iteration starts at the “end” and then works backward, refining an estimate of either Q* or V*. There is really no end, so it uses an arbitrary end point. Let Vₖ be the value function assuming there are k stages to go, and let Qₖ be the Q-function assuming there are k stages to go. These can be defined recursively. Value iteration starts with an arbitrary function V₀. For subsequent stages, it uses the following equations to get the functions for k+1 stages to go from the functions for k stages to go:

Qₖ₊₁(s,a) = R(s,a) + γ ∑_{s′} P(s′ ∣ s, a) Vₖ(s′)
Vₖ(s) = max_a Qₖ(s,a).

Value iteration can either save a V[S] array or a Q[S,A] array. Saving the V array results in less storage, but it is more difficult to determine an optimal action, and one more iteration is needed to determine which action results in the greatest value.

1: procedure Value_iteration(S,A,P,R)
2:      Inputs
3:          S is the set of all states
4:          A is the set of all actions
5:          P is the state transition function specifying P(s′ ∣ s, a)
6:          R is a reward function R(s,a)      
7:      Output
8:          π[S] approximately optimal policy
9:          V[S] value function      
10:      Local
11:          real array Vk[S] is a sequence of value functions
12:          action array π[S]      
13:      assign V0[S] arbitrarily
14:      k:=0
15:      repeat
16:          k:=k+1
17:          for each state s do
18:               Vₖ[s] := max_a (R(s,a) + γ ∑_{s′} P(s′ ∣ s, a) Vₖ₋₁[s′])          
19:      until termination
20:      for each state s do
21:          π[s] := argmax_a (R(s,a) + γ ∑_{s′} P(s′ ∣ s, a) Vₖ[s′])      
22:      return π,Vk
Figure 12.17: Value iteration for MDPs, storing V

Figure 12.17 shows the value iteration algorithm when the V array is stored. This procedure converges no matter what the initial value function V₀ is. An initial value function that approximates V* converges more quickly than one that does not. The basis for many abstraction techniques for MDPs is to use some heuristic method to approximate V* and to use this as an initial seed for value iteration.

Example 12.33.

Consider the two-state MDP of Example 12.29 with discount γ=0.8. We write the value function as [healthy_value,sick_value] and the Q-function as [[healthy_relax,healthy_party],[sick_relax,sick_party]]. Suppose initially the value function is [0,0]. The next Q-value is [[7,10],[0,2]], so the next value function is [10,2] (obtained by Sam partying). The next Q-value is then

State Action Value
healthy relax 7 + 0.8·(0.95·10 + 0.05·2) = 14.68
healthy party 10 + 0.8·(0.7·10 + 0.3·2) = 16.08
sick relax 0 + 0.8·(0.5·10 + 0.5·2) = 4.8
sick party 2 + 0.8·(0.1·10 + 0.9·2) = 4.24

So the next value function is [16.08,4.8]. After 1000 iterations, the value function is [35.71,23.81]. So the Q-function is [[35.10,35.71],[23.81,22.0]]. Therefore, the optimal policy is to party when healthy and relax when sick.
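The iteration in this example can be reproduced with a short script. This is a sketch of the update in Figure 12.17 specialized to the two-state MDP (our code, reusing the earlier encoding), not code from the text.

STATES = ["healthy", "sick"]
ACTIONS = ["relax", "party"]
GAMMA = 0.8
P_HEALTHY = {("healthy", "relax"): 0.95, ("healthy", "party"): 0.7,
             ("sick", "relax"): 0.5,     ("sick", "party"): 0.1}
R = {("healthy", "relax"): 7, ("healthy", "party"): 10,
     ("sick", "relax"): 0,    ("sick", "party"): 2}

def P(s_next, s, a):
    p = P_HEALTHY[(s, a)]
    return p if s_next == "healthy" else 1 - p

def Q(s, a, V):
    """One-step backup: R(s,a) + gamma * sum_{s'} P(s'|s,a) V(s')."""
    return R[(s, a)] + GAMMA * sum(P(s2, s, a) * V[s2] for s2 in STATES)

V = {s: 0.0 for s in STATES}       # arbitrary V_0
for _ in range(1000):
    V = {s: max(Q(s, a, V) for a in ACTIONS) for s in STATES}

pi = {s: max(ACTIONS, key=lambda a: Q(s, a, V)) for s in STATES}
print(V)    # about {'healthy': 35.71, 'sick': 23.81}
print(pi)   # {'healthy': 'party', 'sick': 'relax'}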

Example 12.34.

Consider the nine squares around the +10 reward of Example 12.30. The discount is γ=0.9. Suppose the algorithm starts with V0[s]=0 for all states s.

The values of V₁, V₂, and V₃ (to one decimal point) for these nine cells are

V₁:                V₂:                 V₃:
 0    0   −0.1      0    6.3  −0.1      4.5  6.2  4.4
 0   10   −0.1      6.3  9.8   6.2      6.2  9.7  6.6
 0    0   −0.1      0    6.3  −0.1      4.5  6.1  4.4

After the first step of value iteration (in V₁), the nodes get their immediate expected reward. The center node in this figure is the +10 reward state. The right nodes have a value of −0.1, with the optimal actions being up, left, and down; each of these actions has a 0.1 chance of crashing into the wall, which gives a reward of −1, so the expected immediate reward is −0.1.

V₂ gives the values after the second step of value iteration. Consider the node that is immediately to the left of the +10 reward state. Its optimal action is to go right; it has a 0.7 chance of getting a reward of 10 in the following state, so that is worth 9 (10 times the discount of 0.9) to it now. The expected reward for the other possible resulting states is 0. Thus, the value of this state is 0.7·9 = 6.3.

Consider the node immediately to the right of the +10 reward state after the second step of value iteration. The agent’s optimal action in this state is to go left. The value of this state is

  0.7 · (0 + 0.9·10)          (agent goes left)
+ 0.1 · (0 + 0.9·(−0.1))      (agent goes up)
+ 0.1 · (−1 + 0.9·(−0.1))     (agent goes right, bumping into the wall)
+ 0.1 · (0 + 0.9·(−0.1))      (agent goes down)

where each term is the probability of the outcome times its reward plus discounted future value. This evaluates to 6.173, which is approximated to 6.2 in V₂ above.

The +10 reward state has a value less than 10 in V2 because the agent gets flung to one of the corners and these corners look bad at this stage.

After the next step of value iteration, shown on the right-hand side of the figure, the effect of the +10 reward has progressed one more step. In particular, the corners shown get values that indicate a reward in three steps.

The value iteration algorithm of Figure 12.17 has an array for each stage, but it really only needs to store the current and previous arrays. It can update one array based on values from the other.

A common refinement of this algorithm is asynchronous value iteration. Rather than sweeping through the states to create a new value function, asynchronous value iteration updates the states one at a time, in any order, and stores the values in a single array. Asynchronous value iteration can store either the Q[s,a] array or the V[s] array. Figure 12.18 shows asynchronous value iteration when the Q-array is stored. It converges faster than value iteration and is the basis of some of the algorithms for reinforcement learning. Termination can be difficult to determine if the agent must guarantee a particular error, unless it is careful about how the actions and states are selected. Often, this procedure is run indefinitely as an anytime algorithm, where it is always prepared to give its best estimate of the optimal action in a state when asked.

1: procedure Asynchronous_value_iteration(S,A,P,R)
2:      Inputs
3:          S is the set of all states
4:          A is the set of all actions
5:          P is the state transition function specifying P(s′ ∣ s, a)
6:          R is a reward function R(s,a)      
7:      Output
8:          π[s] policy
9:          Q[S,A] value function      
10:      Local
11:          real array Q[S,A]
12:          action array π[S]      
13:      assign Q[S,A] arbitrarily
14:      repeat
15:          select a state s
16:          select an action a
17:          Q[s,a] := R(s,a) + γ ∑_{s′} P(s′ ∣ s, a) max_{a′} Q[s′,a′]
18:      until termination
19:      for each state s do
20:          π[s] := argmax_a Q[s,a]      
21:      return π,Q
Figure 12.18: Asynchronous value iteration for MDPs

Asynchronous value iteration could also be implemented by storing just the V[s] array. In that case, the algorithm selects a state s and carries out the update

V[s] := max_a (R(s,a) + γ ∑_{s′} P(s′ ∣ s, a) V[s′]).

Although this variant stores less information, it is more difficult to extract the policy. It requires one extra backup to determine which action a results in the maximum value. This can be done using

π[s] := argmax_a (R(s,a) + γ ∑_{s′} P(s′ ∣ s, a) V[s′]).
Example 12.35.

In Example 12.34, the state one step up and one step to the left of the +10 reward state only had its value updated after three value iterations, in which each iteration involved a sweep through all of the states.

In asynchronous value iteration, the +10 reward state can be chosen first. Next, the node to its left can be chosen, and its value will be 0.7·0.9·10 = 6.3. Next, the node above that node could be chosen, and its value would become 0.7·0.9·6.3 = 3.969. Note that it has a value that reflects that it is close to a +10 reward after considering three states, not 300 states, as does value iteration.
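A sketch (ours, not the text's) of asynchronous value iteration storing the Q-array, as in Figure 12.18, again for the two-state MDP of Example 12.29. Picking states and actions at random is one simple choice; smarter orderings, such as the one in the example above, converge faster.

import random

STATES = ["healthy", "sick"]
ACTIONS = ["relax", "party"]
GAMMA = 0.8
P_HEALTHY = {("healthy", "relax"): 0.95, ("healthy", "party"): 0.7,
             ("sick", "relax"): 0.5,     ("sick", "party"): 0.1}
R = {("healthy", "relax"): 7, ("healthy", "party"): 10,
     ("sick", "relax"): 0,    ("sick", "party"): 2}

def P(s_next, s, a):
    p = P_HEALTHY[(s, a)]
    return p if s_next == "healthy" else 1 - p

Qv = {(s, a): 0.0 for s in STATES for a in ACTIONS}    # assign Q arbitrarily
for _ in range(20000):
    s = random.choice(STATES)                          # select a state
    a = random.choice(ACTIONS)                         # select an action
    Qv[(s, a)] = R[(s, a)] + GAMMA * sum(
        P(s2, s, a) * max(Qv[(s2, a2)] for a2 in ACTIONS) for s2 in STATES)

pi = {s: max(ACTIONS, key=lambda a: Qv[(s, a)]) for s in STATES}
print(pi)   # {'healthy': 'party', 'sick': 'relax'}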

12.5.3 Policy Iteration

Policy iteration starts with a policy and iteratively improves it. It starts with an arbitrary policy π0 (an approximation to the optimal policy works best) and carries out the following steps, starting from i=0.

  • Policy evaluation: determine V^{πᵢ}(S). The definition of V^π is a set of |S| linear equations in |S| unknowns. The unknowns are the values of V^{πᵢ}(S). There is an equation for each state. These equations can be solved by a linear equation solution method (such as Gaussian elimination) or they can be solved iteratively.

  • Policy improvement: choose πᵢ₊₁(s) = argmax_a Q^{πᵢ}(s,a), where the Q-value can be obtained from V using Equation 12.2. To detect when the algorithm has converged, it should only change the policy if the new action for some state improves the expected value; that is, it should set πᵢ₊₁(s) to be πᵢ(s) if πᵢ(s) is one of the actions that maximizes Q^{πᵢ}(s,a).

  • Stop if there is no change in the policy, that is, if πᵢ₊₁ = πᵢ; otherwise, increment i and repeat.

1: procedure Policy_iteration(S,A,P,R)
2:      Inputs
3:          S is the set of all states
4:          A is the set of all actions
5:          P is the state transition function specifying P(s′ ∣ s, a)
6:          R is a reward function R(s,a)      
7:      Output
8:          optimal policy π
9:      Local
10:          action array π[S]
11:          Boolean variable noChange
12:          real array V[S]      
13:      set π arbitrarily
14:      repeat
15:          noChange:=true
16:          Solve V[s] = R(s, π[s]) + γ ∑_{s′∈S} P(s′ ∣ s, π[s]) V[s′]
17:          for each sS do
18:               QBest:=V[s]
19:               for each aA do
20:                   Qsa := R(s,a) + γ ∑_{s′∈S} P(s′ ∣ s, a) V[s′]
21:                   if Qsa>QBest then
22:                        π[s]:=a
23:                        QBest:=Qsa
24:                        noChange:=false                                          
25:      until noChange
26:      return π
Figure 12.19: Policy iteration for MDPs

The algorithm is shown in Figure 12.19. Note that it only keeps the latest policy and notices if it has changed. This algorithm always halts, usually in a small number of iterations. Unfortunately, solving the set of linear equations is often time consuming.
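Below is a sketch of Figure 12.19 for the two-state MDP of Example 12.29 (our code, not the text's), with the policy evaluation step solving the |S| linear equations using NumPy; the small tolerance on the improvement test is our addition to guard against floating-point noise.

import numpy as np

STATES = ["healthy", "sick"]
ACTIONS = ["relax", "party"]
GAMMA = 0.8
P_HEALTHY = {("healthy", "relax"): 0.95, ("healthy", "party"): 0.7,
             ("sick", "relax"): 0.5,     ("sick", "party"): 0.1}
R = {("healthy", "relax"): 7, ("healthy", "party"): 10,
     ("sick", "relax"): 0,    ("sick", "party"): 2}

def P(s2, s, a):
    p = P_HEALTHY[(s, a)]
    return p if s2 == "healthy" else 1 - p

def evaluate(pi):
    """Solve V[s] = R(s, pi[s]) + gamma * sum_{s'} P(s'|s,pi[s]) V[s'] exactly."""
    A = np.eye(len(STATES)) - GAMMA * np.array(
        [[P(s2, s, pi[s]) for s2 in STATES] for s in STATES])
    b = np.array([R[(s, pi[s])] for s in STATES])
    return dict(zip(STATES, np.linalg.solve(A, b)))

pi = {s: "relax" for s in STATES}         # arbitrary initial policy
while True:
    V = evaluate(pi)                      # policy evaluation
    no_change = True
    for s in STATES:                      # policy improvement
        q_best = V[s]
        for a in ACTIONS:
            q_sa = R[(s, a)] + GAMMA * sum(P(s2, s, a) * V[s2] for s2 in STATES)
            if q_sa > q_best + 1e-9:      # only switch on a strict improvement
                pi[s], q_best, no_change = a, q_sa, False
    if no_change:
        break
print(pi, V)   # party when healthy, relax when sick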

A variant of policy iteration, called modified policy iteration, is obtained by noticing that the agent is not required to evaluate the policy to improve it; it can just carry out a number of backup steps using Equation 12.2 and then do an improvement.

Policy iteration is useful for systems that are too big to be represented explicitly as MDPs. One case is when there is a large action space, and the agent does not want to enumerate all actions at each time. The improvement step still works as long as some improving action is found; the search for an improving action can be stochastic, for example, testing a few promising actions rather than all of them.

Suppose a controller has some parameters that can be varied. An estimate of the derivative of the cumulative discounted reward of a parameter a in some context s, which corresponds to the derivative of Q(a,s), can be used to improve the parameter. Such an iteratively improving controller can get into a local maximum that is not a global maximum. Policy iteration for state-based MDPs does not result in non-optimal local maxima, because it is possible to improve an action for a state without affecting other states, whereas updating parameters can affect many states at once.

12.5.4 Dynamic Decision Networks

A Markov decision process is a state-based representation. Just as in classical planning, where reasoning in terms of features can allow for more straightforward representations and more efficient algorithms, planning under uncertainty can also take advantage of reasoning in terms of features. This forms the basis for decision-theoretic planning.

A dynamic decision network (DDN) can be seen in a number of different ways: as a factored representation of an MDP, where the states are described in terms of features; as an extension of a dynamic belief network that includes actions and rewards; or as an extension of a decision network with repeated structure for indefinite or infinite horizons.

A dynamic decision network consists of

  • a set of state features

  • a set of possible actions

  • a two-stage decision network with chance nodes F0 and F1 for each feature F (for the features at time 0 and time 1, respectively) and decision node A0, such that

    • the domain of A0 is the set of all actions

    • the parents of A0 are the set of time 0 features (these arcs are often not shown explicitly)

    • the parents of time 0 features do not include A0 or time 1 features, but can include other time 0 features as long as the resulting network is acyclic

    • the parents of time 1 features can contain A0 and other time 0 or time 1 features as long as the graph is acyclic

    • there are probability distributions P(F₀ ∣ parents(F₀)) and P(F₁ ∣ parents(F₁)) for each feature F

    • the reward function depends on any subset of the action and the features at times 0 or 1.

As in a dynamic belief network, a dynamic decision network can be unfolded into a decision network by replicating the features and the action for each subsequent time. For a time horizon of n, there is a variable Fᵢ for each feature F and for each time i with 0 ≤ i ≤ n, and a variable Aᵢ for each time i with 0 ≤ i < n. The horizon, n, can be unbounded, which allows us to model processes that do not halt.

Thus, if there are k features for a time horizon of n, there are k·(n+1) chance nodes (each representing a random variable) and n decision nodes in the unfolded network.

The parents of Ai are random variables Fi (so that the agent can observe the state). Each Fi+1 depends on the action Ai and the features at time i and i+1 in the same way, with the same conditional probabilities, as F1 depends on the action A0 and the features at time 0 and 1. The F0 variables are modeled directly in the two-stage decision network.
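As a small illustration (a sketch with our own helper name), generating the node names of the unfolded network shows the counts described above; the feature names are those used in Example 12.36 below.

def unfold(features, horizon):
    """Node names of a DDN unfolded to the given horizon: one chance node F_i per
    feature F for 0 <= i <= horizon, and one decision node A_i for 0 <= i < horizon."""
    chance = [f"{f}_{i}" for i in range(horizon + 1) for f in features]
    decisions = [f"A_{i}" for i in range(horizon)]
    return chance, decisions

chance, decisions = unfold(["RLoc", "RHC", "SWC", "MW", "RHM"], 3)
print(len(chance), len(decisions))   # 20 chance nodes (k*(n+1) = 5*4) and 3 decision nodes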

Example 12.36.

Example 6.1 models a robot that can deliver coffee and mail in a simple environment with four locations. Consider representing a stochastic version of Example 6.1 as a dynamic decision network. We use the same features as in that example.

Figure 12.20: Two-stage dynamic decision network

Feature RLoc models the robot’s location. The parents of variable RLoc1 are RLoc0 and A0.

Feature RHC is true when the robot has coffee. The parents of RHC1 are RHC0, A0, and RLoc0; whether the robot has coffee depends on whether it had coffee before, what action it performed, and its location. The probabilities can encode the possibilities that the robot does not succeed in picking up or delivering the coffee, that it drops the coffee, or that someone gives it coffee in some other state (which we may not want to say is impossible).

Variable SWC is true when Sam wants coffee. The parents of SWC1 include SWC0, RHC0, A0, and RLoc0. You would not expect RHC1 and SWC1 to be independent because they both depend on whether or not the coffee was successfully delivered. This could be modeled by having one be a parent of the other.

The two-stage belief network representing how the state variables at time 1 depend on the action and the other state variables is shown in Figure 12.20. This figure also shows the reward as a function of the action, whether Sam stopped wanting coffee, and whether there is mail waiting.

Figure 12.21 shows the unfolded decision network for a horizon of 3.

Figure 12.21: Dynamic decision network unfolded for a horizon of 3
Example 12.37.

An alternate way to model the dependence between RHC1 and SWC1 is to introduce a new variable, CSD1, which represents whether coffee was successfully delivered at time 1. This variable is a parent of both RHC1 and SWC1. Whether Sam wants coffee is a function of whether Sam wanted coffee before and whether coffee was successfully delivered. Whether the robot has coffee depends on the action and the location, to model the robot picking up coffee. Similarly, the dependence between MW1 and RHM1 can be modeled by introducing a variable MPU1, which represents whether the mail was successfully picked up. The resulting DDN unfolded to a horizon of 2, but omitting the reward, is shown in Figure 12.22.

Figure 12.22: Dynamic decision network with intermediate variables for a horizon of 2, omitting the reward nodes

If the reward comes only at the end, variable elimination for decision networks, shown in Figure 12.14, can be applied directly. Variable elimination for decision networks corresponds to value iteration. Note that in fully observable decision networks, variable elimination does not require the no-forgetting condition. Once the agent knows the state, all previous decisions are irrelevant. If rewards are accrued at each time step, the algorithm must be augmented to allow for the addition (and discounting) of rewards. See Exercise 12.19.

12.5.5 Partially Observable Decision Processes

A partially observable Markov decision process (POMDP) is a combination of an MDP and a hidden Markov model (HMM). Whereas the state in an MDP is assumed to be fully observable, the environment state in a POMDP is only partially observable: the agent receives partial and/or noisy observations of the environment before it has to act.

Figure 12.23: A POMDP with explicit belief states

A POMDP can be expressed as the infinite extension of the decision network of Figure 12.23, which explicitly shows the belief state of the agent. The network extends indefinitely to the right.

A POMDP consists of the following variables and factors defining the external behavior of the agent:

  • S, a set of states of the world

  • A, a set of actions

  • O, a set of possible observations

  • P(S0), the probability distribution of the starting state

  • P(S′ ∣ S, A), the dynamics of the environment, specifying the probability of getting to state S′ by doing action A from state S

  • R(S,A), the expected reward of starting in state S, doing action A

  • P(O ∣ S, A, R), the probability of observing O given that the state is S, the previous action is A, and the reward is R.

P(S′ ∣ S, A) and R(S,A) are the same as in an MDP. The arc from Sᵢ to Oᵢ means that what the agent observes can depend on the state. The arc from Aᵢ₋₁ to Oᵢ allows the agent to have actions that don’t affect the environment, but affect its observations, such as paying attention to some part of the environment. The arc from Rᵢ₋₁ to Oᵢ indicates that the agent’s observation can depend on the reward received; the agent is not assumed to observe the reward directly, as sometimes the rewards are only seen in retrospect. Observing the reward can often provide hints about the state that the agent might not actually be able to observe.

Internally, an agent has

  • B, the set of possible belief states

  • T(B,A,O), a belief state transition function, which specifies the agent’s new belief state given the previous belief state B, the action the agent did, A, and the observation, O; the belief state at stage i is Bᵢ = T(Bᵢ₋₁, Aᵢ₋₁, Oᵢ).

  • π(Aᵢ ∣ Bᵢ, Oᵢ), a command function or policy, which specifies a conditional plan defining what the agent will do as a function of the belief state and the observation.

A policy might be stochastic to allow for exploration or to confound other agents. The belief-state transition function is typically deterministic, even when the belief state represents a probability distribution that is updated based on the action and the new observation.

Planning in a POMDP involves creating both a belief state transition function and a command function. The Bi variables are special in that the world does not specify the domain or structure of these variables; an agent or its designer gets to choose the structure of a belief state, and how the agent acts based on its belief state, previous action, and latest observations. The belief state encodes all that the agent has remembered about its history.

There are a number of ways to represent a belief state and to find the optimal policy:

  • The decision network of Figure 12.23 can be solved using variable elimination for decision networks, shown in Figure 12.14, extended to include discounted rewards. Adding the no-forgetting arcs is equivalent to the belief state being the sequence of observations and actions; Bᵢ is Aᵢ₋₁ and Oᵢ appended to the sequence Bᵢ₋₁. The problem with this approach is that the history is unbounded, and the size of a policy is exponential in the length of the history. This is only practical when the history is short or is deliberately cut short.

  • The belief state can be a probability distribution over the states of the environment; maintaining the belief state is then the problem of filtering in the associated hidden Markov model (a minimal belief-update sketch is given after this list). The problem with this approach is that, with n world states, the set of belief states is an (n−1)-dimensional real space. The agent then needs to find the optimal function from this multidimensional real space into the actions. The value of a sequence of actions only depends on the world states; as the belief state is a probability distribution over world states, the expected value of a belief state is a linear function of the values of the world states. Not all points in this (n−1)-dimensional space are possible belief states; only those resulting from sequences of observations are possible. Policies can be represented as a decision tree with conditions being (functions of) the observations. The observations dictate which path in the tree the agent will follow. The value function for an agent choosing the best policy for any finite look-ahead is piecewise linear and convex. Although this is simpler than a general function on an (n−1)-dimensional space, the number of conditional policies grows like |O|ᵗ, where O is the set of possible observations and t is the stage.

  • In general, the belief state can be anything. A belief state can include probability distributions over some of the variables, and remembered observations and actions. The agent can search over the space of controllers for the best controller. Thus, the agent searches over what to remember (the belief state) and what to do based on its belief state. Note that the first two proposals are instances of this approach: the agent’s belief state is all of its history, or the agent’s belief state is a probability distribution over world states, both of which are intractable. Because the general case is unconstrained over what to remember, the search space is enormous.
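For the second representation above, where the belief state is a probability distribution over world states, the belief state transition function T(B, A, O) is the standard filtering update. The sketch below is our own; it simplifies the observation model to P(O ∣ S′, A), ignoring any dependence on the reward, and assumes the observation has nonzero probability under the predicted distribution.

def belief_update(b, a, o, states, P, P_obs):
    """T(b, a, o): the new distribution over states after doing action a in belief
    state b and then observing o.  b maps states to probabilities; P(s2, s, a) is
    the transition model P(s2 | s, a); P_obs(o, s2, a) is P(o | s2, a)."""
    new_b = {s2: P_obs(o, s2, a) * sum(P(s2, s, a) * b[s] for s in states)
             for s2 in states}
    norm = sum(new_b.values())            # probability of observing o
    return {s2: p / norm for s2, p in new_b.items()}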