9.5 Decision Processes


9.5.1 Policies

In a fully observable Markov decision process, the agent gets to observe its current state before deciding which action to carry out. For now assume that the Markov decision process is fully observable. A policy specifies what the agent should do as a function of the state it is in. A stationary policy is a function π: S → A. In a non-stationary policy, the action is a function of the state and the time; here we assume policies are stationary.

Given a reward criterion, a policy has an expected value for every state. Let Vπ(s) be the expected value of following π in state s. This specifies how much value the agent expects to receive from following the policy in that state. Policy π is an optimal policy if there is no policy π' and no state s such that Vπ'(s) > Vπ(s). That is, an optimal policy has an expected value at every state that is at least as large as that of any other policy.

Example 9.29.

For Example 9.27, with two states and two actions, there are 2^2 = 4 policies:

  • Always relax.

  • Always party.

  • Relax if healthy and party if sick.

  • Party if healthy and relax if sick.

The total reward for each of these policies is infinite, because the agent never stops and cannot keep receiving a reward of 0 forever. Determining the average reward is left as an exercise (Exercise 14). How to compute the discounted reward is discussed in the next section.
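The following sketch enumerates the stationary policies of a two-state, two-action MDP such as the one in Example 9.27. The state and action names are assumptions standing in for that example's healthy/sick states and relax/party actions; the transition and reward model is not needed just to list the policies.

```python
# A minimal sketch, assuming two states and two actions as in Example 9.27.
from itertools import product

states = ["healthy", "sick"]     # assumed state names
actions = ["relax", "party"]     # assumed action names

# A stationary policy is a function from states to actions, represented
# here as a dictionary {state: action}.
policies = [dict(zip(states, choice))
            for choice in product(actions, repeat=len(states))]

for pi in policies:
    print(pi)
# With 2 actions and 2 states this prints 2**2 = 4 policies.
```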

Example 9.30.

In the MDP of Example 9.28 there are 100 states and 4 actions, so there are 4^100 ≈ 10^60 stationary policies. Each policy specifies an action for each state.
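As a quick check of this count, with 4 actions and 100 states:

```python
# Number of stationary policies for an MDP with 4 actions and 100 states.
import math

n_policies = 4 ** 100
print(n_policies)                 # the exact count
print(math.log10(n_policies))     # about 60.2, i.e. roughly 10**60
```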

For infinite-horizon problems, a stationary MDP always has an optimal stationary policy. However, for finite-stage problems, a non-stationary policy might be better than all stationary policies. For example, if the agent had to stop at time n, then for the last decision in some state it would act to get the largest immediate reward, without considering future actions, but for earlier decisions it may accept a lower immediate reward in order to obtain a larger reward later.
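The sketch below shows finite-horizon backward induction, which makes this time dependence explicit: at the last stage the future value is zero, so the agent simply maximizes immediate expected reward, while earlier stages trade immediate reward against the value-to-go. The arguments P and R are hypothetical placeholders (P[s][a] maps each successor state to its probability, and R(s, a, s2) is the reward), not part of the book's examples.

```python
# A minimal sketch of finite-horizon backward induction, which can yield a
# non-stationary policy: the chosen action may depend on the time t as well
# as the state s. states, actions, P, and R are hypothetical inputs.
def backward_induction(states, actions, P, R, horizon):
    V = {s: 0.0 for s in states}      # value after the final stage
    policy = {}                       # policy[(t, s)] = action at time t in state s
    for t in reversed(range(horizon)):
        newV = {}
        for s in states:
            best_a, best_q = None, float("-inf")
            for a in actions:
                # expected immediate reward plus value-to-go from stage t+1 onward
                q = sum(p * (R(s, a, s2) + V[s2]) for s2, p in P[s][a].items())
                if q > best_q:
                    best_a, best_q = a, q
            policy[(t, s)] = best_a
            newV[s] = best_q
        V = newV
    return policy, V
```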

Value of a Policy

Consider how to compute the expected value of a policy, using the discounted reward with a discount factor of γ. The value is defined in terms of two interrelated functions:

  • Vπ(s) is the expected value of following policy π in state s.

  • Qπ(s,a) is the expected value of starting in state s, doing action a, and then following policy π. This is called the Q-value of policy π.

Qπ and Vπ are defined recursively in terms of each other. If the agent is in state s, performs action a, and arrives in state s', it gets the immediate reward R(s,a,s') plus the discounted future reward γVπ(s'). When the agent is planning, it does not know the actual resulting state, so it uses the expected value, averaged over the possible resulting states:

Qπ(s,a) = ∑_{s'} P(s' | s,a) (R(s,a,s') + γ Vπ(s'))
        = R(s,a) + γ ∑_{s'} P(s' | s,a) Vπ(s')      (9.2)

where R(s,a) = ∑_{s'} P(s' | s,a) R(s,a,s') is the expected immediate reward of doing a in state s.

Vπ(s) is obtained by doing the action specified by π and then following π:

Vπ(s) = Qπ(s, π(s)).
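These two equations can be iterated to a fixed point to evaluate a given policy. The sketch below assumes the same hypothetical representation as above (P[s][a] maps successor states to probabilities and R(s, a, s2) gives the reward); it is only an illustration, not the book's own code.

```python
# A minimal sketch of evaluating a fixed policy pi by iterating
# Qπ(s,a) and Vπ(s) = Qπ(s, pi[s]) until the values stop changing.
def q_value(s, a, V, P, R, gamma):
    """Expected immediate reward plus discounted value of the successor state."""
    return sum(p * (R(s, a, s2) + gamma * V[s2]) for s2, p in P[s][a].items())

def evaluate_policy(pi, states, P, R, gamma, tolerance=1e-9):
    """Approximate Vπ for a stationary policy pi, given as a dict {state: action}."""
    V = {s: 0.0 for s in states}
    while True:
        newV = {s: q_value(s, pi[s], V, P, R, gamma) for s in states}
        if max(abs(newV[s] - V[s]) for s in states) < tolerance:
            return newV
        V = newV
```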

Value of an Optimal Policy

Let Q*(s,a), where s is a state and a is an action, be the expected value of doing a in state s and then following the optimal policy. Let V*(s), where s is a state, be the expected value of following an optimal policy from state s.

Q* can be defined analogously to Qπ:

Q*(s,a) = ∑_{s'} P(s' | s,a) (R(s,a,s') + γ V*(s'))
        = R(s,a) + γ ∑_{s'} P(s' | s,a) V*(s').

V*(s) is obtained by performing the action that gives the best value in each state:

V*(s) = max_a Q*(s,a).

An optimal policy π* is one of the policies that gives the best value for each state:

π*(s) = argmax_a Q*(s,a)

where argmax_a Q*(s,a) is a function of state s whose value is an action a that results in the maximum value of Q*(s,a).
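Putting the equations for Q*, V*, and π* together gives a simple value-iteration sketch, again using the hypothetical P and R representation from the earlier sketches; it is one standard way to compute these quantities, not necessarily the book's own presentation.

```python
# A minimal sketch of computing V*, Q*, and an optimal policy by value
# iteration. states, actions, P, R, and gamma are hypothetical inputs in
# the same format as the earlier sketches.
def value_iteration(states, actions, P, R, gamma, tolerance=1e-9):
    V = {s: 0.0 for s in states}
    while True:
        # Q(s,a) = Σ_{s'} P(s' | s,a) (R(s,a,s') + γ V(s'))
        Q = {(s, a): sum(p * (R(s, a, s2) + gamma * V[s2])
                         for s2, p in P[s][a].items())
             for s in states for a in actions}
        newV = {s: max(Q[(s, a)] for a in actions) for s in states}   # V(s) = max_a Q(s,a)
        if max(abs(newV[s] - V[s]) for s in states) < tolerance:
            # π*(s) = argmax_a Q*(s,a): an action achieving the maximum Q-value
            pi_star = {s: max(actions, key=lambda a: Q[(s, a)]) for s in states}
            return newV, pi_star
        V = newV
```

Repeatedly taking the maximum over actions drives the values toward V*, and once they have converged the greedy policy extracted from the Q-values is an optimal policy π*.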