In a fully observable Markov decision process, the agent gets to observe its current state before deciding which action to carry out. For now, assume that the Markov decision process is fully observable. A policy specifies what the agent should do as a function of the state it is in. A stationary policy is a function $\pi: S \rightarrow A$. In a non-stationary policy the action is a function of the state and the time; we assume policies are stationary.
Given a reward criterion, a policy has an expected value for every state. Let $V^\pi(s)$ be the expected value of following policy $\pi$ in state $s$. This specifies how much value the agent expects to receive from following the policy in that state. Policy $\pi^*$ is an optimal policy if there is no policy $\pi$ and no state $s$ such that $V^\pi(s) > V^{\pi^*}(s)$. That is, it is a policy whose expected value at every state is greater than or equal to that of any other policy.
For Example 9.27, with two states and two actions, there are $2^2 = 4$ stationary policies:
Always relax.
Always party.
Relax if healthy and party if sick.
Party if healthy and relax if sick.
The total reward for each of these policies is infinite, because the agent never stops acting and cannot keep receiving a reward of 0 forever. Determining the average reward is left as an exercise (Exercise 14). How to compute the discounted reward is discussed in the next section.
In the MDP of Example 9.28 there are 100 states and 4 actions, and therefore there are $4^{100} \approx 10^{60}$ stationary policies. Each policy specifies an action for each state.
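To make the counting concrete, here is a minimal sketch (with made-up state and action names; this is not code from the book) that enumerates the $|A|^{|S|}$ stationary policies of a two-state, two-action domain as functions from states to actions.

```python
from itertools import product

# Hypothetical encoding of a two-state, two-action domain
# (the names are illustrative, not taken from the book's code).
states = ["healthy", "sick"]
actions = ["relax", "party"]

# A stationary policy is a function from states to actions;
# here each policy is represented as a dict {state: action}.
policies = [dict(zip(states, choice))
            for choice in product(actions, repeat=len(states))]

print(len(policies))   # |A|^|S| = 2^2 = 4
for pi in policies:
    print(pi)
```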
For infinite horizon problems, a stationary MDP always has an optimal stationary policy. However, for finite-stage problems, a non-stationary policy might be better than all stationary policies. For example, if the agent had to stop at time $n$, for the last decision in some state, the agent would act to get the largest immediate reward without considering future actions, but for earlier decisions at the same state it may decide to accept a lower immediate reward to obtain a larger reward later.
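The following sketch illustrates this with backward induction over a fixed horizon, a standard dynamic-programming computation; the transition probabilities and rewards are placeholder numbers, not taken from the book's examples. The computed policy is indexed by both the stage and the state, and at the final stage it simply maximizes the immediate reward.

```python
import numpy as np

# Hypothetical finite-horizon MDP: P[a, s, s'] is a transition probability,
# R[a, s] an expected immediate reward (placeholder numbers).
P = np.array([[[0.95, 0.05], [0.5, 0.5]],   # action 0 ("relax")
              [[0.7, 0.3], [0.1, 0.9]]])    # action 1 ("party")
R = np.array([[7.0, 0.0],                   # reward of action 0 in each state
              [10.0, 2.0]])                 # reward of action 1 in each state
horizon = 5

V = np.zeros(2)                              # value with 0 stages to go
policy = np.zeros((horizon, 2), dtype=int)   # action as a function of (time, state)
for t in reversed(range(horizon)):
    # Undiscounted total reward over the remaining stages:
    # Q[a, s] = R[a, s] + sum_s' P[a, s, s'] * V[s']
    Q = R + P @ V
    policy[t] = Q.argmax(axis=0)   # best action for each state at stage t
    V = Q.max(axis=0)

print(policy)   # the last stage maximizes immediate reward only
```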
Consider how to compute the expected value, using the discounted reward, of a policy $\pi$ given a discount factor $\gamma$. The value is defined in terms of two interrelated functions:
$V^\pi(s)$ is the expected value of following policy $\pi$ in state $s$.
$Q^\pi(s,a)$ is the expected value, starting in state $s$, of doing action $a$, then following policy $\pi$. This is called the Q-value of policy $\pi$.
$Q^\pi$ and $V^\pi$ are defined recursively in terms of each other. If the agent is in state $s$, performs action $a$, and arrives in state $s'$, it gets the immediate reward of $R(s,a,s')$ plus the discounted future reward, $\gamma V^\pi(s')$. When the agent is planning it does not know the actual resulting state, so it uses the expected value, averaged over the possible resulting states:
$$Q^\pi(s,a) = \sum_{s'} P(s' \mid s,a)\,\bigl(R(s,a,s') + \gamma V^\pi(s')\bigr) = R(s,a) + \gamma \sum_{s'} P(s' \mid s,a)\, V^\pi(s') \tag{9.2}$$

where $R(s,a) = \sum_{s'} P(s' \mid s,a)\, R(s,a,s')$.
$V^\pi(s)$ is obtained by doing the action specified by $\pi$ and then following $\pi$:

$$V^\pi(s) = Q^\pi(s, \pi(s)).$$
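As one concrete reading of these two equations, the sketch below evaluates a fixed policy by repeatedly applying Equation (9.2) together with $V^\pi(s) = Q^\pi(s,\pi(s))$ until the values stop changing. The transition and reward arrays, the discount factor, and the policy are illustrative assumptions, not the book's code.

```python
import numpy as np

# Hypothetical two-state, two-action MDP (placeholder numbers).
P = np.array([[[0.95, 0.05], [0.5, 0.5]],   # P[a, s, s'] for action 0
              [[0.7, 0.3], [0.1, 0.9]]])    # and for action 1
R = np.array([[7.0, 0.0],                   # R[a, s]: expected reward of a in s
              [10.0, 2.0]])
gamma = 0.9
pi = np.array([0, 0])   # the policy "always do action 0" (e.g. always relax)

V = np.zeros(2)         # V^pi, initialized arbitrarily
for _ in range(1000):
    # Equation (9.2): Q^pi(s,a) = R(s,a) + gamma * sum_s' P(s'|s,a) V^pi(s')
    Q = R + gamma * (P @ V)
    V_new = Q[pi, np.arange(2)]   # V^pi(s) = Q^pi(s, pi(s))
    converged = np.max(np.abs(V_new - V)) < 1e-9
    V = V_new
    if converged:
        break

print(V)   # expected discounted value of the fixed policy in each state
```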
Let $Q^*(s,a)$, where $s$ is a state and $a$ is an action, be the expected value of doing $a$ in state $s$ and then following the optimal policy. Let $V^*(s)$, where $s$ is a state, be the expected value of following an optimal policy from state $s$.
$Q^*$ can be defined analogously to $Q^\pi$:

$$Q^*(s,a) = \sum_{s'} P(s' \mid s,a)\,\bigl(R(s,a,s') + \gamma V^*(s')\bigr).$$
$V^*(s)$ is obtained by performing the action that gives the best value in each state:

$$V^*(s) = \max_a Q^*(s,a).$$
An optimal policy $\pi^*$ is one of the policies that gives the best value for each state:

$$\pi^*(s) = \operatorname{argmax}_a Q^*(s,a)$$
where $\operatorname{argmax}_a Q^*(s,a)$ is a function of state $s$, and its value is an action $a$ that results in the maximum value of $Q^*(s,a)$.
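A minimal sketch of how $Q^*$, $V^*$, and $\pi^*$ fit together, using the same placeholder MDP as the earlier sketches: iterating the $Q^*$ and $V^*$ equations to a fixed point (one standard way to compute them) and then reading off an optimal policy with argmax.

```python
import numpy as np

# Same hypothetical placeholder MDP as in the earlier sketches.
P = np.array([[[0.95, 0.05], [0.5, 0.5]],
              [[0.7, 0.3], [0.1, 0.9]]])
R = np.array([[7.0, 0.0],
              [10.0, 2.0]])
gamma = 0.9

V = np.zeros(2)
for _ in range(1000):
    # Q*(s,a) = R(s,a) + gamma * sum_s' P(s'|s,a) V*(s')
    Q = R + gamma * (P @ V)
    V_new = Q.max(axis=0)          # V*(s) = max_a Q*(s,a)
    converged = np.max(np.abs(V_new - V)) < 1e-9
    V = V_new
    if converged:
        break

pi_star = Q.argmax(axis=0)         # pi*(s) = argmax_a Q*(s,a)
print(V, pi_star)
```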