
2.8 Exercises

Exercise 2.1:
Section 2.3 argued that it was impossible to build a representation of a world that is independent of what the agent will do with it. This exercise lets you evaluate this argument.

Choose a particular world, for example, what is on some part of your desk at the current time.

  1. Get someone to list all of the things that exist in this world (or try it yourself as a thought experiment).
  2. Try to think of twenty things that they missed. Make these as different from each other as possible. For example, the ball at the tip of the rightmost ball-point pen on the desk, or the spring in the stapler, or the third word on page 2.1 of a particular book on the desk.
  3. Try to find a thing that cannot be described using natural language.
  4. Choose a particular task, such as making the desk tidy, and try to write down all of the things in the world at a level of description that is relevant to this task.

Based on this exercise, discuss the following statements:

  1. What exists in a world is a property of the observer.
  2. We need ways to refer to individuals other than expecting each individual to have a separate name.
  3. What individuals exist is a property of the task as well as of the world.
  4. To describe the individuals in a domain, you need what is essentially a dictionary of a huge number of words and ways to combine them to describe individuals, and it should be possible to do this independently of any particular domain.
Exercise 2.2:
Explain why the middle layer in Example 2.5 must have both the previous target position and the current target position as inputs. Suppose it had only one of these as input: which one would it have to be, and what problem would that cause?
Exercise 2.3:
The definition of the target position in Example 2.6 means that, when the plan ends, the robot will just keep the last target position as its target position and keep circling forever. Change the definition so that the robot goes back to its home and circles there.
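A minimal sketch of one way to express the change, assuming the top layer keeps its plan as a list of location names together with a map from names to coordinates; the names plan, coordinates, and home are illustrative, not the book's code:

    HOME = (0, 0)   # hypothetical coordinates of the robot's home

    def target_position(plan, coordinates, home=HOME):
        """Target for the middle layer: the next location in the plan,
        or home once the plan has been exhausted."""
        if plan:                          # still somewhere left to visit
            return coordinates[plan[0]]
        return home                       # plan finished: circle at home

Once the last location is visited and removed from the plan, the target becomes home, so the robot heads back and circles there rather than around its final goal.
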
Exercise 2.4:
The obstacle avoidance implemented in Example 2.5 can easily get stuck.
  1. Show an obstacle and a target for which the robot, using the controller of Example 2.5, would not be able to get around the obstacle to reach the target (it would crash or loop forever).
  2. Even without obstacles, the robot may never reach its destination. For example, if it is next to its target position, it may keep circling forever without reaching its target. Design a controller that can detect this situation and find its way to the target.
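For part 2, one way to detect that the robot is circling is to track the closest it has come to its target and notice when that stops improving. The sketch below is an assumption about how such a monitor could look; the class and parameter names are invented:

    import math

    class ProgressMonitor:
        """Reports being stuck after `patience` steps with no progress
        toward the target."""
        def __init__(self, patience=50):
            self.patience = patience
            self.best = float('inf')       # closest distance seen so far
            self.stalled = 0               # steps without improvement

        def update(self, position, target):
            d = math.dist(position, target)
            if d < self.best:
                self.best, self.stalled = d, 0
            else:
                self.stalled += 1
            return self.stalled > self.patience   # True means "stuck"

When the monitor reports being stuck, the controller can switch to a recovery behavior, for example treating the current position as close enough or steering directly at the target; choosing and justifying that recovery is the point of the exercise.
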
Exercise 2.5:
Consider the "robot trap" in Figure 2.11.
Figure 2.11: A robot trap

  1. Explain why it is so tricky for a robot to get to location g. You must explain what the current robot does as well as why it is difficult to make a more sophisticated robot (e.g., one that follows a wall using the "right-hand rule": the robot turns left when it hits an obstacle and keeps following the wall, with the wall always on its right) work.
  2. An intuition of how to escape such a trap is that, when the robot hits a wall, it follows the wall until the number of right turns equals the number of left turns. Show how this can be implemented, explaining the belief state, the belief-state transition function, and the command function.
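A sketch of how the turn-counting idea could be organized; the percept and command encodings are invented, and only the structure (belief state, belief-state transition function, command function) is meant to mirror the exercise:

    def initial_belief():
        # following: currently wall-following?  balance: right turns minus left turns
        return {'following': False, 'balance': 0, 'turns': 0}

    def transition(belief, percept):
        """Belief-state transition function."""
        b = dict(belief)
        if percept.get('hit_wall') and not b['following']:
            b.update(following=True, balance=0, turns=0)   # start wall-following
        elif b['following']:
            if percept.get('turned') == 'right':
                b['balance'] += 1
                b['turns'] += 1
            elif percept.get('turned') == 'left':
                b['balance'] -= 1
                b['turns'] += 1
            if b['turns'] > 0 and b['balance'] == 0:
                b['following'] = False          # turns have balanced: leave the wall
        return b

    def command(belief, percept, target):
        """Command function: wall-follow until the turn counts balance,
        then head for the target again."""
        return 'follow_wall' if belief['following'] else ('go_to', target)

Here the belief state records whether the robot is wall-following and the running difference between right and left turns; the transition function maintains it from the turn percepts, and the command function does nothing more than consult it.
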
Exercise 2.6:
When the user selects and moves the current target location, the robot described in this chapter travels to the original position of that target and does not try to go to the new position. Change the controller so that the robot will try to head toward the current location of the target at each step.
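The heart of the change is that the target's coordinates are looked up on every step rather than once when the target is chosen. A self-contained sketch, with an invented movement model and invented names:

    import math

    def step_toward(position, target, speed=1.0):
        """Move one step of length `speed` from position toward target."""
        dx, dy = target[0] - position[0], target[1] - position[1]
        dist = math.hypot(dx, dy)
        if dist <= speed:
            return target
        return (position[0] + speed * dx / dist,
                position[1] + speed * dy / dist)

    def go_to(position, locations, goal_name, eps=0.5):
        """Head toward the goal's *current* coordinates, re-read every step."""
        while math.dist(position, locations[goal_name]) > eps:
            position = step_toward(position, locations[goal_name])
        return position

Because locations[goal_name] is evaluated inside the loop, an update to that entry (for example, from the user dragging the target) takes effect on the next step.
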
Exercise 2.7:
The current controller visits the locations in the todo list sequentially.
  1. Change the controller so that it is opportunistic; when it selects the next location to visit, it selects the location that is closest to its current position. It should still visit all of the locations (one possible selection rule is sketched after this exercise).
  2. Give one example of an environment in which the new controller visits all of the locations in fewer time steps than the original controller.
  3. Give one example of an environment in which the original controller visits all of the locations in fewer time steps than the modified controller.
  4. Change the controller so that, at every step, the agent heads toward whichever target location is closest to its current position.
  5. Can the controller from the previous part get stuck in a loop and never reach a target in an example where the original controller will work? Either give an example in which it gets stuck in a loop and explain why it cannot find a solution, or explain why it does not get into a loop.
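For part 1, the selection rule can be as simple as repeatedly taking the nearest unvisited location. The sketch below uses invented names and assumes straight-line distance:

    import math

    def opportunistic_order(start, todo, coordinates):
        """Order in which a closest-first controller visits the locations;
        every location is still visited exactly once."""
        remaining, position, order = list(todo), start, []
        while remaining:
            nearest = min(remaining,
                          key=lambda name: math.dist(position, coordinates[name]))
            remaining.remove(nearest)
            order.append(nearest)
            position = coordinates[nearest]
        return order

Part 4 is different: the closest target is re-evaluated at every step of the motion, not only when a location is reached, and that is what makes the looping behavior asked about in part 5 worth examining.
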
Exercise 2.8:
Change the controller so that the robot senses the environment to determine the coordinates of a location. Assume that the body can provide the coordinates of a named location.
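A sketch of the shape of this change, with a stand-in body object; the names Body and sense_coordinates are assumptions for illustration, not the interface the book's code provides:

    class Body:
        """Stand-in body that can report the coordinates of a named location."""
        def __init__(self, world_locations):
            self._world = world_locations      # not visible to the controller

        def sense_coordinates(self, name):
            return self._world[name]

    def target_position(body, plan):
        """Top layer's target: sensed through the body rather than read
        from a map stored in the controller."""
        return body.sense_coordinates(plan[0]) if plan else None

The controller no longer stores a map of its own; everything it knows about coordinates arrives through the sensing interface.
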
Exercise 2.9:
Suppose you have a new job and must build a controller for an intelligent robot. You tell your bosses that you just have to implement a command function and a state transition function. They are very skeptical. Why these functions? Why only these? Explain why a controller requires a command function and a state transition function, but not other functions. Use proper English. Be concise.
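As a hint at why those two functions suffice, here is a minimal sketch of a controller driven only by a state transition function and a command function; the percept and command encodings are invented:

    def run(belief, transition, command, percepts):
        """At each time step, update the belief state from the percept
        (state transition function), then issue a command from the belief
        state and the percept (command function)."""
        commands = []
        for percept in percepts:
            belief = transition(belief, percept)
            commands.append(command(belief, percept))
        return commands

    # Example: command "stop" forever once an obstacle has ever been seen.
    def remember(belief, percept):
        return belief or percept == 'obstacle'

    def act(belief, percept):
        return 'stop' if belief else 'go'

    print(run(False, remember, act, ['clear', 'obstacle', 'clear']))
    # prints ['go', 'stop', 'stop']

Whatever the controller needs to remember about the past is carried in the belief state, which is exactly what the state transition function maintains; given that, deciding what to do now is the job of the command function, and nothing else is needed.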