3 Searching for Solutions

3.1 Problem Solving as Search

In the simplest case of an agent deciding what it should do, the agent has a state-based model of the world, with a goal to achieve and no uncertainty. This is either a flat (non-hierarchical) representation or a single level of a hierarchy. The agent is able to determine how to achieve its goal by searching in its representation of the world state space for a way to get from its current state to a state that satisfies its goal. Given a complete model, it tries to find a sequence of actions that will achieve its goal before it has to act in the world.

This problem can be abstracted to the mathematical problem of finding a path from the start node to a goal node in a directed graph. Many other problems can also be mapped to this abstraction, so it is worthwhile to consider this level of abstraction. Most of this chapter explores various algorithms for finding such paths.
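This graph abstraction can be sketched directly. Below is a minimal breadth-first search for a path from a start node to a goal node in a directed graph; the graph, node names, and `find_path` function are illustrative, not taken from the text, and later sections develop more sophisticated search strategies than this one.

```python
from collections import deque

def find_path(graph, start, goal):
    """Breadth-first search for a path from start to goal in a
    directed graph given as {node: [neighbor, ...]}."""
    frontier = deque([[start]])   # paths waiting to be extended
    visited = {start}
    while frontier:
        path = frontier.popleft()
        node = path[-1]
        if node == goal:
            return path           # a path with the fewest arcs
        for neighbor in graph.get(node, []):
            if neighbor not in visited:
                visited.add(neighbor)
                frontier.append(path + [neighbor])
    return None                   # no path from start to goal

# A small hypothetical directed graph:
graph = {"a": ["b", "c"], "b": ["d"], "c": ["d"], "d": ["g"]}
print(find_path(graph, "a", "g"))  # ['a', 'b', 'd', 'g']
```

Breadth-first search finds a path with the fewest arcs, which is only one notion of "best"; the algorithms in the rest of the chapter generalize this to arc costs and heuristic guidance.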

Example 3.1.

Computer maps provide path-finding: showing how to drive (or ride, walk or take transit) from one location to another. Finding the best route from a current location to a destination is a search problem. The state includes the location, and possibly the driving direction and speed. A legal route will include the roads (going the correct way down one-way streets) and intersections the traveler will traverse.

The best route could mean

  • the shortest (least distance) route

  • the quickest route

  • the lowest-cost route that takes into account time, money (e.g., fuel and tolls), and the route’s attractiveness.

Finding the shortest route is usually the easiest to implement, as computing distances from a map is straightforward.

Estimating the time it will take to travel is difficult. The route planner might need to take into account regular traffic volumes as well as known local conditions, such as roadworks or accidents. If the route planner is advising many people, and advises them all to take the same route, that route may become more congested because of the advice, and so it may be better for users to deliberately avoid the advice. The system could avoid this by telling different users different routes. It would be good for the system to guarantee that the user will not do better by ignoring the advice, but that is beyond the scope of this chapter.

Finding the best route that takes into account other preferences people may have is complicated. It is difficult to acquire these preferences and people may not even be able to articulate these preferences and trade-offs. But given the preferences, the problem reduces to one of searching, albeit with a complex cost function.
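Once the preferences are elicited, a complex cost function can still be handled by ordinary search over edge costs. The following is a minimal sketch, assuming each road segment carries time, money, and unattractiveness figures and the user supplies trade-off weights; the road network, numbers, and function names are hypothetical, and the search used is uniform-cost (Dijkstra-style) search.

```python
import heapq

def best_route(roads, start, goal, w_time=1.0, w_money=1.0, w_ugly=1.0):
    """Uniform-cost search where each road segment's cost is a
    weighted sum of travel time, money, and unattractiveness.
    roads: {node: [(neighbor, time, money, ugliness), ...]}"""
    frontier = [(0.0, start, [start])]   # (cost so far, node, path)
    best = {}                            # cheapest known cost per node
    while frontier:
        cost, node, path = heapq.heappop(frontier)
        if node == goal:
            return cost, path
        if node in best and best[node] <= cost:
            continue
        best[node] = cost
        for nbr, t, m, u in roads.get(node, []):
            edge = w_time * t + w_money * m + w_ugly * u
            heapq.heappush(frontier, (cost + edge, nbr, path + [nbr]))
    return None

# Hypothetical network: a fast but ugly tolled highway vs. a scenic route.
roads = {
    "home":    [("highway", 10, 3, 5), ("park", 15, 0, 1)],
    "highway": [("work", 5, 2, 4)],
    "park":    [("work", 10, 0, 1)],
}
print(best_route(roads, "home", "work"))             # scenic route wins
print(best_route(roads, "home", "work", w_ugly=0.0)) # now the highway wins
```

Changing the weights changes which route is optimal, which mirrors the point above: the search machinery is unchanged; only the cost function encodes the preferences.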

Another challenge is that a driver might not actually take the suggested route, either by design, perhaps to visit a place off the route, or by accident, such as taking a wrong turn or encountering a closed road. This is explored in Example 3.22.

This notion of search is computation solely inside the agent. It is different from searching in the world, when an agent may have to act in the world, for example, a robot searching for keys, lifting up cushions, and so on. It is also different from searching the web, which involves searching for information by indexing huge amounts of data and trying to find the best response for each search query. Searching in this chapter means searching in an internal representation for a path to a goal.

Search underlies much of artificial intelligence. When an agent is given a problem, it is usually given only a description that lets it recognize a solution, not an algorithm to solve it. It has to search for a solution. The existence of NP-complete problems, with efficient means to recognize solutions but no efficient methods for finding them, indicates that searching is a necessary part of solving problems.

It is often believed that humans are able to use intuition to jump to solutions to difficult problems. However, humans cannot find optimal solutions to computationally difficult problems. Humans do not tend to solve general problems; instead they solve specific instances about which they may know much more than the underlying search space. They often do not find optimal solutions, but find satisficing, or good enough, solutions. Problems with little structure, or ones in which the structure cannot be related to the physical world, are very difficult for humans to solve. The existence of public key encryption codes, where the search space is clear and the test for a solution is given – which humans nevertheless have no hope of solving and computers cannot solve in a realistic time frame – demonstrates the difficulty of search.

The difficulty of search and the fact that humans are able to solve some search problems efficiently suggests that computer agents should exploit knowledge about special cases to guide them to a solution. This extra knowledge beyond the search space is called heuristic knowledge. This chapter considers one kind of heuristic knowledge in the form of an estimate of the cost from a node to a goal.
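For route finding, one such cost-to-goal estimate can be sketched as the straight-line distance to the destination: it can never exceed the actual road distance, so it is a safe underestimate. The location names and coordinates below are illustrative only.

```python
import math

# Hypothetical (x, y) map coordinates for a few locations.
coords = {"home": (0, 0), "mall": (3, 4), "work": (6, 8)}

def heuristic(node, goal):
    """Straight-line (Euclidean) distance from node to goal.
    The true road distance is at least this, so the estimate
    never overestimates the cost to the goal."""
    (x1, y1), (x2, y2) = coords[node], coords[goal]
    return math.hypot(x2 - x1, y2 - y1)

print(heuristic("home", "work"))  # 10.0
print(heuristic("mall", "work"))  # 5.0
```

Later sections show how search algorithms such as A* use exactly this kind of estimate to focus the search toward the goal.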