
15.1 Dimensions of Complexity Revisited

What has AI research achieved? Where do the current frontier issues lie? To get a systematic sense of the big picture, we use the design space for AI systems presented in Section 1.5. There, we described nine dimensions that span part of that design space. In this section, we show how some of the representations presented in the book can be positioned in that space.


Dimension               Value
Modularity              *     flat
                        **    modular
                        ***   hierarchical
Representation Scheme   *     states
                        **    features
                        ***   relations
Planning Horizon        *     non-planning
                        **    finite stage
                        ***   indefinite stage
                        ****  infinite stage
Sensing Uncertainty     *     fully observable
                        **    partially observable
Effect Uncertainty      *     deterministic
                        **    stochastic
Preference              *     goals
                        **    complex preferences
Learning                *     knowledge is given
                        **    knowledge is learned
Number of Agents        *     single agent
                        **    multiple agents
Computational Limits    *     perfect rationality
                        **    bounded rationality

Figure 15.1: Dimensions of complexity

Figure 15.1 reviews the dimensions of complexity and extends Figure 1.6 to include a key, in terms of numbers of stars, that is used in the subsequent figure.


                        Mod  Rep  Hor   Sen  Eff  Pref  Lrn  Agt  Lim
State-Space Search      *    *    ***   *    *    *     *    *    *
Regression Planning     *    **   ***   *    *    *     *    *    *
CSP Planning            *    **   **    *    *    *     *    *    *
Decision Networks       *    **   **    **   **   **    *    *    *
MDPs                    *    *    ****  *    **   **    *    *    *
POMDPs                  *    *    ****  **   **   **    *    *    *
Dynamic DNs             *    **   ****  *    **   **    *    *    *
Multiagent DNs          *    **   **    **   **   **    *    **   *
Policy Improvement      *    *    *     *    *    **    **   **   *
Reinforcement Learning  *    **   ****  *    **   **    **   *    *
Situation Calculus      *    ***  ***   *    *    *     *    *    *
Indep. Choice Logic     *    ***  ***   **   **   **    *    *    *

Columns: Mod = Modularity, Rep = Representation Scheme, Hor = Planning
Horizon, Sen = Sensing Uncertainty, Eff = Effect Uncertainty, Pref =
Preference, Lrn = Learning, Agt = Number of Agents, Lim = Computational
Limits.

Figure 15.2: Some representations rated by dimensions of complexity

Figure 15.2 classifies, in terms of the values for each dimension, some of the representations covered in this book.
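To make the classification easy to query, the ratings in Figure 15.2 can be written down directly as a data structure. The following Python sketch is our own illustration, not code from the book (the names DIMENSIONS, RATINGS, and representations_with are ours); it encodes each row as a tuple of star counts:

```python
# Star ratings from Figure 15.2, one tuple per representation.
# The tuple order follows the figure's columns.
DIMENSIONS = ["modularity", "representation", "horizon", "sensing",
              "effects", "preference", "learning", "agents", "limits"]

RATINGS = {
    "state-space search":     (1, 1, 3, 1, 1, 1, 1, 1, 1),
    "regression planning":    (1, 2, 3, 1, 1, 1, 1, 1, 1),
    "CSP planning":           (1, 2, 2, 1, 1, 1, 1, 1, 1),
    "decision networks":      (1, 2, 2, 2, 2, 2, 1, 1, 1),
    "MDPs":                   (1, 1, 4, 1, 2, 2, 1, 1, 1),
    "POMDPs":                 (1, 1, 4, 2, 2, 2, 1, 1, 1),
    "dynamic DNs":            (1, 2, 4, 1, 2, 2, 1, 1, 1),
    "multiagent DNs":         (1, 2, 2, 2, 2, 2, 1, 2, 1),
    "policy improvement":     (1, 1, 1, 1, 1, 2, 2, 2, 1),
    "reinforcement learning": (1, 2, 4, 1, 2, 2, 2, 1, 1),
    "situation calculus":     (1, 3, 3, 1, 1, 1, 1, 1, 1),
    "indep. choice logic":    (1, 3, 3, 2, 2, 2, 1, 1, 1),
}

def representations_with(dimension, min_stars):
    """Return the representations rated at least min_stars on a dimension."""
    i = DIMENSIONS.index(dimension)
    return [name for name, stars in RATINGS.items() if stars[i] >= min_stars]

# For example, the representations that model stochastic effects:
print(representations_with("effects", 2))
```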

State-space search, as presented in Chapter 3, allows for an indefinite horizon but takes the simplest value on every other dimension. Regression planning, using either the feature-based representation or the STRIPS representation, extends state-space search to reason in terms of features. Constraint satisfaction problem (CSP) planning allows for better efficiency, but at the cost of only being able to plan for a finite stage.
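To see how little structure state-space search assumes, consider a minimal generic search procedure. The following Python sketch is our own illustration, not code from the book; the problem is given purely as an explicit start state, a successor function, and a goal test (here searched with a breadth-first frontier):

```python
from collections import deque

def search(start, successors, is_goal):
    """Generic state-space search with a breadth-first frontier.

    start: initial state (must be hashable);
    successors(s): yields (action, next_state) pairs;
    is_goal(s): tests whether s is a goal.
    Returns a list of actions reaching a goal, or None.
    """
    frontier = deque([(start, [])])   # (state, actions taken so far)
    visited = {start}
    while frontier:
        state, path = frontier.popleft()
        if is_goal(state):
            return path
        for action, nxt in successors(state):
            if nxt not in visited:
                visited.add(nxt)
                frontier.append((nxt, path + [action]))
    return None                       # no goal state is reachable
```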

Decision networks allow for the representation of features, stochastic effects, partial observability, and complex preferences in terms of utilities. However, as with CSP planning, they only reason over a finite-stage planning horizon.
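For a single decision, the core computation a decision network supports is choosing the decision that maximizes expected utility. Here is a minimal sketch for that single-stage case, assuming an explicit distribution over states and a tabulated utility (the function and argument names are ours, for illustration only):

```python
def optimal_decision(decisions, states, prob, utility):
    """Choose the decision with maximum expected utility.

    prob[s] is the probability of state s;
    utility[(d, s)] is the utility of decision d in state s.
    """
    def expected_utility(d):
        return sum(prob[s] * utility[(d, s)] for s in states)
    return max(decisions, key=expected_utility)
```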

Markov decision processes (MDPs) allow for indefinite and infinite stage problems with stochastic actions and complex preferences; however, they are state-based representations that assume the state is fully observable. Partially observable MDPs (POMDPs) allow for a partially observable state but are much more difficult to solve. Dynamic decision networks extend MDPs to allow for a feature-based representation of states. They extend decision networks to allow for indefinite or infinite horizons, but they do not model sensing uncertainty.
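For fully observable MDPs, value iteration illustrates why explicit states suffice: it repeatedly applies the Bellman backup V(s) ← max_a [R(s,a) + γ Σ_s' P(s'|s,a) V(s')]. The following Python sketch is our own illustration, assuming explicit transition and reward tables:

```python
def value_iteration(states, actions, P, R, gamma=0.9, eps=1e-6):
    """Value iteration for a fully observable MDP.

    P[(s, a)] is a dict mapping next states s2 to P(s2 | s, a);
    R[(s, a)] is the expected immediate reward.
    Returns the (approximately) optimal value function as a dict.
    """
    V = {s: 0.0 for s in states}
    while True:
        V_new = {s: max(R[(s, a)] + gamma * sum(p * V[s2]
                                                for s2, p in P[(s, a)].items())
                        for a in actions)
                 for s in states}
        if max(abs(V_new[s] - V[s]) for s in states) < eps:
            return V_new
        V = V_new
```

Because the backup loops over every state, the same idea does not directly scale to feature-based or relational representations; that is what dynamic decision networks and relational methods address.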

Multiagent decision networks extend decision networks to allow for multiple agents. The policy improvement algorithm allows for learning with multiple agents, but it only allows a single state and a planning horizon of 1 (it plays a repeated single-step game); the only uncertainty is in the actions of the other agents. It can be seen as a reinforcement learning algorithm with a single state but with multiple agents.
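A rough Python sketch of this style of learner, assuming a two-player repeated game with a known payoff function for our agent (this is our own simplified illustration of learning a stochastic policy by nudging it toward actions that beat the running average payoff, not the book's exact algorithm):

```python
import random

def repeated_game_learner(actions, payoff, opponent, steps=10000, alpha=0.01):
    """Learn a stochastic policy for a single repeated game (one state).

    payoff(a, b): this agent's reward when it plays a and the opponent
    plays b; opponent(): samples the other agent's action.
    alpha must be small so probabilities stay in [0, 1].
    """
    policy = {a: 1.0 / len(actions) for a in actions}
    avg = 0.0
    for _ in range(steps):
        a = random.choices(actions, weights=[policy[x] for x in actions])[0]
        r = payoff(a, opponent())
        step = alpha * (r - avg)          # positive if r beats the average
        avg += alpha * (r - avg)          # update the running payoff estimate
        for x in actions:                 # shift probability toward/away from a
            if x == a:
                policy[x] += step * (1.0 - policy[x])
            else:
                policy[x] -= step * policy[x]
    return policy
```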

Reinforcement learning extends MDPs to allow for learning. Reinforcement learning with features is discussed in Section 11.3.9.
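The heart of model-free reinforcement learning is the temporal-difference update for action values. A minimal Q-learning step in Python, as our own illustration (Q is stored as a dictionary from (state, action) pairs to values, defaulting to zero):

```python
def q_learning_step(Q, s, a, r, s2, actions, alpha=0.1, gamma=0.9):
    """One Q-learning update: after doing action a in state s, the
    agent received reward r and observed next state s2."""
    best_next = max(Q.get((s2, a2), 0.0) for a2 in actions)
    td_error = r + gamma * best_next - Q.get((s, a), 0.0)
    Q[(s, a)] = Q.get((s, a), 0.0) + alpha * td_error
```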

Situation calculus and event calculus allow for the representation of individuals and relations and an indefinite planning horizon, but they do not represent uncertainty.

Independent choice logic is a relational representation that allows for the representation of uncertainty in the effect of actions, sensing uncertainty, and utilities; however, in this most general framework, reasoning is not efficient.

The dimension that adds the most difficulty to building an agent is sensing uncertainty. Most of the representations that allow sensing uncertainty work by making the agent's actions a function of its history, which only works for finite stages. Extending planning with sensing uncertainty to indefinite- and infinite-horizon problems is discussed in the context of POMDPs, but the suggested solutions rely on explicit states. How to handle sensing in all of its forms is one of the most active areas of current AI research.
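The standard state-based way to act under sensing uncertainty is to maintain a belief state: a probability distribution over states, updated after each action a and observation o by b'(s') ∝ P(o | s', a) Σ_s P(s' | s, a) b(s). A minimal Python sketch of this update, assuming explicit transition and observation tables (the names are ours):

```python
def belief_update(belief, action, observation, T, O, states):
    """Bayesian belief-state update for a POMDP.

    belief[s] = current probability of state s;
    T[(s, a)][s2] = P(s2 | s, a);  O[(s2, a)][o] = P(o | s2, a).
    Assumes the observation has nonzero probability under the belief.
    """
    new_belief = {}
    for s2 in states:
        predicted = sum(T[(s, action)].get(s2, 0.0) * belief[s]
                        for s in states)
        new_belief[s2] = O[(s2, action)].get(observation, 0.0) * predicted
    total = sum(new_belief.values())
    return {s2: p / total for s2, p in new_belief.items()}  # normalize
```

The update itself is cheap, but it loops over explicit states, which is exactly why it does not directly extend to feature-based or relational representations.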

None of the representations in Figure 15.2 handles hierarchical decomposition. However, there are large bodies of work on hierarchical planning and hierarchical reinforcement learning that are not presented in this book.

Bounded rationality underlies many of the approximation methods used for applications; however, making the explicit trade-off between thinking and acting, in which the agent reasons about whether it should act immediately or think more, is still relatively rare.

As can be seen, we have presented only a very small part of the design space of AI. The current frontier of research goes well beyond what is covered in this textbook. There is much active research in all areas of AI, and there have been, and continue to be, impressive advances in planning, learning, perception, natural language understanding, robotics, and other subareas. In terms of the dimensions presented here, much of this is very deep work that still occupies only a small part of the design space.

A divide has traditionally existed between relational reasoning, typically in terms of first-order logic or frames, and reasoning under uncertainty in terms of probability. This divide is being bridged, but it is still evident in conferences and research papers.

The decomposition of AI into subareas is not surprising. The design space is too big to explore all at once. Once a researcher has decided to handle, say, relational domains and reasoning about the existence of objects, it is difficult to add sensing uncertainty. If a researcher starts with learning over infinite horizons, it is difficult to add features or hierarchical reasoning, let alone to combine infinite-horizon learning with relations and hierarchies.

Some particular points in design space that have been at the frontier of research over the past few years include the following:

  • hierarchical reinforcement learning, where the agent learns at multiple levels of abstraction at once;
  • multiagent reinforcement learning;
  • relational probabilistic learning;
  • natural language understanding that takes into account ambiguity, context, and pragmatics to give appropriate answers;
  • robot cars that can drive through uncertain environments; and
  • intelligent tutoring systems that take into account noisy sensors of students' emotions.

We still do not know how to build an agent that learns about relations for infinite-stage, partially observable domains in which there are multiple agents. Arguably humans do this, perhaps by reasoning hierarchically and approximately. So although we may not yet be able to build an intelligent artificial agent with human-level performance, we seem to have the building blocks to develop one. The main challenge is handling the complexity of the real world. However, so far there seem to be no intrinsic obstacles to building computational embodied agents capable of human-level performance.