Online Learning Resources

Here are some online learning resources for Artificial Intelligence: Foundations of Computational Agents, 3rd edition, by David L. Poole and Alan K. Mackworth, Cambridge University Press, 2023. All material is copyrighted; most is released under a Creative Commons Attribution-NonCommercial-ShareAlike 4.0 International License, and the code is also released under the GPL.

Chapter 1: Artificial Intelligence and Agents

The AAAI AI Topics site provides a wealth of introductory material on AI.

Chapter 2: Agent Architectures and Hierarchical Control

See AIPython for Python implementations of the robot controllers:

  • defines a simple agent controller. TP_env and TP_agent define the environment and agent for Examples 2.1 and 2.2.
  • defines the hierarchical agent of Section 2.2 (Examples 2.4, 2.5, and 2.6).
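The controller pattern above can be sketched as a percept-act loop, loosely modeled on the paper-buying agent of Examples 2.1 and 2.2. All names here (PriceEnv, ThresholdAgent, simulate) are illustrative, not the AIPython API:

```python
# A minimal percept-act loop in the spirit of an agent controller.
# Names are hypothetical; see AIPython for the real classes.

class PriceEnv:
    """Environment that emits a price percept at each time step."""
    def __init__(self, prices):
        self.prices = prices

    def percept(self, t):
        return self.prices[t]

class ThresholdAgent:
    """Agent that buys when the price drops below a threshold."""
    def __init__(self, threshold):
        self.threshold = threshold

    def act(self, price):
        return "buy" if price < self.threshold else "wait"

def simulate(env, agent, steps):
    """Run the agent in the environment, collecting its actions."""
    return [agent.act(env.percept(t)) for t in range(steps)]
```

A run such as `simulate(PriceEnv([230, 210, 190, 250]), ThresholdAgent(200), 4)` yields one action per time step.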

Chapter 3: Searching for Solutions

See AIPython for Python implementations of search algorithms.

  • defines a search problem in terms of the start nodes, a predicate to test if a node is a goal, the neighbors function, and an optional heuristic function. simp_delivery_graph is the graph of Figures 3.3 and 3.7. cyclic_simp_delivery_graph is the graph of Figure 3.10.
  • implements the generic search algorithm, which supports both depth-first search (Searcher) and A* search (AStarSearcher). It does not do multi-path pruning or cycle checking, and is easy to modify to create other search algorithms.
  • implements A* with multiple path pruning.
  • defines depth-first branch-and-bound search. It does not do multi-path pruning or cycle checking.
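The interface above — start node, goal predicate, neighbors function, optional heuristic — can be sketched compactly. This is an illustrative stand-in, not AIPython's Searcher; unlike the book's generic searcher, it does include multi-path pruning (safe when the heuristic is consistent):

```python
import heapq

def astar(start, is_goal, neighbors, h):
    """A* search. neighbors(n) yields (cost, node) pairs; h is the
    heuristic. Returns (path, cost), or None if no solution exists.
    The `visited` set gives multi-path pruning."""
    frontier = [(h(start), 0, start, [start])]   # (f, g, node, path)
    visited = set()
    while frontier:
        f, g, node, path = heapq.heappop(frontier)
        if node in visited:
            continue
        visited.add(node)
        if is_goal(node):
            return path, g
        for cost, nbr in neighbors(node):
            if nbr not in visited:
                heapq.heappush(
                    frontier,
                    (g + cost + h(nbr), g + cost, nbr, path + [nbr]))
    return None
```

With the zero heuristic this behaves as lowest-cost-first search; supplying a nontrivial admissible heuristic recovers A*.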

Chapter 4: Reasoning With Constraints

See AIPython for Python implementations of the CSP algorithms. This includes:

  • defines a constraint satisfaction problem (CSP).
  • defines example CSPs. csp2 is the CSP of Examples 4.9, 4.19, and 4.23-4.28 and Figures 4.5 and 4.10. csp1s is the CSP of Examples 4.12-4.14, 4.17, 4.18, 4.21, and 4.22. crossword1 and crossword2 are two representations of Figure 4.15 (see Exercise 4.3).
  • defines depth-first search for CSPs.
  • defines a searcher, which can use any of the searching techniques from Chapter 3, for CSPs that allows searches through the space of partial assignments.
  • uses domain splitting and generalized arc consistency to solve CSPs.
  • uses stochastic local search, in particular a probabilistic mix of choosing the variable with the most conflicts, an any-conflict step, and a random variable, to solve CSPs. It maintains only the data structures needed for the chosen algorithm (e.g., a priority queue when the best variable is needed, but not for any-conflict steps). Each step is at most logarithmic in the number of variables (to maintain the priority queue), but depends on the number of neighbors. It also plots runtime distributions (for the number of steps only).
  • gives a representation for soft constraints and implements branch-and-bound search (Figure 4.1).
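The search through the space of partial assignments mentioned above can be sketched as plain depth-first backtracking. This is a minimal illustration, not AIPython's CSP classes; the representation (a domains dict plus (scope, predicate) constraints) is an assumption chosen for brevity:

```python
def solve_csp(domains, constraints, assignment=None):
    """Depth-first backtracking over partial assignments.
    domains: {var: list of values}
    constraints: list of (scope, predicate) pairs; a predicate is
    checked as soon as every variable in its scope is assigned."""
    assignment = dict(assignment or {})
    unassigned = [v for v in domains if v not in assignment]
    if not unassigned:
        return assignment
    var = unassigned[0]
    for val in domains[var]:
        assignment[var] = val
        # check only the constraints whose scope is fully assigned
        ok = all(pred(*(assignment[v] for v in scope))
                 for scope, pred in constraints
                 if all(v in assignment for v in scope))
        if ok:
            result = solve_csp(domains, constraints, assignment)
            if result is not None:
                return result
    return None
```

Domain splitting and arc consistency, as in the bullet above, prune far more than this naive checker, but the search skeleton is the same.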

Chapter 5: Propositions and Inference

AIPython has Python implementations of the reasoners:

  • defines definite clauses. elect implements Example 5.8 (Figure 5.2)
  • implements bottom-up inference for definite clauses (Figure 5.3)
  • implements top-down inference for definite clauses (Figure 5.4), including ask-the-user (Section 5.4)
  • implements knowledge-based debugging for definite clauses; see Figures 5.6 and 5.7 (Section 5.5)
  • implements Horn clauses with assumables, including consistency-based diagnosis (Figure 5.10). electa implements Example 5.21 (Figure 5.8)
  • implements negation-as-failure (Figure 5.12). beach_KB is Example 5.28.
  • is the code referred to in Exercise 5.6. (Try also elect_bug.)
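The bottom-up procedure for definite clauses is short enough to sketch in full: repeatedly add the head of any clause whose body is already derived, until a fixed point. This is an illustrative version, not AIPython's logicBottomUp code; clauses are represented here as (head, body) pairs of atom names:

```python
def bottom_up(clauses):
    """Compute the least fixed point of a set of definite clauses.
    clauses: list of (head, [body atoms]); a fact has an empty body.
    Returns the set of atoms that are logical consequences."""
    known = set()
    added = True
    while added:
        added = False
        for head, body in clauses:
            if head not in known and all(b in known for b in body):
                known.add(head)
                added = True
    return known
```

Note that "d" below is not derived: its body atom "e" is never established, so the fixed point excludes it.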
Prolog versions of some of this code are also available.

Chapter 6: Deterministic Planning

See AIPython for Python implementations of the planners:

  • defines representations of actions using STRIPS. delivery_domain implements Examples 6.1-6.6 (Figure 6.1). It also implements blocks-world domains.
  • implements forward planning (Section 6.2).
  • implements heuristic functions for the forward (Example 6.10) and regression planners.
  • implements planning as a CSP (Section 6.4).
  • implements partial-order planning (Section 6.5).
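A forward planner searches the space of world states, applying any action whose preconditions hold. The sketch below is a breadth-first version with an illustrative STRIPS encoding (sets of propositions, add and delete lists); the names and the toy domain are assumptions, not the book's delivery domain:

```python
from collections import deque

def forward_plan(init, goal, actions):
    """Breadth-first forward search in state space.
    States are frozensets of true propositions. Each action is a tuple
    (name, preconditions, add_list, delete_list), all sets."""
    start = frozenset(init)
    frontier = deque([(start, [])])
    seen = {start}
    while frontier:
        state, plan = frontier.popleft()
        if goal <= state:                      # all goal propositions hold
            return plan
        for name, pre, add, delete in actions:
            if pre <= state:                   # action is applicable
                nxt = frozenset((state - delete) | add)
                if nxt not in seen:
                    seen.add(nxt)
                    frontier.append((nxt, plan + [name]))
    return None
```

AIPython's forward planner uses the Chapter 3 searchers (and heuristics) instead of plain breadth-first search, but the state-transition logic is the same.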

Chapter 7: Supervised Machine Learning

See AIPython for Python implementations of learning algorithms:

  • is the infrastructure assumed by the AIPython learning algorithms, with features to allow experimentation with various algorithms.
  • lets you experiment with the simplest case of no input features (Section 7.2.2).
  • implements decision-tree learning (Section 7.3.1)
  • implements various techniques for cross validation (Section 7.4.3)
  • implements linear and logistic regression, including stochastic gradient descent (Section 7.3.2)
  • implements gradient-boosted trees for classification (Section 7.5.2)
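Stochastic gradient descent, as used in the linear and logistic regression bullet above, can be sketched for the simplest case of fitting y ≈ w·x + b under squared error. This is a toy illustration (hypothetical function name, one feature), not AIPython's learner:

```python
import random

def sgd_linear(data, lr=0.01, epochs=500, seed=0):
    """Fit y = w*x + b by stochastic gradient descent on squared error.
    data: list of (x, y) pairs. Returns the learned (w, b)."""
    rng = random.Random(seed)
    w, b = 0.0, 0.0
    for _ in range(epochs):
        rng.shuffle(data)                 # visit examples in random order
        for x, y in data:
            err = (w * x + b) - y         # derivative of (pred - y)^2 / 2
            w -= lr * err * x
            b -= lr * err
    return w, b
```

On noiseless data generated by y = 2x + 1, the parameters converge close to w = 2, b = 1.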

Chapter 8: Neural Networks and Deep Learning

See AIPython for Python implementations of neural networks. These are meant to be runnable pseudo-code and are much less efficient than state-of-the-art systems such as Keras or PyTorch. If you want to use a library, use one of those. If you want to see how the underlying algorithms work, see:

  • allows one to build and train feed-forward neural networks (Section 8.1), including stochastic gradient descent, momentum, RMS-Prop, and Adam (Section 8.2) and dropout (Section 8.3).
  • is Keras code to implement Figure 8.5.
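In the runnable-pseudo-code spirit described above, here is a complete 2-2-1 sigmoid network trained by batch gradient descent on XOR. It is a self-contained sketch (no library, hypothetical function name), not the AIPython implementation:

```python
import math
import random

def train_xor(steps=2000, lr=0.5, seed=1):
    """Train a 2-2-1 feed-forward sigmoid network on XOR by batch
    gradient descent on squared error. Returns (initial, final) loss."""
    rng = random.Random(seed)
    W1 = [[rng.uniform(-1, 1) for _ in range(2)] for _ in range(2)]
    b1 = [0.0, 0.0]
    W2 = [rng.uniform(-1, 1) for _ in range(2)]
    b2 = 0.0
    data = [([0, 0], 0), ([0, 1], 1), ([1, 0], 1), ([1, 1], 0)]
    sig = lambda z: 1 / (1 + math.exp(-z))

    def forward(x):
        h = [sig(W1[j][0] * x[0] + W1[j][1] * x[1] + b1[j]) for j in range(2)]
        return h, sig(W2[0] * h[0] + W2[1] * h[1] + b2)

    def loss():
        return sum((forward(x)[1] - y) ** 2 for x, y in data)

    first = loss()
    for _ in range(steps):
        gW1 = [[0.0, 0.0], [0.0, 0.0]]; gb1 = [0.0, 0.0]
        gW2 = [0.0, 0.0]; gb2 = 0.0
        for x, y in data:                       # accumulate batch gradients
            h, o = forward(x)
            do = 2 * (o - y) * o * (1 - o)      # dLoss/dz at the output
            for j in range(2):
                gW2[j] += do * h[j]
                dh = do * W2[j] * h[j] * (1 - h[j])   # backprop to hidden j
                gW1[j][0] += dh * x[0]; gW1[j][1] += dh * x[1]
                gb1[j] += dh
            gb2 += do
        for j in range(2):                      # gradient descent step
            W2[j] -= lr * gW2[j]; b1[j] -= lr * gb1[j]
            W1[j][0] -= lr * gW1[j][0]; W1[j][1] -= lr * gW1[j][1]
        b2 -= lr * gb2
    return first, loss()
```

Momentum, RMS-Prop, Adam, and dropout, as in the bullet above, change only the parameter-update step of this loop.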

Chapter 9: Reasoning with Uncertainty

AIPython contains Python implementations of probabilistic inference algorithms:

  • defines random variables
  • defines (probabilistic) factors and conditional probability distributions.
  • defines graphical models (including belief networks)
  • implements recursive conditioning (Section 9.5.1)
  • implements variable elimination for graphical models (Section 9.5.2)
  • implements various stochastic simulation algorithms, including rejection sampling, likelihood weighting (a form of importance sampling), particle filtering, Gibbs sampling (a form of MCMC)
  • implements algorithms for hidden Markov models (Section 9.6.2)
  • implements the localization of Example 9.32, and is used to generate Figure 8.19.
  • implements dynamic belief networks (Section 9.6.4)
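Rejection sampling, the simplest of the stochastic simulation algorithms listed above, can be shown on a two-node network. This is a toy network invented for illustration (not one of the book's examples), with P(rain) = 0.4, P(wet | rain) = 0.9, and P(wet | not rain) = 0.1:

```python
import random

def rejection_sample_rain(n=10000, seed=0):
    """Estimate P(rain | wet) by rejection sampling: draw from the
    prior, discard samples inconsistent with the evidence wet=true."""
    rng = random.Random(seed)
    accepted = rain_count = 0
    for _ in range(n):
        rain = rng.random() < 0.4
        wet = rng.random() < (0.9 if rain else 0.1)
        if wet:                     # keep only samples matching the evidence
            accepted += 1
            rain_count += rain
    return rain_count / accepted
```

The exact posterior is 0.36 / 0.42 ≈ 0.857; the estimate fluctuates around it. Likelihood weighting avoids discarding samples by weighting each one by the probability of the evidence instead.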

Chapter 10: Learning With Uncertainty

AIPython has Python implementations for learning with uncertainty:

  • implements the k-means algorithm (Section 10.3.1)
  • implements the expectation maximization (EM) algorithm for soft clustering (Section 10.3.2)
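The k-means algorithm alternates assigning each point to its nearest center with recomputing each center as the mean of its assigned points. A minimal sketch on 1-D data (illustrative, not AIPython's implementation, which also handles learning from a dataset incrementally):

```python
def kmeans(points, centers, iterations=10):
    """Lloyd's algorithm on 1-D points. centers: initial guesses.
    Returns the final centers, sorted."""
    for _ in range(iterations):
        clusters = {c: [] for c in range(len(centers))}
        for p in points:
            nearest = min(range(len(centers)),
                          key=lambda c: abs(p - centers[c]))
            clusters[nearest].append(p)
        # recompute each center; keep the old one if its cluster is empty
        centers = [sum(ps) / len(ps) if ps else centers[c]
                   for c, ps in clusters.items()]
    return sorted(centers)
```

EM for soft clustering, in the next bullet, replaces the hard nearest-center assignment with fractional responsibilities.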

Chapter 11: Causality

See AIPython for Python implementations of causal reasoning:

  • adds the do-operator to the probabilistic inference algorithms (Section 11.1.1)
  • implements some counterfactual reasoning examples (Section 11.5)
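The effect of the do-operator can be shown by enumeration on a toy confounded network U → X, U → Y, X → Y (all binary; this example is invented for illustration, not from the book). Conditioning on X = 1 reweights the confounder U, whereas do(X = 1) cuts the U → X edge and leaves U at its prior:

```python
def causal_vs_observational():
    """Compare P(y=1 | x=1) with P(y=1 | do(x=1)) by enumeration
    in a confounded network U -> X, U -> Y, X -> Y."""
    p_u = {0: 0.5, 1: 0.5}                    # prior on the confounder
    p_x1_given_u = {0: 0.1, 1: 0.9}           # P(x=1 | u)
    p_y1 = {(0, 0): 0.1, (0, 1): 0.5,         # P(y=1 | x, u)
            (1, 0): 0.4, (1, 1): 0.8}

    # observational: conditioning on x=1 reweights u by P(x=1 | u)
    num = sum(p_u[u] * p_x1_given_u[u] * p_y1[(1, u)] for u in (0, 1))
    den = sum(p_u[u] * p_x1_given_u[u] for u in (0, 1))
    observational = num / den

    # interventional: do(x=1) severs U -> X, so u keeps its prior
    interventional = sum(p_u[u] * p_y1[(1, u)] for u in (0, 1))
    return observational, interventional
```

Here the observational value (0.76) overstates the causal effect (0.6), because u = 1 makes both x = 1 and y = 1 more likely.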

Chapter 12: Planning with Uncertainty

See AIPython for Python implementations of planning under uncertainty:

  • implements decision networks (Sections 12.2 and 12.3.1), including search (Section 12.3.3) and variable elimination (Section 12.3.4)
  • implements Markov decision processes (MDPs) (Section 12.5) including value iteration (Section 12.5.2) and asynchronous value iteration (Figure 12.8)
  • implements example Markov decision processes (MDPs) (Section 12.5)
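Value iteration repeatedly applies the Bellman update V(s) ← max_a [R(s, a) + γ Σ_s' P(s' | s, a) V(s')]. A minimal sketch with an illustrative representation (dicts keyed by (state, action) pairs), not AIPython's MDP classes:

```python
def value_iteration(states, actions, P, R, gamma=0.9, iterations=100):
    """Value iteration for an MDP.
    P[(s, a)] is a dict {next_state: probability}; R[(s, a)] is the
    expected immediate reward. Returns the value function as a dict."""
    V = {s: 0.0 for s in states}
    for _ in range(iterations):
        V = {s: max(R[(s, a)] + gamma * sum(p * V[s2]
                                            for s2, p in P[(s, a)].items())
                    for a in actions)
             for s in states}
    return V
```

For a single state where one action pays 1 per step, the fixed point is 1 / (1 - γ) = 10, which the iteration approaches geometrically. Asynchronous value iteration updates one state at a time instead of sweeping them all.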

Chapter 13: Reinforcement Learning

See AIPython for Python implementations of reinforcement learning algorithms.

  • defines reinforcement learning (RL) problems, including environments built from MDPs, and plots the accumulated reward.
  • defines some of the RL problems (Examples 12.29 and 13.2)
  • implements Q-learning (Section 13.4.1)
  • implements Q-learning with experience replay
  • implements model-based RL (Section 13.8)
  • implements a feature-based reinforcement learner (Section 13.9.1); defines features for the monster game of Figure 13.2 and Example 13.6.
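Tabular Q-learning can be sketched on a toy environment: a three-state corridor where moving right from state 1 reaches the goal (state 2) with reward 1. The environment and names are invented for illustration (this is not the book's monster game or AIPython's learner):

```python
import random

def q_learning(episodes=1000, alpha=0.2, gamma=0.9, eps=0.2, seed=0):
    """Tabular Q-learning with an epsilon-greedy policy on a 3-state
    corridor. Returns the learned Q table, keyed by (state, action)."""
    rng = random.Random(seed)
    actions = ["left", "right"]
    Q = {(s, a): 0.0 for s in (0, 1) for a in actions}

    def step(s, a):
        s2 = min(s + 1, 2) if a == "right" else max(s - 1, 0)
        return s2, (1.0 if s2 == 2 else 0.0)   # reward on reaching the goal

    for _ in range(episodes):
        s = 0
        while s != 2:
            # epsilon-greedy action selection
            if rng.random() < eps:
                a = rng.choice(actions)
            else:
                a = max(actions, key=lambda a2: Q[(s, a2)])
            s2, r = step(s, a)
            # temporal-difference update toward r + gamma * max_a' Q(s', a')
            target = r if s2 == 2 else r + gamma * max(Q[(s2, a2)]
                                                       for a2 in actions)
            Q[(s, a)] += alpha * (target - Q[(s, a)])
            s = s2
    return Q
```

The learned values approach Q(1, right) = 1 and Q(0, right) = γ · 1 = 0.9, so the greedy policy moves right from both states.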

Chapter 14: Multiagent Systems

See AIPython for Python implementations of the following:

  • defines two-player zero-sum games (Section 14.3)
  • implements minimax with alpha-beta pruning (Section 14.3.1)
  • implements multiagent reinforcement learning with stochastic policies (Section 14.7.2)

Chapter 15: Individuals and Relations

The following code runs in Prolog:

  • electrical wiring example from Example 15.11
  • database of geography of some of South America; Figures 15.2 and 15.3
  • simple English grammar about the geography of some of South America; Figures 15.8 and 15.9
  • can answer English questions about the geography of some of South America; Examples 15.35 and 15.36, and Figure 15.10
  • can answer English questions, building the query first; Figures 15.11 and 15.12

Chapter 16: Knowledge Graphs and Ontologies

Chapter 17: Relational Learning and Probabilistic Reasoning

See AIPython for Python implementations of the following:

  • implements the collaborative filtering learning of Section 17.2.1
  • converts relational belief networks into standard belief networks, given a population for each logical variable (plate). Any of the inference methods can be used on the resulting network; see Section 17.3
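The collaborative filtering idea of Section 17.2.1 — predicting ratings from learned low-dimensional user and item vectors — can be sketched as rank-k matrix factorization trained by gradient descent. This is a simplified illustration (no rating offsets or regularization, hypothetical names), not AIPython's implementation:

```python
import random

def factorize(ratings, k=2, lr=0.05, steps=200, seed=0):
    """Learn user and item embeddings so that dot(U[u], V[i]) predicts
    the rating. ratings: {(user, item): rating}. Returns the squared
    error (before, after) training."""
    rng = random.Random(seed)
    users = {u for u, _ in ratings}
    items = {i for _, i in ratings}
    U = {u: [rng.uniform(-0.1, 0.1) for _ in range(k)] for u in users}
    V = {i: [rng.uniform(-0.1, 0.1) for _ in range(k)] for i in items}

    def predict(u, i):
        return sum(a * b for a, b in zip(U[u], V[i]))

    def loss():
        return sum((predict(u, i) - r) ** 2 for (u, i), r in ratings.items())

    first = loss()
    for _ in range(steps):
        for (u, i), r in ratings.items():
            err = predict(u, i) - r
            for f in range(k):           # gradient step on both embeddings
                du = err * V[i][f]
                dv = err * U[u][f]
                U[u][f] -= lr * du
                V[i][f] -= lr * dv
    return first, loss()
```

Predictions for unrated (user, item) pairs come from the same dot product, which is what makes the learned factors useful for recommendation.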

Chapter 18: The Social Impact of Artificial Intelligence

Some additional resources:

Chapter 19: Retrospect and Prospect

Some additional resources:
AIPython and our other code are licensed under a Creative Commons Attribution-NonCommercial-ShareAlike 4.0 International License.
Last updated 2023-08-24, David Poole, Alan Mackworth