Online Learning Resources

Here are some online learning resources for Artificial Intelligence: Foundations of Computational Agents, 3rd edition, by David L. Poole and Alan K. Mackworth, Cambridge University Press, 2023. All material is copyrighted; most is released under a Creative Commons Attribution-NonCommercial-ShareAlike 4.0 International License and under the GPL.

Chapter 1: Artificial Intelligence and Agents

The AAAI AI Topics site provides a wealth of introductory material on AI.

Chapter 2: Agent Architectures and Hierarchical Control

See AIPython for Python implementations of the robot controllers:

  • agents.py defines a simple agent controller (a minimal standalone sketch of the percept-act loop follows this list). TP_env and TP_agent define the environment and agent for Examples 2.1 and 2.2.
  • agentEnv.py, agentMiddle.py, agentTop.py define the hierarchical agent of Section 2.2 (Examples 2.4, 2.5, 2.6).
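
The AIPython files above are self-contained. As a rough illustration of the percept-act loop that a simple controller implements, here is a minimal sketch; the thermostat environment, class names, and numbers are made up for illustration and are not the AIPython API.

    # Minimal sketch of a percept-act loop for a purely reactive controller.
    # All names and numbers here are illustrative, not AIPython's.

    class ThermostatEnv:
        """A toy environment whose percept is a temperature reading."""
        def __init__(self, temperature=15.0):
            self.temperature = temperature

        def percept(self):
            return self.temperature

        def do(self, action):
            # Heating raises the temperature; "off" lets it drift down.
            self.temperature += 1.0 if action == "heat" else -0.5

    class ThermostatAgent:
        """Chooses an action from the current percept only."""
        def select_action(self, percept):
            return "heat" if percept < 20.0 else "off"

    def simulate(env, agent, steps=10):
        for _ in range(steps):
            action = agent.select_action(env.percept())
            env.do(action)
            print(f"action={action}, temperature={env.percept():.1f}")

    if __name__ == "__main__":
        simulate(ThermostatEnv(), ThermostatAgent())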

Chapter 3: Searching for Solutions

See AIPython for Python implementations of search algorithms.

  • searchProblem.py defines a search problem in terms of the start nodes, a predicate to test if a node is a goal, the neighbors function, and an optional heuristic function. simp_delivery_graph is the graph of Figures 3.3 and 3.7. cyclic_simp_delivery_graph is the graph of Figure 3.10.
  • searchGeneric.py defines the generic search algorithm that implements both depth-first search (Searcher) and A* search (AStarSearcher); a minimal standalone A* sketch follows this list. It does not do multi-path pruning or cycle checking. It is easy to modify to make other search algorithms.
  • searchMPP.py implements A* with multiple path pruning.
  • searchBranchAndBound.py defines depth-first branch-and-bound search. It does not do multi-path pruning or cycle checking.
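
To complement searchGeneric.py, here is a minimal standalone A* search over an explicit graph. It is a sketch of the algorithm only: it does no multi-path pruning or cycle checking, and the graph and heuristic below are made up, not taken from a figure in the book.

    import heapq

    def astar(start, is_goal, neighbors, heuristic):
        """Minimal A* search.
        neighbors(n) yields (next_node, arc_cost) pairs; heuristic(n) estimates the
        cost to a goal. Returns (path, cost), or None if no solution is found."""
        frontier = [(heuristic(start), 0.0, [start])]   # (f-value, cost so far, path)
        while frontier:
            _, cost, path = heapq.heappop(frontier)
            node = path[-1]
            if is_goal(node):
                return path, cost
            for nxt, arc_cost in neighbors(node):
                new_cost = cost + arc_cost
                heapq.heappush(frontier,
                               (new_cost + heuristic(nxt), new_cost, path + [nxt]))
        return None

    # Tiny illustrative graph and heuristic:
    graph = {"A": [("B", 1), ("C", 4)], "B": [("C", 1), ("D", 5)],
             "C": [("D", 1)], "D": []}
    h = {"A": 2, "B": 2, "C": 1, "D": 0}
    print(astar("A", lambda n: n == "D", lambda n: graph[n], lambda n: h[n]))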

Chapter 4: Reasoning With Constraints

See AIPython for Python implementations of the CSP algorithms. This includes:

  • cspProblem.py defines a constraint satisfaction problem (CSP).
  • cspExamples.py defines example CSPs. csp2 is the CSP of Examples 4.9, 4.19, and 4.23-4.28, and Figures 4.5 and 4.10. csp1s is the CSP of Examples 4.12-4.14, 4.17, 4.18, 4.21, and 4.22. crossword1 and crossword2 are two representations of Figure 4.15 (see Exercise 4.3).
  • cspDFS.py defines depth-first search for CSPs.
  • cspSearch.py defines a searcher for CSPs that searches through the space of partial assignments, using any of the search techniques from Chapter 3.
  • cspConsistency.py uses domain splitting and generalized arc consistency to solve CSPs; a minimal standalone sketch of generalized arc consistency follows this list.
  • cspSLS.py uses stochastic local search to solve CSPs, in particular a probabilistic mix of choosing the variable with the most conflicts, any-conflict, and a random variable. It only maintains the data structures needed for the algorithm (e.g., a priority queue when we need the best variable, but not when we do an any-conflict step). Each step is at most logarithmic in the number of variables (to maintain the priority queue), but depends on the number of neighbors. It also plots runtime distributions (for the number of steps only).
  • cspSoft.py gives a representation for soft constraints and implements branch-and-bound search (Figure 4.1).
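
To complement cspConsistency.py, here is a minimal standalone sketch of generalized arc consistency. The representation used here (variable domains plus (scope, predicate) constraints) is chosen for brevity and is not the AIPython classes; the example constraints are made up.

    # Minimal sketch of generalized arc consistency (GAC). Each constraint is a pair
    # (scope, predicate), where the predicate takes one value per variable in the scope.
    from itertools import product

    def gac(domains, constraints):
        """Prune domain values that cannot be extended to satisfy every constraint."""
        domains = {v: set(d) for v, d in domains.items()}
        queue = [(v, c) for c in constraints for v in c[0]]
        while queue:
            var, (scope, pred) = queue.pop()
            others = [x for x in scope if x != var]
            supported = set()
            for val in domains[var]:
                # keep val if some assignment to the other variables satisfies pred
                for combo in product(*(domains[o] for o in others)):
                    asgn = dict(zip(others, combo))
                    asgn[var] = val
                    if pred(*(asgn[x] for x in scope)):
                        supported.add(val)
                        break
            if supported != domains[var]:
                domains[var] = supported
                queue.extend((w, c) for c in constraints if var in c[0]
                             for w in c[0] if w != var)
        return domains

    # Example: X < Y and Y < Z with domains {1,2,3} prunes to X={1}, Y={2}, Z={3}.
    doms = {"X": {1, 2, 3}, "Y": {1, 2, 3}, "Z": {1, 2, 3}}
    cons = [(("X", "Y"), lambda x, y: x < y), (("Y", "Z"), lambda y, z: y < z)]
    print(gac(doms, cons))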

Chapter 5: Propositions and Inference

AIPython has Python implementations of the reasoners:

  • logicProblem.py defines definite clauses. elect implements Example 5.8 (Figure 5.2)
  • logicBottomUp.py implements bottom-up inference for definite clauses (Figure 5.3); a minimal standalone sketch of bottom-up inference follows this list
  • logicTopDown.py implements top-down inference for definite clauses (Figure 5.4), including ask-the-user (Section 5.4)
  • logicExplain.py implements knowledge-based debugging for definite clauses; see Figures 5.6 and 5.7 (Section 5.5)
  • logicAssumables.py implements Horn clauses with assumables, including consistency-based diagnosis (Figure 5.10). electa implements Example 5.21 (Figure 5.8)
  • logicNegation.py implements negation-as-failure (Figure 5.12). beach_KB is Example 5.28.
  • elect_bug2.py is the code referred to in Exercise 5.6. (Try also elect_bug in logicProblem.py)
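
To complement logicBottomUp.py, here is a minimal standalone sketch of bottom-up inference for definite clauses. The knowledge-base representation and the toy clauses are illustrative; they are not the AIPython classes or the book's elect example.

    # Minimal sketch of bottom-up inference for definite clauses.
    # A knowledge base is a list of (head, body) pairs, where body is a list of atoms;
    # an atom with an empty body is a fact.

    def bottom_up(kb):
        """Return the set of atoms that are logical consequences of the definite clauses."""
        consequences = set()
        changed = True
        while changed:
            changed = False
            for head, body in kb:
                if head not in consequences and all(b in consequences for b in body):
                    consequences.add(head)
                    changed = True
        return consequences

    # Toy electrical-style clauses (made up for illustration):
    kb = [("live_outside", []),
          ("live_w5", ["live_outside"]),
          ("live_w3", ["live_w5", "ok_cb1"]),
          ("ok_cb1", [])]
    print(bottom_up(kb))   # all four atoms are logical consequences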

Chapter 6: Deterministic Planning

See AIPython for Python implementations of the planners:

  • stripsProblem.py defines representations of actions using STRIPS. delivery_domain implements Examples 6.1-6.6 (Figure 6.1). It also implements blocks-world domains.
  • stripsForwardPlanner.py implements forward planning (Section 6.2); a minimal standalone sketch of forward state-space planning follows this list.
  • stripsHeuristics.py implements heuristic functions for the forward (Example 6.10) and regression planners.
  • stripsCSPPlanner.py implements planning as a CSP (Section 6.4).
  • stripsPOP.py implements partial-order planning (Section 6.5).
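
To complement stripsForwardPlanner.py, here is a minimal standalone sketch of forward state-space planning with STRIPS-style actions, using breadth-first search. The feature-based representation and the coffee-fetching actions are made up for illustration; they are not the AIPython classes or the book's delivery domain.

    # Minimal sketch of forward planning with STRIPS-style actions.
    # An action has a name, a precondition dict, and an effect dict over feature:value pairs.
    from collections import deque

    def applicable(state, action):
        return all(state.get(f) == v for f, v in action["precond"].items())

    def apply_action(state, action):
        new_state = dict(state)
        new_state.update(action["effect"])
        return new_state

    def forward_plan(state, goal, actions):
        """Breadth-first search in the space of states; returns a list of action names."""
        frontier = deque([(state, [])])
        seen = {tuple(sorted(state.items()))}
        while frontier:
            state, plan = frontier.popleft()
            if all(state.get(f) == v for f, v in goal.items()):
                return plan
            for act in actions:
                if applicable(state, act):
                    nxt = apply_action(state, act)
                    key = tuple(sorted(nxt.items()))
                    if key not in seen:
                        seen.add(key)
                        frontier.append((nxt, plan + [act["name"]]))
        return None

    # Toy domain (made up): a robot moving between two locations to fetch coffee.
    actions = [
        {"name": "move_to_cs", "precond": {"loc": "office"}, "effect": {"loc": "coffee_shop"}},
        {"name": "move_to_office", "precond": {"loc": "coffee_shop"}, "effect": {"loc": "office"}},
        {"name": "buy_coffee", "precond": {"loc": "coffee_shop"}, "effect": {"has_coffee": True}},
    ]
    print(forward_plan({"loc": "office", "has_coffee": False},
                       {"loc": "office", "has_coffee": True}, actions))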

Chapter 7: Supervised Machine Learning

See AIPython for Python implementations of learning algorithms:

  • learnProblem.py provides the infrastructure assumed by the AIPython learning algorithms, with features that allow for experimentation with various algorithms.
  • learnNoInputs.py lets you experiment with the simplest case of no input features (Section 7.2.2).
  • learnDT.py implements decision-tree learning (Section 7.3.1)
  • learnCrossValidation.py implements various techniques for cross validation (Section 7.4.3)
  • learnLinear.py implements linear and logistic regression, including stochastic gradient descent (Section 7.3.2); a minimal standalone sketch follows this list
  • learnBoosting.py implements gradient-boosted trees for classification (Section 7.5.2)
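
To complement learnLinear.py, here is a minimal standalone sketch of logistic regression trained by stochastic gradient descent. The interface, hyperparameters, and toy dataset are illustrative, not the AIPython classes.

    # Minimal sketch of logistic regression trained by stochastic gradient descent.
    import math, random

    def sigmoid(z):
        return 1.0 / (1.0 + math.exp(-z))

    def sgd_logistic(examples, num_features, learning_rate=0.1, epochs=1000):
        """examples is a list of (feature_vector, target) with target in {0, 1}.
        Returns the weights, with weights[0] the bias."""
        weights = [0.0] * (num_features + 1)
        for _ in range(epochs):
            x, y = random.choice(examples)       # one randomly chosen example per step
            pred = sigmoid(weights[0] + sum(w * xi for w, xi in zip(weights[1:], x)))
            error = pred - y                     # gradient of cross-entropy loss
            weights[0] -= learning_rate * error
            for i, xi in enumerate(x):
                weights[i + 1] -= learning_rate * error * xi
        return weights

    # Toy data (made up): the target is 1 when x0 + x1 > 1.
    random.seed(0)
    data = [((x0, x1), 1 if x0 + x1 > 1 else 0)
            for x0 in (0.0, 0.5, 1.0) for x1 in (0.0, 0.5, 1.0)]
    w = sgd_logistic(data, 2)
    print([round(wi, 2) for wi in w])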

Chapter 8: Neural Networks and Deep Learning

See AIPython for Python implementations of neural networks. This is meant to be runnable pseudo-code and is much less efficient than state-of-the-art systems such as Keras or PyTorch. If you want to use a library, use one of those. If you want to see how the underlying algorithms work, see:

  • learnNN.py allows one to build and train feed-forward neural networks (Section 8.1), including stochastic gradient descent, momentum, RMS-Prop, and Adam (Section 8.2) and dropout (Section 8.3); a minimal standalone sketch of such a network follows this list.
  • keras_mnist.py is Keras code to implement Figure 8.5.
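
To complement learnNN.py, here is a minimal standalone sketch of a feed-forward network with one hidden layer trained by stochastic gradient descent on XOR. It assumes NumPy is available; the architecture, learning rate, and number of steps are arbitrary choices for illustration, and it omits momentum, RMS-Prop, Adam, and dropout.

    # Minimal sketch of a one-hidden-layer network trained by stochastic gradient descent.
    import numpy as np

    rng = np.random.default_rng(0)
    X = np.array([[0, 0], [0, 1], [1, 0], [1, 1]], dtype=float)   # XOR inputs
    y = np.array([[0], [1], [1], [0]], dtype=float)               # XOR targets

    W1 = rng.normal(scale=1.0, size=(2, 8)); b1 = np.zeros(8)     # input -> hidden
    W2 = rng.normal(scale=1.0, size=(8, 1)); b2 = np.zeros(1)     # hidden -> output
    lr = 0.5

    def sigmoid(z):
        return 1.0 / (1.0 + np.exp(-z))

    for step in range(20000):
        i = rng.integers(len(X))                 # stochastic: one example at a time
        x, t = X[i:i+1], y[i:i+1]
        h = sigmoid(x @ W1 + b1)                 # forward pass
        out = sigmoid(h @ W2 + b2)
        d_out = (out - t) * out * (1 - out)      # backpropagate the squared error
        d_h = (d_out @ W2.T) * h * (1 - h)
        W2 -= lr * h.T @ d_out;  b2 -= lr * d_out.sum(axis=0)
        W1 -= lr * x.T @ d_h;    b1 -= lr * d_h.sum(axis=0)

    # Predictions typically approach [0, 1, 1, 0]:
    print(np.round(sigmoid(sigmoid(X @ W1 + b1) @ W2 + b2), 2))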

Chapter 9: Reasoning with Uncertainty

AIPython contains Python implementations of probabilistic inference algorithms:

  • probVariables.py defines random variables
  • probFactors.py defines (probabilistic) factors and conditional probability distributions.
  • probGraphicalModels.py defines graphical models (including belief networks)
  • probRC.py implements recursive conditioning (Section 9.5.1)
  • probVE.py implements variable elimination for graphical models (Section 9.5.2)
  • probStochSim.py implements various stochastic simulation algorithms, including rejection sampling, likelihood weighting (a form of importance sampling), particle filtering, and Gibbs sampling (a form of MCMC); a minimal standalone sketch of rejection sampling follows this list
  • probHMM.py implements algorithms for hidden Markov models (Section 9.6.2)
  • probLocalization.py implements the localization of Example 9.32, and is used to generate Figure 8.19.
  • probDBN.py implements dynamic belief networks (Section 9.6.4)
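
To complement probStochSim.py, here is a minimal standalone sketch of rejection sampling on a tiny rain/sprinkler/wet-grass network. The network structure and conditional probabilities are made up for illustration; they are not from the book, and this is not the AIPython interface.

    # Minimal sketch of rejection sampling for a tiny belief network.
    import random

    def sample_network():
        """Forward-sample (rain, sprinkler, wet) with illustrative probabilities."""
        rain = random.random() < 0.2
        sprinkler = random.random() < (0.01 if rain else 0.4)
        p_wet = 0.99 if (rain and sprinkler) else 0.9 if (rain or sprinkler) else 0.0
        wet = random.random() < p_wet
        return rain, sprinkler, wet

    def rejection_sample(query, evidence, n=100_000):
        """Estimate P(query | evidence) by keeping only samples consistent with the evidence."""
        accepted = matched = 0
        for _ in range(n):
            sample = sample_network()
            if evidence(sample):
                accepted += 1
                matched += query(sample)
        return matched / accepted if accepted else float("nan")

    random.seed(1)
    # Estimate P(rain | grass is wet):
    print(rejection_sample(query=lambda s: s[0], evidence=lambda s: s[2]))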

Chapter 10: Learning With Uncertainty

AIPython has Python implementations for learning with uncertainty:

  • learnKMeans.py implements the k-means algorithm (Section 10.3.1); a minimal standalone sketch follows this list
  • learnEM.py implements the expectation maximization (EM) algorithm for soft clustering (Section 10.3.2)
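
To complement learnKMeans.py, here is a minimal standalone sketch of k-means on one-dimensional data; the data and the interface are illustrative, not the AIPython code.

    # Minimal sketch of k-means: alternate assigning points to the nearest centre
    # and recomputing each centre as the mean of its cluster.
    import random

    def kmeans(points, k, iterations=100):
        centres = random.sample(points, k)
        for _ in range(iterations):
            clusters = [[] for _ in range(k)]
            for p in points:
                nearest = min(range(k), key=lambda i: abs(p - centres[i]))
                clusters[nearest].append(p)
            centres = [sum(c) / len(c) if c else centres[i]
                       for i, c in enumerate(clusters)]
        return centres, clusters

    random.seed(0)
    data = [1.0, 1.2, 0.8, 5.0, 5.3, 4.9, 9.1, 8.8, 9.3]   # three obvious clusters
    centres, clusters = kmeans(data, 3)
    print(sorted(round(c, 2) for c in centres))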

Chapter 11: Causality

See AIPython for Python implementations of the following:

  • probDo.py adds the do-operator to the probabilistic inference algorithms (Section 11.1.1); a minimal standalone sketch of the idea follows this list
  • probCounterfactual.py implements some counterfactual reasoning examples (Section 11.5)
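
To complement probDo.py, here is a minimal standalone sketch of the difference between observing and intervening, on a made-up three-variable network Z -> X -> Y with Z -> Y. The probabilities are illustrative, not from the book, and this is not the AIPython representation.

    # Minimal sketch contrasting observing X with intervening on X (the do-operator).
    # All variables are Boolean; the numbers below are made up.

    P_Z = {True: 0.3, False: 0.7}                               # prior P(Z)
    P_X_given_Z = {True: 0.9, False: 0.2}                       # P(X=true | Z)
    P_Y_given_XZ = {(True, True): 0.8, (True, False): 0.6,
                    (False, True): 0.5, (False, False): 0.1}    # P(Y=true | X, Z)

    def p_y_given_x_observed(x):
        """P(Y=true | X=x): conditioning changes the distribution over Z."""
        joint = {z: (P_X_given_Z[z] if x else 1 - P_X_given_Z[z]) * P_Z[z] for z in P_Z}
        norm = sum(joint.values())
        return sum(P_Y_given_XZ[(x, z)] * joint[z] / norm for z in P_Z)

    def p_y_given_do_x(x):
        """P(Y=true | do(X=x)): intervening leaves Z with its prior distribution."""
        return sum(P_Y_given_XZ[(x, z)] * P_Z[z] for z in P_Z)

    # Observing X=true (about 0.732) differs from intervening do(X=true) (0.66):
    print(round(p_y_given_x_observed(True), 3), round(p_y_given_do_x(True), 3))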

Chapter 12: Planning with Uncertainty

See AIPython for Python implementations of planning under uncertainty:

  • decnNetworks.py implements decision networks (Sections 12.2 and 12.3.1), including search (Section 12.3.3) and variable elimination (Section 12.3.4)
  • mdpProblem.py implements Markov decision processes (MDPs) (Section 12.5), including value iteration (Section 12.5.2) and asynchronous value iteration (Figure 12.8); a minimal standalone sketch of value iteration follows this list
  • mdpExamples.py implements example Markov decision processes (MDPs) (Section 12.5)
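
To complement mdpProblem.py, here is a minimal standalone sketch of value iteration on a made-up two-state MDP (loosely in the style of the book's examples, but the numbers here are illustrative and this is not the AIPython interface).

    # Minimal sketch of value iteration. P[s][a] is a list of (next_state, probability)
    # pairs and R[s][a] is the expected reward for doing a in s.

    def value_iteration(states, actions, P, R, discount=0.9, iterations=100):
        V = {s: 0.0 for s in states}
        for _ in range(iterations):
            V = {s: max(R[s][a] + discount * sum(p * V[s2] for s2, p in P[s][a])
                        for a in actions)
                 for s in states}
        # greedy policy with respect to the final value function
        pi = {s: max(actions,
                     key=lambda a: R[s][a] + discount * sum(p * V[s2] for s2, p in P[s][a]))
              for s in states}
        return V, pi

    states, actions = ["healthy", "sick"], ["relax", "party"]
    P = {"healthy": {"relax": [("healthy", 0.95), ("sick", 0.05)],
                     "party": [("healthy", 0.7), ("sick", 0.3)]},
         "sick":    {"relax": [("healthy", 0.5), ("sick", 0.5)],
                     "party": [("healthy", 0.1), ("sick", 0.9)]}}
    R = {"healthy": {"relax": 7, "party": 10}, "sick": {"relax": 0, "party": 2}}
    print(value_iteration(states, actions, P, R))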

Chapter 13: Reinforcement Learning

See AIPython for Python implementations of reinforcement learning algorithms.

  • rlProblem.py defines reinforcement learning (RL) problems, including constructing environments from MDPs and plotting the accumulated reward.
  • rlExamples.py defines some of the RL problems (Examples 12.29 and 13.2)
  • rlQLearner.py implements Q-learning (Section 13.4.1); a minimal standalone sketch of tabular Q-learning follows this list
  • rlQExperienceReplay.py implements Q-learning with experience replay
  • rlModelLearner.py implements model-based RL (Section 13.8)
  • rlFeatures.py implements a feature-based reinforcement learner (Section 13.9.1); rlMonsterGameFeatures.py defines features for the monster game of Figure 13.2 and Example 13.6.
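
To complement rlQLearner.py, here is a minimal standalone sketch of tabular Q-learning on a made-up two-location environment; the environment, rewards, and hyperparameters are illustrative, not rlProblem.py's interface.

    # Minimal sketch of tabular Q-learning with an epsilon-greedy exploration policy.
    import random

    def q_learning(step, states, actions, episodes=2000, steps=30,
                   alpha=0.2, gamma=0.9, epsilon=0.1):
        """step(s, a) returns (reward, next_state). Returns the learned Q table."""
        Q = {(s, a): 0.0 for s in states for a in actions}
        for _ in range(episodes):
            s = random.choice(states)
            for _ in range(steps):
                a = (random.choice(actions) if random.random() < epsilon
                     else max(actions, key=lambda a2: Q[(s, a2)]))
                r, s2 = step(s, a)
                Q[(s, a)] += alpha * (r + gamma * max(Q[(s2, a2)] for a2 in actions)
                                      - Q[(s, a)])
                s = s2
        return Q

    # Toy environment (made up): moving right from "home" to "cafe" earns a reward.
    def step(s, a):
        if s == "home" and a == "right":
            return 10, "cafe"
        if s == "cafe" and a == "left":
            return 0, "home"
        return -1, s      # bumping into a wall costs 1

    random.seed(0)
    Q = q_learning(step, ["home", "cafe"], ["left", "right"])
    print({k: round(v, 1) for k, v in Q.items()})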

Chapter 14: Multiagent Systems

See AIPython for Python implementations of the following:

  • masProblem.py defines two-player zero-sum games (Section 14.3)
  • masMiniMax.py implements minimax with alpha-beta pruning (Section 14.3.1); a minimal standalone sketch follows this list
  • masLearn.py implements multiagent reinforcement learning with stochastic policies (Section 14.7.2)
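
To complement masMiniMax.py, here is a minimal standalone sketch of minimax with alpha-beta pruning on an explicit game tree; the tree is made up for illustration and this is not the masProblem.py representation.

    # Minimal sketch of minimax with alpha-beta pruning on an explicit game tree.
    # A node is either a number (the value for the maximizer) or a list of child nodes.

    def alpha_beta(node, maximizing, alpha=float("-inf"), beta=float("inf")):
        if isinstance(node, (int, float)):
            return node
        if maximizing:
            value = float("-inf")
            for child in node:
                value = max(value, alpha_beta(child, False, alpha, beta))
                alpha = max(alpha, value)
                if alpha >= beta:
                    break          # beta cut-off: the minimizer will avoid this branch
            return value
        value = float("inf")
        for child in node:
            value = min(value, alpha_beta(child, True, alpha, beta))
            beta = min(beta, value)
            if alpha >= beta:
                break              # alpha cut-off
        return value

    # A depth-2 zero-sum game: the maximizer moves first, then the minimizer.
    tree = [[3, 12, 8], [2, 4, 6], [14, 5, 2]]
    print(alpha_beta(tree, maximizing=True))   # 3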

Chapter 15: Individuals and Relations

The following code runs in Prolog:

  • elect_reln.pl is the electrical wiring example of Example 15.11
  • geography_DB.pl is a database of the geography of some of South America; Figures 15.2 and 15.3
  • geography_CFG.pl is a simple English grammar about the geography of some of South America; Figures 15.8 and 15.9
  • geography_QA.pl can answer English questions about the geography of some of South America; Examples 15.35 and 15.36, and Figure 15.10
  • geography_QA_query.pl answers questions by building the query first; Figures 15.11 and 15.12

Chapter 16: Knowledge Graphs and Ontologies

Chapter 17: Relational Learning and Probabilistic Reasoning

See AIPython for Python implementations of the following:

  • relnCollFilt.py implements the collaborative filtering learner of Section 17.2.1; a minimal standalone sketch follows this list
  • relnProbModels.py converts relational belief networks into standard belief networks, given a population for each logical variable (plate). Any of the inference methods can be used on the resulting network; see Section 17.3
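
To complement relnCollFilt.py, here is a minimal standalone sketch of collaborative filtering by stochastic gradient descent on a low-rank model; the ratings, hyperparameters, and interface are made up and are not the AIPython code.

    # Minimal sketch of collaborative filtering: learn user and item embeddings so that
    # their dot product (plus the global mean) approximates observed ratings.
    import random

    def collab_filter(ratings, users, items, k=2, lr=0.05, reg=0.1, epochs=2000):
        """ratings is a list of (user, item, rating). Returns a prediction function."""
        mean = sum(r for _, _, r in ratings) / len(ratings)
        U = {u: [random.gauss(0, 0.1) for _ in range(k)] for u in users}
        V = {i: [random.gauss(0, 0.1) for _ in range(k)] for i in items}
        for _ in range(epochs):
            u, i, r = random.choice(ratings)        # one rating per stochastic step
            pred = mean + sum(a * b for a, b in zip(U[u], V[i]))
            err = pred - r
            for f in range(k):
                uf, vf = U[u][f], V[i][f]
                U[u][f] -= lr * (err * vf + reg * uf)   # regularized gradient step
                V[i][f] -= lr * (err * uf + reg * vf)

        def predict(u, i):
            return mean + sum(a * b for a, b in zip(U[u], V[i]))
        return predict

    random.seed(0)
    ratings = [("sam", "m1", 5), ("sam", "m2", 1), ("chris", "m1", 4),
               ("chris", "m3", 5), ("kim", "m2", 2), ("kim", "m3", 4)]
    predict = collab_filter(ratings, ["sam", "chris", "kim"], ["m1", "m2", "m3"])
    print(round(predict("sam", "m3"), 2))   # a prediction for an unobserved rating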

Chapter 18: The Social Impact of Artificial Intelligence


Chapter 19: Retrospect and Prospect

AIPython and our other code are licensed under a Creative Commons Attribution-NonCommercial-ShareAlike 4.0 International License.
Last updated 2023-08-24, David Poole, Alan Mackworth