AIPython: Python Code for AIFCA

David Poole and Alan Mackworth

This Python code is meant to demonstrate some of the algorithms in Artificial Intelligence: Foundations of Computational Agents, third edition. All code is copyright David Poole and Alan Mackworth 2023, and licensed under a Creative Commons Attribution-NonCommercial-ShareAlike 4.0 International License.


  • Latest version: 0.9.12 (2023-11-21). Recent Updates:
    • Now includes Datalog and a logic-programming top-down proof procedure (Ch. 15) and a triple store (Ch. 16).
    • Includes graphical user interfaces (GUIs) for the search and CSP code, implementing most of the solve mode of the AIspace search and consistency apps.
    • GUIs for decision-theoretic planning (MDPs) and reinforcement learning.
    • Relational Learning updates, including better graphical display.

We have followed these principles:

  • The code should be simple and as close to the pseudo-code as possible. We have chosen readability over efficiency: we have tried to keep the asymptotic complexity as good as possible (except in some cases where the more efficient code is left as an exercise), but have not optimized the constant factors.
  • The code should work, but it does not include all possible functionality. There are parts missing that could be used as exercises.
  • We make extensive use of list comprehensions, sets, and dictionaries. The only library we use is matplotlib; students don't need to understand multiple libraries. We tried to write the sort of code that a student could write from the pseudo-code without extensive searching through libraries.
  • It is designed for Python 3.9 and beyond; we suggest using the latest version of Python in the distribution you prefer.
  • We use a simple method for tracing the code, using a method display, which is like print but includes an integer display level. The user can then set a maximum display level for the corresponding object (or class), and so has control over the amount of detail printed. We try not to litter the code with too many tracing statements. The use of display enables a graphical user interface (GUI) without changing the algorithm code.
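As a sketch of this idea (the actual AIPython code may differ in details), a minimal version of such a display mechanism might look like:

```python
class Displayable:
    """Superclass for objects that print trace information.

    display(level, ...) acts like print, but only produces output when
    level <= max_display_level, so the user controls how much detail
    is shown per object or per class.
    """
    max_display_level = 1   # can be overridden on a class or an instance

    def display(self, level, *args, **nargs):
        if level <= self.max_display_level:
            print(*args, **nargs)
```

An algorithm class then inherits from this and calls `self.display(2, ...)` for detail-level-2 traces; setting `searcher.max_display_level = 3` on an instance turns on more detailed tracing, and a GUI can override `display` instead of printing.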

This is not polished code. It is meant to demonstrate representations that work together. It will probably never be polished (and when it is polished, it is probably time to throw it away and start again), and should not be relied on. We make no warranty of any kind, expressed or implied, with regard to these programs or the documentation.

Main Source Code Document and Distribution

The code document, aipython.pdf contains and explains all of the code. The whole code base can be downloaded from

Note that the document and the code are derived from the same source, and so should be consistent (even up to the same line numbers).

What is in the distribution?

The above zip file and PDF are probably what you want. For the brave, you can get all of the LaTeX sources, but they may not be up-to-date during development and contain non-working code under development and lots of extraneous material. A script is provided to extract the code from the LaTeX sources.

The following is a list of all of the files (but may be out of date). Please refer to aipython.pdf for the documentation and source code.

1. Python for Artificial Intelligence

  • some tricky cases.
  • some useful utilities for tracing, argmax, etc.
  • simple code for displaying intermediate results. This can be overridden if you want fancier displaying (e.g. in a GUI).

2. Agent Architectures and Hierarchical Control

  • defines a simple infrastructure for agents and environments
  • a paper-buying agent and environment, of the form that might be used in a smart home.
  • defines a hierarchical agent (spread over several files). See also the reinforcement learning agents.

3. Searching for Solutions

  • defines a search problem in terms of the start nodes, a predicate to test whether a node is a goal, the neighbors function, and an optional heuristic function. This is the interface used by the search algorithms.
  • contains some example graphs.
  • the generic search algorithm that implements both depth-first search and A* search. It does not do multi-path pruning or cycle checking. It is easy to modify to make other search algorithms.
  • does A* and other search algorithms with multiple path pruning.
  • contains a GUI for stepping through search algorithms
  • defines depth-first branch-and-bound search. It does not do multi-path pruning or cycle checking.
  • and contain code to enable students to understand particular problems that are explored in exercises.
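For illustration, a minimal version of the search-problem interface and the generic depth-first search described above might look like the following (the class and function names here are illustrative, not necessarily those used in the aipython source):

```python
class SearchProblem:
    """A search problem: start node, goal test, neighbors, optional heuristic."""
    def __init__(self, start, goals, neighbors, heuristic=None):
        self.start = start                    # the start node
        self.goals = goals                    # set of goal nodes
        self.neighbors = neighbors            # dict: node -> list of (neighbor, cost)
        self.h = heuristic or (lambda n: 0)   # heuristic defaults to 0

    def is_goal(self, node):
        return node in self.goals

def dfs(problem):
    """Generic depth-first search over paths; returns a path to a goal or None.
    As in the text, it does no cycle checking or multiple-path pruning,
    so it should only be run on acyclic graphs."""
    frontier = [[problem.start]]              # a stack of paths
    while frontier:
        path = frontier.pop()
        node = path[-1]
        if problem.is_goal(node):
            return path
        for (nbr, _cost) in problem.neighbors.get(node, []):
            frontier.append(path + [nbr])
    return None
```

Swapping the stack for a priority queue ordered by cost + heuristic turns the same loop into A*, which is the sense in which one generic searcher covers both algorithms.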

4. Reasoning with Constraints

  • defines variables that are used for constraint satisfaction problems (CSPs), probabilistic models, and decision networks.
  • provides the primitives to define constraint satisfaction problems (CSPs); some example CSPs are also defined.
  • solves CSPs with depth-first search
  • converts CSPs into search problems for which any of the search methods will work.
  • uses domain splitting and generalized arc consistency to solve CSPs.
  • lets users understand arc consistency and domain splitting. Users select arcs to process and select nodes to split. It provides animation of domain pruning.
  • uses stochastic local search, in particular a probabilistic mix of choosing the variable with the most conflicts, any-conflict, and a random variable, to solve CSPs. It only maintains the data structures needed for the algorithm (e.g., a priority queue when we need the best variable, but not for any-conflict). Each step is at most logarithmic in the number of variables (to maintain the priority queue), but depends on the number of neighbors. It also plots runtime distributions (for number of steps only).
  • implements optimization with soft constraints.
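To give a flavor of the arc-consistency code above, here is a small AC-3-style sketch for binary constraints (this is illustrative only; the aipython implementation handles general constraints and also does domain splitting):

```python
def arc_consistency(domains, constraints):
    """Prune domains until all binary constraints are arc consistent.
    domains: dict mapping variable -> set of values
    constraints: list of (X, Y, pred) requiring pred(x, y) to hold."""
    # Directed arcs: (X, Y, pred) means "prune X's domain against Y".
    arcs = []
    for (x, y, pred) in constraints:
        arcs.append((x, y, pred))
        # the reversed arc checks the same constraint from Y's side
        arcs.append((y, x, lambda vy, vx, p=pred: p(vx, vy)))
    to_do = list(arcs)
    while to_do:
        (x, y, pred) = to_do.pop()
        # keep only values of x that have some support in y's domain
        supported = {vx for vx in domains[x]
                     if any(pred(vx, vy) for vy in domains[y])}
        if supported != domains[x]:
            domains[x] = supported
            # x's domain shrank, so arcs into x must be rechecked
            to_do += [(z, w, p) for (z, w, p) in arcs if w == x and z != y]
    return domains
```

If any domain becomes empty there is no solution; if all domains become singletons the remaining values form a solution; otherwise a domain is split and the procedure recurses, which is the domain-splitting part of the solver.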

5. Propositions and Inference

  • defines definite clauses
  • bottom-up inference for definite clauses
  • top-down inference for definite clauses (including ask-the-user)
  • provides an interactive interface to explore proofs.
  • Horn clauses for assumables, including consistency-based diagnosis.
  • negation-as-failure.
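The bottom-up proof procedure for definite clauses is short enough to sketch in full; this toy version (a fixed-point loop, with clauses as (head, body) pairs rather than the aipython classes) derives every atom that logically follows:

```python
def bottom_up(clauses):
    """Bottom-up proof procedure for propositional definite clauses.
    clauses: list of (head, [body atoms]); a fact has an empty body.
    Returns the set of all atoms that are logical consequences."""
    consequences = set()
    changed = True
    while changed:
        changed = False
        for (head, body) in clauses:
            # fire a clause when its whole body is already derived
            if head not in consequences and all(b in consequences for b in body):
                consequences.add(head)
                changed = True
    return consequences
```

The top-down (SLD-style) procedure works in the other direction, reducing a query to subgoals; ask-the-user extends it by querying the user for atoms with no clauses.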

6. Deterministic Planning

  • defines planning problems in STRIPS.
  • defines a forward planner that can use either A* or the branch-and-bound searchers.
  • defines heuristics for the running example
  • defines a regression planner that can use either A* or the branch-and-bound searchers. The heuristic function is always 0.
  • creates a CSP from a description of a planning problem, which can then be solved with any of the CSP solvers.
  • implements a partial order planner that can use either of the searchers.
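As a sketch of the STRIPS representation that the forward planner searches over (using the classic set-of-atoms formulation; the names here are hypothetical, and the aipython code uses a feature-based variant):

```python
class Strips:
    """A STRIPS action: preconditions, an add list, and a delete list."""
    def __init__(self, name, preconds, adds, deletes):
        self.name = name
        self.preconds = set(preconds)
        self.adds = set(adds)
        self.deletes = set(deletes)

def applicable(state, act):
    """An action can be done when its preconditions hold in the state."""
    return act.preconds <= state

def progress(state, act):
    """The state resulting from doing act (assumes it is applicable)."""
    return (state - act.deletes) | act.adds
```

A forward planner then treats states as search nodes and `progress` over the applicable actions as the neighbors function, so any of the searchers from Chapter 3 can be used unchanged.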

7. Supervised Machine Learning

  • representing learning problems.
  • making predictions and learning probabilities (learning with no inputs).
  • decision tree learning.
  • cross validation and parameter tuning.
  • linear classifier and regression and logistic regression.
  • functional gradient boosting.
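To suggest the style of the linear-learner code (this is a from-scratch sketch, not the aipython class structure), here is stochastic gradient descent for simple linear regression:

```python
def linear_regression_sgd(data, lr=0.01, epochs=100):
    """Learn y ~ w*x + b by stochastic gradient descent on squared error.
    data: list of (x, y) pairs with scalar x. Returns (w, b)."""
    w, b = 0.0, 0.0
    for _ in range(epochs):
        for (x, y) in data:
            err = (w * x + b) - y     # prediction error on this example
            w -= lr * err * x         # gradient of 0.5*err^2 w.r.t. w
            b -= lr * err             # gradient w.r.t. b
    return w, b
```

Logistic regression replaces the identity activation with a sigmoid and squared error with log loss, but the update loop has the same shape, which is why the book treats them uniformly.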

8. Neural Networks and Deep Learning

  • deep neural network learning.
  • Includes feedforward networks and the momentum, RMSProp, and Adam optimizers. This code is very inefficient compared to the state of the art; if you want to train deep networks for non-trivial cases, we suggest using Keras or PyTorch. The naming of parameters in the AIPython code is consistent with Keras.
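As an example of what one of these optimizers amounts to, here is the momentum update rule in isolation (an illustrative sketch on a single scalar parameter, not the aipython layer code):

```python
def momentum_step(w, grad, velocity, lr=0.1, alpha=0.9):
    """One gradient-descent-with-momentum step.
    velocity accumulates an exponentially decayed sum of past gradients,
    which damps oscillation and speeds progress along consistent directions."""
    v = alpha * velocity - lr * grad
    return w + v, v
```

RMSProp and Adam refine this by also tracking a running average of squared gradients and scaling the step size per parameter.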

9. Reasoning with Uncertainty

  • defines factors and multiple representations for conditional probabilities.
  • defines graphical models, and belief networks as special cases.
  • provides a suite of example belief networks.
  • implements recursive conditioning for graphical models (and so for belief networks).
  • implements variable elimination for graphical models.
  • implementations of multiple stochastic simulation algorithms for belief networks.
  • implementation of hidden Markov models. It defines both exact inference for filtering and particle filtering.
  • provides a GUI for an example of probabilistic localization.
  • defines dynamic belief networks and uses the variable elimination code for filtering.
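The exact-filtering part of the HMM code can be conveyed in a few lines; this sketch implements the standard forward algorithm over dictionary-based distributions (the data structures here are illustrative, not the aipython representation):

```python
def hmm_filter(prior, trans, obs_model, observations):
    """Exact filtering for a hidden Markov model (forward algorithm).
    prior: dict state -> P(state)
    trans: dict (s0, s1) -> P(s1 | s0)
    obs_model: dict (s, o) -> P(o | s)
    Returns the posterior over states given all observations."""
    belief = dict(prior)
    for o in observations:
        # predict: push the belief through the transition model
        predicted = {s1: sum(belief[s0] * trans[(s0, s1)] for s0 in belief)
                     for s1 in belief}
        # update: weight by the observation likelihood and normalize
        unnorm = {s: predicted[s] * obs_model[(s, o)] for s in predicted}
        z = sum(unnorm.values())
        belief = {s: p / z for s, p in unnorm.items()}
    return belief
```

Particle filtering approximates the same predict-update cycle with a weighted sample of states, which scales to models where the exact sum is infeasible.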

10. Learning with Uncertainty

  • demonstrates beta distribution as a belief network and as plots.
  • k-means for unsupervised learning.
  • EM for unsupervised learning.
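The k-means algorithm mentioned above alternates two steps: assign each point to its nearest center, then move each center to the mean of its points. A minimal one-dimensional sketch (not the aipython class, which handles general feature vectors):

```python
def kmeans(points, k, centers, steps=10):
    """Lloyd's algorithm for 1-D points with given initial centers."""
    for _ in range(steps):
        clusters = {c: [] for c in range(k)}
        for p in points:
            # assignment step: nearest center by squared distance
            nearest = min(range(k), key=lambda c: (p - centers[c]) ** 2)
            clusters[nearest].append(p)
        # update step: each center moves to the mean of its cluster
        centers = [sum(cl) / len(cl) if cl else centers[c]
                   for c, cl in clusters.items()]
    return centers
```

EM for mixture models has the same two-phase structure, but with soft (probabilistic) assignments in place of the hard nearest-center assignment.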

11. Causality

  • enables queries with "do" as well as observations
  • allows users to explore counterfactual reasoning

12. Planning with Uncertainty

  • defines and solves decision networks (influence diagrams).
  • defines agent-environment problem domains that can be used for MDPs and reinforcement learning. It also defines value iteration and asynchronous value iteration.
  • defines some example problem domains.
  • defines a GUI to explore value iteration.
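The value iteration that the GUI steps through is a short fixed-point computation; a sketch over dictionary-based MDPs (illustrative data structures, not the aipython classes):

```python
def value_iteration(states, actions, P, R, gamma=0.9, eps=1e-6):
    """Compute the optimal value function of a discounted MDP.
    P[(s, a)]: dict s1 -> transition probability
    R[(s, a)]: expected immediate reward. Iterates the Bellman update
    until the maximum change is below eps."""
    V = {s: 0.0 for s in states}
    while True:
        newV = {s: max(R[(s, a)] + gamma * sum(p * V[s1]
                                               for s1, p in P[(s, a)].items())
                       for a in actions)
                for s in states}
        if max(abs(newV[s] - V[s]) for s in states) < eps:
            return newV
        V = newV
```

Asynchronous value iteration updates one state at a time instead of sweeping all states, which is what the aipython variant demonstrates.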

13. Reinforcement Learning

  • Reinforcement learning environments:
    • defines reinforcement learning agents and environments, constructing an environment from an MDP and from the agent-environment problem domains of the previous chapter. It also implements exploration strategies, simulates learning agents, and plots performance.
    • defines multiple reinforcement learning domains, including a simple game and a monster game.
  • Learners:
    • Q-learner
    • Q-learner with experience replay
    • Model-based reinforcement learner.
    • Feature-based reinforcement learner, with features defined for some environments, including the monster game.
  • defines a graphical user interface (GUI) for exploring how the value function, the Q-function and optimal policy change as an agent acts.
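The core of the Q-learner above is a single temporal-difference update, which can be sketched independently of any environment (the function signature here is illustrative, not the aipython API):

```python
def q_update(Q, s, a, reward, s1, actions, alpha=0.1, gamma=0.9):
    """One Q-learning update after doing a in s, getting reward, reaching s1.
    Q is a dict from (state, action) to estimated value; unseen pairs are 0."""
    best_next = max(Q.get((s1, a1), 0.0) for a1 in actions)
    old = Q.get((s, a), 0.0)
    # move the estimate toward the bootstrapped target
    Q[(s, a)] = old + alpha * (reward + gamma * best_next - old)
    return Q
```

Experience replay stores (s, a, reward, s1) tuples and repeatedly applies this same update to sampled past experiences; the model-based learner instead estimates the transition and reward models and plans with them.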

14. Multiagent Systems

  • defines two-player zero-sum games.
  • minimax with alpha-beta pruning.
  • learns stochastic policies for repeated normal-form games.

15. Individuals and Relations

  • Implementations of Datalog and logic programs, with a top-down proof procedure. Not nearly as efficient as Prolog.

16. Knowledge Graphs and Ontologies

  • implements a triple store with efficient indexing of triples, no matter which of the subject, verb, or object is specified.
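One simple way to get that any-position indexing (a sketch of the idea, not necessarily how the aipython code does it) is to index each triple under every pattern obtained by replacing positions with a wildcard:

```python
class TripleStore:
    """A (subject, verb, object) store where lookups are a single dict
    access whichever positions are specified (None acts as a wildcard)."""
    def __init__(self):
        self.index = {}   # pattern -> list of matching triples

    def add(self, triple):
        (s, v, o) = triple
        # each triple appears under all 8 wildcard patterns it matches
        for pattern in [(s, v, o), (s, v, None), (s, None, o), (None, v, o),
                        (s, None, None), (None, v, None), (None, None, o),
                        (None, None, None)]:
            self.index.setdefault(pattern, []).append(triple)

    def lookup(self, pattern):
        return self.index.get(pattern, [])
```

This trades an eight-fold indexing cost on insertion for constant-time retrieval on any query pattern.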

17. Relational Learning

  • collaborative filtering.
  • represents relational probabilistic models (plate models), and grounds them into graphical models that can use any of the inference methods.

Copyright © David Poole and Alan Mackworth, 2023. This web page and the code are released under a Creative Commons Attribution-Noncommercial-Share Alike license.