AIPython: Python Code for AIFCA

David Poole and Alan Mackworth

This Python code is meant to demonstrate some of the algorithms in Artificial Intelligence: foundations of computational agents, second edition. All code is copyright © David Poole and Alan Mackworth 2017, and licensed under a Creative Commons Attribution-NonCommercial-ShareAlike 4.0 International License.

We have followed these principles:

  • The code should be simple and as close to the pseudo-code as possible. We have chosen readability over efficiency: we have tried to keep the asymptotic complexity as good as possible (except in some cases where the more efficient code is left as an exercise), but have not optimized the constant factors.
  • The code should work, but it does not include all possible functionality. There are parts missing that could be used as exercises.
  • We make extensive use of list comprehensions, sets, and dictionaries. We try to not use libraries where it is not obvious that the library is appropriate. It is the sort of code that a student could write without extensive searching through libraries.
  • It is designed for Python 3.8 and beyond (not Python 2). You may as well use the latest version.
  • We use a simple method for tracing the code, a method display, which is like print but includes an integer display level. The user can then set a maximum display level for the corresponding object (or class), and so has control over the amount of detail printed. We try not to litter the code with too many tracing statements. The use of display is designed to enable future graphical displays of the algorithms.
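A minimal sketch of what such a display mechanism might look like (the names Displayable and max_display_level here are illustrative; the code in the distribution is authoritative):

```python
class Displayable(object):
    """A class whose display method prints with a controllable level of detail."""
    max_display_level = 1   # can be overridden per class or per object

    def display(self, level, *args, **nargs):
        """Print the arguments if level is at most the current
        max_display_level for this object (or its class)."""
        if level <= self.max_display_level:
            print(*args, **nargs)
```

Setting max_display_level on an instance (or on a subclass) then controls how much tracing that object prints.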

This is not polished code. It is meant to demonstrate representations that work together. It will probably never be polished (and when it is polished, it is probably time to throw it away and start again), and should not be relied on. We make no warranty of any kind, expressed or implied, with regard to these programs or the documentation.

Main Source Code Document and Distribution

The code document, aipython.pdf, contains and explains all of the code. The whole code base can be downloaded from

Note that the document and the code are derived from the same source, and so should be consistent (even up to the same line numbers).

AISpace 2

We have a beta prototype of AISpace 2, which includes AIPython and Jupyter Lab integration, with graphical interaction with (currently) the search, constraint, and planning algorithms. It is no longer under development and unfortunately cannot be maintained.

Separate Files

The above zip file and pdf are probably what you want. For the brave, you can get all of the LaTeX sources, but they may not be up to date during development, and they contain non-working code under development and extraneous material. can be used to extract the code from the LaTeX sources.

The following is a list of all of the files, and may be out of date. Please refer to aipython.pdf and for the latest versions (and for code that includes all dependencies).

Python for Artificial Intelligence

  • some tricky cases.
  • some useful utilities for tracing, argmax, etc.
  • simple code for displaying intermediate results. This is overridden in the AISpace 2 code above to allow for interaction.
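One of the utilities mentioned above, argmax, might be sketched as follows for iterables of (element, value) pairs, breaking ties at random (an illustrative version; the utilities in the distribution are authoritative):

```python
import random

def argmax(gen):
    """Return an element of gen, an iterable of (element, value) pairs,
    with maximal value; ties are broken at random."""
    best = []                      # elements with the best value seen so far
    best_value = -float('inf')
    for elt, value in gen:
        if value > best_value:
            best, best_value = [elt], value
        elif value == best_value:
            best.append(elt)
    return random.choice(best)
```

Because it accepts any iterable of pairs, it works directly on generator expressions such as ((a, q[a]) for a in actions).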

Agents and Control

Searching for Solutions

  • defines a search problem in terms of the start nodes, a predicate to test whether a node is a goal, a neighbors function, and an optional heuristic function. This is the interface that the search algorithms use. It also contains some example graphs.
  • the generic search algorithm that implements both depth-first search and A* search. It does not do multi-path pruning or cycle checking. It is easy to modify to make other search algorithms. does A* with multiple path pruning.
  • defines depth-first branch-and-bound search. It does not do multi-path pruning or cycle checking.
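To illustrate the shape of this interface and the generic algorithm, here is a minimal sketch (the class and function names are illustrative, not AIPython's actual API). With the frontier kept as a priority queue ordered by path cost plus heuristic, it behaves as A*:

```python
import heapq

class SearchProblem:
    """A search problem given by a start node, a set of goal nodes,
    weighted arcs, and an optional heuristic."""
    def __init__(self, start, goals, arcs, heuristic=None):
        self.start = start
        self.goals = goals
        self.arcs = arcs                       # {node: [(neighbor, cost), ...]}
        self.heuristic = heuristic or (lambda n: 0)

    def is_goal(self, node):
        return node in self.goals

    def neighbors(self, node):
        return self.arcs.get(node, [])

def astar_search(problem):
    """Generic frontier search with a priority queue ordered by
    cost + heuristic (i.e., A* without multi-path pruning).
    Returns (path, cost) for the first goal found, or None."""
    frontier = [(problem.heuristic(problem.start), 0, [problem.start])]
    while frontier:
        _, cost, path = heapq.heappop(frontier)
        node = path[-1]
        if problem.is_goal(node):
            return path, cost
        for (nbr, arc_cost) in problem.neighbors(node):
            new_cost = cost + arc_cost
            heapq.heappush(frontier,
                           (new_cost + problem.heuristic(nbr),
                            new_cost, path + [nbr]))
    return None
```

Swapping the priority queue for a stack of paths would give depth-first search instead; that substitution is the sense in which the algorithm is generic.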

Reasoning with Constraints

  • defines a constraint satisfaction problem (CSP). defines some example CSPs.
  • defines a searcher for CSPs that searches through the space of partial assignments.
  • uses domain splitting and generalized arc consistency to solve CSPs. It may be tricky for some students to write this from scratch because they need to be careful when side effects are okay, and when CSPs need to be copied.
  • uses stochastic local search, in particular a probabilistic mix of the variable with the most conflicts, any-conflict, and a random variable, to solve CSPs. It only maintains the data structures needed for the algorithm (e.g., a priority queue when we need the best variable, but not when we do an any-conflict). Each step is at most logarithmic in the number of variables (to maintain the priority queue), but depends on the number of neighbours. It also plots runtime distributions (for the number of steps only).
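A minimal sketch of a CSP and a depth-first searcher through the space of partial assignments (illustrative names and representation, not the distribution's API): constraints are (scope, condition) pairs, and a partial assignment is checked only against constraints whose scope is fully assigned.

```python
class CSP:
    """A CSP given by a dict of domains and a list of constraints,
    each a (scope, condition) pair."""
    def __init__(self, domains, constraints):
        self.domains = domains          # {var: set_of_values}
        self.constraints = constraints  # [((var, ...), predicate), ...]

def consistent(csp, assignment):
    """True if assignment violates no constraint whose scope is assigned."""
    return all(cond(*(assignment[v] for v in scope))
               for scope, cond in csp.constraints
               if all(v in assignment for v in scope))

def dfs_solve(csp, assignment=None):
    """Depth-first search through partial assignments; returns a
    total consistent assignment, or None if there is none."""
    assignment = assignment or {}
    if len(assignment) == len(csp.domains):
        return assignment
    var = next(v for v in csp.domains if v not in assignment)
    for val in csp.domains[var]:
        extended = {**assignment, var: val}
        if consistent(extended and csp or csp, extended) if False else consistent(csp, extended):
            result = dfs_solve(csp, extended)
            if result is not None:
                return result
    return None
```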

Propositions and Inference

Deterministic Planning

  • defines planning problems in STRIPS.
  • defines a forward planner that can use either A* or the branch-and-bound searchers.
  • defines a regression planner that can use either A* or the branch-and-bound searchers. The heuristic function is always 0. An exercise could be to add a heuristic.
  • creates a CSP from a description of a planning problem, which can then be solved with any of the CSP solvers.
  • implements a partial order planner that can use either of the searchers.
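A STRIPS problem can be sketched with preconditions and effects as dictionaries of feature:value pairs (an illustrative encoding, not necessarily the distribution's exact representation); a forward planner then searches over states produced by applying applicable actions:

```python
class Strips:
    """A STRIPS action: preconditions and effects are both dicts
    mapping features to values."""
    def __init__(self, name, preconds, effects):
        self.name = name
        self.preconds = preconds
        self.effects = effects

def applicable(state, action):
    """An action is applicable when all its preconditions hold."""
    return all(state.get(f) == v for f, v in action.preconds.items())

def apply_action(state, action):
    """Successor state: effects override, unmentioned features persist."""
    return {**state, **action.effects}
```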

Supervised Machine Learning

Neural Networks and Deep Learning

  • deep neural network learning.
  • Includes feedforward networks and the momentum, RMSProp, and Adam optimizers. This code is very inefficient compared to the state of the art; if you want to train deep networks for non-trivial cases, we suggest using Keras or PyTorch. The naming in the AIPython code is consistent with Keras.
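As an illustration of one of these optimizers, a single SGD-with-momentum update on plain Python lists might look like this (a generic sketch of the standard update rule, not the distribution's Keras-style code):

```python
def momentum_step(params, grads, velocities, lr=0.1, alpha=0.9):
    """Update params in place: each velocity is a decaying sum of past
    gradients, and parameters move along the velocity."""
    for i in range(len(params)):
        velocities[i] = alpha * velocities[i] - lr * grads[i]
        params[i] += velocities[i]
```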

Reasoning with Uncertainty

  • defines random variables.
  • defines graphical models, and belief networks as special cases.
  • implements recursive conditioning for graphical models (and so for belief networks).
  • implements variable elimination for graphical models.
  • defines factors and factor operations, including multiplying factors, summing out a variable and observing a variable.
  • implementations of multiple stochastic simulation algorithms for belief networks.
  • implementation of hidden Markov models. It defines both exact inference for filtering and particle filtering.
  • defines dynamic belief networks and uses the variable elimination code for filtering.
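One of the factor operations mentioned above, summing out a variable, can be sketched with a factor stored as a tuple of variables plus a table indexed by value tuples (an illustrative representation, not the distribution's Factor classes):

```python
def sum_out(variables, table, var):
    """Sum var out of a factor given as (variables, table), where table
    maps tuples of values (in variables order) to numbers.
    Returns the new (variables, table) pair."""
    i = variables.index(var)
    new_vars = variables[:i] + variables[i+1:]
    new_table = {}
    for vals, p in table.items():
        key = vals[:i] + vals[i+1:]              # drop var's value
        new_table[key] = new_table.get(key, 0) + p
    return new_vars, new_table
```

Variable elimination repeatedly multiplies the factors containing a variable and then sums that variable out.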

Learning with Uncertainty


  • enables queries with "do" as well as observations.

Planning with Uncertainty

Reinforcement Learning

Reinforcement learning environments:


Multiagent Systems

Relational Learning

Copyright © David Poole and Alan Mackworth, 2017. This web page and the code are released under a Creative Commons Attribution-Noncommercial-Share Alike license.