AIPython: Python Code for AIFCA

David Poole and Alan Mackworth

This Python code is meant to demonstrate some of the algorithms in Artificial Intelligence: Foundations of Computational Agents, second edition. All code is copyright by David Poole and Alan Mackworth and licensed under a Creative Commons Attribution-NonCommercial-ShareAlike 4.0 International License.

We have followed the following principles:

  • The code should be simple and as close to the pseudo-code as possible. We have chosen readability over efficiency: we have tried to keep the asymptotic complexity as good as possible (except in some cases where the more efficient code is an exercise), but have not optimized the constant factors.
  • The code should work, but it does not include all possible functionality. There are parts missing that could be used as exercises.
  • We make extensive use of list comprehensions, sets, and dictionaries. We try not to use libraries unless it is obvious that the library is appropriate. It is the sort of code that a student could write without extensive searching through libraries.
  • It is designed for Python 3.3 and beyond (not Python 2); you may as well use the latest version.
  • We use a simple method for tracing the code, using a method display, which is like print but includes an integer display level. The user can then set a maximum display level for the corresponding object (or class), and so has control over the amount of detail printed. We try not to litter the code with too many tracing statements. The use of display is designed to enable future graphical displays of the algorithms. A minimal sketch of this mechanism follows this list.
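
As a rough illustration of this tracing mechanism, here is a minimal sketch; the names Displayable and max_display_level are illustrative assumptions and may not match the distribution exactly.

class Displayable(object):
    """Minimal sketch of a tracing base class (illustrative names)."""
    max_display_level = 1   # maximum detail level to print; can be set per object or class

    def display(self, level, *args, **nargs):
        """Print the arguments if level is at most max_display_level."""
        if level <= self.max_display_level:
            print(*args, **nargs)

class ExampleSearcher(Displayable):
    def step(self):
        self.display(1, "high-level progress message")   # printed by default
        self.display(3, "detailed trace of internals")    # printed only if max_display_level >= 3

s = ExampleSearcher()
s.max_display_level = 3   # turn on detailed tracing for this object
s.step()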

This is not polished code. It is meant to provide representations that work together. It will probably never be polished (or when it is polished, it is probably time to throw it away and start again), and should not be relied on. We make no warranty of any kind, expressed or implied, with regard to these programs or the documentation.

Main Source Code Document and Distribution

The code document, aipython.pdf, contains and explains all of the code. The whole code base can be downloaded from aipython.zip.

Note that the document and the code are derived from the same source, and so should be consistent (even up to the same line numbers). extract_code.py can be used to extract the code from the LaTeX sources. You can get all of the sources, but they may not be up to date during development and they contain lots of working code.

Separate Files

This list may be out of date. Please refer to aipython.pdf and aipython.zip for the latest versions.

Python for Artificial Intelligence

Agents and Control

Searching for Solutions

  • searchProblem.py defines a search problem in terms of the start nodes, a predicate to test if a node is a goal, the neighbors function, and an optional heuristic function. This is the interface that the search algorithms use. It also contains some example graphs. A hedged sketch of such an interface, and of A* search over it, follows this list.
  • searchAStar.py defines A* search. It does not do multi-path pruning or cycle checking. It is easy to modify to make other search algorithms. searchMPP.py does A* with multiple path pruning.
  • searchBranchAndBound.py defines depth-first branch-and-bound search. It does not do multi-path pruning or cycle checking.
  • searchDepthFirst.py defines depth-first search for finding a goal node. It can be simpler than the other searchers because it does not need to return a path to a goal.
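
The following is a hedged sketch of the kind of interface described above, together with a simple A* searcher over it. The names (SimpleSearchProblem, start_node, is_goal, neighbors, heuristic, astar) and the explicit-graph representation are illustrative assumptions, not necessarily what the files use; like searchAStar.py, the searcher below does no multi-path pruning or cycle checking.

import heapq

class SimpleSearchProblem(object):
    """A search problem over an explicit graph (illustrative sketch).
    arcs maps each node to a list of (neighbor, cost) pairs;
    hmap optionally maps nodes to heuristic values."""
    def __init__(self, arcs, start, goals, hmap=None):
        self.arcs = arcs
        self.start = start
        self.goals = goals
        self.hmap = hmap or {}

    def start_node(self):
        return self.start

    def is_goal(self, node):
        return node in self.goals

    def neighbors(self, node):
        return self.arcs.get(node, [])

    def heuristic(self, node):
        return self.hmap.get(node, 0)

def astar(problem):
    """Return (cost, path) for a lowest-cost path to a goal, or None.
    No cycle checking or multi-path pruning, matching the description above."""
    start = problem.start_node()
    frontier = [(problem.heuristic(start), 0, [start])]   # (f-value, cost, path)
    while frontier:
        _, cost, path = heapq.heappop(frontier)
        node = path[-1]
        if problem.is_goal(node):
            return cost, path
        for (nbr, arc_cost) in problem.neighbors(node):
            new_cost = cost + arc_cost
            heapq.heappush(frontier,
                           (new_cost + problem.heuristic(nbr), new_cost, path + [nbr]))
    return None

# Example: the lowest-cost path from 'a' to 'c' is a -> b -> c with cost 2
problem = SimpleSearchProblem(
    arcs={'a': [('b', 1), ('c', 3)], 'b': [('c', 1)]},
    start='a', goals={'c'})
print(astar(problem))   # (2, ['a', 'b', 'c'])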

Reasoning with Constraints

  • cspProblem.py defines a constraint satisfaction problem. cspExamples.py defines some example CSPs.
  • cspSearch.py defines a searcher for CSPs that searches through the space of partial assignments.
  • cspConsistency.py uses domain splitting and generalized arc consistency to solve CSPs. It may be tricky for some students to write this from scratch because they need to be careful about when side effects are okay and when CSPs need to be copied.
  • cspSLS.py uses stochastic local search, in particular a probabilistic mix of choosing the variable with the most conflicts, an any-conflict step, and a randomly chosen variable, to solve CSPs. It only maintains the data structures needed for the algorithm (e.g., a priority queue when we need the best variable, but not when we do an any-conflict step). Each step is at most logarithmic in the number of variables (to maintain the priority queue), but depends on the number of neighbors. It also plots runtime distributions (for the number of steps only).
Note: there are two "obvious" choices for the representation of constraints: (1) as a set of variables and a string that can be evaluated using "eval" once the variables are assigned, or (2) as a tuple of variables and a function (e.g., a lambda expression) that can be applied to the values of the variables. The code above uses (2); we eventually converged on it mainly because it was more natural for the complicated cases, e.g., when the string or function has to be constructed in the CSP planner. A minimal sketch of representation (2) follows.
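
Here is a minimal sketch of representation (2): a constraint stored as a scope together with a condition function. The names Constraint and holds are illustrative assumptions, not necessarily those used in cspProblem.py.

class Constraint(object):
    """A constraint as a scope plus a condition (illustrative sketch)."""
    def __init__(self, scope, condition):
        self.scope = scope          # tuple of variable names
        self.condition = condition  # function of len(scope) arguments returning a bool

    def holds(self, assignment):
        """True if the constraint is satisfied by assignment,
        a dict mapping every variable in the scope to a value."""
        return self.condition(*(assignment[v] for v in self.scope))

# Example constraints: X < Y and Y != Z
lt = Constraint(('X', 'Y'), lambda x, y: x < y)
ne = Constraint(('Y', 'Z'), lambda y, z: y != z)
print(lt.holds({'X': 1, 'Y': 2}))   # True
print(ne.holds({'Y': 2, 'Z': 2}))   # False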

Propositions and Inference

Planning with Certainty

  • stripsProblem.py defines planning problems in STRIPS (a sketch of the precondition/effect view of actions follows this list).
  • stripsForwardPlanner.py defines a forward planner that can use either A* or the branch-and-bound searchers.
  • stripsRegressionPlanner.py defines a regression planner that can use either A* or the branch-and-bound searchers. The heuristic function is always 0. An exercise could be to add a heuristic.
  • stripsCSPPlanner.py creates a CSP from a description of a planning problem, which can then be solved with any of the CSP solvers. This is difficult to keep simple when we have to construct a string or function to evaluate: the frame constraints in particular are tricky.
  • stripsPOP.py implements a partial order planner that can use either of the searchers.
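
The following is a minimal sketch of the precondition/effect view of a STRIPS action that the planners above rely on. The names (Action, applicable, apply_action) and the dict-based state representation are illustrative assumptions, not necessarily how stripsProblem.py is organized.

class Action(object):
    """A STRIPS-style action (illustrative sketch)."""
    def __init__(self, name, preconditions, effects):
        self.name = name
        self.preconditions = preconditions  # dict: feature -> required value
        self.effects = effects              # dict: feature -> resulting value

def applicable(state, action):
    """True if the action's preconditions hold in state (a feature->value dict)."""
    return all(state.get(f) == v for f, v in action.preconditions.items())

def apply_action(state, action):
    """Return the successor state: effects override, other features persist."""
    new_state = dict(state)
    new_state.update(action.effects)
    return new_state

# Example: moving a robot between two rooms
move_a_b = Action('move_a_b', {'robot_loc': 'a'}, {'robot_loc': 'b'})
state = {'robot_loc': 'a', 'holding': False}
if applicable(state, move_a_b):
    print(apply_action(state, move_a_b))   # {'robot_loc': 'b', 'holding': False}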

Supervised Machine Learning

Reasoning with Uncertainty

  • probVariables.py defines random variables.
  • probGraphicalModels.py defines graphical models, and belief networks as special cases.
  • probVE.py implements variable elimination for graphical models (and so for belief networks).
  • probFactors.py defines factors and factor operations, including multiplying factors, summing out a variable, and observing a variable (the sketch after this list illustrates the first two operations).
  • probStochSim.py contains implementations of multiple stochastic simulation algorithms for belief networks.
  • probMCMC.py implements Markov chain Monte Carlo for belief networks.
  • probHMM.py implements hidden Markov models. It defines both exact inference for filtering and particle filtering.
  • probDBN.py defines dynamic belief networks and uses the variable elimination code for filtering.
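
As a rough illustration of the factor operations mentioned above, here is a minimal sketch that represents a factor as a (scope, table) pair over binary variables. The function names and the binary-domain restriction are illustrative assumptions and do not reflect how probFactors.py is organized.

from itertools import product

def multiply(f1, f2, domain=(0, 1)):
    """Pointwise product of two factors over the union of their scopes."""
    scope1, table1 = f1
    scope2, table2 = f2
    scope = scope1 + tuple(v for v in scope2 if v not in scope1)
    table = {}
    for values in product(domain, repeat=len(scope)):
        asst = dict(zip(scope, values))
        table[values] = (table1[tuple(asst[v] for v in scope1)]
                         * table2[tuple(asst[v] for v in scope2)])
    return scope, table

def sum_out(factor, var):
    """Sum a variable out of a factor."""
    scope, table = factor
    new_scope = tuple(v for v in scope if v != var)
    new_table = {}
    for values, p in table.items():
        asst = dict(zip(scope, values))
        key = tuple(asst[v] for v in new_scope)
        new_table[key] = new_table.get(key, 0) + p
    return new_scope, new_table

# Example: P(A) and P(B|A) as factors over binary variables A and B
pA = (('A',), {(0,): 0.4, (1,): 0.6})
pBgA = (('A', 'B'), {(0, 0): 0.9, (0, 1): 0.1, (1, 0): 0.3, (1, 1): 0.7})
joint = multiply(pA, pBgA)          # factor over (A, B)
pB = sum_out(joint, 'A')            # marginal over B
print(pB)   # P(B=0) is 0.54 and P(B=1) is 0.46 (up to floating-point rounding)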

Planning with Uncertainty

Learning with Uncertainty

Multiagent Systems

Reinforcement Learning

Reinforcement learning environments:

Learners:

Relational Learning


Copyright © David Poole and Alan Mackworth, 2017. This web page and the code are released under a Creative Commons Attribution-NonCommercial-ShareAlike license.