The third edition of Artificial Intelligence: Foundations of Computational Agents (Cambridge University Press, 2023) is now available, including the full text.
Introductions to probability theory from an AI perspective, and to belief (Bayesian) networks, are given by Pearl [1988], Jensen [1996], Castillo et al. [1996], Koller and Friedman [2009], and Darwiche [2009]. Halpern [2003] overviews the foundations of probability.
Variable elimination for evaluating belief networks is presented by Zhang and Poole [1994], Dechter [1996], Darwiche [2009], and Dechter [2013]. Treewidth is discussed by Bodlaender [1993].
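As a pointer to what these references cover, the core of variable elimination is multiplying factors and summing out variables in turn. The following minimal sketch computes a marginal on an assumed two-step chain network A → B → C; the network and its probabilities are illustrative, not taken from any of the cited works.

```python
import numpy as np

# Illustrative chain network A -> B -> C with binary variables.
P_A = np.array([0.6, 0.4])            # P(A)
P_B_given_A = np.array([[0.7, 0.3],   # P(B | A=0)
                        [0.2, 0.8]])  # P(B | A=1)
P_C_given_B = np.array([[0.9, 0.1],   # P(C | B=0)
                        [0.5, 0.5]])  # P(C | B=1)

# Eliminate A: multiply P(A) into P(B|A) and sum out A, leaving a factor on B.
f_B = P_A @ P_B_given_A
# Eliminate B: multiply f_B into P(C|B) and sum out B, leaving P(C).
P_C = f_B @ P_C_given_B
```

For a chain the elimination order is forced; in general networks the cost of the best order is governed by the treewidth discussed by Bodlaender [1993].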
For comprehensive reviews of information theory, see Cover and Thomas [1991], MacKay [2003], and Grünwald [2007].
Brémaud [1999] describes the theory and applications of Markov chains. HMMs are described by Rabiner [1989]. Dynamic Bayesian networks were introduced by Dean and Kanazawa [1989]. Markov localization and other issues in the relationship between probability and robotics are described by Thrun et al. [2005]. The use of particle filtering for localization is due to Dellaert et al. [1999].
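The filtering computation these references describe, predicting with the transition model and then reweighting by the observation likelihood, can be sketched for a discrete HMM as follows. The two-state example and its probabilities are illustrative assumptions, not from the cited works.

```python
import numpy as np

def hmm_filter(T, E, prior, observations):
    """Forward (filtering) pass for a discrete HMM.
    T[i, j] = P(next state j | current state i)
    E[i, o] = P(observation o | state i)
    Returns the belief P(state_t | obs_1..t) after each observation."""
    belief = prior.copy()
    beliefs = []
    for o in observations:
        belief = belief @ T              # predict: one Markov-chain step
        belief = belief * E[:, o]        # update: weight by observation likelihood
        belief = belief / belief.sum()   # normalize
        beliefs.append(belief)
    return beliefs

# Illustrative model: two hidden states, two possible observations.
T = np.array([[0.7, 0.3],
              [0.3, 0.7]])
E = np.array([[0.9, 0.1],
              [0.2, 0.8]])
prior = np.array([0.5, 0.5])
beliefs = hmm_filter(T, E, prior, [0, 0, 1])
```

Particle filtering, as used for localization by Dellaert et al. [1999], approximates the same belief update with weighted samples when the state space is too large for exact tables.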
Manning and Schütze [1999] and Jurafsky and Martin [2008] present probabilistic and statistical methods for natural language. The topic model of Example 8.37 is based on Google's Rephil, described by Murphy [2012].
For introductions to stochastic simulation, see Rubinstein [1981] and Andrieu et al. [2003]. Likelihood weighting in belief networks is based on Henrion [1988]. Importance sampling in belief networks is based on Cheng and Druzdzel [2000], who also consider how to learn the proposal distribution. There is a collection of articles on particle filtering in Doucet et al. [2001].
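To fix ideas about the likelihood-weighting scheme of Henrion [1988], here is a minimal sketch on an assumed two-node network C → R with evidence R = true: non-evidence variables are sampled from their priors, and each sample is weighted by the probability of the evidence given its sampled parents. The network and its probabilities are illustrative, not from the cited works.

```python
import random

def likelihood_weighting(n_samples=100_000, seed=0):
    """Likelihood weighting for an illustrative network C -> R.
    P(C=true) = 0.5; P(R=true | C) = 0.8; P(R=true | not C) = 0.1.
    Evidence: R = true. Estimates P(C=true | R=true)."""
    rng = random.Random(seed)
    weighted_true = total = 0.0
    for _ in range(n_samples):
        c = rng.random() < 0.5    # sample the non-evidence variable C from its prior
        w = 0.8 if c else 0.1     # weight = P(evidence R=true | sampled parents)
        total += w
        if c:
            weighted_true += w
    return weighted_true / total

estimate = likelihood_weighting()
```

By Bayes' rule the exact answer here is 0.5 × 0.8 / (0.5 × 0.8 + 0.5 × 0.1) = 8/9 ≈ 0.889. Importance sampling, as in Cheng and Druzdzel [2000], generalizes this by sampling from a proposal distribution other than the prior and correcting the weights accordingly.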
The annual Conference on Uncertainty in Artificial Intelligence, and the general AI conferences, provide up-to-date research results.