11 Probabilities

AILog contains support for first-order probabilistic reasoning. You can associate probabilities with atoms that have no rules defining them. These atoms are grouped into disjoint sets that correspond to random variables.

You can define these probabilistic assumptions in one of two ways. The first is:

ailog: prob atom:probability.

where atom is an atomic symbol and probability is a number between 0 and 1. atom can contain variables, in which case every ground instance is a proposition with the corresponding probability. All instances of the same atom, and all atoms defined in different prob declarations, are assumed to be probabilistically independent.
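The independence assumption can be illustrated with a small Python sketch (Python is not part of AILog, and the atoms c1 and c2 are hypothetical): because distinct ground atoms declared with prob are independent, the probability of their conjunction is simply the product of their probabilities.

```python
# Hypothetical atoms, as if declared with:
#   prob c1:0.6.
#   prob c2:0.9.
probs = {"c1": 0.6, "c2": 0.9}

def joint(atoms, probs):
    """Probability that all the given (distinct, independent) ground
    atoms are true: by independence, multiply their probabilities."""
    p = 1.0
    for a in atoms:
        p *= probs[a]
    return p

p_both = joint(["c1", "c2"], probs)  # 0.6 * 0.9 = 0.54
```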

The alternative syntax is to write:

ailog: prob a1:p1, a2:p2, ..., ak:pk.

where the pi are non-negative numbers that sum to 1 and the ai are atoms that share the same variables. In this case every ground instantiation of the ais forms a random variable: the ais are mutually exclusive and covering.
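A sketch of this second form's semantics, assuming a hypothetical declaration prob low:0.2, medium:0.5, high:0.3 (the atom names are illustrative only): exactly one of the atoms holds in any world, so a valid declaration's probabilities sum to 1 and the probabilities of distinct values simply add.

```python
# Hypothetical declaration: prob low:0.2, medium:0.5, high:0.3.
rv = {"low": 0.2, "medium": 0.5, "high": 0.3}

# A valid declaration: the probabilities sum to 1 (covering, exclusive).
assert abs(sum(rv.values()) - 1.0) < 1e-9

# Mutually exclusive atoms: the probability that the variable takes
# one of several values is the sum of their probabilities.
p_low_or_high = rv["low"] + rv["high"]  # 0.2 + 0.3 = 0.5
```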

The atoms defined in a prob statement cannot be used in the head of rules, but can be used in the body. When called, instances of these atoms must be ground.

We also allow statements where the probability is free, for example in:

ailog: prob heads_happens(E,P):P.

This is useful when we want to learn parameters from observations. Note that when this is called, P must be bound.

There are two main interfaces to the probabilistic reasoning:

ailog: observe obs.

where obs is a body. This declares that obs has been observed, and returns the probability of obs conditioned on all previous observations. Observations accumulate, so subsequent queries and observations are with respect to all previous observations.

You can ask the posterior probability of a query (this is the conditional probability given all of the current observations) using:

ailog: predict query.

where query is a body. This gives the posterior probability of query, conditioned on all previous observations.

Given any prediction or observation, you can inspect the explanations of the query. An explanation corresponds to a proof of the query based on a set of the probabilistic atoms. For each of these explanations you can examine the proof tree and ask how that explanation was computed. This can be done either after a proof or as a stand-alone command:

ailog: explanations.

which returns the explanations of all of the observations.

You can also ask for the worlds in which the query and all of the previous observations are true. This gives a set of mutually exclusive descriptions of possible worlds, where each possible world is described in terms of the probabilistic atoms that are true in it. This can be used for computing the probability of the query (by summing over all of the worlds in which it is true). The worlds are computed from the explanations by making them mutually exclusive. This can be done either after a prediction or observation, or by issuing the command

ailog: worlds.

which returns the worlds in which all of the observations are true.

Note that the ask command ignores all previous observations. A future version may allow you to ask in the context of previous explanations.

You can undo observations using

ailog: unobserve.

which undoes the last observation or using

ailog: unobserve all.

which undoes all observations.

The command

ailog: probs.

lists all probabilistic assertions.

Note that probabilistic reasoning integrates cleanly with negation as failure, and naively with the depth-bounded search. (It fails if the probability cannot be computed due to the depth bound. A more sophisticated version may give bounds on the probability.) Probabilistic hypotheses and non-probabilistic hypotheses (using assumable) are not integrated and should not be used together.

Example. This may not seem like a very powerful probabilistic inference system; however, arbitrary Bayesian belief networks can be represented. For example, a belief network with Boolean variable A as a parent of variable B can be represented as:
ailog: tell b <- a & bifa.
ailog: tell b <- ~a & bifna.
ailog: prob a:0.6.
ailog: prob bifa:0.9.
ailog: prob bifna:0.3.

You can ask the prior on a using:

ailog: predict a.
Answer: P(a|Obs)=0.6.
 Runtime since last report: 0.03 secs.
  [ok,more,explanations,worlds,help]: ok.

And conditioning on ~b:

ailog: observe ~b.
Answer: P(~b|Obs)=0.34.
 Runtime since last report: 0 secs.
  [ok,more,explanations,worlds,help]: ok.
ailog: predict a.
Answer: P(a|Obs)=0.176471.
 Runtime since last report: 0 secs.
  [ok,more,explanations,worlds,help]: ok.

Thus P(a | ~b) = 0.176471.
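The numbers in this transcript can be re-derived outside AILog by brute-force enumeration of possible worlds, in the spirit of the worlds command: each world assigns a truth value to the three probabilistic atoms, its probability is the product of the corresponding factors, and the posterior is a ratio of sums over worlds. A minimal Python sketch:

```python
from itertools import product

# The three independent probabilistic atoms from the example above.
atoms = {"a": 0.6, "bifa": 0.9, "bifna": 0.3}

def worlds():
    """Yield (truth assignment, probability) for each of the 2^3 worlds;
    a world's probability is the product over atoms of p or 1 - p."""
    names = list(atoms)
    for bits in product([True, False], repeat=len(names)):
        w = dict(zip(names, bits))
        p = 1.0
        for n in names:
            p *= atoms[n] if w[n] else 1.0 - atoms[n]
        yield w, p

def holds_b(w):
    # The two rules:  b <- a & bifa.   b <- ~a & bifna.
    return (w["a"] and w["bifa"]) or (not w["a"] and w["bifna"])

# Sum world probabilities for marginals, then condition by dividing.
p_not_b = sum(p for w, p in worlds() if not holds_b(w))        # 0.34
p_a_given_not_b = sum(p for w, p in worlds()
                      if w["a"] and not holds_b(w)) / p_not_b  # ~0.176471
```

This reproduces P(~b) = 0.34 and P(a | ~b) = 0.176471 as reported by AILog.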

Note that AILog does not find all explanations, but only those within a threshold. This threshold can be queried and varied with the prob_threshold command.