Artificial Intelligence 2E: foundations of computational agents

To make a good decision, an agent cannot simply assume what the world is like and act according to that assumption. It must consider multiple hypotheses when making a decision. Consider the following example.

Many people consider it sensible to wear a seat belt when traveling in a car because, in an accident, wearing a seat belt reduces the risk of serious injury. However, consider an agent that commits to assumptions and bases its decision on these assumptions. If the agent assumes it will not have an accident, it will not bother with the inconvenience of wearing a seat belt. If it assumes it will have an accident, it will not go out. In neither case would it wear a seat belt! A more intelligent agent may wear a seat belt because the inconvenience of wearing a seat belt is far outweighed by the reduction in the risk of injury or death should an accident occur. It does not stay at home, too worried about an accident to go out; the benefits of being mobile, even with the risk of an accident, outweigh the benefits of the extremely cautious approach of never going out. The decisions of whether to go out and whether to wear a seat belt depend on the likelihood of having an accident, how much a seat belt helps in an accident, the inconvenience of wearing a seat belt, and how important it is to go out. The various trade-offs may be different for different agents. Some people do not wear seat belts, and some people do not go in cars because of the risk of accident.
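This trade-off can be made concrete as an expected-utility calculation. The sketch below is purely illustrative: the accident probability and all utility values are assumptions chosen for the example, not values from the text.

```python
# Hypothetical expected-utility sketch of the seat-belt decision.
# All numbers (accident probability, utilities) are illustrative
# assumptions; different agents would plug in different values.

P_ACCIDENT = 0.01  # assumed chance of an accident on one trip

# Assumed utilities for each (action, outcome) pair; higher is better.
# Wearing a belt costs a small inconvenience; an accident is far worse
# without a belt; staying home forgoes the benefit of the trip.
utility = {
    ("stay_home", "no_accident"): 0,
    ("go_belt",   "no_accident"): 9,      # trip benefit minus inconvenience
    ("go_belt",   "accident"):    -50,
    ("go_nobelt", "no_accident"): 10,     # trip benefit, no inconvenience
    ("go_nobelt", "accident"):    -1000,
}

def expected_utility(action):
    """Average the utility of each outcome, weighted by its probability."""
    if action == "stay_home":
        return utility[(action, "no_accident")]  # no accident at home
    return (P_ACCIDENT * utility[(action, "accident")]
            + (1 - P_ACCIDENT) * utility[(action, "no_accident")])

best = max(["stay_home", "go_belt", "go_nobelt"], key=expected_utility)
print(best)  # with these assumed numbers: go_belt
```

With these numbers, going out with a seat belt has expected utility $8.41$, going out without one $-0.1$, and staying home $0$; the likelihood of an accident, not an assumption about whether one will happen, drives the decision.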

Reasoning with uncertainty has been studied in the fields of probability theory and decision theory. Probability is the calculus of gambling. When an agent makes decisions and is uncertain about the outcomes of its actions, it is gambling on the outcomes. However, unlike a gambler at the casino, an agent that has to survive in the real world cannot opt out and decide not to gamble; whatever it does – including doing nothing – involves uncertainty and risk. If it does not take the probabilities of possible outcomes into account, it will eventually lose at gambling to an agent that does. This does not mean, however, that making the best decision guarantees a win.

Many of us learn probability as the theory of tossing coins and rolling dice. Although this may be a good way to present probability theory, probability is applicable to a much richer set of applications than coins and dice. In general, probability is a calculus for belief designed for making decisions.

The view of probability as a measure of belief is known as Bayesian probability or
subjective probability. The term *subjective* means “belonging to the
subject” (as opposed to *arbitrary*). For example, suppose there are three agents, Alice, Bob, and
Chris, and one six-sided die, which they all agree is fair, that has been tossed. Suppose Alice observes
that the outcome is a “$6$” and tells Bob that the outcome is even,
but Chris knows nothing about the outcome. In this case, Alice has a
probability of $1$ that the outcome is a “$6$,” Bob has a
probability of $\frac{1}{3}$ that it is a “$6$” (assuming Bob
believes Alice), and Chris may have a probability of $\frac{1}{6}$ that the
outcome is a “$6$.” They all have different probabilities because
they all have different knowledge. The probability is about the
outcome of this particular toss of the die, not of some generic event
of tossing dice.
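The three probabilities arise from one operation: conditioning a uniform distribution over the six outcomes on what each agent knows. A minimal sketch (the function and variable names are ours, for illustration):

```python
from fractions import Fraction

# Sample space: the outcomes of one toss of a fair six-sided die.
outcomes = [1, 2, 3, 4, 5, 6]

def prob(event, knowledge):
    """P(event | knowledge): restrict the uniform prior to the outcomes
    consistent with what the agent knows, then measure the event."""
    possible = [o for o in outcomes if knowledge(o)]
    return Fraction(sum(1 for o in possible if event(o)), len(possible))

is_six = lambda o: o == 6

p_alice = prob(is_six, lambda o: o == 6)      # Alice saw the 6:      1
p_bob   = prob(is_six, lambda o: o % 2 == 0)  # Bob knows it is even: 1/3
p_chris = prob(is_six, lambda o: True)        # Chris knows nothing:  1/6
```

The outcome of the toss is fixed; only the agents' knowledge differs, and with it their probabilities.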

We are assuming that the uncertainty is epistemological – pertaining to an agent’s beliefs of the world – rather than ontological – how the world is. For example, if you are told that someone is very tall, you know they have some height but you only have vague knowledge about the actual value of their height.

Probability theory is the study of *how knowledge
affects belief*. Belief in some proposition, $\alpha $, is measured in
terms of a number between $0$ and $1$. A probability of $0$ means that $\alpha $ is believed to be definitely false (no new evidence
will shift that belief), and a probability of $1$ means that $\alpha $ is
believed to be definitely true. Using $0$ and $1$ is purely a
convention.
If an agent’s probability
of $\alpha $ is greater than zero and less than one, this does not mean
that $\alpha $ is true to some degree but rather that the agent is ignorant of
whether $\alpha $ is true or false. The probability reflects the
agent’s ignorance.