The third edition of Artificial Intelligence: foundations of computational agents, Cambridge University Press, 2023 is now available (including full text).
Semantics defines the meaning of the sentences of a language. When the sentences are about a (real or imagined) world, semantics specifies how to put symbols of the language into correspondence with the world.
The semantics of propositional calculus is defined below. Intuitively, atoms have meaning to someone and are either true or false in interpretations. The truth of atoms gives the truth of other propositions in interpretations.
An interpretation consists of a function $\pi$ that maps atoms to $\{\text{true}, \text{false}\}$. If $\pi(a)=\text{true}$, atom $a$ is true in the interpretation, or the interpretation assigns true to $a$. If $\pi(a)=\text{false}$, atom $a$ is false in the interpretation. It is sometimes useful to think of $\pi$ as the set of atoms that map to true, with all remaining atoms mapping to false.
Whether a compound proposition is true in an interpretation is inferred using the truth table of Figure 5.1 from the truth values of the components of the proposition.
Note that truth values are only defined with respect to interpretations; propositions may have different truth values in different interpretations.
$p$ | $q$ | $\mathrm{\neg}p$ | $p\wedge q$ | $p\vee q$ | $p\leftarrow q$ | $p\to q$ | $p\leftrightarrow q$ |
---|---|---|---|---|---|---|---|
true | true | false | true | true | true | true | true |
true | false | false | false | true | true | false | false |
false | true | true | false | true | false | true | false |
false | false | true | false | false | true | true | true |
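The truth table can be checked mechanically. The sketch below is illustrative code, not from the book: each connective becomes a Boolean function on the truth values of its arguments, and the function names (`neg`, `conj`, `if_`, and so on) are my own. Note that $p \leftarrow q$ holds exactly when $q \to p$ does.

```python
# Truth-table semantics of the propositional connectives, one
# Python function per column of the truth table.

def neg(p):          # ¬p
    return not p

def conj(p, q):      # p ∧ q
    return p and q

def disj(p, q):      # p ∨ q
    return p or q

def if_(p, q):       # p ← q  (false only when q is true and p is false)
    return p or not q

def implies(p, q):   # p → q  (false only when p is true and q is false)
    return (not p) or q

def iff(p, q):       # p ↔ q
    return p == q

# Reproduce the four rows of the truth table:
for p in (True, False):
    for q in (True, False):
        print(p, q, neg(p), conj(p, q), disj(p, q),
              if_(p, q), implies(p, q), iff(p, q))
```

Running the loop prints the same four rows as the table above, with `True`/`False` in place of true/false.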
Suppose there are three atoms: ai_is_fun, happy, and light_on.
Suppose interpretation $I_1$ assigns true to ai_is_fun, false to happy, and true to light_on. That is, $I_1$ is defined by the function $\pi_1$:

$$\pi_1(\mathit{ai\_is\_fun})=\text{true},\quad \pi_1(\mathit{happy})=\text{false},\quad \pi_1(\mathit{light\_on})=\text{true}.$$
Then
ai_is_fun is true in $I_1$
$\neg\,\mathit{ai\_is\_fun}$ is false in $I_1$
happy is false in $I_1$
$\neg\,\mathit{happy}$ is true in $I_1$
$\mathit{ai\_is\_fun} \vee \mathit{happy}$ is true in $I_1$
$\mathit{ai\_is\_fun} \leftarrow \mathit{happy}$ is true in $I_1$
$\mathit{happy} \leftarrow \mathit{ai\_is\_fun}$ is false in $I_1$
$\mathit{ai\_is\_fun} \leftarrow \mathit{happy} \wedge \mathit{light\_on}$ is true in $I_1$.
Suppose interpretation $I_2$ assigns false to ai_is_fun, true to happy, and false to light_on:
ai_is_fun is false in $I_2$
$\neg\,\mathit{ai\_is\_fun}$ is true in $I_2$
happy is true in $I_2$
$\neg\,\mathit{happy}$ is false in $I_2$
$\mathit{ai\_is\_fun} \vee \mathit{happy}$ is true in $I_2$
$\mathit{ai\_is\_fun} \leftarrow \mathit{happy}$ is false in $I_2$
$\mathit{ai\_is\_fun} \leftarrow \mathit{light\_on}$ is true in $I_2$
$\mathit{ai\_is\_fun} \leftarrow \mathit{happy} \wedge \mathit{light\_on}$ is true in $I_2$.
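These evaluations can be verified directly. In this sketch (illustrative code, not from the book), each interpretation is a dictionary playing the role of $\pi$, and compound propositions are evaluated from the truth values of their atoms:

```python
# Interpretations I1 and I2 as mappings from atoms to truth values.
pi1 = {"ai_is_fun": True,  "happy": False, "light_on": True}
pi2 = {"ai_is_fun": False, "happy": True,  "light_on": False}

def if_(head, body):  # head ← body
    return head or not body

for name, pi in (("I1", pi1), ("I2", pi2)):
    print(name)
    print("  ai_is_fun:", pi["ai_is_fun"])
    print("  ¬happy:", not pi["happy"])
    print("  ai_is_fun ∨ happy:", pi["ai_is_fun"] or pi["happy"])
    print("  ai_is_fun ← happy:", if_(pi["ai_is_fun"], pi["happy"]))
    print("  happy ← ai_is_fun:", if_(pi["happy"], pi["ai_is_fun"]))
    print("  ai_is_fun ← happy ∧ light_on:",
          if_(pi["ai_is_fun"], pi["happy"] and pi["light_on"]))
```

The printed values match the lists above: for example, $\mathit{happy} \leftarrow \mathit{ai\_is\_fun}$ comes out `False` under `pi1` and $\mathit{ai\_is\_fun} \leftarrow \mathit{happy}$ comes out `False` under `pi2`.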
A knowledge base is a set of propositions that are stated to be true. An element of the knowledge base is an axiom.
A model of a knowledge base KB is an interpretation in which all the propositions in KB are true.
If KB is a knowledge base and $g$ is a proposition, $g$ is a logical consequence of KB, written as
$$\text{KB}\vDash g$$
if $g$ is true in every model of KB. Thus $\text{KB}\nvDash g$, meaning $g$ is not a logical consequence of KB, when there is a model of KB in which $g$ is false.
That is, $\text{KB}\vDash g$ means no interpretation exists in which KB is true and $g$ is false. The definition of logical consequence places no constraints on the truth value of $g$ in an interpretation where KB is false.
If $\text{KB}\vDash g$ we also say $g$ logically follows from KB, or KB entails $g$.
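Because there are finitely many atoms, logical consequence in the propositional calculus can be decided by brute-force model checking: enumerate every interpretation and confirm that $g$ is true in each one that is a model of KB. A minimal sketch (the representation of propositions as Python functions of an interpretation is an assumption for illustration):

```python
from itertools import product

def entails(atoms, kb, g):
    """KB ⊨ g iff g is true in every interpretation that makes all of KB true."""
    for values in product([True, False], repeat=len(atoms)):
        interp = dict(zip(atoms, values))
        if all(prop(interp) for prop in kb):   # interp is a model of KB
            if not g(interp):                  # g false in a model: not entailed
                return False
    return True

# Tiny example: from {p, q ← p} it follows that q.
atoms = ["p", "q"]
kb = [lambda i: i["p"],                 # p.
      lambda i: i["q"] or not i["p"]]   # q ← p.
print(entails(atoms, kb, lambda i: i["q"]))  # → True
```

This enumeration takes time exponential in the number of atoms, so it is a specification of entailment rather than a practical proof procedure; later sections give more efficient methods.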
Suppose KB is the following knowledge base:
$\mathit{sam\_is\_happy}.$
$\mathit{ai\_is\_fun}.$
$\mathit{worms\_live\_underground}.$
$\mathit{night\_time}.$
$\mathit{bird\_eats\_apple}.$
$\mathit{apple\_is\_eaten} \leftarrow \mathit{bird\_eats\_apple}.$
$\mathit{switch\_1\_is\_up} \leftarrow \mathit{sam\_is\_in\_room} \wedge \mathit{night\_time}.$
Given this knowledge base,
$\text{KB}\vDash \mathit{bird\_eats\_apple}.$
$\text{KB}\vDash \mathit{apple\_is\_eaten}.$
KB does not entail switch_1_is_up as there is a model of the knowledge base where switch_1_is_up is false. Note that sam_is_in_room must be false in that interpretation.
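These entailments can be confirmed by enumerating all $2^8$ interpretations of the eight atoms and keeping the models. The encoding below is an illustrative sketch in the same style as the model-checking idea above; each clause $h \leftarrow b$ becomes the test "$h$ or not $b$":

```python
from itertools import product

ATOMS = ["sam_is_happy", "ai_is_fun", "worms_live_underground", "night_time",
         "bird_eats_apple", "apple_is_eaten", "switch_1_is_up", "sam_is_in_room"]

KB = [
    lambda i: i["sam_is_happy"],
    lambda i: i["ai_is_fun"],
    lambda i: i["worms_live_underground"],
    lambda i: i["night_time"],
    lambda i: i["bird_eats_apple"],
    # apple_is_eaten ← bird_eats_apple
    lambda i: i["apple_is_eaten"] or not i["bird_eats_apple"],
    # switch_1_is_up ← sam_is_in_room ∧ night_time
    lambda i: i["switch_1_is_up"] or not (i["sam_is_in_room"] and i["night_time"]),
]

def models():
    """Yield every interpretation in which all axioms of KB are true."""
    for values in product([True, False], repeat=len(ATOMS)):
        i = dict(zip(ATOMS, values))
        if all(axiom(i) for axiom in KB):
            yield i

print(all(m["apple_is_eaten"] for m in models()))   # entailed → True
print(all(m["switch_1_is_up"] for m in models()))   # not entailed → False
# In every model where switch_1_is_up is false, sam_is_in_room is false:
print(all(not m["sam_is_in_room"]
          for m in models() if not m["switch_1_is_up"]))  # → True
```

The last check mirrors the note above: since night_time is an axiom, any model with switch_1_is_up false must make sam_is_in_room false.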
The description of semantics does not tell us why semantics is interesting or how it can be used as a basis to build intelligent systems. The basic idea behind the use of logic is that, when a knowledge base designer has a particular world to characterize, the designer can choose that world as an intended interpretation, choose meanings for the symbols with respect to that world, and write propositions about what is true in that world. When the system computes a logical consequence of a knowledge base, the designer can interpret this answer with respect to the intended interpretation. A designer should communicate this meaning to other designers and users so that they can also interpret the answer with respect to the meaning of the symbols.
The logical entailment “$\text{KB}\vDash g$” is a semantic relation between a set of propositions (KB) and a proposition it entails, $g$. Both KB and $g$ are symbolic, and so they can be represented in the computer. The meaning may be with reference to the world, which is typically not symbolic. The $\vDash $ relation is not about computation or proofs; it provides the specification of what follows from some statements about what is true.
The methodology used by a knowledge base designer to represent a world can be expressed as follows:
1. A knowledge base designer chooses a task domain or world to represent, which is the intended interpretation. This could be some aspect of the real world (for example, the structure of courses and students at a university, or a laboratory environment at a particular point in time), some imaginary world (such as the world of Alice in Wonderland, or the state of the electrical environment if a switch breaks), or an abstract world (for example, the world of numbers and sets).
2. The knowledge base designer selects atoms to represent propositions of interest. Each atom has a precise meaning with respect to the intended interpretation.
3. The knowledge base designer tells the system propositions that are true in the intended interpretation. This is often called axiomatizing the domain, where the given propositions are the axioms of the domain.
4. The knowledge base designer can now ask questions about the intended interpretation. The system can answer these questions. The designer is able to interpret the answers using the meaning assigned to the symbols.
Within this methodology, the designer does not actually tell the computer anything until step 3. The first two steps are carried out in the head of the designer.
Designers should document the meanings of the symbols so that they can make their representations understandable to other people, so that they remember what each symbol means, and so that they can check the truth of the given propositions. A specification of meaning of the symbols is called an ontology. Ontologies can be informally specified in comments, but they are increasingly specified in formal languages to enable semantic interoperability – the ability to use symbols from different knowledge bases together. Ontologies are discussed in detail in Chapter 14.
Step 4 can be carried out by people as long as they understand the meaning of the symbols. Other people who know the meaning of the symbols in the question and the answer, and who trust the knowledge base designer to have told the truth, can interpret answers to their questions as being correct in the world under consideration.
The knowledge base designer who provides information to the system has an intended interpretation and interprets symbols according to that intended interpretation. The designer states knowledge, in terms of propositions, about what is true in the intended interpretation. The computer does not have access to the intended interpretation – only to the propositions in the knowledge base. Let KB be a given knowledge base. As will be shown, the computer is able to tell if some statement is a logical consequence of KB. The intended interpretation is a model of the axioms if the knowledge base designer has been truthful according to the meaning assigned to the symbols. Assuming the intended interpretation is a model of KB, if a proposition is a logical consequence of KB, it is true in the intended interpretation because it is true in all models of KB.
The concept of logical consequence seems like exactly the right tool to infer implicit information from an axiomatization of a world. Suppose KB represents the knowledge about the intended interpretation; that is, the intended interpretation is a model of KB, and that is all the system knows about the intended interpretation. If $\text{KB}\vDash g$, then $g$ must be true in the intended interpretation, because it is true in all models of the knowledge base. If $\text{KB}\nvDash g$, meaning $g$ is not a logical consequence of KB, there is a model of KB in which $g$ is false. As far as the computer is concerned, the intended interpretation may be the model of KB in which $g$ is false, and so it does not know whether $g$ is true in the intended interpretation.
Given a knowledge base, the models of the knowledge base correspond to all of the ways that the world could be, given that the knowledge base is true.
Consider the knowledge base of Example 5.2. The user could interpret these symbols as having some meaning. The computer does not know the meaning of the symbols, but it can still draw conclusions based on what it has been told. It can conclude that apple_is_eaten is true in the intended interpretation. It cannot conclude switch_1_is_up because it does not know if sam_is_in_room is true or false in the intended interpretation.
If the knowledge base designer tells lies – some axioms are false in the intended interpretation – the computer’s answers are not guaranteed to be true in the intended interpretation.
It is very important to understand that, until we consider computers with perception and the ability to act in the world, the computer does not know the meaning of the symbols. It is the human that gives the symbols meaning. All the computer knows about the world is what it is told about the world. However, because the computer can provide logical consequences of the knowledge base, it can draw conclusions that are true in the world.