# 5.1 Propositions

## 5.1.1 Syntax of the Propositional Calculus

A proposition is a sentence, written in a language, that has a truth value (i.e., it is true or false) in a world. A proposition is built from atomic propositions and logical connectives.

An atomic proposition, or just an atom, is a symbol. Atoms are written as sequences of letters, digits, and the underscore (_) and start with a lower-case letter. For example, $a$, ${ai\_is\_fun}$, ${lit\_l_{1}}$, ${live\_outside}$, ${mimsy}$, and ${sunny}$ are all atoms.

Propositions can be built from simpler propositions using logical connectives. A proposition or logical formula is either

• an atomic proposition or

• a compound proposition of the form

| Proposition | Read as | Name |
|---|---|---|
| $\neg p$ | “not $p$” | negation of $p$ |
| $p\wedge q$ | “$p$ and $q$” | conjunction of $p$ and $q$ |
| $p\vee q$ | “$p$ or $q$” | disjunction of $p$ and $q$ |
| $p\rightarrow q$ | “$p$ implies $q$” | implication of $q$ from $p$ |
| $p\leftarrow q$ | “$p$ if $q$” | implication of $p$ from $q$ |
| $p\leftrightarrow q$ | “$p$ if and only if $q$” | equivalence of $p$ and $q$ |
| $p\oplus q$ | “$p$ XOR $q$” | exclusive-or of $p$ and $q$ |

where $p$ and $q$ are propositions.

The operators $\neg$, $\wedge$, $\vee$, $\rightarrow$, $\leftarrow$, $\leftrightarrow$, and $\oplus$ are logical connectives.

Parentheses can be used to make logical formulas unambiguous. When parentheses are omitted, the precedence of the operators is in the order they are given above. Thus, a compound proposition can be disambiguated by adding parentheses to the subexpressions in the order the operations are defined above. For example, $\neg a\vee b\land c\rightarrow d\land\neg e\vee f$ is an abbreviation for $((\neg a)\vee(b\land c))\rightarrow((d\land(\neg e))\vee f).$
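A compound proposition can be represented concretely as a tree, making the parenthesization explicit. The following sketch is illustrative only; the tuple encoding and the helper names (`neg`, `conj`, `disj`, `implies`) are assumptions, not from the text.

```python
# Illustrative representation (an assumption, not from the text):
# an atom is a string; a compound proposition is a tuple whose
# first element names the connective.

def neg(p):
    return ("not", p)

def conj(p, q):
    return ("and", p, q)

def disj(p, q):
    return ("or", p, q)

def implies(p, q):   # p -> q
    return ("implies", p, q)

# ((not a) or (b and c)) -> ((d and (not e)) or f),
# the disambiguated form of the example above.
formula = implies(disj(neg("a"), conj("b", "c")),
                  disj(conj("d", neg("e")), "f"))
```

With this representation the grouping is encoded in the nesting itself, so no precedence rules are needed once a formula has been built.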

## 5.1.2 Semantics of the Propositional Calculus

Semantics defines the meaning of the sentences of a language. The semantics of propositional calculus is defined below. Intuitively, propositions have meaning to someone who specifies propositions they claim are true, and then queries about what else must also be true. An interpretation is a way that the world could be. All that the system knows about the world is that the propositions specified are true, so the world must correspond to an interpretation in which those propositions hold.

An interpretation consists of a function $\pi$ that maps atoms to $\{{true},{false}\}$. If $\pi(a){\,{=}\,}{true}$, atom $a$ is true in the interpretation. If $\pi(a){\,{=}\,}{false}$, atom $a$ is false in the interpretation. It is sometimes useful to think of $\pi$ as the set of atoms that map to ${true}$, with every other atom mapping to ${false}$.
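Both views of an interpretation can be sketched in code. The representation below is a hypothetical one (not from the text): $\pi$ as a dictionary from atom names to booleans, or equivalently as the set of atoms that are true.

```python
# pi as a dict from atom names to truth values (the atom names
# follow Example 5.1 below; the representation is an assumption).
pi1 = {"ai_is_fun": True, "happy": False, "light_on": True}

# The same interpretation viewed as the set of true atoms.
true_atoms = {a for a, v in pi1.items() if v}

def pi_from_set(trues):
    """Recover the functional view from the set view: an atom is
    true exactly when it belongs to the set of true atoms."""
    return lambda atom: atom in trues
```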

Whether a compound proposition is true in an interpretation is inferred using the truth table of Figure 5.1 from the truth values of the components of the proposition.

Truth values are only defined with respect to interpretations; propositions may have different truth values in different interpretations.
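The truth-table semantics can be sketched as a recursive evaluator. This is an illustrative implementation, not from the text: atoms are strings, compound propositions are tuples tagged with their connective, and an interpretation is a dictionary from atom names to booleans.

```python
def truth(prop, pi):
    """Truth value of prop in the interpretation pi (a dict from
    atom names to booleans), following the usual truth tables."""
    if isinstance(prop, str):          # atomic proposition
        return pi[prop]
    op = prop[0]
    if op == "not":                    # negation
        return not truth(prop[1], pi)
    p, q = truth(prop[1], pi), truth(prop[2], pi)
    if op == "and":                    # conjunction
        return p and q
    if op == "or":                     # disjunction
        return p or q
    if op == "implies":                # p -> q: false only when p is true and q is false
        return (not p) or q
    if op == "iff":                    # p <-> q: equivalence
        return p == q
    if op == "xor":                    # exclusive-or
        return p != q
    raise ValueError(f"unknown connective: {op!r}")
```

For instance, `("implies", "happy", "ai_is_fun")` encodes ${ai\_is\_fun}\leftarrow{happy}$, which evaluates to `True` under the interpretation `{"ai_is_fun": True, "happy": False}`.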

###### Example 5.1.

Suppose there are three atoms: ${ai\_is\_fun}$, ${happy}$, and ${light\_on}$.

Suppose interpretation $I_{1}$ assigns ${true}$ to ${ai\_is\_fun}$, ${false}$ to ${happy}$, and ${true}$ to ${light\_on}$. That is, $I_{1}$ is defined by the function $\pi_{1}$:

 $\pi_{1}({ai\_is\_fun}){\,{=}\,}{true},\quad\pi_{1}({happy}){\,{=}\,}{false},\quad\pi_{1}({light\_on}){\,{=}\,}{true}.$

Then

• ${ai\_is\_fun}$ is true in $I_{1}$

• $\neg{{ai\_is\_fun}}$ is false in $I_{1}$

• ${happy}$ is false in $I_{1}$

• $\neg{happy}$ is true in $I_{1}$

• ${ai\_is\_fun}\vee{happy}$ is true in $I_{1}$

• ${ai\_is\_fun}\leftarrow{happy}$ is true in $I_{1}$

• ${happy}\leftarrow{ai\_is\_fun}$ is false in $I_{1}$

• ${ai\_is\_fun}\leftarrow{happy}\wedge{light\_on}$ is true in $I_{1}$.

Suppose interpretation $I_{2}$ assigns ${false}$ to ${ai\_is\_fun}$, ${true}$ to ${happy}$, and ${false}$ to ${light\_on}$:

• ${ai\_is\_fun}$ is false in $I_{2}$

• $\neg{{ai\_is\_fun}}$ is true in $I_{2}$

• ${happy}$ is true in $I_{2}$

• $\neg{happy}$ is false in $I_{2}$

• ${ai\_is\_fun}\vee{happy}$ is true in $I_{2}$

• ${ai\_is\_fun}\leftarrow{happy}$ is false in $I_{2}$

• ${ai\_is\_fun}\leftarrow{light\_on}$ is true in $I_{2}$

• ${ai\_is\_fun}\leftarrow{happy}\wedge{light\_on}$ is true in $I_{2}$.

A knowledge base is a set of propositions that are stated to be true. An element of the knowledge base is an axiom. The elements of a knowledge base are implicitly conjoined, so that a knowledge base is true when all of the axioms in it are true.

A model of knowledge base ${KB}$ is an interpretation in which all the propositions in ${KB}$ are true.

If ${KB}$ is a knowledge base and $g$ is a proposition, $g$ is a logical consequence of ${KB}$, or $g$ logically follows from ${KB}$, or ${KB}$ entails $g$, written

 ${KB}\models g$

if $g$ is true in every model of ${KB}$. Thus, $g$ is not a logical consequence of ${KB}$, written ${KB}\not\models g$, when there is a model of ${KB}$ in which $g$ is false.
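Entailment can be checked directly from this definition by enumerating every interpretation of the atoms occurring in ${KB}$ and $g$. The sketch below is self-contained and illustrative; the function names (`truth`, `atoms`, `entails`) and the tuple encoding of propositions are assumptions, not from the text. It returns `False` exactly when some model of the knowledge base makes $g$ false.

```python
from itertools import product

def truth(prop, pi):
    """Truth value of prop in interpretation pi (dict: atom -> bool).
    Atoms are strings; compounds are tuples like ("not", p),
    ("and", p, q), ("or", p, q), ("implies", p, q).  Note that
    p <- q is written as ("implies", q, p)."""
    if isinstance(prop, str):
        return pi[prop]
    op, *args = prop
    vals = [truth(a, pi) for a in args]
    if op == "not":
        return not vals[0]
    if op == "and":
        return vals[0] and vals[1]
    if op == "or":
        return vals[0] or vals[1]
    if op == "implies":
        return (not vals[0]) or vals[1]
    raise ValueError(f"unknown connective: {op!r}")

def atoms(prop):
    """The set of atoms occurring in prop."""
    if isinstance(prop, str):
        return {prop}
    return set().union(*[atoms(a) for a in prop[1:]])

def entails(kb, g):
    """KB |= g: g is true in every model of kb, i.e. in every
    interpretation in which all propositions of kb are true."""
    syms = sorted(atoms(g).union(*[atoms(p) for p in kb]))
    for values in product([True, False], repeat=len(syms)):
        pi = dict(zip(syms, values))
        if all(truth(p, pi) for p in kb) and not truth(g, pi):
            return False   # a model of kb in which g is false
    return True
```

Enumerating all $2^{n}$ interpretations is exponential in the number of atoms, but it follows the definition of entailment exactly; for a knowledge base like that of Example 5.2 below, it confirms that ${apple\_is\_eaten}$ is entailed while ${switch\_1\_is\_up}$ is not.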

###### Example 5.2.

Suppose ${KB}$ is the following knowledge base:

 ${sam\_is\_happy}.$

 ${ai\_is\_fun}.$

 ${worms\_live\_underground}.$

 ${night\_time}.$

 ${bird\_eats\_apple}.$

 ${apple\_is\_eaten}\leftarrow{bird\_eats\_apple}.$

 ${switch\_1\_is\_up}\leftarrow{someone\_is\_in\_room}\wedge{night\_time}.$

Given this knowledge base:

 ${KB}\models{bird\_eats\_apple}.$

 ${KB}\models{apple\_is\_eaten}.$

${KB}$ does not entail ${switch\_1\_is\_up}$ as there is a model of the knowledge base where ${switch\_1\_is\_up}$ is false. The atom ${someone\_is\_in\_room}$ must be false in that interpretation.

### The Human’s View of Semantics

The description of semantics does not tell us why semantics is interesting or how it can be used as a basis to build intelligent systems. The basic idea behind the use of logic is that, when a knowledge base designer has a particular world to characterize, the designer can choose that world as an intended interpretation, choose meanings for the symbols with respect to that world, and write propositions about what is true in that world. When the system computes a logical consequence of a knowledge base, the designer can interpret this answer with respect to the intended interpretation.

The methodology used by a knowledge base designer to represent a world can be expressed as follows:

Step 1

A knowledge base designer chooses a task domain or world to represent, which is the intended interpretation. This could be some aspect of the real world (for example, the structure of courses and students at a university, or a laboratory environment at a particular point in time), some imaginary world (such as the world of Alice in Wonderland, or the state of the electrical environment if a switch breaks), or an abstract world (for example, the world of numbers and sets).

Step 2

The knowledge base designer selects atoms to represent propositions of interest. Each atom has a precise meaning to the designer with respect to the intended interpretation. This meaning of the symbols forms an intended interpretation or conceptualization.

Step 3

The knowledge base designer tells the system propositions that are true in the intended interpretation. This is often called axiomatizing the domain, where the given propositions are the axioms of the domain.

Step 4

A user can now ask questions about the intended interpretation. The system can answer these questions. A user who knows the intended interpretation is able to interpret the answers using the meaning assigned to the symbols.

Within this methodology, the designer does not actually tell the computer anything until step 3.

### The Computer’s View of Semantics

A computer does not have access to the intended interpretation. All the computer knows about the intended interpretation is the knowledge base ${KB}$. If ${KB}\models g$, then $g$ must be true in the intended interpretation, because it is true in all models of the knowledge base. If ${KB}\not\models g$, meaning $g$ is not a logical consequence of ${KB}$, there is a model of ${KB}$ in which $g$ is false. As far as the computer is concerned, the intended interpretation may be the model of ${KB}$ in which $g$ is false, and so it does not know whether $g$ is true in the intended interpretation.

Given a knowledge base, the models of the knowledge base correspond to all of the ways that the world could be, given that the knowledge base is true.

###### Example 5.3.

Consider the knowledge base of Example 5.2. The user could interpret these symbols as having some meaning. The computer does not know the meaning of the symbols, but it can still draw conclusions based on what it has been told. It can conclude that ${apple\_is\_eaten}$ is true in the intended interpretation. It cannot conclude ${switch\_1\_is\_up}$ because it does not know if ${someone\_is\_in\_room}$ is true or false in the intended interpretation.

If the knowledge base designer tells lies, so that some axioms are false in the intended interpretation, the computer’s answers are not guaranteed to be true in the intended interpretation.