
A database is often complete in the sense that anything not implied is false.

You may want the user to specify which switches are up and which circuit breakers are broken so that the system can conclude that any switch not mentioned as up is down and any circuit breaker not specified as broken is ok. Thus, down is the default value of switches and ok is the default value for circuit breakers. It is easier for users to communicate using defaults than it is to specify the seemingly redundant information about which switches are down and which circuit breakers are ok. To reason with such defaults, an agent must assume it has complete knowledge; a switch’s position is not mentioned because it is down, not because the agent does not know whether it is up or down.

The given definite-clause logic does not allow the derivation of a conclusion from a lack of knowledge or a failure to prove. It does not assume that the knowledge is complete. In particular, the negation of an atom can never be a logical consequence of a definite-clause knowledge base.

The complete knowledge assumption assumes that, for every atom, the clauses with the atom as the head cover all the cases when the atom is true. In particular, an atom with no clauses is false. Under this assumption, an agent can conclude that an atom is false if it cannot derive that the atom is true. This is also called the closed-world assumption. It can be contrasted with the open-world assumption, which is that the agent does not know everything and so cannot make any conclusions from a lack of knowledge. The closed-world assumption requires that everything relevant about the world is known to the agent.

This assumption that there is a definition of each atom in terms of clauses is the basis of logic programming. Here we give the propositional version; richer variants are presented in Section 15.4 and Section 15.6.

Suppose the clauses for atom $a$ are

$a\leftarrow b_1.$

$\vdots$

$a\leftarrow b_n.$

where an atomic clause $a$ is treated as the rule $a\leftarrow true$. The complete knowledge assumption specifies that if $a$ is true in some interpretation then one of the $b_i$ must be true in that interpretation; that is,

$a\to b_1\vee \dots\vee b_n.$

Because the clauses defining $a$ are equivalent to

$a\leftarrow b_1\vee \dots\vee b_n$

the meaning of the clauses can be seen as the conjunction of these two propositions, namely, the equivalence

$a\leftrightarrow b_1\vee \dots\vee b_n$

where $\leftrightarrow $ is read as “if and only if” (see Figure 5.1). This equivalence is called Clark’s completion of the clauses for $a$. Clark’s completion of a knowledge base is the completion for each atom in the knowledge base.

Clark’s completion means that if there are no rules for an atom $a$, then the completion of this atom is $a\leftrightarrow false$, which means that $a$ is false.
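The construction of the completion is mechanical, and can be sketched in Python (a hypothetical encoding, not from the book: a propositional KB is a dict mapping each atom to the list of its clause bodies, each body a list of atom names):

```python
def clark_completion(kb, atoms):
    """Clark's completion of a propositional knowledge base.

    kb maps each atom to a list of clause bodies (each body a list of
    atoms; an empty body encodes an atomic clause, i.e. a <- true).
    Each atom becomes equivalent to the disjunction of its bodies;
    an atom with no clauses becomes equivalent to false.
    """
    def body_str(body):
        if not body:
            return "true"                       # atomic clause a.
        if len(body) == 1:
            return body[0]
        return "(" + " & ".join(body) + ")"

    return {a: " | ".join(body_str(b) for b in kb.get(a, [])) or "false"
            for a in atoms}

# Hypothetical example: a <- b1.  a <- b2 & b3.  c.  (d has no clauses)
comp = clark_completion({'a': [['b1'], ['b2', 'b3']], 'c': [[]]},
                        ['a', 'c', 'd'])
# a <-> b1 | (b2 & b3);  c <-> true;  d <-> false
```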

Consider the clauses from Example 5.8:

$down\_s_1.$

$up\_s_2.$

$ok\_cb_1.$

$live\_l_1\leftarrow live\_w_0.$

$live\_w_0\leftarrow live\_w_1\wedge up\_s_2.$

$live\_w_0\leftarrow live\_w_2\wedge down\_s_2.$

$live\_w_1\leftarrow live\_w_3\wedge up\_s_1.$

$live\_w_2\leftarrow live\_w_3\wedge down\_s_1.$

$live\_w_3\leftarrow live\_outside\wedge ok\_cb_1.$

$live\_outside.$

Suppose that these are the only clauses for the atoms in the heads of these clauses, and there are no clauses for $up\_s_1$ or $down\_s_2$. The completion of these atoms is

$down\_s_1\leftrightarrow true.$

$up\_s_1\leftrightarrow false.$

$up\_s_2\leftrightarrow true.$

$down\_s_2\leftrightarrow false.$

$ok\_cb_1\leftrightarrow true.$

$live\_l_1\leftrightarrow live\_w_0.$

$live\_w_0\leftrightarrow (live\_w_1\wedge up\_s_2)\vee (live\_w_2\wedge down\_s_2).$

$live\_w_1\leftrightarrow live\_w_3\wedge up\_s_1.$

$live\_w_2\leftrightarrow live\_w_3\wedge down\_s_1.$

$live\_w_3\leftrightarrow live\_outside\wedge ok\_cb_1.$

$live\_outside\leftrightarrow true.$

This implies that $up\_s_1$ is false, $live\_w_1$ is false, and $live\_w_2$ is true.
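Because this knowledge base is acyclic, the unique model of its completion can be computed by a simple recursive evaluation. The following Python sketch (our own encoding of the example, not the book's code) confirms the values just stated:

```python
def truth(kb, atom, memo=None):
    """Truth of an atom in the model of the completion of an acyclic
    definite-clause KB: true iff some clause body is entirely true;
    an atom with no clauses is false."""
    if memo is None:
        memo = {}
    if atom not in memo:
        memo[atom] = any(all(truth(kb, b, memo) for b in body)
                         for body in kb.get(atom, []))
    return memo[atom]

kb = {  # the clauses above, with atoms encoded as strings
    'down_s1': [[]], 'up_s2': [[]], 'ok_cb1': [[]], 'live_outside': [[]],
    'live_l1': [['live_w0']],
    'live_w0': [['live_w1', 'up_s2'], ['live_w2', 'down_s2']],
    'live_w1': [['live_w3', 'up_s1']],
    'live_w2': [['live_w3', 'down_s1']],
    'live_w3': [['live_outside', 'ok_cb1']],
}
# up_s1 has no clauses, so it is false; hence live_w1 is false
# and live_w2 is true, as claimed in the text.
```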

With the completion, the system can derive negations, and so it is useful to extend the language to allow negations in the body of clauses. A literal is either an atom or the negation of an atom. The definition of a definite clause can be extended to allow literals in the body rather than just atoms. We write the negation of atom $a$ under the complete knowledge assumption as $\sim a$ to distinguish it from classical negation that does not assume the completion. This negation is often called negation as failure.

Under negation as failure, body $g$ is a consequence of the knowledge base $KB$ if $KB'\vDash g$, where $KB'$ is Clark's completion of $KB$. A negation $\sim a$ in the body of a clause or the query becomes $\neg a$ in the completion. That is, a query follows from a knowledge base under the complete knowledge assumption if and only if the query is a logical consequence of the completion of the knowledge base.

Consider the axiomatization of Example 5.8. Representing a domain can be made simpler by expecting the user to tell the system only what switches are up and by the system concluding that a switch is down if it has not been told the switch is up. This can be done by adding the following rules:

$down\_s_1\leftarrow \sim up\_s_1.$

$down\_s_2\leftarrow \sim up\_s_2.$

$down\_s_3\leftarrow \sim up\_s_3.$

Similarly, the system may conclude that the circuit breakers are ok unless it has been told they are broken:

$ok\_cb_1\leftarrow \sim broken\_cb_1.$

$ok\_cb_2\leftarrow \sim broken\_cb_2.$

Although this may look more complicated than the previous representation, it means that it is easier for the user to specify what is occurring in a particular situation. The user has to specify only what is up and what is broken. This may save time if being down is normal for switches and being ok is normal for circuit breakers.

To represent the state of Figure 5.2, the user specifies

$up\_s_2.$

$up\_s_3.$

The system can infer that $s_1$ must be down and both circuit breakers are ok.

The completion of the knowledge base consisting of the clauses above is

$down\_s_1\leftrightarrow \neg up\_s_1.$

$down\_s_2\leftrightarrow \neg up\_s_2.$

$down\_s_3\leftrightarrow \neg up\_s_3.$

$ok\_cb_1\leftrightarrow \neg broken\_cb_1.$

$ok\_cb_2\leftrightarrow \neg broken\_cb_2.$

$up\_s_1\leftrightarrow false.$

$up\_s_2\leftrightarrow true.$

$up\_s_3\leftrightarrow true.$

$broken\_cb_1\leftrightarrow false.$

$broken\_cb_2\leftrightarrow false.$

Notice that atoms that appear in the bodies of clauses but not in the head of any clause are false in the completion.
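The inference in this example (given only $up\_s_2$ and $up\_s_3$, switch $s_1$ is down and both circuit breakers are ok) can be checked mechanically. The sketch below is a hypothetical Python encoding: literals are strings, with a leading `~` for negation as failure, evaluated against the completion of an acyclic knowledge base:

```python
def holds(kb, lit, memo=None):
    """Evaluate a literal in the model of the completion of an acyclic
    ground KB.  A literal '~a' is negation as failure: true iff a is false."""
    if memo is None:
        memo = {}
    if lit.startswith('~'):
        return not holds(kb, lit[1:], memo)
    if lit not in memo:
        memo[lit] = any(all(holds(kb, l, memo) for l in body)
                        for body in kb.get(lit, []))
    return memo[lit]

kb = {  # the default rules above plus the user's observations
    'down_s1': [['~up_s1']], 'down_s2': [['~up_s2']], 'down_s3': [['~up_s3']],
    'ok_cb1': [['~broken_cb1']], 'ok_cb2': [['~broken_cb2']],
    'up_s2': [[]], 'up_s3': [[]],       # the user specifies only these
}
# down_s1 holds because up_s1 cannot be derived; down_s2 and down_s3
# do not hold; both ok_cb atoms hold because no breaker is broken.
```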

Recall that a knowledge base is acyclic if there is an assignment of natural numbers (non-negative integers) to the atoms so that the atoms in the body of a clause are assigned a lower number than the atom in the head. With negation as failure, non-acyclic knowledge bases become semantically problematic.

The following knowledge base is not acyclic:

$a\leftarrow \sim b.$

$b\leftarrow \sim a.$

Clark’s completion of this knowledge base is equivalent to $a\leftrightarrow \neg b$, which just specifies that $a$ and $b$ have different truth values but not which one is true.

The following knowledge base is also not acyclic:

$a\leftarrow \sim a.$

Clark’s completion of this knowledge base is $a\leftrightarrow \neg a$, which is logically inconsistent.

Clark’s completion of an acyclic knowledge base is always consistent and always gives a unique truth value to each atom. For the rest of this chapter, we assume that the knowledge bases are acyclic.
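The required number assignment exists exactly when the atom dependency graph (each head depends on the atoms of its body literals, ignoring negation signs) has no cycle, so acyclicity can be tested by cycle detection. A possible sketch, under the string encoding used above:

```python
def is_acyclic(kb):
    """An assignment of levels (body atoms strictly lower than the head)
    exists iff the atom dependency graph has no cycle."""
    deps = {a: {lit.lstrip('~') for body in bodies for lit in body}
            for a, bodies in kb.items()}
    WHITE, GREY, BLACK = 0, 1, 2      # DFS colouring for cycle detection

    colour = {}

    def visit(a):
        if colour.get(a, WHITE) == GREY:      # back edge: a cycle
            return False
        if colour.get(a, WHITE) == BLACK:     # already known cycle-free
            return True
        colour[a] = GREY
        ok = all(visit(b) for b in deps.get(a, ()))
        colour[a] = BLACK
        return ok

    return all(visit(a) for a in deps)

# The two problematic knowledge bases above are detected as cyclic:
#   {a <- ~b, b <- ~a}  and  {a <- ~a}
```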

A logic is monotonic if any proposition that can be derived from a knowledge base can also be derived when extra propositions are added to the knowledge base. That is, adding knowledge does not reduce the set of propositions that can be derived. The definite-clause logic is monotonic.

A logic is non-monotonic if some conclusions can be invalidated by adding more knowledge. The logic of definite clauses with negation as failure is non-monotonic. Non-monotonic reasoning is useful for representing defaults. A default is a rule that can be used unless it is overridden by an exception.

For example, to say that $b$ is normally true if $c$ is true, a knowledge base designer can write a rule of the form

$b\leftarrow c\wedge \sim ab\_a.$

where $ab\_a$ is an atom that means abnormal with respect to some aspect $a$. Given $c$, the agent can infer $b$ unless it is told $ab\_a$. Adding $ab\_a$ to the knowledge base can prevent the conclusion of $b$. Rules that imply $ab\_a$ can be used to prevent the default under the conditions of the body of the rule.
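The non-monotonicity of such a default can be demonstrated with a small sketch (a hypothetical Python encoding, with `~` marking negation as failure): with only $c$, the agent concludes $b$; adding $ab\_a$ withdraws that conclusion.

```python
def holds(kb, lit, memo=None):
    """Literal evaluation against the completion of an acyclic ground KB;
    '~a' is negation as failure."""
    if memo is None:
        memo = {}
    if lit.startswith('~'):
        return not holds(kb, lit[1:], memo)
    if lit not in memo:
        memo[lit] = any(all(holds(kb, l, memo) for l in body)
                        for body in kb.get(lit, []))
    return memo[lit]

default = {'b': [['c', '~ab_a']], 'c': [[]]}
assert holds(default, 'b')               # the default applies

exception = dict(default, ab_a=[[]])     # now ab_a is known
assert not holds(exception, 'b')         # the conclusion is withdrawn
```

Adding a clause removed a conclusion, which is exactly what a monotonic logic cannot do.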

Suppose the purchasing agent is investigating purchasing holidays. A resort may be adjacent to a beach or away from a beach. This is not symmetric; if the resort were adjacent to a beach, the knowledge provider would specify this. Thus, it is reasonable to have the clause

$away\_from\_beach\leftarrow \sim on\_beach.$

This clause enables an agent to infer that a resort is away from the beach if the agent is not told it is on a beach.

A cooperative system tries not to mislead. If we are told the resort is on the beach, we would expect that resort users would have access to the beach. If they have access to a beach, we would expect them to be able to swim at the beach. Thus, we would expect the following defaults:

$beach\_access\leftarrow on\_beach\wedge \sim ab\_beach\_access.$

$swim\_at\_beach\leftarrow beach\_access\wedge \sim ab\_swim\_at\_beach.$

A cooperative system would tell us if a resort on the beach has no beach access or if there is no swimming. We could also specify that, if there is an enclosed bay and a big city, then there is no swimming, by default:

$ab\_swim\_at\_beach\leftarrow enclosed\_bay\wedge big\_city\wedge \sim ab\_no\_swimming\_near\_city.$

We could say that British Columbia (BC) is abnormal with respect to swimming near cities:

$ab\_no\_swimming\_near\_city\leftarrow in\_BC\wedge \sim ab\_BC\_beaches.$

Given only the preceding rules, an agent infers $away\_from\_beach$. If it is then told $on\_beach$, it can no longer infer $away\_from\_beach$, but it can now infer $beach\_access$ and $swim\_at\_beach$. If it is also told $enclosed\_bay$ and $big\_city$, it can no longer infer $swim\_at\_beach$. However, if it is subsequently told $in\_BC$, it can once again infer $swim\_at\_beach$.

By having defaults of what is normal, a user can interact with the system by telling it what is abnormal, which allows for economy in communication. The user does not have to state the obvious.

One way to think about non-monotonic reasoning is in terms of arguments. The rules can be used as components of arguments, in which the negated abnormality gives a way to undermine arguments. Note that, in the language presented, only positive arguments exist, so these are the only ones that can be undermined. In more general theories, there can be positive and negative arguments that attack each other.

The bottom-up proof procedure for negation as failure is a modification of the bottom-up procedure for definite clauses. The difference is that it can add literals of the form $\sim p$ to the set $C$ of consequences that have been derived; $\sim p$ is added to $C$ when it can determine that $p$ must fail.

Failure can be defined recursively: $p$ fails when every body of a clause with $p$ as the head fails. A body fails if one of the literals in the body fails. An atom ${b}_{i}$ in a body fails if $\sim {b}_{i}$ can be derived. A negation $\sim {b}_{i}$ in a body fails if ${b}_{i}$ can be derived.

Figure 5.11 gives a bottom-up negation-as-failure interpreter for computing the consequences of a ground $KB$. Note that this includes the case of a clause with an empty body (in which case $m=0$ and the atom at the head is added to $C$) and the case of an atom that does not appear in the head of any clause (in which case its negation is added to $C$).

Consider the following clauses:

$p\leftarrow q\wedge \sim r.$

$p\leftarrow s.$

$q\leftarrow \sim s.$

$r\leftarrow \sim t.$

$t.$

$s\leftarrow w.$

The following is a possible sequence of literals added to $C$:

$t,\sim r,\sim w,\sim s,q,p$

where $t$ is derived trivially because it is given as an atomic clause; $\sim r$ is derived because $t\in C$; $\sim w$ is derived as there are no clauses for $w$, and so the “for every clause” condition of line 18 of Figure 5.11 trivially holds. Literal $\sim s$ is derived as $\sim w\in C$; and $q$ and $p$ are derived as the bodies are all proved.
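This derivation can be reproduced by a direct Python transcription of the procedure just described (a sketch under a string encoding of literals, not the book's Figure 5.11 code): $p$ is added when some body is wholly in $C$, and $\sim p$ when every body of every clause for $p$ contains a literal whose complement is already in $C$.

```python
def complement(lit):
    return lit[1:] if lit.startswith('~') else '~' + lit

def naf_bottom_up(kb, atoms):
    """Bottom-up negation-as-failure consequences of a ground KB.
    Add atom p when some clause body is entirely derived; add ~p when
    every body of every clause for p contains a failed literal (one
    whose complement is in C).  An atom with no clauses gets ~p."""
    C = set()
    changed = True
    while changed:
        changed = False
        for p in atoms:
            if p not in C and any(all(l in C for l in body)
                                  for body in kb.get(p, [])):
                C.add(p); changed = True
            if '~' + p not in C and all(any(complement(l) in C for l in body)
                                        for body in kb.get(p, [])):
                C.add('~' + p); changed = True
    return C

kb = {'p': [['q', '~r'], ['s']], 'q': [['~s']],
      'r': [['~t']], 't': [[]], 's': [['w']]}
consequences = naf_bottom_up(kb, ['p', 'q', 'r', 's', 't', 'w'])
# consequences is {'t', '~r', '~w', '~s', 'q', 'p'}, as in the text
```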

The top-down procedure for the complete knowledge assumption proceeds by negation as failure. It is similar to the top-down definite-clause proof procedure of Figure 5.4. This is a non-deterministic procedure (see the box) that can be implemented by searching over choices that succeed. When a negated atom $\sim a$ is selected, a new proof for atom $a$ is started. If the proof for $a$ fails, $\sim a$ succeeds. If the proof for $a$ succeeds, the algorithm fails and must make other choices. The algorithm is shown in Figure 5.12.

Consider the clauses from Example 5.29. Suppose the query is ask $p$.

Initially, $G=\{p\}$.

Using the first rule for $p$, $G$ becomes $\{q,\sim r\}$.

Selecting $q$, and replacing it with the body of the third rule, $G$ becomes $\{\sim s,\sim r\}$.

It then selects $\sim s$ and starts a proof for $s$. This proof for $s$ fails, and thus $G$ becomes $\{\sim r\}$.

It then selects $\sim r$ and tries to prove $r$. In the proof for $r$, there is the subgoal $\sim t$, and so it tries to prove $t$. This proof for $t$ succeeds. Thus, the proof for $\sim t$ fails and, because there are no more rules for $r$, the proof for $r$ fails. Therefore, the proof for $\sim r$ succeeds.

$G$ is empty and so it returns $yes$ as the answer to the top-level query.
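The trace above can be reproduced by a compact recursive version of the top-down procedure (a sketch under the same string encoding, not the book's Figure 5.12 code): a negated goal starts a fresh proof of the atom and succeeds exactly when that proof fails.

```python
def prove(kb, goals):
    """Top-down proof with negation as failure for a ground acyclic KB.
    goals is a list of literals; '~a' succeeds iff a fresh proof of a fails."""
    if not goals:
        return True                      # empty goal set: success
    g, rest = goals[0], goals[1:]
    if g.startswith('~'):
        # negation as (finite) failure: prove the atom in isolation
        return (not prove(kb, [g[1:]])) and prove(kb, rest)
    # resolve g against each of its clause bodies
    return any(prove(kb, body + rest) for body in kb.get(g, []))

kb = {'p': [['q', '~r'], ['s']], 'q': [['~s']],
      'r': [['~t']], 't': [[]], 's': [['w']]}
assert prove(kb, ['p'])                  # as in the trace above
assert not prove(kb, ['r'])              # r fails, so ~r succeeds
```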

Note that this implements finite failure, because it makes no conclusion if the proof procedure does not halt. For example, suppose there is just the rule $p\leftarrow p$. The algorithm does not halt for the query ask $p$. The completion, $p\leftrightarrow p$, gives no information. Even though there may be a way to conclude that there will never be a proof for $p$, a sound proof procedure should not conclude $\sim p$, as it does not follow from the completion.