
5.4.4.1 Bottom-Up Implementation

The bottom-up implementation is an augmented version of the bottom-up algorithm for definite clauses presented in Section 5.2.2.1.

The modification to that algorithm is that the conclusions are pairs ⟨a,A⟩, where a is an atom and A is a set of assumables that imply a in the context of the Horn clause knowledge base KB.

Initially, the conclusion set C is {⟨a,{a}⟩:a is assumable}. Clauses can be used to derive new conclusions. If there is a clause h←b1∧...∧bm such that for each bi there is some Ai such that ⟨bi,Ai⟩∈C, then ⟨h,A1∪...∪Am⟩ can be added to C. Note that this covers the case of atomic clauses, with m=0, where ⟨h,{}⟩ is added to C.


1: Procedure ConflictBU(KB,Assumables)
2:           Inputs
3:                     KB: a set of Horn clauses
4:                     Assumables: a set of atoms that can be assumed
             Output
5:                     set of conflicts
6:           Local
7:                     C is a set of pairs of an atom and a set of assumables
8:           C←{⟨a,{a}⟩:a is assumable}
9:           repeat
10:                     select clause "h←b1∧...∧bm" in KB such that
11:                               ⟨bi,Ai⟩∈C for all i and
12:                               ⟨h,A⟩∉C where A=A1∪...∪Am
13:                     C←C∪{⟨h,A⟩}
14:           until no more selections are possible
15:           return {A: ⟨false,A⟩∈C}
Figure 5.9: Bottom-up proof procedure for computing conflicts

Figure 5.9 gives code for the algorithm.
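
The procedure of Figure 5.9 can be coded directly. The following is a minimal Python sketch, assuming each clause is represented as a pair (head, body), where body is a list of atoms (empty for an atomic clause) and the string 'false' stands for the special atom false; this representation is an illustrative choice, not part of the figure.

from itertools import product

def conflict_bu(kb, assumables):
    """Bottom-up computation of conflicts, following Figure 5.9.

    kb: a list of (head, body) pairs, where body is a list of atoms
        (empty for an atomic clause) and head may be the atom 'false'.
    assumables: the atoms that can be assumed.
    Returns the set of conflicts found, each a frozenset of assumables.
    """
    # C is the set of pairs <atom, set of assumables that imply it>
    c = {(a, frozenset([a])) for a in assumables}
    added = True
    while added:                       # until no more selections are possible
        added = False
        for head, body in kb:
            # for each body atom b_i, collect every A_i with <b_i, A_i> in C
            supports = [[s for (atom, s) in c if atom == b] for b in body]
            if any(not s for s in supports):
                continue               # some body atom is not yet supported
            # try every way of choosing one A_i per body atom
            for choice in product(*supports):
                a = frozenset().union(*choice) if choice else frozenset()
                if (head, a) not in c:
                    c.add((head, a))   # add <h, A_1 ∪ ... ∪ A_m> to C
                    added = True
    return {a for (atom, a) in c if atom == 'false'}

The empty body (m=0) is handled because product() over no lists yields a single empty choice, so ⟨h,{}⟩ is added for an atomic clause.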

When the pair ⟨false,A⟩ is generated, the assumptions A form a conflict. One refinement of this program is to prune supersets of assumptions. If ⟨a,A1⟩ and ⟨a,A2⟩ are in C, where A1⊂A2, then ⟨a,A2⟩ can be removed from C or not added to C. There is no reason to use the extra assumptions to imply a. Similarly, if ⟨false,A1⟩ and ⟨a,A2⟩ are in C, where A1⊆A2, then ⟨a,A2⟩ can be removed from C because A1 and any superset of it, including A2, are inconsistent with the clauses given, and so nothing more can be learned from considering such sets of assumables.
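
One way to code this refinement is sketched below, assuming C is reorganized as a dictionary mapping each atom to a list of assumption sets; the helper name add_pruned and this data layout are assumptions made for illustration, not the book's code.

def add_pruned(c, atom, new_set, conflicts):
    """Add <atom, new_set> to c unless it is subsumed.

    c: dict mapping each atom to a list of frozensets of assumables.
    conflicts: list of frozensets already known to imply false.
    Returns True if <atom, new_set> was added (i.e., it was not pruned)."""
    # prune if some known conflict is a subset of new_set: such a set of
    # assumables is already inconsistent with the clauses
    if any(conf <= new_set for conf in conflicts):
        return False
    existing = c.setdefault(atom, [])
    # prune if atom is already implied by a subset of new_set: there is
    # no reason to use the extra assumptions
    if any(old <= new_set for old in existing):
        return False
    # conversely, drop any recorded superset of new_set for this atom
    c[atom] = [old for old in existing if not new_set <= old] + [new_set]
    return True

When a pair ⟨false,A⟩ is added, A would also be appended to conflicts, so that later supersets of A are pruned.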

Example 5.22: Consider the axiomatization of Figure 5.8, discussed in Example 5.20.

Initially, in the algorithm of Figure 5.9, C has the value

{⟨ok_l1,{ok_l1}⟩,⟨ok_l2,{ok_l2}⟩, ⟨ok_s1,{ok_s1}⟩,⟨ok_s2,{ok_s2}⟩,
     ⟨ok_s3,{ok_s3}⟩, ⟨ok_cb1,{ok_cb1}⟩, ⟨ok_cb2,{ok_cb2}⟩}.

The following shows a sequence of values added to C under one sequence of selections:

⟨live_outside,{}⟩
⟨live_w5,{}⟩
⟨live_w3,{ok_cb1}⟩
⟨up_s3,{}⟩
⟨live_w4,{ok_cb1,ok_s3}⟩
⟨live_l2,{ok_cb1,ok_s3}⟩
⟨light_l2,{}⟩
⟨lit_l2,{ok_cb1,ok_s3,ok_l2}⟩
⟨dark_l2,{}⟩
⟨false,{ok_cb1,ok_s3,ok_l2}⟩.

Thus, the knowledge base entails

¬ok_cb1 ∨ ¬ok_s3 ∨ ¬ok_l2.

The other conflict can be found by continuing the algorithm.
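
To see the sketch above at work on this example, the following fragment encodes only those clauses of Figure 5.8 that appear in the derivation shown; the full axiomatization contains further clauses, which is why only one of the two conflicts is found here.

assumables = ['ok_l1', 'ok_l2', 'ok_s1', 'ok_s2', 'ok_s3', 'ok_cb1', 'ok_cb2']
kb = [
    ('live_outside', []),
    ('live_w5', ['live_outside']),
    ('live_w3', ['live_w5', 'ok_cb1']),
    ('up_s3', []),
    ('live_w4', ['live_w3', 'up_s3', 'ok_s3']),
    ('live_l2', ['live_w4']),
    ('light_l2', []),
    ('lit_l2', ['light_l2', 'live_l2', 'ok_l2']),
    ('dark_l2', []),
    ('false', ['dark_l2', 'lit_l2']),
]
print(conflict_bu(kb, assumables))
# {frozenset({'ok_cb1', 'ok_s3', 'ok_l2'})}  (element order may vary)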