5.8 Abduction

Abduction is a form of reasoning where assumptions are made to explain observations. For example, if an agent were to observe that some light was not working, it hypothesizes what is happening in the world to explain why the light was not working. A tutoring agent could try to explain why a student gives some answer in terms of what the student understands and does not understand.

The term abduction was coined by Peirce (1839–1914) to differentiate this type of reasoning from deduction, which involves determining what logically follows from a set of axioms, and induction, which involves inferring general relationships from examples.

In abduction, an agent hypothesizes what may be true about an observed case. An agent determines what implies its observations – what could be true to make the observations true. Observations are trivially implied by contradictions (as a contradiction logically implies everything), so we want to exclude contradictions from our explanation of the observations.

To formalize abduction, we use the language of Horn clauses and assumables. The system is given:

  • a knowledge base, KB, which is a set of Horn clauses

  • a set A of atoms, called the assumables, which are the building blocks of hypotheses.

Instead of being added to the knowledge base, observations must be explained.

A scenario of ⟨KB, A⟩ is a subset H of A such that KB ∪ H is satisfiable. KB ∪ H is satisfiable if a model exists in which every element of KB and every element of H is true. This happens if no subset of H is a conflict of KB.

An explanation of proposition g from ⟨KB, A⟩ is a scenario that, together with KB, implies g.

That is, an explanation of proposition g is a set H, H ⊆ A, such that

KB ∪ H ⊧ g
KB ∪ H ⊭ false.

A minimal explanation of g from ⟨KB, A⟩ is an explanation H of g from ⟨KB, A⟩ such that no strict subset of H is also an explanation of g from ⟨KB, A⟩.
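These definitions can be prototyped directly. The following is a minimal Python sketch, not part of the text: the pair representation of clauses and the names consequences, scenario, explains, and minimal_explanations are assumptions made here. A knowledge base is a list of (head, body) pairs, with the reserved head "false" for integrity constraints; entailment is tested by forward chaining, and minimal explanations are found by enumerating subsets of the assumables by increasing size, which is only practical for small knowledge bases.

from itertools import combinations

# A clause is a pair (head, body): a head atom and a list of body atoms.
# Integrity constraints use the reserved head "false".

def consequences(clauses, assumed):
    # Forward chaining: all atoms derivable from the clauses plus the assumed atoms.
    derived = set(assumed)
    changed = True
    while changed:
        changed = False
        for head, body in clauses:
            if head not in derived and all(b in derived for b in body):
                derived.add(head)
                changed = True
    return derived

def scenario(clauses, assumed):
    # H is a scenario if KB together with H does not derive false.
    return "false" not in consequences(clauses, assumed)

def explains(clauses, assumed, g):
    # H explains g if H is a scenario and KB together with H implies g.
    return scenario(clauses, assumed) and g in consequences(clauses, assumed)

def minimal_explanations(clauses, assumables, g):
    # Enumerate subsets of the assumables by increasing size; keep those that
    # explain g and have no strict subset that already explains g.
    found = []
    for size in range(len(assumables) + 1):
        for subset in combinations(sorted(assumables), size):
            h = set(subset)
            if any(f < h for f in found):   # a strictly smaller explanation exists
                continue
            if explains(clauses, h, g):
                found.append(h)
    return found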

Example 5.31.

Consider the following simplistic knowledge base and assumables for a diagnostic assistant:

bronchitis ← influenza.
bronchitis ← smokes.
coughing ← bronchitis.
wheezing ← bronchitis.
fever ← influenza.
fever ← infection.
sore_throat ← influenza.
false ← smokes ∧ nonsmoker.
assumable smokes, nonsmoker, influenza, infection.

If the agent observes wheezing, there are two minimal explanations:

{influenza} and {smokes}.

These explanations imply bronchitis and coughing.

If wheezing ∧ fever is observed, the minimal explanations are

{influenza} and {smokes,infection}.

If wheezing ∧ nonsmoker is observed, there is one minimal explanation:

{influenza,nonsmoker}.

The other explanation of wheezing is inconsistent with being a non-smoker.
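With the sketch above (again an assumed encoding, not the book's), Example 5.31 can be checked directly. To explain a conjunction such as wheezing ∧ fever, a clause for a new atom standing for the conjunction is added.

# Example 5.31 encoded for the sketch above.
kb = [
    ("bronchitis",  ["influenza"]),
    ("bronchitis",  ["smokes"]),
    ("coughing",    ["bronchitis"]),
    ("wheezing",    ["bronchitis"]),
    ("fever",       ["influenza"]),
    ("fever",       ["infection"]),
    ("sore_throat", ["influenza"]),
    ("false",       ["smokes", "nonsmoker"]),   # integrity constraint
    ("obs",         ["wheezing", "fever"]),     # obs stands for wheezing ∧ fever
]
assumables = {"smokes", "nonsmoker", "influenza", "infection"}

print(minimal_explanations(kb, assumables, "wheezing"))
# the minimal explanations {influenza} and {smokes}
print(minimal_explanations(kb, assumables, "obs"))
# the minimal explanations {influenza} and {smokes, infection}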

Example 5.32.

Consider the knowledge base, where tampering and fire are assumable:

alarm ← tampering.
alarm ← fire.
smoke ← fire.

If alarm is observed, there are two minimal explanations:

{tampering} and {fire}.

If alarm ∧ smoke is observed, there is one minimal explanation:

{fire}.

Notice how, when smoke is observed, there is no need to hypothesize tampering to explain alarm; it has been explained away by fire.
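The explaining-away behavior can be reproduced with the same sketch, assuming tampering and fire are the assumables and using an extra atom for the conjunction alarm ∧ smoke.

# Example 5.32 encoded for the sketch above.
alarm_kb = [
    ("alarm", ["tampering"]),
    ("alarm", ["fire"]),
    ("smoke", ["fire"]),
    ("obs",   ["alarm", "smoke"]),   # obs stands for alarm ∧ smoke
]
print(minimal_explanations(alarm_kb, {"tampering", "fire"}, "alarm"))
# the minimal explanations {tampering} and {fire}
print(minimal_explanations(alarm_kb, {"tampering", "fire"}, "obs"))
# the single minimal explanation {fire}; tampering is explained away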

Determining what is going on inside a system based on observations about the behavior is the problem of diagnosis or recognition. In abductive diagnosis, the agent hypothesizes diseases or malfunctions, as well as that some parts are working normally, to explain the observed symptoms.

This differs from consistency-based diagnosis (CBD) in the following ways:

  • In CBD, only normal behavior needs to be represented, and the hypotheses are assumptions of normal behavior. In abductive diagnosis, faulty behavior as well as normal behavior needs to be represented, and the assumables need to be for normal behavior and for each fault (or different behavior).

  • In abductive diagnosis, observations need to be explained. In CBD, observations are added to the knowledge base, and false is proved.

Abductive diagnosis requires more detailed modeling and gives more detailed diagnoses, because the observations must actually be provable from the knowledge base and the assumptions. Abductive diagnosis is also used to diagnose systems in which there is no normal behavior. For example, in a tutoring agent, by observing what a student does, the agent can hypothesize what the student understands and does not understand, which can guide the tutoring agent’s actions.

Abduction can also be used for design, in which the query to be explained is a design goal and the assumables are the building blocks of the designs. The explanation is the design. Consistency means that the design is possible. The implication of the design goal means that the design provably achieves the design goal.
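As an illustration, here is a toy design problem in the same sketch. It is entirely hypothetical and not from the text: the goal lighting is the query to be explained, the component choices are the assumables, and the integrity constraint rules out incompatible choices.

# A hypothetical design problem: explanations of the design goal are designs.
design_kb = [
    ("lighting", ["lamp", "battery"]),
    ("lighting", ["lamp", "mains_supply"]),
    ("false",    ["battery", "mains_supply"]),   # the two power sources are incompatible
]
building_blocks = {"lamp", "battery", "mains_supply"}
print(minimal_explanations(design_kb, building_blocks, "lighting"))
# two possible designs: {lamp, battery} and {lamp, mains_supply}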

Example 5.33.

Consider the electrical domain of Figure 5.2. Similar to the representation of the example for consistency-based diagnosis in Example 5.21, we axiomatize what follows from the assumptions of what may be happening in the system. In abductive diagnosis, we must axiomatize what follows both from faults and from normality assumptions. For each atom that could be observed, we axiomatize how it could be produced.

A user could observe that l1 is lit or is dark. We must write rules that axiomatize how the system must be to make these true. Light l1 is lit if it is ok and there is power coming in. The light is dark if it is broken or there is no power. The system can assume l1 is ok or broken, but not both:

lit_l1 ← live_w0 ∧ ok_l1.
dark_l1 ← broken_l1.
dark_l1 ← dead_w0.
assumable ok_l1.
assumable broken_l1.
false ← ok_l1 ∧ broken_l1.

You can then write rules on how live_w0 and dead_w0 depend on switch positions, the input to w0, and assumptions of the status of the wire. Observing that some of the lights are lit gives explanations that can account for the observation.
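The fragment above can also be run in the Python sketch. The clauses for live_w0 and dead_w0 below are hypothetical placeholders for the wiring rules the text asks you to write, added only so that the queries have something to work with.

# A fragment of Example 5.33, with hypothetical wiring rules for w0.
elect_kb = [
    ("lit_l1",  ["live_w0", "ok_l1"]),
    ("dark_l1", ["broken_l1"]),
    ("dark_l1", ["dead_w0"]),
    ("false",   ["ok_l1", "broken_l1"]),
    ("live_w0", ["live_outside", "ok_w0"]),   # placeholder wiring rule
    ("dead_w0", ["broken_w0"]),               # placeholder wiring rule
    ("false",   ["ok_w0", "broken_w0"]),
]
elect_assumables = {"ok_l1", "broken_l1", "live_outside", "ok_w0", "broken_w0"}

print(minimal_explanations(elect_kb, elect_assumables, "lit_l1"))
# one minimal explanation: {live_outside, ok_w0, ok_l1}
print(minimal_explanations(elect_kb, elect_assumables, "dark_l1"))
# two minimal explanations: {broken_l1} and {broken_w0}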

Both the bottom-up and top-down implementations for assumption-based reasoning with Horn clauses can be used for abduction. The bottom-up algorithm of Figure 5.9 computes the minimal explanations for each atom; at the end of the repeat loop, C contains the minimal explanations of each atom (as well as potentially some non-minimal explanations). The refinement of pruning dominated explanations can also be used. The top-down algorithm can be used to find the explanations of any g by first generating the conflicts and, using the same code and knowledge base, proving g instead of false. The minimal explanations of g are the minimal sets of assumables collected to prove g such that no subset is a conflict.
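As a sketch of the bottom-up approach (written here in the spirit of the Figure 5.9 algorithm; the function name and details are assumptions, not the book's code), the following computes, for every derivable atom, the minimal sets of assumables from which it can be derived. The sets computed for false are the conflicts; the genuine minimal explanations of g are the sets computed for g that contain no conflict as a subset.

from itertools import product

def bottom_up_explanations(clauses, assumables):
    # Map each derivable atom to the minimal sets of assumables that derive it.
    expl = {a: [{a}] for a in assumables}   # each assumable explains itself
    changed = True
    while changed:
        changed = False
        for head, body in clauses:
            if not all(b in expl for b in body):
                continue
            # Combine one set for each body atom into a candidate set for the head.
            for combo in product(*(expl[b] for b in body)):
                candidate = set().union(*combo)
                current = expl.setdefault(head, [])
                if any(old <= candidate for old in current):
                    continue              # dominated by an existing set
                # Prune sets that the new candidate dominates, then add it.
                expl[head] = [old for old in current if not candidate <= old]
                expl[head].append(candidate)
                changed = True
    return expl

For the knowledge base of Example 5.32, this computes {tampering} and {fire} as the minimal sets for alarm and {fire} as the only one for smoke; the pruning of dominated sets in the inner loop corresponds to the refinement mentioned above.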