5.4 Knowledge Representation Issues


5.4.2 Querying the User

At design time or offline, there is typically no information about particular cases. This information arrives online as observations from users, sensors, and external knowledge sources. For example, a medical diagnosis program may have knowledge represented as definite clauses about the possible diseases and symptoms, but it would not have knowledge about the actual symptoms manifested by a particular patient. One would not expect the user to want to, or even be able to, volunteer all of the information about a particular case, because users often do not know what information is relevant or do not know the syntax of the representation language. Users typically prefer to answer explicit questions put to them in a more natural language or through a graphical user interface.

A simple way to acquire information from a user is to incorporate an ask-the-user mechanism into the top-down proof procedure. In such a mechanism, an atom is askable if the user may know the truth value at run time. The top-down proof procedure, when it has selected an atom to prove, either can use a clause in the knowledge base to prove it, or, if the atom is askable, can ask the user whether or not the atom is true. The user is thus only asked about atoms that are relevant for the query. There are three classes of atoms that can be selected:

  • atoms for which the user is not expected to know the answer, so the system never asks

  • askable atoms for which the user has not already provided an answer; in this case, the user should be asked for the answer, and the answer should be recorded

  • askable atoms for which the user has already provided an answer; in this case, that answer should be used, and the user should not be asked again about this atom.
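The three classes above can be sketched as a small top-down proof procedure. This is an illustrative Python sketch, not the book's AILog implementation: the knowledge-base encoding, the `ask` callback, and the `answered` cache are assumptions introduced here.

```python
# A minimal top-down proof procedure for definite clauses, extended with
# an ask-the-user mechanism.  An atom is handled in one of three ways:
#   - not askable: proved only from clauses in the knowledge base
#   - askable, not yet answered: the user is asked and the answer recorded
#   - askable, already answered: the recorded answer is reused, no re-asking

def prove(kb, askables, goals, ask, answered=None):
    """Try to prove every atom in `goals` (a list of atom names).
    kb: dict mapping each atom to a list of clause bodies (lists of atoms).
    askables: set of atoms the user may know at run time.
    ask: function atom -> bool; called at most once per askable atom.
    answered: dict caching the user's previous answers.
    """
    if answered is None:
        answered = {}
    if not goals:
        return True
    atom, rest = goals[0], goals[1:]
    if atom in askables:
        if atom not in answered:          # ask only if no recorded answer
            answered[atom] = ask(atom)    # record the answer
        return answered[atom] and prove(kb, askables, rest, ask, answered)
    # Non-askable atom: try each clause whose head is this atom.
    return any(prove(kb, askables, body + rest, ask, answered)
               for body in kb.get(atom, []))

# Tiny demonstration: a <- b & c, where b is a fact and c is askable.
kb = {"a": [["b", "c"]], "b": [[]]}
asked = []
def ask(atom):
    asked.append(atom)
    return True

result = prove(kb, {"c"}, ["a"], ask)
# result is True, and the user was asked only about c
```

Because answers are cached in `answered`, the user is never asked twice about the same atom, and atoms the system can prove itself (such as `b` above) generate no questions at all.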

A bottom-up proof procedure can also be adapted to ask a user, but it should avoid asking about all askable atoms; see Exercise 5.

Note the symmetry between the roles of the user and the roles of the system. They can both ask questions and give answers. At the top level, the user asks the system a question, and at each step the system asks a question, which is answered either by finding the relevant definite clauses or by asking the user. The whole interaction can be characterized by a protocol of questions and answers between two agents, the user and the system.

Example 5.13.

In the electrical domain of Example 5.7, one would not expect the designer of the house to know the switch positions (whether each switch is up or down) or expect the user to know which switches are connected to which wires. It is reasonable that all of the definite clauses of Example 5.7, except for the switch positions, should be given by the designer. The switch positions can then be made askable.

Here is a possible dialog, where the user asks a query and answers yes or no to the system’s questions. The user interface here is minimal to show the basic idea; a real system would use a more sophisticated user-friendly interface.

ailog: ask lit_l1.
Is up_s1 true? no.
Is down_s1 true? yes.
Is down_s2 true? yes.
Answer: lit_l1.

The system only asks the user questions that the user is able to answer and that are relevant to the task at hand.
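The dialog above can be reproduced with a small top-down prover. The clauses below are an assumed, simplified fragment of the wiring in Example 5.7 (light `l1` fed through switches `s1` and `s2`); the exact clause names and the `user_says` table are illustrative.

```python
# Hedged sketch: replaying the ailog dialog for "ask lit_l1".
# The kb is an assumed fragment of Example 5.7; switch positions are askable.

kb = {
    "lit_l1":  [["live_w0"]],
    "live_w0": [["live_w1", "up_s2"], ["live_w2", "down_s2"]],
    "live_w1": [["live_w3", "up_s1"]],
    "live_w2": [["live_w3", "down_s1"]],
    "live_w3": [[]],                     # outside power assumed available
}
askables = {"up_s1", "down_s1", "up_s2", "down_s2"}
user_says = {"up_s1": False, "down_s1": True, "down_s2": True}

def prove(goals, answered):
    """Top-down proof; asks (via user_says) only when an askable is selected."""
    if not goals:
        return True
    atom, rest = goals[0], goals[1:]
    if atom in askables:
        if atom not in answered:
            answered[atom] = user_says.get(atom, False)
        return answered[atom] and prove(rest, answered)
    return any(prove(body + rest, answered) for body in kb.get(atom, []))

answered = {}
proved = prove(["lit_l1"], answered)
# proved is True; answered holds exactly up_s1, down_s1, down_s2 —
# up_s2 is never asked, because the first clause for live_w0 fails
# at up_s1 before up_s2 is ever selected.
```

This mirrors the transcript: the first clause for `live_w0` leads to the question about `up_s1`, which fails, so the procedure backtracks to the second clause and asks about `down_s1` and `down_s2` instead.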

Instead of answering questions, it is sometimes preferable for a user to be able to specify that there is something strange or unusual going on. For example, patients may not be able to specify everything that is true about them but can specify what is unusual. Patients who come to a doctor because their left knee hurts should not be expected to specify that their left elbow does not hurt, and similarly for every other part that does not hurt. Analogously, a sensor may be able to report that something has changed in a scene, even though it cannot recognize what is in the scene.

Given that a user has specified everything that is exceptional, an agent can often infer something from the lack of knowledge. Normality then acts as a default that can be overridden by exceptional information. This idea of allowing for defaults and exceptions to the defaults is explored in Section 5.6.