Some of these exercises can use AIPython (aipython.org) or Prolog.
Exercise 5.1 Suppose we want to be able to reason about an electric kettle plugged into one of the power outlets for the electrical domain of Figure 5.2. For a kettle to heat, it must be plugged into a working power outlet, it must be turned on, and it must be filled with water. Write definite clauses that let the system determine whether kettles are heating.
You must
• give the intended interpretation of all symbols used
• write the clauses so they can be loaded into AIPython or Prolog
• show that the resulting knowledge base runs in AIPython or Prolog.
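For instance, a minimal Prolog sketch, assuming (hypothetically) that the kettle is plugged into outlet p1 of Figure 5.2; all atom names below are this sketch's inventions, and working_p1 would come from your axiomatization of the figure:

    % Hypothetical atoms (not from the book's axiomatization):
    %   heating_kettle    - the kettle is heating
    %   plugged_in_kettle - the kettle is plugged into outlet p1
    %   on_kettle         - the kettle is turned on
    %   full_kettle       - the kettle is filled with water
    %   working_p1        - outlet p1 is working (replace the stub fact
    %                       with rules from the electrical domain)
    heating_kettle :- plugged_in_kettle, on_kettle, full_kettle, working_p1.
    plugged_in_kettle.
    on_kettle.
    full_kettle.
    working_p1.

    % Query:  ?- heating_kettle.   (should succeed)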
Exercise 5.2 Consider the domain of house plumbing shown in Figure 5.13. In this diagram, p1, p2, and p3 are cold water pipes; t1, t2, and t3 are taps; d1, d2, and d3 are drainage pipes.
Suppose you have the following atoms:
• pressurized_pi is true if pipe pi has mains pressure in it
• on_ti is true if tap ti is on
• off_ti is true if tap ti is off
• wet_b is true if b is wet (b is either the sink, bath, or floor)
• flow_pi is true if water is flowing through pi
• plugged_sink is true if the sink has the plug in
• plugged_bath is true if the bath has the plug in
• unplugged_sink is true if the sink does not have the plug in
• unplugged_bath is true if the bath does not have the plug in.
A definite-clause axiomatization for how water can flow down drain d1 if taps t1 and t2 are on and the bath is unplugged is

pressurized_p1.
pressurized_p2 ← on_t1 ∧ pressurized_p1.
flow_shower ← on_t2 ∧ pressurized_p2.
wet_bath ← flow_shower.
flow_d2 ← wet_bath ∧ unplugged_bath.
flow_d1 ← flow_d2.
on_t1.
on_t2.
unplugged_bath.
(a) Finish the axiomatization for the sink in the same manner as the axiomatization for the bath (see the Prolog sketch after part (d)). Test it in AIPython or Prolog.
(b) What information would you expect a resident of a house to be able to provide that the plumber who installed the system, who is not at the house, cannot? Change the axiomatization so that questions about this information are asked of the user.
(c) Axiomatize how the floor is wet if the sink overflows or the bath overflows. They overflow if the plug is in and water is flowing in. You may invent new atomic propositions as long as you give their intended interpretation. (Assume that the taps and plugs have been in the same positions for one hour; you do not have to axiomatize the dynamics of turning on the taps and inserting and removing plugs.) Test it in AIPython or Prolog.
(d) Suppose a hot-water system is installed to the left of tap t3. This has another tap in the pipe leading into it and supplies hot water to the shower and the sink (there are separate hot and cold water taps for each). Add this to your axiomatization. Give the denotation for all propositions you invent. Test it in AIPython or Prolog.
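For part (a), a Prolog rendering of the bath clauses above plus one possible sink axiomatization; the sink topology assumed here (p2 feeds p3, tap t3 fills the sink, drain d3 feeds d1) must be checked against Figure 5.13:

    % Bath, as given above, in Prolog syntax:
    pressurized_p1.
    pressurized_p2 :- on_t1, pressurized_p1.
    flow_shower :- on_t2, pressurized_p2.
    wet_bath :- flow_shower.
    flow_d2 :- wet_bath, unplugged_bath.
    flow_d1 :- flow_d2.
    on_t1.
    on_t2.
    unplugged_bath.

    % Sink, under the assumed topology (verify against the figure):
    pressurized_p3 :- pressurized_p2.
    flow_sink :- on_t3, pressurized_p3.
    wet_sink :- flow_sink.
    flow_d3 :- wet_sink, unplugged_sink.
    flow_d1 :- flow_d3.
    on_t3.
    unplugged_sink.

    % Queries:  ?- wet_sink.   ?- flow_d1.   (both should succeed)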
Exercise 5.3 Consider the knowledge base KB:

a ← b ∧ c.
b ← d.
b ← e.
c.
d ← h.
e.
f ← g ∧ b.
g ← c ∧ k.
j ← a ∧ b.
(a) Give a model of the knowledge base.
(b) Give an interpretation that is not a model of the knowledge base.
(c) Give two atoms that are logical consequences of the knowledge base.
(d) Give two atoms that are not logical consequences of the knowledge base.
Exercise 5.4 Consider the knowledge base KB:

a ← b ∧ c.
b ← d.
b ← e.
c.
d ← h.
e.
f ← g ∧ b.
g ← c ∧ k.
j ← a ∧ b.
(a) Show how the bottom-up proof procedure works for this example (see the Prolog sketch after part (c)). Give all logical consequences of KB.
(b) f is not a logical consequence of KB. Give a model of KB in which f is false.
(c) a is a logical consequence of KB. Give a top-down derivation for the query ask a.
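For part (a), a small forward-chaining (bottom-up) sketch in SWI-Prolog; the rule/2 encoding of KB below is an assumption of this sketch, not the book's representation:

    :- use_module(library(lists)).    % member/2, subset/2

    % KB encoded as rule(Head, BodyAtoms); facts have an empty body.
    rule(a, [b, c]).
    rule(b, [d]).
    rule(b, [e]).
    rule(c, []).
    rule(d, [h]).
    rule(e, []).
    rule(f, [g, b]).
    rule(g, [c, k]).
    rule(j, [a, b]).

    % consequences(-Atoms): the fixed point of the bottom-up procedure.
    consequences(Closure) :- derive([], Closure).

    derive(Known, Closure) :-
        rule(H, B), subset(B, Known), \+ member(H, Known), !,
        derive([H|Known], Closure).
    derive(Closure, Closure).

    % ?- consequences(C).   gives C containing c, e, b, a, and j.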
Exercise 5.5 A bottom-up proof procedure can incorporate an ask-the-user mechanism by asking the user about every askable atom. How can a bottom-up proof procedure still guarantee proof of all (non-askable) atoms that are a logical consequence of a definite-clause knowledge base without asking the user about every askable atom?
Exercise 5.6 This question explores how having an explicit semantics can be used to debug programs. The file elect_bug2 on the book's website is an axiomatization of the electrical wiring domain of Figure 5.2, but it contains a buggy clause (one that is false in the intended interpretation shown in the figure). The aim of this exercise is to find the buggy clause, given the denotation of the symbols given in Example 5.8. To find the buggy rule, you do not even need to look at the knowledge base! (You can look at the knowledge base to find the buggy clause if you like, but that will not help you in this exercise.) All you must know is the meaning of the symbols in the program and what is true in the intended interpretation.
The query lit_l1 can be proved, but it is false in the intended interpretation. Use the how questions of AIPython to find a clause whose head is false in the intended interpretation and whose body is true. This is a buggy rule.
Exercise 5.7 Consider the following knowledge base and assumables aimed to explain why people are acting suspiciously:

goto_forest ← walking.
get_gun ← hunting.
goto_forest ← hunting.
get_gun ← robbing.
goto_bank ← robbing.
goto_bank ← banking.
fill_withdrawal_form ← banking.
false ← banking ∧ robbing.
false ← wearing_good_shoes ∧ goto_forest.
assumable walking, hunting, robbing, banking, wearing_good_shoes.
(a) Suppose get_gun is observed. What are all of the minimal explanations for this observation?
(b) Suppose get_gun ∧ goto_bank is observed. What are all of the minimal explanations for this observation?
(c) Is there something that could be observed to remove one of these as a minimal explanation? What must be added to be able to explain this?
(d) What are the minimal explanations of goto_bank?
(e) Give the minimal explanations of goto_bank ∧ goto_forest.
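These explanations can be checked mechanically. Below is a small propositional abduction sketch in SWI-Prolog; the rule/2, ic/1, and assumable/1 encoding of the knowledge base above is this sketch's assumption:

    :- use_module(library(lists)).

    rule(goto_forest, [walking]).
    rule(get_gun, [hunting]).
    rule(goto_forest, [hunting]).
    rule(get_gun, [robbing]).
    rule(goto_bank, [robbing]).
    rule(goto_bank, [banking]).
    rule(fill_withdrawal_form, [banking]).
    ic([banking, robbing]).                 % false <- banking & robbing
    ic([wearing_good_shoes, goto_forest]).  % false <- shoes & forest
    assumable(walking).  assumable(hunting).
    assumable(robbing).  assumable(banking).
    assumable(wearing_good_shoes).

    % explain(+Goals, -Assumptions): Assumptions is a set of assumables
    % from which all Goals follow and no integrity constraint fires.
    explain(Goals, Assumptions) :-
        abduce(Goals, [], Assumptions),
        \+ (ic(C), proves_all(C, Assumptions)).

    abduce([], As, As).
    abduce([G|Gs], As0, As) :-
        rule(G, Body), append(Body, Gs, Gs1), abduce(Gs1, As0, As).
    abduce([G|Gs], As0, As) :-
        assumable(G),
        ( member(G, As0) -> As1 = As0 ; As1 = [G|As0] ),
        abduce(Gs, As1, As).

    proves_all([], _).
    proves_all([G|Gs], As) :- proves(G, As), proves_all(Gs, As).
    proves(G, As) :- member(G, As).
    proves(G, As) :- rule(G, Body), proves_all(Body, As).

    % ?- explain([get_gun], E).  gives E = [hunting] and E = [robbing].
    % Collect all answers with findall/3 and keep the subset-minimal ones.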
Exercise 5.8 Suppose there are four possible diseases a particular patient may have: p, q, r, and s. p causes spots. q causes spots. Fever could be caused by one (or more) of q, r, or s. The patient has spots and fever. Suppose you have decided to use abduction to diagnose this patient based on the symptoms.
(a) Show how to represent this knowledge using Horn clauses and assumables (see the sketch after part (c)).
(b) Show how to diagnose this patient using abduction. Show clearly the query and the resulting answer(s).
(c) Suppose also that p and s cannot occur together. Show how that changes your knowledge base from part (a). Show how to diagnose the patient using abduction with the new knowledge base. Show clearly the query and the resulting answer(s).
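A sketch using the explain/2 abduction program given after Exercise 5.7; the rule/2 translation of the disease knowledge is this sketch's assumption:

    rule(spots, [p]).
    rule(spots, [q]).
    rule(fever, [q]).
    rule(fever, [r]).
    rule(fever, [s]).
    assumable(p).  assumable(q).  assumable(r).  assumable(s).
    % For part (c), add the integrity constraint:  ic([p, s]).

    % ?- explain([spots, fever], E).
    % Minimal explanations: [q], [p, r], and, without the part (c)
    % constraint, also [p, s].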
Exercise 5.9 Consider the following clauses and integrity constraints:
Suppose the assumables are . What are the minimal conflicts?
Exercise 5.10 Deep Space One (http://nmp.jpl.nasa.gov/ds1/) was a spacecraft, launched by NASA in October 1998, that used AI technology for its diagnosis and control. For more details, see Muscettola et al. [1998] or http://ti.arc.nasa.gov/tech/asr/planning-and-scheduling/remote-agent/ (although these references are not necessary to complete this question).
Figure 5.14 depicts a part of the actual DS1 engine design. To achieve thrust in an engine, fuel and oxidizer must be injected. The whole design is highly redundant to ensure its operation even in the presence of multiple failures (mainly stuck or inoperative valves). Note that whether the valves are black or white, and whether or not they have a bar, are irrelevant for this question.
Each valve can be ok (or not) and can be open (or not). The aim of this question is to axiomatize the domain so that we can do two tasks.
(a) Given an observation of the lack of thrust in an engine and given which valves are open, using consistency-based diagnosis, determine what could be wrong.
(b) Given the goal of having thrust and given the knowledge that some valves are ok, determine which valves should be opened.
For each of these tasks, you must think about what the clauses are in the knowledge base and what is assumable.
The atoms should be of the following forms:
• open_V is true if valve V is open. Thus the atoms should be open_v1, open_v2, and so on.
• ok_V is true if valve V is working properly.
• pressurized_V is true if the output of valve V is pressurized with gas. You should assume that pressurized_t1 and pressurized_t2 are true.
• thrust_E is true if engine E has thrust.
• nothrust is true if there is no thrust in either engine.
To make this manageable, only write rules for the input into engine e1. Test your code using AIPython or Prolog on a number of examples.
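In Prolog, the rules take roughly the following shape; the valve names, tank names, and topology below are assumptions for illustration and must be read off Figure 5.14:

    % Illustrative fragment; v1, v2 and the feed lines are assumed.
    pressurized_t1.                  % fuel tank (assumed name)
    pressurized_t2.                  % oxidizer tank (assumed name)
    % Gas passes a valve that is open and ok:
    pressurized_v1 :- pressurized_t1, open_v1, ok_v1.
    pressurized_v2 :- pressurized_t2, open_v2, ok_v2.
    % Engine e1 has thrust if fuel and oxidizer both reach it:
    thrust_e1 :- pressurized_v1, pressurized_v2.
    % Example scenario facts:
    open_v1.  open_v2.               % observed valve positions
    ok_v1.    ok_v2.                 % in the diagnosis system these
                                     % would be assumables, not facts
    % For task (a): make each ok_V assumable and treat the observation
    % of no thrust as the integrity constraint  false <- thrust_e1.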
Exercise 5.11 Consider using abductive diagnosis on the problem in the previous question, with the following elaborations:
• Valves can be open or closed. Some valves may be specified as open or closed.
• A valve can be ok, in which case the gas will flow if the valve is open and not if it is closed; broken, in which case gas never flows; stuck, in which case gas flows independently of whether the valve is open or closed; or leaking, in which case gas flowing into the valve leaks out instead of flowing through.
• There are three gas sensors that can detect gas leaking (but not which gas); the first gas sensor detects gas from the rightmost valves (v1 to v4), the second gas sensor detects gas from the center valves (v5 to v12), and the third gas sensor detects gas from the leftmost valves (v13 to v16).
(a) Axiomatize the domain so the system can explain thrust or no thrust in engine e1 and the presence of gas in one of the sensors. For example, it should be able to explain why e1 is thrusting. It should be able to explain why e1 is not thrusting and there is gas detected by the third sensor.
Test your axiomatization on some non-trivial examples.
Some of the queries have many explanations. Suggest how the number of explanations could be reduced or managed so that the abductive diagnoses are more useful.
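One possible Prolog pattern for the fault modes of a single valve; atom names such as flow_in_v1 are invented for this sketch, and exactly one mode per valve would be assumable during abduction:

    :- dynamic stuck_v1/0, leaking_v1/0, leak_v13/0, leak_v14/0.

    flow_in_v1.                      % example: gas reaches v1's input
    open_v1.
    ok_v1.                           % one of the four assumable modes
    flow_out_v1 :- flow_in_v1, open_v1, ok_v1.   % ok: flows when open
    flow_out_v1 :- flow_in_v1, stuck_v1.         % stuck: flows regardless
    leak_v1     :- flow_in_v1, leaking_v1.       % leaking: gas escapes
    % broken: no rule lets gas through a broken valve.
    % A sensor detects any leak in its group, e.g. (group assumed):
    gas_sensor3 :- leak_v13.
    gas_sensor3 :- leak_v14.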
Exercise 5.12 You are tasked with axiomatizing the plumbing in your home, and you have an axiomatization similar to that of Exercise 5.2. A new tenant is going to sublet your home and may want to use your system to determine what may be going wrong with the plumbing (before calling you or the plumber).
There are some atoms that you will know the rules for, some that the tenant will know, and some that neither will know. Divide the atomic propositions into these three categories, and suggest which should be made askable and which should be assumable. Show what the resulting interaction will look like under your division.
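One simple way to prototype askables in plain Prolog (a sketch; AIPython's askable mechanism differs) is to prompt the user whenever an askable atom is needed, reusing the rule(Head, BodyList) encoding from the earlier sketches:

    rule(flow_d2, [wet_bath, unplugged_bath]).
    rule(wet_bath, [flow_shower]).
    rule(flow_shower, [on_t2, pressurized_p2]).
    rule(pressurized_p2, [on_t1, pressurized_p1]).
    rule(pressurized_p1, []).

    askable(on_t1).
    askable(on_t2).
    askable(unplugged_bath).

    % holds(G): prove G from the rules, asking the user about askables.
    holds(G) :-
        askable(G), !,
        format("Is ~w true? (yes./no.) ", [G]),
        read(yes).
    holds(G) :- rule(G, Body), holds_all(Body).

    holds_all([]).
    holds_all([G|Gs]) :- holds(G), holds_all(Gs).

    % ?- holds(flow_d2).  prompts for the taps and the bath plug.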
Exercise 5.13 This question explores how integrity constraints and consistency-based diagnosis can be used in a purchasing agent that interacts with various information sources on the web. The purchasing agent will ask a number of the information sources for facts. However, information sources are sometimes wrong. It is useful to be able to automatically determine which information sources may be wrong when a user gets conflicting information.
This question uses meaningless symbols such as a, b, c, but in a real domain there will be meaning associated with the symbols, such as a meaning "there is skiing in Hawaii" and z meaning "there is no skiing in Hawaii", or a meaning "butterflies do not eat anything" and z meaning "butterflies eat nectar". We will use meaningless symbols in this question because the computer does not have access to the meanings and must simply treat them as meaningless symbols.
Suppose the following information sources and associated information are provided.
Source s1 claims the following clauses are true:
Source s2 claims the following clauses are true:
Source s3 claims the following clause is true:
Source s4 claims the following clauses are true:
Source s5 claims the following clause is true:
You know that the following clauses are true:
Not every source can be believed, because together they produce a contradiction.
(a) Code the knowledge provided by the sources into AIPython using assumables. To use a clause provided by one of the sources, you must assume that the source is reliable (see the sketch after part (d)).
(b) Use the program to find the conflicts about what sources are reliable. (To find conflicts, you can just ask false.)
(c) Suppose you would like to assume that as few sources as possible are unreliable. Which single source, if it were unreliable, could account for a contradiction (assuming all other sources were reliable)?
(d) Which pairs of sources could account for a contradiction (assuming all other sources are reliable) such that no single one of them could account for the contradiction?
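The coding pattern for part (a) looks like the following; the clause bodies below are placeholders, not the sources' actual clauses, and reliable_s1 through reliable_s5 are the assumables:

    % Guard each clause a source claims with that source's reliability.
    a :- h, reliable_s1.         % if s1 claims  a <- h
    e :- d, reliable_s2.         % if s2 claims  e <- d
    % ...one guarded clause per claimed clause; the clauses you know to
    % be true are written unguarded.
    % A conflict is then a set of reliable_si atoms from which false
    % can be derived.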
Exercise 5.14 Suppose you have a job at a company that is building online teaching tools. Because you have taken an AI course, your boss wants to know your opinion on various options under consideration.
They are planning on building a tutoring system for teaching elementary physics (e.g., mechanics and electromagnetism). One of the things that the system must do is diagnose errors that a student may be making.
For each of the following, answer the explicit questions and use proper English. Answering parts not asked, or giving more than one answer when only one is asked for, will annoy the boss. The boss also does not like jargon, so please use straightforward English.
The boss has heard of consistency-based diagnosis and abductive diagnosis but wants to know what they involve in the context of building a tutoring system for teaching elementary physics.
(a) Explain what knowledge (about physics and about students) is required for consistency-based diagnosis.
(b) Explain what knowledge (about physics and about students) is required for abductive diagnosis.
(c) What is the main advantage of using abductive diagnosis over consistency-based diagnosis in this domain?
(d) What is the main advantage of consistency-based diagnosis over abductive diagnosis in this domain?
Exercise 5.15 Consider the bottom-up negation-as-failure proof procedure of Figure 5.11. Suppose we want to allow for incremental addition and deletion of clauses. How does the set of derived consequences change as a clause is added? How does it change if a clause is removed?
Exercise 5.16 Suppose you are implementing a bottom-up Horn clause explanation reasoner and you want to incrementally add clauses or assumables. When a clause is added, how are the minimal explanations affected? When an assumable is added, how are the minimal explanations affected?
Exercise 5.17 Figure 5.15 shows a simplified redundant communication network between an unmanned spacecraft (sc) and a ground control center (gc). There are two indirect high-bandwidth (high-gain) links that are relayed through satellites (s1, s2) to different ground antennae (a1, a2). Furthermore, there is a direct, low-bandwidth (low-gain) link between the ground control center's antenna (a3) and the spacecraft. The low-gain link is affected by atmospheric disturbances: it works if there are no disturbances (no_dist) and the spacecraft's low-gain transmitter and antenna a3 are ok. The high-gain links always work if the spacecraft's high-gain transmitter, the satellites' antennae, the satellites' transmitters, and the ground antennae a1 and a2 are ok.
To keep matters simple, consider only messages from the spacecraft going through these channels to the ground control center.
The following knowledge base formalizes the part of the communication network we are interested in:
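A plausible formalization in Prolog follows; every atom name here (alive_sc, ok_sc_lg, and so on) is this sketch's assumption:

    :- dynamic alive_sc/0, ok_sc_lg/0, ok_sc_hg/0, ok_a1/0, ok_a2/0,
               ok_a3/0, no_dist/0, ok_s1_ant/0, ok_s1_trans/0,
               ok_s2_ant/0, ok_s2_trans/0.

    % Low-gain path: spacecraft to antenna a3, needs no disturbances.
    signal_gc :- alive_sc, ok_sc_lg, ok_a3, no_dist.
    % High-gain paths: via satellite s1 to a1, or via s2 to a2.
    signal_gc :- alive_sc, ok_sc_hg, ok_s1_ant, ok_s1_trans, ok_a1.
    signal_gc :- alive_sc, ok_sc_hg, ok_s2_ant, ok_s2_trans, ok_a2.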
Ground control is worried because it has not received a signal from the spacecraft. It knows for sure that all ground antennae are ok (a1, a2, and a3) and that satellite s1's transmitter is ok. It is not sure about the state of the spacecraft, its transmitters, the satellites' antennae, s2's transmitter, or atmospheric disturbances.
(a) Specify a set of assumables and an integrity constraint that model the situation.
(b) Using the assumables and the integrity constraint from part (a), what is the set of minimal conflicts?
(c) What is the consistency-based diagnosis for the given situation? In other words, what are the possible combinations of violated assumptions that could account for why the control center cannot receive a signal from the spacecraft?
Exercise 5.18 Explain why NASA may want to use abduction rather than consistency-based diagnosis for the domain of Exercise 5.17.
Exercise 5.19 Suppose that an atmospheric disturbance could produce static or no signal in the low-bandwidth signal. To receive the static, antenna a3 and the spacecraft's low-bandwidth transmitter must be working. If a3 or the spacecraft's low-bandwidth transmitter is not working, or the spacecraft is dead, there is no signal. What rules and assumables must be added to the knowledge base of Exercise 5.17 so that you can explain the possible observations of a signal, static, or no signal? You may ignore the high-bandwidth links. You may invent any symbols you need.
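A hedged sketch of the kind of rules intended, to be added to the knowledge base sketched in Exercise 5.17; all atom names are invented, and the fault and disturbance atoms would be the assumables:

    % Low-gain link only. One way to model the disturbance is to split
    % it into two assumable modes, one producing static and one
    % blocking the signal entirely:
    get_signal :- alive_sc, ok_sc_lg, ok_a3, no_dist.
    static     :- alive_sc, ok_sc_lg, ok_a3, disturbance_static.
    no_signal  :- alive_sc, ok_sc_lg, ok_a3, disturbance_blocking.
    no_signal  :- dead_sc.
    no_signal  :- alive_sc, broken_sc_lg.
    no_signal  :- alive_sc, broken_a3.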