Artificial Intelligence: Foundations of Computational Agents, third edition, Cambridge University Press, 2023.
There are many kinship relationships you could imagine, such as mother, father, great-aunt, second-cousin-twice-removed, and natural-paternal-uncle. Some of these can be defined in terms of others, for example:
$$brother(X,Y)\leftarrow father(X,Z)\wedge natural\_paternal\_uncle(Y,Z).$$
$$sister(X,Y)\leftarrow parent(Z,X)\wedge parent(Z,Y)\wedge female(X)\wedge different(X,Y).$$
Give two quite different representations for kinship relationships based on different relations being primitive.
Consider representing the primitive kinship relationship using the relation
$$children(Mother,Father,List\_of\_children)$$
What advantages or disadvantages may this representation have compared to the two you designed above?
A travel site has a database representing information about hotels and feedback from users, using the relations:
$$hotel(Id,Name,City,Province,Country,Address)$$
$$reported\_clean(Hotel,RoomNumber,Cleanliness,day(Year,Month,Day))$$
Show how the following facts can be represented using triple notation, using vocabulary that makes sense:
hotel(h345, "The Beach Hotel", victoria, bc, canada, "300 Beach St").
reported_clean(h345, 127, clean, day(2013, 01, 25)).
Is it reasonable to represent the hotel name and address as strings? Explain.
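For orientation (not the required answer), here is one plausible way the two facts could come out in triple notation, written as Python tuples. The reified identifiers `co1` (the cleanliness observation) and `d1` (the day), and all property names, are invented vocabulary:

```python
# One plausible (subject, verb, object) rendering of the hotel fact.
hotel_triples = [
    ("h345", "type", "hotel"),
    ("h345", "name", "The Beach Hotel"),
    ("h345", "city", "victoria"),
    ("h345", "province", "bc"),
    ("h345", "country", "canada"),
    ("h345", "address", "300 Beach St"),
]

# reported_clean is a 4-ary relation, so it is reified as an individual
# co1; the day(2013,01,25) term is itself reified as d1.
report_triples = [
    ("co1", "type", "cleanliness_report"),
    ("co1", "hotel", "h345"),
    ("co1", "room_number", 127),
    ("co1", "cleanliness", "clean"),
    ("co1", "day", "d1"),
    ("d1", "year", 2013),
    ("d1", "month", 1),
    ("d1", "day_of_month", 25),
]
```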
Sam has proposed that any $n$-ary relation $P({X}_{1},{X}_{2},{X}_{3},\mathrm{\dots},{X}_{n})$ can be reexpressed as $n-1$ binary relations, namely,
$$P_1(X_1,X_2).$$
$$P_2(X_2,X_3).$$
$$P_3(X_3,X_4).$$
$$\vdots$$
$$P_{n-1}(X_{n-1},X_n).$$
Explain to Sam why this may not be such a good idea. What problems would arise if Sam tried to do this? Use an example to demonstrate where the problem arises.
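To see where the scheme breaks concretely, consider a hypothetical ternary relation $enrolled(Student,Course,Grade)$. The Python sketch below decomposes it Sam's way and then reconstructs it by joining on the shared middle argument; spurious tuples appear as soon as two tuples share that argument:

```python
# Hypothetical 3-ary relation: (student, course, grade).
enrolled = {("sam", "cs312", 87), ("chris", "cs312", 91)}

# Sam's proposed binary decomposition P1(X1,X2), P2(X2,X3).
p1 = {(s, c) for (s, c, g) in enrolled}   # (student, course)
p2 = {(c, g) for (s, c, g) in enrolled}   # (course, grade)

# Reconstructing the relation by joining on the course attaches
# both grades to both students.
reconstructed = {(s, c, g)
                 for (s, c) in p1
                 for (c2, g) in p2
                 if c == c2}
print(sorted(reconstructed))   # 4 tuples, not the original 2
```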
Write an ontology for the objects that often appear on your desk that may be useful for a robot that is meant to tidy your desk. Think of the categories that (a) the robot can perceive and (b) should be distinguished for the task.
Suppose a “beach resort” is a resort near a beach that the resort guests can use. The beach has to be near the sea or a lake, where swimming is permitted. A resort must have places to sleep and places to eat. Write a definition of beach resort in OWL.
A luxury hotel has multiple rooms to rent, each of which is comfortable and has a view. The hotel must also have more than one restaurant. There must be menu items for vegetarians and for meat eaters to eat in the restaurants.
Define a luxury hotel in OWL, based on this description. Make reasonable assumptions where the specification is ambiguous.
Suggest three other properties you would expect of a luxury hotel. For each, give the natural language definition and the OWL specification.
For the following, explain how each is categorized by the top-level ontology of Section 14.3.3:
your skin
the period at the end of the first sentence of this chapter
the excitement a child has before a vacation
the trip home from a vacation
a computer program
summer holidays
the ring of a telephone
the dust on your desk
the task of cleaning your office
the diagnosis of flu in a person
France
Based on this experience, suggest and justify a modification of the top-level ontology. Think about categories that are not exclusive, or other distinctions that seem to be fundamental.
Consider two ways to modify the depth-bound meta-interpreter of Figure 14.12:
The bound is on the number of instances of base-level atoms that appear in the proof. Why might this be better or worse than using the depth of the tree?
Allow different base-level atoms to incur different costs on the bound. For example, some atoms could have zero cost, and some atoms could incur a high cost. Give an example of where this might be useful. What conditions on the atom costs would guarantee that, when a positive bound is given, the proof procedure does not go into an infinite loop?
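Both variants can be sketched propositionally in Python (the exercise asks for changes to the AILog meta-interpreter of Figure 14.12; the rule base and costs below are invented for illustration). With every cost equal to 1, the bound counts base-level atom instances as in the first variant:

```python
# head: list of bodies, each body a list of atoms.
rules = {
    "light_on": [["power", "switch_ok"]],
    "power": [["breaker_ok"]],
    "breaker_ok": [[]],
    "switch_ok": [[]],
}
# Per-atom costs; some atoms are free, others consume the bound.
cost = {"light_on": 1, "power": 1, "breaker_ok": 0, "switch_ok": 0}

def bprove(goals, bound):
    """True if all goals are provable without the total cost of the
    atoms selected in the proof exceeding bound."""
    if not goals:
        return True
    atom, rest = goals[0], goals[1:]
    b2 = bound - cost.get(atom, 1)   # spend this atom's cost
    if b2 < 0:
        return False
    return any(bprove(body + rest, b2) for body in rules.get(atom, []))

print(bprove(["light_on"], 2))   # True: total cost spent is 2
print(bprove(["light_on"], 1))   # False: bound exhausted
```

For the termination question, one condition worth investigating: if every atom that can occur in a recursive cycle has strictly positive cost, each pass around a cycle strictly decreases the remaining bound.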
The program of Figure 14.14 allows duplicate delayed goals. Write a version of $dprove$ that returns minimal sets of delayed goals, in their simplest form.
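The post-processing step can be sketched in Python (the $dprove$ itself is the exercise): drop duplicate goals within a set, then drop any set that has a proper subset in the collection.

```python
def minimal_sets(delayed_sets):
    """Keep only the minimal sets of delayed goals: duplicates within
    and across sets are removed, and any set with a proper subset in
    the collection is discarded."""
    sets = set(frozenset(s) for s in delayed_sets)   # dedupe
    return [s for s in sets if not any(t < s for t in sets)]

result = minimal_sets([["a", "b"], ["a"], ["a", "a", "b", "c"], ["b", "c"]])
```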
Write a meta-interpreter that can ask multiple sources for information. Suppose that each source is identified by a universal resource identifier (URI). Suppose you have the predicates
$can\mathrm{\_}answer(Q,URI)$ is true if the source given by $URI$ can answer questions that unify with $Q$.
$reliability(URI,R)$ is true if $R$ is a numerical measure of the reliability of $URI$. You can assume that $R$ is in the range $[-100,100]$, where a higher number means the source is more reliable.
$askSite(URI,Q,Answer)$ is true when asking the source $URI$ the question $Q$ gives $Answer$, which is one of $\{yes,no,unknown\}$. Note that although $can\mathrm{\_}answer$ and $reliability$ can be simple databases, $askSite$ is a sophisticated program that accesses the web or asks a human a question.
Write a meta-interpreter that can utilize multiple information sources and returns a reliability of the answer, where the reliability of an answer is the minimum of the reliabilities of the information sources used. You must have some convention for when no external sources were used (e.g., a reliability of 200). You can only ask an information source a question that you have recorded that it can answer.
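The bookkeeping involved can be sketched in Python (the real exercise is an AILog meta-interpreter; the sources, reliabilities, and the `ask_site` stub standing in for $askSite$ are all invented):

```python
LOCAL = 200                    # assumed convention: no external source used

# Toy databases for can_answer and reliability.
can_answer = {"flight_delayed": ["http://airline.example"],
              "raining": ["http://weather.example"]}
reliability = {"http://airline.example": 90, "http://weather.example": 40}

def ask_site(uri, q):
    """Hypothetical stub for askSite; always answers yes here."""
    return "yes"

def ask(q, used):
    """Ask only sources recorded as able to answer q, noting which
    sources were consulted."""
    for uri in can_answer.get(q, []):
        if ask_site(uri, q) == "yes":
            used.append(uri)
            return True
    return False

def answer_reliability(used):
    """Reliability of an answer: minimum over the sources used,
    or LOCAL when no external source was consulted."""
    return min((reliability[u] for u in used), default=LOCAL)

used = []
ask("raining", used)
ask("flight_delayed", used)
print(answer_reliability(used))   # min(40, 90) = 40
```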
Write a meta-interpreter that allows for asking the users yes-or-no questions. Make sure it does not ask questions to which it already knows the answer.
Extend the ask-the-user meta-interpreter from the previous question to allow for questions that ask for instances. The system could ask the user questions like “for which $X$ is $P(X)$ true?”, where the user can give an instance or tell the system there are no more instances. One feature that is useful is to be able to interpret declarations that a predicate is functional and respond accordingly. For example, it might be better to ask for the height of a person than ask many yes-no questions about their height.
Write a program that takes in a tree produced from the meta-interpreter that builds proof trees as shown in Figure 14.13 and lets someone traverse the tree using how questions.
Write a meta-interpreter that allows both how and why questions. In particular, it should allow the user to ask how questions about a goal that has been proved after a why question. Explain how such a program may be useful.
Write a meta-interpreter for definite clauses that does iterative deepening search. Make sure that it only returns one answer for each proof and that the system says no whenever the depth-first searcher says no. This should be based on the depth-bounded meta-interpreter and the iterative deepening search algorithm.
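The control structure can be sketched propositionally in Python (the exercise asks for the AILog version): the bounded prover records whether it ever hit the bound, and the driver answers no only when a level fails without hitting it. The rule base is invented; `"loop"` shows a goal on which neither yes nor no is reached within the maximum depth.

```python
rules = {"a": [["b", "c"]], "b": [["a"], []], "c": [[]], "loop": [["loop"]]}

def bprove(goals, depth, hit):
    """Prove the list of goals with proof-tree depth at most depth;
    set hit[0] when a failure may be the bound's fault."""
    if not goals:
        return True
    atom, rest = goals[0], goals[1:]
    for body in rules.get(atom, []):
        if depth == 0:
            hit[0] = True            # bound reached: inconclusive failure
            return False
        if bprove(body, depth - 1, hit) and bprove(rest, depth, hit):
            return True              # one answer per proof
    return False

def idprove(goal, max_depth=20):
    """Iterative deepening: True, False (definite no, as the depth-first
    searcher would say), or None if undecided within max_depth."""
    for depth in range(max_depth + 1):
        hit = [False]
        if bprove([goal], depth, hit):
            return True
        if not hit[0]:               # failed without reaching the bound
            return False
    return None
```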
Build an iterative deepening abductive reasoning system to find minimal consistent sets of assumables to imply a goal. This can be based on the depth-bounded meta-interpreter of Figure 14.12, and the delaying meta-interpreter of Figure 14.14 to collect assumptions. The depth bound should be based on the number of assumables used in the proof. Assume that the assumables are all ground.
This should be done in two parts:
Find the minimal sets of assumables that imply some $g$ using iterative deepening on the number of assumables. When $g$ is $false$, this program finds the minimal conflicts.
Based on part (a), find the minimal explanations of $g$ by interleaving finding conflicts and finding minimal sets of assumables that imply $g$.
In this question, you will write a meta-interpreter for parametrized logic programs. These are logic programs that can use constants in arithmetic expressions. The values for the constants are given as part of the input to the meta-interpreter.
Assume that an environment is a list of terms of the form $val(Parm,Val)$, where $Val$ is the value associated with parameter $Parm$. Assume that each parameter only appears once in an environment. An example environment is $[val(a,7),val(b,5)]$.
In AILog, you can use <= as the base-level implication and & as the base-level conjunction. AILog has <= defined as an infix operator, and $number$ is a built-in predicate.
Write a predicate $lookup(Parm,Val,Env)$ that is true if parameter $Parm$ has value $Val$ in environment $Env$.
Write a predicate $eval(Exp,Val,Env)$ that is true if parametrized arithmetic expression $Exp$ evaluates to number $Val$ in environment $Env$. An expression is either
of the form $({E}_{1}+{E}_{2})$, $({E}_{1}*{E}_{2})$, $({E}_{1}/{E}_{2})$, $({E}_{1}-{E}_{2})$, where ${E}_{1}$ and ${E}_{2}$ are parameterized arithmetic expressions;
a number; or
a parameter.
Assume that the operators have their usual meaning, that numbers evaluate to themselves, and that parameters evaluate to their value in the environment. You can use the AILog predicate $is$, used infix as $N$ $is$ $E$, which is true if (unparametrized) expression $E$ evaluates to number $N$, and the built-in predicate $number(E)$, which is true if $E$ is a number.
Write a predicate $pprove(G,Env)$ that is true if goal $G$ is a logical consequence of the base-level KB, where parameters are interpreted in environment $Env$. An example interaction with AILog is
ailog: tell f(X,Y) <= Y is 2*a+b*X.
ailog: ask pprove(f(3,Z),[val(a,7),val(b,5)]).
Answer: pprove(f(3,29),[val(a,7),val(b,5)]).
[ok,more,how,help]: ok.
ailog: ask pprove(f(3,Z),[val(a,5),val(b,7)]).
Answer: pprove(f(3,31),[val(a,5),val(b,7)]).
[ok,more,how,help]: ok.
ailog: tell dsp(X,Y) <= Z is X*X*a & Y is Z*Z*b.
ailog: ask pprove(dsp(3,Z),[val(a,7),val(b,5)]).
Answer: pprove(dsp(3,19845),[val(a,7),val(b,5)]).
[ok,more,how,help]: ok.
ailog: ask pprove(dsp(3,Z),[val(a,5),val(b,7)]).
Answer: pprove(dsp(3,14175),[val(a,5),val(b,7)]).
[ok,more,how,help]: ok.
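In Python rather than AILog, parts (a) and (b) might look like the following sketch. Environments mirror the $val(Parm,Val)$ lists as pairs, and expressions are nested tuples, so `("+", ("*", 2, "a"), ("*", "b", 3))` stands for 2*a+b*3:

```python
def lookup(parm, env):
    """lookup(Parm, Val, Env): the value of parm in an environment
    given as a list of (parameter, value) pairs."""
    for p, v in env:
        if p == parm:
            return v
    raise KeyError(parm)

def eval_exp(exp, env):
    """eval(Exp, Val, Env): numbers evaluate to themselves, parameters
    to their value in env, compound expressions recursively."""
    if isinstance(exp, (int, float)):
        return exp
    if isinstance(exp, str):
        return lookup(exp, env)
    op, e1, e2 = exp
    v1, v2 = eval_exp(e1, env), eval_exp(e2, env)
    if op == "+":
        return v1 + v2
    if op == "-":
        return v1 - v2
    if op == "*":
        return v1 * v2
    if op == "/":
        return v1 / v2
    raise ValueError(op)

# Y is 2*a+b*X with X = 3, matching the f(3, Z) queries above:
print(eval_exp(("+", ("*", 2, "a"), ("*", "b", 3)), [("a", 7), ("b", 5)]))  # 29
print(eval_exp(("+", ("*", 2, "a"), ("*", "b", 3)), [("a", 5), ("b", 7)]))  # 31
```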