foundations of computational agents
Some of these exercises can use AILog, a simple logical reasoning system that implements much of the reasoning discussed in this chapter. It is available from the book website (http://artint.info). Some of these exercises can also be done in Prolog or ProbLog.
Add to the situation calculus example (also available from the book web page) the ability to paint an object. In particular, add the predicate color(Obj, Col, Sit) that is true if object Obj has color Col in situation Sit.
The parcel starts off blue. Thus, we have an axiom: color(parcel, blue, init).
There is an action paint(Obj, Col) to paint object Obj with color Col. For this exercise, assume objects can only be painted red, and they can only be painted when the object and the robot are both at position o109. Colors accumulate on the object. There is nothing that undoes an object being a color; if you paint the parcel red, it is both red and blue – of course this is unrealistic, but it makes the problem simpler.
Axiomatize the predicate color and the action paint using the situation calculus.
You can do this without using more than three clauses (apart from the clause defining the color in the initial situation), where none of the clauses has more than two atomic symbols in the body. You do not require equality, inequality, or negation as failure. Test it in AILog.
Your output should look something like the following:
ailog: bound 12.
ailog: ask color(parcel,red,S).
Answer: color(parcel,red,
   do(paint(parcel,red),
   do(move(rob,storage,o109),
   do(pickup(rob,parcel),
   do(move(rob,o109,storage), init))))).
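As a sanity check on what the clauses should entail (a sketch of the intended semantics in Python, not the AILog solution the exercise asks for), the accumulating-color behavior can be mimicked with situations represented as nested do terms; the predicate and action names are taken from the output above, and the poss (precondition) check is omitted:

```python
# Sketch of the intended semantics of the color axioms.
# Situations are 'init' or ('do', action, situation); actions are tuples.

def color(obj, col, sit):
    """True if obj has color col in situation sit (colors accumulate)."""
    if sit == 'init':
        return obj == 'parcel' and col == 'blue'   # initial-situation clause
    _, action, prev = sit
    if action == ('paint', obj, col):              # painting gives the color
        return True
    return color(obj, col, prev)                   # frame clause: colors persist

# The plan from the AILog answer above:
s = 'init'
for a in [('move', 'rob', 'o109', 'storage'), ('pickup', 'rob', 'parcel'),
          ('move', 'rob', 'storage', 'o109'), ('paint', 'parcel', 'red')]:
    s = ('do', a, s)

print(color('parcel', 'red', s))   # parcel is now red
print(color('parcel', 'blue', s))  # and still blue
```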
In this exercise, you will add a more complicated paint action than in the previous exercise.
Suppose there is an object that denotes a can of paint of a given color.
Add an action that results in an object changing its color to the color of the paint in the can. (Unlike in the previous question, the object only has one color at a time.) The painting can only be carried out if the object is sitting at some position and an autonomous agent is at the same position, carrying the can of paint of the appropriate color.
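The single-color behavior this exercise asks for can be sketched in the same illustrative Python style (the preconditions about positions and the paint can are omitted; the names are not the required representation):

```python
# Sketch of the single-color variant: painting replaces the color.

def color(obj, col, sit):
    if sit == 'init':
        return obj == 'parcel' and col == 'blue'
    _, action, prev = sit
    if action[0] == 'paint' and action[1] == obj:
        return action[2] == col          # the new color, and only it, holds
    return color(obj, col, prev)         # unaffected objects keep their color

s = ('do', ('paint', 'parcel', 'red'), 'init')
print(color('parcel', 'red', s))   # now red...
print(color('parcel', 'blue', s))  # ...and no longer blue
```

Note the contrast with the previous exercise: here the frame clause is bypassed whenever the object is repainted, which is why negation or inequality is harder to avoid in a logical axiomatization.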
AILog performs depth-bounded search. You will notice that the processing time for the previous questions was slow, and that you required a depth bound close to the actual solution depth to make the search complete in a reasonable amount of time.
In this exercise, estimate how long an iterative deepening search would take to find a solution to a query that requires a long plan.
(Do not bother to try it – it will take too long to run.)
Estimate the smallest bound necessary to find a plan. [Hint: How many steps are needed to solve this problem? How does the number of steps relate to the required depth bound?] Justify your estimate.
Estimate the branching factor of the search tree. To do this, you should look at the time for a complete search at depth bound k versus a complete search at depth bound k + 1. You should justify your answer both experimentally (by running the program) and theoretically (by considering what the branching factor is). You do not have to run cases with a large run time to do this problem.
Based on your answers to parts (a) and (b), and the time you found for some run of the program for a small bound, estimate the time for a complete search of the search tree at a depth one less than that required to find a solution. Justify your solution.
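The kind of estimate asked for in parts (a)–(c) is exponential extrapolation. With made-up numbers (a measured 0.1 s for a complete search at bound 5, and an estimated branching factor of 6), the scaling can be sketched as:

```python
# Back-of-envelope scaling for depth-bounded search, with made-up numbers:
# suppose a complete search at bound 5 took 0.1 seconds and each extra
# level multiplies the work by a branching factor of about 6.

t_k, k, b = 0.1, 5, 6.0

def est_time(d):
    """Estimated seconds for a complete search at bound d."""
    return t_k * b ** (d - k)

for d in [8, 10, 12]:
    print(f"bound {d}: ~{est_time(d):.0f} s")
```

Your own measurements and branching-factor estimate go in place of the made-up constants.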
In this exercise, you will investigate using event calculus for the robot delivery domain.
Represent the actions of the robot delivery domain in the event calculus.
Represent each of the sequences of actions in Example 15.10 in the event calculus.
Show that event calculus can derive the appropriate goals from the sequence of actions given in part (b).
Suppose that, in the event calculus, there are two actions, say a and b, and a relation r that is initially, at time 0, false. Action a makes r true, and action b makes r false. Suppose that action a occurs at time 5, and action b occurs at time 10.
Represent this in the event calculus.
Is r true at time 3? Show the derivation.
Is r true at time 7? Show the derivation.
Is r true at time 13? Show the derivation.
Is r true at time 5? Explain.
Is r true at time 10? Explain.
Suggest an alternative axiomatization for r that has different behavior at times 5 and 10.
Argue for one axiomatization as being more sensible than the other.
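One standard axiomatization – r holds at time t if some action initiated it strictly before t and no terminating action occurred strictly in between – can be sketched in Python for this scenario (writing the two actions as a and b and the relation as r, as arbitrary labels). Note how the strict inequalities settle the endpoint questions one way; that is exactly the design choice the last two parts ask you to reconsider:

```python
# Event calculus sketch: a occurs at 5 and initiates r;
# b occurs at 10 and terminates r; r is false at time 0.

occurs = {('a', 5), ('b', 10)}          # (action, time) pairs
initiates = {'a': 'r'}
terminates = {'b': 'r'}

def clipped(t1, rel, t2):
    """Some terminating action occurs strictly between t1 and t2."""
    return any(terminates.get(act) == rel and t1 < t < t2
               for act, t in occurs)

def holds(rel, t):
    """rel holds at t if initiated strictly before t and not clipped since."""
    return any(initiates.get(act) == rel and t0 < t and not clipped(t0, rel, t)
               for act, t0 in occurs)

for t in [3, 5, 7, 10, 13]:
    print(t, holds('r', t))
```

Under these axioms r is false at 3 and 5, true at 7 and 10, and false at 13; flipping either strict inequality to non-strict gives the alternative behavior at the endpoints.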
Give some concrete specialization operators that can be used for top-down inductive logic programming. They should be defined so that making progress can be evaluated myopically. Explain under what circumstances the operators will make progress.
An alternative regularization for collaborative filtering is to minimize the sum, over all ⟨user, item, rating⟩ triples in the training data, of the squared error of the prediction plus λ times the sum of the squares of the parameters used in making that prediction.
How does this differ from the regularization of Formula 15.1? [Hint: Compare the regularization for the items or users with few ratings with that for those with many ratings.]
How does the code of Figure 15.5 need to be modified to implement this regularization?
Which works better on test data? [Hint: You will need to set the regularization parameter λ separately for each method; for each method, choose the value of λ by cross validation.]
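To make the hint concrete, the two objectives can be computed on toy data. This is illustrative Python with a biases-only predictor and made-up values, not the code of Figure 15.5:

```python
# Toy comparison of the two regularizers.
# D is a list of (user, item, rating) triples.

D = [('u1', 'i1', 4), ('u1', 'i2', 5), ('u2', 'i1', 3)]
ub = {'u1': 0.5, 'u2': -0.2}     # user biases (made-up values)
ib = {'i1': 0.1, 'i2': 0.3}      # item biases (made-up values)
lam = 1.0

def sq_err(u, i, r):
    return (ub[u] + ib[i] - r) ** 2   # simple biases-only predictor

# Per-parameter: each squared parameter is penalized once in total.
per_param = sum(sq_err(*t) for t in D) + lam * (
    sum(v * v for v in ub.values()) + sum(v * v for v in ib.values()))

# Per-rating: the penalty terms are added once for every rating.
per_rating = sum(sq_err(u, i, r) + lam * (ub[u] ** 2 + ib[i] ** 2)
                 for u, i, r in D)

print(per_param, per_rating)
# u1 and i1 each appear in two ratings, so their penalties are
# counted twice in per_rating but only once in per_param.
```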
A simple modification of the gradient descent for collaborative filtering can be used to predict the probability that a rating is greater than some threshold, for various values of the threshold. Modify the code so that it learns such a probability. [Hint: Make the prediction the sigmoid of the linear function, as in logistic regression.] Does this modification work better for the task of recommending the top-n movies for, say, n = 10, where the aim is to have the maximum number of movies rated 5 in the top-n list? Which threshold works best? What if the top-n list is judged by the number of movies rated 4 or 5?
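A minimal sketch of the hinted modification, assuming a biases-only linear function and made-up ratings (the real exercise modifies the book's code): the target becomes whether the rating meets the threshold, and the update is the standard logistic-regression gradient step.

```python
import math

def sigmoid(x):
    return 1 / (1 + math.exp(-x))

threshold = 5
D = [('u1', 'i1', 5), ('u1', 'i2', 5), ('u2', 'i1', 3), ('u2', 'i2', 4)]
ub = {'u1': 0.0, 'u2': 0.0}      # user biases
ib = {'i1': 0.0, 'i2': 0.0}      # item biases
lr = 0.5                         # learning rate

for _ in range(200):                        # gradient descent on log loss
    for u, i, r in D:
        y = 1.0 if r >= threshold else 0.0  # target: rating meets threshold
        p = sigmoid(ub[u] + ib[i])
        ub[u] -= lr * (p - y)               # logistic-regression gradient
        ib[i] -= lr * (p - y)

p = sigmoid(ub['u1'] + ib['i1'])
print(f"P(rating >= {threshold} | u1, i1) = {p:.2f}")
```

Ranking movies by this probability, rather than by the predicted numeric rating, is what changes the top-n list.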
Suppose Boolean parameterized random variables, say a(P) for person P and b(I) for item I, are parents of the Boolean parameterized random variable c(P, I). Suppose there are 3000 people and 200 items.
Draw this in plate notation.
How many random variables are in the grounding of this model?
How many numbers need to be specified for a tabular representation of this model? (Do not include any numbers that are functions of other specified numbers.)
Draw the grounding belief network for a small population of people and a small population of items (small enough that the network can be drawn in full).
What could be observed to make the two parent variables probabilistically dependent on each other given the observations?
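Whatever the original variable names were, the counting in parts (b) and (c) depends only on the structure. Assuming one Boolean variable per person, one per item, and one child per person–item pair (hypothetical names a, b, c), the counts work out as:

```python
# Counting the grounding of a plate model with a per-person Boolean a(P),
# a per-item Boolean b(I), and a child c(P, I); names are illustrative.

people, items = 3000, 200

ground_vars = people + items + people * items
print(ground_vars)            # 3000 + 200 + 600000 = 603200

# Numbers for a tabular representation: the parameterized (conditional)
# probabilities are shared across the whole grounding.
params = 1 + 1 + 2 * 2        # P(a), P(b), and P(c | a, b) for 4 parent combos
print(params)                 # 6
```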
For the representation of addition in Example 15.24, it was assumed that the observed values would all be digits. Change the representation so that an observed value can be a digit, a blank, or some other symbol. Give appropriate probabilities.
Suppose you have a relational probabilistic model for movie prediction, which represents the probability that a person likes a movie in terms of a property of the person and a property of the movie (such as its genre), where these two properties are a priori independent.
What is the treewidth of the ground belief network (after pruning irrelevant variables) for a query on this model, given the following observations?
Person | Movie          | Likes
Sam    | Harry Potter 6 | yes
Chris  | Harry Potter 6 | yes
For the same probabilistic model, with some number of movies, people, and ratings, what is the worst-case treewidth of the corresponding graph (after pruning irrelevant variables), where only ratings are observed? [Hint: The treewidth depends on the structure of the observations; think about how the observations can be structured to maximize the treewidth.]
For the same probabilistic model, with some number of movies, people, and ratings, what is the worst-case treewidth of the corresponding graph, where only some of the ratings but all of the genres are observed?