Try to construct an artificial example where a naive Bayes classifier can give a divide-by-zero error in test cases when using empirical frequencies as probabilities. Specify the network and the (non-empty) training examples. [Hint: You can do it with two features, say A and B, and a binary classification, say C, that has domain {0, 1}. Construct a data set where the empirical probabilities give P(a | C = 0) = 0 and P(b | C = 1) = 0.] What observation is inconsistent with the model?
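For concreteness, here is a minimal sketch in Python of one such construction (the names A, B, and C follow the hint; the unsmoothed estimator is deliberately naive). In this particular data set, observing A = true together with B = true gets probability 0 under both classes, so the normalization divides by zero:

```python
# Sketch: naive Bayes with unsmoothed empirical frequencies.
# Training examples are (A, B, C) triples; the only C=0 example has
# A=false and the only C=1 example has B=false, so empirically
# P(a | C=0) = 0 and P(b | C=1) = 0.
train = [
    (False, True, 0),
    (True, False, 1),
]

def empirical_naive_bayes(train, a, b):
    """P(C | A=a, B=b) using raw empirical frequencies (no pseudocounts)."""
    joint = {}
    for c in (0, 1):
        in_class = [(x, y) for (x, y, cl) in train if cl == c]
        p_c = len(in_class) / len(train)                      # P(C=c)
        p_a = sum(x == a for (x, _) in in_class) / len(in_class)  # P(A=a | C=c)
        p_b = sum(y == b for (_, y) in in_class) / len(in_class)  # P(B=b | C=c)
        joint[c] = p_c * p_a * p_b
    z = sum(joint.values())
    return {c: p / z for c, p in joint.items()}   # ZeroDivisionError if z == 0

print(empirical_naive_bayes(train, a=False, b=True))   # fine: class 0 is certain
print(empirical_naive_bayes(train, a=True, b=True))    # 0/0: divide-by-zero
```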
Consider designing a help system based on Example 10.5. Discuss how your implementation can handle the following issues and, if it cannot, whether that is a major problem. (A minimal sketch of such a system appears after this list.)
What should be the initial counts? Where might this information be obtained?
What if the most likely page is not the correct page?
What if users cannot find the correct page?
What if users mistakenly think they have the correct page?
Can some pages never be found?
What should it do with common words that are independent of the help page?
What about words that affect the meaning, such as “not”?
How should it handle words it has never seen before?
How can new help pages be incorporated?
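The following is a minimal sketch of such a help system (in Python, with illustrative identifiers that are not from the book). It takes the simplest possible position on a few of the issues above: Laplace-style pseudocounts serve as the initial counts, and words never seen before are ignored.

```python
class HelpSystem:
    """Naive Bayes help system in the spirit of Example 10.5.  The class
    variable is the help page H the user wants; each vocabulary word is a
    Boolean feature ("the user typed this word")."""

    def __init__(self, pages, vocabulary):
        self.pages = list(pages)
        self.vocab = set(vocabulary)
        self.page_count = {h: 0 for h in self.pages}   # counts for P(H)
        self.word_total = {h: 0 for h in self.pages}   # denominators for P(w | H)
        self.word_count = {h: {w: 0 for w in self.vocab} for h in self.pages}

    def rank_pages(self, words):
        """Pages ordered by P(H | words), most likely first.  The +1/+2
        pseudocounts are the simplest answer to "what should the initial
        counts be?"; never-before-seen words are simply ignored."""
        words = {w for w in words if w in self.vocab}
        total = sum(self.page_count.values())
        score = {}
        for h in self.pages:
            p = (self.page_count[h] + 1) / (total + len(self.pages))
            for w in self.vocab:
                p_w = (self.word_count[h][w] + 1) / (self.word_total[h] + 2)
                p *= p_w if w in words else (1 - p_w)
            score[h] = p
        return sorted(self.pages, key=score.get, reverse=True)

    def observe(self, words, page):
        """The user confirmed that `page` was the page they wanted."""
        self.page_count[page] += 1
        self.word_total[page] += 1
        for w in words:
            if w in self.vocab:
                self.word_count[page][w] += 1

hs = HelpSystem(pages=["install", "printing"],
                vocabulary={"cannot", "print", "install"})
hs.observe({"cannot", "print"}, "printing")
print(hs.rank_pages({"print"}))   # ['printing', 'install']
```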
Suppose you have designed a help system based on Example 10.5, and much of the underlying system that the help pages are about has changed. You are now very unsure about which help pages will be requested, but you may have a good model of which words will be used given the help page. How can the help system be changed to take this into account? [Hint: You may need different counts for P(H) and P(w_i | H).]
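Continuing the sketch above, one way to realize the hint is to keep the counts for P(H) separate from those for P(w_i | H), so that only the former are reset when the system changes (again, just a sketch):

```python
def reset_page_priors(help_sys):
    """After the underlying system changes: the model of which pages are
    requested is no longer trusted, but the model of which words are used
    given the page still is.  Because HelpSystem keeps separate counts for
    P(H) and P(w | H), only the page counts need to be zeroed."""
    for h in help_sys.pages:
        help_sys.page_count[h] = 0   # P(H) falls back to its pseudocounts
    # word_total and word_count are untouched, so P(w | H) is unchanged.
```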
Consider the unsupervised data of Figure 10.3.
How many different stable assignments of examples to classes does the k-means algorithm find when k = 2? [Hint: Try running the algorithm on the data with a number of different starting points, but also think about what assignments of examples to classes are stable.] Do not count permutations of the labels as different assignments.
Estimate how many different stable assignments there are when k = 3.
Estimate how many different stable assignments there are when k = 7.
Why might someone suggest that three is the natural number of classes in this example? Give a definition for “natural” number of classes, and use this data to justify the definition.
Suppose the k-means algorithm is run for an increasing sequence of values for k, and that it is run a number of times for each k to find the assignment with a global minimum error. Is it possible that a number of values of k exist for which the error plateaus and then has a large improvement (e.g., when the errors for k, k + 1, and k + 2 are about the same, but the error for k + 3 is much lower)? If so, give an example. If not, explain why.
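One way to experiment with these questions is sketched below. The data points are made-up stand-ins, since the values of Figure 10.3 are not reproduced here, and the returned partition is canonicalized so that permutations of the labels compare as equal:

```python
import random

def kmeans_partition(points, k, iters=100, seed=None):
    """Run plain k-means from a random start and return the stable
    assignment as a frozenset of frozensets of example indices, so that
    label permutations are not counted as different assignments."""
    rng = random.Random(seed)
    centers = rng.sample(points, k)
    assign = None
    for _ in range(iters):
        new_assign = [min(range(k),
                          key=lambda c: (p[0] - centers[c][0]) ** 2
                                      + (p[1] - centers[c][1]) ** 2)
                      for p in points]
        if new_assign == assign:    # assignment is stable: done
            break
        assign = new_assign
        for c in range(k):          # move each center to its class mean
            members = [p for p, a in zip(points, assign) if a == c]
            if members:
                centers[c] = (sum(x for x, _ in members) / len(members),
                              sum(y for _, y in members) / len(members))
    return frozenset(cluster for c in range(k)
                     if (cluster := frozenset(
                         i for i, a in enumerate(assign) if a == c)))

# Made-up stand-in for the Figure 10.3 data (three visible clumps).
data = [(0.7, 5.1), (1.5, 6.0), (2.1, 4.5), (2.4, 5.5), (3.1, 3.9),
        (5.2, 7.5), (6.1, 7.0), (6.5, 8.3), (7.1, 7.9),
        (7.5, 2.1), (8.0, 3.0), (8.9, 2.5), (9.1, 3.3)]

stable = {kmeans_partition(data, k=2, seed=s) for s in range(200)}
print("distinct stable assignments for k=2:", len(stable))
```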
To initialize the EM algorithm in Figure 10.6, consider two alternatives:
allow the E step to return a random distribution for each example the first time through the loop
initialize cc and fc to random values
By running the algorithm on some data sets, determine which, if any, of these alternatives is better in terms of log loss of the training data, as a function of the number of loops through the data set. Does it matter if cc and fc are not consistent with the semantics (counts that should be equal are not)?
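A sketch of such an experiment is below (plain Python, not the book's pseudocode; the cc and fc arrays follow the naming of Figure 10.6, and everything else is illustrative):

```python
import math
import random

def em_cluster(examples, k, loops=20, init="E", seed=0):
    """EM for naive Bayes clustering of Boolean-feature examples.
    init="E":      the first E step returns a random distribution per example.
    init="counts": cc and fc start at random values, which are in general
                   inconsistent (sums of fc entries need not match cc)."""
    rng = random.Random(seed)
    n = len(examples[0])
    # cc[c]: expected count of class c;
    # fc[i][v][c]: expected count of class-c examples with feature i = v.
    cc = [rng.uniform(0.5, 1.5) for _ in range(k)]
    fc = [[[rng.uniform(0.5, 1.5) for _ in range(k)] for _ in (0, 1)]
          for _ in range(n)]
    random_E = (init == "E")

    def posterior(x):                       # E step for one example
        ps = [cc[c] / sum(cc)
              * math.prod(fc[i][v][c] / cc[c] for i, v in enumerate(x))
              for c in range(k)]
        z = sum(ps)
        return [p / z for p in ps]

    for loop in range(loops):
        new_cc = [0.0] * k
        new_fc = [[[0.0] * k for _ in (0, 1)] for _ in range(n)]
        for x in examples:
            if random_E:
                d = [rng.random() for _ in range(k)]
                d = [p / sum(d) for p in d]
            else:
                d = posterior(x)
            for c in range(k):              # accumulate expected counts
                new_cc[c] += d[c]
                for i, v in enumerate(x):
                    new_fc[i][v][c] += d[c]
        cc, fc, random_E = new_cc, new_fc, False
        logloss = -sum(math.log2(sum(
            cc[c] / sum(cc)
            * math.prod(fc[i][v][c] / cc[c] for i, v in enumerate(x))
            for c in range(k))) for x in examples) / len(examples)
        print(f"init={init:6s} loop {loop:2d}: log loss {logloss:.4f}")

# Made-up data: 8 Boolean features with two latent kinds of examples.
gen = random.Random(1)
data = [tuple(int(gen.random() < (0.8 if (j < 4) == kind else 0.2))
              for j in range(8))
        for kind in (True, False) for _ in range(50)]
em_cluster(data, k=2, init="E")
em_cluster(data, k=2, init="counts")
```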
As outlined in Example 10.7, define a code for describing decision trees. Make sure that each code corresponds to a decision tree (for every sufficiently long sequence of bits, the initial segment of the sequence describes a unique decision tree) and that each decision tree has a code. How does this code translate into a prior distribution on trees? In particular, how much does the likelihood of introducing a new split have to increase to offset the reduction in prior probability of the split (assuming that smaller trees are easier to describe than larger trees in your code)?
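One possible such code, as a sketch: it assumes Boolean features and a Boolean classification, and that the number of features is a power of two so that every feature-index bit pattern is valid (which is what makes every sufficiently long bit sequence decodable):

```python
import math

def encode_tree(tree, n_features):
    """Prefix code for decision trees: a split is '1' + the feature index
    in fixed width; a leaf is '0' + the class bit.  `tree` is either
    ("leaf", class_bit) or ("split", i, left, right)."""
    fw = max(1, math.ceil(math.log2(n_features)))
    if tree[0] == "leaf":
        return "0" + str(tree[1])
    _, i, left, right = tree
    return ("1" + format(i, f"0{fw}b")
            + encode_tree(left, n_features) + encode_tree(right, n_features))

def decode_tree(bits, n_features, pos=0):
    """Decode the initial segment of `bits` into a unique tree; the code
    therefore induces the prior P(tree) = 2 ** -len(code)."""
    fw = max(1, math.ceil(math.log2(n_features)))
    if bits[pos] == "0":
        return ("leaf", int(bits[pos + 1])), pos + 2
    i = int(bits[pos + 1:pos + 1 + fw], 2)
    left, pos = decode_tree(bits, n_features, pos + 1 + fw)
    right, pos = decode_tree(bits, n_features, pos)
    return ("split", i, left, right), pos

t = ("split", 2, ("leaf", 1), ("split", 0, ("leaf", 0), ("leaf", 1)))
code = encode_tree(t, n_features=4)
print(code, decode_tree(code, n_features=4)[0] == t)
# With 4 features, replacing a leaf (2 bits) by a split over two leaves
# (1 + 2 + 4 = 7 bits) lengthens the code by 5 bits, so under the prior
# P(tree) = 2**-length the likelihood must grow by a factor of 2**5 to
# justify the new split.
```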