foundations of computational agents
Most modern mail systems use spam filters to filter out unwanted email. A common form of spam filtering analyzes the content of the message and the subject to determine the probability that the email should be classified as junk. What is junk to one person might not be junk to another; determining what is junk requires feedback from the user. A user can only give a limited amount of feedback, so data-hungry approaches, like deep learning, trained on the user’s feedback are not appropriate.
A standard way to implement content-based spam filtering is to use a naive Bayes classifier, where the classification is Boolean (spam or not spam). The features can include words, phrases, capitalization, and punctuation. Because the unnormalized probability is a product, taking logs turns it into a sum, so the decision can be seen as summing a weight for each feature whose prediction differs between spam and non-spam.
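The log-domain decision rule can be sketched as follows; the feature probabilities and prior here are illustrative assumptions, not values from the text:

```python
import math

# Naive Bayes spam sketch. Each feature is Boolean (present or absent).
# P(spam | features) is proportional to P(spam) * prod_i P(feature_i | spam),
# so in the log domain the decision is a sum of per-feature weights.

def log_weight(p_given_spam, p_given_ham):
    # Per-feature contribution to the log-odds sum.
    return math.log(p_given_spam) - math.log(p_given_ham)

def classify(features, prior_log_odds, weights):
    # Decision = prior log-odds + sum of weights for observed features.
    score = prior_log_odds + sum(weights[f] for f in features if f in weights)
    return "spam" if score > 0 else "not spam"

# Illustrative (assumed) conditional probabilities for a few features:
weights = {
    "FREE":    log_weight(0.30, 0.02),   # much more likely in spam
    "meeting": log_weight(0.01, 0.10),   # much more likely in non-spam
    "!!!":     log_weight(0.20, 0.03),
}
prior_log_odds = math.log(0.4 / 0.6)     # assumes P(spam) = 0.4

print(classify({"FREE", "!!!"}, prior_log_odds, weights))   # → spam
print(classify({"meeting"}, prior_log_odds, weights))       # → not spam
```

A message with no informative features falls back on the prior log-odds alone, which is why the choice of prior matters for users who give little feedback.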
The advantage of thinking about this in terms of probability is that it enables learning from user feedback. In particular, having an informed prior, and using, for example, beta or Dirichlet distributions, allows the model to work initially with no feedback, using the priors, and then to learn from personalized feedback. How quickly the model adapts to new data is controlled by the prior counts, the sum of which is an expected sample size. The expected sample size controls how much new experience is weighted compared to the prior model, and can be tuned for good user experience. One way to think about this is that the model uses data from everyone (in the prior) but weights each user's personalized data much more.
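The effect of the expected sample size can be seen in a small sketch of a beta-prior update for one feature probability; all counts below are hypothetical:

```python
# Beta-prior learning of P(word present | spam) from user feedback.
# Prior counts a (word present) and b (word absent) come from population
# data; a + b is the expected sample size, which controls how heavily the
# prior is weighted against the user's personal feedback.

def posterior_prob(prior_a, prior_b, user_present, user_absent):
    # Posterior mean of a beta(a, b) prior after observing the user's counts:
    # (a + k) / (a + b + n), where k = present count, n = total count.
    return (prior_a + user_present) / (
        prior_a + prior_b + user_present + user_absent)

# Same prior mean (0.3) but different expected sample sizes, updated with
# the same 10 items of user feedback (8 present, 2 absent):

# Expected sample size 10: adapts quickly toward the user's data.
fast = posterior_prob(3, 7, user_present=8, user_absent=2)     # 0.55

# Expected sample size 1000: stays close to the population prior.
slow = posterior_prob(300, 700, user_present=8, user_absent=2)  # ~0.305

print(fast, slow)
```

With ten items of feedback, the small-prior model has moved most of the way toward the user's data while the large-prior model has barely changed, which is the tuning knob the text describes.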
There is no perfect way to control spam when there is an adversary trying to thwart the spam filter. The issue of acting with adversaries (and other agents) is explored more in Chapter 14.