foundations of computational agents
Experience in studying and building intelligent agents has shown that an intelligent agent requires some internal representation of its belief state. Knowledge is the information about a domain that is used for acting in that domain. Knowledge can include general knowledge that can be applied to particular situations as well as beliefs about a specific state. A knowledge-based system is a system that uses knowledge about a domain to act or to solve problems.
Some philosophers have defined knowledge as justified true belief. AI researchers tend to use the terms knowledge and belief more interchangeably. Knowledge tends to mean general and persistent information that is taken to be true over a longer period of time. Belief tends to mean more transient information that is revised based on new information. Often knowledge and beliefs come with measures of how much they should be believed. In an AI system, knowledge is typically not necessarily true and is justified only as being useful. The distinction between knowledge and belief often becomes blurred when one module of an agent treats some information as true while another module is able to revise that information.
Figure 2.9 shows a refinement of Figure 1.3 for a knowledge-based agent. A knowledge base, KB, is built offline by a learner and is used online to determine the actions. This decomposition of an agent is orthogonal to the layered view of an agent; an intelligent agent requires both hierarchical organization and knowledge bases.
Online, when the agent is acting, the agent uses its knowledge base, its observations of the world, and its goals and abilities to choose what to do and to use its newly acquired information to update its knowledge base. The knowledge base is its long-term memory, where it keeps the knowledge that is needed to act in the future. This knowledge is learned from prior knowledge and from data and past experiences. The belief state is the short-term memory of the agent, which maintains the model of the current environment that is needed between time steps.
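The division between long-term and short-term memory can be sketched as a small Python class. This is a hypothetical minimal design, not an implementation from the text: the class name, the condition–action representation of the knowledge base, and the `act`/`learn` methods are all illustrative assumptions.

```python
# Minimal sketch of a knowledge-based agent (hypothetical API).

class KnowledgeBasedAgent:
    def __init__(self, knowledge_base):
        # Long-term memory: knowledge needed to act in the future,
        # built offline by a learner (here, a list of condition-action pairs).
        self.knowledge_base = knowledge_base
        # Short-term memory: the belief state, a model of the current
        # environment maintained between time steps.
        self.belief_state = {}

    def act(self, observation, goals):
        # Online: combine the knowledge base, the current observation,
        # and the goals to choose what to do.
        self.belief_state.update(observation)
        for condition, action in self.knowledge_base:
            if condition(self.belief_state, goals):
                return action
        return "do_nothing"

    def learn(self, experience):
        # Newly acquired information updates the long-term knowledge base.
        self.knowledge_base.append(experience)


# Illustrative use: one rule saying to clean when the lab is observed dirty.
kb = [(lambda beliefs, goals: beliefs.get("dirty", False), "clean")]
agent = KnowledgeBasedAgent(kb)
action = agent.act({"dirty": True}, goals={"keep_lab_clean"})
```

The point of the sketch is the separation of concerns: `knowledge_base` persists across episodes, while `belief_state` is transient and revised with each observation.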
Offline, before the agent has to act, the agent uses prior knowledge and past experiences (either its own past experiences or data it has been given) in what is called learning to build a knowledge base that is useful for acting online. Researchers have traditionally considered the case involving lots of data and very general, or even uninformative, prior knowledge in the field of statistics. The case of rich prior knowledge and little or no data from which to learn has been studied under the umbrella of expert systems. For most non-trivial domains, the agent needs whatever information is available, and so it requires both rich prior knowledge and observations from which to learn.
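This spectrum between data and prior knowledge can be illustrated with a simple Bayesian estimate, where prior knowledge is encoded as pseudo-counts and observed data updates them (a Beta-Binomial posterior mean). The numbers below are illustrative assumptions, not examples from the text.

```python
def posterior_mean(prior_successes, prior_failures,
                   data_successes, data_failures):
    """Combine prior pseudo-counts with observed counts
    (Beta-Binomial posterior mean)."""
    a = prior_successes + data_successes
    b = prior_failures + data_failures
    return a / (a + b)

# Statistics-style: near-uninformative prior (1, 1) plus lots of data;
# the estimate is driven by the 90/100 observed successes.
weak_prior_estimate = posterior_mean(1, 1, 90, 10)

# Expert-systems-style: a rich prior (90, 10) with almost no data;
# the estimate is driven by the prior knowledge.
strong_prior_estimate = posterior_mean(90, 10, 2, 0)
```

In both cases the estimate comes out near 0.9, but for opposite reasons: in the first the data dominates an uninformative prior, and in the second the prior dominates sparse data, mirroring the statistics-versus-expert-systems extremes described above.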
The goals and abilities are given offline, online, or both, depending on the agent. For example, a delivery robot could have general goals of keeping the lab clean and not damaging itself or other objects, but it could be given delivery goals at runtime. The online computation can be made more efficient if the knowledge base is tuned for the particular goals and abilities. This is often not possible when the goals and abilities are only available at runtime.
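One way tuning the knowledge base for known goals can pay off is by precomputing online work. The sketch below assumes a hypothetical lab map for the delivery robot; the locations and the breadth-first search are illustrative, not from the text.

```python
from collections import deque

# Hypothetical lab map as an adjacency list (illustrative locations).
LAB_MAP = {
    "office": ["hall"],
    "hall": ["office", "lab", "storage"],
    "lab": ["hall"],
    "storage": ["hall"],
}

def shortest_path(graph, start, goal):
    """Breadth-first search for a shortest path, computed at runtime."""
    frontier = deque([[start]])
    visited = {start}
    while frontier:
        path = frontier.popleft()
        if path[-1] == goal:
            return path
        for nxt in graph[path[-1]]:
            if nxt not in visited:
                visited.add(nxt)
                frontier.append(path + [nxt])
    return None

# Offline: if the delivery goals are known in advance, tune the knowledge
# base by precomputing all routes to them, so acting online is a lookup.
known_goals = ["lab", "storage"]
route_table = {(start, goal): shortest_path(LAB_MAP, start, goal)
               for start in LAB_MAP for goal in known_goals}

# Online: a precomputed goal costs a table lookup; a goal given only at
# runtime forces the (more expensive) search to run online instead.
route = route_table[("office", "lab")]
```

When the goals arrive only at runtime, `route_table` cannot be built in advance, and each delivery request must pay the cost of `shortest_path` online, which is the trade-off the paragraph describes.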
Figure 2.10 shows more detail of the interface between the agent and the world.