Index
-
& (base-level and) 14.4.1
-
← (if) 13.3, 5.3, 5.1.1
-
↔ (equivalence) 5.1.1
-
⇐ (base-level if) 14.4.1
-
= (equals) 13.7
-
≠ (not equal to) 13.7.2
-
∧ (and) 5.3, 5.1.1
-
∨ (or) 14.4.3
-
⊢ (prove) 5.3.2
-
⊥ (bottom) 3.4
-
⊨ (entails) 13.3.1, 13.3.2, 5.1.2
-
(rewritten as) 13.6.1
-
0/1 error 7.2.1
-
A* search 3.6.1, 3.7.2
-
abduction 15.1.2, 5.7, Representation
-
abductive diagnosis 5.7
-
abilities 1.3
-
ABox 14.3.2
-
absolute error 7.2.1
-
absorbing state 9.5
-
abstractions 1.4.4
-
achievement goal 1.5.7, 6.1.4
-
action 2.2, 6.1
-
action constraint 6.4.1
-
action features 6.4.1
-
action function 3.2
-
action instance 6.5
-
action profile 11.2.1, 11.4
-
action replay 12.9.1
-
action variable 6.4
-
activation function 7.3.2, 7.5
-
activation layer 7.5
-
active learning Online and offline
-
active sensor 2.4.4
-
acts 1.1
-
actuator 2.1, 2.2
-
acyclic directed graph 3.3.1
-
acyclic knowledge base 5.4.4, 5.6
-
adaptive importance sampling 8.6.5
-
additive independence 9.1.2
-
additive utility 9.1.2
-
adjective 13.6.6
-
admissible heuristic Proposition 3.1, 3.6, 3.6.2
-
admissible search algorithm 3.6.1
-
aerodynamics 1.2.1
-
agent 1.1, 1.3, 2.1, 2.2
-
agent system 2.2
-
AI 1.1
-
AIspace 16.2
-
algebraic variable 4.1.1
-
algorithm
-
Allais Paradox 9.1.1
-
alpha-beta (-) pruning 11.3
-
alternative (ICL) 15.3.1
-
analysis 1.1
-
Analytical Engine 1.2
-
ancestor 5.4.4
-
annealing 4.7.3
-
annealing schedule 4.7.3
-
answer 13.3.3, 5.3.1, 5.3.2, 5.5.2
-
answer clause 5.3.2
-
any-conflict algorithm 4.7.3
-
anytime algorithm 1.5.4
-
aperiodic Markov chain 8.5.1
-
application of substitution 13.4.1
-
applications of AI 1.1, 1.6, 16.2
-
approximate inference Approximate inference, 8.6
-
approximately correct 7.8.2
-
approximately optimal solution Approximately optimal solution
-
arc 3.3.1
-
arc consistent 4.4, 4.9.1
-
argument 13.3, 5.6.1
-
Aristotelian definition 14.3
-
array A.2
-
Arrow’s impossibility theorem Proposition 11.1
-
artificial general intelligence 16.1
-
artificial intelligence 1.1
-
ask 5.3.1, Step 4
-
ask-the-user 5.4.2
-
askable atom 5.4.2
-
assertional knowledge base 14.3.2
-
assignment 4.1.1
-
assignment space 4.2
-
assistive technology 16.2
-
assumable 5.7, 5.5.2
-
assumption-based truth maintenance system 5.10, 5.5.4
-
asymptotic complexity 3.5.2
-
asynchronous value iteration 9.5.2
-
ATMS, see assumption-based truth maintenance system
-
atom 5.3, 5.1.1
-
atomic clause 13.3, 5.3
-
atomic proposition 5.3, 5.1.1
-
atomic symbol 13.3
-
attribute 14.2.1
-
auction 11.6
-
autonomous agent 16.2
-
autonomous delivery robot 1.6, 1.6.1
-
autonomous vehicles 16.2
-
average reward Average reward
-
axiom Step 4, Step 3, 5.1.2
-
axiom schema 13.7.1
-
axiomatizing the domain Step 4, Step 3
-
axioms of rationality 9.1
-
axioms of probability 8.1.2
-
back-propagation 7.5
-
background knowledge 1.4.1, 15.2.1
-
backtracking 3.5.2, 4.3
-
backward induction 11.3
-
backward search 3.8.2
-
bag-of-words 8.5.6
-
bagging 7.6.1
-
base language 14.4
-
base learner 7.6.2
-
base level 14.4
-
base-level algorithms 7.6.2
-
Basic Formal Ontology 14.3.3, 14.6
-
basic income 16.2
-
batched gradient descent 7.3.2
-
Bayes classifier 10.1.2
-
Bayes’ rule 10.1, Proposition 8.4
-
Bayesian information criteria (BIC) 10.1.4
-
Bayesian learning 10.4
-
Bayesian network, see belief network
-
Bayesian probability 8.1
-
beam search 4.8
-
behavior 1.1
-
belief 2.2.1
-
belief monitoring 8.5.3
-
belief network 10.3, 8.3, 8.3
-
belief state 1.3, 2.2.1, 2.4.2
-
belief state transition function 2.2.1, 2.3
-
best response 11.4
-
best-first search 3.6
-
beta distribution 10.4
-
BFO, see Basic Formal Ontology
-
bias 10.1, 7.4, Bias, 7.8.1
-
bias-free hypothesis space 7.8.1
-
bias–variance trade-off 7.4
-
bidirectional search 3.8.2
-
big data 7.4
-
bigram model 8.5.6
-
binary constraint 4.1.2
-
binary variable 4.1.1
-
binomial distribution Example 10.14
-
biology Biology
-
bit 7.2.1, 8.1.5
-
blame attribution problem 12.1
-
body
-
Boltzmann distribution 4.7.3, 4.8, 12.5
-
Boolean property 14.2.1
-
Boolean variable 4.1.1, 5.2
-
boosting 7.6.2
-
bot 2.1
-
bottom 3.4
-
bottom-up proof procedure 5.3.2
-
boundary of a process 14.3.3
-
bounded above zero 3.6.1
-
bounded rationality 1.5.4
-
branch-and-bound search 3.8.1
-
branching factor 3.3.1
-
breadth-first search 3.5.1
-
burn-in 8.6.7
-
candidate elimination learner 7.8.1
-
canonical form 9.1.2
-
canonical representation 13.7.1
-
cardinal feature 7.2.1
-
cardinal preference 1.5.7
-
case 7.7
-
case analysis 4.5
-
case-based reasoning 7.7
-
causal link 6.5
-
causal model 5.8
-
causal network 8.3.2
-
causal rule 15.1.1, 6.1.3
-
causal transduction 2.2.1
-
causality 5.8, 8.3.2
-
chain rule 7.5, Proposition 8.3
-
chance node 9.2.1
-
characteristic function relations, 14.2.3
-
children 3.3.1, 4.8
-
choice space 15.3.1
-
choose 3.4, 5.3.2
-
Church–Turing thesis 1.2
-
clarity principle 4.1.1, 8.3.2
-
Clark normal form 13.8
-
Clark’s completion 13.8, 5.6
-
class 10.1.2, 10.2, 14.3.2, 14.2.3, 7.3.1
-
classification Task, 7.3.1
-
classification tree 7.3.1
-
clause 5.2
-
closed list, see explored set
-
closed-world assumption 5.6
-
clustering 10.2
-
CNF, see conjunctive normal form
-
cognitive science 1.2.1
-
cold-start problem 15.2.2
-
collaborative filtering 15.2.2
-
combinatorial auction 16.2
-
command 12.4, 2.2, 2.2.1, 2.3
-
command function 2.2.1, 2.3
-
command trace 2.2.1
-
commonsense reasoning 1.4.3, Example 1.2
-
competitive agents 11.1
-
complements 9.1.2
-
complete
-
completeness of preferences Axiom 9.1
-
complex preference 1.5.7
-
complexity 3.5.2
-
compositional measure of belief 8.1.3
-
compound proposition 5.1.1
-
computational linguistics 13.6
-
computational agent 1.1
-
computational learning theory 7.8.2
-
computational limits dimension 1.5.4
-
computational sustainability 16.2
-
concept 14.3.2
-
conceptualization 13.2, 14.3
-
conditional probability table 8.4.1
-
conditional effect 6.1.2, 6.1.3
-
conditional entropy 8.1.5
-
conditional expected value 8.1.4
-
conditional odds 8.4.2
-
conditional probability 8.1.3
-
conditional probability distribution 8.1.3
-
conditionally independent 8.2
-
conditioning 8.4.1
-
Condorcet paradox Example 11.17
-
conflict 4.7.1, 5.5.2
-
conflicting variable 4.7.3
-
conjugate prior Example 10.2
-
conjunction 5.1.1
-
conjunctive normal form Example 5.24
-
consequence set 5.3.2
-
consistency-based diagnosis 5.5.3
-
consistent heuristic 3.7.2
-
consistent hypothesis 15.2.1, 7.8
-
constant 13.3
-
constrained optimization problem 4.9
-
constraint 4.1.2
-
constraint satisfaction problem 4.1.3
-
constraint network 4.4
-
constraint optimization problem 4.9
-
context-free grammar 13.6.1
-
context-specific independence 8.4.2
-
contingent attribute 14.2.3
-
continuant 2, 14.3.3
-
continuous time 8.5.5
-
continuous variable 4.1.1
-
controller 11.2.1, 2.2, 2.2.1
-
convolutional neural network 7.5
-
cooperate 12.10.2
-
cooperative agents 11.1
-
cooperative system Example 5.30
-
coordinate 12.10.2
-
coordination Example 11.12
-
cost 3.3.1, 4.9
-
CPT, see conditional probability table
-
credit assignment problem 12.1, Feedback
-
cross validation 7.4.3, 7.7
-
crossover 4.8
-
CSP, see constraint satisfaction problem
-
CSP solver
-
culture Culture
-
cumulative probability distribution 8.6.1
-
cumulative reward 9.5
-
cut 7.2.1
-
Cyc 14.3
-
cycle 3.3.1
-
cycle pruning 3.7.1
-
cyclic knowledge base 5.4.4
-
DAG (directed acyclic graph) 3.3.1
-
Datalog 13.3
-
datatype property 14.3.2
-
DBN, see dynamic belief network
-
DCG, see definite clause grammar
-
dead reckoning 2.4.1
-
debugging 5.4.4
-
decision
-
decision tree 9.2
-
decision variable 9.2
-
decision function 9.3.2
-
decision network 9.3.1
-
decision node 9.2.1
-
decision tree 10.3.1, 11.2.2, 7.3.1, 8.4.2
-
decision-theoretic planning 9.5.4
-
deduction 5.3.2, 5.7, Representation
-
deep learning 1.5.10, 7.5
-
deep reinforcement learning 12.9.1
-
Deep Space One 10.
-
default 5.4.2, 5.6.1
-
definite clause 13.3, 5.3
-
definite clause grammar (DCG) 13.6.1
-
definite clause resolution 5.3.2
-
delay 13.7.2, 14.4.6
-
delivery robot 1.6.1
-
DENDRAL 1.2
-
denote 13.3.1
-
dense time 2.2.1
-
dependent continuant 12, 14.3.3
-
depth bound 3.5.3
-
depth-bounded search 3.5.3
-
depth-bounded meta-interpreter Figure 14.12
-
depth-first branch-and-bound search 3.8.1
-
depth-first search 3.5.2
-
derivation 13.4.4, 5.3.2
-
derived 5.3.2
-
description logic 14.3.2
-
descriptive theory 9.1.3
-
design 1.1, 5.7
-
design space 1.5
-
design time computation 1.4.1, 2.4.3
-
desire 2.2.1
-
deterministic 1.5.6
-
diagnosis 5.7
-
diagnostic assistant 1.6, 1.6.2
-
dictator 11.6
-
dictionary 8.6.6
-
difference list 13.6.1
-
differentia 14.3
-
dimension
-
directed graph 3.3.1
-
directed acyclic graph 3.3.1
-
Dirichlet distribution 10.4
-
discount factor Discounted reward
-
discounted reward Discounted reward
-
discrete time 2.2.1
-
discrete variable 4.1.1, 8.1.1
-
discretization 8.1.1, 8.5.5
-
disjunction 14.4.3, 5.1.1, 5.5.1
-
disjunctive normal form Example 5.24
-
disposition 19, 14.3.3
-
DNF, see disjunctive normal form
-
do command 12.4, 15.1.1, 2.3
-
document 8.5.6
-
domain functions, 13.3.1, 14.2.1, 4.1.1, 7.2, 8.1.1
-
domain splitting 4.5
-
domain consistent 4.4
-
domain expert 2.4.3
-
domain ontology 14.3.2
-
dominant strategy 11.6, 11.4.1
-
don’t-care non-determinism 3.4
-
don’t-know non-determinism 3.4
-
dot product 8.5.3
-
DPLL 5.2.1
-
dynamic belief network 8.5.4
-
dynamic decision network 9.5.4
-
dynamic programming 3.8.3
-
dynamic relation 15.1.1
-
dynamics 1.5.6, 8.5.1, 9.5
-
economically efficient mechanism 11.6
-
effect 6.1.2, 6.1
-
effect constraint 6.4
-
effect uncertainty dimension 1.5.6
-
effectively computable function 1.2
-
effector 2.1
-
elimination ordering 8.4.1
-
EM, see expectation maximization
-
embodied agent 2.1
-
empirical frequency 7.2.3
-
empirical systems 1.1
-
empty body 13.3, 5.3
-
engineering goal 1.1
-
ensemble learning 7.6.2
-
entails 5.1.2
-
entity 1.5.3, 13.1, 14.3.3
-
entropy 7.2.1, 7.2.1, 8.1.5
-
environment 1.3
-
epistemology 1.2.1, 8.1
-
equal 13.7
-
equilibrium distribution 8.5.1
-
equivalence 5.1.1
-
ergodic Markov chain 8.5.1
-
error 7.2.1, 7.5
-
error layer 7.5
-
error of hypothesis 7.8.2
-
Euclidean distance 3.7.2, 7.7
-
evaluation function 11.3, 4.7.1
-
event calculus 15.1.2
-
evidence 8.1.3
-
evidential model 5.8
-
evolutionary algorithm 12.2
-
exact inference Exact inference
-
exclusive-or Example 7.12
-
existence uncertainty 15.3.1
-
existentially quantified variable 13.3.2
-
expanding a path 3.4
-
expectation 10.2.2
-
expectation maximization 10.2.2, 10.3.2
-
expected monetary value 9.1.1
-
expected utility 9.3.2
-
expected value 8.1.4
-
expected value of utility of a policy 9.3.2
-
experience 12.4
-
expert opinion 10.1.1
-
expert system 1.2, 1.3, 2.4.2
-
explained away Example 5.34, Example 8.15
-
explanation 5.7
-
explanation-based learning 14.4.6
-
exploit 12.5
-
explore 12.5
-
explore probability 12.5
-
explore–exploit dilemma 12.1
-
explored set 3.7.2
-
expression 13.3
-
extension 4.1.2, Example 4.7, 5.1
-
extensional set 14.2.3
-
extensive form 11.2.2
-
external knowledge source 2.4.4
-
f(p) 3.6.1
-
fact 13.3, 5.3
-
factor A.2, 8.4.1, 8.4.2
-
factored finite state machine 2.2.1
-
factored representation 2.2.1
-
factorization 8.3
-
failure 3.4, 5.6.2
-
fairness 1.4.1, 13.5.1, 3.4
-
false 13.3.1, 5.1.2
-
false-negative error Probable solution, 5.4.4, 7.2.2
-
false-positive error Probable solution, 5.4.4, 7.2.2
-
false-positive rate 7.2.2
-
fault 5.5.3
-
feature 1.5.3, Chapter 4, 7.2
-
feature engineering 12.9, 12.9.1
-
feature selection 7.4.2
-
feature-based representation of actions 6.1.3
-
feed-forward neural network 7.5
-
FIFO 3.5.1
-
filtering 2.4.1, 8.5.2, 8.5.3
-
finite failure 5.6.2
-
finite horizon 1.5.2
-
finite state controller 2.2.1
-
finite state machine 2.2.1
-
first-order predicate calculus 13.5
-
fixed point Fixed Point
-
flat structure 1.5.1
-
floundering goal 13.8.1
-
fluent 15.1.1
-
flying machines 1.2.1
-
foaf (friend-of-a-friend) Example 14.12
-
fold 7.4.3
-
for all (∀) 13.3.2
-
forward chaining 5.3.2
-
forward planner 6.2
-
forward sampling 8.6.2
-
forward search 3.8.2
-
found a solution 3.4
-
frame constraint 6.4
-
frame problem 15.5
-
frame rule 15.1.1, 6.1.3
-
framing effect 9.1.1
-
fringe 3.4
-
frontier 3.4
-
fully observable Markov decision process (MDP) 9.5
-
fully observable world 1.5.6
-
function 21, functions, 14.3.3
-
functional gradient boosting 7.6.2
-
fuzzy terms 2.3
-
gambling 8.1
-
game tree 11.2.2
-
general boundary 7.8.1
-
general game playing 11.8
-
general-to-specific search 15.2.1
-
generalization Measuring success
-
generalized additive independence 9.1.2
-
generalized answer clause 13.4.4
-
generalized arc consistency (GAC) 4.4
-
generate and test 4.2, 4.9.1
-
generic search algorithm 3.4
-
genetic algorithm 4.8
-
genus 14.3
-
Gibbard–Satterthwaite theorem 11.6
-
Gibbs distribution 12.5, 4.7.3, 4.8
-
Gibbs sampling 8.6.7
-
global optimum 4.7.1, 4.9.2
-
goal 1.5.7, 1.3, 2.2.1, 3.2, 3.3.1, 6.1.4, 6.2
-
goal node 3.3.1
-
goal constraint 6.4
-
goal state 3.2
-
Google 8.5.1, Example 8.37
-
gradient descent 4.9.2, 7.3.2
-
grammar 13.6.1
-
granularity 8.5.5
-
graph 3.3.1
-
greedy 1.5.2
-
greedy look-ahead 15.2.1
-
greedy ascent 4.7.1
-
greedy best-first search 3.6
-
greedy descent 4.7.1
-
greedy optimal split 7.3.1
-
ground expression 13.3
-
ground instance 13.4.1, 13.4.2
-
ground representation 14.4.1
-
ground truth 7.4
-
guaranteed bounds Approximate inference
-
h, see heuristic function
-
hard clustering 10.2
-
hard constraint Chapter 4, 4.1.2
-
head 13.3, 5.3
-
help system Example 10.5, Example 8.35
-
Herbrand interpretation 13.4.2
-
heuristic 3.4
-
heuristic depth-first search 3.6
-
heuristic function 3.6
-
heuristic knowledge 3.1
-
heuristic search 3.6
-
hidden Markov model (HMM) 8.5.2, 9.5
-
hidden property 15.2.2
-
hidden units 7.5
-
hidden variable 10.2.2, 10.3.2, 8.3.2
-
hierarchical control 2.3
-
hierarchical structure 1.5.1
-
hill climbing 4.7.1
-
history 1.3, 12.4, 2.2.1
-
HMM, see hidden Markov model
-
Hoeffding’s inequality 8.6.1
-
horizon 1.5.2, 2.3, 6.4
-
Horn clause 5.5.1
-
how question 5.4.3, 14.4.5
-
hybrid system 2.3
-
hyperplane 7.3.2
-
hypothesis space 7.8
-
i.i.d., see independent and identically distributed
-
identity uncertainty 15.3.1
-
immaterial entity 10, 14.3.3
-
imperfect information 9.4
-
imperfect-information game 11.2.2, 11.4
-
implication 5.1.1
-
importance sampling 8.6.4, 8.6.5
-
incoming arc 3.3.1
-
inconsistent 5.5.1
-
incorrect answer 5.4.4
-
incremental gradient descent 7.3.2
-
indefinite horizon 1.5.2, 9.5
-
independent and identically distributed (i.i.d.) 10.4
-
independent choice logic (ICL) 15.3.1
-
independent continuant 4, 14.3.3
-
independent variables 8.2
-
indicator variable 5.2, 7.2.1
-
indifferent 9.1.1
-
individual 1.5.3, 13.1, 13.3.1, 14.3.2, 4.8
-
individual–property–value triple 14.2.1
-
induction 5.7, Representation
-
inductive logic programming 15.2.1, 15.2.1
-
inference 5.3.2
-
infinite horizon 1.5.2, 9.5
-
influence diagram 9.3.1
-
information content 7.2.1, 8.1.5
-
information gain 7.2.1, 7.3.1, 8.1.5
-
information seeking actions 9.3
-
information set 11.2.2
-
information theory 7.2.1, 8.1.5, 10.1.4
-
inheritance 14.2.3
-
init (initial situation) 15.1.1
-
initial part of a path 3.3.1
-
initial-state constraint 6.4
-
input features 7.2
-
input layer 7.5
-
insects 1.4.1
-
instance 13.3.3, 13.4.1, 13.4.1
-
instance space 7.8
-
insurance Example 9.1
-
integrity constraint 5.5.1
-
intelligent action 1.1
-
intelligent tutoring system 1.6.3
-
intended interpretation Step 3, 13.2, 5.1.2
-
intension 4.1.2, Example 4.7, 5.1
-
intensional set 14.2.3
-
intention 2.2.1
-
interaction dimension 1.5.9
-
interoperate 14.1
-
interpolation Interpolation and extrapolation
-
interpretation 13.3.1, 5.1.2
-
intersection A.3
-
intervention 5.8, 8.3.2
-
inverse graph 3.8.2
-
inverse reinforcement learning 16.2
-
involve a variable 4.1.2
-
is (Prolog predicate) Example 13.37, 14.4.4
-
is_a Example 14.5
-
island 3.8.2
-
island-driven search 3.8.2
-
iterative soft-thresholding 7.4.2
-
iterative best improvement 4.7.1
-
iterative deepening 3.5.3
-
Java 13.2, 14.2.3
-
join A.3
-
joint probability distribution 8.1.1, 8.3
-
k-fold cross validation 7.4.3
-
k-means 10.2.1
-
kd-tree 7.7
-
kernel function 7.6
-
knowledge 1.4.2, 2.4.2
-
knowledge base 1.4.1, 1.4.2, 13.3, 2.4.2, 5.3, 5.1.2
-
knowledge base designer 5.1.2
-
knowledge engineer 2.4.3
-
knowledge graph 14.2.2
-
knowledge is given 1.5.5
-
knowledge is learned 1.5.5
-
knowledge level 1.4.4, 5.4.3
-
knowledge-based system 2.4.2
-
knowledge-level debugging 5.4.4
-
error 7.2.1
-
error 7.2.1
-
regularizer 7.4.2
-
regularizer 7.4.2
-
error 7.2.1
-
error 7.2.1
-
landmark 2.3
-
language 13.6.1
-
language bias 7.8.1
-
Laplace smoothing 10.1.1, 10.4, Example 7.18
-
latent tree model 10.1.2
-
latent variable, see hidden variable
-
law of large numbers 8.6.1
-
Laws of Robotics 16.2
-
layer (neural network) 7.5
-
leaf 3.3.1
-
learning Chapter 10—10.7, Chapter 12—12.13, 15.2—15.2.2, 2.4.2, Chapter 7—7.11
-
Bayesian 10.4
-
belief network 10.3—10.3.5
-
bias 10.1, 7.4, Bias, 7.8.1
-
case-based 7.7
-
collaborative filtering 15.2.2
-
decision tree 10.1.3, 7.3.1, 8.1.5
-
deep 7.5
-
expectation maximization (EM) 10.2.2, 10.3.2
-
inductive logic
programming 15.2.1
-
k-means 10.2.1
-
minimum description length 10.1.4
-
missing data 10.3.3
-
multiagent 12.10.2
-
naive Bayes classifier 10.1.2
-
neural network 7.5
-
PAC 10.4, 7.8.2
-
probabilistic classifier 10.1.2
-
probabilities 10.1, 10.3.1
-
reinforcement Chapter 12
-
relational 15.2
-
structure 10.3.4
-
supervised Feedback
-
to coordinate 12.10.2
-
to act Chapter 12
-
unsupervised 10.2, Feedback
-
version space 7.8.1, 7.8.1
-
learning dimension 1.5.5
-
learning rate 7.3.2
-
least fixed point Fixed Point
-
leave-one-out cross validation 7.4.3, 7.7
-
level of abstraction 1.4.4
-
lifelong learning Lifelong learning
-
LIFO 3.5.2
-
likelihood 8.1.3
-
likelihood of data 10.1, 7.2.1
-
likelihood ratio 8.4.2
-
likelihood weighting 8.6.4
-
linear classifier 7.3.2
-
linear function 7.3.2
-
linear layer (in neural network)
-
linear programming 4.9
-
linear regression 12.9.1, 7.3.2
-
linear rule for differentiation 7.5
-
linearly separable 7.3.2
-
Linnaean taxonomy 14.3
-
list Example 13.29
-
literal 5.2, 5.6
-
liveness 1.4.1
-
local optimum 4.7.1, 4.9.2
-
local search 4.7
-
localization 8.5.2
-
log loss 7.2.1, 7.3.2
-
log-likelihood 7.2.1
-
log-linear model 8.4.2
-
logic 13.2
-
logic program 5.10, 13.5
-
logical variable 13.3
-
logical connectives 5.1.1
-
logical consequence 13.3.1, 13.3.2, 5.1.2
-
logical form 13.6.6
-
logical formula 5.1.1
-
logically follows 5.1.2
-
logistic function 7.3.2
-
logistic regression 10.1.2, 10.3.1, 7.3.2, 8.4.2
-
long-term memory 2.4.2
-
loop pruning 3.7.1
-
lottery 9.1.1
-
lowest-cost-first search 3.5.4, 3.5.4
-
machine learning, see learning
-
maintenance goal 1.5.7, 6.1.4
-
MAP model 10.1
-
mapping functions
-
margin 7.6
-
Markov assumption 8.5.1
-
Markov blanket 8.3, 8.6.7
-
Markov chain 8.5.1
-
Markov chain Monte Carlo 8.6.7
-
Markov decision process 9.5, 9.5—9.5.5, 12.1
-
material entity 6, 14.3.3
-
matrix multiplication 8.5.3
-
matrix factorization Add hidden properties
-
maximization 10.2.2
-
maximum a posteriori probability 10.1
-
maximum entropy Maximum entropy or random worlds, 8.2
-
maximum likelihood model 10.1, 7.2.1
-
maximum-likelihood estimate 7.2.3
-
MCMC, see Markov chain Monte Carlo
-
MDL, see minimum description length
-
MDP, see Markov decision process
-
mean 2.
-
measure 8.1.1, 8.1.1
-
mechanism 11.1, 11.6
-
mechanism design 11.6
-
median 3.
-
memory 2.2.1
-
meta-interpreter 14.4, 14.4.2
-
meta-level 14.4
-
metalanguage 14.4
-
MGU, see most general unifier
-
min-factor elimination ordering 4.6
-
minimal conflict 5.5.2
-
minimal diagnosis 5.5.3
-
minimal explanation 5.7
-
minimal model 13.4.2, Fixed Point
-
minimax 11.3, 12.10.1
-
minimum deficiency elimination ordering 4.6
-
minimum description length (MDL) 10.1.4
-
missing at random 10.3.3
-
missing data 10.1.2, 10.3.3, Noise
-
mode 1.
-
model 1.4.4, 13.3.1, 2.4.1, 4.1.2, 5.1.2
-
model averaging 10.4
-
model complexity 7.4
-
model-based reinforcement learning
-
modular structure 1.5.1
-
modularity 1.5.1
-
modus ponens 5.3.2
-
money pump 9.1.1
-
monitoring 8.5.2, 8.5.3
-
monotone restriction 3.7.2
-
monotonic logic 5.6.1
-
moral graph 8.4.1
-
more general hypothesis 7.8.1
-
more specific hypothesis 7.8.1
-
most general unifier 13.4.1, 13.4.3
-
most improving step 4.7.3
-
multiagent decision network 11.2.3
-
multiagent reasoning 1.5.8, Chapter 11—11.9
-
multiple-path pruning 3.7.2
-
mutex constraint 6.4.1
-
MYCIN 1.2
-
myopic 1.5.2
-
myopically optimal split 7.3.1
-
n-gram 8.5.6
-
naive Bayes classifier 10.1.2, 10.2.2, Example 8.35
-
Nash equilibrium 11.4
-
natural kind 10.1.2, 14.2.3
-
natural language processing 8.5.6, 13.6
-
nature 1.5.8, 1.3, 11.1
-
nearest neighbors 7.7
-
negation 5.1.1, 5.5.1
-
negation as failure 5.6, 5.6.2, 13.8
-
negative examples 15.2.1
-
negative income tax 16.2
-
neighbor 3.3.1
-
Netflix Prize 15.2.2
-
neural network 10.3.1, 7.5—7.5
-
neuron 1.2, 7.5
-
no answer 5.3.1
-
no-forgetting agent 9.3.1
-
no-forgetting decision network 9.3.1
-
node 3.3.1
-
noise 7.4, Noise
-
noisy observation 8.5.2
-
noisy-or 8.4.2, Example 8.37
-
non-deterministic choice 3.4
-
non-deterministic procedure 3.4
-
non-ground representation 14.4.1
-
non-monotonic logic 5.6.1
-
non-parametric distribution 8.1.1
-
non-planning agent 1.5.2
-
non-serial dynamic programming 4.11
-
non-terminal symbol 13.6.1
-
nonlinear planning 6.5
-
norm 7.2.1
-
normal form game 11.2.1
-
normative theory 9.1.3
-
noun 13.6.6
-
NP 3.4
-
NP-complete 3.4
-
NP-hard 3.4
-
#NP (sharp-NP) 8.4
-
number of agents dimension 1.5.8
-
number uncertainty 15.3.1
-
object 8, 1.5.3, 13.1, 13.6.6, 14.2.1, 14.3.3
-
object language 14.4
-
object property 14.3.2
-
object-oriented languages 13.2, 14.2.3
-
objective function 4.9
-
observation 1.3, 1.4.1, 5.4.1, 8.1.3, 8.3.2
-
occurrent 23, 14.3.3
-
occurs check 13.5.1
-
Ockham’s razor 7.1
-
odds 8.4.2
-
off-policy learner 12.7
-
offline computation 1.5.9, 1.4.1, 2.4.2, 2.4.3
-
offline learning Online and offline
-
offline reasoning, see offline computation
-
omniscient agent 4.1.1
-
on-policy learner 12.7
-
one-point crossover 4.8
-
online computation 1.5.9, 1.4.1, 2.4.2, 2.4.4
-
online learning Online and offline
-
online reasoning, see online computation
-
ontology 14.1, 14.3, 2.4.3, 5.1.2, 5.4.1, 8.1
-
open-world assumption 5.6
-
optimal algorithm 3.8.3
-
optimal policy 3.8.3, 9.2.1, 9.3.2, 9.5.1
-
optimal solution Optimal solution, 3.2, 3.3.1, 3.5.4
-
optimality criterion 4.9
-
optimism in the face of uncertainty 12.5
-
optimization problem 4.9
-
oracle 3.4
-
orders of magnitude reasoning 2.3
-
ordinal preference 1.5.7
-
ordinal feature 7.2.1
-
organizations 1.1.1, 16.2
-
outcome 11.2.1, 9.1.1, 9.2
-
outgoing arc 3.3.1
-
outlier Example 7.4
-
overconfidence 7.4
-
overfitting 7.3.1, 7.4
-
OWL 14.3, 14.3.2
-
OWL functional-style syntax 14.3.2
-
PAC learning 7.8.2, 10.4
-
pair tuples
-
parameterized random variable 15.3.1
-
parametric distribution 8.1.1
-
paramodulation 13.7.1
-
parents 8.3
-
partial evaluation 14.4.6
-
partial observation 8.5.2
-
partial restart 4.7.5
-
partial-order planning 6.5
-
partially observable game 11.2.2, 11.4
-
partially observable Markov decision process 9.5, 9.5.5
-
partially observable world 1.5.6
-
particle 8.6.6
-
particle filtering 8.6.6
-
partition function 10.1, 10.4
-
passive sensor 2.4.4
-
past experience 1.3
-
path 3.3.1
-
path consistency 4.4
-
pattern database 3.6.2, 3.8.3
-
payoff matrix Example 11.1
-
percept 12.4, 2.2
-
percept function 2.3
-
percept stream 2.2.1
-
percept trace 2.2.1
-
perceptron 1.2, 7.3.2
-
perfect information 11.3
-
perfect rationality 1.5.4
-
perfect-information game 11.2.2, 12.10.1
-
periodic Markov chain 8.5.1
-
personalized recommendations 15.2.2
-
philosophy 1.2.1
-
physical symbol system 1.4.4
-
physical symbol system hypothesis 1.4.4, 7.5
-
pixel 2.2
-
plan 6.2
-
planner 6.2
-
planning 15.1.1, 6.2—6.8
-
planning horizon 1.5.2, 2.3
-
planning horizon dimension 1.5.2
-
plate 15.3.1
-
plate model 15.3.1
-
point estimate 7.2.1
-
policy 3.8.3, 9.2.1, 9.3.2, 9.5.1
-
policy hill-climbing 12.10.2
-
policy iteration 9.5.3
-
policy search 12.2
-
POMDP, see partially observable Markov decision process
-
population 4.8, 8.6.6
-
positive examples 15.2.1
-
possible action 6.3
-
possible world 15.3.1, 4.1.1, 8.1.1, 9.2, 9.3.2
-
posterior distribution 8.3.1, 8.4
-
posterior probability 8.1.3
-
pragmatics Pragmatics, 13.6.7
-
precision 7.2.2
-
precision-recall space 7.2.2
-
precondition 15.1.1, 6.1.2, 6.1
-
precondition constraint 6.4
-
predicate symbol 13.3
-
prediction error 7.2.1
-
predictor 7.2.2
-
preference 1.3, 4.9
-
preference elicitation 9.1.2
-
preference bias 7.8.1
-
preference dimension 1.5.7
-
preposition 13.6.6
-
primitive
-
primitive proposition 8.1.1
-
prior count 10.1.1, 7.4.1
-
prior knowledge 1.3
-
prior odds 8.4.2
-
prior probability 8.1.3, 10.1
-
prisoner’s dilemma 11.4
-
privacy 15.2.2
-
probabilistic independence 8.2
-
probabilistic bounds Approximate inference
-
probabilistic inference 8.4
-
probabilistic relational model 15.3.1
-
probabilistically depends on 8.3
-
probability 7.2.3, 8.1—8.9
-
probable solution Probable solution
-
probably approximately correct 7.8.2, 8.6.1
-
probably approximately correct learning 7.8.2
-
process 26, 1.5.2, 14.3.3
-
process boundary 28
-
projection A.3
-
Prolog 1.2, 13.3
-
proof 5.3.2
-
proof procedure 5.3.2
-
bottom-up 5.3.2
-
conflict, bottom-up
-
conflict, top-down
-
Datalog, top-down
-
definite clause, bottom-up
-
definite clause, top-down
-
negation as failure, bottom-up
-
negation-as-failure, top-down
-
top-down 5.3.2
-
proof tree 14.4.5, 5.4.3
-
prop Example 14.4
-
property 1.5.3, 13.1, 14.3.2, 14.2.1, 14.2.1, 14.3.3
-
property inheritance 14.2.3
-
proposal distribution 8.6.5
-
proposition 1.5.3, 13.1, 5.1.1
-
propositional satisfiability 5.2
-
propositional calculus 5.1
-
propositional definite clause 5.3
-
propositional definite clause resolution 5.3.2
-
prospect theory 9.1.1, 9.1.3
-
Protégé 14.3.2
-
proved 5.3.2
-
pruning belief network 8.4.1
-
pseudocount 10.1.1, 10.1.2, 10.4, 7.4.1, 7.4.2
-
psychology 1.2.1
-
punishment 9.5
-
pure literal 5.2.1
-
pure strategy 11.4
-
purposive agent 1.3, 2.1
-
Python 14.2.3
-
9.5.1
-
12.4, 9.5.1
-
Q-learning 12.4
-
Q-value 9.5.1
-
qualitative derivatives 2.3
-
qualitative reasoning 2.3
-
quality 15, 14.3.3
-
quantitative reasoning 2.3
-
query 13.3, 13.3, 5.3.1
-
query variable 8.3.1
-
querying the user 5.4.2
-
queue 3.5.1
-
random forest 7.6.1
-
random initialization 4.7
-
random restart 4.7, 4.7.2, 4.7.5, 10.2.1
-
random sampling 4.7
-
random variable 8.1.1
-
random walk 4.7, 4.7.2
-
random worlds Maximum entropy or random worlds, 8.2
-
range functions, 14.2.1
-
rational 9.1.1
-
rational agent 9.1
-
RDF 14.3
-
RDF–S 14.3
-
reachable state 15.1.1
-
reactive system 2.4.1
-
recall 7.2.2
-
receiver operating characteristic 7.2.2
-
recognition 5.7
-
recommender system 15.2.2
-
record linkage 15.3.1
-
rectified linear unit 7.5
-
recurrent neural network 7.5
-
reference point 9.1.3
-
reflection 14.4
-
regression Task, 7.3.2
-
regression planning 6.3
-
regression to the mean 7.4, 7.4.1
-
regression tree 7.6
-
regularization Regularize, 7.4.2
-
regularization parameter 7.4.2
-
regularizer 7.4.2
-
regulatory capture 16.2
-
reify 14.2.1, 15.1
-
reinforcement learning 12.1—12.13, Feedback
-
rejection sampling 8.6.3
-
relation relations, A.3, 1.5.3, 13.1, 4.1.2
-
relational learning 15.2
-
relational algebra A.3
-
relational database A.3
-
relational probability model 15.3.1
-
relational representations 1.5.3
-
relational uncertainty 15.3.1
-
remember 2.2.1, 2.3
-
representation 1.4.2
-
representation bias 7.4
-
representation dimension 1.5.3, 1.5.3
-
representation language 1.4.2
-
resampling 8.6.6
-
resolution 5.3.2
-
resolvent 5.3.2
-
resource 14.3
-
Resource Description Framework 14.3
-
resource goal 6.1.4
-
restriction bias 7.8.1
-
retry 3.4
-
return 12.4, 9.5
-
revelation principle 11.6
-
reward 9.5
-
reward function 9.5
-
rewrite rule 13.6.1, 13.7.1
-
ridge regression 7.4.2
-
risk averse 9.1.1
-
RL, see reinforcement learning
-
RoboCup 16.2
-
robot 1.3, 1.6.1, 2.1
-
ROC space 7.2.2
-
role 17, 14.3.3
-
root 3.3.1
-
root-mean-square (RMS) error 7.2.1
-
rule 13.3, 13.3, 5.3
-
rule of inference 5.3.2
-
run 11.2.2
-
run-time distribution 4.7.4
-
running average 12.3, 7.4.1
-
safety 1.4.1, 16.2
-
safety goal 6.1.4
-
sample average 8.6.1
-
sample complexity 10.4, 7.8.2
-
SARSA 12.7
-
SARSA with linear function approximation 12.9.1
-
satisfiable 5.7
-
satisficing solution Satisficing solution, 3.1
-
satisfy
-
scenario 5.7
-
scheme A.3
-
scientific goal 1.1
-
scope A.2, A.3, 4.1.2, 4.9, 8.4.2
-
search Chapter 3—3.11
-
search and score 10.3.4
-
search bias 7.4, 7.8.1
-
search strategy 3.5
-
second-order logic 13.5
-
second-price auction 11.6
-
select 3.4, 5.3.2
-
selection A.3
-
selector function 15.3.1
-
semantic interoperability 14.1, 2.4.4
-
semantic network 14.2.2
-
semantic web 14.3
-
semantics Figure 13.1, 5.1.2
-
semi-decidable logic 13.5
-
sensing uncertainty dimension 1.5.6
-
sensor 2.4.4, 2.1, 2.2
-
sensor fusion Example 8.32
-
sentence 13.6.1, 8.5.6
-
separable control problem 2.4.1
-
sequential Monte Carlo 8.6.6
-
sequential decision problem 9.3
-
sequential prisoner’s dilemma 11.4
-
set sets
-
set difference A.3
-
set-of-words 8.5.6
-
short-term memory 2.4.2
-
sigmoid function 7.5, 7.3.2, 8.4.2
-
sigmoid layer (in neural network)
-
simulated annealing 4.7.3
-
simultaneous action games 11.2.2
-
single agent 1.5.8
-
single decision 9.2
-
single-stage decision network 9.2.1
-
singularity 16.2
-
situation 15.1.1
-
situation calculus 15.1.1
-
SLD derivation 13.4.4, 5.3.2
-
SLD resolution 5.3.2, 13.4.4
-
Smalltalk 14.2.3
-
smart house 1.6, 1.6.5
-
smoothing 8.5.2, 8.5.3
-
SNLP 14.
-
social preference function 11.5
-
society 1.1.1
-
soft clustering 10.2
-
soft constraint Chapter 4, 4.9
-
soft-max 12.5
-
software agent 1.3
-
software engineering 1.4.3
-
solution 3.2, 3.3.1
-
sound 5.3.2
-
spatio-temporal region 30, 14.3.3
-
specialization operator 15.2.1
-
specific boundary 7.8.1
-
specific-to-general search 15.2.1
-
squashed linear function 7.3.2
-
stable assignment 10.2.1
-
stack 3.5.2
-
stage 1.5.2, 8.5.1
-
start node 3.3.1
-
start state 3.2
-
starvation 13.5.1, 3.4
-
state 1.5.3, 3.2, 8.5.1
-
state constraint 6.4
-
state features 6.4.1
-
state space 3.2
-
state variable 6.4
-
state-space graph 6.1.1, 6.2
-
state-space problem 3.2
-
static relation 15.1.1
-
stationary distribution 8.5.1, 8.6.7
-
stationary model 8.5.4, 8.5.1, 9.5
-
stationary policy 9.5.1
-
statistical relational AI 15.3—15.3.1
-
step function 7.3.2
-
step size 4.9.2
-
stimuli 1.3, 2.2
-
stochastic beam search 4.8
-
stochastic dynamics 1.5.6
-
stochastic gradient descent 7.3.2, 7.5
-
stochastic local search 4.7.2
-
stochastic policy 12.10.2
-
stochastic simulation 8.6
-
stochastic strategy 11.4
-
stopping state 9.5
-
strategic agent 11.1
-
strategic form game 11.2.1
-
strategy 11.2.1, 11.2.2, 11.4
-
strategy profile 11.2.2, 11.4
-
strictly dominated 11.4.1, 4.9.1
-
strictly preferred 9.1.1
-
STRIPS assumption 6.1.2
-
STRIPS representation 6.1.2
-
structure learning 10.3.4
-
subgame-perfect equilibrium 11.4
-
subgoal 5.3.2, 6.3
-
subject 13.6.6, 14.2.1
-
subjective probability 8.1
-
substitutes 9.1.2
-
substitution 13.4.1
-
successor 4.7
-
sufficient statistics 10.2.2
-
sum-of-squares error 10.2.1, 7.2.1, 7.3.2
-
supervised learning Task, Feedback, 7.2, 7.11, 10.3.1
-
support set 2., 11.4
-
support vector machine (SVM) 7.6
-
sustainable system 16.2
-
SVM, see support vector machine
-
symbol 1.4.4, 4.1.1
-
symbol level 1.4.4
-
symbol system 1.4.4
-
symptoms 1.6.2
-
syntax
-
Datalog 13.3
-
natural language Syntax
-
propositional definite clauses 5.3
-
synthesis 1.1
-
systematicity 14.
-
systems 1 and 2 2.3, 2.6
-
tabu search 4.7.3
-
tabu tenure 4.7.3
-
target features 7.2
-
TBox 14.3.2
-
TD error 12.3
-
tell 2.3, Step 3
-
temperature 12.5
-
temporal difference error 12.3
-
temporal region 32, 14.3.3
-
term 13.3, 13.5
-
terminal symbol 13.6.1
-
terminological knowledge base 14.3.2
-
test examples Measuring success, 7.2
-
theorem 5.3.2
-
there exists (∃) 13.3.2
-
thing 1.5.3, 13.1, 14.3.3
-
thought 1.1
-
threat Example 11.14
-
time 15.1, 2.2.1
-
time granularity 8.5.5
-
time-homogeneous model 8.5.1
-
tit-for-tat 11.4
-
TMS, see truth maintenance system
-
top-down proof procedure 5.3.2
-
top-level ontology 14.3.3
-
top-n 15.2.2
-
topic model 8.5.6
-
total assignment 4.1.1
-
total reward Total reward
-
tractable 1.4.2
-
trading agent 1.6, 1.6.4
-
training examples 7.8, Task, Measuring success, 7.2
-
transduction 2.2.1
-
transient goal 6.1.4
-
transitivity of preferences Axiom 9.2
-
tree 3.3.1
-
tree augmented naive Bayes (TAN) network 10.1.2
-
treewidth 4.6, 8.4.1
-
triangle inequality 3.7.2
-
trigram 8.5.6
-
triple tuples, 14.2.1, Example 15.12
-
triple representation 14.2.1
-
trolley problems 9.1.3
-
true relations, 13.3.1, 5.1.2
-
true-positive rate 7.2.2
-
trust 16.2
-
truth maintenance system 5.10
-
truthful 11.6
-
try (local search) 4.7
-
tuple tuples, A.3
-
Turing test 1.1.1
-
tutoring system 1.6
-
two-stage choice 4.7.3
-
two-step belief network 8.5.4
-
Example 14.5
-
type I error 7.2.2
-
type II error 7.2.2
-
UML 14.2.3
-
unary constraint 4.1.2
-
unary relations 13.1
-
unbiased learning algorithm 7.8.1
-
unconditionally independent 8.2
-
unfolded network 8.5.4, 9.5.4
-
unification 13.4.3
-
unifier 13.4.1
-
Uniform Resource Identifier 14.3, 14.3.1
-
unigram 8.5.6
-
uninformed search strategy 3.5, 3.6
-
union A.3
-
unique names assumption (UNA) 13.7.2
-
unit (neural network) 7.5
-
unit resolution 5.2.1
-
universal basic income 16.2
-
universally quantified variable 13.3.2
-
unnormalized probabilities 8.4.1
-
unsatisfiable 5.5.1
-
unsupervised learning 10.2, 10.2.2, Feedback
-
URI 14.3, 14.3.1
-
useful action 6.3
-
user 2.4.4, 4.1.1, 5.4.1
-
utility Optimal solution, Proposition 9.3
-
utility node 9.2.1
-
9.5.1, 9.5.1
-
9.5.1
-
validation set 7.4.3
-
value 9.5
-
value iteration 9.5.2
-
value of control 9.4
-
value of information 9.4
-
vanilla meta-interpreter Figure 14.9, 14.4.2
-
variable Chapter 4, 4.1.1
-
variable assignment 13.3.2
-
variable elimination 4.6, 8.4.1
-
belief network 8.4.1
-
CSP 4.6
-
decision network 9.3.3
-
monitoring 8.5.3
-
single-stage decision network 9.2.1
-
soft constraints 4.9.1
-
variance 7.4
-
variational inference Approximate inference
-
VCG mechanism 11.6
-
VE, see variable elimination
-
verb 13.6.6, 14.2.1
-
version space 7.8.1
-
violated 4.1.2
-
virtual body 2.3
-
walk 4.7
-
Watson 1.2, 13.10
-
weak learner 7.6.2
-
weakly dominated 4.9.1
-
weakly preferred 9.1.1
-
Web Ontology Language 14.3
-
web services 1.6.4
-
why question 5.4.3
-
whynot question 5.4.3, 5.4.4
-
Wikidata 14.6
-
win or learn fast (WoLF) 12.10.2
-
Winograd schema 1.1.1
-
word 13.3, 13.6.1, 8.5.6
-
world 1.3
-
World Wide Mind 16.2
-
worst-case error 7.2.1
-
wrapper 2.4.4
-
XML 14.3
-
YAGO 14.6
-
yes answer 5.3.1
-
Zeno’s paradox 3.5.4
-
zero-sum game 11.1, 11.3
-
ε-greedy exploration strategy 12.5
-
π (denotation of predicate symbols) 13.3.1
-
π (meaning of atoms) 5.1.2
-
φ (denotation of terms) 13.3.1, 13.5