1 Introduction 1
1.1 What Is AI? 1
1.2 Approaches to Artificial Intelligence 6
1.3 Brief History of AI 8
1.4 Plan of the Book 11
1.5 Additional Readings and Discussion 14
Exercises 17
Ⅰ Reactive Machines 19
2 Stimulus-Response Agents 21
2.1 Perception and Action 21
2.1.1 Perception 24
2.1.2 Action 24
2.1.3 Boolean Algebra 25
2.1.4 Classes and Forms of Boolean Functions 26
2.2 Representing and Implementing Action Functions 27
2.2.1 Production Systems 27
2.2.2 Networks 29
2.2.3 The Subsumption Architecture 32
2.3 Additional Readings and Discussion 33
Exercises 34
3 Neural Networks 37
3.1 Introduction 37
3.2 Training Single TLUs 38
3.2.1 TLU Geometry 38
3.2.2 Augmented Vectors 39
3.2.3 Gradient Descent Methods 39
3.2.4 The Widrow-Hoff Procedure 41
3.2.5 The Generalized Delta Procedure 41
3.2.6 The Error-Correction Procedure 43
3.3 Neural Networks 44
3.3.1 Motivation 44
3.3.2 Notation 45
3.3.3 The Backpropagation Method 46
3.3.4 Computing Weight Changes in the Final Layer 48
3.3.5 Computing Changes to the Weights in Intermediate Layers 48
3.4 Generalization, Accuracy, and Overfitting 51
3.5 Additional Readings and Discussion 54
Exercises 55
4 Machine Evolution 59
4.1 Evolutionary Computation 59
4.2 Genetic Programming 60
4.2.1 Program Representation in GP 60
4.2.2 The GP Process 62
4.2.3 Evolving a Wall-Following Robot 65
4.3 Additional Readings and Discussion 69
Exercises 69
5 State Machines 71
5.1 Representing the Environment by Feature Vectors 71
5.2 Elman Networks 73
5.3 Iconic Representations 74
5.4 Blackboard Systems 77
5.5 Additional Readings and Discussion 80
Exercises 80
6 Robot Vision 85
6.1 Introduction 85
6.2 Steering an Automobile 86
6.3 Two Stages of Robot Vision 88
6.4 Image Processing 91
6.4.1 Averaging 91
6.4.2 Edge Enhancement 93
6.4.3 Combining Edge Enhancement with Averaging 96
6.4.4 Region Finding 97
6.4.5 Using Image Attributes Other Than Intensity 101
6.5 Scene Analysis 102
6.5.1 Interpreting Lines and Curves in the Image 103
6.5.2 Model-Based Vision 106
6.6 Stereo Vision and Depth Information 108
6.7 Additional Readings and Discussion 110
Exercises 111
Ⅱ Search in State Spaces 115
7 Agents That Plan 117
7.1 Memory Versus Computation 117
7.2 State-Space Graphs 118
7.3 Searching Explicit State Spaces 121
7.4 Feature-Based State Spaces 122
7.5 Graph Notation 124
7.6 Additional Readings and Discussion 125
Exercises 126
8 Uninformed Search 129
8.1 Formulating the State Space 129
8.2 Components of Implicit State-Space Graphs 130
8.3 Breadth-First Search 131
8.4 Depth-First or Backtracking Search 133
8.5 Iterative Deepening 135
8.6 Additional Readings and Discussion 136
Exercises 137
9 Heuristic Search 139
9.1 Using Evaluation Functions 139
9.2 A General Graph-Searching Algorithm 141
9.2.1 Algorithm A* 142
9.2.2 Admissibility of A* 145
9.2.3 The Consistency (or Monotone) Condition 150
9.2.4 Iterative-Deepening A* 153
9.2.5 Recursive Best-First Search 154
9.3 Heuristic Functions and Search Efficiency 155
9.4 Additional Readings and Discussion 160
Exercises 160
10 Planning, Acting, and Learning 163
10.1 The Sense/Plan/Act Cycle 163
10.2 Approximate Search 165
10.2.1 Island-Driven Search 166
10.2.2 Hierarchical Search 167
10.2.3 Limited-Horizon Search 169
10.2.4 Cycles 170
10.2.5 Building Reactive Procedures 170
10.3 Learning Heuristic Functions 172
10.3.1 Explicit Graphs 172
10.3.2 Implicit Graphs 173
10.4 Rewards Instead of Goals 175
10.5 Additional Readings and Discussion 177
Exercises 178
11 Alternative Search Formulations and Applications 181
11.1 Assignment Problems 181
11.2 Constructive Methods 183
11.3 Heuristic Repair 187
11.4 Function Optimization 189
Exercises 192
12 Adversarial Search 195
12.1 Two-Agent Games 195
12.2 The Minimax Procedure 197
12.3 The Alpha-Beta Procedure 202
12.4 The Search Efficiency of the Alpha-Beta Procedure 207
12.5 Other Important Matters 208
12.6 Games of Chance 208
12.7 Learning Evaluation Functions 210
12.8 Additional Readings and Discussion 212
Exercises 213
Ⅲ Knowledge Representation and Reasoning 215
13 The Propositional Calculus 217
13.1 Using Constraints on Feature Values 217
13.2 The Language 219
13.3 Rules of Inference 220
13.4 Definition of Proof 221
13.5 Semantics 222
13.5.1 Interpretations 222
13.5.2 The Propositional Truth Table 223
13.5.3 Satisfiability and Models 224
13.5.4 Validity 224
13.5.5 Equivalence 225
13.5.6 Entailment 225
13.6 Soundness and Completeness 226
13.7 The PSAT Problem 227
13.8 Other Important Topics 228
13.8.1 Language Distinctions 228
13.8.2 Metatheorems 228
13.8.3 Associative Laws 229
13.8.4 Distributive Laws 229
Exercises 229
14 Resolution in the Propositional Calculus 231
14.1 A New Rule of Inference: Resolution 231
14.1.1 Clauses as wffs 231
14.1.2 Resolution on Clauses 231
14.1.3 Soundness of Resolution 232
14.2 Converting Arbitrary wffs to Conjunctions of Clauses 232
14.3 Resolution Refutations 233
14.4 Resolution Refutation Search Strategies 235
14.4.1 Ordering Strategies 235
14.4.2 Refinement Strategies 236
14.5 Horn Clauses 237
Exercises 238
15 The Predicate Calculus 239
15.1 Motivation 239
15.2 The Language and Its Syntax 240
15.3 Semantics 241
15.3.1 Worlds 241
15.3.2 Interpretations 242
15.3.3 Models and Related Notions 243
15.3.4 Knowledge 244
15.4 Quantification 245
15.5 Semantics of Quantifiers 246
15.5.1 Universal Quantifiers 246
15.5.2 Existential Quantifiers 247
15.5.3 Useful Equivalences 247
15.5.4 Rules of Inference 247
15.6 Predicate Calculus as a Language for Representing Knowledge 248
15.6.1 Conceptualizations 248
15.6.2 Examples 248
15.7 Additional Readings and Discussion 250
Exercises 250
16 Resolution in the Predicate Calculus 253
16.1 Unification 253
16.2 Predicate-Calculus Resolution 256
16.3 Completeness and Soundness 257
16.4 Converting Arbitrary wffs to Clause Form 257
16.5 Using Resolution to Prove Theorems 260
16.6 Answer Extraction 261
16.7 The Equality Predicate 262
16.8 Additional Readings and Discussion 265
Exercises 265
17 Knowledge-Based Systems 269
17.1 Confronting the Real World 269
17.2 Reasoning Using Horn Clauses 270
17.3 Maintenance in Dynamic Knowledge Bases 275
17.4 Rule-Based Expert Systems 280
17.5 Rule Learning 286
17.5.1 Learning Propositional Calculus Rules 286
17.5.2 Learning First-Order Logic Rules 291
17.5.3 Explanation-Based Generalization 295
17.6 Additional Readings and Discussion 297
Exercises 298
18 Representing Commonsense Knowledge 301
18.1 The Commonsense World 301
18.1.1 What Is Commonsense Knowledge? 301
18.1.2 Difficulties in Representing Commonsense Knowledge 303
18.1.3 The Importance of Commonsense Knowledge 304
18.1.4 Research Areas 305
18.2 Time 306
18.3 Knowledge Representation by Networks 308
18.3.1 Taxonomic Knowledge 308
18.3.2 Semantic Networks 309
18.3.3 Nonmonotonic Reasoning in Semantic Networks 309
18.3.4 Frames 312
18.4 Additional Readings and Discussion 313
Exercises 314
19 Reasoning with Uncertain Information 317
19.1 Review of Probability Theory 317
19.1.1 Fundamental Ideas 317
19.1.2 Conditional Probabilities 320
19.2 Probabilistic Inference 323
19.2.1 A General Method 323
19.2.2 Conditional Independence 324
19.3 Bayes Networks 325
19.4 Patterns of Inference in Bayes Networks 328
19.5 Uncertain Evidence 329
19.6 D-Separation 330
19.7 Probabilistic Inference in Polytrees 332
19.7.1 Evidence Above 332
19.7.2 Evidence Below 334
19.7.3 Evidence Above and Below 336
19.7.4 A Numerical Example 336
19.8 Additional Readings and Discussion 338
Exercises 339
20 Learning and Acting with Bayes Nets 343
20.1 Learning Bayes Nets 343
20.1.1 Known Network Structure 343
20.1.2 Learning Network Structure 346
20.2 Probabilistic Inference and Action 351
20.2.1 The General Setting 351
20.2.2 An Extended Example 352
20.2.3 Generalizing the Example 356
20.3 Additional Readings and Discussion 358
Exercises 358
Ⅳ Planning Methods Based on Logic 361
21 The Situation Calculus 363
21.1 Reasoning about States and Actions 363
21.2 Some Difficulties 367
21.2.1 Frame Axioms 367
21.2.2 Qualifications 369
21.2.3 Ramifications 369
21.3 Generating Plans 369
21.4 Additional Readings and Discussion 370
Exercises 371
22 Planning 373
22.1 STRIPS Planning Systems 373
22.1.1 Describing States and Goals 373
22.1.2 Forward Search Methods 374
22.1.3 Recursive STRIPS 376
22.1.4 Plans with Run-Time Conditionals 379
22.1.5 The Sussman Anomaly 380
22.1.6 Backward Search Methods 381
22.2 Plan Spaces and Partial-Order Planning 385
22.3 Hierarchical Planning 393
22.3.1 ABSTRIPS 393
22.3.2 Combining Hierarchical and Partial-Order Planning 395
22.4 Learning Plans 396
22.5 Additional Readings and Discussion 398
Exercises 400
Ⅴ Communication and Integration 405
23 Multiple Agents 407
23.1 Interacting Agents 407
23.2 Models of Other Agents 408
23.2.1 Varieties of Models 408
23.2.2 Simulation Strategies 410
23.2.3 Simulated Databases 410
23.2.4 The Intentional Stance 411
23.3 A Modal Logic of Knowledge 412
23.3.1 Modal Operators 412
23.3.2 Knowledge Axioms 413
23.3.3 Reasoning about Other Agents' Knowledge 415
23.3.4 Predicting Actions of Other Agents 417
23.4 Additional Readings and Discussion 417
Exercises 418
24 Communication among Agents 421
24.1 Speech Acts 421
24.1.1 Planning Speech Acts 423
24.1.2 Implementing Speech Acts 423
24.2 Understanding Language Strings 425
24.2.1 Phrase-Structure Grammars 425
24.2.2 Semantic Analysis 428
24.2.3 Expanding the Grammar 432
24.3 Efficient Communication 435
24.3.1 Use of Context 435
24.3.2 Use of Knowledge to Resolve Ambiguities 436
24.4 Natural Language Processing 437
24.5 Additional Readings and Discussion 440
Exercises 440
25 Agent Architectures 443
25.1 Three-Level Architectures 444
25.2 Goal Arbitration 446
25.3 The Triple-Tower Architecture 448
25.4 Bootstrapping 449
25.5 Additional Readings and Discussion 450
Exercises 450
Bibliography 453
Index 493