Introduction to Artificial Intelligence - Lecture slides | CS 1571, Exams of Computer Science

Material Type: Exam; Professor: Hauskrecht; Class: INTRO TO ARTIFICL INTELLIGENCE; Subject: Computer Science; University: University of Pittsburgh; Term: Fall 2007;

CS 1571 Introduction to AI
Review

Milos Hauskrecht
milos@cs.pitt.edu
5329 Sennott Square

Announcements

Final exam
• Date: December 10, 2007, 10:00-11:50am
• Location: 5129 Sennott Square
• Closed book, cumulative
Exam:
• The structure of the exam is similar to the midterm
• Theoretical problems (no programming)

Search

• Basic definition of the search problem
  – Search space, operators, initial state, goal condition
• Formulation of a problem:
  – We have some control over the size (complexity) of the search space
• Two types:
  – Path search vs. configuration search
• Expected skills:
  – Take a problem and formulate it as a search problem
  – Define the search space, states, operators, and the goal

Optimization

• Complex configuration searches rely on iterative algorithms
• Methods:
  – Hill climbing
  – Simulated annealing
  – Genetic algorithms
• Advantage of iterative algorithms?
  – Minimal memory: only the current configuration is kept
  – Useful for very large optimization problems
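The iterative scheme these methods share can be sketched as a generic hill climber; the toy objective and neighbor function below are invented purely for illustration:

```python
def hill_climb(initial, neighbors, score, max_steps=1000):
    """Generic hill climbing: keep only the current configuration in
    memory, move to the best-scoring neighbor, stop at a local optimum."""
    current = initial
    for _ in range(max_steps):
        best = max(neighbors(current), key=score, default=None)
        if best is None or score(best) <= score(current):
            return current  # no neighbor improves: local optimum
        current = best
    return current

# Toy problem: maximize f(x) = -(x - 7)^2 over the integers.
f = lambda x: -(x - 7) ** 2
step = lambda x: [x - 1, x + 1]

print(hill_climb(0, step, f))  # climbs to x = 7
```

Plain hill climbing stops at the first local optimum it reaches; that weakness is exactly what simulated annealing and genetic algorithms address by sometimes accepting worse configurations.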
Adversarial search

• Adversarial search (game playing)
  – Specifics of a game search, game problem formulation
  – Rational opponent
• Algorithms:
  – Minimax algorithm
    • Complexity is the bottleneck for large games
  – Alpha-beta pruning: prunes branches that cannot affect the players' decisions
  – Cutoff of the search tree and heuristics
• Expected skills:
  – Minimax and alpha-beta pruning
  – Design of a heuristic evaluation function

KR and logic

• Knowledge representation:
  – Syntax (how sentences are built), semantics (meaning of sentences), computational aspect (how sentences are manipulated)
• Logic:
  – A formal language for expressing knowledge and ways of reasoning
  – Three components:
    • A set of sentences
    • A set of interpretations
    • The valuation (meaning) function

Propositional logic

• A language for symbolic reasoning
• Language:
  – Syntax, semantics
• Satisfiability of a sentence: there is at least one interpretation under which the sentence evaluates to True
• Entailment (KB |= α): α is true in all worlds in which KB is true
• Inference procedure:
  – Soundness: if KB ⊢ α then KB |= α
  – Completeness: if KB |= α then KB ⊢ α
• Logical inference problem:
  – Does KB entail the sentence α?
• The logical inference problem for propositional logic is decidable.
  – A procedure (program) that stops in finite time exists
• Approaches:
  – Truth-table approach
  – Inference-rule approach
  – Resolution refutation: KB |= α if and only if (KB ∧ ¬α) is unsatisfiable
• Normal forms: DNF, CNF, Horn NF (and conversions between them)

First-order logic

• Deficiencies of propositional logic motivate first-order logic (FOL)
• FOL allows us to represent objects, their properties, relations, and statements about them
  – Variables, predicates, functions, quantifiers
  – Syntax and semantics of sentences in FOL
• Expected skills:
  – Translation of English sentences to FOL

Knowledge-based systems based on FOL

• Production systems
  – What is the difference from a KB in Horn normal form?
  – The conclusions of the rules are actions:
    • ADD a predicate
    • DELETE a predicate
    • MODIFY a predicate
    • PRINT
    • ASK
• First-order logic is monotonic
  – What does that mean?
  – Are production systems monotonic? Why or why not?
  – What is conflict resolution?

Planning

• Find a sequence of actions that leads to a goal
  – Much like path search, but for very large domains
  – Need to represent the dynamics of the world
• Two basic approaches to representing the planning problem:
  – Situation calculus
    • Explicitly represents situations (extends FOL)
    • Solving: theorem proving
  – STRIPS
    • Focuses on changes only
    • Solving: search (goal progression, goal regression)
• Frame problem

STRIPS planning

Operator schema
• Move(x,y,z) ..... moves object x from y to z
• Definition: ?
  – 3 components: precondition list, add list, delete list
• How does the backward search work?
  – It starts from the goal condition and projects the goals backward using the operators (goal regression)
• What is the divide-and-conquer approach to solving the problem?
  – Decompose the problem into a set of simpler subproblems, solve the subproblems, and merge their solutions
• Do the linear planners support the decomposition of the planning problem?
  – No. The full sequence is always built.

Sussman's anomaly

• Is the solution of the planning problem decomposable along goals?
• [Figure: blocks-world initial state and goal state; the goal is On(A,B) ∧ On(B,C)]

Uncertainty

• Bayes rule
• Used often for diagnostic inferences:
  – From effects to causes, e.g.
    P(device = normal | sensor reading = high)
• The probabilities we are given typically point the opposite way:
  – From causes to effects
• Bayes rule: P(A|B) = P(B|A) P(A) / P(B)

Full joint probability distribution

• The distribution over all variables defining the problem
• Two important things to remember:
  – Any probabilistic query can be computed from the full joint distribution
  – The full joint distribution can be expressed as a product of conditionals via the chain rule

Bayesian belief networks

• The full joint distribution over all random variables defining the domain can be very large
• Issues:
  – Complexity of the model
  – Complexity of inferences
  – Acquisition cost
• Solution: Bayesian belief networks (BBNs)
• BBNs build upon conditional independence relations:

  P(A,B | C) = P(A|C) P(B|C)

• Two components of BBNs:
  – Structure (a directed acyclic graph)
  – Parameters (conditional probability distributions)
• Full joint probability distribution for the BBN:
  – The product of local (variable given its parents) conditionals:

  P(X1, X2, ..., Xn) = ∏_{i=1..n} P(Xi | pa(Xi))

• Benefit for the representation of the full joint distribution?
  – We need a smaller number of parameters to define the full joint

Parameter complexity problem

Alarm example: 5 binary (True, False) variables: Burglary, Earthquake, Alarm, JohnCalls, MaryCalls
• Number of entries in the full joint: 2^5 = 32
  – One parameter is for free (the entries sum to 1): 2^5 − 1 = 31
• Number of entries in the BBN: 2 + 2 + 2(2^2) + 2(2) + 2(2) = 20
  – One parameter in every conditional is for free: ?
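These counts are easy to verify mechanically. A short sketch (the parent lists mirror the alarm network from the slides; every node is binary):

```python
# A node with k binary parents needs a CPT with 2^k rows of 2 entries
# (2^k free parameters, since each row sums to 1).
parents = {
    "Burglary": [], "Earthquake": [],
    "Alarm": ["Burglary", "Earthquake"],
    "JohnCalls": ["Alarm"], "MaryCalls": ["Alarm"],
}

entries = sum(2 ** len(p) * 2 for p in parents.values())
free = sum(2 ** len(p) for p in parents.values())

print(entries)                # 20 table entries in the BBN
print(free)                   # 10 free parameters
print(2 ** len(parents) - 1)  # 31 free parameters in the full joint
```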
Bayesian belief network: conditional probability tables

P(B): pB  (1 − pB)
P(E): pE  (1 − pE)

P(A | B,E):
  B  E
  T  T : pA|T,T  (1 − pA|T,T)
  T  F : pA|T,F  (1 − pA|T,F)
  F  T : pA|F,T  (1 − pA|F,T)
  F  F : pA|F,F  (1 − pA|F,F)

P(J | A):
  A = T : pJ|T  (1 − pJ|T)
  A = F : pJ|F  (1 − pJ|F)

P(M | A):
  A = T : pM|T  (1 − pM|T)
  A = F : pM|F  (1 − pM|F)

• Number of free parameters: 1 (B) + 1 (E) + 4 (A|B,E) + 2 (J|A) + 2 (M|A) = 10

Parameter complexity problem

• In the BBN the full joint distribution is defined as:

  P(X1, X2, ..., Xn) = ∏_{i=1..n} P(Xi | pa(Xi))

• What did we save?
  – Full joint: 2^5 = 32 entries; one parameter is for free: 2^5 − 1 = 31
  – BBN: 2 + 2 + 2(2^2) + 2(2) + 2(2) = 20 entries; one parameter in every conditional is for free: 1 + 1 + 2^2 + 2 + 2 = 10

Bayesian belief networks

Advantage of BBNs for inferences:
• A smart way to do inferences:
  – Interleave sums and products
  – Based on the distributive law: Σ_x a·f(x) = a·Σ_x f(x)

Inference in Bayesian networks

Computing P(J = T). Approach 1: the blind approach.
• Sum out all un-instantiated variables from the full joint:

  P(J = T) = Σ_{b∈{T,F}} Σ_{e∈{T,F}} Σ_{a∈{T,F}} Σ_{m∈{T,F}} P(B=b, E=e, A=a, J=T, M=m)

• Express the joint distribution as a product of conditionals:

  P(J = T) = Σ_b Σ_e Σ_a Σ_m P(J=T | A=a) P(M=m | A=a) P(A=a | B=b, E=e) P(B=b) P(E=e)

Computational cost:
• Number of additions: 15
• Number of products: ?
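The blind approach can be enumerated directly as a sanity check. The CPT numbers below are invented for the sketch (the slides leave them symbolic); only the network structure and the product-of-conditionals form come from the slides:

```python
from itertools import product

P_B = 0.001                      # P(Burglary = T), illustrative value
P_E = 0.002                      # P(Earthquake = T), illustrative value
P_A = {(True, True): 0.95, (True, False): 0.94,
       (False, True): 0.29, (False, False): 0.001}   # P(Alarm = T | B, E)
P_J = {True: 0.90, False: 0.05}  # P(JohnCalls = T | A)
P_M = {True: 0.70, False: 0.01}  # P(MaryCalls = T | A)

def pr(value, p_true):
    """Probability of a boolean value, given P(var = True)."""
    return p_true if value else 1.0 - p_true

# Blind approach: sum out b, e, a, m with J fixed to True, using the
# product-of-conditionals form of the full joint.
p_j_true = sum(
    P_J[a] * pr(m, P_M[a]) * pr(a, P_A[(b, e)]) * pr(b, P_B) * pr(e, P_E)
    for b, e, a, m in product([True, False], repeat=4)
)
print(round(p_j_true, 4))  # 0.0521 with these illustrative numbers
```

The sum runs over all 2^4 = 16 joint assignments of the un-instantiated variables, which is exactly where the 15 additions come from.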
Decision-making in the presence of uncertainty

• Decision tree:
  – Decision nodes (where choices are made)
  – Chance nodes (reflect stochastic outcomes)
  – Outcome (value) nodes (the value of the end situation)
• Rational choice (for a risk-neutral decision maker):
  – The decision maker tries to optimize the expected value
• Utility theory:
  – Utilities express preferences in terms of numeric quantities
  – Monetary values may not be equal to utility values
  – Choices are based on expected utility (and its maximization)
• Value of information
  – Additional information (via a test) can improve the expected utility of the decision

Decision making

• Monetary outcomes; goal: optimize the expected value of the investment
• [Figure: a decision tree with four choices. Stock 1: 110 (up, p = 0.6) and 90 (down, p = 0.4), expected value 102. Stock 2: 140 (up, p = 0.4) and 80 (down, p = 0.6), expected value 104. Bank: 101 with probability 1.0. Home: 100 with probability 1.0.]

Decision making with the utility function

• The preferences of a decision maker are captured by a utility function
• Utility function: log(x)
• [Figure: the same tree with log utilities. Stock 1: 2.0413 (up) and 1.9542 (down), expected utility 2.0065. Stock 2: 2.1461 (up) and 1.9030 (down), expected utility 2.0003. Bank: 2.0043. Home: 2.0000.]

Learning

• Supervised learning
  – Learning a mapping between inputs x and desired outputs y
  – A teacher gives the y's for the learning purposes
• Unsupervised learning
  – Learning relations between data components
  – No specific outputs given by a teacher
• Reinforcement learning
  – Learning a mapping between inputs x and desired outputs y
  – A critic does not give the y's, but instead a signal (reinforcement) of how good the answer was
• Other types of learning:
  – Concept learning, explanation-based learning, etc.
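The two investment trees from the decision-making slides can be reproduced in a few lines. The lotteries use the numbers from the slides; log base 10 is an assumption here (the slide writes only log(x), but the listed utilities match log10):

```python
import math

# Lotteries from the slides: (probability, monetary outcome) pairs.
choices = {
    "Stock 1": [(0.6, 110), (0.4, 90)],
    "Stock 2": [(0.4, 140), (0.6, 80)],
    "Bank":    [(1.0, 101)],
    "Home":    [(1.0, 100)],
}

def expected(lottery, value=lambda x: x):
    """Expected value of a lottery under a value (or utility) function."""
    return sum(p * value(x) for p, x in lottery)

# Risk-neutral decision maker: maximize expected monetary value.
ev = {name: expected(lot) for name, lot in choices.items()}
# Risk-averse log10 utility: maximize expected utility instead.
eu = {name: expected(lot, math.log10) for name, lot in choices.items()}

print(max(ev, key=ev.get))  # Stock 2 (expected value 104)
print(max(eu, key=eu.get))  # Stock 1 (expected utility ~2.0065)
```

The switch in the best choice is the point of the slides: the concave log utility penalizes Stock 2's larger downside, so the risk-averse decision maker prefers Stock 1 even though its expected monetary value is lower.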