Heuristics - Artificial Intelligence - Lecture Slides | CPSC 481, Study notes of Computer Science

ch 3 Material Type: Notes; Professor: Ryu; Class: Artificial Intelligence; Subject: Computer Science; University: California State University - Fullerton; Term: Spring 2014;

Artificial Intelligence CPSC 481
AI as Representation and Search: Heuristic Search for Problem Solving

Lecture Overview
- Heuristics in AI and problem solving
- Heuristic search algorithms
  - Hill-climbing, simulated annealing, best-first search, constraint satisfaction
  - Game-playing strategies using mini-max and alpha-beta pruning
- Performance evaluation of heuristics
  - Theoretical evaluation criteria
  - Complexity and efficiency issues

The "Most Wins" Heuristic Applied to the First Children in Tic-Tac-Toe

Heuristically Reduced State Space for Tic-Tac-Toe
- The heuristic value is calculated by counting possible wins.
- Two-thirds of the search space is pruned away with the first move.

Components of a Heuristic Function
- A heuristic function has the form f(n) = g(n) + h(n), where:
  - g(n) is the distance (or other cost measure) from the start state to the current state n. E.g., g is 0 at the start state and is incremented by 1 for each level of the search.
  - h(n) is a heuristic estimate of the distance from state n to a goal.
  - The h value guides the search toward heuristically promising states, while the g value keeps the search from wandering indefinitely down fruitless paths.
- If two states are equally (or nearly equally) promising, it is generally preferable to examine the one nearest the root (initial) state of the graph, since it has a greater probability of lying on the shortest path to the goal.
- A good heuristic function returns a unique, non-conflicting score that accurately measures which state is better.

The Heuristic f Applied to States in the 8-Puzzle

Simple Hill-Climbing: a Heuristic Search
- Strategy:
  - Expand the current state of the search.
  - Evaluate its children states.
  - Select the first better (closer) child for further expansion, keeping no history.
  - Continue until a solution is found or no better state remains.
- Comment on the approach: one of the simplest ways to implement a heuristic.
- Problems:
  - May get stuck at local maxima: if the search reaches a state with a better evaluation than any of its children, the algorithm halts and may fail to find the best solution.
  - May get confused when the evaluations give no clear best choice (see the plateau example below).

Plateau Problem in Hill-Climbing with 3-Level Look-Ahead
- Hill-climbing can get confused on a plateau, where the costs of all paths are similar.

Best-First Search Algorithm

    function best_first_search;
    begin
        open := [Start];                               % initialize
        closed := [];
        while open ≠ [] do                             % states remain
        begin
            remove the leftmost state from open, call it X;
            if X = goal then return the path from Start to X
            else begin
                generate children of X;
                for each child of X do
                case
                    the child is not on open or closed:
                        begin
                            assign the child a heuristic value;
                            add the child to open
                        end;
                    the child is already on open:
                        if the child was reached by a shorter path
                        then give the state on open the shorter path;
                    the child is already on closed:
                        if the child was reached by a shorter path then
                        begin
                            remove the state from closed;
                            add the child to open
                        end;
                end;                                   % case
                put X on closed;
                re-order states on open by heuristic merit (best leftmost)
            end
        end;
        return FAIL                                    % open is empty
    end.

Best-First Search of a Hypothetical State Space
- Letters represent states; numbers represent heuristic values as costs (here, the smaller the better).
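The best_first_search pseudocode can be rendered as runnable Python. This is a hedged sketch, not the slides' implementation: it uses a `heapq` priority queue for the open list (instead of re-sorting a plain list) and a dictionary of best-known g-values; `successors(state)` yielding `(child, edge_cost)` pairs and `heuristic(state)` are caller-supplied assumptions.

```python
import heapq

def best_first_search(start, goal, successors, heuristic):
    """Best-first search: 'open' is a priority queue ordered by f = g + h,
    'closed' records expanded states, and a state reached again by a shorter
    path is re-opened, mirroring the pseudocode's three cases."""
    open_heap = [(heuristic(start), start, [start])]   # entries: (f, state, path)
    g = {start: 0}                                     # cheapest known g per state
    closed = set()
    while open_heap:
        _, x, path = heapq.heappop(open_heap)          # leftmost (best) state
        if x == goal:
            return path
        if x in closed:
            continue                                   # stale queue entry
        closed.add(x)
        for child, step_cost in successors(x):
            new_g = g[x] + step_cost
            if child not in g or new_g < g[child]:     # new state, or shorter path
                g[child] = new_g
                f = new_g + heuristic(child)
                heapq.heappush(open_heap, (f, child, path + [child]))
                closed.discard(child)                  # re-open if it was closed
    return None                                        # open is empty: FAIL

# Toy example (assumed graph): successors(s) yields (child, edge_cost) pairs.
graph = {'A': [('B', 1), ('C', 1)], 'B': [('D', 1)], 'C': [('D', 1)], 'D': []}
h_vals = {'A': 2, 'B': 1, 'C': 1, 'D': 0}
path = best_first_search('A', 'D', lambda s: graph[s], h_vals.get)
```

The priority queue replaces the pseudocode's "re-order states on open by heuristic merit" step, which is the usual way to implement it efficiently.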
- Bold states indicate the states expanded by the heuristic algorithm; assume P is the goal.

A Trace of the Execution of Best-First Search
- The algorithm maintains a priority queue.

Successive Stages of Open and Closed with Best-First Search

Open and Closed as They Appear after the 3rd Iteration of Best-First Search

State Space Generated in Best-First Search
[Figure: an 8-puzzle state space generated by best-first search, with boards arranged by search level g(n) = 0 through g(n) = 5 and an f-value shown beside each board; the goal is reached at level 5.]

Local Admissibility for Heuristics

DEFINITION: MONOTONICITY (CONSISTENCY)
A heuristic function h is monotone if:
1. For all states n1 and n2, where n2 is a descendant of n1,
   h(n1) − h(n2) ≤ cost(n1, n2),
   where cost(n1, n2) is the actual cost (in number of moves) of going from state n1 to n2.
2. The heuristic evaluation of the goal state is zero: h(Goal) = 0.

Evaluating the Behavior of Heuristics
- Criteria for good heuristics:
  - Completeness: is the algorithm guaranteed to find a solution when there is one?
  - Optimality: does the strategy find the optimal solution?
  - Time complexity: how long does it take to find a solution?
  - Space complexity: how much memory is needed to perform the search?

Use of Information in Heuristics
- In general, the more informed a heuristic is, the better.
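The monotonicity (consistency) definition can be checked mechanically on any finite graph. A minimal sketch, where the edge-list representation and the example heuristics are assumptions for illustration:

```python
def is_monotone(edges, h, goal):
    """Check monotonicity (consistency): h(goal) == 0, and for every edge
    (n1, n2, cost) where n2 is a child of n1, h(n1) - h(n2) <= cost."""
    return h(goal) == 0 and all(h(n1) - h(n2) <= cost for n1, n2, cost in edges)

# Tiny assumed example: straight-line graph A -> B -> G with unit move costs.
edges = [('A', 'B', 1), ('B', 'G', 1)]
good_h = {'A': 2, 'B': 1, 'G': 0}.get   # never drops faster than the move cost
bad_h = {'A': 5, 'B': 1, 'G': 0}.get    # drops by 4 across a cost-1 move
```

Here `good_h` satisfies both conditions, while `bad_h` violates condition 1 on the A→B edge.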
- But we must also weigh the computational cost of applying the more informed heuristic (e.g., in computer chess).

Beyond the Types of Heuristic Search Methods Discussed So Far

Mini-max Algorithm for Game Playing
- When playing a game against an opponent (i.e., with at least two players), a player must take the opponent's actions into account.
- To win, a player needs to maximize his or her own advantage and minimize the opponent's advantage whenever possible.
- The players in a game are referred to as MIN and MAX:
  - MAX represents the player trying to win, i.e., to maximize her advantage.
  - MIN represents the player attempting to minimize MAX's score.
- Assumption: your opponent uses the same knowledge of the state space as you do and applies that knowledge in a consistent effort to win the game.
- Implementation:
  1. Create a game graph from the rules of the game and the strategies.
  2. Label each level of the game graph as MIN or MAX.
  3. Give each leaf node a heuristic value.
  4. Mini-max propagates these values up the graph through successive parent nodes according to the following rules:
     - If the parent state is a MAX node, give it the maximum value among its children.
     - If the parent state is a MIN node, give it the minimum value among its children.

Mini-Maxing to Fixed Ply Depth
- A hypothetical state space: leaf states show heuristic values; internal states show backed-up values.

Two-Ply Mini-max, and One of Two Possible MAX Second Moves, from Nilsson (1971)

Two-Ply Mini-max Applied to X's Move Near the End of the Game, from Nilsson (1971)

Alpha-Beta Pruning for Mini-max
- Problem with mini-max: it pursues all branches in the space, including many that could be ignored or pruned by a more intelligent algorithm.
- Idea of alpha-beta pruning: rather than searching the entire space to the ply depth, proceed in a depth-first fashion, maintaining two values during the search, alpha for MAX and beta for MIN (a use of more informed heuristics).
- Alpha can never decrease, and beta can never increase.
- Algorithm:
  1. Descend to the full ply depth in a depth-first fashion, applying the heuristic evaluation to a state and all its siblings.
  2. Back values up to the parents using the mini-max algorithm.
  3. Terminate search using two rules based on the alpha and beta values:
     - Stop the search below any MIN node having a beta value ≤ the alpha value of any of its MAX ancestors.
     - Stop the search below any MAX node having an alpha value ≥ the beta value of any of its MIN ancestors.

Example: Linear Programming
- A scenario for a wooden-toy manufacturer:
  - A soldier: price $27; raw materials cost $10; labor and overhead cost $14.
  - A train: price $21; raw materials cost $9; labor and overhead cost $10.
- Skills (carpentry and finishing) required:
  - One soldier requires 2 hrs of finishing and 1 hr of carpentry labor.
  - One train requires 1 hr of finishing and 1 hr of carpentry labor.
- Labor available each week: 100 hrs of finishing and 80 hrs of carpentry.
- Market demand: soldiers, at most 40 per week; trains, unlimited.
- How can the company maximize weekly profit (revenues − costs)?
  - Let x1 = number of soldiers produced per week and x2 = number of trains produced per week.
  - Objective function: maximize z = 3x1 + 2x2
    (profit per soldier: 27 − 10 − 14 = $3; per train: 21 − 9 − 10 = $2)
  - Subject to the constraints:
    - 2x1 + x2 ≤ 100 (finishing constraint)
    - x1 + x2 ≤ 80 (carpentry constraint)
    - x1 ≤ 40 (demand constraint for soldiers)
    - x1, x2 ≥ 0 (sign restrictions)
  - Solution: the feasible region is the set of all points (x1, x2) satisfying the constraints. Determining the feasible region graphically gives the maximum z = $180, at x1 = 20 soldiers and x2 = 60 trains.

Issues Related to State-Space Representation in Problem Solving
- State-space representation has been discussed mostly for game problems.
- For most games, such as the 8-puzzle and tic-tac-toe, the search space is large enough to require heuristic pruning.
- Games generally do not involve complex representational issues.
- They mostly require a simple representation, e.g., a board configuration.
- Because of the common representation, a single heuristic may be applied throughout the search space.
- But many complex real-world problems require much richer representations and methods!

Beyond Traditional Heuristic Approaches
- Heuristic search is one of the oldest AI strategies, because:
  - Many problems do not have exact solutions (e.g., medical diagnosis), or a problem may have an exact solution whose computational cost is prohibitive.
  - But all heuristics are fallible, since a heuristic is only an informed guess at the next step in solving a problem.
- Challenges to traditional heuristic approaches:
  - A heuristic function is supposed to estimate the cost of a solution. Can a heuristic function be learned from experience?
    - This requires inductive learning, with a function created and executed at runtime.
  - Search methods under partially observable environments:
    - Classical state-space search assumes a deterministic, fully observable world.
  - Search methods under unknown environments:
    - Contingency problems, or online search problems.
    - Offline search computes a complete solution before acting, whereas online search interleaves computation and action: take an action, observe the environment, compute the next action, and so on.