Set 2: State-spaces and Uninformed Search
ICS 271 Fall 2016
Kalev Kask
271-fall 2016

You need to know
• State-space based problem formulation
 – State space (graph)
• Search space
 – Nodes vs. states
 – Tree search vs graph search
• Search strategies
• Analysis of search algorithms
 – Completeness, optimality, complexity
 – b, d, m

Example: Romania
[Figure: map of Romania with cities (Arad, Sibiu, Bucharest, Neamt, Mehadia, Dobreta, ...) and road distances]
Example: Romania
• On holiday in Romania; currently in Arad.
• Flight leaves tomorrow from Bucharest.
• Formulate goal:
 – be in Bucharest
• Formulate problem:
 – states: various cities
 – actions: drive between cities
• Find solution:
 – sequence of actions (cities), e.g., Arad, Sibiu, Fagaras, Bucharest

Problem Types
• Static / Dynamic
 – Previous problem was static: no attention to changes in the environment.
• Observable / Partially Observable / Unobservable
 – Previous problem was observable: it knew its initial state.
• Deterministic / Stochastic
 – Previous problem was deterministic: no new percepts were necessary; we can predict the future perfectly.
• Discrete / Continuous
 – Previous problem was discrete: we can enumerate all possibilities.

Abstraction/Modeling
• Definition of abstraction (states/actions)
 – Process of removing irrelevant detail to create an abstract representation: "high-level", ignores irrelevant details.
• Navigation example: how do we define states and operators?
 – First step is to abstract "the big picture"
  • i.e., solve a map problem
  • nodes = cities, links = freeways/roads (a high-level description)
  • this description is an abstraction of the real problem
 – Can later worry about details like freeway onramps, refueling, etc.
• Abstraction is critical for automated problem solving
 – must create an approximate, simplified model of the world for the computer to deal with: the real world is too detailed to model exactly
 – good abstractions retain all important details
 – an abstraction should be easier to solve than the original problem

Robot block world
• Given a set of blocks in a certain configuration,
• move the blocks into a goal configuration.
• Example:
 – ((A)(B)(C)) → (ACB)
[Figure: start configuration with blocks A, C, B and a goal configuration]

Operator Description
[Figure: block-world state space; from ((A)(B)(C)) one move reaches ((AB)(C)), ((B)(AC)), ((BA)(C)), ((BC)(A)), ((CA)(B)), and ((A)(CB))]
Effects of Moving a Block
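The block-moving operator above can be sketched as a successor function. This is an illustrative sketch, not code from the slides; the state representation (a frozenset of stacks, each stack a tuple listed bottom-to-top) is an assumption.

```python
def successors(state):
    """Block-world operator: move the top block of one stack onto another
    stack, or onto the table as a new stack. A state is a frozenset of
    stacks; each stack is a tuple listed bottom-to-top (an assumed encoding)."""
    result = set()
    stacks = list(state)
    for i, src in enumerate(stacks):
        block, reduced = src[-1], src[:-1]
        rest = stacks[:i] + stacks[i + 1:]
        for j, dst in enumerate(rest):        # put the block on another stack
            new = rest[:j] + [dst + (block,)] + rest[j + 1:]
            if reduced:
                new.append(reduced)
            result.add(frozenset(new))
        if reduced:                            # put the block on the table
            result.add(frozenset(rest + [reduced, (block,)]))
    return result

start = frozenset([("A",), ("B",), ("C",)])    # the state ((A)(B)(C))
succ = successors(start)
assert len(succ) == 6                          # six successors, as in the figure
assert frozenset([("A", "B"), ("C",)]) in succ # ((AB)(C)): B moved onto A
```

With three singleton stacks there are no table moves, so the six successors are exactly the six "move one block onto another" states listed in the figure.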
Example: vacuum world
• Observable, start in #5. Solution? [Right, Suck]

Vacuum world state space graph
[Figure: the eight vacuum-world states with Left, Right, and Suck transitions]
Example: vacuum world
• Unobservable, start in {1,2,3,4,5,6,7,8}. Solution?

The Traveling Salesperson Problem
• Find the shortest tour that visits all cities without visiting any city twice and returns to the starting point.
• State:
 – sequence of cities visited
• S0 = A
[Figure: six cities A, B, C, D, E, F with connecting edges]

The Traveling Salesperson Problem
• Find the shortest tour that visits all cities without visiting any city twice and returns to the starting point.
• State: sequence of cities visited
• S0 = A
• Transition model: (a, c, d) → { (a, c, d, x) | x ∉ {a, c, d} }
• Solution = a complete tour

Example: 8-queen problem

The "8-Puzzle" Problem
[Figure: start state (1 2 3 / 4 8 6 / 7 5 _) and goal state (1 2 3 / 4 5 6 / 7 8 _)]

Example: robotic assembly
• states?: real-valued coordinates of robot joint angles; parts of the object to be assembled
• actions?: continuous motions of robot joints
• goal test?: complete assembly
• path cost?: time to execute

Formulating Problems; Another Angle
• Problem types
 – Satisfying: 8-queen
 – Optimizing: Traveling salesperson
• Goal types
 – board configuration
 – sequence of moves
 – a strategy (contingency plan)
• Satisfying leads to optimizing since "small is quick"
• For traveling salesperson: satisfying easy, optimizing hard
• Semi-optimizing:
 – find a good solution
• In Russell and Norvig:
 – single-state, multiple states, contingency plans, exploration problems

Tree search example
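The TSP transition model above (extend a partial tour by any unvisited city) can be sketched as follows; the six-city set A..F from the figure is an assumption for illustration.

```python
CITIES = {"A", "B", "C", "D", "E", "F"}  # cities assumed from the slide's figure

def successors(path):
    """Transition model: extend a partial tour with one unvisited city,
    i.e., (a, c, d) -> { (a, c, d, x) | x not in {a, c, d} }."""
    return {path + (x,) for x in CITIES - set(path)}

s0 = ("A",)
# From the start state A there are 5 possible one-city extensions.
assert len(successors(s0)) == 5
# A complete tour has no successors: every city has been visited.
assert successors(("A", "B", "C", "D", "E", "F")) == set()
```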
[Figure: successive stages of tree search for the Romania route-finding problem; the root Arad is expanded first, then its child Sibiu, whose children include Arad, Fagaras, Oradea, and Rimnicu Vilcea]
function TREE-SEARCH( problem, strategy) returns a solution, or failure
initialize the search tree using the initial state of problem
loop do
if there are no candidates for expansion then return failure
choose a leaf node for expansion according to strategy
if the node contains a goal state then return the corresponding solution
else expand the node and add the resulting nodes to the search tree
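The TREE-SEARCH pseudocode above can be transcribed almost line for line; this is a minimal sketch, with the FIFO/LIFO strategy choice standing in for the generic "strategy" parameter.

```python
from collections import deque

def tree_search(initial, successors, is_goal, strategy="fifo"):
    """Direct transcription of TREE-SEARCH: no explored set, so a state
    reachable along several paths is expanded once per path."""
    frontier = deque([(initial, [initial])])   # search tree as (node, path)
    while frontier:                            # no candidates -> failure
        node, path = (frontier.popleft() if strategy == "fifo"
                      else frontier.pop())     # choose a leaf per strategy
        if is_goal(node):
            return path                        # the corresponding solution
        for child in successors(node):         # expand the node
            frontier.append((child, path + [child]))
    return None

# Acyclic toy graph; on a graph with cycles, tree search could loop forever.
G = {"A": ["B", "C"], "B": ["D"], "C": [], "D": []}
assert tree_search("A", G.get, lambda n: n == "D") == ["A", "B", "D"]
```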
Tree-Search vs Graph-Search
• Tree-search(problem), returns a solution or failure
• Frontier ← initial state
• Loop do
 – If frontier is empty, return failure
 – Choose a leaf node and remove from frontier
 – If the node is a goal, return the corresponding solution
 – Expand the chosen node, adding its children to the frontier
• -----------------------------------------------------------------
• Graph-search(problem), returns a solution or failure
• Frontier ← initial state, explored ← empty
• Loop do
 – If frontier is empty, return failure
 – Choose a leaf node and remove from frontier
 – If the node is a goal, return the corresponding solution
 – Add the node to the explored set
 – Expand the chosen node, adding its children to the frontier, only if not in frontier or explored set

Basic search scheme
• We have 3 kinds of states
 – explored (past): only graph search
 – frontier (current)
 – unexplored (future): implicitly given
• Initially frontier = start state
• Loop until found solution or exhausted state space
 – pick/remove first node from frontier using search strategy
  • priority queue: FIFO (BFS), LIFO (DFS), g (UCS), f (A*), etc.
 – check if goal
 – add this node to explored
 – expand this node, add children to frontier (graph search: only those children whose state is not in explored/frontier list)
 – Q: what if a better path is found to a node already on the explored list?

Graph-Search
[Figure: three stages (a), (b), (c) of graph search on the Romania example]
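The graph-search loop described above differs from tree search only in the explored set; a minimal sketch (FIFO frontier assumed for concreteness):

```python
from collections import deque

def graph_search(start, successors, is_goal):
    """Graph search: like tree search, but an explored set prevents
    re-expanding states already seen (FIFO frontier assumed here)."""
    frontier = deque([(start, [start])])       # (state, path so far)
    explored = set()
    while frontier:                            # frontier empty -> failure
        state, path = frontier.popleft()       # choose a leaf node
        if is_goal(state):
            return path                        # the corresponding solution
        explored.add(state)                    # add the node to explored
        for s in successors(state):
            # add children only if not already in frontier or explored
            if s not in explored and all(s != q for q, _ in frontier):
                frontier.append((s, path + [s]))
    return None

# Tiny graph with a cycle; tree search could loop here, graph search terminates.
G = {"S": ["A", "B"], "A": ["S", "G"], "B": ["S"], "G": []}
assert graph_search("S", G.get, lambda s: s == "G") == ["S", "A", "G"]
```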
Why Search Can be Difficult
• At the start of the search, the search algorithm does not know
 – the size of the tree
 – the shape of the tree
 – the depth of the goal states
• How big can a search tree be?
 – say there is a constant branching factor b
 – and one goal exists at depth d
 – a search tree which includes a goal can have b^d different branches (worst case)
• Examples:
 – b = 2, d = 10: b^d = 2^10 = 1024
 – b = 10, d = 10: b^d = 10^10 = 10,000,000,000

Searching the Search Space
• Uninformed (blind) search: don't know if a state is "good"
 – Breadth-first
 – Uniform-cost first
 – Depth-first
 – Iterative deepening depth-first
 – Bidirectional
 – Depth-first branch and bound
• Informed heuristic search: have an evaluation function for states
 – Greedy search, hill climbing, heuristics
• Important concepts:
 – Completeness: does it always find a solution if one exists?
 – Time complexity (b, d, m)
 – Space complexity (b, d, m)
 – Quality of solution: optimality = does it always find the best solution?

Search strategies
• A search strategy is defined by picking the order of node expansion
• Strategies are evaluated along the following dimensions:
 – completeness: does it always find a solution if one exists?
 – time complexity: number of nodes generated
 – space complexity: maximum number of nodes in memory
 – optimality: does it always find a least-cost solution?
• Time and space complexity are measured in terms of
 – b: maximum branching factor of the search tree
 – d: depth of the least-cost solution
 – m: maximum depth of the state space (may be ∞)

Breadth-First Search
• Expand shallowest unexpanded node
• Implementation:
 – frontier is a FIFO queue, i.e., new successors go at end
• Expand: frontier = [C, D, E]. Is C a goal state?

Breadth-First Search
• Expand shallowest unexpanded node
• Implementation:
 – frontier is a FIFO queue, i.e., new successors go at end
• Expand: frontier = [D, E, F, G]. Is D a goal state?
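The b^d growth above is easy to check numerically; a quick sketch (the helper name is ours, not from the slides):

```python
def nodes_in_tree(b, d):
    """Total nodes in a complete tree of branching factor b and depth d:
    1 + b + b^2 + ... + b^d."""
    return sum(b**i for i in range(d + 1))

assert 2**10 == 1024                  # branches at depth 10 with b = 2
assert 10**10 == 10_000_000_000       # branches at depth 10 with b = 10
# Total BFS expansions up to depth d are O(b^d): the last level dominates.
assert nodes_in_tree(10, 4) == 11111  # matches "nodes expanded" at depth 4
```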
Breadth-First Search
• Actually, in BFS we can check if a node is a goal node when it is generated (rather than expanded).

Breadth-First Search Graph
[Figure: search graph over states S, A, B, C, D, E, F, G with the corresponding BFS search tree; repeated states are not expanded by graph search]
Note: this is the search tree at some particular point in the search.

Complexity of Breadth-First Search
• Time complexity
 – assume (worst case) that there is 1 goal leaf at the RHS
 – so BFS will expand all nodes = 1 + b + b^2 + ... + b^d = O(b^d)
• Space complexity
 – how many nodes can be in the queue (worst case)?
 – at depth d there are b^d unexpanded nodes in the queue = O(b^d)

Examples of Time and Memory Requirements for Breadth-First Search
Depth of Solution | Nodes Expanded | Time          | Memory
0                 | 1              | 1 millisecond | 100 bytes
2                 | 111            | 0.1 seconds   | 11 kilobytes
4                 | 11,111         | 11 seconds    | 1 megabyte
8                 | 10^8           | 31 hours      | 11 gigabytes
12                | 10^12          | 35 years      | 111 terabytes
Assuming b = 10, 1000 nodes/sec, 100 bytes/node

Uniform Cost Search
• Guaranteed to find an optimal solution (as long as all steps have cost > 0)
 – When a node is selected for expansion, a shortest path to it has been found
• UCS expands nodes in order of optimal path cost

Uniform cost search
1. Put the start node s on OPEN.
2. If OPEN is empty, exit with failure.
3. Remove the first node n from OPEN and place it on CLOSED.
4. If n is a goal node, exit successfully with the solution obtained by tracing back pointers from n to s.
5. Otherwise, expand n, generating all its successors; attach to them pointers back to n, and put them in OPEN in order of shortest cost.
6. Go to step 2.

DFS Branch and Bound
• At step 4: compute the cost of the solution found and update the upper bound U.
• At step 5: expand n, generating all its successors; attach to them pointers back to n, and put them on top of OPEN. Compute the cost of the partial path to each node and prune if larger than U.
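The numbered uniform-cost procedure above can be sketched with a priority queue; this is an illustrative sketch (the edge-list encoding is assumed), not the slides' OPEN/CLOSED lists verbatim.

```python
import heapq

def uniform_cost_search(start, goal, edges):
    """UCS: pop the frontier node with smallest path cost g. When a node
    is selected for expansion, a shortest path to it has been found."""
    frontier = [(0, start, [start])]           # (g, state, path): step 1
    closed = set()
    while frontier:                            # step 2: empty -> failure
        g, state, path = heapq.heappop(frontier)   # step 3
        if state in closed:
            continue
        closed.add(state)
        if state == goal:                      # step 4
            return g, path
        for nxt, cost in edges.get(state, []):     # step 5
            if nxt not in closed:
                heapq.heappush(frontier, (g + cost, nxt, path + [nxt]))
    return None

# Cheapest path S->G goes through A and B (cost 3), not the direct edge (5).
E = {"S": [("A", 1), ("G", 5)], "A": [("B", 1)], "B": [("G", 1)]}
assert uniform_cost_search("S", "G", E) == (3, ["S", "A", "B", "G"])
```

Putting g first in the heap tuples makes `heapq` order OPEN by path cost, which is exactly step 5's "in order of shortest cost".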
Depth-First Search
• Expand deepest unexpanded node
• Implementation:
 – frontier = Last In First Out (LIFO) queue, i.e., put successors at front
• Successive steps of the search:
 – Is A a goal state?
 – queue = [H, I, E, C]; Is H a goal state?
 – queue = [I, E, C]; Is I a goal state?
 – queue = [E, C]; Is E a goal state?
 – queue = [C]; Is C a goal state?
 – queue = [F, G]; Is F a goal state?
 – queue = [L, M, G]; Is L a goal state?
Depth-First Search
[Figure: 8-puzzle search tree, panels (a), (b), (c); one node is discarded before generating node 7]
Generation of the First Few Nodes in a Depth-First Search
[Figure: the 8-puzzle search graph when the goal is reached in depth-first search, with the goal node marked]
The Graph When the Goal Is Reached in Depth-First Search
Depth-First-Search (*)
1. Put the start node s on OPEN.
2. If OPEN is empty, exit with failure.
3. Remove the first node n from OPEN.
4. If n is a goal node, exit successfully with the solution obtained by tracing back pointers from n to s.
5. Otherwise, expand n, generating all its successors (check for self-loops); attach to them pointers back to n, and put them at the top of OPEN in some order.
6. Go to step 2.
* search the tree search-space (but avoid self-loops)
** the default assumption is that DFS searches the underlying search-tree

Depth-First tree-search Properties
• Non-optimal solution path
• Incomplete unless there is a depth bound (we will assume depth-limited DF-search)
• Re-expansion of nodes (when the state-space is a graph)
• Exponential time
• Linear space (for tree-search)

Comparing DFS and BFS
• BFS is optimal, DFS is not
• Time complexity worst-case is the same, but
 – in the worst case BFS is always better than DFS
 – sometimes, on average, DFS is better if:
  • many goals, no loops and no infinite paths
• BFS is much worse memory-wise
 – DFS can be linear space
 – BFS may store the whole search space
• In general
 – BFS is better if the goal is not deep, if there are long paths, many loops, or a small search space
 – DFS is better if there are many goals and not many loops
 – DFS is much better in terms of memory

Iterative-Deepening Search (DFS)
• Every iteration is a DFS with a depth cutoff.

Iterative deepening (ID)
1. i = 1
2. While no solution, do
3.  DFS from initial state S0 with cutoff i
4.  If found goal, stop and return solution; else, increment cutoff
Comments:
• IDS implements BFS with DFS
• Only one path in memory
• BFS at step i may need to keep 2^i nodes in OPEN
Iterative Deepening Search L=2
[Figure: stages of depth-limited search with cutoff 2]
Iterative Deepening Search L=3
[Figure: stages of depth-limited search with cutoff 3]

Iterative deepening search
[Figure: stages of iterative-deepening search with depth bounds 1, 2, 3, 4]
Stages in Iterative-Deepening Search
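The iterative-deepening procedure (repeated DFS with cutoff i = 1, 2, ...) can be sketched as follows; a recursive depth-limited DFS stands in for the slides' OPEN-list version.

```python
def depth_limited_dfs(state, successors, is_goal, limit):
    """DFS with a depth cutoff; keeps only the current path in memory."""
    if is_goal(state):
        return [state]
    if limit == 0:
        return None
    for s in successors(state):
        sub = depth_limited_dfs(s, successors, is_goal, limit - 1)
        if sub is not None:
            return [state] + sub
    return None

def iterative_deepening(start, successors, is_goal, max_depth=20):
    """ID: run depth-limited DFS with increasing cutoff i = 1, 2, ...
    BFS-like completeness with DFS-like linear memory."""
    for i in range(1, max_depth + 1):
        sol = depth_limited_dfs(start, successors, is_goal, i)
        if sol is not None:
            return sol
    return None

# Binary tree encoded as strings: children of node n are n+"0" and n+"1".
succ = lambda n: [n + "0", n + "1"] if len(n) < 4 else []
# Goal "r11" sits at depth 2, so it is found on the i = 2 iteration.
assert iterative_deepening("r", succ, lambda n: n == "r11") == ["r", "r1", "r11"]
```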
Bidirectional Search
• Idea
 – simultaneously search forward from S and backwards from G
 – stop when both "meet in the middle"
 – need to keep track of the intersection of the 2 open sets of nodes
• What does searching backwards from G mean?
 – need a way to specify the predecessors of G
  • this can be difficult, e.g., predecessors of checkmate in chess?
 – what if there are multiple goal states?
 – what if there is only a goal test, no explicit list?
• Complexity
 – time complexity at best: O(2·b^(d/2)) = O(b^(d/2))
 – memory complexity is the same

Bi-Directional Search
[Figure: unidirectional search frontier at termination vs. two bidirectional search frontiers meeting between the start node and the goal node]
Fig. 2.10 Bidirectional and unidirectional breadth-first searches.
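The meet-in-the-middle idea can be sketched as two BFS frontiers advanced layer by layer; this is an illustrative sketch assuming an undirected graph, so backward predecessors are just neighbors.

```python
from collections import deque

def bidirectional_bfs(start, goal, neighbors):
    """Search forward from start and backward from goal one BFS layer at
    a time, stopping as soon as the two frontiers intersect."""
    if start == goal:
        return 0
    fwd, bwd = {start: 0}, {goal: 0}           # distances from each end
    qf, qb = deque([start]), deque([goal])
    while qf and qb:
        for q, dist, other in ((qf, fwd, bwd), (qb, bwd, fwd)):
            for _ in range(len(q)):            # expand one full layer
                u = q.popleft()
                for v in neighbors(u):
                    if v in other:             # frontiers have met
                        return dist[u] + 1 + other[v]
                    if v not in dist:
                        dist[v] = dist[u] + 1
                        q.append(v)
    return None

# Path graph 0-1-2-3-4: distance from 0 to 4 is 4, and each search
# only reaches about depth d/2 before the frontiers meet.
nbrs = lambda n: [m for m in (n - 1, n + 1) if 0 <= m <= 4]
assert bidirectional_bfs(0, 4, nbrs) == 4
```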
Comparison of Algorithms
Figure 3.18

Criterion  | Breadth-First | Uniform-Cost     | Depth-First | Depth-Limited | Iterative Deepening | Bidirectional (if applicable)
Time       | O(b^d)        | O(b^(1+⌊C*/ε⌋))  | O(b^m)      | O(b^l)        | O(b^d)              | O(b^(d/2))
Space      | O(b^d)        | O(b^(1+⌊C*/ε⌋))  | O(bm)       | O(bl)         | O(bd)               | O(b^(d/2))
Optimal?   | Yes           | Yes              | No          | No            | Yes                 | Yes
Complete?  | Yes           | Yes              | No          | Yes, if l ≥ d | Yes                 | Yes

Evaluation of search strategies. b is the branching factor; d is the depth of the solution; m is the maximum depth of the search tree; l is the depth limit.