Dynamic Programming: Solving Optimization Problems with Optimal Substructure - Prof. Thoma, Study notes of Algorithms and Programming

Dynamic programming is a method for solving optimization problems where the solution can be recursively described in terms of solutions to subproblems. This technique is used when the optimal substructure property holds, meaning that the components of an optimal solution are optimal solutions to subproblems. Dynamic programming algorithms find solutions to subproblems and store them in memory for later use, making them more efficient than brute-force methods. In this document, we discuss dynamic programming and its application to various problems such as longest common subsequence, coin changing, and knapsack problems.

Typology: Study notes

Pre 2010

Uploaded on 07/29/2009


Slide 1: CS 432 – Algorithms
• Dynamic programming
  – Also, memoization
• Examples:
  – Longest Common Subsequence
• Readings: 8.1 (pp. 334–335), 8.4 (p. 361)
  – Also, handout on 0/1 knapsack
  – Wikipedia articles

Slide 2: Dynamic programming
• Old "bad" name (see Wikipedia or Notes, p. 361)
• Used when the solution can be described recursively in terms of solutions to subproblems (optimal substructure)
• The algorithm finds solutions to subproblems and stores them in memory for later use
• More efficient than "brute-force" methods, which solve the same subproblems over and over again

Slide 3: Optimal Substructure Property
• Definition (p. 334): if S is an optimal solution to a problem, then the components of S are optimal solutions to subproblems
• Examples:
  – True for knapsack
  – True for coin changing (p. 334)
  – True for single-source shortest paths
  – Not true for longest simple path (p. 335)

Slide 4: Dynamic Programming
• Works "bottom-up"
  – Finds solutions to small subproblems first
  – Stores them
  – Combines them somehow to find a solution to a slightly larger subproblem
• Compare to the greedy approach
  – Also requires optimal substructure
  – But greedy makes its choice first, then solves

Slide 5: Problems Solved with Dynamic Programming
• Coin changing (Section 8.2; we won't do)
• Multiplying a sequence of matrices (8.3; we might do if we have time)
  – Can be done in various orders: (AB)C vs. A(BC)
  – Pick the order that does the fewest scalar multiplications
• Longest common subsequence (8.4; we'll do)
• All-pairs shortest paths (Floyd's algorithm)
  – Remember from CS 216?
• Constructing optimal binary search trees
• Knapsack problems (we'll do 0/1)

Slide 7: Remember Fibonacci numbers?
• Recursive code:

    long fib(int n) {
        assert(n >= 0);
        if (n == 0) return 0;
        if (n == 1) return 1;
        return fib(n-1) + fib(n-2);
    }

• What's the problem?
  – Repeatedly solves the same subproblems
  – "Obscenely" exponential (p. 326)

Slide 8: Memoization
• Before talking about dynamic programming, another general technique: memoization
  – AKA using a memory function
• Simple idea:
  – Calculate and store solutions to subproblems
  – Before solving one (again), look to see if you've remembered it

Slide 9: Memoization
• Use a Table abstract data type
  – Lookup key: whatever identifies a subproblem
  – Value stored: the solution
• Could be an array/vector
  – E.g. for Fibonacci, store fib(n) using index n
  – Need to initialize the array
• Could use a map / hash table

Slide 10: Memoization and Fibonacci
• Before the recursive code below is called, results[] must be initialized so all values are -1:

    long fib_mem(int n, long results[]) {
        if (results[n] != -1)
            return results[n];    // return stored value
        long val;
        if (n == 0 || n == 1)
            val = n;              // odd but right
        else
            val = fib_mem(n-1, results) + fib_mem(n-2, results);
        results[n] = val;         // store calculated value
        return val;
    }

Slide 11: Observations on fib_mem()
• Same elegant top-down, recursive approach based on the definition
  – Without repeated subproblems
• Memory function: a function that remembers
  – Save time by using extra space
• Can show this runs in Θ(n)

Slide 12: Memoization and Functional Languages
• Languages like Lisp and Scheme are functional languages
• How could memoization help?
• What could go wrong? Would this always work?
  – Side effects
  – Haskell does this (call-by-need)

[Slides 13–24 are not included in this preview.]

Slides 25–38: LCS Examples (1)–(15)
These slides fill in the LCS table for X = ABCB (rows, i = 1..4) and Y = BDCAB (columns, j = 1..5) one entry at a time. First the borders are initialized:

    for i = 1 to m: c[i,0] = 0
    for j = 1 to n: c[0,j] = 0

then each entry is computed with:

    if (Xi == Yj) c[i,j] = c[i-1,j-1] + 1
    else          c[i,j] = max(c[i-1,j], c[i,j-1])

The completed table (slide 38):

        j:  0   1   2   3   4   5
                B   D   C   A   B
    i=0     0   0   0   0   0   0
    i=1  A  0   0   0   0   1   1
    i=2  B  0   1   1   1   1   2
    i=3  C  0   1   1   2   2   2
    i=4  B  0   1   1   2   2   3

Slide 39: LCS Algorithm Running Time
• The LCS algorithm calculates the value of each entry of the array c[m,n]
• So what is the running time? O(m*n), since each c[i,j] is calculated in constant time and there are m*n entries in the array

Slide 40: How to Find the Actual LCS
• So far, we have just found the length of the LCS, but not the LCS itself
• We want to modify this algorithm to make it output the longest common subsequence of X and Y
• Each c[i,j] depends on c[i-1,j] and c[i,j-1], or on c[i-1,j-1]
• For each c[i,j] we can say how its value was acquired; for example, an entry with c[i,j] = c[i-1,j-1] + 1 = 2 + 1 = 3 came from a character match

Slide 41: How to Find the Actual LCS, continued
• Remember that

    c[i,j] = c[i-1,j-1] + 1            if x[i] == y[j]
             max(c[i-1,j], c[i,j-1])   otherwise

• So we can start from c[m,n] and go backwards
• Look first to see whether the second case above was true
• If not, then c[i,j] = c[i-1,j-1] + 1, so remember x[i] (because x[i] is part of the LCS)
• When i = 0 or j = 0 (i.e. we have reached the beginning), output the remembered letters in reverse order

Slide 42: Algorithm to Find the Actual LCS
• Here's a recursive algorithm to do this: [the slide's pseudocode was garbled in extraction]