Pseudocode Algorithms: Understanding Complexity and Comparisons in Data Structures (study notes, Discrete Structures and Graph Theory)

An overview of pseudocode algorithms, focusing on their definiteness, correctness, finiteness, generality, and translation into programming languages. It covers various types of loops, if statements, and array manipulations, and discusses the importance of counting comparisons and assignments in analyzing algorithm complexity. Examples include sorting algorithms and searching algorithms like linear search and binary search.

EECS 210, Fall 2007
Algorithm analysis notes

Most of the material below was discussed in the class lectures. I've expanded a bit on some of the examples.

An algorithm is a finite set of precise instructions for performing a computation or solving a problem. An algorithm should have the following characteristics:

  input        -- input values from a specified set
  output       -- output values produced from an input set; these values are the solution to the problem
  definiteness -- the steps are precisely defined
  correctness  -- the correct output is returned for each set of input values
  finiteness   -- the output must be produced in a finite number of steps
  generality   -- any legitimate data set is handled without error

The main thing to remember about a pseudocode algorithm is that it must be general enough to be translated into an arbitrary programming language. Put another way, the steps are English-type statements rather than "code," and an algorithm should contain little, if any, language-specific syntax. Indentation and/or begin-end blocks are used to indicate nesting or grouping, and loops are terminated with an end statement when necessary for clarity.

comments: enclosed in braces ({ }) or beginning with //

comparison operators: =, <, ≤, >, ≥, ≠ (notice the use of mathematical notation, e.g. ≠ rather than !=)

assignment operator: ← (you'll often see := in other versions of pseudocode)

loops:

  while (Boolean condition or statement) do     -- test at top
    examples:  while (there are vertices left to examine) do
               while (A < 10) do

  for index ← start to finish do                -- there may be an optional step size (increment)
    examples:  for i ← 1 to 10 do
               for i ← 1 to 30 step 3 do

  repeat
    statements to execute
  until (Boolean condition)                     -- always executes at least once
    example:   repeat
                 examine next list item
               until no items remain

if statements:

  if (Boolean condition) then
    statements to execute
  endif

  if (Boolean condition) then
    statements to execute
  elseif (Boolean condition)
    statements to execute
  endif

array or list elements: A[i, j] or aij

Complexity of an algorithm

The complexity of an algorithm is expressed in terms of the input size (usually n). For example, sorting a list of 100 integers (n = 100) will take much less time than sorting a list of 1,000,000 integers (n = 1,000,000). We want an expression in terms of n that describes the amount of time required to run the algorithm on an input of that size. That way, given a value for n, we can "predict" how long it will take to produce an answer.

The complexity of an algorithm is usually expressed by one of three measures: the best case, the worst case, or the average case. The best case is the least amount of time required for any input of size n. (The best case is NOT when n = 1.) Typically it occurs when the input is a "special case"; for example, an already sorted list is often the "best" input for a sorting algorithm. Similarly, the worst case complexity is an expression that describes the amount of time needed for the "worst" possible input of size n. For instance, for some sorting algorithms the "worst" case is a list that is sorted in reverse order. You can think of the worst case complexity as a "guarantee": no matter what the data set looks like, the algorithm will never perform worse than the stated worst case complexity.

The average case complexity is just what the name implies: if the algorithm were run on many different data sets, averaging the times taken for all the data sets would give you a good idea of how long the algorithm would usually take for a data set of size n.
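To make the three measures concrete, here is a minimal Python sketch of my own (the function name and return convention are illustrative, not from the notes) that counts the list-item comparisons made by a linear search. The best case is the target in the first position (1 comparison), the worst case is a target that is absent (n comparisons), and if the target is equally likely to be in any of the n positions the average is (n + 1)/2 comparisons:

  # Linear search that reports how many list-item comparisons it made.
  def linear_search(x, a):
      comparisons = 0
      for item in a:
          comparisons += 1              # one comparison of x with a list element
          if item == x:
              return True, comparisons  # best case: x is first, 1 comparison
      return False, comparisons         # worst case: x is absent, n comparisons

  data = [7, 3, 9, 4, 5]
  print(linear_search(7, data))   # (True, 1)   best case
  print(linear_search(8, data))   # (False, 5)  worst case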
Now, let's look at some algorithms and discuss how we might determine their best, worst, or average case complexities.

Example 1: Finding the largest element of a finite sequence

  max(a1, a2, …, an: distinct integers)
    max ← a1                     // initialize max to the first element of the list
    for i ← 2 to n do
      if max < ai then max ← ai
    return(max)

If you also want to find the location of the largest value, the algorithm below should be used.

  max(a1, a2, …, an: integers)
    max ← a1                     // initialize max to the first element of the list
    location ← 1                 // remember the location of max
    for i ← 2 to n do
      if max < ai then           // list item comparison
        max ← ai
        location ← i
    return(location)

Example 4: Binary search

  binarysearch(x: integer; a1, a2, …, an: increasing integers)
    i ← 1                        // left endpoint of search interval; 1 assignment
    j ← n                        // right endpoint of search interval; 1 assignment
    while i < j                  // 1 comparison per iteration
      m ← ⌊(i + j)/2⌋
      if x > am then i ← m + 1   // 1 comparison and 1 assignment
      else j ← m
    end
    if x = ai then location ← i  // 1 comparison and 1 assignment
    else location ← 0

You should go through the algorithm to convince yourself that it will terminate. Also, notice that we do not explicitly check whether am = x. Instead, when the size of the list has been reduced to one item, we check whether the item being sought is in that position.

Now, let's look at the complexity of binary search. We are going to count comparisons between x and elements of the list. Let n = 2^k. (Assuming n is an exact power of 2 is not a problem: we are doing a worst case analysis, and an estimate that is a little too high does no harm.)

[Diagram: the search interval between i and j is halved at m on each iteration, from 2^k elements down to 2^1 and finally 2^0 = 1 element.]

There are k iterations of the while loop, since we can only divide the list in half k times, and each iteration makes 2 comparisons (the loop test and the test against am); thus 2k comparisons are needed. One more comparison is needed to drop out of the loop, and another is made in the if test at the bottom. This gives a total of 2k + 2 comparisons. Notice there is no separate best or worst case, since every list of length n takes the same number of comparisons.

If n = 2^k then k = lg n. Thus the complexity is 2 lg n + 2 = O(lg n).
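A hedged Python translation of the binary search above may help (the identifier names are mine, and the list is assumed to be nonempty and sorted). Like the pseudocode, it uses 1-based positions, narrows the interval [i, j] to a single position before making the one equality test, and returns location 0 when x is absent:

  # Binary search following the notes' pseudocode: shrink [i, j] to one
  # position, then make a single equality test at the end.
  def binary_search(x, a):
      i, j = 1, len(a)               # 1-based endpoints of the search interval
      while i < j:
          m = (i + j) // 2           # floor of the midpoint
          if x > a[m - 1]:           # compare x with a_m (list is 0-based in Python)
              i = m + 1
          else:
              j = m
      return i if a[i - 1] == x else 0   # location of x, or 0 if x is not present

  print(binary_search(19, [1, 3, 5, 8, 10, 12, 15, 19]))  # 8
  print(binary_search(4,  [1, 3, 5, 8, 10, 12, 15, 19]))  # 0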
We now look at two different methods that can be used to find the largest and smallest elements of a list. The second method is another divide and conquer algorithm. In this case we'll count the total number of steps.

Example 5: Largest and smallest list elements (version 1)

  minmax(a1, a2, …, an)
    // Find the largest and smallest elements of a list.
    min ← a1                       // 1 assignment
    max ← a1                       // 1 assignment
    for i ← 2 to n do              // n - 1 iterations
      if min > ai then min ← ai    // the two if tests: 2(n - 1) = 2n - 2 steps
      if max < ai then max ← ai
    return(max)                    // 1 step
    return(min)                    // 1 step

  Total: 2n + 2 steps

If we count only the comparisons that involve elements of the list, there are 2n - 2 comparisons made.

We now look at a (recursive) divide and conquer algorithm for finding the largest and smallest elements in a list. The basic approach is this:

1. Divide the list in half.
2. Find the largest and smallest value in each half.
3. Compare the two large values. The larger of these is the largest element in the list.
4. Compare the two small values. The smaller of these is the smallest element in the list.

Obviously, dividing the list in half just one time is not going to help us very much, but we use this process recursively until we have a list of length two. When there are just two elements, one comparison is needed to determine which is the larger and which is the smaller value. This is our basis case. The "conquer" part of a divide and conquer algorithm puts the solutions of the two subproblems together; for this problem, steps 3 and 4 above combine the solutions. That is, it takes exactly two comparisons to combine the solutions of the subproblems. The operation we will focus on is the number of comparisons of list items done in finding the max and min.

Example 6: Largest and smallest list elements (version 2)

  recur_minmax(big, small, a1, a2, …, an)
    // Find the largest and smallest elements of a list.
    if n = 2 then
      if a1 > a2 then                            // 1 comparison
        big ← a1
        small ← a2
      else
        big ← a2
        small ← a1
    else
      recur_minmax(big1, small1, a1, …, an/2)
      recur_minmax(big2, small2, an/2 + 1, …, an)
      big ← max(big1, big2)                      // 1 comparison
      small ← min(small1, small2)                // 1 comparison

The recurrence relation is f(n) = 2f(n/2) + 2, where the +2 represents the comparisons in the last two steps of the algorithm. The basis case (n = 2) is f(2) = 1.

[Diagram: a list of n elements splits into two sublists of n/2 elements, then four of n/4, and so on.]

Let's solve this by back substitution. Let n = 2^k; then the basis case is reached when n/2^(k-1) = 2^k/2^(k-1) = 2, i.e., we divide the list k - 1 times until it has length 2. First note that f(n/2) = 2f(n/2^2) + 2, so substituting in we get

  f(n) = 2f(n/2) + 2
       = 2[2f(n/2^2) + 2] + 2   = 2^2 f(n/2^2) + 2^2 + 2
       = 2^2[2f(n/2^3) + 2] + 2^2 + 2   = 2^3 f(n/2^3) + 2^3 + 2^2 + 2
       …
       = 2^(k-1) f(2) + 2^(k-1) + 2^(k-2) + … + 2^2 + 2

Now, using the fact that f(2) = 1, we get

  f(n) = 2^(k-1) + (2 + 2^2 + … + 2^(k-1))
       = 2^(k-1) + (2^k - 2)     (the geometric sum starts at i = 1, not i = 0, so it is 2^k - 2 rather than 2^k - 1)
       = 2^(k-1) + 2·2^(k-1) - 2
       = 3·2^(k-1) - 2
       = 3(n/2) - 2

This method is faster than the first version above (3n/2 - 2 versus 2n - 2 comparisons), and in fact it can be shown that this algorithm is optimal: no other algorithm for the problem uses fewer comparisons.
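Here is a hedged Python rendering of the divide and conquer algorithm (the function name and the choice to return a (small, big) pair rather than use output parameters are mine; a length-1 case is added so that lists whose length is not a power of 2 also work):

  # Recursive min/max: split the list, solve each half, then combine
  # with two comparisons, as in the notes' recurrence f(n) = 2f(n/2) + 2.
  def recur_minmax(a):
      n = len(a)
      if n == 1:
          return a[0], a[0]
      if n == 2:                           # basis case: 1 comparison
          return (a[0], a[1]) if a[0] < a[1] else (a[1], a[0])
      mid = n // 2
      small1, big1 = recur_minmax(a[:mid])
      small2, big2 = recur_minmax(a[mid:])
      # "conquer": 2 comparisons combine the subproblem solutions
      return min(small1, small2), max(big1, big2)

  print(recur_minmax([6, 2, 9, 4, 1, 8]))  # (1, 9)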
Sorting

Another operation that we often look at is sorting a list into numerical or alphabetical order. There are many different ways this can be done, some of which are much more efficient than others. In sorting algorithms, both comparisons and swaps of list items are counted in the analysis. The algorithms we'll look at sort the list into nondecreasing order (i.e., increasing order, except that there may be duplicates). Let's begin with selection sort.

Example 7: Selection sort

  select_sort(A: data_array; n: integer)
  // Sort the list into increasing order using the following method:
  // select the smallest element among ai, …, an and swap it with ai,
  // continuing until the list is sorted.
  (1) for i ← 1 to n - 1 do          // loop executes n - 1 times
  (2)   low ← i
  (3)   for j ← i + 1 to n do        // comparisons: n - 1, then n - 2, …, then 1
  (4)     if aj < alow then
  (5)       low ← j
  (6)   swap(ai, alow)               // n - 1 swaps, one for each value of i

Let's begin with the complexity of the swap: a swap (line 6) requires three assignments. Line 4 contains the comparisons of list items; note that the number of comparisons remains the same no matter what order the list is in originally. The number of comparisons is given by the sum

  1 + 2 + … + (n - 1) = (n - 1)n/2 = O(n^2)
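As a hedged Python sketch of the selection sort above (the counter names are my own), here it is instrumented to count list-item comparisons and swaps; on any permutation of n items it reports (n - 1)n/2 comparisons and n - 1 swaps, matching the analysis:

  # Selection sort that counts list-item comparisons and swaps.
  def select_sort(a):
      n = len(a)
      comparisons = swaps = 0
      for i in range(n - 1):            # outer loop runs n - 1 times
          low = i
          for j in range(i + 1, n):     # n-1, then n-2, …, then 1 comparisons
              comparisons += 1
              if a[j] < a[low]:
                  low = j
          a[i], a[low] = a[low], a[i]   # one swap per value of i, as in line (6)
          swaps += 1
      return comparisons, swaps

  data = [5, 2, 8, 1, 9, 3]
  print(select_sort(data), data)   # (15, 5) [1, 2, 3, 5, 8, 9]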


