CPSC 5115 Algorithm Analysis and Design Course Notes

Chapter 3: Brute Force

The brute-force design strategy is often the simplest, and occasionally the best, approach to solving a problem. The strategy is not discussed at length in many algorithm design courses, possibly due to the mistaken idea that it is not sufficiently "advanced", whatever that might mean. We study the method for a number of reasons, including: 1) it occasionally produces a good solution, and 2) it serves as a "yardstick" against which to compare all more sophisticated methods.

As the book notes, it is very difficult to find a solvable problem that cannot be solved by some sort of brute-force algorithm. Indeed, "unsolvable by brute force" is almost a definition of the term "unsolvable by any algorithm".

As an example of a problem that is well solved by brute force, consider the "dot product" of two N-dimensional vectors, each represented as an array with N elements.

   Algorithm DotProduct( A[0..N – 1], B[0..N – 1] )
      Dot = 0
      For J = 0 To (N – 1) Do
         Dot = Dot + A[J]*B[J]
      Return Dot
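For concreteness, here is a minimal Python rendering of the pseudocode above. The function name dot_product and the sample vectors are illustrative choices, not part of the original notes.

   def dot_product(a, b):
       """Brute-force dot product of two equal-length vectors."""
       dot = 0
       for j in range(len(a)):       # one multiply-add per component
           dot += a[j] * b[j]
       return dot

   # Example: (1, 2, 3) . (4, 5, 6) = 4 + 10 + 18 = 32
   print(dot_product([1, 2, 3], [4, 5, 6]))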
Selection Sort and Bubble Sort

We now study a few solutions to the problem of sorting a list by comparisons only. As always, the search and sort problems serve as sources of many interesting examples.

The selection sort is the first algorithm that we shall study in this section. The strategy of selection sort is to scan the unsorted sublist of the list of numbers, find its smallest element, and place that element at the beginning of the unsorted sublist. Here is a slightly modified version of the textbook's algorithm. It is more efficient in that it reduces the number of array references.

   Algorithm SelectionSort( A[0..N – 1] )
      For I = 0 To (N – 2) Do
         Min = I
         A_Min = A[I]
         For J = (I + 1) To (N – 1) Do
            If A[J] < A_Min Then
               A_Min = A[J]
               Min = J
            End If
         End Loop over J
         Swap A[I] and A[Min]        // For algorithms, swap is a basic operation.
      End Loop over I

It is easily seen that the algorithm produces a sorted list, taking C(N, 2) = N(N – 1)/2 comparisons to complete the process. Thus, we say that the time complexity of this brute-force algorithm is Θ(N²). Later, we shall study some "more advanced" sort algorithms that are known to have time complexity Θ(N log N). While the latter algorithms are usually more efficient than selection sort, it is common knowledge that for small values of N, selection sort is faster. The commonly accepted estimate is that selection sort is better for sorting 10 or fewer items, and it possibly retains its advantage up to about 25 items.

We now discuss bubble sort briefly. As noted in the textbook, the algorithm is generally considered inferior and would not be discussed were it not for its catchy name. Indeed, one textbook illustrates the algorithm by sorting the names of volcanoes, so that the volcanoes are seen to "bubble up". The key strategy of this algorithm is the repeated application of the "bubble" operation:

   If A[J + 1] < A[J] Then Swap A[J + 1] and A[J]

The book presents the standard version of the bubble-sort algorithm and mentions an obvious way to improve its performance. We shall present this slightly improved version, which can have some advantage should the array be almost sorted.

   Algorithm Bubble( A[0..N – 1] )    // Change the outer For loop to a Repeat Until loop.
      I = 0
      Repeat
         Sorted = True
         For J = 0 To (N – 2 – I) Do
            If A[J + 1] < A[J] Then
               Swap A[J + 1] and A[J]
               Sorted = False
            End If
         End For
         I = I + 1
      Until ( I > (N – 2) ) Or Sorted

Here we make the observation that justifies the early exit. If the list has been divided into two parts A[0] … A[K – 1] | A[K] … A[N – 1], in which the sublist A[K] … A[N – 1] has already been sorted, and a pass over A[0] … A[K] does not find any elements out of order, then we conclude that the entire list is sorted.
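A minimal Python sketch of the selection sort above, keeping the A_Min caching that motivated the modified pseudocode. The names selection_sort, min_idx, and a_min are illustrative.

   def selection_sort(a):
       """In-place selection sort, mirroring the pseudocode above.

       Caching a_min avoids re-reading A[Min] on every comparison."""
       n = len(a)
       for i in range(n - 1):
           min_idx = i
           a_min = a[i]                  # cached value of the current minimum
           for j in range(i + 1, n):
               if a[j] < a_min:
                   a_min = a[j]
                   min_idx = j
           a[i], a[min_idx] = a[min_idx], a[i]   # swap as a basic operation
       return a

   print(selection_sort([89, 45, 68, 90, 29, 34, 17]))   # [17, 29, 34, 45, 68, 89, 90]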
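And a matching Python sketch of the improved bubble sort, with the Sorted flag rendered as a boolean named swapped; again, the names are illustrative choices.

   def bubble_sort(a):
       """Improved bubble sort: stop as soon as a full pass makes no swaps."""
       n = len(a)
       i = 0
       while True:
           swapped = False
           for j in range(n - 1 - i):    # the last i elements are already in place
               if a[j + 1] < a[j]:
                   a[j], a[j + 1] = a[j + 1], a[j]
                   swapped = True
           i += 1
           if i > n - 2 or not swapped:  # mirrors: Until ( I > (N - 2) ) Or Sorted
               return a

   print(bubble_sort([5, 1, 4, 2, 8]))   # [1, 2, 4, 5, 8]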
The Closest-Pair Problem

Given N points in the plane, with coordinates X[1..N] and Y[1..N], the brute-force approach examines every pair of points and tracks the smallest squared distance seen so far.

   DSmin ← ∞                       // This gives me some grief.
   For I = 1 To (N – 1) Do
      For J = (I + 1) To N Do
         DS = (X[I] – X[J])² + (Y[I] – Y[J])²
         If ( DS < DSmin ) Then
            DSmin = DS
            IX1 = I
            IX2 = J
         End If
      Next J                       // Visual Basic syntax
   Next I
   Return (IX1, IX2)               // The indices of the closest pair

We now have a perfectly good algorithm for the problem. I want to divert from our purely theoretical approach and discuss some very real problems with the above when it is considered as a fragment of a real program. My main complaint concerns the first statement, which assigns plus infinity to the proposed minimum of the squared distance. I distrust any algorithm that involves assigning a value of either positive or negative infinity. The following code moves toward my preferred algorithm.

   If (N < 2) Then Return (1, 1)
   //
   // I always like to start with a real value when computing
   // either the minimum or maximum of anything.
   //
   DSmin ← (X[2] – X[1])² + (Y[2] – Y[1])²
   IX1 = 1                         // Initialize the answer to the pair that
   IX2 = 2                         // seeds DSmin, in case no pair is closer.
   For I = 1 To (N – 1) Do
      For J = (I + 1) To N Do
         DS = (X[I] – X[J])² + (Y[I] – Y[J])²
         If ( DS < DSmin ) Then
            DSmin = DS
            IX1 = I
            IX2 = J
         End If
      Next J                       // Visual Basic syntax
   Next I
   Return (IX1, IX2)               // The indices of the closest pair

The observant student will note that the above calculates (X[2] – X[1])² + (Y[2] – Y[1])² twice. For me personally, this is a small cost to avoid silly code. The student is invited to contemplate the issue, particularly considering what numeric value to assign to infinity. At a company where I previously worked, we determined that ∞ = 1.0 × 10^38. All I can say is that we had no errors in either our code or the results of our computations due to this assignment. Another consideration is whether to exit the loops early if one computes DS = 0. Note that I excluded this possibility earlier, but one might want to consider it.
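A Python sketch of the preferred closest-pair algorithm, assuming the points arrive as a list of (x, y) tuples rather than parallel X and Y arrays. It seeds the minimum with the first pair's squared distance instead of infinity, as argued above; the function and variable names are illustrative.

   def closest_pair(points):
       """Brute-force closest pair on a list of (x, y) tuples.

       Returns the indices of the two closest points; Theta(N^2) comparisons."""
       n = len(points)
       if n < 2:
           return (0, 0)                 # degenerate case, as in the notes
       # Seed the minimum with a real value: the first pair's squared distance.
       ix1, ix2 = 0, 1
       ds_min = (points[0][0] - points[1][0]) ** 2 + (points[0][1] - points[1][1]) ** 2
       for i in range(n - 1):
           for j in range(i + 1, n):
               ds = (points[i][0] - points[j][0]) ** 2 + (points[i][1] - points[j][1]) ** 2
               if ds < ds_min:
                   ds_min = ds
                   ix1, ix2 = i, j
       return (ix1, ix2)

   print(closest_pair([(0, 0), (5, 6), (9, 1), (1, 1)]))   # (0, 3): (0,0) and (1,1)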
Manhattan Distance (Problem 2c on page 112 of the text)

The basic question here is whether or not the Manhattan distance is equivalent to the standard Euclidean distance. Equivalence means that two points are farther apart in Manhattan distance if and only if they are farther apart in Euclidean distance. A bit of experience shows that this claim is obviously false. It is the intent of these notes to show why it is obvious.

First, we redefine the problem slightly in order to simplify the notation. Consider two points P1 and P2 in the plane. The main difference is that we use X and Y to denote the positive differences in the X and Y coordinates of the two points, so that the Manhattan distance between them is just X + Y and the square of the Euclidean distance is X² + Y².

Using this notation, consider two sets of points. In set 1, the values are X = 9 and Y = 1. In set 2, the values are X = 5 and Y = 6.

For set 1, the Manhattan distance is 9 + 1 = 10 and the Euclidean distance is √(81 + 1) = √82 ≈ 9.055. For set 2, the Manhattan distance is 5 + 6 = 11 and the Euclidean distance is √(25 + 36) = √61 ≈ 7.810. Thus, the points in set 1 are closer under the Manhattan distance and farther apart under the Euclidean distance, so the two definitions are not equivalent.

The basis of this answer is a related result that is worth proving. We state it as a theorem.

Theorem: Consider a sequence of integers (X1, X2, X3, …, XN) with a fixed sum, call it S; thus S = X1 + X2 + … + XN. Define the sum of the squares as SSQ = X1² + X2² + … + XN². Then SSQ is minimized when each value XK is as close as possible to S / N.

Proof: Consider two elements XJ and XK with XJ ≥ XK, and examine the effect on SSQ of subtracting 1 from XJ and adding 1 to XK. We have

   (XJ – 1)² + (XK + 1)² = XJ² – 2·XJ + 1 + XK² + 2·XK + 1
                         = XJ² + XK² + 2·(XK – XJ) + 2,

so the change in SSQ is (XJ – 1)² + (XK + 1)² – XJ² – XK² = 2·(XK – XJ) + 2.

First suppose that XJ = XK. Then the value of SSQ is changed by 2·(XK – XJ) + 2 = 2. This shows that making equal values more different increases the value of SSQ. For XJ = XK + 1, the change is 2·(–1) + 2 = 0. This is no surprise, as all we have done here is swap the two values. Finally, suppose XJ = XK + Z with Z > 1. Then SSQ changes by 2·(–Z) + 2 < –2 + 2 = 0, and the value of SSQ is decreased by the operation.

Although not directly connected to the problem, it is this result that gave me the clue for generating an example to falsify the claim: for a fixed Manhattan distance X + Y, the squared Euclidean distance X² + Y² is larger when the two differences are more unequal, so an unbalanced pair such as (9, 1) can have a smaller Manhattan distance but a larger Euclidean distance than a balanced pair such as (5, 6).

Exhaustive Search

Exhaustive search is a method for finding the answer to a problem by generating every possible solution and comparing each candidate against the others. We sometimes refer to such problems as either combinatorial problems, because they involve the generation of combinatorial objects, or exponential-time problems. We first note that the two notions are roughly equivalent. Remember that the factorial function is to be considered in the class of exponential functions; specifically, the function N! is Ω(2^N). We show this by proving that N! > 2^N for N > 3. The proof is by simple induction, with a slightly unusual choice of base case.

Base case: N = 4. We observe that 4! = 24 > 16 = 2^4.

Induction: Assume N ≥ 4 and N! > 2^N. Then (N + 1)! = (N + 1)·N! > 2·N!, and by the inductive hypothesis 2·N! > 2·2^N = 2^(N+1).

We now mention two examples of problems best solved by exhaustive search, as well as a variant of one of the problems that is known to have a very easy exact solution.

Knapsack Problem

The first problem is well known and, as is the case with all interesting problems, can be reinterpreted to apply to a large number of very important applications. The knapsack problem is stated as follows: "Given N items of known weights (W1, W2, …, WN) and values (V1, V2, …, VN) and a knapsack of capacity W, find the most valuable subset of the items that fits into the knapsack."

There is an implied condition that we should consider. Suppose that the items are ordered by non-decreasing weight, so that W1 ≤ W2 ≤ … ≤ WN. Then the interesting case is W1 ≤ W < W1 + W2 + … + WN, for we have an easy solution if either all items fit or no item fits.

There are two important variants of the knapsack problem. The fractional (continuous) knapsack problem has an easy linear-time solution. The 0/1 (discrete) knapsack problem has no known efficient exact algorithm and, in the worst case, is solved by exhaustive search. We shall use the fractional knapsack to make statements about solving the 0/1 knapsack.
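A Python sketch of exhaustive search for the 0/1 knapsack, assuming N is small enough that all 2^N subsets can be enumerated. The bit-mask encoding of subsets and the item data in the example are illustrative choices, not the textbook's.

   def knapsack_exhaustive(weights, values, capacity):
       """Try all 2^N subsets; keep the most valuable one that fits."""
       n = len(weights)
       best_value, best_subset = 0, []
       for mask in range(2 ** n):        # each mask encodes one subset of items
           w = v = 0
           subset = []
           for k in range(n):
               if mask & (1 << k):
                   w += weights[k]
                   v += values[k]
                   subset.append(k)
           if w <= capacity and v > best_value:
               best_value, best_subset = v, subset
       return best_value, best_subset

   # Items: weights (7, 3, 4, 5), values (42, 12, 40, 25), capacity 10.
   print(knapsack_exhaustive([7, 3, 4, 5], [42, 12, 40, 25], 10))   # (65, [2, 3])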
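For contrast, a sketch of the greedy solution to the fractional variant, assuming positive weights. Note that this version sorts by value-to-weight ratio and so runs in O(N log N); the linear-time solution mentioned above replaces the full sort with a linear-time selection routine.

   def fractional_knapsack(weights, values, capacity):
       """Greedy: take items in order of value/weight, splitting the last one."""
       items = sorted(range(len(weights)),
                      key=lambda k: values[k] / weights[k], reverse=True)
       total = 0.0
       remaining = capacity
       for k in items:
           if remaining <= 0:
               break
           take = min(weights[k], remaining)     # whole item, or a fraction of it
           total += values[k] * (take / weights[k])
           remaining -= take
       return total

   print(fractional_knapsack([7, 3, 4, 5], [42, 12, 40, 25], 10))   # 76.0: all of item 2, 6/7 of item 0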