Notes for Asymptotic Order Notation | CSC 517 (Study notes, Computer Science)

Material type: Notes. Class: Data Str Algrm Anl. Subject: Computer Science (CSC). University: University of Miami. Term: Fall 2007.
(Partial preview of the text.)
Asymptotic Order Notation

Burton Rosenberg

September 10, 2007

Introduction

We carefully develop the notations which measure the asymptotic growth of functions. The intuition is to group functions into function classes based on their "growth shape". The most used such grouping is O(f) ("big-oh of f"), where a function f serves as a model for an entire collection of functions: the set O(f), consisting of all functions growing no faster than f.

Definition 1 (Big-Oh) Let f : X → Y be a function.

    O(f) = { g : X → Y | ∃ x₀, c > 0 s.t. ∀ x ≥ x₀, c·f(x) ≥ g(x) ≥ 0 }

By abuse of notation, membership is written g = O(f) rather than the more correct g ∈ O(f). The phrase "O(f) = g" has no meaning. In general, the sets X and Y are the real numbers, the integers, or the naturals, according to the situation.

Computer science is interested in functions which give only positive values or, less drastically, functions which eventually assume only positive values. There is some long-term simplicity in adopting this restriction, although in the short term it requires additional definitions and care.

Definition 2 (Eventually Non-negative, Positive) A function f : X → Y is eventually non-negative if there exists an x₀ such that for all x ≥ x₀, f(x) ≥ 0. The function is eventually positive if there exists an x₀ such that for all x ≥ x₀, f(x) > 0.

Theorem 1 If f is not eventually non-negative, O(f) is empty. If f is eventually non-negative, then 0 = O(f), and O(f) is therefore non-empty. Every element of O(f) is eventually non-negative.

The function 0 : X → Y is the zero function, defined by 0(x) = 0 for all x. In general, any constant C can be viewed as a function of constant value C.

There is a related but stronger notion for the classification of functions, "little-oh of f". Whereas a function in O(f) is bounded above by the function f, a function in o(f) is increasingly insignificant relative to f.

Definition 3 (Little-Oh) Let f : X → Y be a function.

    o(f) = { g : X → Y | ∀ c > 0, ∃ x₀ > 0 s.t. ∀ x ≥ x₀, c·f(x) > g(x) ≥ 0 }

Theorem 2 If f is not eventually positive, o(f) is empty. If f is eventually positive, then 0 = o(f), and o(f) is therefore non-empty. Every element of o(f) is eventually non-negative.

Note that while O(0) is not empty (it contains all functions that are "eventually zero"), o(0) is empty. So the converse of the following theorem is not true.

Theorem 3 (Little-Oh is Stronger than Big-Oh) If g = o(f) then g = O(f).

Proving a function is in O(f) requires demonstrating an x₀ and a c satisfying the definition. For some demonstrations, the function is so strongly a member of O(f) that the subtleties of Big-Oh are a waste of time. The class of functions o(f) has stronger demands, hence simpler demonstrations. A function in o(f) is also in O(f), so one use of Little-Oh is to provide demonstrations for Big-Oh.

Theorem 4 (Analytic Definition of Little-Oh) Let f : X → Y be eventually positive and let g : X → Y be eventually non-negative. Then g = o(f) if and only if

    lim_{x→∞} g(x)/f(x) = 0.

Example 1 log x = O(x). Use L'Hopital's rule to show log x / x → 0. This gives log x = o(x), and therefore log x = O(x). In the same way, it can be shown that n log n = O(n^(1+ε)) for any real ε > 0.

Example 2 For j > k > 0, x^k = o(x^j), hence x^k = O(x^j).
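Proving Big-Oh membership directly from Definition 1 means exhibiting explicit witnesses. As a worked illustration (our addition, not part of the original notes; the witnesses c = 4 and x₀ = 10 are one choice among many), here is a sketch showing 3x² + 10x = O(x²):

```latex
% Claim: 3x^2 + 10x = O(x^2), with witnesses c = 4 and x_0 = 10
% (our choice; any c > 3 works with a suitable x_0).
% For all x >= 10,
%   c f(x) - g(x) = 4x^2 - (3x^2 + 10x) = x^2 - 10x = x(x - 10) >= 0,
% and g(x) = 3x^2 + 10x >= 0 there.  Hence
\[
  \forall x \ge 10:\quad 4x^2 \;\ge\; 3x^2 + 10x \;\ge\; 0,
  \qquad\text{so}\qquad 3x^2 + 10x = O(x^2).
\]
```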
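The limit in Example 1 can also be made fully explicit. A minimal worked computation (our addition), applying L'Hopital's rule to the quotient log x / x, whose numerator and denominator both tend to infinity:

```latex
% log is the natural logarithm here, so (log x)' = 1/x.
\[
  \lim_{x \to \infty} \frac{\log x}{x}
  \;=\; \lim_{x \to \infty} \frac{(\log x)'}{(x)'}
  \;=\; \lim_{x \to \infty} \frac{1/x}{1}
  \;=\; 0.
\]
% By Theorem 4, log x = o(x); by Theorem 3, log x = O(x).
```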
Order Properties of the Notation

The order notation arranges functions from smaller to larger, reminiscent of the ordering of numbers from smaller to larger. In this analogy, Big-Oh is non-strict inequality: g = O(f) can be interpreted as g ≤ f. Likewise, Little-Oh is strict inequality: g = o(f) is similar to g < f.

Given three functions f, g and h, we expect the ordering to be transitive: if h ≤ g and g ≤ f then h ≤ f. This is indeed true for order notation.

Theorem 5 (Transitivity) If h = O(g) and g = O(f) then h = O(f). If h = o(g) and g = o(f) then h = o(f).

Big-Oh and Little-Oh express non-strict and strict "less-than". Other notations express non-strict and strict "greater-than".

Efficiency of Algorithms

The efficiency of an algorithm is given by the order of growth of its run time with respect to input size. For instance, numerous simple sorting routines essentially compare all elements pairwise in the course of sorting. For n elements, this makes O(n²) comparisons, and this places a bound on the run time.

Suppose two algorithms are composed: the output of algorithm A with run time f_A is the input to algorithm B with run time f_B. What is the efficiency of the combined algorithm? Assuming the first algorithm leaves the input size unchanged, it is f_A(x) + f_B(x). The following theorem allows for simplification.

Theorem 14 Let g₁, g₂ = O(f) and let a₁, a₂ be non-negative constants. Then a₁g₁ + a₂g₂ = O(f).

Suppose we would like to calculate the mode of x numbers, that is, the value which occurs most often. This can be accomplished by sorting the numbers and then sweeping through the sorted numbers, keeping track of the longest run of consecutive, equal values. If the sorting is done in time f_A(x) = O(x²) and the sweep is done in time f_B(x) = O(x), the entire algorithm runs in time f_A(x) + f_B(x) = O(x²). In this algorithm, sorting is the bottleneck. Improve the sort to O(x log x) (e.g. using merge sort) and immediately the entire algorithm improves to O(x log x). The moral of the story: when an algorithm can be broken down into a sequence of sub-algorithms, the sub-algorithm with the largest run time rules the run time of the combined algorithm.
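To make the sort-and-sweep composition concrete, here is a short Python sketch (our illustration, not from the original notes; the function name mode is ours):

```python
def mode(numbers):
    """Return the value occurring most often, by sort then sweep.

    The sort costs O(x log x) and the sweep costs O(x); by the
    composition argument above, the whole is O(x log x).
    """
    ordered = sorted(numbers)            # O(x log x) comparison sort
    best_value, best_run = ordered[0], 0
    current_value, current_run = ordered[0], 0
    for v in ordered:                    # one O(x) sweep
        if v == current_value:
            current_run += 1
        else:                            # a new run of equal values begins
            current_value, current_run = v, 1
        if current_run > best_run:       # remember the longest run so far
            best_value, best_run = current_value, current_run
    return best_value

print(mode([3, 1, 4, 1, 5, 9, 2, 6, 5, 3, 5]))  # prints 5
```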
It should be intuitive that

    a_d·x^d + a_{d−1}·x^(d−1) + … + a₀ = O(x^d)

for a_d > 0. It is actually a bit of a mess to prove, because for i < d some or all of the a_i might be negative.

Theorem 15 If f′ = O(f) and g′ = O(g) then f′g′ = O(fg).

Given this theorem, a fairly neat way toward a proof is the factorization

    f(x) = a_d·x^d · ( 1 + Σ_{i=0}^{d−1} (a_i/a_d)·x^(i−d) ),

followed by showing that 1 + Σ_{i=0}^{d−1} (a_i/a_d)·x^(i−d) = O(1). A little more work gives the matching Ω bound.

Corollary 16 Let f(x) = Σ_{i=0}^{d} a_i·x^i with a_d > 0. Then f = Θ(x^d).

Example 4 A quick summary of our results so far:

    0 < 1 < log x < x < x log x < x^(1+ε) < x² < x³ < …

An algorithm running with any of these run times is considered feasible. It runs in polynomial time, that is, O(x^k) for some k. Algorithms taking strictly more time than polynomial are considered infeasible.

Theorem 17 x^k = o(a^x) for any k and any a > 1.

Example 5 Our hierarchy continues:

    x² < x³ < … < 2^x < e^x < … < x! < x^x = 2^((log x)·x) < 2^(x^(1+ε)) < 2^(x²) < …

An algorithm taking more than polynomial time, such as an exponential-time algorithm running in O(2^x), is considered infeasible. The reason is that small changes in input size increase computation time greatly. In practice, any useful algorithm will be applied to increasingly complex problems, and due to the growth rate, computation power will not keep up with the demands of problem size. For all practical purposes, then, the problem is not solvable by this algorithm.

Exponential-time algorithms sometimes invoke the following very simple strategy: "try all possible combinations of outputs and pick the one that's correct." Here is an exponential-time algorithm for sorting a pack of cards. There are 52 cards; they can be rearranged into 52! orders. Try all orderings one by one until the correct one appears. To give the reader an idea of the inefficiency of this method: it wouldn't be much less efficient to sort cards by repeatedly shuffling them, each time looking to see if the cards just happened to fall in order. This should give an intuition for the practical infeasibility of exponential-time algorithms.
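The try-all-orderings strategy translates directly into code. Below is a hypothetical Python sketch (ours, not from the original notes): it checks the n! permutations one by one, and since each check costs O(n), the worst case is O(n·n!), which sits even above 2^n in the hierarchy of Example 5.

```python
from itertools import permutations

def permutation_sort(items):
    """Sort by brute force: try orderings until one is sorted.

    There are n! orderings and each in-order check costs O(n),
    so the worst-case run time is O(n * n!).
    """
    for ordering in permutations(items):
        # In-order check: every adjacent pair must be non-decreasing.
        if all(a <= b for a, b in zip(ordering, ordering[1:])):
            return list(ordering)

print(permutation_sort([9, 2, 6, 5, 3]))  # prints [2, 3, 5, 6, 9]
```

For a 52-card deck this loop would, in the worst case, examine 52! ≈ 8 × 10⁶⁷ orderings, which is exactly the practical infeasibility described above.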
Exercises

1. Give proofs for all theorems and corollaries given in these notes.

2. Give proofs for the examples given in these notes.

3. Prove or disprove: if g₁, g₂ = Ω(f) and a₁, a₂ are positive constants, then a₁g₁ + a₂g₂ = Ω(f).

4. Prove or disprove: if f′ = Ω(f) and g′ = Ω(g) then f′g′ = Ω(fg).

5. Prove or disprove: if f′ = O(f) and g′ = O(g) then f′ and g′ are both O(f + g).

6. Give the analogous analytic definition for Ω which agrees with the transpose symmetry of o and Ω. Warning: the limit requires that the denominator be eventually positive.