Asymptotic Notation and Algorithm Running Time, Slides of Introduction to Computers

An explanation of algorithm running time and the concept of asymptotic notation, specifically big-Oh notation. It covers the intuition behind these concepts and the formal definitions of Θ(g(n)), O(g(n)), Ω(g(n)), o(g(n)), and ω(g(n)), and includes exercises for the reader to prove certain properties of these notations.

COMS21103: Summary of Lecture 3

Some of the points that we discussed in lecture three:

Algorithm running time

Given an algorithm A and some fixed computer on which this algorithm is executed, we wrote T_A(n) for the worst-case running time of algorithm A on inputs of size n. We then saw that we can think of the function c · T_A(n), for some c ∈ (0, 1), as the running time of the algorithm on a "sped-up" machine. For example, by doubling the speed of the machine, the running time of the algorithm halves, so it becomes (1/2) · T_A(n). Similarly, c · T_A(n) for some c ∈ (1, ∞) corresponds to running the algorithm on a slower machine. For example, by halving the speed of the machine, the running time becomes 2 · T_A(n). Throughout the lecture we used this intuition to justify the definitions that were given.

Asymptotic (big-Oh) notation

First, we wanted to characterize formally what it means for two algorithms to have essentially the same (worst-case) running time. Using the above intuition, we came up with an informal definition: two algorithms A and B have essentially the same efficiency if one can be made to run both faster and slower than the other simply by changing the speed of the machine on which it is run. In other words, A and B run essentially the same if we can make the running time of B a lower bound for that of A by speeding up the machine on which B is run, and we can make the running time of B an upper bound for that of A by slowing down the machine on which B is run.

To capture this idea we defined the set of functions which, asymptotically, grow at the same rate as the function g : N → R+ to be

    Θ(g(n)) = {f | ∃c₀, c₁ ∈ R+, ∃n₀ ∈ N, ∀n ≥ n₀ : c₀ · g(n) ≤ f(n) ≤ c₁ · g(n)}

Roughly, f(n) ∈ Θ(g(n)) if and only if, by multiplying with some constant c₀, the graph of g can be made to lie below that of f, and, by multiplying with some constant c₁, the graph of g can be made to lie above that of f. Notice that we ignore what happens on input sizes less than n₀.

So, if the function g(n) describes the running time of some algorithm A, any other algorithm B whose running time is described by some function f ∈ Θ(g(n)) should be considered as efficient as A.

As a sanity check for the above definition, prove on your own that if f behaves asymptotically like g, then g behaves asymptotically like f. (In the case of algorithms this says that if algorithm B has asymptotically the same running time as A, then A has, asymptotically, the same running time as B.) In other words: if f ∈ Θ(g(n)) then g ∈ Θ(f(n)).
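To make the sanity check concrete, here is one possible sketch of the argument (it is not from the original slides; it simply unwinds the definition of Θ and rescales the constants):

    Suppose f ∈ Θ(g(n)), so there are c₀, c₁ ∈ R+ and n₀ ∈ N with
        c₀ · g(n) ≤ f(n) ≤ c₁ · g(n)   for all n ≥ n₀.
    Dividing the left inequality by c₀ and the right inequality by c₁ gives
        (1/c₁) · f(n) ≤ g(n) ≤ (1/c₀) · f(n)   for all n ≥ n₀.
    Taking the constants 1/c₁ and 1/c₀ (and the same n₀) in the definition of Θ(f(n)) shows that g ∈ Θ(f(n)).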
Again, if f ∈ Θ(g) then g acts as both an upper bound and a lower bound to f (when scaled by appropriate constants). In algorithm analysis we may only care about an upper bound for the running time of an algorithm. In class we expressed mathematically what it means for a function f to be upper bounded by a function g. In English, this says that we can find a constant c such that the graph of c · g(n) is above that of f(n) for all big enough n (again, we ignore what happens for small inputs).

Formally, we defined the set of functions f for which g is an upper bound by:

    O(g(n)) = {f | ∃c ∈ R+, ∃n₀ ∈ N, ∀n ≥ n₀ : f(n) ≤ c · g(n)}

By analogy, we defined the set of functions for which the function g is a lower bound:

    Ω(g(n)) = {f | ∃c ∈ R+, ∃n₀ ∈ N, ∀n ≥ n₀ : f(n) ≥ c · g(n)}

We have shown that Θ(g(n)) = O(g(n)) ∩ Ω(g(n)), and that if f(n) ∈ O(g(n)) and g(n) ∈ O(h(n)) then f(n) ∈ O(h(n)).

Exercise: Show that if f(n) ∈ O(g(n)) then g(n) ∈ Ω(f(n)).

Exercise: Show that if f(n) ∈ Θ(h(n)) and g(n) ∈ Θ(h(n)) then f(n) + g(n) ∈ Θ(h(n)).

I ended up with the following list of functions that we typically encounter in algorithm analysis, listed in increasing order of growth:

1. log n
2. n^ε, ε ∈ (0, 1)
3. n log n
4. n^2
5. n^3
6. n^k (k ≥ 3)
7. 2^n
8. 3^n

I have not yet defined strict upper and lower bounds; I will do that in the next lecture. Since their place is alongside the classes of functions defined above, I keep their definitions in these lecture notes. The set of functions for which g is a strict upper bound is:

    o(g(n)) = {f | ∀c ∈ R+, ∃n₀ ∈ N, ∀n ≥ n₀ : f(n) < c · g(n)}

Using the intuition with which we started: g is a strict upper bound for f if, no matter by what constant we multiply g, at some point the graph of the scaled g becomes and stays above that of f. Strict lower bounds are defined analogously:

    ω(g(n)) = {f | ∀c ∈ R+, ∃n₀ ∈ N, ∀n ≥ n₀ : f(n) > c · g(n)}
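As an illustration of these two definitions (this example is not in the original notes; the functions n and n^2 are just a convenient choice), one can check directly from the definitions that n ∈ o(n^2) and n^2 ∈ ω(n):

    Claim: n ∈ o(n^2). Fix an arbitrary c ∈ R+ and take n₀ = ⌈1/c⌉ + 1. For every n ≥ n₀ we have n > 1/c, and multiplying both sides by c · n > 0 gives n < c · n^2. Since c was arbitrary, n ∈ o(n^2).

    Claim: n^2 ∈ ω(n). Fix an arbitrary c ∈ R+ and take n₀ = ⌈c⌉ + 1. For every n ≥ n₀ we have n > c, and multiplying both sides by n gives n^2 > c · n. Since c was arbitrary, n^2 ∈ ω(n).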