Asymptotic Notation: Understanding the Growth Rate of Functions


An introduction to asymptotic notation, a mathematical tool used to analyze the limiting behavior of functions as their input size grows large. Asymptotic notation allows us to express upper and lower bounds on a function's growth rate using Big O, Omega, and Theta notations. The document covers the basics of asymptotic notation, including examples and proofs.

What you will learn

  • What is asymptotic notation and why is it important?
  • Can you provide examples of functions and their asymptotic behavior?
  • How do we use Big O, Omega, and Theta notations to analyze the growth rate of functions?


Asymptotic Notation Basics (Updated April 16, 2013)

In computer science it is often difficult to measure certain quantities precisely, and even when they can be measured accurately, their exact values are not essential. Accurate but simple-looking approximations can be more useful than complex exact formulas. A typical example is the running time of an algorithm. The actual running time depends on the implementation, the compiler, the machine, and the conditions under which the program is executed. Thus it is simply not possible to tell what the running time of an algorithm is based only on its description. But in fact the exact formula is not even necessary. For example, if an algorithm makes 4n^3 + 3n + 7 steps on inputs of length n, then for large values of n the terms 3n and 7 are insignificant. It makes more sense to say that the running time is approximately 4n^3. In fact, it makes sense to ignore even the constant 4, since this constant is implementation- and platform-dependent. So we can say that the running time will be proportional to n^3; in other words, it will be roughly cn^3 for some constant c. This observation motivates the definition of asymptotic notation.

Definition. For two functions f(x) and g(x), we say that f(x) is of order g(x), and write f(x) = O(g(x)), if there are constants c and x0 such that |f(x)| ≤ c·g(x) for all x ≥ x0.

In all useful contexts the functions we deal with will be non-negative, in which case we can ignore the absolute value (we will in fact do this in most examples). Occasionally, though, we may encounter functions that take negative values for small values of x, so the definition needs to take this into account. Note that the above definition forces g(x) to be non-negative when x is large enough.

Example 1. We show that 7x + 20 = O(x). Indeed, for x ≥ 10 we have 7x + 20 ≤ 7x + 2x ≤ 9x, so the above definition applies with x0 = 10 and c = 9. These choices are not unique: we could just as well say that for x ≥ 20 we have 7x + 20 ≤ 7x + x ≤ 8x, and so on.

Example 2. Let's show now that 2n^3 + 6n^2 − 2 = O(n^3). We have 2n^3 + 6n^2 − 2 ≤ 2n^3 + 6n^3 = 8n^3 for n ≥ 0, so we can conclude that 2n^3 + 6n^2 − 2 = O(n^3).

Example 3. As a first more exciting example, we look at the harmonic numbers:

    H(n) = Σ_{i=1}^{n} 1/i = 1 + 1/2 + 1/3 + ... + 1/n.

This function grows with n, but how fast? We claim that its growth rate is the same as that of logarithmic functions, that is, H(n) = O(log n). Let's now prove it. The idea of the proof is to divide the sequence 1, 1/2, 1/3, ... into about log n blocks, so that the sum of each block is between 1/2 and 1. More specifically, we divide the sequence 1, 1/2, 1/3, ... into blocks

    1
    1/2 + 1/3
    1/4 + 1/5 + 1/6 + 1/7
    ...
    1/2^i + 1/(2^i + 1) + ... + 1/(2^(i+1) − 1)
    ...
    1/2^k + 1/(2^k + 1) + ... + 1/n

where k is chosen to be the integer such that 2^k ≤ n < 2^(k+1). (Thus k = ⌊log n⌋.) So, for any i except k, the i-th block starts at 1/2^i and ends right before 1/2^(i+1). The last block is exceptional, since it ends at 1/n rather than at 1/(2^(k+1) − 1), and so it may be incomplete. The sum of the j-th block (except the last) is

    1/2^j + 1/(2^j + 1) + 1/(2^j + 2) + ... + 1/(2^(j+1) − 1).

In this sum, each term is at most 1/2^j and greater than 1/2^(j+1). The block starts at the 2^j-th term and ends right before the 2^(j+1)-st term, so it has 2^(j+1) − 2^j = 2^j terms. Therefore the sum of this block is at most 2^j · (1/2^j) = 1 and at least 2^j · (1/2^(j+1)) = 1/2. (The last block's sum is also at most 1, but it could be less than 1/2.)

Putting it all together, we proceed as follows. We have k + 1 blocks, each adding up to at most 1, so H(n) ≤ (k + 1) · 1 ≤ log n + 1. On the other hand, all blocks except the last add up to at least 1/2, so H(n) ≥ k · (1/2) ≥ (1/2)(log n − 1). Summarizing, we have proved the following theorem.

Theorem 0.1. (1/2)(log n − 1) ≤ H(n) ≤ log n + 1 for all positive integers n.

How can we express this using the big-Oh notation? Bounding H(n) from above, we get that for n ≥ 2, H(n) ≤ log n + 1 ≤ log n + log n = 2 log n. So H(n) ≤ 2 log n for n ≥ 2, and we can conclude that H(n) = O(log n).

Much better estimates are known for H(n). It is known that H(n) ∼ ln n + γ, where γ ≈ 0.577 is called Euler's constant. More precisely, the difference H(n) − ln n converges to γ as n → ∞. We can express this better approximation using the big-Oh notation as well, by applying the big-Oh notation to the approximation error rather than to the function itself: H(n) = ln n + O(1). An even more accurate approximation is H(n) = ln n + γ + O(1/n), as it shows that the approximation error H(n) − ln n − γ vanishes as n → ∞.
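The bounds in Theorem 0.1, and the convergence of H(n) − ln n to Euler's constant, are easy to check numerically. The short Python sketch below is not part of the original notes; it assumes that log in Theorem 0.1 means log base 2, as in the block argument above, and simply compares H(n) with both bounds for a few values of n.

    import math

    def harmonic(n):
        # H(n) = 1 + 1/2 + ... + 1/n
        return sum(1.0 / i for i in range(1, n + 1))

    for n in [2, 10, 100, 10**4, 10**6]:
        h = harmonic(n)
        lower = 0.5 * (math.log2(n) - 1)   # lower bound from Theorem 0.1
        upper = math.log2(n) + 1           # upper bound from Theorem 0.1
        print(n, round(lower, 3), round(h, 3), round(upper, 3),
              round(h - math.log(n), 5))   # last column: H(n) - ln n

For n = 10^6 the last column already agrees with γ to several decimal places, illustrating the approximation H(n) = ln n + γ + O(1/n).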
Example 4. Asymptotic notation was not invented by computer scientists; it has been used in mathematics for over a hundred years, for describing approximation errors in various expansions (such as Taylor series) and, in number theory, for estimating the growth of certain functions. Consider for example the following question: for a number n, how many prime numbers are there between 2 and n? This value is denoted π(n). It is known that π(n) = O(n/log n). In fact, more accurate estimates for π(n) exist; for example, π(n) = n/ln n + O(n/log^2 n). This result is often called the Prime Number Theorem.

Example 5. We now consider an example that involves sequences defined by recurrence equations. Suppose we have a sequence {a_i} defined by

    a_0 = 3,  a_1 = 8,  a_n = 2a_(n−1) + a_(n−2).

We claim that a_n = O(2.75^n). To prove this, we show by induction that a_n ≤ 3(2.75)^n. In the base cases, a_0 = 3 = 3(2.75)^0 and a_1 = 8 < 3(2.75)^1. In the inductive step, assume the claim holds for all indices smaller than n. Then, by the inductive assumption,

    a_n = 2a_(n−1) + a_(n−2) ≤ 2 · 3(2.75)^(n−1) + 3(2.75)^(n−2) = 3(2.75)^(n−2)(2 · 2.75 + 1) ≤ 3(2.75)^(n−2)(2.75)^2 = 3(2.75)^n,

and thus the claim holds for n as well. Therefore it holds for all values of n.

Example 6. OK, now let's talk about algorithms. What is the running time of the algorithm below?

    Algorithm WhatsMyRunTime1(n : integer)
        for i ← 1 to 6n do
            z ← 2z − 1
        for i ← 1 to 2n^2 do
            for j ← 1 to n + 1 do
                z ← z^2 − z

The first loop makes 6n iterations, and the second, double loop makes 2n^2 · (n + 1) = 2n^3 + 2n^2 iterations, for a total of 2n^3 + 2n^2 + 6n iterations. For n ≥ 1 this is at most 2n^3 + 2n^3 + 6n^3 ≤ 10n^3, so the running time is O(n^3).
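Both the inductive bound from Example 5 and the iteration count from Example 6 are easy to sanity-check with a few lines of code. The Python sketch below is not part of the original notes, and the function names are made up for illustration: it recomputes the sequence a_n and compares it with 3(2.75)^n, and it counts the loop iterations of WhatsMyRunTime1 to confirm the 2n^3 + 2n^2 + 6n formula and the 10n^3 bound.

    def recurrence_bound_holds(n_max=30):
        # Example 5: a_0 = 3, a_1 = 8, a_n = 2*a_(n-1) + a_(n-2); claim a_n <= 3*(2.75)^n
        a = [3, 8]
        for n in range(2, n_max + 1):
            a.append(2 * a[n - 1] + a[n - 2])
        return all(a[n] <= 3 * 2.75 ** n for n in range(n_max + 1))

    def whats_my_run_time_1_steps(n):
        # Example 6: count the loop iterations of WhatsMyRunTime1 instead of updating z
        count = 0
        for i in range(6 * n):            # first loop: 6n iterations
            count += 1
        for i in range(2 * n * n):        # outer loop: 2n^2 iterations
            for j in range(n + 1):        # inner loop: n + 1 iterations
                count += 1
        return count

    print(recurrence_bound_holds())       # True: a_n <= 3*(2.75)^n for n up to 30
    for n in [1, 5, 10]:
        steps = whats_my_run_time_1_steps(n)
        print(n, steps, 2 * n**3 + 2 * n**2 + 6 * n, 10 * n**3)

For each n the second and third columns agree, and both stay below 10n^3, matching the O(n^3) conclusion.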