Approximation Algorithms: Coping with NP-Completeness in Computer Science - Prof. David M., Study notes of Computer Science

Notes from a university lecture on approximation algorithms, which are used to find near-optimal solutions to NP-complete problems when exact solutions are not feasible. The notes cover the concept of NP-completeness, the limitations of brute-force search and general search methods, and the definition and importance of approximation algorithms. They also include examples of approximation algorithms for the vertex cover and traveling salesman problems, and discuss polynomial-time approximation schemes (PTAS).


CMSC 451 Design and Analysis of Computer Algorithms
Fall 2006
Notes for Dave Mount's Lectures

Note: The material given here is based on the presentation in the book Introduction to Algorithms by Cormen, Leiserson, Rivest, and Stein. Although the concepts are essentially the same as those in Kleinberg and Tardos, the notation differs.

Approximation Algorithms: VC and TSP

Coping with NP-completeness: With NP-completeness we have seen that there are many important optimization problems that are likely to be quite hard to solve exactly. Since these are important problems, we cannot simply give up at this point: people need solutions to them. How do we cope with NP-completeness?

Brute-force search: Even on the fastest parallel computers, this approach is viable only for the smallest instances of these problems.

Heuristics: A heuristic is a strategy for producing a valid solution, but with no guarantee of how close that solution is to optimal. This is worthwhile if all else fails, or if lack of optimality is not really an issue.

General search methods: A number of very powerful techniques for solving general combinatorial optimization problems have been developed in the areas of AI and operations research. These go under names such as branch-and-bound, A*-search, simulated annealing, and genetic algorithms. The performance of these approaches varies considerably from problem to problem and from instance to instance, but in some cases they can perform quite well.

Approximation algorithms: An approximation algorithm runs in polynomial time (ideally) and produces a solution that is within a guaranteed factor of the optimum solution.

Performance bounds: Most NP-complete problems have been stated as decision problems for theoretical reasons.
However, underlying most of these problems is a natural optimization problem. For example, the TSP optimization problem is to find the simple cycle of minimum cost in a digraph, the VC optimization problem is to find the vertex cover of minimum size, and the clique optimization problem is to find the clique of maximum size. Note that sometimes we are minimizing and sometimes we are maximizing. An approximation algorithm is one that returns a legitimate answer, but not necessarily an optimal one.

How do we measure how good an approximation algorithm is? We define the ratio bound of an approximation algorithm as follows. Given an instance I of our problem, let C(I) be the cost of the solution produced by our approximation algorithm, and let C*(I) be the cost of the optimal solution. We will assume that costs are strictly positive. For a minimization problem we want C(I)/C*(I) to be small, and for a maximization problem we want C*(I)/C(I) to be small. For any input size n, we say that the approximation algorithm achieves ratio bound ρ(n) if, for all instances I with |I| = n, we have

    max( C(I)/C*(I), C*(I)/C(I) ) ≤ ρ(n).

Observe that ρ(n) is always at least 1, and it equals 1 if and only if the approximate solution is the true optimum solution.

Some NP-complete problems can be approximated arbitrarily closely. Such an algorithm is given both the input and a real value ε > 0, and returns an answer whose ratio bound is at most (1 + ε). Such an algorithm is called a polynomial-time approximation scheme (or PTAS for short). The running time is a function of both n and ε. As ε approaches 0, the running time increases beyond polynomial time. For example, the running time might be O(n^⌈1/ε⌉). If the running time depends only on a polynomial function of 1/ε (and n), then the algorithm is called a fully polynomial-time approximation scheme. For example, a running time like O((1/ε)^2 n^3) qualifies, whereas O(n^(1/ε)) and O(2^(1/ε) n) do not.
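As an illustration (not part of the original notes), the ratio achieved on a single instance follows directly from the definition above: it is max(C/C*, C*/C), which covers minimization and maximization with one formula. The helper name below is hypothetical:

```python
def ratio_bound(approx_cost: float, opt_cost: float) -> float:
    """Ratio achieved on one instance: max(C/C*, C*/C).

    Assumes both costs are strictly positive, as in the notes.
    For a minimization problem C >= C*, so the first term dominates;
    for a maximization problem C <= C*, so the second term dominates.
    """
    if approx_cost <= 0 or opt_cost <= 0:
        raise ValueError("costs must be strictly positive")
    return max(approx_cost / opt_cost, opt_cost / approx_cost)

# A solution of cost 10 when the optimum is 5 achieves ratio 2:
assert ratio_bound(10, 5) == 2.0
# An optimal solution achieves ratio exactly 1:
assert ratio_bound(7, 7) == 1.0
```

Note that the ratio bound ρ(n) of an algorithm is the worst case of this quantity over all instances of size n, not the value on any single instance.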
Although NP-complete problems are equivalent with respect to whether they can be solved exactly in polynomial time in the worst case, their approximability varies considerably.

• For some NP-complete problems, it is very unlikely that any approximation algorithm exists. For example, if the general graph TSP problem had an approximation algorithm with a ratio bound of any fixed value less than ∞, then P = NP.

• Many NP-complete problems can be approximated, but the ratio bound is a (slowly growing) function of n. For example, the set cover problem (a generalization of the vertex cover problem) can be approximated to within a factor of ln n. We will not discuss this algorithm, but it is covered in CLRS.

• Some NP-complete problems can be approximated to within a fixed constant factor. We will discuss two examples below.

• Some NP-complete problems have PTASs. Examples include the subset-sum problem (which we have not discussed, but which is described in CLRS) and the Euclidean TSP problem.

In fact, much as with NP-complete problems, there are collections of problems that are "believed" to be hard to approximate and are equivalent in the sense that if any one of them can be approximated in polynomial time, then they all can be. This class is called Max-SNP complete. We will not discuss this further; suffice it to say that the topic of approximation algorithms would fill another course.

Vertex Cover: We begin by showing that there is an approximation algorithm for vertex cover with a ratio bound of 2; that is, the algorithm is guaranteed to find a vertex cover whose size is at most twice that of the optimum. Recall that a vertex cover is a subset of vertices such that every edge in the graph is incident to at least one of these vertices. The vertex cover optimization problem is to find a vertex cover of minimum size.

How does one go about finding an approximation algorithm? The first approach is to try something that seems like a "reasonably" good strategy, a heuristic.
It turns out that many simple heuristics, when not optimal, can often be proved to be close to optimal. Here is a very simple algorithm that guarantees an approximation within a factor of 2 for the vertex cover problem. It is based on the following observation. Consider an arbitrary edge (u, v) in the
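The notes break off mid-sentence here. The algorithm being introduced appears to be the standard matching-based 2-approximation from CLRS: repeatedly pick an arbitrary uncovered edge, add both of its endpoints to the cover, and discard every edge that is now covered. A minimal sketch under that assumption:

```python
def approx_vertex_cover(edges):
    """2-approximation for vertex cover (CLRS-style sketch).

    Scan the edges; whenever an edge (u, v) has neither endpoint
    in the cover yet, add BOTH endpoints. The chosen edges form a
    matching, and any vertex cover must contain at least one
    endpoint of each matched edge, so the cover returned here has
    size at most 2 * OPT.
    """
    cover = set()
    for u, v in edges:
        if u not in cover and v not in cover:  # edge still uncovered
            cover.add(u)
            cover.add(v)
    return cover

edges = [(1, 2), (1, 3), (2, 4), (3, 4), (4, 5)]
cover = approx_vertex_cover(edges)
# Every edge has at least one endpoint in the cover:
assert all(u in cover or v in cover for u, v in edges)
```

On this example the algorithm picks edges (1, 2) and (3, 4), returning the cover {1, 2, 3, 4} of size 4, while the optimal cover {1, 4} has size 2, so the factor-2 bound is tight here.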