Approximation Algorithms: Solving NP-Hard Problems with Suboptimal Solutions - Prof. Eric, Study notes of Computer Science

These notes cover approximation algorithms, which are used to find near-optimal solutions to NP-hard problems. They discuss the main issues in designing approximation algorithms, including finding good bounds on the optimal value and the difference between absolute and relative approximation. Example problems such as vertex cover and independent set are provided, along with algorithms for scheduling, bin packing, and the traveling salesperson problem, and proofs of their approximation ratios.

Typology: Study notes

Pre 2010

Uploaded on 07/22/2009

koofers-user-yt3

Approximation Algorithms

• Main issues
  – Finding a good lower or upper bound for the optimal solution value
  – Relating your algorithm's solution value to this bound
  – Examples: Scheduling and Vertex Cover
• Basic setting
  – Working with an optimization problem, not a decision problem, where you are trying to minimize or maximize some value
  – Examples:
    ∗ Vertex Cover
      · Input: Graph G = (V, E)
      · Task: Find a vertex cover of minimum size
      · Value to be minimized: vertex cover size
    ∗ Independent Set
      · Input: Graph G = (V, E)
      · Task: Find an independent set of maximum size
      · Value to be maximized: independent set size
  – The problems are NP-hard
    ∗ The decision version of each of these problems is NP-complete
    ∗ Thus, we probably cannot find a polynomial-time algorithm that solves the problem optimally on all input instances.
  – Notation
    ∗ Let Π be the problem under consideration
    ∗ Let I be an input instance of Π
    ∗ Let OPT denote the optimal algorithm
    ∗ Let A denote the algorithm under consideration
    ∗ For any algorithm A and any input instance I, let A(I) denote the value of A's solution on input instance I
  – Absolute approximation (c-absolute-approximation algorithm)
    ∗ ∃c such that ∀I: A(I) ≤ OPT(I) + c for a minimization problem Π
      · For example, A is a 2-absolute-approximation algorithm for Vertex Cover if A always finds a vertex cover at most 2 nodes larger than the optimal size.
    ∗ ∃c such that ∀I: A(I) ≥ OPT(I) − c for a maximization problem Π
      · For example, A is a 2-absolute-approximation algorithm for Independent Set if A always finds an independent set at most 2 nodes smaller than the optimal size.
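For Vertex Cover, the classic way to get a provable guarantee is to greedily build a maximal matching and take both endpoints of every matched edge. A minimal Python sketch (the function name is mine):

```python
def matching_vertex_cover(edges):
    """2-approximation for Vertex Cover: greedily build a
    maximal matching and take both endpoints of every
    matched edge."""
    cover = set()
    for u, v in edges:
        if u not in cover and v not in cover:
            # (u, v) extends the matching; add both endpoints
            cover.update((u, v))
    return cover

# Usage: a 4-cycle; the optimal cover has 2 nodes, we may use 4.
edges = [(0, 1), (1, 2), (2, 3), (3, 0)]
cover = matching_vertex_cover(edges)
assert all(u in cover or v in cover for u, v in edges)  # valid cover
```

Since any cover must contain at least one endpoint of each matched edge, and matched edges share no nodes, the returned cover is at most twice the optimal size.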
    ∗ Very few NP-hard problems have polynomial-time absolute approximation algorithms
  – Relative approximation (c-approximation algorithm)
    ∗ ∃c such that ∀I: A(I) ≤ c × OPT(I) for a minimization problem Π
      · For example, A is a 2-approximation algorithm for Vertex Cover if A always finds a vertex cover of at most twice the optimal size.
    ∗ ∃c such that ∀I: A(I) ≥ (1/c) × OPT(I) for a maximization problem Π
      · For example, A is a 2-approximation algorithm for Independent Set if A always finds an independent set of at least half the optimal size.
    ∗ Most of our focus is on finding good relative approximation algorithms; unless explicitly stated otherwise, "approximation algorithm" means a relative approximation algorithm
• Example: Multiprocessor scheduling to minimize makespan
  – Input
    ∗ Set of n jobs with processing times x1, …, xn
    ∗ Number of machines m
  – Task
    ∗ Schedule the jobs on the m machines so as to minimize the makespan (the maximum completion time of any job) of the schedule.
  – Initial thoughts about this problem
    ∗ Suppose m = 1: is this problem hard?
    ∗ We know this problem is hard for m ≥ 2 because of a reduction from the Partition problem.
• Graham's list scheduling algorithm (Greedy)
  – Take the jobs in an arbitrary order
  – Place each job on the currently least loaded machine
  – This is a greedy algorithm
• Proof of (2 − 1/m)-approximation factor
  – Greedy's cost
    ∗ Let job j be the last job to complete under Greedy
    ∗ Let h be the load of the machine j is placed onto, just before j is placed
    ∗ Greedy's cost is h + xj
  – Bounds on OPT(I)
    ∗ OPT(I) ≥ max_j xj
    ∗ OPT(I) ≥ (1/m) Σ_{i=1}^{n} xi
  – Relating the two costs
    ∗ h ≤ (1/m)((Σ_{i=1}^{n} xi) − xj)
      · This follows because Greedy places j onto the least loaded machine available
    ∗ Combining: Greedy's cost = h + xj ≤ (1/m) Σ_{i=1}^{n} xi + (1 − 1/m) xj ≤ OPT(I) + (1 − 1/m) OPT(I) = (2 − 1/m) OPT(I)
• Example: Vertex Cover (lower bound via a maximal matching of size M)
  – OPT(I) ≥ M
    ∗ This follows because each edge in the matching must be covered by at least one node
    ∗ Since no two edges in the matching share a node, each matched edge contributes a distinct node to the cover, so every vertex cover has at least M nodes.
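Graham's list scheduling above can be sketched with a min-heap of machine loads (names are mine):

```python
import heapq

def list_schedule(jobs, m):
    """Graham's list scheduling: take jobs in the given order
    and place each on the currently least loaded machine.
    Returns the makespan of the resulting schedule."""
    loads = [0] * m            # min-heap of machine loads
    heapq.heapify(loads)
    for x in jobs:
        least = heapq.heappop(loads)   # least loaded machine
        heapq.heappush(loads, least + x)
    return max(loads)

# Usage, m = 2: Greedy packs {3,2,2} and {3,2} for makespan 7,
# while OPT is 6 ({3,3} and {2,2,2}) -- within the (2 - 1/m) bound.
assert list_schedule([3, 3, 2, 2, 2], 2) == 7
```

Each job costs one heap pop and push, so the whole schedule is built in O(n log m) time.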
• Example: Bin Packing
  – Input
    ∗ Set of n objects with sizes s1, …, sn
    ∗ Unlimited supply of bins of size B
  – Task
    ∗ Find a packing of the n objects into the minimum possible number of bins
  – Bin packing is NP-complete
    ∗ We can reuse the reduction from Partition used for Makespan Scheduling
• Algorithms
  – First Fit (FF)
    ∗ Arbitrarily order the items
    ∗ Number the bins in the order they are opened (initially no bins are open)
    ∗ When working with item i, place it into the first open bin it fits into. If it fits into no open bin, open a new bin and place it there.
  – Best Fit (BF)
    ∗ Arbitrarily order the items
    ∗ When working with item i, place it into the open bin it fits most tightly into. If it fits into no open bin, open a new bin and place it there.
  – First Fit Decreasing (FFD)
    ∗ Sort the items into non-increasing order by size
    ∗ Then run First Fit on this order
  – Best Fit Decreasing (BFD)
    ∗ Sort the items into non-increasing order by size
    ∗ Then run Best Fit on this order
  – Proof that all four algorithms are 2-approximations
    ∗ At most one open bin is more than half empty
      · If two open bins were each more than half empty, the first item placed into the later-opened bin would also have fit into the earlier bin, so none of these algorithms would have opened it
    ∗ Thus every bin except possibly one is at least half full, so the number of bins used is at most 2 · (1/B) · Σ_{i=1}^{n} si, plus the one possibly half-empty bin
    ∗ Clearly, OPT(I) ≥ (1/B) Σ_{i=1}^{n} si
    ∗ The factor-2 bound follows.
  – FF and BF have approximation ratios of 17/10, ignoring a constant additive term
  – FFD and BFD have approximation ratios of 11/9, ignoring a constant additive term
  – The proofs of these results are very long and involved.
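First Fit and First Fit Decreasing above can be sketched as follows (names are mine):

```python
def first_fit(sizes, B):
    """First Fit: place each item into the first open bin with
    enough room, opening a new bin when none fits.
    Returns the list of bin loads."""
    bins = []                      # bins[k] = current load of bin k
    for s in sizes:
        for k, load in enumerate(bins):
            if load + s <= B:      # first open bin it fits into
                bins[k] = load + s
                break
        else:
            bins.append(s)         # open a new bin
    return bins

def first_fit_decreasing(sizes, B):
    # FFD: sort items by non-increasing size, then run First Fit
    return first_fit(sorted(sizes, reverse=True), B)

# Usage: sorting first can save a bin on the same input.
sizes, B = [4, 3, 6, 7], 10
assert len(first_fit(sizes, B)) == 3            # {4,3}, {6}, {7}
assert len(first_fit_decreasing(sizes, B)) == 2  # {7,3}, {6,4}
```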
• Example: Traveling Salesperson (TSP)
  – Input
    ∗ List of n cities
    ∗ Distances d(i, j) between each pair of cities
  – Task
    ∗ Find a tour of the cities of minimum possible total length
  – No polynomial-time c-approximation algorithm exists for this problem unless P = NP
    ∗ If there were a polynomial-time c-approximation algorithm for this problem, then Hamiltonian Cycle could be solved in polynomial time.
    ∗ Take an arbitrary input instance (graph G = (V, E)) of Hamiltonian Cycle and turn it into a TSP instance as follows:
      · For each node in V, create a city
      · For each edge (i, j) ∈ E, set d(i, j) = 1
      · For each pair of vertices (i, j) ∉ E, set d(i, j) = n·c
      · If G has a Hamiltonian cycle, then the optimal tour in the TSP instance has length n
      · If G has no Hamiltonian cycle, then every tour in the TSP instance has length at least n − 1 + n·c
    ∗ Apply the c-approximation algorithm to the TSP instance
      · If it returns an answer of at most n·c, then the original graph has a Hamiltonian cycle.
      · If it returns an answer greater than n·c, then the original graph has no Hamiltonian cycle.
      · Thus, we could solve Hamiltonian Cycle in polynomial time using the c-approximation algorithm for TSP.
• Example: Metric Traveling Salesperson
  – Input
    ∗ List of n cities
    ∗ Distances d(i, j) between each pair of cities
      · Distances satisfy the triangle inequality: d(i, k) ≤ d(i, j) + d(j, k)
      · Note that the distances in the hardness proof above for unrestricted TSP do not satisfy this constraint
  – Task
    ∗ Find a tour of the cities of minimum possible total length
• Greedy algorithm
  – Best known guarantee is O(log n) · OPT(I)
• MST algorithm
  – Find a minimum spanning tree T.
  – Double all the edges in T to create an Eulerian graph.
  – Take an Euler tour of the graph, using shortcuts to avoid visiting cities more than once.
    ∗ Shortcuts cannot increase the cost, by the triangle inequality
• Proof of 2-approximation ratio
  – Let C(T) be the total weight of the minimum spanning tree T
  – OPT(I) ≥ C(T)
    ∗ Remove an edge from the optimal tour.
    ∗ We now have a path, which is one possible spanning tree.
    ∗ The minimum spanning tree has cost no more than this path.
  – MST(I) ≤ 2·C(T)
    ∗ The Euler tour has cost exactly 2·C(T)
    ∗ Taking shortcuts may decrease this cost but cannot increase it, by the triangle inequality
• Christofides' matching improvement
  – Find a minimum spanning tree T.
  – Find all the nodes in T with odd degree (there is an even number of them)
  – Find a minimum-weight perfect matching M on these nodes
  – Create an Eulerian graph by adding the edges of M to those of T
  – Take an Euler tour of the graph, using shortcuts to avoid visiting cities more than once.
• Proof of 3/2-approximation ratio
  – Let C(T) be the total weight of the minimum spanning tree T, and let C(M) be the total weight of M
  – OPT(I) ≥ C(T)
    ∗ Remove an edge from the optimal tour.
    ∗ We now have a path, which is one possible spanning tree.
    ∗ The minimum spanning tree has cost no more than this path.
  – (1/2)·OPT(I) ≥ C(M)
    ∗ Shortcut the optimal tour so it visits only the odd-degree nodes; by the triangle inequality this cannot increase its cost
    ∗ This shortcut tour decomposes into two perfect matchings on the odd-degree nodes (its alternating edges); the cheaper of the two has cost at most (1/2)·OPT(I)
    ∗ The minimum-weight matching M costs no more than this, so C(M) ≤ (1/2)·OPT(I)
  – Combining: Christofides' cost ≤ C(T) + C(M) ≤ OPT(I) + (1/2)·OPT(I) = (3/2)·OPT(I)
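The MST algorithm above can be sketched as follows. This is a minimal implementation assuming distances are given as a symmetric matrix; it uses the standard observation that a preorder walk of the tree is exactly the doubled-tree Euler tour with all shortcuts applied (names are mine):

```python
def mst_tsp_tour(dist):
    """MST 2-approximation for metric TSP: build a minimum
    spanning tree with Prim's algorithm, then output its
    preorder walk (the shortcut Euler tour of the doubled tree).
    `dist` is a symmetric n x n distance matrix."""
    n = len(dist)
    in_tree = [False] * n
    parent = [0] * n
    best = [float("inf")] * n     # cheapest edge into the tree
    best[0] = 0
    children = [[] for _ in range(n)]
    for _ in range(n):
        # Pick the cheapest city not yet in the tree
        u = min((v for v in range(n) if not in_tree[v]),
                key=lambda v: best[v])
        in_tree[u] = True
        if u != 0:
            children[parent[u]].append(u)
        for v in range(n):
            if not in_tree[v] and dist[u][v] < best[v]:
                best[v], parent[v] = dist[u][v], u
    # Preorder walk of the MST = Euler tour with shortcuts
    tour, stack = [], [0]
    while stack:
        u = stack.pop()
        tour.append(u)
        stack.extend(reversed(children[u]))
    return tour

def tour_length(tour, dist):
    n = len(tour)
    return sum(dist[tour[i]][tour[(i + 1) % n]] for i in range(n))

# Usage: four points on a line (a metric); each city visited once.
dist = [[abs(i - j) for j in range(4)] for i in range(4)]
tour = mst_tsp_tour(dist)
assert sorted(tour) == [0, 1, 2, 3]
```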
Copyright © 2024 Ladybird Srl - Via Leonardo da Vinci 16, 10126, Torino, Italy - VAT 10816460017 - All rights reserved