Linear Programming: Approximation Algorithms and Integer Linear Programming

These notes cover linear programming, a method for solving maximization/minimization problems with linear constraints and objective functions: the basics of linear programming, methods for solving it (the simplex method, the ellipsoid method, and Karmarkar's algorithm), and the concept of integer linear programming, including examples of approximating the vertex cover and set cover problems via linear programming.

CS880: Approximation Algorithms
Scribe: Matt Darnall
Lecturer: Shuchi Chawla
Topic: Linear Programming
Date: 2/22/07

9.1 Linear Programming

Linear programming is a method for solving a large number of maximization/minimization problems. Linear programming problems have the property that the constraints and the objective function are all linear functions of the input variables. The existence of a polynomial time algorithm for solving linear programs and the multitude of optimization problems that they can encode make them particularly useful in practice. To be precise, a linear programming problem (LP) is one that can be formulated as follows:

    Minimize    c^T x       (9.1.1)
    Subject to  Ax ≤ b      (9.1.2)

Here x is a vector of real-valued variables (sometimes assumed to be nonnegative), c and b are vectors of real constants, and A is a matrix of real constants.

A geometric interpretation is useful for understanding this problem. We view each constraint Σ_{i=1}^n a_ij x_i ≤ b_j in Ax ≤ b as a hyperplane in R^n, where the vector x has n entries. The constraint says that the solution vector x must lie on one side of this hyperplane. The intersection of the constraints will be a polytope in R^n, with the points inside the polytope called feasible solutions. We then look at the hyperplanes c^T x = k for real k. The solution to the linear program is the smallest value of k such that the intersection of the constraint polytope and the hyperplane c^T x = k is nonempty.

Using our geometric interpretation, it is easy to see that an optimal solution of a linear program will occur at a vertex of the constraint polytope. These vertices are called basic solutions. A possible method of solving a linear program is to enumerate the basic solutions and find the one with the best value of the objective function.
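The enumeration idea above can be sketched for the two-variable case: in R^2, every basic solution is the intersection of two constraint hyperplanes (lines), so we try each pair of constraints, keep the intersection points that satisfy all constraints, and take the best objective value. This is a minimal illustrative sketch, not an efficient solver, and the unit-square instance at the bottom is a hypothetical example chosen here:

```python
from itertools import combinations

def solve_2d_lp(c, A, b, eps=1e-9):
    """Minimize c^T x subject to A x <= b in R^2 by enumerating basic solutions.

    Returns (value, point) at the best feasible vertex, or None if no
    vertex is feasible (infeasible or unbounded instances are not handled).
    """
    best = None
    for i, j in combinations(range(len(A)), 2):
        # Solve the 2x2 system A[i] x = b[i], A[j] x = b[j] by Cramer's rule.
        (a1, a2), (a3, a4) = A[i], A[j]
        det = a1 * a4 - a2 * a3
        if abs(det) < eps:          # parallel constraints: no vertex here
            continue
        x = ((b[i] * a4 - a2 * b[j]) / det,
             (a1 * b[j] - b[i] * a3) / det)
        # Keep the intersection point only if it satisfies every constraint.
        if all(row[0] * x[0] + row[1] * x[1] <= rhs + eps
               for row, rhs in zip(A, b)):
            val = c[0] * x[0] + c[1] * x[1]
            if best is None or val < best[0]:
                best = (val, x)
    return best

# Unit square: x1 <= 1, x2 <= 1, -x1 <= 0, -x2 <= 0; minimize -x1 - x2.
A = [(1, 0), (0, 1), (-1, 0), (0, -1)]
b = [1, 1, 0, 0]
val, x = solve_2d_lp((-1, -1), A, b)
print(val, x)   # optimum -2.0 at the vertex (1.0, 1.0)
```

With m constraints this checks O(m^2) candidate vertices, which already hints at the trouble: in higher dimensions the number of basic solutions can blow up, as the next paragraph shows.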
Unfortunately, there can be an exponential number of basic solutions. An example of this is the cube in R^n. Here, using only n variables and the 2n constraints x_i ≤ 1 and x_i ≥ 0 for all i, we can describe a constraint polytope with 2^n basic points.

9.2 Methods of Solving LPs

The first class of algorithms to solve LPs attempt to find the optimal solution by searching the boundary of the constraint polytope. These methods use "pivot rules" to determine the next direction of travel once a basic point is reached. If no direction of travel yields an improvement to the objective function, then the basic point is the optimal solution. The first algorithm to use this idea was the Simplex Method of George Dantzig in 1947. Though these methods perform well in practice, it is not known whether any "pivot rule" algorithm can run in polynomial time. In particular, an example has been given that takes exponential time using the Simplex Method [1].

The first polynomial time algorithm for solving an LP was derived from the Ellipsoid Method of Shor, Nemirovsky, and Yudin. This algorithm gives a way of finding a feasible solution to an optimization problem. The idea is to enclose the feasible solutions in an ellipsoid. Then, check whether the center of the ellipsoid is a feasible solution. If not, find a violated constraint using a separation oracle. Then, enclose the half of the ellipsoid where the feasible solutions must lie in another ellipsoid. Since this next ellipsoid is at least a fixed constant factor smaller, by repeating we are able to home in on a solution exponentially fast. Khachiyan was able to adapt this method of finding a feasible solution to give a polynomial time solution to LPs. Though this result was a breakthrough in the theory, the algorithm usually takes longer than the Simplex Method in practice.

The next class of algorithms for solving LPs are called "interior point" methods.
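The hypercube count can be checked directly: at a vertex of the cube, every variable is tight at one of its two bounds, so the basic solutions are exactly the points of {0,1}^n. A quick sketch (n = 10 is an arbitrary choice here):

```python
from itertools import product

# Vertices of the polytope {x : 0 <= x_i <= 1 for all i} are the points
# where each variable sits at one of its two bounds, i.e. the set {0,1}^n.
n = 10
vertices = list(product([0, 1], repeat=n))
print(len(vertices))  # 2**n = 1024 basic solutions from only 2n = 20 constraints
```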
As the name suggests, these algorithms start by finding an interior point of the constraint polytope and then proceed to the optimal solution by moving inside the polytope. The first interior point method was given by Karmarkar in 1984. His method is not only polynomial time like the Ellipsoid Method, but it also gives good running times in practice like the Simplex Method.

9.3 Integer Linear Programming

To recall from last time, a linear programming problem is given by

    Minimize    c^T x       (9.3.3)
    Subject to  Ax ≤ b      (9.3.4)

where x is a vector of real-valued variables (sometimes assumed to be nonnegative), c and b are vectors of real constants, and A is a matrix of real constants. We saw that there exists a polynomial time algorithm for solving an LP. If we add the additional condition that the variables in x be integers, we get what is called an integer linear programming problem, or IP. Unfortunately, no polynomial time algorithm for solving an IP is known. In fact, the existence of one would imply that P = NP!

We can relax the conditions of an IP to make it an LP. It is obvious that the optimal value of the LP, x_L, is a lower bound on the optimal value of the IP, x_I. To approximate an IP, we can solve the "relaxed" LP and then find an integer solution close to the optimal solution of the LP. If we do it correctly, the approximate solution for the IP will be close to the solution for the LP, and we will have a good approximation algorithm. The ratio between the IP solution and the LP solution is called the integrality gap. We will attack many of the previous problems we looked at using this technique.
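The relax-and-round recipe can be sketched on vertex cover, one of the problems these notes go on to treat. The toy triangle instance below is hypothetical; the sketch exploits the known fact that the vertex cover LP always has a half-integral optimum, so on a tiny instance we can solve the LP exactly by brute force over values in {0, 1/2, 1}, then round up every vertex with x_v ≥ 1/2 to get a cover of cost at most twice the LP optimum:

```python
from itertools import product

# Vertex cover on a triangle: pick x_v for each vertex v so that every
# edge (u, v) has x_u + x_v >= 1, minimizing sum(x).
edges = [(0, 1), (1, 2), (2, 0)]
n = 3

def feasible(x):
    return all(x[u] + x[v] >= 1 for u, v in edges)

# LP optimum: brute force over half-integral points (valid because the
# vertex cover LP has a half-integral optimal solution).
lp_val, lp_x = min((sum(x), x)
                   for x in product([0, 0.5, 1], repeat=n) if feasible(x))
# IP optimum: brute force over integral points.
ip_val, _ = min((sum(x), x)
                for x in product([0, 1], repeat=n) if feasible(x))

# Rounding: take every vertex with x_v >= 1/2. Each edge has an endpoint
# with x_v >= 1/2, so this is a cover, of cost at most 2 * lp_val.
cover = [v for v in range(n) if lp_x[v] >= 0.5]
print(lp_val, ip_val, cover)   # 1.5 2 [0, 1, 2]
```

On this instance the LP optimum is 3/2 while the best integral cover has size 2, so the LP/IP ratio is 4/3; rounding returns all three vertices, cost 3 = 2 × 3/2, matching the factor lost by the rounding step.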