Linear Programming Algorithms: Primal and Dual Simplex Methods, Study notes of Algorithms and Programming

Algorithms for solving linear programming problems, focusing on the primal and dual simplex methods. The simplex algorithm is a method for finding the optimal solution to a linear program by iteratively moving from one feasible basis to another, ensuring feasibility and optimality at each step. The document also covers the concept of locally optimal bases and the relationship between primal and dual linear programs.

Uploaded on 03/16/2009

Algorithms Lecture 18: Linear Programming Algorithms

    Simplicibus itaque verbis gaudet Mathematica Veritas, cum etiam per se simplex sit Veritatis oratio.
    [And thus Mathematical Truth prefers simple words, because the language of Truth is itself simple.]
        — Tycho Brahe (quoting Seneca (quoting Euripides)),
          Epistolarum astronomicarum liber primus (1596)

    When a jar is broken, the space that was inside
    Merges into the space outside.
    In the same way, my mind has merged in God;
    To me, there appears no duality.
        — Sankara, Viveka-Chudamani (c. 700), translator unknown

18 Linear Programming Algorithms

In this lecture, we'll see a few algorithms for actually solving linear programming problems. The most famous of these, the simplex method, was proposed by George Dantzig in 1947. Although most variants of the simplex algorithm perform well in practice, no simplex variant is known to run in sub-exponential time in the worst case. However, if the dimension of the problem is considered a constant, there are several linear programming algorithms that run in linear time. I'll describe a particularly simple randomized algorithm due to Raimund Seidel.

My approach to describing these algorithms will rely much more heavily on geometric intuition than the usual linear-algebraic formalism. This works better for me, but your mileage may vary. For a more traditional description of the simplex algorithm, see Robert Vanderbei's excellent textbook Linear Programming: Foundations and Extensions [Springer, 2001], which can be freely downloaded (but not legally printed) from the author's website.

18.1 Bases, Feasibility, and Local Optimality

Consider the canonical linear program max{c · x | Ax ≤ b, x ≥ 0}, where A is an n × d constraint matrix, b is an n-dimensional coefficient vector, and c is a d-dimensional objective vector.
We will interpret this linear program geometrically as looking for the lowest point in a convex polyhedron in ℝᵈ, described as the intersection of n + d halfspaces. As in the last lecture, we will consider only non-degenerate linear programs: every subset of d constraint hyperplanes intersects in a single point; at most d constraint hyperplanes pass through any point; and the objective vector is linearly independent from any d − 1 constraint vectors.

A basis is a subset of d constraints, which by our non-degeneracy assumption must be linearly independent. The location of a basis is the unique point x that satisfies all d constraints with equality; geometrically, x is the unique intersection point of the d hyperplanes. The value of a basis is c · x, where x is the location of the basis. There are precisely (n + d choose d) bases. Geometrically, the set of constraint hyperplanes defines a decomposition of ℝᵈ into convex polyhedra; this cell decomposition is called the arrangement of the hyperplanes. Every subset of d hyperplanes (i.e., every basis) defines a vertex of this arrangement (the location of the basis). I will use the words 'vertex' and 'basis' interchangeably.

A basis is feasible if its location x satisfies all the linear constraints, or geometrically, if the point x is a vertex of the polyhedron. If there are no feasible bases, the linear program is infeasible.

A basis is locally optimal if its location x is the optimal solution to the linear program with the same objective function and only the constraints in the basis. Geometrically, a basis is locally optimal if its location x is the lowest point in the intersection of those d halfspaces. A careful reading of the proof of the Strong Duality Theorem reveals that local optimality is the dual equivalent of feasibility; a basis is locally optimal for a linear program Π if and only if the same basis is feasible for the dual linear program.
For this reason, locally optimal bases are sometimes also called dual feasible. If there are no locally optimal bases, the linear program is unbounded.¹

Two bases are neighbors if they have d − 1 constraints in common. Equivalently, in geometric terms, two vertices are neighbors if they lie on a line determined by some d − 1 constraint hyperplanes. Every basis is a neighbor of exactly dn other bases: to change a basis into one of its neighbors, there are d choices for which constraint to remove and n choices for which constraint to add. The graph of vertices and edges on the boundary of the feasible polyhedron is a subgraph of the basis graph.

The Weak Duality Theorem implies that the value of every feasible basis is less than or equal to the value of every locally optimal basis; equivalently, every feasible vertex is higher than every locally optimal vertex. The Strong Duality Theorem implies that, under our non-degeneracy assumption, if a linear program has an optimal solution, it is the unique vertex that is both feasible and locally optimal. Moreover, the optimal solution is both the lowest feasible vertex and the highest locally optimal vertex.

18.2 The Primal Simplex Algorithm: Falling Marbles

From a geometric standpoint, Dantzig's simplex algorithm is very simple. The input is a set H of halfspaces; we want the lowest vertex in the intersection of these halfspaces.

    SIMPLEX1(H):
        if ⋂H = ∅
            return INFEASIBLE
        x ← any feasible vertex
        while x is not locally optimal
            ⟨⟨pivot downward, maintaining feasibility⟩⟩
            if every feasible neighbor of x is higher than x
                return UNBOUNDED
            else
                x ← any feasible neighbor of x that is lower than x
        return x

Let's ignore the first three lines for the moment. The algorithm maintains a feasible vertex x. At each so-called pivot operation, the algorithm moves to a lower vertex, so the algorithm never visits the same vertex more than once. Thus, the algorithm must halt after at most (n + d choose d) pivots.
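To make the pivoting loop concrete, here is a small sketch in Python on a hypothetical 2-D instance (the constraints, objective, tolerances, and helper names are my own illustrative choices, not from the notes). Bases are pairs of constraint indices, a neighbor is obtained by swapping exactly one constraint, and "lower" means smaller objective value c · x.

```python
# Hypothetical toy instance: n = 2 constraints plus d = 2 sign constraints,
# all written as rows a·x <= b. We look for the lowest feasible vertex,
# where "down" is the direction of decreasing c·x.
constraints = [
    ((-1.0, 0.0), 0.0),   # -x <= 0
    ((0.0, -1.0), 0.0),   # -y <= 0
    ((1.0, 1.0), 4.0),    # x + y <= 4
    ((1.0, 0.0), 3.0),    # x <= 3
]
c = (-1.0, -2.0)

def location(basis):
    """Unique point where both constraints in the basis are tight (2x2 Cramer)."""
    (a1, b1), (a2, b2) = (constraints[i] for i in basis)
    det = a1[0] * a2[1] - a1[1] * a2[0]
    if abs(det) < 1e-12:          # parallel hyperplanes: no vertex
        return None
    return ((b1 * a2[1] - b2 * a1[1]) / det,
            (a1[0] * b2 - a2[0] * b1) / det)

def feasible(p):
    return all(a[0] * p[0] + a[1] * p[1] <= b + 1e-9 for a, b in constraints)

def value(p):
    return c[0] * p[0] + c[1] * p[1]

def simplex1(basis):
    """Pivot downward among feasible vertices until locally optimal."""
    while True:
        x = location(basis)
        for out in sorted(basis):                 # d ways to drop a constraint
            for inn in range(len(constraints)):   # n ways to add one
                if inn in basis:
                    continue
                nb = (basis - {out}) | {inn}
                p = location(nb)
                if p is not None and feasible(p) and value(p) < value(x) - 1e-9:
                    basis = nb                    # pivot to a lower feasible neighbor
                    break
            else:
                continue
            break
        else:
            return x   # no lower feasible neighbor: locally optimal, hence optimal

print(simplex1({0, 1}))   # start at the feasible vertex (0, 0)
```

Because each pivot strictly decreases the objective value, the walk visits each vertex at most once and must stop at the unique vertex that is both feasible and locally optimal, here (0, 4).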
When the algorithm halts, either the feasible vertex x is locally optimal, and therefore the optimum vertex, or the feasible vertex x is not locally optimal but has no lower feasible neighbor, in which case the feasible region must be unbounded.

Notice that we have not specified which neighbor to choose at each pivot. Several different pivoting rules have been proposed, but for almost every known pivot rule, there is an input polyhedron that requires an exponential number of pivots under that rule. No pivoting rule is known that guarantees a polynomial number of pivots in the worst case.²

18.3 The Dual Simplex Algorithm: Rising Bubbles

We can also geometrically interpret the execution of the simplex algorithm on the dual linear program. Again, the input is a set H of halfspaces, and we want the lowest vertex in the intersection of these halfspaces.

¹ For non-degenerate linear programs, the feasible region is unbounded in the objective direction if and only if no basis is locally optimal. However, there are degenerate linear programs with no locally optimal basis that are infeasible.

² In 1957, Hirsch conjectured that for any linear programming instance with d variables and n + d constraints, starting at any feasible basis, there is a sequence of at most n pivots that leads to the optimal basis. Hirsch's conjecture is still open 50 years later; no counterexamples have ever been found, but no proof is known except in a few special cases. Truly our ignorance is unbounded (or at least dual infeasible).

[…]

Here are more complete descriptions of the simplex algorithm with this initialization rule, in both primal and dual forms. As usual, the input is a set H of halfspaces, and the algorithms either return the lowest vertex in the intersection of these halfspaces or report that no such vertex exists.
    SIMPLEX1(H):
        x ← any vertex
        H̃ ← any rotation of H that makes x locally optimal
        while x is not feasible
            if every locally optimal neighbor of x is lower (w.r.t. H̃) than x
                return INFEASIBLE
            else
                x ← any locally optimal neighbor of x that is higher (w.r.t. H̃) than x
        while x is not locally optimal
            if every feasible neighbor of x is higher than x
                return UNBOUNDED
            else
                x ← any feasible neighbor of x that is lower than x
        return x

    SIMPLEX2(H):
        x ← any vertex
        H̃ ← any translation of H that makes x feasible
        while x is not locally optimal
            if every feasible neighbor of x is higher (w.r.t. H̃) than x
                return UNBOUNDED
            else
                x ← any feasible neighbor of x that is lower (w.r.t. H̃) than x
        while x is not feasible
            if every locally optimal neighbor of x is lower than x
                return INFEASIBLE
            else
                x ← any locally optimal neighbor of x that is higher than x
        return x

18.5 Linear Expected Time for Fixed Dimensions

In most geometric applications of linear programming, the number of variables is a small constant, but the number of constraints may still be very large.

The input to the following algorithm is a set H of n halfspaces and a set B of b hyperplanes. (B stands for basis.) The algorithm returns the lowest point in the intersection of the halfspaces in H and the hyperplanes in B. At the top level of recursion, B is empty. I will implicitly assume that the linear program is both feasible and bounded. (If necessary, we can guarantee boundedness by adding a single halfspace to H, and we can guarantee feasibility by adding a dimension.) A point x violates a constraint h if it is not contained in the corresponding halfspace.
    SEIDELLP(H, B):
        if |B| = d
            return ⋂B
        if |H ∪ B| = d
            return ⋂(H ∪ B)
        h ← random element of H
        x ← SEIDELLP(H \ h, B)        (∗)
        if x violates h
            return SEIDELLP(H \ h, B ∪ ∂h)
        else
            return x

The point x recursively computed in line (∗) is the optimal solution if and only if the random halfspace h is not one of the d halfspaces that define the optimal solution. In other words, the probability of calling SEIDELLP(H \ h, B ∪ ∂h) is exactly (d − b)/n. Thus, we have the following recurrence for the expected number of recursive calls for this algorithm:

    T(n, b) = 1                                          if b = d or n + b = d
    T(n, b) = T(n−1, b) + ((d − b)/n) · T(n−1, b+1)      otherwise

The recurrence is somewhat simpler if we write δ = d − b:

    T(n, δ) = 1                                          if δ = 0 or n = δ
    T(n, δ) = T(n−1, δ) + (δ/n) · T(n−1, δ−1)            otherwise

It's easy to prove by induction that T(n, δ) = O(δ! n):

    T(n, δ) = T(n−1, δ) + (δ/n) · T(n−1, δ−1)
            ≤ δ! (n−1) + (δ/n) · (δ−1)! (n−1)            [induction hypothesis]
            = δ! (n−1) + δ! (n−1)/n
            ≤ δ! n

At the top level of recursion, we perform one violation test in O(d) time. In each of the base cases, we spend O(d³) time computing the intersection point of d hyperplanes, and in the first base case, we spend O(dn) additional time testing for violations. More careful analysis implies that the algorithm runs in O(d! · n) expected time.

Exercises

1. Fix a non-degenerate linear program in canonical form with d variables and n + d constraints.

   (a) Prove that every feasible basis has exactly d feasible neighbors.

   (b) Prove that every locally optimal basis has exactly n locally optimal neighbors.

2. Suppose you have a subroutine that can solve linear programs in polynomial time, but only if they are both feasible and bounded. Describe an algorithm that solves arbitrary linear programs in polynomial time.
Your algorithm should return an optimal solution if one exists; if no optimum exists, your algorithm should report that the input instance is UNBOUNDED or INFEASIBLE, whichever is appropriate. [Hint: Add one variable and one constraint.]

3. (a) Give an example of a non-empty polyhedron Ax ≤ b that is unbounded for every objective vector c.

   (b) Give an example of an infeasible linear program whose dual is also infeasible.

   In both cases, your linear program will be degenerate.

4. Describe and analyze an algorithm that solves the following problem in O(n) time: Given n red points and n blue points in the plane, either find a line that separates every red point from every blue point, or prove that no such line exists.

5. The single-source shortest path problem can be formulated as a linear programming problem, with one variable d_v for each vertex v ≠ s in the input graph, as follows:

       maximize    ∑_v d_v
       subject to  d_v ≤ ℓ_{s→v}          for every edge s→v
                   d_v − d_u ≤ ℓ_{u→v}    for every edge u→v with u ≠ s
                   d_v ≥ 0                for every vertex v ≠ s

   This problem asks you to describe the behavior of the simplex algorithm on this linear program in terms of distances. Assume that the edge weights ℓ_{u→v} are all non-negative and that there is a unique shortest path between any two vertices in the graph.

   (a) What is a basis for this linear program? What is a feasible basis? What is a locally optimal basis?

   (b) Show that in the optimal basis, every variable d_v is equal to the shortest-path distance from s to v.

   (c) Describe the primal simplex algorithm for the shortest-path linear program directly in terms of vertex distances. In particular, what does it mean to pivot from a feasible basis to a neighboring feasible basis, and how can we execute such a pivot quickly?

   (d) Describe the dual simplex algorithm for the shortest-path linear program directly in terms of vertex distances.
   In particular, what does it mean to pivot from a locally optimal basis to a neighboring locally optimal basis, and how can we execute such a pivot quickly?
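As a numeric companion to exercise 5, the following Python sketch checks part (b)'s claim on a small example. The 4-vertex graph, its weights, and all helper names are hypothetical choices for illustration, not from the notes. It computes shortest-path distances with Dijkstra's algorithm, verifies that they are feasible for the linear program above, and checks that every variable d_v lies on a tight constraint, so no single variable can be increased without violating feasibility.

```python
import heapq

# Hypothetical instance: non-negative weights with unique shortest paths from s.
edges = {
    ('s', 'a'): 1.0, ('s', 'b'): 4.0,
    ('a', 'b'): 2.0, ('a', 'c'): 6.0,
    ('b', 'c'): 1.0,
}

def dijkstra(source):
    """Textbook Dijkstra over the `edges` dict. Setting d_s = 0 lets the
    constraints d_v <= l(s->v) be treated as d_v - d_s <= l(s->v)."""
    dist = {source: 0.0}
    pq = [(0.0, source)]
    while pq:
        du, u = heapq.heappop(pq)
        if du > dist.get(u, float('inf')):
            continue   # stale queue entry
        for (a, v), w in edges.items():
            if a == u and du + w < dist.get(v, float('inf')):
                dist[v] = du + w
                heapq.heappush(pq, (du + w, v))
    return dist

d = dijkstra('s')

# Feasibility: d_v - d_u <= l(u->v) for every edge, and d_v >= 0.
for (u, v), w in edges.items():
    assert d[v] - d[u] <= w + 1e-9
assert all(x >= 0.0 for x in d.values())

# Maximality: every v != s has a tight incoming constraint (its shortest-path
# edge), so no single d_v can be increased while staying feasible.
for v in d:
    if v != 's':
        assert any(abs(d[u] + w - d[v]) < 1e-9
                   for (u, vv), w in edges.items() if vv == v)

print(d)
```

On this instance the computed distances are d_a = 1, d_b = 3, d_c = 4, and both checks pass, matching the claim that the optimal LP solution assigns each d_v its shortest-path distance.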