        Next: Random Number Generation Up: Numerical Problems Previous: Constrained and Unconstrained Optimization

## Linear Programming

Input description: A set S of n linear inequalities on m variables x_1, ..., x_m, each of the form a_i1 x_1 + ... + a_im x_m <= b_i for 1 <= i <= n, together with a linear optimization function f(X) = c_1 x_1 + ... + c_m x_m.

Problem description: Which variable assignment X' maximizes the objective function f while satisfying all inequalities S?

Discussion: Linear programming is the most important problem in mathematical optimization and operations research.     Applications include:

• Resource allocation -   We seek to invest a given amount of money so as to maximize our return. Our possible options, payoffs, and expenses can usually be expressed as a system of linear inequalities, such that we seek to maximize our possible profit given the various constraints. Very large linear programming problems are routinely solved by airlines and other corporations.
• Approximating the solution of inconsistent equations -   A set of m linear equations on n variables x_1, ..., x_n is overdetermined if m > n. Such overdetermined systems are often inconsistent, meaning that no assignment of variables simultaneously satisfies all the equations. To find the variable assignment that best fits the equations, we can add an error term to each equation and solve the new system as a linear program, minimizing the sum of the error terms.
• Graph algorithms - Many of the standard graph problems described in this book, such as shortest paths, bipartite matching, and network flow, can all be solved as special cases of linear programming.       Most of the rest, including traveling salesman, set cover, and knapsack, can be solved using integer linear programming.
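The inconsistent-equations trick above can be made concrete. The sketch below (our own variable ordering and invented data, not code from any package named in this section) builds, but does not solve, the L1-fitting linear program: for equations Ax = b, introduce one error variable e_i per equation and minimize the sum of the e_i subject to -e_i <= (Ax - b)_i <= e_i.

```python
# Build the LP "minimize sum of error terms" for an overdetermined system.
# Decision variables are ordered x_1..x_n, e_1..e_m; any LP solver could
# then be applied to the (c, A_ub, b_ub) triple this returns.

def l1_fit_lp(A, b):
    """Return (c, A_ub, b_ub) for: min c.z  s.t.  A_ub z <= b_ub, z = (x, e)."""
    m, n = len(A), len(A[0])
    c = [0.0] * n + [1.0] * m            # objective: minimize sum_i e_i
    A_ub, b_ub = [], []
    for i in range(m):
        e_col = [0.0] * m
        e_col[i] = -1.0
        #  (Ax)_i - e_i <= b_i
        A_ub.append(list(A[i]) + e_col)
        b_ub.append(b[i])
        # -(Ax)_i - e_i <= -b_i
        A_ub.append([-a for a in A[i]] + e_col)
        b_ub.append(-b[i])
    return c, A_ub, b_ub

# Three inconsistent equations in one unknown: x = 1, x = 2, x = 4.
c, A_ub, b_ub = l1_fit_lp([[1.0], [1.0], [1.0]], [1.0, 2.0, 4.0])
print(len(c), len(A_ub))   # 4 variables (x, e1, e2, e3), 6 inequality rows
```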

The standard algorithm for linear programming is called the simplex method.   Each constraint in a linear programming problem acts like a knife that carves away a region from the space of possible solutions. We seek the point within the remaining region that maximizes (or minimizes) f(X). By appropriately rotating the solution space, the optimal point can always be made to be the highest point in the region. Since the region (simplex) formed by the intersection of a set of linear constraints is convex, we can find the highest point by starting from any vertex of the region and walking to a higher neighboring vertex. When there is no higher neighbor, we are at the highest point.
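The geometric picture above can be illustrated on a toy instance. The sketch below (an invented example, not code from any solver mentioned here) enumerates the vertices of a small two-dimensional feasible region and picks the best one, showing that the optimum of an LP lies at a vertex of the region; real simplex codes walk from vertex to better vertex instead of enumerating them all.

```python
# Brute-force 2-D LP for intuition only: intersect each pair of constraint
# boundaries, keep the feasible intersection points (the vertices), and
# return the one maximizing the objective.
from itertools import combinations

# Constraints a*x + b*y <= c, with x >= 0 and y >= 0 rewritten in <= form.
constraints = [
    (1, 1, 4),     # x + y  <= 4
    (1, 3, 6),     # x + 3y <= 6
    (-1, 0, 0),    # -x <= 0, i.e. x >= 0
    (0, -1, 0),    # -y <= 0, i.e. y >= 0
]

def feasible(x, y, eps=1e-9):
    return all(a * x + b * y <= c + eps for a, b, c in constraints)

def solve(objective):  # objective = (cx, cy), to be maximized
    best = None
    for (a1, b1, c1), (a2, b2, c2) in combinations(constraints, 2):
        det = a1 * b2 - a2 * b1
        if abs(det) < 1e-12:            # parallel boundaries: no vertex
            continue
        x = (c1 * b2 - c2 * b1) / det   # Cramer's rule for the 2x2 system
        y = (a1 * c2 - a2 * c1) / det
        if feasible(x, y):
            val = objective[0] * x + objective[1] * y
            if best is None or val > best[0]:
                best = (val, x, y)
    return best

print(solve((3, 2)))   # maximize 3x + 2y -> optimum 12 at vertex (4, 0)
```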

While the basic simplex algorithm is not too difficult to program, there is considerable art to producing an efficient implementation capable of solving large linear programs. For example, large programs tend to be sparse (meaning that most inequalities use few variables), so sophisticated data structures must be used. There are issues of numerical stability and robustness, as well as the question of which neighbor we should walk to next (so-called pivoting rules). Finally, there exist sophisticated interior-point methods, which cut through the interior of the simplex instead of walking along its boundary, and which beat simplex in many applications.

The bottom line on linear programming is this: you are much better off using an existing LP code than writing your own. Further, you are much better off paying money than surfing the net.   Linear programming is one algorithmic problem of such economic importance that commercial implementations are far superior to free versions.

Issues that arise in linear programming include:

• Do any variables have integrality constraints? -   It is impossible to send 6.54 airplanes from New York to Washington each business day, even if that value maximizes profit according to your model. Such variables often have natural integrality constraints. A linear program is called an integer program when all its variables have integrality constraints, or a mixed integer program if only some of them do.

Unfortunately, it is NP-complete to solve integer or mixed integer programs to optimality. However, there are techniques for integer programming that work reasonably well in practice. Cutting plane techniques solve the problem first as a linear program, then add extra constraints to enforce integrality around the optimal solution point before solving it again. After a sufficient number of iterations, the optimum point of the resulting linear program matches that of the original integer program. As with most exponential-time algorithms, run times for integer programming depend upon the difficulty of the problem instance and are unpredictable. If a code does not make progress quickly, it is unlikely to make much progress over longer periods of time. Therefore, if you have multiple implementations available, it may well pay to try the same problem using different codes in the hopes that one can complete in a reasonable amount of time.
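The gap between a linear program and its integer version can be seen on a toy instance of our own devising: maximize x + y subject to 2x + 2y <= 3 with x, y >= 0. The LP optimum is 1.5 (for example at x = 1.5, y = 0), but rounding that up violates the constraint; the true integer optimum is 1, which a tiny brute-force search confirms.

```python
# Enumerate all small nonnegative integer points satisfying 2x + 2y <= 3
# and take the best objective value x + y. The LP relaxation achieves 1.5,
# so naive rounding of the fractional optimum is not a safe strategy.
best = max(x + y for x in range(4) for y in range(4) if 2 * x + 2 * y <= 3)
print(best)   # integer optimum: 1, strictly below the LP value 1.5
```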

• What if my optimization function or constraints are not linear? - In least-squares curve fitting, we seek the line that best approximates a set of points by minimizing the sum of squares of the distance between each point and the line. In formulating this as a mathematical program, the natural objective function is no longer linear, but quadratic. Unfortunately, quadratic programming is NP-complete, even without integer variables.

There are three possible courses of action when you must solve a nonlinear program. The best is to see if you can model it in some other way, as is the case with least-squares fitting. The second is to try to track down special codes for quadratic programming, which do exist. Finally, you can model your problem as a constrained or unconstrained optimization problem and try to solve it with the codes discussed in the section on constrained and unconstrained optimization.
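The least-squares line fit mentioned above is a good example of "modeling it some other way": for a line y = a*x + b it has a closed-form solution via the normal equations, so no quadratic programming code is needed. A pure-Python sketch (the point data is an invented example):

```python
# Closed-form least-squares fit of a line y = a*x + b to (x, y) points,
# derived from the normal equations of the sum-of-squares objective.

def fit_line(points):
    """Return the least-squares slope a and intercept b for y = a*x + b."""
    n = len(points)
    sx = sum(x for x, _ in points)
    sy = sum(y for _, y in points)
    sxx = sum(x * x for x, _ in points)
    sxy = sum(x * y for x, y in points)
    a = (n * sxy - sx * sy) / (n * sxx - sx * sx)
    b = (sy - a * sx) / n
    return a, b

print(fit_line([(0, 1), (1, 3), (2, 5)]))   # collinear data: exactly y = 2x + 1
```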

• What if my model does not match the input format of my LP solver? - Many linear programming implementations accept models only in so-called standard form, where all variables are constrained to be nonnegative, the objective function must be minimized, and all constraints must be equalities (instead of inequalities). Do not fear. There exist standard transformations to map arbitrary LP models into standard form. To convert a maximization problem to a minimization one, simply multiply each coefficient of the objective function by -1. The remaining problems can be solved by adding slack variables to the model. See any textbook on linear programming for details.
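The standard-form transformation described above can be sketched in a few lines. The conventions here are our own choice of input shape ("maximize c.x subject to Ax <= b, x >= 0") and the example data is invented; the output is the equivalent "minimize c'.x' subject to A'x' = b', x' >= 0" with one slack variable per inequality.

```python
# Convert "maximize c.x s.t. A x <= b, x >= 0" into standard form:
# negate the objective (max -> min) and add a slack variable per row
# to turn each inequality into an equality.

def to_standard_form(c, A, b):
    m = len(A)
    c_min = [-cj for cj in c] + [0.0] * m     # maximize -> minimize
    A_eq = []
    for i in range(m):
        slack = [0.0] * m
        slack[i] = 1.0                        # a.x + s_i = b_i, with s_i >= 0
        A_eq.append(list(A[i]) + slack)
    return c_min, A_eq, list(b)

c_min, A_eq, b_eq = to_standard_form([3, 2], [[1, 1], [1, 3]], [4, 6])
print(c_min)   # [-3, -2, 0.0, 0.0]
print(A_eq)    # [[1, 1, 1.0, 0.0], [1, 3, 0.0, 1.0]]
```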

Implementations: A very useful resource on solving linear programs is the USENET frequently asked questions (FAQ) list, maintained by John W. Gregory. In particular, it provides a list of available codes with descriptions of experiences. Check out the plaintext version at ftp://rtfm.mit.edu/pub/usenet/sci.answers/linear-programming-faq or a slicker WWW version at http://www.skypoint.com/~ashbury/linear-programming-faq.html.

The noncommercial code of choice appears to be lp_solve,     written in ANSI C by Michel Berkelaar, who has solved problems as large as 30,000 variables and 50,000 constraints. Lp_solve can also handle (smaller) integer and mixed-integer problems. It is available by anonymous ftp from ftp://ftp.es.ele.tue.nl/pub/lp_solve but is not in the public domain. A user community for lp_solve exists, which has ported it to a variety of different platforms.

NEOS (Network-Enabled Optimization System) provides a unique service, an opportunity to solve your problem on computers and software at Argonne National Laboratory via the WWW.   Linear programming and unconstrained optimization are both supported. This is worth checking out at http://www.mcs.anl.gov/home/otc/Server/ if you need an answer instead of a program.

If you are serious about solving large linear programs, you likely need a commercial implementation. The book [MW93] provides an overview of commercial linear programming systems, online at http://www.mcs.anl.gov/home/otc/Guide/SoftwareGuide/index.html. Surveys of commercial LP codes appear in [SR95, Sha93] and in the linear programming FAQ. I have heard good things from various people about CPLEX and AMPL, but do your own research before spending money.

For low-dimensional linear programming problems, computational geometry algorithms can outperform more general LP codes.   See ftp://icemcfd.com/pub/linprog.a for a C language implementation of Seidel's randomized incremental LP algorithm, by Mike Hohmeyer.

Algorithm 551 [Abd80] and Algorithm 552 [BR80] of the Collected Algorithms of the ACM are simplex-based codes for solving overdetermined systems of linear equations, in Fortran.     See Section for details.

Pascal implementations of the revised and dual simplex methods for linear programming, as well as cutting plane and explicit enumeration algorithms for integer programming, are provided in [SDK83].   See Section . These are likely to work only for small problems.

Sedgewick [Sed92] provides a bare bones implementation of the simplex algorithm in C++.   See Section for details.

Notes: Good expositions on the simplex and ellipsoid algorithms for linear programming include [PS82, Chv83].   Expositions on low-dimensional linear programming include [PS85]. For an implementation-oriented exposition on linear and integer programming, with references to experimental work, see [SDK83].

The need for optimization via linear programming arose in logistics problems during World War II. The simplex algorithm was invented by George Dantzig in 1947 [Dan63]. Klee and Minty [KM72] proved that the simplex algorithm can require exponential time in the worst case, but it is very efficient in practice. Khachian's ellipsoid algorithm [Kha79] proved in 1979 that linear programming can be solved in polynomial time. Karmarkar's algorithm [Kar84] is an interior-point method that has proven to be both a theoretical and practical improvement over the ellipsoid algorithm, as well as a challenge to the simplex method.

Linear programming is P-complete under log-space reductions [DLR79]. This makes it unlikely to have an NC parallel algorithm, where a problem is in NC iff it can be solved on a PRAM in polylogarithmic time using a polynomial number of processors. Any problem that is P-complete under log-space reductions cannot be in NC unless P = NC. See [GHR95] for a thorough exposition of the theory of P-completeness, including an extensive list of P-complete problems.

Related Problems: Constrained and unconstrained optimization (see page ), network flow (see page ).

Algorithms
Mon Jun 2 23:33:50 EDT 1997