2006:285 CIV
MASTER'S THESIS

SOLVING LINEAR OPTIMIZATION PROBLEMS USING A SIMPLEX LIKE BOUNDARY POINT METHOD IN DUAL SPACE

by
Per Bergström

Department of Mathematics
Luleå University of Technology
SE-971 87 Luleå, Sweden

MSc Programmes in Engineering, Engineering Physics
2006:285 CIV - ISSN: 1402-1617 - ISRN: LTU-EX--06/285--SE

October 1, 2006

ACKNOWLEDGMENTS

First of all I would like to thank my supervisor, associate professor Ove Edlund. He has taken his doctor's degree in scientific computing, and a major part of his doctoral thesis deals with linear programming. The interested reader can find it on the web; there is a link to Ove's thesis in the references of this thesis, reference [1]. It was perfect to have a supervisor like Ove when experimenting with and writing about linear programming. It is always good to have a supervisor experienced in the subject when doing research.

This thesis has been time consuming; at least it took a long while from start to finish. Before I finished my master's thesis I was offered an interesting trainee job at the Swedish company Tomlab¹, the world-wide leading company in optimization solvers for MATLAB® and LabVIEW™. The company has offices in Västerås (Sweden) and San Diego (USA, CA), and I was placed in San Diego. It was very exciting to live for a while in another part of the world than northern Sweden, where I was born and usually live. The trainee job at Tomlab had no direct connection with this thesis, but the job gave me an understanding of how extremely important optimization is for society today. It also gave me lots of experience in computer science and, last but not least, knowledge of what it is like to work in industry. It was not a hard decision to accept the job offer from Tomlab, even though I was forced to finish this thesis during the lovely Swedish summer.

Luleå, October 1, 2006
Per Bergström

¹http://www.tomlab.biz/

TABLE OF CONTENTS

LIST OF TABLES
1. INTRODUCTION
2. LINEAR OPTIMIZATION AND SPARSE MATRICES
4. SIMPLEX BOUNDARY METHOD
   4.1. An improvement
   4.2. Dual problem
5. TEST RESULT
   5.1. Dense matrices
   5.2. Network flow problems
   5.3. Netlib test problems
6. CONCLUSION
7. OTHER METHODS
APPENDIX A. SKETCH OF A CONVERGENCE PROOF
APPENDIX B. LPMIN.M
APPENDIX C. LPMIN_FLOW.M
REFERENCES

LIST OF TABLES

TABLE 2.1. The yearly capacities and requirements in suitable units.
TABLE 2.2. Unit costs for supplying each customer from each supplier.
TABLE 2.3. The swimmers' personal bests expressed in seconds.
TABLE 5.1. Comparison between two different algorithms. m is the number of equality constraints and t is measured in seconds.
TABLE 5.2. Comparison between two different algorithms. m is the number of equality constraints and t is measured in seconds.
TABLE 5.3. Results from comparison with problems from simsys_sparse. m is the number of equality constraints and t is measured in seconds.
TABLE 5.4. Results from comparison with problems from simsys_sparse2. m is the number of equality constraints and t is measured in seconds.
TABLE 5.5. Results from comparison with problems from Netlib. t is measured in seconds.
2. LINEAR OPTIMIZATION AND SPARSE MATRICES

In applied linear programming the matrix is often huge and sparse. The sparse pattern arises because of weak relations between the decision variables: every variable has a direct connection with only a couple of the other variables among perhaps thousands of them. A detailed example of this follows in section 2.3. In this chapter, examples of linear programming problems from the real world are described. There is a lot of literature with examples of applied linear programming; see for example [4], [5] or [6].

One of many types of problems is the transportation problem. In [4] Williams gives an example of such a problem; on page 73 the following example can be found.

2.1. Example 1. A Transportation Problem

Three suppliers (S1, S2, S3) are used to provide four customers (T1, T2, T3, T4) with their requirements for a particular commodity over a year. The yearly capacities of the suppliers and the requirements of the customers are given in table 2.1 (in suitable units).

TABLE 2.1. The yearly capacities and requirements in suitable units.

Suppliers                  S1    S2    S3
Capacities (per year)     135    56    93

Customers                  T1    T2    T3    T4
Requirements (per year)    62    83    39    91

The unit costs for supplying each customer from each supplier are given in table 2.2 (in £/unit).

TABLE 2.2. Unit costs for supplying each customer from each supplier.

                       Customers
                  T1    T2    T3    T4
            S1 | 132     -    97   103
Suppliers   S2 |  85    91     -     -
            S3 | 106    89   100    98

A dash indicates that it is impossible to supply a certain customer from a certain supplier. The total capacity of the suppliers, 135 + 56 + 93 = 284, is greater than the total requirement of the customers, 62 + 83 + 39 + 91 = 275. To get an easier structure in the problem a new customer (a so-called dummy customer) is introduced, with a requirement of 284 - 275 = 9 units per year. With the new dummy customer the total capacity of the suppliers is equal to the total requirement of the customers. The problem can be viewed in a graph; see figure 2.1.
Of course you can take examples from literature but it takes lots of time to write it down and these problems are also quite small. In real life these problems are much larger, with perhaps hundreds of thousands variables. To get problems of that size it is impossible to write it down from literature, because of no literature has examples of that size, and if they have, it should take to much time to transfer the problems correctly. One way to evade this is to let a computer program generates test problems. Network flow problems can be shown in a graph. A graph consists of nodes and arcs. Each node is connected with at least one other node by an arc (or many arcs). The graph is connected. That means that it is a connection between all the nodes in one or another way. In the network there are production, consumption and a flow. The production and consumption are at the nodes and the flow is in the arcs. One node is viewed in figure 2.3. FIGURE 2.3. Node with net production, inflow and outflow. net production > > total inflow “7 total outflow At each node there is a total inflow, a total outflow and a net production. Nothing of the flow disappears and therefore must the equation (2.2) hold at each node. total outflow — total inflow = net production (2.2) Because of the fact that the graph is connected it is a connection between all the nodes. In the type of problem that is generated here the graph is undirected. That means that the flow can go in both directions between each node. Se figure 2.4. 2. Linear Optimization and Sparse Matrices FIGURE 2.4. Undirected graph. In the continuation one undirected arc will be used instead of two directed arcs between the nodes. For every node the equation (2.2) must hold. The flow from node i to node j is denoted by 2;,;. Of course, the flow must be greater then zero and the flow might also have an upper limit (unlike the example problems in the previous chapter). Each node has a number of inflows and a number of outflows. Some of them have also a net production different from zero. It is the cost of the total flow through the network you want to minimize. This kind of problem is an LP-problem of the form (2.3). minc!x x s.t. 9 ¢ 2.3 Ax=b (2.3) x>0 The rows in A consist of information from corresponding node. The matrix will have a distinctly sparse pattern. Every column in A consists of one 1 and one —1. The remaining elements are zero. The number of columns in A is the same as the number of arcs in the graph. This is the same as the number of variables in the problem. A quite large problem can be viewed in figure 2.5. This flow problem has 1024 nodes and 5052 arcs. 16 2. Linear Optimization and Sparse Matrices | L | 200 400 600 800 FIGURE 2.7. Graph with optimal solution. | 600 600 800 600 - 400 - 200 - 200 400 - -600 - 80] 19 2. Linear Optimization and Sparse Matrices FIGURE 2.8. Structure plot of A. noo be A os A ee oO 1000 2000 3000 4000 6000 nz= 11428 The graph of this problem has another structure and the structure plot of the matrix also shows that the pattern of the matrix is different. This is because of the difficulty to number the nodes. The number of nodes is the same as in the example before, 1024, but the number of arcs is more, in fact 5714. That means that this problem has 5714 decision variables. Likewise in this problem the number of non-zeros in each column is two, so the share of non-zeros is the same. 
As mentioned in the beginning of this chapter the reason of making a pro- gram that generates test problems is to get problems prepared to solve with some method. This type of problem is quite representative for general LP-problems from the reality. The matrix that belongs to the problem is huge and sparse. One way to solve this kind of problems is to test all the possible combinations. With a problem of the same size as the latest present the number of different combinations are about 10“. For every combination you have to solve a sparse linear equation system of size 1023 x 1023. That time is small but not negligible. Let say it takes about 107° seconds for a (very) fast computer to test each combination. Then it takes 10115 seconds to solve the problem. It is about 10'!° times the age of the universe. There is no chance to solve this kind of problem by testing all the combinations. You have to use a much more clever method. The programs that generate the two different test problems can be find on the Internet at Mathworks file exchange central‘. The program that generates test problems like the one in figure 2.5 have the name simsys_sparse and the program that generates test problems like the one in figure 2.7 have the name simsys_sparse2. Athanasios Migdalas has written a book on network problem. That book can be found in the references, [8], and it contains lots of network related problems. A realistic and complex special case of minimum cost network flow problem is when you have one sink and one source and the cost is proportional to the distance between the nodes. Then the optimal solution can be interpreted as the shortest path in a road network system. A modern application of this is trip planning with GPS. A GPS is not as fast as a stationary computer so then it is very important to solve the problem very efficiently. See figure 2.9 for a trip planning example. Here is the shortest path between Lulea University of Technology and Lulea Airport Inttp: //www.mathworks . com/matlabcentral/fileexchange/loadFile.do?objectId= 8845&0bjectType=file 20 2. Linear Optimization and Sparse Matrices marked with a thicker line. FIGURE 2.9. The shortest path between Lulea University of Technology and Lulea. Airport. (Source: www.eniro.se) cl , 21 4. Simplex boundary method Ficure 4.1. Illustration of convergence. Feasible region *%; When a feasible point is reached, the search for optimum begins. The search direction will be the steepest feasible search direction. That means that if the iteration point is an inner point the search direction will be the negative gradient vector, and if the iteration point is a boundary point the search direction will be the negative gradient vector projected onto the equality fulfilled constraints. The length of every iteration step is adjusted so the step will be as large as possible in the same time the new iteration value will be at the boundary to the feasible region. After some iteration the iteration point will be at a corner and then the simplex phase-II starts. 4.1. An improvement One problem with the method present in the D-level thesis [10] is that even if the function value at the first corner point is pretty close to the optimal function value the number of simplex steps will be huge, especially for large problems. One way to improve this phenomenon is that when you have get a feasible point, you don’t start immediately by searching for a feasible corner. Instead you try to get a more centralized point in the feasible region. 
After that you go a small step in the negative (if it is a minimum problem) gradient direction. Then you take a step perpendicular to the gradient vector to get a more centralized point. After that a small step in the negative gradient direction has to be done, and so on. After about m steps (the number of variables) you will be closer to optimum and quite centralized. The hard thing is to centralize the point. That is also the most expensive. How good this works out depends on the geometry of the feasible region. 4.2. Dual problem The example problems in chapter 2 were on another form than (4.1), but there is a relation between them. The problems (2.3) and (4.1) are dual to each other, with the exception that the dual problem is a maximization problem. That means it is the same to solve problem (4.3) and problem (4.4). 24 4. Simplex boundary method max bly ¥ ¢ s.t. (4.3) Aty<c minc!x x s.t. Ax=b (4-4) x>0 At optimum c'x = b'y and the constraints in (4.3) that are satisfied with equal ity, correspond to the elements in x are equal to zero in (4.4). For all feasible x and y, c'x > by. That causes one problem since if you have an approximate solution in the problem (4.3) you can’t use it in (4.4). One way to overcome that problem is, when you have found a corner point, which is a good approximation of optimum, you use it in the same way when you are going back to the primal problem, as you do when you have found the absolute optimum. Unfortunately, that point has a function value less than the optimal solution. That means that some of the decision variables x; in x will be less than zeros and hence outside of the feasible set of solutions. To overcome that problem you can use a similar method to reach the feasible region as done in the dual problem. That means first of all, all 2; < 0 will be put to zero. After that you solve (4.5) which is an easy problem to solve. min(x — x,)? s.t. (4.5) Ax=b In (4.5) x; is the previous x, and x is the new. Some of the new elements 2x; in x are less than zero. The «; that are less than zeros can be put to zero. Then (4.5) will be solved again and so on. With this method x will converge to the feasible set, unfortunately not to optimum. It might be better to take some simplex step too many than too few. One more thing that has to be mentioned is when you for example solve network flow problem you can make use of the characteristics of that type of problem. First of all, all the elements in c are bigger or equal to zero. That means that no time has to be spent in searching for a feasible point. The part in the algorithm where it tries to find a centralized close-to-optimum point might also be skipped. The special structure makes it unnecessarily to do that. That is because of when a feasible corner is reached with the first part in the algorithm, it will not require so many simplex steps, even with a uncentralized point. Therefore with this kind of problem you can use a reduced algorithm that directly starts with finding a feasible corner point and then starts the simplex method. 5. ‘TEST RESULT In this chapter follows the test results. The algorithm is implemented in MAT- LAB with the name 1pmin. Comparison has been done with MATLAB’s own solver, linprog, which is included in MATLAB’s optimization toolbox. linprog is using different methods for Medium-Scale Optimization and Large-Scale Optimization. For more information about linprog and its solver methods, see the documenta- tion of Linprog on Mathworks homepage!. 5.1. 
Dense matrices When the D-level thesis [10] was made a program that generates a kind of test problem was also made. Here exactly the same principle is used but the problems are in the problem format (2.3). The matrices are dense. This is not the network flow problem that was described in section 2.3 in this thesis.First a quite easy problem, where c > 0. The result is shown in table 5.1. TABLE 5.1. Comparison between two different algorithms. m is the number of equality constraints and t is measured in seconds. m | lpmin (t,) | linprog (t:) | te/ty 50 | 0.71 1.87 2.63 100 | 2.09 10.82 5.18 150 | 6.31 33.78 5.35 200 | 17.79 70.69 3.97 250 | 45.31 159.78 3.53 300 | 76.34 297.04 3.89 The last column tells the time quota, a comparison between the two different solvers. Here all the quotas are bigger than one and therefore it is obvious that lpmin is faster in this case A comparison test when the elements in c have both positive and negative element values have also being done. That means it is harder to find a feasible solution in the dual problem. The result for this can be find in table 5.2. Inttp: //www.mathworks . comn/access/helpdesk/help/toolbox/optim/ug/linprog. html 26 5. Test result 5.3. Netlib test problems The last comparison test is for LP test problem from Netlib. The Netlib test problems and their description can be found on the internet? at the link given in the footnote. All the Netlib problems are not compared in this test. One reason is that many of the LP problems have an upper bound on their decision variable and hence can not be solved by 1pmin. One other reason, which is more serious, is that round-off errors made lpmin useless in some cases were 1pmin failed to find the optimum. The result for some of the problems can be find in table 5.5. TABLE 5.5. Results from comparison with problem from Netlib. ¢ is measured in seconds. m lpmin (t,) | linprog (ta) | to/t afiro 0.07 0.06 0.86 bandm* 5.79 0.39 0.068 blend* 0.15 0.09 0.60 israel* 1.15 1.23 1.07 sc50a* 0.14 0.04 0.28 sc50b.mat* 0.12 0.03 0.25 sc105* 0.34 0.061 0.18 sc205* 0.99 0.11 0.11 scagr7.mat* | 0.54 0.07 0.13 scagr25.mat* | 11.0 0.31 0.028 sctap1.mat* | 4.00 0.25 0.062 share1b* 0.70 0.20 0.29 share2b* 0.45 0.09 0.20 stocfor1* 0.34 0.10 0.29 * Slightly wrong result. The test result which is given in table 5.5 tells that lpmin is not the best choice of solver for this kind of problem. It was only one of the Netlib problems 1pmin was able to solve with a correct solution. In some of the problems 1pmin returned a quite bad solution and there also exists problems were lpmin did not return a solution at all because of difficulties with round-off errors. °http://www.netlib.org/lp/data/ 29 6. CONCLUSION As everyone can see in chapter 5 the results differ a lot between the different kinds of test problems. For the problems with dense matrices 1pmin was faster than linprog. That follows from the tables 5.1 and 5.2. In other words lpmin is a better choice of solver for these kind of problems than linprog is. In the D-level thesis, [10], where the problem was in the form (4.1), the result was even much better for lpmin compared with linprog. The problems were after all generated in the same way as the Dense matrices problem in this thesis. With the format (4.1) was the time comparison t2/t; growing in the same rate as the number of variables were growing. With the network flow problems linprog was faster as follows from tables 5.3 and 5.4. One possibility is that linprog has a more efficient sparse matrix handling than lpmin. 
Another possibility is that the structure of the problem makes it more efficient to solve with the method used in linprog. These two things makes it better to use linprog then 1pmin for these kind of problem. The result for the Netlib problems was not in favour for 1pmin which follows from table 5.5. For some of the Netlib problems 1pmin failed because of round-off errors and for some problems 1pmin did not find the absolute optimum. This tells that lots of work on the algorithm in 1pmin is still remaining. From this it follows that for some kind of problem one solver method might be the best and for another type of problem another solver method might be the best. If a method is perfect for problems in some application it can totally fail in some other application. Consequently, the result for lpmin was blended. This algorithm turns definite out to be a promising first attempt. The result for the first kind of test problems, the one with dense matrices, was eminent for lpmin. The comparison was done with a commercial solver, but 1pmin solved it much faster. On the other hand, the network flow problem and the Netlib problem could have been solved faster and more accurate by lpmin. A Swedish proverb might be used in this case, Rome was not built in one day. That means that it takes a lot of time to construct a complicated work and it will never be finished. It is the same with algorithm development. Lots of time has been spent to develop this algorithm, lpmin, but much more work is still remaining to make the algorithm complete. One thing to make 1pmin to a better solver is to translate the MATLAB-code to for example C-code. That will speed up the calculations which will do lpmin to a much faster solver. Another thing is to do lpmin better in sparse matrix handling. That would do lpmin more competitive compared with commercial solvers. 30 7. OTHER METHODS There are several different methods that solve linear optimization problems. The simplex method is the most known but it is not the only method. As mentioned in the introduction there is also a method type called inner point method. That type of method works in a quite different way. The first distinct difference is that with the inner point method the iteration values are inside the feasible region, not on the boundary as in the simplex method. That explains the name of the method type. There are lots of different kinds of inner point methods. One method is explained in reference [11] and [12]. These reports can be found on the Internet, on the page Higher Order Primal Dual Method!, which is a site were lots of optimization stuff can be found. The method described in [11] and [12] works in a way that the primal- and dual problem are solved in the same time. The lower and upper bounds of the decision variable are replaced with a special function which is put into the objective function. This boundary function can looks like in many different ways. In the reports [11] and [12] the authors use a logarithmic barrier function to replace the bounds with. It is called a barrier function because it will not be defined outside of the feasible region. Another type of function that can be used to replace the bounds of the decision variable with is a penalty function. It is a function that is defined outside the feasible region but outside the feasible region its function values are of the kind that the iteration value will be forced into the feasible region. An analyze of this has been done in Article III in [1]. 
One advantage with the inner point methods are that they are quite fast for large scale problems. The simplex method might be very slow for large scale problems. That is because of the number of corners grow exponentially with respect to the size of the problem. Every iteration step goes quickly but the numbers of iterations will be very large. Another advantage with inner point methods is that you can stop whenever you want, and if it has converged well, you will get a good approximate value to optimum. The big disadvantage is that in every iteration an ill conditioned equation system has to be solved. It is very hard to solve an ill conditioned equation system with numerical methods to get a correct solution. The solution you will get might be completely wrong. That can cause troubles when using inner point methods. ‘http: //www.maths .ed.ac.uk/~gondzio/software/hopdm. html 31 Appendix B. lpmin.m for j=1:4 co=0; for i=randperm(n) sc=c(i)-y’*Aeq(: ,i); if sc<O co=cot1; y=y+(sc/colnorm(i)*2)*Aeq(: ,i); end end if co== bol=0; break end end if bol yO=1.5*y; for j=1:2 co=0; for i=randperm(n) sc=c(i)-y0’*Aeq(:,i); if sc<0 co=co+1; y0=y0+(sc/colnorm(i)*2)*Aeq(:,i); end end if co== bol=0; y=y0; break end end if bol y=0.5* (yO+y) ; yory; for j=1:3 co=0; for i=randperm(n) sc=c(i)-y0’*Aeq(: ,i); if sc<O co=co+1; yO=y0+1.9* (sc/colnorm(i)*2)*Aeq(: ,i); end 34 Appendix B. lpmin.m end if co==| bol=0; y=y0; break end end if bol for j=1:3 co=0; for i=randperm(n) sc=c(i)-y’*Aeq(: ,i); if sc<O co=co+1; y=yt1.1*(sc/colnorm(i)*2)*Aeq(: ,i); end end if co== bol=0; break end end if bol y=0.5* (y+y0) ; for j=1:5 co=0; for i=randperm(n) sc=c(i)-y’*Aeq(:,i); if sc<O co=cotl; y=y+(0.8+0.4*rand) *(sc/colnorm(i)*2) *Aeq( end end if co== bol=0; break end end end end end 12)5 Appendix B. lpmin.m end else y=zeros(m,1); end warning off c_Ay=c’-y’*Aeq; mov=c_Ay./colnorm; mimov=min (mov) ; test=0; if mimov<0 c_Ay=c_Ay-(2*mimov) *colnorm; test=1; 3 dmov=1; elseif mimov<=1le-4 mimov=-min(c_Ay(c_Ay>1e-4)) ; c_Ay=c_Ay-(2*mimov) *colnorm; bol=mov<=1e-4; dir=Aeq(: ,bol)*(1./colnorm(bol)’) ; diA=dir’ *Aeq; tvec=c_Ay./diA; t1=0.5*min(tvec(diA>0)); t2=mean([0 max(tvec(diA<0))]); if t2== t2=-100*t1; end t=t1+t2; y=yttedir; c_Ay=c_Ay-t*diA; test=1; dmov=1; end nob=beq/norm(beq) ; bA=beq’ *Aeq; step=1-0.9577 (130/m) ; yO=y; c_Ay0=c_Ay; ys=y; c_Ays=c_Ay; 36 Appendix B. lpmin.m end end y=y+ ((1-step) *mi)*beq; c_Ay=c_Ay-((1-step) #mi) *bA; ind=abs(c_Ay)<1e-15; numind=1:n; tt=1; nop=1e-20; x=zeros(n,1); % find a feasible corner (in dual space) for i=1:(m-1) tt=Aeq(: , ind) \beq; if i>0.95+m hl=(beq’ *Aeq(:,ind))’; r=hl-((Aeq(: , ind) *tt)’*Aeq(:,ind))’; or; rrp=r’*r; for j=1:5 if norm(r)<1e-16 break end Aq=((Aeq(: , ind)*q)’*Aeq(: ,ind))’; alfa=rrp/(q’*Aq) ; tt=tttralfarq; r=r-alfa*Aq; rr=r’*r; beta=rr/rrp; g=r+beta*q; Yrp=rr; end end dir=beq-Aeq(:,ind)*tt; no=norm(dir) ; if no/nop<i1e-10 if min(tt)>=-1le-8 x(ind)=tt; break end dir=randn(m, 1); 39 Appendix B. lpmin.m dir=dir/norm(dir) ; tt=Aeq(:,ind)\dir; if i>0.95+m hl=(dir’*Aeq(:,ind))’; r=hl-((Aeq(: ,ind)*tt)’*Aeq(:,ind))’; ar; rrp=r’ *r; for j=1:5 if norm(r)<le-16 break end Ag=((Aeq(: ,ind)*q)’*Aeq(:,ind))’; alfa=rrp/(q’*Aq) ; tt=tttalfa*q; r=r-alfa*Aq; rr=r? 
APPENDIX B. LPMIN.M

% ...
% (fragment: randomized sweeps pushing y towards dual feasibility,
% i.e. towards A'y <= c)
for j=1:4
    co=0;
    for i=randperm(n)
        sc=c(i)-y'*Aeq(:,i);
        if sc<0
            co=co+1;
            y=y+(sc/colnorm(i)^2)*Aeq(:,i);
        end
    end
    if co==0
        bol=0;
        break
    end
end
if bol
    y0=1.5*y;
    for j=1:2
        co=0;
        for i=randperm(n)
            sc=c(i)-y0'*Aeq(:,i);
            if sc<0
                co=co+1;
                y0=y0+(sc/colnorm(i)^2)*Aeq(:,i);
            end
        end
        if co==0
            bol=0;
            y=y0;
            break
        end
    end
    if bol
        y=0.5*(y0+y);
        y0=y;
        for j=1:3
            co=0;
            for i=randperm(n)
                sc=c(i)-y0'*Aeq(:,i);
                if sc<0
                    co=co+1;
                    y0=y0+1.9*(sc/colnorm(i)^2)*Aeq(:,i);
                end
            end
            if co==0
                bol=0;
                y=y0;
                break
            end
        end
        if bol
            for j=1:3
                co=0;
                for i=randperm(n)
                    sc=c(i)-y'*Aeq(:,i);
                    if sc<0
                        co=co+1;
                        y=y+1.1*(sc/colnorm(i)^2)*Aeq(:,i);
                    end
                end
                if co==0
                    bol=0;
                    break
                end
            end
            if bol
                y=0.5*(y+y0);
                for j=1:5
                    co=0;
                    for i=randperm(n)
                        sc=c(i)-y'*Aeq(:,i);
                        if sc<0
                            co=co+1;
                            y=y+(0.8+0.4*rand)*(sc/colnorm(i)^2)*Aeq(:,i);
                        end
                    end
                    if co==0
                        bol=0;
                        break
                    end
                end
            end
        end
    end
end
else
    y=zeros(m,1);
end
warning off
c_Ay=c'-y'*Aeq;
mov=c_Ay./colnorm;
mimov=min(mov);
test=0;
if mimov<0
    c_Ay=c_Ay-(2*mimov)*colnorm;
    test=1;
    dmov=1;
elseif mimov<=1e-4
    mimov=-min(c_Ay(c_Ay>1e-4));
    c_Ay=c_Ay-(2*mimov)*colnorm;
    bol=mov<=1e-4;
    dir=Aeq(:,bol)*(1./colnorm(bol)');
    diA=dir'*Aeq;
    tvec=c_Ay./diA;
    t1=0.5*min(tvec(diA>0));
    t2=mean([0 max(tvec(diA<0))]);
    if t2==0
        t2=-100*t1;
    end
    t=t1+t2;
    y=y+t*dir;
    c_Ay=c_Ay-t*diA;
    test=1;
    dmov=1;
end
nob=beq/norm(beq);
bA=beq'*Aeq;
step=1-0.95^(130/m);
y0=y;
c_Ay0=c_Ay;
ys=y;
c_Ays=c_Ay;
% ...
    end
end
y=y+((1-step)*mi)*beq;
c_Ay=c_Ay-((1-step)*mi)*bA;
ind=abs(c_Ay)<1e-15;
numind=1:n;
tt=1;
nop=1e-20;
x=zeros(n,1);
% find a feasible corner (in dual space)
for i=1:(m-1)
    tt=Aeq(:,ind)\beq;
    if i>0.95*m
        % conjugate gradient refinement of the least squares solution
        hl=(beq'*Aeq(:,ind))';
        r=hl-((Aeq(:,ind)*tt)'*Aeq(:,ind))';
        q=r;
        rrp=r'*r;
        for j=1:5
            if norm(r)<1e-16
                break
            end
            Aq=((Aeq(:,ind)*q)'*Aeq(:,ind))';
            alfa=rrp/(q'*Aq);
            tt=tt+alfa*q;
            r=r-alfa*Aq;
            rr=r'*r;
            beta=rr/rrp;
            q=r+beta*q;
            rrp=rr;
        end
    end
    dir=beq-Aeq(:,ind)*tt;
    no=norm(dir);
    if no/nop<1e-10
        if min(tt)>=-1e-8
            x(ind)=tt;
            break
        end
        dir=randn(m,1);
        dir=dir/norm(dir);
        tt=Aeq(:,ind)\dir;
        if i>0.95*m
            hl=(dir'*Aeq(:,ind))';
            r=hl-((Aeq(:,ind)*tt)'*Aeq(:,ind))';
            q=r;
            rrp=r'*r;
            for j=1:5
                if norm(r)<1e-16
                    break
                end
                Aq=((Aeq(:,ind)*q)'*Aeq(:,ind))';
                alfa=rrp/(q'*Aq);
                tt=tt+alfa*q;
                r=r-alfa*Aq;
                rr=r'*r;
                beta=rr/rrp;
                q=r+beta*q;
                rrp=rr;
            end
        end
        dir=dir-Aeq(:,ind)*tt;
        no=norm(dir);
        if no/nop<1e-9
            break
        end
        diA=dir'*Aeq;
        tvec=c_Ay./diA;
        bol=not(ind);
        [t,indind]=min(abs(tvec(bol)));
    else
        diA=dir'*Aeq;
        tvec=c_Ay./diA;
        bol=and(diA>0,not(ind));
        [t,indind]=min(tvec(bol));
    end
    nop=no;
    if length(t)>0
        y=y+t*dir;
        c_Ay=c_Ay-t*diA;
    else
        error('Dual unbounded, primal infeasible');
    end
    indp=ind;
    ind=c_Ay<1e-15;
    ind(indp)=1;
    indvec=numind(bol);
    ind(indvec(indind))=1;
end
% do simplex phase II (in dual space)
if max(x)==0
    for i=1:3*m
        su=sum(ind);
        if su==m
            tt=Aeq(:,ind)\beq;
            [ttmi,k]=min(tt);
            if ttmi>-1e-13
                break
            elseif ttmi==-Inf
                hl=(beq'*Aeq(:,ind))';
                r=hl-((Aeq(:,ind)*tt)'*Aeq(:,ind))';
                q=r;
                rrp=r'*r;
                for j=1:5
                    if norm(r)<1e-16
                        break
                    end
                    Aq=((Aeq(:,ind)*q)'*Aeq(:,ind))';
                    alfa=rrp/(q'*Aq);
                    tt=tt+alfa*q;
                    r=r-alfa*Aq;
                    rr=r'*r;
                    beta=rr/rrp;
                    q=r+beta*q;
                    rrp=rr;
                end
                dir=beq-Aeq(:,ind)*tt;
            else
                ek=zeros(m,1);
                ek(k)=-1;
                dir=Aeq(:,ind)'\ek;
                dir=dir/norm(dir);
            end
% ...

APPENDIX C. LPMIN_FLOW.M

function x=lpmin_flow(c,Aeq,beq)
% min c'*x
% Aeq*x=beq
% 0<=x
% rank(Aeq)={number of rows}-1
% c>=0
[m,n]=size(Aeq);
m=m-1;
x=lpmin_flow2(c,Aeq(1:m,:),beq(1:m),m,n);

function x=lpmin_flow2(c,Aeq,beq,m,n)
% min c'*x
% Aeq*x=beq
% 0<=x
% Aeq must have full rank
c=(1e2/max(abs(c)))*c;
colnorm=sqrt(sum(Aeq.^2,1));
warning off
c_Ay=c';
y=zeros(m,1);
% find a start point at the boundary
bA=beq'*Aeq;
tvec=c_Ay./bA;
t=min(tvec(bA>0));
if length(t)>0
    y=t*beq;
    c_Ay=c_Ay-t*bA;
else
    error('Dual unbounded, primal infeasible');
end
ind=abs(c_Ay)<1e-15;
numind=1:n;
nop=1e-20;
x=zeros(n,1);
% find a feasible corner (in dual space)
for i=1:(m-1)
    tt=Aeq(:,ind)\beq;
    if i>0.95*m
        hl=(beq'*Aeq(:,ind))';
        r=hl-((Aeq(:,ind)*tt)'*Aeq(:,ind))';
        q=r;
        rrp=r'*r;
        for j=1:5
            if norm(r)<1e-16
                break
            end
            Aq=((Aeq(:,ind)*q)'*Aeq(:,ind))';
            alfa=rrp/(q'*Aq);
            tt=tt+alfa*q;
            r=r-alfa*Aq;
            rr=r'*r;
            beta=rr/rrp;
            q=r+beta*q;
            rrp=rr;
        end
    end
    dir=beq-Aeq(:,ind)*tt;
    no=norm(dir);
    if no/nop<1e-10
        if min(tt)>=-1e-8
            x(ind)=tt;
            break
        end
        dir=randn(m,1);
        dir=dir/norm(dir);
        tt=Aeq(:,ind)\dir;
        if i>0.95*m
            hl=(dir'*Aeq(:,ind))';
            r=hl-((Aeq(:,ind)*tt)'*Aeq(:,ind))';
            q=r;
            rrp=r'*r;
            for j=1:5
                if norm(r)<1e-16
                    break
                end
                Aq=((Aeq(:,ind)*q)'*Aeq(:,ind))';
                alfa=rrp/(q'*Aq);
                tt=tt+alfa*q;
                r=r-alfa*Aq;
                rr=r'*r;
                beta=rr/rrp;
                q=r+beta*q;
                rrp=rr;
            end
        end
        dir=dir-Aeq(:,ind)*tt;
        no=norm(dir);
        if no/nop<1e-9
            break
        end
        diA=dir'*Aeq;
        tvec=c_Ay./diA;
        bol=not(ind);
        [t,indind]=min(abs(tvec(bol)));
    else
        diA=dir'*Aeq;
        tvec=c_Ay./diA;
        bol=and(diA>0,not(ind));
        [t,indind]=min(tvec(bol));
    end
    nop=no;
    if length(t)>0
        y=y+t*dir;
        c_Ay=c_Ay-t*diA;
    else
        error('Dual unbounded, primal infeasible');
    end
    indp=ind;
    ind=c_Ay<1e-15;
    ind(indp)=1;
    indvec=numind(bol);
    ind(indvec(indind))=1;
end
% do simplex phase II (in dual space)
% ...

REFERENCES

[1] Ove Edlund. Solution of linear programming and non-linear regression problems using linear M-estimation methods. PhD thesis, Luleå University of Technology, http://epubl.luth.se/1402-1544/1999/17/, 1999. ISSN 1402-1544; 1999:17.

[2] George B. Dantzig. Linear programming and extensions. Princeton, N.J.: Princeton Univ. Press, 1963. ISBN 0-691-08000-3.

[3] J. Gondzio and R. Kouwenberg. High performance computing for asset liability management.
Technical report, Department of Mathematics & Statistics, The University of Edinburgh, and Econometric Institute, Erasmus University Rotterdam, http://www.maths.ed.ac.uk/~gondzio/software/wrecord.ps, 1999.

[4] H. Paul Williams. Model building in mathematical programming. Chichester: Wiley, 1999. ISBN 0-471-99788-9.

[5] Frederick S. Hillier and Gerald Lieberman. Introduction to operations research. Boston: McGraw-Hill, 2005. ISBN 0-07-123828-X.

[6] Tito A. Ciriani and Robert C. Leachman. Optimization in industry: mathematical programming and modelling techniques in practice. Chichester: Wiley, 1993. ISBN 0-471-93492-5.

[7] Jan Lundgren, Mikael Rönnqvist, and Peter Värbrand. Introduction to operations research. Lund: Studentlitteratur, 2003. ISBN 91-44-03104-1.

[8] Athanasios Migdalas. Mathematical programming techniques for analysis and design of communication and transportation networks. Linköping: Univ., 1988. ISBN 91-7870-302-6.

[9] David G. Luenberger. Linear and nonlinear programming. Reading, Mass.: Addison-Wesley, 1984. ISBN 0-201-15794-2.

[10] Per Bergström. En algoritm för linjära optimeringsproblem (in Swedish; "An algorithm for linear optimization problems"). D-level thesis, Luleå University of Technology, http://epubl.ltu.se/1402-1552/2005/04/, 2005.

[11] J. Gondzio and T. Terlaky. A computational view of interior point methods for linear programming. Technical report, Logilab, HEC Geneva, Section of Management Studies, University of Geneva, and Faculty of Technical Mathematics and Informatics, Delft University of Technology, http://www.maths.ed.ac.uk/~gondzio/software/oxford.ps, 1994.

[12] E. D. Andersen, J. Gondzio, C. Mészáros, and X. Xu. Implementation of interior point methods for large scale linear programming. Technical report, Logilab, HEC Geneva, Section of Management Studies, University of Geneva, http://www.maths.ed.ac.uk/~gondzio/software/kluwer.ps, 1996.