3
Advanced Topics in Linear Programming
The dual of a linear programming problem finds a place in this chapter. The economic interpretation of dual variables, which can be used by management for planning its resources, is also explained in this chapter. If an LPP involving a large number of variables and constraints is solved by the simplex method, it requires a large amount of storage space and time on a computer. Some computational techniques have been developed which require much less computer storage and time than the simplex method. Two important and efficient computational techniques are the revised simplex method (or simplex method with multipliers) and the bounded variables method. The revised simplex method is dealt with in this chapter. Sensitivity analysis, which studies the effect of changes in various resource levels, is also discussed. This chapter also describes some integer programming formulation techniques and highlights their application in managerial problems where linear programming techniques fail. Further, two methods, namely the cutting plane method and the branch and bound method, are discussed for solving integer programming problems. Some advantages and limitations of LPP are also discussed.
3.1 DUALITY THEORY
Every linear programming problem is associated with another unique linear programming problem. The original (given) problem is called the primal problem, while the other is called its dual problem. The relationship between the solutions of a primal problem and its dual problem proves to be extremely useful in a variety of ways. The solution of one of the problems readily provides the solution of the other. This fact is important because situations may arise where it is easier to solve the dual than the primal.
3.1.1 The Dual Problem
There are two important forms of primal-dual pairs, namely, symmetric form and unsymmetric form.
Symmetric form
Let the primal problem be
Maximise z = c_{1}x_{1} + c_{2}x_{2} + … + c_{n}x_{n}
subject to
a_{11}x_{1} + a_{12}x_{2} + … + a_{1n}x_{n} ≤ b_{1}
a_{21}x_{1} + a_{22}x_{2} + … + a_{2n}x_{n} ≤ b_{2}
…
a_{m1}x_{1} + a_{m2}x_{2} + … + a_{mn}x_{n} ≤ b_{m}
and x_{1}, x_{2}, …, x_{n} ≥ 0    (3.1.1)
In matrix notation, equation (3.1.1) can be written as
Maximise z = CX
subject to AX ≤ b
and,
X ≥ 0
where C = (c_{1}, c_{2}, …, c_{n}), X = (x_{1}, x_{2}, …, x_{n})^{T}, b = (b_{1}, b_{2}, …, b_{m})^{T} ∈ R^{m} and A = (a_{ij}) is the m × n matrix of constraint coefficients.
Remark: It is not necessary that the primal is to be maximised. It can be minimised too.
Steps to be followed to convert a primal to a dual.
Step 1: The maximisation problem in the primal becomes the minimisation problem in the dual and vice versa.
Step 2: The maximisation problem has less than or equal to (≤) type of constraints while the minimisation problem has greater than or equal to (≥) type of constraints.
Step 3: If the primal contains m constraints and n variables, then the dual will contain n constraints and m variables. That is, the transpose of the body matrix of the primal problem gives the body matrix of the dual and vice versa.
Step 4: The constants c_{1}, …, c_{n} in the objective function of the primal appear in the right-hand sides of the constraints of the dual.
Step 5: The constants b_{1}, …, b_{m} in the constraints of the primal appear in the objective function of the dual.
Step 6: Both the primal and the dual variables are non-negative.
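The six steps can be carried out mechanically. The sketch below is an illustrative helper in plain Python (`dual_of` is not a routine from the text): it transposes the body matrix and swaps the roles of the objective coefficients and the right-hand sides, exactly as the steps prescribe for the symmetric form.

```python
# A mechanical sketch of Steps 1-6 for the symmetric form:
# given the primal  Maximise z = c.x  s.t.  A x <= b, x >= 0,
# the dual is       Minimise w = b.y  s.t.  A^T y >= c, y >= 0.
def dual_of(c, A, b):
    A_T = [list(col) for col in zip(*A)]   # transpose of the body matrix
    # objective coefficients and right-hand sides swap roles
    return list(b), A_T, list(c)

# Applied to the maximisation problem that appears as the dual in Example 1
# below, the function recovers that example's original minimisation problem,
# illustrating that the dual of the dual is the primal (Theorem 3.1).
c = [20, 30, 40, 50]
A = [[2, 6, 7, 1], [3, 8, 1, 2], [1, 5, 3, 4]]
b = [5, 2, 1]
obj, body, rhs = dual_of(c, A, b)
print(obj, rhs)   # [5, 2, 1] [20, 30, 40, 50]
print(body)       # [[2, 3, 1], [6, 8, 5], [7, 1, 3], [1, 2, 4]]
```

Note how a primal with 3 constraints and 4 variables yields a dual with 4 constraints and 3 variables, as Step 3 states.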
The relationship between the primal and dual problems can best be described as follows.
The dual of equation (3.1.1) is
Minimise w = b_{1}y_{1} + b_{2}y_{2} + … + b_{m}y_{m}
subject to
a_{11}y_{1} + a_{21}y_{2} + … + a_{m1}y_{m} ≥ c_{1}
a_{12}y_{1} + a_{22}y_{2} + … + a_{m2}y_{m} ≥ c_{2}
…
a_{1n}y_{1} + a_{2n}y_{2} + … + a_{mn}y_{m} ≥ c_{n}
and y_{1}, y_{2}, …, y_{m} ≥ 0    (3.1.2)
Equation (3.1.2) can be represented in matrix form as
Minimise w = b^{T}Y
subject to
A^{T}Y ≥ C^{T} and Y ≥ 0
Equations (3.1.1) and (3.1.2) are called symmetric primal-dual form.
Unsymmetric duals
For primal problems in standard matrix form duals may be defined as follows:
Primal | Dual |
---|---|
Minimise z = C^{T}X | Maximise w = b^{T}Y |
subject to AX = b | subject to A^{T}Y ≤ C |
and X ≥ 0 | and Y unrestricted in sign |
Maximise z = C^{T}X | Minimise w = b^{T}Y |
subject to AX = b | subject to A^{T}Y ≥ C |
and X ≥ 0 | and Y unrestricted in sign |
Example 1
Determine the dual of the problem
Minimise z = 5x_{1} + 2x_{2} + x_{3}
subject to
2x_{1} + 3x_{2} + x_{3} ≥ 20
6x_{1} + 8x_{2} + 5x_{3} ≥ 30
7x_{1} + x_{2} + 3x_{3} ≥ 40
x_{1} + 2x_{2} + 4x_{3} ≥ 50
and x_{1}, x_{2}, x_{3} ≥ 0.
Solution: The given (primal) problem is in symmetric form. Since there are 4 constraints in the primal, there will be 4 variables in the dual. Let them be y_{1}, y_{2}, y_{3}, y_{4}.
The dual problem is
Maximise w = 20y_{1} + 30y_{2} + 40y_{3} + 50y_{4}
subject to
2y_{1} + 6y_{2} + 7y_{3} + y_{4} ≤ 5
3y_{1} + 8y_{2} + y_{3} + 2y_{4} ≤ 2
y_{1} + 5y_{2} + 3y_{3} + 4y_{4} ≤ 1
y_{1}, y_{2}, y_{3}, y_{4} ≥ 0.
Note: We observe that the dual problem has three constraints since the primal problem has three variables.
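Strong duality can be confirmed numerically for Example 1. Below is a minimal check with SciPy's `linprog` (assuming SciPy is available); `linprog` minimises over ≤ rows, so the ≥ rows of the primal and the maximisation objective of the dual are negated.

```python
from scipy.optimize import linprog

# Primal (Example 1): Minimise 5x1 + 2x2 + x3 subject to >= constraints.
A = [[2, 3, 1], [6, 8, 5], [7, 1, 3], [1, 2, 4]]
b = [20, 30, 40, 50]
c = [5, 2, 1]
primal = linprog(c=c, A_ub=[[-a for a in row] for row in A],
                 b_ub=[-v for v in b], method="highs")

# Dual: Maximise 20y1 + 30y2 + 40y3 + 50y4 subject to A^T y <= c, y >= 0
# (maximisation expressed as minimising the negated objective).
AT = [list(col) for col in zip(*A)]
dual = linprog(c=[-v for v in b], A_ub=AT, b_ub=c, method="highs")

print(primal.fun, -dual.fun)  # equal, by the fundamental theorem of duality
```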
Example 2
Write the dual of the following LPP
Maximise z = 2x_{1} + 5x_{2} + 3x_{3}
subject to
2x_{1} + 4x_{2} − 3x_{3} ≤ 8
−2x_{1} − 2x_{2} + 3x_{3} ≥ −7
x_{1} + 3x_{2} − 5x_{3} ≥ −2
4x_{1} + x_{2} + 3x_{3} ≤ 4
x_{1}, x_{2}, x_{3} ≥ 0.
Solution: First, we write the primal (given) problem in symmetric form.
Maximise z = 2x_{1} + 5x_{2} + 3x_{3}
subject to
2x_{1} + 4x_{2} − 3x_{3} ≤ 8
2x_{1} + 2x_{2} − 3x_{3} ≤ 7 (multiplying both sides by −1)
−x_{1} − 3x_{2} + 5x_{3} ≤ 2 (multiplying both sides by −1)
4x_{1} + x_{2} + 3x_{3} ≤ 4
x_{1}, x_{2}, x_{3} ≥ 0.
Then, the dual problem is
Minimise w = 8y_{1} + 7y_{2} + 2y_{3} + 4y_{4}
subject to
2y_{1} + 2y_{2} − y_{3} + 4y_{4} ≥ 2
4y_{1} + 2y_{2} − 3y_{3} + y_{4} ≥ 5
−3y_{1} − 3y_{2} + 5y_{3} + 3y_{4} ≥ 3
y_{1}, y_{2}, y_{3}, y_{4} ≥ 0.
Unrestricted variables
We recall that an unrestricted variable is a variable that can take a positive, negative or zero value.
Example 3
Write the dual of the following LPP
Maximise z = 4x_{1} + 2x_{2}
subject to
x_{1} − 2x_{2} ≥ 2
x_{1} + 2x_{2} = 8
x_{1} − x_{2} ≤ 10
x_{1} ≥ 0, x_{2} is unrestricted in sign.
Solution: The primal in symmetric form is
Maximise z = 4x_{1} + 2x_{2}
subject to
−x_{1} + 2x_{2} ≤ −2
x_{1} + 2x_{2} ≤ 8
−x_{1} − 2x_{2} ≤ −8
x_{1} − x_{2} ≤ 10
x_{1} ≥ 0, x_{2} unrestricted in sign.    (3.1.3)
Since x_{2} is unrestricted, put x_{2} = x_{2}′ − x_{2}″ where x_{2}′, x_{2}″ ≥ 0. Then, (3.1.3) becomes
Maximise z = 4x_{1} + 2x_{2}′ − 2x_{2}″
subject to
−x_{1} + 2x_{2}′ − 2x_{2}″ ≤ −2
x_{1} + 2x_{2}′ − 2x_{2}″ ≤ 8
−x_{1} − 2x_{2}′ + 2x_{2}″ ≤ −8
x_{1} − x_{2}′ + x_{2}″ ≤ 10
x_{1}, x_{2}′, x_{2}″ ≥ 0.
Then, the dual problem is
Minimise w = − 2y_{1} + 8y_{2} − 8y_{3} + 10y_{4}
subject to
−y_{1} + y_{2} − y_{3} + y_{4} ≥ 4
2y_{1} + 2y_{2} − 2y_{3} − y_{4} ≥ 2
−2y_{1} − 2y_{2} + 2y_{3} + y_{4} ≥ − 2
y_{1}, y_{2}, y_{3}, y_{4} ≥ 0.
Put y_{2} − y_{3} = y. Then, the LPP becomes
Minimise w = − 2y_{1} + 8y + 10y_{4}
subject to
−y_{1} + y + y_{4} ≥ 4
2y_{1} + 2y − y_{4} = 2
y_{1}, y_{4} ≥ 0 and y is unrestricted, which is the dual of the given problem.
Remark 1: If the kth constraint of the primal problem is an equality, then the corresponding dual variable y_{k} is unrestricted in sign and vice versa.
Remark 2: If any variable of the primal problem is unrestricted in sign, the corresponding constraint in the dual problem will be equality and vice versa.
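The substitution x_{2} = x_{2}′ − x_{2}″ used in Example 3 can be checked numerically: solving the primal once with the free variable handled directly and once in the split form gives the same optimum. A sketch with `scipy.optimize.linprog` (SciPy assumed available):

```python
from scipy.optimize import linprog

# Example 3's primal: Maximise 4x1 + 2x2, x1 - 2x2 >= 2, x1 + 2x2 = 8,
# x1 - x2 <= 10, x1 >= 0, x2 free.  linprog minimises, so negate z.
c = [-4.0, -2.0]
A_ub = [[-1.0, 2.0], [1.0, -1.0]]      # x1 - 2x2 >= 2  ->  -x1 + 2x2 <= -2
b_ub = [-2.0, 10.0]
A_eq = [[1.0, 2.0]]
b_eq = [8.0]
free = linprog(c, A_ub=A_ub, b_ub=b_ub, A_eq=A_eq, b_eq=b_eq,
               bounds=[(0, None), (None, None)], method="highs")

# Split form: variables (x1, x2', x2''), all non-negative (default bounds).
c2 = [-4.0, -2.0, 2.0]
A_ub2 = [[-1.0, 2.0, -2.0], [1.0, -1.0, 1.0]]
A_eq2 = [[1.0, 2.0, -2.0]]
split = linprog(c2, A_ub=A_ub2, b_ub=b_ub, A_eq=A_eq2, b_eq=b_eq,
                method="highs")
print(-free.fun, -split.fun)  # identical optimal values of z
```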
3.1.2 Duality Theorems
Theorem 3.1
The dual of the dual is the primal.
Proof: Let the primal problem be
Maximise z = CX
subject to AX ≤ b
and
X ≥ 0    (3.1.4)
Then, the dual of (3.1.4) is
Minimise w = b^{T}Y
subject to A^{T}Y ≥ C^{T}
and
Y ≥ 0    (3.1.5)
To form the dual of (3.1.5), rewrite it as Maximise w′ = −b^{T}Y subject to −A^{T}Y ≤ −C^{T}, Y ≥ 0. Its dual is
Minimise v = −(C^{T})^{T}U
subject to −(A^{T})^{T}U ≥ −b
and
U ≥ 0    (3.1.6)
By observing (C^{T})^{T} = C and (A^{T})^{T} = A, (3.1.6) becomes
Maximise z′ = CU
subject to AU ≤ b
and
U ≥ 0    (3.1.7)
(3.1.4) and (3.1.7) represent the same problem. Hence, the dual of the dual is the primal.
Theorem 3.2
Let X_{0} be a feasible solution to the primal problem
Maximise z = CX subject to AX ≤ b, X ≥ 0, where X ∈ R^{n}, b ∈ R^{m} and A is an m × n real matrix. If Y_{0} is a feasible solution to the dual of the primal, namely Minimise w = b^{T}Y subject to A^{T}Y ≥ C^{T}, Y ≥ 0, Y ∈ R^{m}, then CX_{0} ≤ b^{T}Y_{0}.
Proof: Since X_{0} and Y_{0} are feasible solutions to the primal and its dual, respectively, we have
AX_{0} ≤ b, X_{0} ≥ 0 and A^{T}Y_{0} ≥ C^{T}, Y_{0} ≥ 0.
Hence,
CX_{0} ≤ (A^{T}Y_{0})^{T}X_{0} = Y_{0}^{T}AX_{0} ≤ Y_{0}^{T}b
or,
CX_{0} ≤ b^{T}Y_{0}.
Theorem 3.3
Let X_{0} be a feasible solution to the primal problem: Maximise z = CX subject to AX ≤ b, X ≥ 0, and let Y_{0} be a feasible solution to its dual: Minimise w = b^{T}Y subject to A^{T}Y ≥ C^{T}, Y ≥ 0, where X ∈ R^{n}, Y, b ∈ R^{m} and A is an m × n real matrix. If CX_{0} = b^{T}Y_{0}, then both X_{0} and Y_{0} are optimum solutions to the primal and dual, respectively.
Proof: Let X̄ be any other feasible solution to the primal problem. Then, Theorem 3.2 gives
CX̄ ≤ b^{T}Y_{0}
Thus,
CX̄ ≤ b^{T}Y_{0} = CX_{0}
and hence X_{0} is an optimum solution to the primal problem, because the primal is of maximisation type.
Similarly, if Ȳ is any other feasible solution to the dual problem, then b^{T}Y_{0} = CX_{0} ≤ b^{T}Ȳ and thus Y_{0} is an optimum solution to the dual problem, because the dual is a minimisation problem.
Theorem 3.4 Basic Duality Theorem
Let a primal problem be Maximise z = CX subject to AX ≤ b, X ≥ 0, X ∈ R^{n}, and the associated dual be Minimise w = b^{T}Y subject to A^{T}Y ≥ C^{T}, Y ≥ 0, Y, b ∈ R^{m}. If X_{0} (Y_{0}) is an optimum solution to the primal (dual), then there exists a feasible solution Y_{0} (X_{0}) to the dual (primal) such that CX_{0} = b^{T}Y_{0}.
Proof: The standard primal can be written as Maximise z = CX subject to AX + IX_{s} = b, where X_{s} is the slack vector and I is the associated m × m identity matrix.
Let X_{0} = [X_{B}, 0] be an optimum solution to the primal, where X_{B} is the optimum basic feasible solution given by X_{B} = B^{−1}b, B being the optimal basis of A. Then, the optimal primal objective function value is
z = CX_{0} = C_{B}X_{B}, where C_{B} is the cost vector associated with X_{B}.
Now, the net evaluations in the optimal simplex table are given by
z_{j} − c_{j} = C_{B}B^{−1}a_{j} − c_{j}
Since X_{B} is optimal, we must have z_{j} − c_{j} ≥ 0 for all j.
This gives C_{B}B^{−1}a_{j} ≥ c_{j} for the columns of A, and C_{B}B^{−1}e_{j} ≥ 0 for the slack columns (whose costs are zero),
or, in matrix form,
C_{B}B^{−1}A ≥ C and C_{B}B^{−1} ≥ 0.
Now, if we let Y_{0} = (C_{B}B^{−1})^{T}, the above become
A^{T}Y_{0} ≥ C^{T} and Y_{0} ≥ 0, Y_{0} ∈ R^{m}. This means that Y_{0} is a feasible solution to the dual problem. Moreover, the corresponding dual objective function value is b^{T}Y_{0} = C_{B}B^{−1}b = C_{B}X_{B} = CX_{0}. Thus, given an optimal solution X_{0} to the primal, there exists a feasible solution Y_{0} to the dual such that CX_{0} = b^{T}Y_{0}.
Similarly, starting with Y_{0}, the existence of X_{0} can be proved.
Corollary 3.5
If X_{0} is an optimal solution to the primal, an optimal solution to the dual is given by Y_{0} = (C_{B}B^{−1})^{T}, where B is the primal optimal basis.
Theorem 3.6 Fundamental Theorem of Duality
If the primal or the dual has a finite optimum solution, then the other problem also possesses a finite optimum solution and the optimum values of the objective functions of the two problems are equal.
Proof: Consider the primal-dual pair:
Primal: Maximise z = CX, subject to AX ≤ b and X ≥ 0.
Dual: Minimise w = b^{T}Y, subject to A^{T}Y ≥ C^{T} and Y ≥ 0.
Necessary condition: Let a feasible solution X_{0}(Y_{0}) be an optimum solution to the primal (dual) problem. It then follows from Theorem 3.4 that there exists a feasible solution Y_{0}(X_{0}) to the dual (primal) problem, such that CX_{0} = b^{T} Y_{0}.
It now follows from Theorem 3.3 that Y_{0} must be optimal. This proves the necessity of the condition. Sufficiency follows from Theorem 3.3.
Theorem 3.7 Existence Theorem
If either the primal or the dual problem has an unbounded objective function value, then the other problem has no feasible solution.
Proof: Let the given primal problem have an unbounded solution. Then, for any arbitrarily large value of the objective function, there exists a feasible solution X yielding it, that is, CX → ∞. If possible, let the dual problem have a feasible solution Y_{0}. Then, by Theorem 3.2, every feasible solution X of the primal satisfies CX ≤ b^{T}Y_{0}, so that b^{T}Y_{0} → ∞. But b is a constant vector and Y_{0} has to satisfy the constraints A^{T}Y_{0} ≥ C^{T}, so the dual objective value b^{T}Y_{0} must be finite. This contradicts the result b^{T}Y_{0} → ∞. Hence, the dual problem has no feasible solution.
By a similar argument it can be shown that when the dual has an unbounded solution, the primal has no feasible solution.
Remark: The above result shows the following relation between the solutions of the primal and dual.
Dual problem | Primal: feasible solution | Primal: no feasible solution |
---|---|---|
Feasible solution | Optimum solutions exist | Dual unbounded |
No feasible solution | Primal unbounded | Unbounded or no feasible solution |
Theorem 3.8 Complementary Slackness Theorem
Let X_{0} and Y_{0} be feasible solutions to the primal, Maximise z = CX, AX ≤ b, X ≥ 0, and its dual, Minimise w = b^{T}Y, A^{T}Y ≥ C^{T}, Y ≥ 0, respectively. Then, a necessary and sufficient condition for X_{0} and Y_{0} to be optimal to their respective problems is that
Y_{0}^{T}(b − AX_{0}) = 0 and X_{0}^{T}(A^{T}Y_{0} − C^{T}) = 0.
Proof: Necessity: Let α = Y_{0}^{T}(b − AX_{0}) and β = X_{0}^{T}(A^{T}Y_{0} − C^{T}). Since X_{0}, Y_{0} are feasible solutions to the primal and dual respectively, we have α ≥ 0, β ≥ 0 and
α + β = Y_{0}^{T}b − Y_{0}^{T}AX_{0} + X_{0}^{T}A^{T}Y_{0} − X_{0}^{T}C^{T} = b^{T}Y_{0} − CX_{0}.
Now, if X_{0}, Y_{0} are optimal, then CX_{0} = b^{T}Y_{0} so that α + β = 0. But since α ≥ 0 and β ≥ 0, this gives α = 0, β = 0.
Thus, the conditions are necessary.
Sufficiency: Let the given conditions hold for the feasible solutions X_{0} and Y_{0}. That is, α = 0 and β = 0.
Then, α + β = 0
⇒ CX_{0} = b^{T}Y_{0}
⇒ X_{0} and Y_{0} are optimal, by Theorem 3.3.
Thus, the conditions are sufficient.
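The two conditions of Theorem 3.8 are easy to verify numerically. The sketch below uses the problem Maximise z = 2x_{1} + x_{2} subject to four ≤ constraints (the same LPP solved as Example 2 in Section 3.1.3); the optimal pair x* = (4, 2), y* = (0, 3/2, 1/2, 0) is assumed known from solving both problems.

```python
import numpy as np

# Primal data: Maximise z = c.x  s.t.  A x <= b, x >= 0.
A = np.array([[1.0, 2.0], [1.0, 1.0], [1.0, -1.0], [1.0, -2.0]])
b = np.array([10.0, 6.0, 2.0, 1.0])
c = np.array([2.0, 1.0])

x = np.array([4.0, 2.0])          # optimal primal solution (assumed known)
y = np.array([0.0, 1.5, 0.5, 0.0])  # optimal dual solution (assumed known)

alpha = y @ (b - A @ x)           # Y0^T (b - A X0)
beta = x @ (A.T @ y - c)          # X0^T (A^T Y0 - C^T)
print(alpha, beta)                # both vanish at optimality
```

Note also that `c @ x` equals `b @ y`, so the pair satisfies Theorem 3.3 as well.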
3.1.3 Duality and Simplex Method
Since any LPP can be solved by using simplex method, the method is applicable to both the primal as well as its dual. The fundamental theorem of duality suggests that an optimum solution to the associated dual can be obtained from that of its primal and vice versa.
Properties of Primal and Dual Optimal Solutions
- If a primal (dual) variable corresponds to a slack variable in the dual (primal) problem, its optimum value is read off directly from the optimum dual (primal) simplex table as the net evaluation corresponding to that slack variable.
- If a primal (dual) variable corresponds to an artificial variable in the dual (primal) problem, its optimum value is read off directly from the optimum dual (primal) simplex table as the net evaluation corresponding to that artificial variable after deleting the constant M.
Example 1
Use duality to solve the following LPP
Maximise z = 4x_{1} + 2x_{2}
subject to the constraints
− x_{1} − x_{2} ≤ − 3
− x_{1} + x_{2} ≤ − 2
x_{1}, x_{2} ≥ 0
Solution: The given problem can be rewritten as
Maximise z = 4x_{1} + 2x_{2}
subject to
−x_{1} − x_{2} ≤ −3
−x_{1} + x_{2} ≤ −2
and x_{1}, x_{2} ≥ 0.    (3.1.8)
The dual of (3.1.8) is
Minimise w = −3y_{1} − 2y_{2}
subject to
−y_{1} − y_{2} ≥ 4
−y_{1} + y_{2} ≥ 2
and y_{1}, y_{2} ≥ 0.    (3.1.9)
Solution of (3.1.9) by Big M method:
Introducing the surplus variables s_{1}, s_{2} ≥ 0 and the artificial variables A_{1}, A_{2} ≥ 0, the problem (3.1.9) can be written as
Minimise w = −3y_{1} − 2y_{2} + 0s_{1} + 0s_{2} + MA_{1} + MA_{2}
subject to
−y_{1} − y_{2} − s_{1} + A_{1} = 4
−y_{1} + y_{2} − s_{2} + A_{2} = 2
y_{1}, y_{2}, s_{1}, s_{2}, A_{1}, A_{2} ≥ 0.
An obvious IBFS is A_{1} = 4, A_{2} = 2
TABLE 3.1
We observe that the column corresponding to y_{1} is the pivotal column. Since there is no positive element in the pivotal column the dual problem has an unbounded solution. Hence, there exists no finite optimum solution to the primal problem.
Example 2
Use duality to solve the following LPP.
Maximise z = 2x_{1} + x_{2}
subject to the constraints
x1 + 2x2 ≤ 10
x_{1} + x_{2} ≤ 6
x_{1} − x_{2} ≤ 2
x_{1} − 2x_{2} ≤ 1
x_{1}, x_{2} ≥ 0.
Solution: The dual of the given problem is
Minimise w = 10y_{1} + 6y_{2} + 2y_{3} + y_{4}
subject to the constraints
y_{1} + y_{2} + y_{3} + y_{4} ≥ 2
2y_{1} + y_{2} − y_{3} − y_{4} ≥ 1
y_{1}, y_{2}, y_{3}, y_{4}, ≥ 0.
Introducing surplus variables s_{1}, s_{2} ≥ 0 and artificial variables A_{1}, A_{2} ≥ 0, the optimal simplex table is
TABLE 3.2
Since all z_{j} − c_{j} ≥ 0, the current solution is optimal. The optimum solution of the dual is
y_{1} = 0, y_{2} = 3/2, y_{3} = 1/2, y_{4} = 0 and Minimum of w = 10.
To find the optimum solution of the primal:
Corresponding to the variable x_{1} we have the dual constraint y_{1} + y_{2} + y_{3} + y_{4} ≥ 2 and the basic variable associated with the constraint is A_{1}. Hence, the optimal value of x_{1}
= M − (net evaluation value of A_{1})
= M − (M − 4) = 4
Similarly, the optimal value of x_{2} is
= M − (M − 2) = 2
∴ The optimal solution of the primal is
x_{1} = 4, x_{2} = 2 with Maximum of z = 10
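As a cross-check on Example 2, an off-the-shelf solver confirms that the primal and dual optimal values coincide. A minimal sketch with SciPy's `linprog` (which minimises, so the maximisation objective and the dual's ≥ rows are negated):

```python
from scipy.optimize import linprog

# Primal: Maximise 2x1 + x2 subject to A x <= b, x >= 0.
A = [[1, 2], [1, 1], [1, -1], [1, -2]]
b = [10, 6, 2, 1]
c = [2, 1]
primal = linprog(c=[-v for v in c], A_ub=A, b_ub=b, method="highs")

# Dual: Minimise 10y1 + 6y2 + 2y3 + y4 subject to A^T y >= c, y >= 0.
AT = [list(col) for col in zip(*A)]
dual = linprog(c=b, A_ub=[[-a for a in row] for row in AT],
               b_ub=[-v for v in c], method="highs")
print(-primal.fun, dual.fun)   # both equal the common optimum
```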
Example 3
A company makes three products X, Y, Z out of three raw materials A, B and C. The number of units of raw materials required to produce one unit of the products X, Y, Z is given in the following Table:
TABLE 3.3
The profit per unit on the products X, Y and Z are ₹40, ₹25 and ₹50, respectively. The number of units of raw materials available are 36, 60 and 45, respectively.
- Determine the product mix that will maximise the total profit.
- Through the final simplex table, write the solution to the dual problem and give the economic interpretation.
Solution: Formulation of the LPP
Let the company produce x, y and z units of the products X, Y and Z, respectively. Then, according to the given conditions, the objective function is Maximise z = 40x + 25y + 50z.
Also, from the Table it is clear that the material constraints are
x + 2y + z ≤ 36
2x + y + 4z ≤ 60
2x + 5y + z ≤ 45
and x, y, z ≥ 0.    (3.1.10)
Thus, the LPP is
Maximise z = 40x + 25y + 50z
subject to (3.1.10).
Solution by simplex method:
Introduce slack variables s_{1}, s_{2}, s_{3} ≥ 0 in the given constraints. The LPP becomes
Maximise z = 40x + 25y + 50z + 0s_{1} + 0s_{2} + 0s_{3}
subject to
x + 2y + z + s_{1} = 36
2x + y + 4z + s_{2} = 60
2x + 5y + z + s_{3} = 45
and x, y, z, s_{1}, s_{2}, s_{3} ≥ 0.
If we solve the problem by regular simplex method the optimum simplex table is
TABLE 3.4
Since all z_{j} − c_{j} ≥ 0 the current solution is optimum to the primal.
The optimum solution is
x = 20, y = 0, z = 5 and maximum of z = 1050
The dual problem:
The dual of the problem is
Minimise w = 36y_{1} + 60y_{2} + 45y_{3}
subject to
y_{1} + 2y_{2} + 2y_{3} ≥ 40
2y_{1} + y_{2} + 5y_{3} ≥ 25
y_{1} + 4y_{2} + y_{3} ≥ 50
and y_{1}, y_{2}, y_{3} ≥ 0.    (3.1.11)
Solution of the dual from the optimum simplex table of the primal:
The dual variable y_{1} corresponds to the constraint x + 2y + z ≤ 36, and the initial slack variable corresponding to this constraint is s_{1}. Hence, the optimal value of y_{1} = z_{j} − c_{j} value of s_{1} = 0.
Similarly, the optimum value of y_{2} = z_{j} − c_{j} value of s_{2} = 10
Optimum value of y_{3} = z_{j} − c_{j} value of s_{3} = 10. Hence, the optimum solution of (3.1.11) is
y_{1} = 0, y_{2} = 10, y_{3} = 10 and Minimum of w = 1050
Economic interpretation: Suppose the manager of the company wishes to sell the three raw materials A, B and C instead of using them for production of the products X, Y and Z. Suppose the selling prices were y_{1}, y_{2} and y_{3} per unit of raw material A, B and C, respectively. The cost to the purchaser of all three raw materials would be 36y_{1} + 60y_{2} + 45y_{3}. Of course, the purchaser would like to set the selling prices in such a way so as to minimise the total cost paid. Hence, the objective function becomes
Minimise w = 36y_{1} + 60y_{2} + 45y_{3}
The final table of the primal problem indicates that the marginal value of raw material A is zero, that of B is ₹10 per unit, and that of C is ₹10 per unit. Thus, if the manager sells the raw materials A, B and C at prices ₹0, ₹10 and ₹10 per unit, respectively, he will get the same contribution of ₹1050 which he was going to fetch in case he utilised these resources for production of the three products X, Y and Z.
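Modern solvers report these marginal values directly as the dual values of the resource constraints. A sketch with SciPy's HiGHS interface (assuming SciPy ≥ 1.7, where `res.ineqlin.marginals` carries these values; for a minimised objective they are non-positive, so the sign is flipped to obtain shadow prices):

```python
from scipy.optimize import linprog

A = [[1, 2, 1], [2, 1, 4], [2, 5, 1]]   # usage of A, B, C per unit of X, Y, Z
b = [36, 60, 45]                        # raw material availability
profit = [40, 25, 50]
res = linprog(c=[-p for p in profit], A_ub=A, b_ub=b, method="highs")

# Shadow prices of the three raw materials (dual values y1, y2, y3).
shadow = [-m for m in res.ineqlin.marginals]
print(-res.fun, shadow)
```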
3.2 DUAL SIMPLEX METHOD
3.2.1 Introduction
In the simplex method we have already seen that every basic solution with all z_{j} − c_{j} ≥ 0 need not be feasible (since z_{j} − c_{j} = C_{B}B^{−1}a_{j} − c_{j} is independent of the right hand side vector b for all j), but any basic feasible solution with all z_{j} − c_{j} ≥ 0 will certainly be an optimal solution. Such typical problems, for which it is possible to find an infeasible but better-than-optimal (with all z_{j} − c_{j} ≥ 0) starting basic solution, can be solved more easily by the dual simplex method. Such a situation is recognised by first expressing the constraints in ≤ form and the objective function in the maximisation form. After adding the slack variables and putting the problem in table form, if any of the right hand side elements is negative and the optimality condition is satisfied, then the problem can be solved by the dual simplex method. It is important to note that, with this arrangement, a negative element on the right hand side signifies that the corresponding slack variable is negative.
3.2.2 Difference between Regular Simplex Method and Dual Simplex Method
The regular simplex method starts with a basic feasible but a non-optimal solution and works towards optimality, whereas the dual simplex method starts with a basic infeasible but an optimal solution and works towards feasibility. Also, in regular simplex method we first determine the entering variable and then the leaving variable while in the case of dual simplex method we first determine the leaving variable and then the entering variable.
3.2.3 Dual Simplex Algorithm
Step 1: Convert the problem into maximisation type if it is of minimisation type.
Step 2: Convert ≥ type inequalities (if any) into ≤ type inequalities by multiplying both sides by −1.
Step 3: Express the LPP in standard form by introducing slack variables. Obtain the initial basic solution (not necessarily feasible) and express this information in the simplex table.
Step 4: (Optimality condition). Test the nature of z_{j} − c_{j}.
Case (i): If all (z_{j} − c_{j})≥0 and all X_{Bi}≥0, then the current solution is an optimum basic feasible solution.
Case (ii): If all (z_{j} − c_{j})≥0 and atleast one X_{Bi} < 0 then the current solution is not an optimum basic feasible solution. Go to the next step.
Case (iii): If any (z_{j} − c_{j}) < 0 then this method fails.
Step 5: (Feasibility conditions).
- (Leaving variable): The leaving variable is the basic variable corresponding to the most negative value of X_{Bi}. Let x_{k} be the leaving variable.
- (Entering variable): Compute the ratios between the (z_{j} − c_{j}) row and the key row (the row of the leaving variable), that is,
θ_{j} = (z_{j} − c_{j})/a_{rj}, where a_{rj} is the key-row entry in column j.
(Consider only the ratios with negative denominators.) The entering variable is the non-basic variable corresponding to the maximum ratio θ_{j}. If there is no ratio with a negative denominator, then the problem has no feasible solution.
Step 6: Carry out the row operations as in the regular simplex method and repeat the procedure until either an optimum feasible solution is obtained or there is an indication of nonexistence of a feasible solution.
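Steps 1 to 6 can be sketched as a compact tableau routine (an illustrative NumPy version, not the book's table layout; it assumes the starting z-row is already optimal, i.e. all z_{j} − c_{j} ≥ 0, which holds for a maximisation problem whose objective coefficients are all ≤ 0):

```python
import numpy as np

def dual_simplex(c, A, b, tol=1e-9):
    """Maximise c.x  s.t.  A x <= b (b may be negative), x >= 0,
    starting from the slack basis, which is assumed dual feasible."""
    A, b, c = np.asarray(A, float), np.asarray(b, float), np.asarray(c, float)
    m, n = A.shape
    T = np.hstack([A, np.eye(m), b.reshape(-1, 1)])   # constraint rows + RHS
    z = np.concatenate([-c, np.zeros(m + 1)])         # z_j - c_j row
    basis = list(range(n, n + m))                     # slack variables
    while True:
        r = int(np.argmin(T[:, -1]))                  # most negative X_Bi
        if T[r, -1] >= -tol:                          # feasible => optimal
            x = np.zeros(n + m)
            x[basis] = T[:, -1]
            return x[:n], c @ x[:n]
        neg = [j for j in range(n + m) if T[r, j] < -tol]
        if not neg:                                   # no entering column
            raise ValueError("no feasible solution")
        k = max(neg, key=lambda j: z[j] / T[r, j])    # maximum ratio theta
        T[r] /= T[r, k]                               # pivot row operations
        for i in range(m):
            if i != r:
                T[i] -= T[i, k] * T[r]
        z -= z[k] * T[r]
        basis[r] = k

# Example 2 below: Maximise -3x1 - 2x2 with its constraints converted to <=.
x, zval = dual_simplex([-3, -2],
                       [[-1, -1], [1, 1], [-1, -2], [0, 1]],
                       [-1, 7, -10, 3])
print(x, zval)
```

On this instance it reproduces the hand computation of Example 2: x_{1} = 4, x_{2} = 3 with z = −18.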
Example 1
Use dual simplex method to solve the LPP
Maximise z = 2x_{1} + 2x_{2}
subject to
2x_{1} − x_{2} − x_{3} ≥ 3
x_{1} − x_{2} + x_{3} ≥ 2
x_{1}, x_{2}, x_{3} ≥ 0.
Solution: First convert all the inequalities into ≤ type.
Maximise z = 2x_{1} + 2x_{2}
subject to
− 2x_{1} + x_{2} + x_{3} ≤ − 3
− x_{1} + x_{2} − x_{3} ≤ − 2
x_{1}, x_{2}, x_{3} ≥ 0.
Introducing slack variables s_{1}, s_{2} ≥ 0, an obvious initial basic (non-feasible) solution is s_{1} = − 3, s_{2} = − 2.
Starting Table:
TABLE 3.5
Since there are some (z_{j} − c_{j}) < 0 this method fails. That is, we cannot solve this problem by dual simplex method.
Remark: From the above example we observe that if a LPP is of maximisation type with positive objective function coefficient, then the LPP cannot be solved by dual simplex method.
Example 2
Use dual simplex method to solve
Maximise z = − 3x_{1} − 2x_{2}
subject to
x_{1} + x_{2} ≥ 1
x_{1} + x_{2} ≥ 7
x_{1} + 2x_{2} ≥ 10
x_{2} ≤ 3
x_{1}, x_{2} ≥ 0.
Solution: First, we convert all the inequalities into ≤ type. The given LPP becomes
Maximise z = − 3x_{1} − 2x_{2}
subject to
− x_{1} − x_{2} ≤ − 1
x_{1} + x_{2} ≤ 7
− x_{1} − 2x_{2} ≤ − 10
x_{2} ≤ 3.
Introducing slack variables s_{1}, s_{2}, s_{3}, s_{4} ≥ 0 in the given constraints the LPP becomes
Maximise z = −3x_{1} − 2x_{2} + 0s_{1} + 0s_{2} + 0s_{3} + 0s_{4}
subject to
−x_{1} − x_{2} + s_{1} = −1
x_{1} + x_{2} + s_{2} = 7
−x_{1} − 2x_{2} + s_{3} = −10
x_{2} + s_{4} = 3
x_{1},x_{2},s_{1},s_{2},s_{3},s_{4} ≥ 0.
An obvious initial basic non-feasible solution is
s_{1} = − 1, s_{2} = 7, s_{3} = − 10, s_{4} = 3.
TABLE 3.6
Since all z_{j} − c_{j} ≥ 0 the current solution is optimal and since some X_{Bi} are negative the solution is non-feasible. From the Table we see that s_{3} is the leaving variable.
To find the entering vector:
The maximum of the ratios θ_{j} = (z_{j} − c_{j})/a_{rj}, taken over the negative entries of the key row, namely max{3/(−1), 2/(−2)} = −1, occurs
corresponding to the variable x_{2}. Hence, x_{2} enters into the basis.
First iteration: Introduce x_{2} and drop s_{3}.
TABLE 3.7
From Table 3.7, it is clear that s_{4} is the leaving variable and x_{1} is the entering variable.
Second iteration: Introduce x_{1} and drop s_{4}.
TABLE 3.8
Since all z_{j} − c_{j} ≥ 0, the optimality condition is satisfied. All X_{Bi} ≥ 0 implies that the feasibility condition is also satisfied. Hence, the optimum solution is
x_{1} = 4, x_{2} = 3 with maximum of z = −18
Example 3
Using dual simplex method solve the following LPP
Minimise z = x_{1} + x_{2}
subject to
2x_{1} + x_{2} ≥ 2
−x_{1} − x_{2} ≥ 1
and,
x_{1}, x_{2} ≥ 0
Solution: Convert the minimisation LPP into a maximisation LPP and convert all the inequalities into ≤ type.
Maximise z^{*} = −x_{1} − x_{2} where z^{*} = −z
subject to
−2x_{1} − x_{2} ≤ − 2
x_{1} + x_{2} ≤ − 1
and,
x_{1}, x_{2} ≥ 0
Introduce slack variables s_{1}, s_{2} ≥ 0 in the given constraints. The LPP becomes
Maximise z^{*} = −x_{1} −x_{2} + 0s_{1} + 0s_{2}
subject to
−2x_{1} − x_{2} + s_{1} = − 2
x_{1} + x_{2} + s_{2} = − 1
and,
x_{1}, x_{2}, s_{1}, s_{2} ≥ 0
An obvious initial basic (non-feasible) solution is s_{1} = − 2, s_{2} = − 1.
Starting Table:
TABLE 3.9
s_{1} is the leaving variable since it is the basic variable with the most negative X_{Bi}. Also, the maximum of the ratios {1/(−2), 1/(−1)} = −1/2 occurs for x_{1}. Hence, x_{1} is the entering variable.
First iteration: Introduce x_{1} and drop s_{1}.
TABLE 3.10
Since all z_{j} − c_{j} ≥ 0, the optimality condition is satisfied. Also, s_{2} is the only variable with negative X_{Bi}, and so s_{2} is the leaving variable. But there are no negative entries in the key row. Hence, there exists no feasible solution to the given problem.
3.3 REVISED SIMPLEX METHOD
3.3.1 Introduction
The original simplex method is a straightforward algebraic procedure. However, this way of executing the algorithm is not the most efficient computational procedure for electronic digital computers. It computes and stores many numbers that are needed at the current iteration but which may not be relevant for decision-making at subsequent iterations. The only pieces of information relevant at each iteration are the net evaluations of the non-basic variables, the coefficients of the entering basic variable in the other equations, and the right-hand sides of the equations. It would be very useful to have a procedure that could obtain this information efficiently without computing and storing all the other coefficients. These considerations motivated the development of the revised simplex method. This method was designed to accomplish exactly the same thing as the original method, but in a way that is more efficient for execution on a digital computer. Thus, it is a streamlined version of the original procedure. It computes the essential data in a more compact form.
3.3.2 Revised Simplex Algorithm
Step 1: Introduce slack or surplus variables and the artificial variables whenever necessary. Express the given LPP in standard form after converting it into maximisation type.
Step 2: Obtain an initial basic feasible solution with an initial basis B = I_{m} (identity matrix of order m), find B^{−1} and then calculate X_{B} = B^{−1}b, where b is the right hand side constant vector.
Step 3: Calculate the simplex multipliers M = (M_{1}, M_{2}, …, M_{m}) (where m denotes the number of constraints) using M = C_{B}B^{−1}, where C_{B} is the cost vector associated with the basis B.
Step 4: Calculate the net evaluations of the non-basic variables using z_{i} − c_{i} = MP_{i} − c_{i}, where P_{i} is the column vector of the variable x_{i} and c_{i} is the cost of the variable x_{i} in the objective function.
Case (i): If there is no negative z_{i} − c_{i}, then the current solution is optimum.
Case (ii): If there is some negative z_{k} − c_{k}, then the non-basic variable x_{k} with the most negative z_{k} − c_{k} enters into the basis. Proceed to Step 5.
Step 5: Calculate the pivotal column vector
X_{k} = B^{−1}P_{k}
Case (i): If all x_{ik} ≤ 0, then there exists an unbounded solution to the given LPP.
Case (ii): If at least one x_{ik} > 0, compute the minimum ratios with the current X_{B} and determine the leaving variable. Proceed to Step 6.
Step 6: Write down the results obtained from step (2) to step (5) in the revised simplex table.
Step 7: Convert the pivot (leading) element to unity and all other elements of the entering column to zero, and update the current basic feasible solution.
Step 8: Go to step (4) and repeat the procedure until an optimum basic feasible solution is obtained or till there is an indication of an unbounded solution.
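The algorithm above can be sketched as follows (an illustrative NumPy version; for clarity the basis inverse is recomputed with `np.linalg.inv` at each iteration instead of being updated in product form as Step 7 prescribes):

```python
import numpy as np

def revised_simplex(c, A, b, tol=1e-9):
    """Maximise c.x  s.t.  A x <= b (b >= 0), x >= 0, from the slack basis.
    Only B^{-1}, the multipliers M = C_B B^{-1} and one pivotal column
    are formed per iteration."""
    A, b, c = np.asarray(A, float), np.asarray(b, float), np.asarray(c, float)
    m, n = A.shape
    cols = np.hstack([A, np.eye(m)])             # structural + slack columns
    costs = np.concatenate([c, np.zeros(m)])
    basis = list(range(n, n + m))                # initial basis B = I_m
    while True:
        Binv = np.linalg.inv(cols[:, basis])     # basis inverse
        xB = Binv @ b                            # X_B = B^{-1} b
        M = costs[basis] @ Binv                  # simplex multipliers
        red = M @ cols - costs                   # net evaluations z_j - c_j
        k = int(np.argmin(red))
        if red[k] >= -tol:                       # no negative z_j - c_j
            x = np.zeros(n + m)
            x[basis] = xB
            return x[:n], costs[:n] @ x[:n]
        d = Binv @ cols[:, k]                    # pivotal column B^{-1} P_k
        if np.all(d <= tol):
            raise ValueError("unbounded solution")
        ratios = [xB[i] / d[i] if d[i] > tol else np.inf for i in range(m)]
        basis[int(np.argmin(ratios))] = k        # minimum-ratio rule

# Example 1 below: Maximise 6x1 - 2x2 + 3x3 subject to two <= constraints.
x, z = revised_simplex([6, -2, 3],
                       [[2, -1, 2], [1, 0, 4]],
                       [2, 4])
print(x, z)
```

On Example 1's data it returns x_{1} = 4, x_{2} = 6, x_{3} = 0 with z = 12, matching the hand computation.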
Example 1
Use revised simplex method to solve the LPP.
Maximise z = 6x_{1} − 2x_{2} + 3x_{3}
subject to
2x_{1} − x_{2} + 2x_{3} ≤ 2
x_{1} + 4x_{3} ≤ 4
x_{1}, x_{2}, x_{3} ≥ 0
Solution: Convert the LPP into standard form by introducing slack variables x_{4} and x_{5}.
Maximise z = 6x_{1} − 2x_{2} + 3x_{3} + 0x_{4} + 0x_{5}
subject to
2x_{1} − x_{2} + 2x_{3} + x_{4} = 2
x_{1} + 0x_{2} + 4x_{3} + x_{5} = 4
and,
x_{1}, x_{2}, x_{3}, x_{4}, x_{5} ≥ 0
Let P_{1}, …, P_{5} denote the column vectors corresponding to the variables, x_{1}, …, x_{5} and let b denote the right hand constants, respectively. Then,
Now, (x_{4}, x_{5}) forms the initial basis with B = I_{2}.
Therefore, B^{−1} = B = I_{2}
and X_{B} = B^{−1}b = (2, 4)^{T}.
Starting Table: (The last three columns are added later)
TABLE 3.11
The simplex multipliers are
M = (M_{1}, M_{2}) = C_{B}B^{−1} = (0, 0)I_{2} = (0, 0)
Now, the net evaluations are
z_{1} − c_{1} = MP_{1} − c_{1} = −6, z_{2} − c_{2} = MP_{2} − c_{2} = 2, z_{3} − c_{3} = MP_{3} − c_{3} = −3.
Since z_{1} − c_{1} is the most negative, the variable to enter is x_{1}. Hence, the pivotal column is
X_{1} = B^{−1}P_{1} = (2, 1)^{T}
After calculating the minimum ratio, min{2/2, 4/1} = 1, we see that x_{4} leaves the basis. Now, the pivotal column must be reduced to (1, 0)^{T}. This is obtained by dividing row 1 by 2 and then subtracting the new row 1 from row 2.
First iteration:
TABLE 3.12
The simplex multipliers corresponding to the first iteration are M = (M_{1}, M_{2}) = C_{B}B^{−1} = (3, 0).
Now, z_{2} − c_{2} = MP_{2} − c_{2} = (3, 0)(−1, 0)^{T} − (−2) = −1 and z_{3} − c_{3} = MP_{3} − c_{3} = (3, 0)(2, 4)^{T} − 3 = 3.
∴ x_{2} enters into the basis. Now, the pivotal column is
X_{2} = B^{−1}P_{2} = (−1/2, 1/2)^{T}
The minimum ratio, taken over the positive entries, is 3/(1/2) = 6 in the row of x_{5}. Hence, x_{2} enters and x_{5} leaves the basis, and the pivotal column must be reduced to (0, 1)^{T}.
TABLE 3.13
The simplex multipliers corresponding to the second iteration are M = (M_{1}, M_{2}) = C_{B}B^{−1} = (2, 2).
Since all z_{j} − c_{j} are non-negative, the current solution is optimum. The optimum solution is
x_{1} = 4, x_{2} = 6, x_{3} = 0 with maximum of z = 6 × 4 − 2 × 6 + 3 × 0 = 12
Example 2
Use revised simplex method to solve the following LPP.
Minimise z = x_{1} + 2x_{2}
subject to
2x_{1} + 5x_{2} ≥ 6
x_{1} + x_{2} ≥ 2
x_{1}, x_{2} ≥ 0
Solution: Convert the minimisation LPP into maximisation type and then introduce the surplus variables x_{3}, x_{4} and the artificial variables x_{5}, x_{6}. The LPP becomes
Maximise z^{*} = −x_{1} − 2x_{2} + 0x_{3} + 0x_{4} − Mx_{5} − Mx_{6}, where z^{*} = −z
subject to
2x_{1} + 5x_{2} − x_{3} + x_{5} = 6
x_{1} + x_{2} − x_{4} + x_{6} = 2
x_{1}, x_{2}, x_{3}, x_{4}, x_{5}, x_{6} ≥ 0
Let P_{1},P_{2}, …, P_{6} denote the column vectors corresponding to the variables x_{1}, …, x_{6} and let b denote the right hand constants, respectively.
Then,
Now, (x_{5}, x_{6}) forms the initial basis with B = B^{−1} = I_{2} and X_{B} = B^{−1}b = (6, 2)^{T}.
Initial Table:
TABLE 3.14
The simplex multipliers for Table 3.14 are M = (M_{1}, M_{2}) = C_{B}B^{−1} = (−M, −M).
The net evaluations are z_{1} − c_{1} = 1 − 3M, z_{2} − c_{2} = 2 − 6M, z_{3} − c_{3} = M, z_{4} − c_{4} = M.
Since z_{2} − c_{2} is the most negative, x_{2} enters the basis and hence the pivotal column is
X_{2} = B^{−1}P_{2} = (5, 1)^{T}
After calculating the minimum ratio, min{6/5, 2/1} = 6/5, we see that x_{5} leaves the basis. Now, the pivotal column must be reduced to (1, 0)^{T}.
TABLE 3.15
The simplex multipliers for Table 3.15 are M = (M_{1}, M_{2}) = C_{B}B^{−1} = ((M − 2)/5, −M).
Hence, z_{1} − c_{1} = MP_{1} − c_{1} = (1 − 3M)/5, which is the most negative net evaluation.
Hence, x_{1} enters. The pivotal column is
X_{1} = B^{−1}P_{1} = (2/5, 3/5)^{T}
and the minimum ratio, min{(6/5)/(2/5), (4/5)/(3/5)} = 4/3, implies that x_{6} leaves the basis. We must reduce the pivotal column to (0, 1)^{T}.
Second iteration:
TABLE 3.16
The simplex multipliers are M = (M_{1}, M_{2}) = C_{B}B^{−1} = (−1/3, −1/3).
Since all z_{j} − c_{j} are non-negative, the current solution is optimum. The optimum solution is
x_{1} = 4/3, x_{2} = 2/3, x_{3} = 0 and minimum of z = 4/3 + 2(2/3) = 8/3
3.3.3 Advantages of the Revised Simplex Method Over the Regular Simplex Method
- For LPPs involving a large number of variables, the amount of computation is very much reduced because the revised simplex method works with a reduced table whose size is determined by the number of constraints.
- The revised simplex method works with a reduced Table as it stores only the basic variables, the basis inverse and the constants. Hence, less amount of new information needs to be stored in the memory of the computer from one iteration to the other.
- Data can be stored in lesser space. The original data is usually given in fixed decimals of three or four digits. The storage of data is more accurate and compact since the revised simplex method works only with the original data.
- There is less accumulation of round off errors since no calculations are done on a column unless it is ready to enter the basis.
- The theory of the revised simplex method, especially the role of the basis inverse and the simplex multipliers, is quite helpful in understanding advanced topics in linear programming such as duality theory and sensitivity analysis.
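The key computational step behind these advantages, forming the simplex multipliers from the basis inverse and pricing out only the original columns, can be sketched as follows. The numerical data in the demonstration are hypothetical, chosen only to illustrate the calculation.

```python
from fractions import Fraction as F

def price_out(C_B, B_inv, columns, costs):
    """Pricing step of the revised simplex method: form the simplex
    multipliers M = C_B B^{-1}, then compute z_j - c_j = M.P_j - c_j
    directly from the ORIGINAL columns P_j. No column is updated
    unless it is actually chosen to enter the basis."""
    m = len(B_inv)
    M = [sum(C_B[i] * B_inv[i][j] for i in range(m)) for j in range(m)]
    return [sum(M[i] * P[i] for i in range(m)) - cj
            for P, cj in zip(columns, costs)]

# Hypothetical data: C_B = (4, 6), B^{-1} = [[4/3, -1/3], [-1/3, 1/3]],
# and one nonbasic original column P = (1, 7) with cost 2.
net = price_out([F(4), F(6)],
                [[F(4, 3), F(-1, 3)], [F(-1, 3), F(1, 3)]],
                [[F(1), F(7)]], [F(2)])
```

Here the multipliers come out as M = (10/3, 2/3) and the single net evaluation as 6; only if a column's net evaluation made it the entering variable would it be transformed by B⁻¹.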
3.4 SENSITIVITY ANALYSIS
3.4.1 Introduction
After a linear programming problem has been solved, it is useful to study the effect of changes in the parameters of the problem on the current optimal solution. The LP solution must provide dynamic information. A static optimum solution will become obsolete as soon as the conditions under which the model is constructed change. Sensitivity analysis is concerned with studying possible changes in the available optimal solution as a result of making changes in the original model.
The change in parameters of the problem may be discrete or continuous. The study of the effect of discrete changes in parameters on the optimal solution is called sensitivity analysis or post-optimality analysis, while that of continuous changes in parameters is called parametric programming. One way to determine the effects of parameter changes is to solve the new problem anew, which may be computationally inefficient. Alternatively, the current optimal solution may be investigated, making use of the properties of the simplex criteria. The second method reduces additional computations considerably and hence forms the subject of the present discussion.
The changes in the parameters of a LPP include
- changes in the right hand side of the constraint b_{i}
- changes in the cost coefficient c_{j}
- addition of new variables
- changes in the coefficients of constraints a_{rj}
- addition of new constraints
- deletion of variables
- deletion of constraints.
Generally, these parameter changes result in one of the following three cases:
- The optimal solution does not change.
- The basic variables remain the same but their values change.
- The basic variables as well as their values are changed.
While dealing with these changes, one important objective is to find the maximum extent to which a parameter or a set of parameters can change so that the current optimal solution remains optimal. In other words, the objective is to determine how sensitive the optimal solution is to changes in those parameters. Such an analysis is called sensitivity analysis.
1. Variation in the right hand side of constraints Since the optimality condition z_{j} − c_{j} ≥ 0 does not involve any of the b_{i}, if any component b_{i} of the requirement vector b = (b_{1}, …, b_{m}) is changed, then this change will not affect the conditions of optimality. Hence, if any b_{i} is changed to b_{i} + Δb_{i}, then the new solution thus obtained will remain optimal.
But X_{B} = B^{−1}b depends on b. Therefore, any change in b may affect the feasibility of the current solution.
Example 1
Consider the LPP
Maximise z = 5x_{1} + 12x_{2} + 4x_{3}
subject to
x_{1} + 2x_{2} + x_{3} ≤ 5
2x_{1} − x_{2} + 3x_{3} = 2
x_{1}, x_{2}, x_{3} ≥ 0
- Discuss the effect of changing the requirement vector from to on the optimum solution.
- Discuss the effect of changing the requirement vector from to on the optimum solution.
- Which resource should be increased and by how much to achieve the best marginal increase in the value of the objective function?
- Which resource should be decreased and by how much to achieve the best marginal increase in the value of the objective function?
Solution: If we solve the given problem by the Big ‘M’ method the final simplex table is
TABLE 3.17
and the optimal solution is
x_{1} = 9/5, x_{2} = 8/5, x_{3} = 0 and maximum z = 141/5
- If the requirement vector changes to then the new values of the current basic variables are given by
Hence, the new solution is optimal and feasible and the new optimum solution is
x_{1} = 11/5, x_{2} = 12/5 with maximum of z = 5 × 11/5 + 12 × 12/5 + 4 × 0 = 199/5
- If the requirement vector changes to then the new values of the basic variables are given by
Hence, the current optimal solution becomes infeasible. Hence, we have to apply the dual simplex method.
TABLE 3.18
Introduce x_{3} into the basis and drop x_{2} from the basis. The next iteration is
TABLE 3.19
Since all z_{j} − c_{j} ≥ 0 and all X_{Bi} ≥ 0 the current solution is optimum. The optimal solution is
x_{1} = 0, x_{2} = 0, x_{3} = 3 with maximum of z = 5(0) + 12(0) + 4 × 3 = 12
- In order to find the resource that should be increased (or decreased) we shall write the dual objective function, which is G = 5y_{1} + 2y_{2}, where y_{1} = 29/5 and y_{2} = −2/5 are the optimal dual variables. Thus, each additional unit of the first resource increases the objective function by 29/5. Next, we are to find how much the first resource can be increased so that each additional unit continues to increase the objective function by 29/5. This requirement will be met so long as the primal problem remains feasible. If Δ is the increase in the first resource, it can be determined from the condition
As x_{1} and x_{2} remain feasible (≥0) for all values of Δ > 0, the first resource can be increased indefinitely while maintaining the condition that each additional unit will increase the objective function by 29/5.
- The second resource should be decreased as each additional unit of the second resource decreases the objective function by 2/5. Let Δ be the decrease in the second resource. To find its extent, we make use of the condition that the current solution remains feasible so long as
Evidently x_{1} remains positive only so long as Δ ≤ 9/2. If Δ > 9/2, x_{1} becomes negative and must leave the solution.
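A numerical sketch of parts (a), (c) and (d) follows, in Python with exact rational arithmetic. The basis inverse is computed here from the constraint columns of the basic variables x₁ and x₂ (so B = [[1, 2], [2, −1]]), and the changed requirement vector (7, 2) used to illustrate part (a) is an assumption, chosen because it reproduces the new solution (11/5, 12/5) quoted above.

```python
from fractions import Fraction as F

# B^{-1} for the optimal basis (x1, x2), where B = [[1, 2], [2, -1]].
B_inv = [[F(1, 5), F(2, 5)], [F(2, 5), F(-1, 5)]]

def basic_values(b_new):
    """New values of the basic variables, X_B = B^{-1} b_new. Since
    z_j - c_j does not involve b, the current basis stays optimal iff
    these values remain nonnegative."""
    return [sum(B_inv[i][j] * b_new[j] for j in range(2)) for i in range(2)]

# Part (a): with the requirement vector (7, 2) (assumed, see lead-in)
# the basis stays feasible, giving the new optimum (11/5, 12/5).
x_a = basic_values([F(7), F(2)])

# Part (c): increasing the first resource by any large amount keeps both
# basic variables nonnegative, so it can be increased indefinitely.
x_c = basic_values([F(5) + F(1000), F(0) + F(2)])

# Part (d): decreasing the second resource by D gives x1 = 9/5 - 2D/5,
# which stays nonnegative only for D <= 9/2; at D = 9/2, x1 hits zero.
x_d = basic_values([F(5), F(2) - F(9, 2)])
```

At the limit D = 9/2 the computation gives x₁ = 0 and x₂ = 5/2, consistent with the conclusion that x₁ leaves the solution beyond that point.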
2. Addition of a new constraint Addition of a new constraint may or may not affect the feasibility of the current optimal solution. The addition of a new constraint can result in one of the two conditions.
- If the new constraint is satisfied by the current optimal solution, then this new constraint is said to be redundant and its addition will not change the solution. That is, the current solution remains feasible as well as optimal.
- If the new constraint is not satisfied by the current optimal solution, the solution becomes infeasible. In this case the new solution is obtained by using the dual simplex method.
However, in general, whenever a new constraint is added to a linear programming problem, the old optimal value will always be better or at least equal to the new optimal value. In other words, the addition of a new constraint cannot improve the optimal value of any LPP.
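The redundancy test in the two cases above amounts to substituting the current optimum into the new constraint. A small sketch, with a hypothetical optimum x = (1, 2, 0):

```python
from fractions import Fraction as F

def is_redundant(a_row, rhs, x_opt):
    """A new '<=' constraint is redundant iff the current optimum
    already satisfies it; otherwise the solution becomes infeasible
    and the dual simplex method must be applied."""
    return sum(F(a) * F(v) for a, v in zip(a_row, x_opt)) <= F(rhs)

# Hypothetical check at an optimum x = (1, 2, 0): the constraint
# 2x1 + 3x2 + 2x3 <= 4 is violated (8 > 4), so it is NOT redundant,
# while x1 + x2 + x3 <= 4 is satisfied and hence redundant.
binding_case = is_redundant([2, 3, 2], 4, [1, 2, 0])
redundant_case = is_redundant([1, 1, 1], 4, [1, 2, 0])
```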
Example 2
Consider the LPP
Maximise z = 4x_{1} + 6x_{2} + 2x_{3},
subject to
If we solve the problem by simplex method then the optimal simplex table will be
TABLE 3.20
and the optimal solution will be
If we add the additional constraint 2x_{1} + 3x_{2} + 2x_{3} ≤ 4 then the optimal solution given in (3.1.13) does not satisfy the additional constraint. The current optimal solution is not optimal for the modified problem. Let s_{3} be the slack variable for this constraint then the modified Table will be
TABLE 3.21
Since x_{1} and x_{2} are basic variables the corresponding entries in the third column should be zero. To achieve this multiply the first row by −2 and second row by −3 and add them to the third row, the above Table becomes
Since all z_{j} − c_{j} ≥ 0, the optimality condition is satisfied but X_{B} value for the third row is negative. Now, we apply the dual simplex method then the optimal solution becomes
x_{1} = 5/3, x_{2} = 5/3 and x_{3} = 2/3 with maximum of z = 12
3.4.2 Changes Affecting Optimality
1. Changes in the objective function (or) changes in the cost coefficient c_{j}:
Changes in the coefficients of the objective function may take place due to a change in cost or profit of either basic variables or non-basic variables.
If X_{B} is the optimal basic feasible solution, then we have X_{B} = B^{−1}b. It is clear that X_{B} is independent of C and therefore, any change in some component c_{j} of C will not change X_{B}. X_{B} will always remain basic feasible solution.
However, the optimality condition z_{j} − c_{j} ≥ 0 for all j, which is satisfied by the current optimal solution, depends on the c_{j}. So, any change in some component c_{j} of C may affect the optimality of the current solution, though it will not affect the feasibility of the current solution.
The variation in the price vector C may be made by the following two ways:
- Variation in c_{j} ∉ C_{B} (or) variation in the coefficient c_{j} of the non-basic variable x_{j} in the objective function.
If c_{j} ∉ C_{B} changes to (c_{j} + Δc_{j}) such that Δc_{j} ≤ z_{j} − c_{j}, then the value of the objective function and the optimal solution of the problem remain unchanged. The range over which c_{j} ∉ C_{B} can vary while maintaining the optimality of the solution is given by −∞ < c_{j} + Δc_{j} ≤ z_{j}. Note that there is no lower bound on Δc_{j}.
- Variation in c_{k} ∈ C_{B} (or) variation in the coefficient c_{k} of the basic variable x_{k} in the objective function.
If c_{k} ∈ C_{B} changes to (c_{k} + Δc_{k}), then the range over which c_{k} ∈ C_{B} can vary so that the solution remains optimal is given by
If there is no a_{kj} > 0, there is no lower bound on Δc_{k}, and if there is no a_{kj} < 0, there is no upper bound on Δc_{k}. The value of the objective function will be further changed by an amount X_{B_{k}} · Δc_{k},
i.e.
z^{*} = z + X_{B_{k}} · Δc_{k}
Remark 1: Any change c_{j} ∉ C_{B} will affect only its net evaluation coefficients and not others.
2. Any change c_{j} ∈ C_{B} will affect its net evaluation coefficients and the value of the objective function.
Example
Consider the LPP
Maximise z = 4x_{1} + 6x_{2} + 2x_{3}
subject to
x_{1} + x_{2} + x_{3} ≤ 3
x_{1} + 4x_{2} + 7x_{3} ≤ 9
x_{1}, x_{2}, x_{3} ≥ 0
(a) Solve the LPP
(b) (i) Find the range on the value of the non-basic variable coefficient c_{3} such that the current solution still remains optimal.
(ii) What happens if c_{3} is increased to 12? What is the new optimal solution?
(c) (i) Find the range on basic variable coefficient c_{1} such that the current optimal solution remains optimal.
(ii) Find the effect of c_{1} = 8 on the optimal solution.
(d) Find the effect of changing the objective function to z = 2x_{1} + 8x_{2} + 4x_{3} on the current optimal solution.
Solution: (a) If we solve the LPP by simplex method (left as an exercise to the reader) the optimal simplex table is
TABLE 3.23
(Here, s_{1} and s_{2} are slack variables) and the optimal solution is
x_{1} = 1, x_{2} = 2, x_{3} = 0 and maximum of z = 4 × 1 + 6 × 2 + 2 × 0 = 16
(b) (i) The coefficient c_{3} corresponds to the non-basic variable x_{3}, whose value is zero in the current optimal solution since the associated cost for x_{3} is 2 (very low). Clearly, if c_{3} decreases further, it will have no effect on the current optimal solution. However, if c_{3} is increased beyond a certain value, it may become profitable to produce the third decision variable.
As a rule, the sensitivity of the current optimal solution is determined by studying how the current optimal solution changes as a result of changes in the input data. When a value of c_{3} changes, then the net evaluation value of x_{3} also changes. The current solution remains optimal as long as z_{3} − c_{3} remains non-negative.
From the optimal simplex table
or,
8 − c_{3} ≥ 0
or,
c_{3} ≤ 8
Hence, as long as the coefficient of x_{3} is less than or equal to 8, it is not profitable to produce it.
(ii) If c_{3} = 12, then z_{3} − c_{3} = 8 − 12 = −4
Since z_{3} − c_{3} is negative the current solution is not optimal. Since there is a negative net evaluation we continue the iteration process [left as an exercise to the reader]. The new optimal solution is
x_{1} = 2, x_{2} = 0, x_{3} = 1 with maximum of z = 4 × 2 + 6 × 0 + 12 × 1 = 20
(c) (i) When c_{1} decreases below a certain level, it may no longer remain profitable with respect to the decision variable x_{1}. On the other hand, if c_{1} increases beyond a certain value, it may becomes so profitable that it is most paying to produce only the product corresponding to the decision variable x_{1}. In either case the optimal solution will change and hence there is lower as well as upper limit on c_{1} within which the optimal production mix will not be affected.
It is clear from Table that any variation in c_{1} (and/or in c_{2} also) will not change z_{1} − c_{1} and z_{2} − c_{2} (∵ they remain zero), while z_{3} − c_{3}, z_{4} − c_{4}, z_{5} − c_{5} will change. However, as long as z_{j} − c_{j} (j = 3, 4, 5) remain non-negative, Table will remain optimal.
Now,
Hence,
−c_{1} + 10 ≥ 0 i.e. c_{1} ≤ 10
(4/3)c_{1} − 2 ≥ 0 i.e. c_{1} ≥ 3/2
−(1/3)c_{1} + 2 ≥ 0 i.e. c_{1} ≤ 6
∴ Range of c_{1} for the optimal solution to remain optimal is 3/2 ≤ c_{1} ≤ 6.
Thus, so long as c_{1} lies within these limits, the optimal solution in Table remains optimal. However, within this range, as the value of c_{1} is changed, maximum of z undergoes a change. For example, when c_{1} = 4, maximum of z = 4 × 1 + 6 × 2 = 16.
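The range 3/2 ≤ c₁ ≤ 6 can be reproduced mechanically. For each nonbasic column j, the perturbed net evaluation is (z_j − c_j) + Δ·y_{1j}, where y_{1j} is the x₁-row entry under column j in the optimal table; the (z_j − c_j, y_{1j}) pairs used below are derived here from the problem data rather than quoted from the table, so they should be read as assumptions consistent with the expressions in c₁ shown above.

```python
from fractions import Fraction as F

def basic_cost_range(c_k, rows):
    """Allowable range for a basic-variable cost c_k given pairs
    (z_j - c_j, y_kj) for the nonbasic columns j. Each pair imposes
    (z_j - c_j) + D * y_kj >= 0 on the change D in c_k."""
    lo, hi = None, None
    for zc, y in rows:
        if y > 0:                       # imposes a lower bound on D
            d = -zc / y
            lo = d if lo is None else max(lo, d)
        elif y < 0:                     # imposes an upper bound on D
            d = -zc / y
            hi = d if hi is None else min(hi, d)
    # Translate the D-range into a range for c_k (None = unbounded).
    return (None if lo is None else c_k + lo,
            None if hi is None else c_k + hi)

# For c1 = 4, the pairs for j = 3, 4, 5 are taken as (6, -1),
# (10/3, 4/3) and (2/3, -1/3), matching the linear expressions in c1.
rng = basic_cost_range(F(4), [(F(6), F(-1)), (F(10, 3), F(4, 3)),
                              (F(2, 3), F(-1, 3))])
```

The computed range is (3/2, 6), in agreement with the interval found above.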
(ii) When c_{1} = 8, the net evaluations become
z_{3} − c_{3} = 10 − 8 = 2
z_{4} − c_{4} = −2 + 4/3c_{1} = 26/3
z_{5} − c_{5} = −1/3c_{1} + 2 = −2/3
z_{1} − c_{1} = 0
z_{2} − c_{2} = 0
As the net evaluation corresponding to the slack variable s_{2} is −2/3, the current solution is not optimum. We can continue the simplex iteration process by taking s_{2} as the entering variable. The new optimal solution is x_{1} = 3, x_{2} = 0, x_{3} = 0 with maximum of z = 24 [left as an exercise to the reader].
(d) The effect on the optimal solution can be determined by checking whether the net evaluation row remains non-negative.
z_{1} − c_{1} = 0
z_{2} − c_{2} = 0
Hence, the optimal solution does not change. The optimal solution remains as x_{1} = 1, x_{2} = 2, x_{3} = 0 and maximum of z = 1 × 2 + 2 × 8 + 0 × 4 = 18.
Note that there is also an indication of alternate optimum solution since z_{4} − c_{4} = 0.
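Part (d) can be checked numerically. The updated columns y_j below are not quoted from the optimal table; they are recomputed here from the basis (x₁, x₂) of this example (B = [[1, 1], [1, 4]]) and should be read as derived assumptions.

```python
from fractions import Fraction as F

C_B = [F(2), F(8)]            # new costs of the basic variables x1, x2
cols = {                      # updated column y_j and new cost c_j
    "x3": ([F(-1), F(2)], F(4)),
    "s1": ([F(4, 3), F(-1, 3)], F(0)),
    "s2": ([F(-1, 3), F(1, 3)], F(0)),
}
# Recompute each net evaluation z_j - c_j = C_B . y_j - c_j.
net = {name: sum(cb * y for cb, y in zip(C_B, yj)) - cj
       for name, (yj, cj) in cols.items()}
# All net evaluations stay nonnegative, so x1 = 1, x2 = 2 stays optimal;
# the zero value for s1 signals the alternate optimum noted above.
still_optimal = all(v >= 0 for v in net.values())
```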
2. Variation in the coefficients a_{ij} of the constraints (or) variation in the component a_{ij} of the coefficient matrix A:
Case (i): If the coefficients a_{ij} of a non-basic variable x_{j}(a_{ij} ∉ B) gets changed in the current optimal solution, then this change will not affect the feasibility of the current solution but may affect the optimality of the current solution.
Case (ii): If the coefficient a_{ij} of a basic variable x_{j} (a_{ij} ∈ B) gets changed in the current optimal solution, then this may affect both the feasibility and the optimality of the current solution. As the basis matrix is affected, the change may affect all the quantities given in the current optimal Table. Under this situation, it may be better to solve the problem again.
Example
Consider the LPP
Maximise z = 45x_{1} + 100x_{2} + 30x_{3} + 50x_{4}
subject to
7x_{1} + 10x_{2} + 4x_{3} + 9x_{4} ≤ 1200
3x_{1} + 40x_{2} + x_{3} + x_{4} ≤ 800
x_{1}, x_{2}, x_{3}, x_{4} ≥ 0
Find the effect of the following on the optimal solution:
- If the x_{1} column in the problem changes from to
- If the x_{1} column changes from to
Solution: (a) Now, x_{1} is a non-basic variable and z_{1} − c_{1} = C_{B}B^{−1} P_{1} − c_{1}
= MP_{1} − c_{1} [where M is the simplex multiplier]
Now,
as s_{1}, s_{2} forms the initial basis.
= (22/3 2/3)
∴ z_{1} − c_{1} = (22/3 2/3)P_{1} − 45 = 29/3
As this value is non-negative, the original optimum solution remains optimum for the new solution also.
(b)
z_{1} − c_{1} = C_{B} B^{−1} P_{1} − c_{1}
= MP_{1} − c_{1}
= (22/3 2/3)P_{1} − 45 = −3
Hence, the solution can be improved by taking the variable x_{1} as the entering variable. By applying the simplex iteration process the new optimum solution is
x_{1} = 2000/7, x_{3} = 5600/27, x_{2} = x_{4} = 0 and maximum of z = 86000/7
3. Addition of a new activity (or variable)
If a new variable is introduced in a LPP whose optimal solution has been obtained, then the current solution of the problem will remain feasible. Addition of an extra variable x_{n+1} will introduce an extra column, say a_{n+1}, in the coefficient matrix A and an extra cost c_{n+1} in the cost vector C. Thus, the addition of this extra variable may affect the optimality of the problem. If z_{n+1} − c_{n+1} ≥ 0, then the solution of the original problem will remain optimal for the new problem also. Otherwise, we use the simplex method to obtain the optimal solution to the new problem.
Further, it can be noted that the addition of a new variable can never worsen the value of the objective function.
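Pricing out the new column requires only the simplex multipliers of the current optimal table. A sketch, using the multipliers M = (22/3, 2/3) from the example above and a hypothetical new column P = (9, 21) of cost 120 (the actual column of that example is not reproduced in the text):

```python
from fractions import Fraction as F

def price_new_variable(M, P_new, c_new):
    """Net evaluation of an added variable:
    z_{n+1} - c_{n+1} = M.P_{n+1} - c_{n+1}, where M = C_B B^{-1}
    are the simplex multipliers of the current optimal table. A
    nonnegative value means the old optimum remains optimal."""
    return sum(m * p for m, p in zip(M, P_new)) - c_new

# Hypothetical new column (9, 21) with cost 120: the net evaluation is
# 66 + 14 - 120 = -40, so the solution can be improved by entering
# the new variable into the basis.
zc = price_new_variable([F(22, 3), F(2, 3)], [F(9), F(21)], F(120))
```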
Example
Consider the LPP given in the above example. Then, the optimum simplex table is given by Table. If a new variable x_{5} is added to this problem with a column and c_{5} = 120, find the change in the optimal solution.
Solution: z_{5} − c_{5} = C_{B} B^{− 1} P_{5} − c_{5}
using table.
Since z_{5} − c_{5} = − 40 which is negative, the existing optimum solution can be improved. By applying the simplex iteration process (left as an exercise to the reader), the new optimum solution is
x_{3} = 400/3, x_{5} = 200/3 with maximum of z = 30 × 400/3 + 120 × 200/3 = 12000
4. Deletion of a variable
Deletion of a non-basic variable is a totally superfluous operation and does not affect the feasibility or optimality of the current optimal solution. But the deletion of a basic variable may affect the optimality of the current solution. To find the new optimum solution, assign a large penalty −M to the variable under consideration and then apply the regular simplex method to the modified simplex table.
Example
Consider the optimal Table of a maximisation problem.
TABLE 3.24
Find the change in the optimal solution, when the basic variable x_{2} is deleted.
Solution: Since this problem is of maximisation type we assign a cost − M to the variable x_{2}. The modified simplex table becomes
TABLE 3.25
First iteration: Introduce x_{4} and drop x_{2}.
TABLE 3.26
Since all z_{j} − c_{j} ≥ 0, the current solution is optimum. The optimal solution to the new problem is
x_{1} = 0, x_{2} = 0 and x_{3} = 4 with maximum of z = 28.
5. Deletion of a constraint
The constraint to be deleted may be either binding or unbinding (redundant) on the optimal solution. The deletion of an unbinding constraint can only enlarge the feasible region but will not affect the optimal solution. Moreover, if the constraint under consideration has a slack or surplus variable of nonzero value in the basis, it cannot be binding and hence will not affect the optimal solution.
The deletion of a binding constraint will, however, cause a post-optimality problem. The simplest way to proceed in this case is via the addition of one or two new variables.
3.5 PARAMETRIC PROGRAMMING
3.5.1 Introduction
Sensitivity analysis discussed the effect of discrete changes in the input coefficients of the linear programming problem on its optimal solution. However, if there is a continuous change in the values of these coefficients, none of the results derived in that section are applicable. Parametric linear programming investigates the effect of predetermined continuous variations of these coefficients on the optimal solution. It is simply an extension of sensitivity analysis and aims at finding the various basic solutions that become optimal, one after the other, as the coefficients of the problem change continuously. The coefficients change as a linear function of a single parameter, hence the name parametric linear programming for this computational technique. As in sensitivity analysis, the purpose of this technique is to reduce the additional computations required to obtain the changes in the optimal solution. Here, we consider only two types of parametric problems.
- Parametric cost problem in which cost vector c varies linearly as a function of parameter μ.
- Parametric right hand side problem in which the requirement vector b varies linearly as a function of parameter μ.
3.5.2 Parametric Cost Problem
Consider the LPP
Minimise z = CX
subject to
AX = b,
X ≥ 0,
where C is the given cost vector.
Let this cost vector change to C + μC′. Then the parametric cost problem is
Minimise z = (C + μC′)X
subject to
AX = b,
X ≥ 0
where C′ is the given predetermined cost variation vector and μ is an unknown (positive or negative) parameter. As μ changes, the cost coefficients of all variables also change. We want to find the family of optimal solutions as μ changes from −∞ to +∞.
The problem can be converted into maximisation type and can be solved by using the simplex method and sensitivity analysis. When μ = 0, the parametric cost problem reduces to the original linear programming problem. Let B and X_{B} represent the optimal basis matrix and the optimum basic feasible solution, respectively, for μ = 0. The net evaluations or relative cost coefficients are all non-negative and are given by z_{j} − c_{j} = C_{B}y_{j} − c_{j}, where C_{B} is the cost vector of the basic variables and y_{j} is the jth column (corresponding to the variable x_{j}) in the optimal Table.
As μ changes from zero to a positive or negative value, the feasible region and the values of the basic variables X_{B} remain unaltered, but the relative cost coefficients change. For any variable x_{j}, the relative cost coefficient is given by
(z_{j} − c_{j})(μ) = (z_{j} − c_{j}) + μ(z′_{j} − c′_{j})
Since the vectors C and C′ are known, (z_{j} − c_{j}) and (z′_{j} − c′_{j}) can be determined. For the solution to be optimum, (z_{j} − c_{j})(μ) ≥ 0, that is, (z_{j} − c_{j}) + μ(z′_{j} − c′_{j}) ≥ 0. In other words, for a given solution, we can determine the range for μ within which the solution remains optimal.
Example
Consider the linear programming problem
Maximise z = 4x_{1} + 6x_{2} + 2x_{3}
subject to
x_{1} + x_{2} + x_{3} ≤ 3,
x_{1} + 4x_{2} + 7x_{3} ≤ 9
x_{1}, x_{2}, x_{3} ≥ 0
If we solve this problem by regular simplex method then the optimal simplex table is
TABLE 3.27
(where s_{1} and s_{2} are slack variables).
Solve this problem if the variation cost vector C′ = (2 −2 2 0 0). Identify all critical values of the parameter μ.
Solution: The given parametric cost problem is
Maximise z = (4 + 2μ)x_{1} + (6 − 2μ)x_{2} + (2 + 2μ)x_{3} + 0x_{4} + 0x_{5}
subject to
x_{1} + x_{2} + x_{3} + s_{1} = 3,
x_{1} + 4x_{2} + 7x_{3} + s_{2} = 9,
x_{1}, x_{2}, x_{3}, s_{1}, s_{2} ≥ 0
When μ = 0, the problem reduces to the original linear programming problem, whose optimal solution is given in Table 3.27. For values of μ other than zero, the relative profit coefficients become linear functions of μ. To compute them we first add a new relative profit row called z′_{j} − c′_{j} to Table 3.27. This is shown in the following Table.
TABLE 3.28
Note that z′_{j} − c′_{j} is calculated similarly to z_{j} − c_{j} except that C is replaced by C′.
The above Table represents a basic feasible solution for the given parametric cost problem. It is given by x_{1} = 1, x_{2} = 2, x_{3} = s_{1} = s_{2} = 0 and the value of the objective function z(μ) = z + μz′ = 16 − 2μ. The relative profit coefficients, which are linear functions of μ, are given by
The above Table will be optimal if (z_{j} − c_{j}) + μ(z′_{j} − c′_{j}) ≥ 0 for j = 3, 4, 5. Thus, we can determine the range of μ for which the above Table remains optimal as follows:
Thus, x_{1} = 1, x_{2} = 2, x_{3} = x_{4} = x_{5} = 0 is an optimal solution for the given parametric problem for all values of μ between −1 and 1/2 and maximum of z = 16 − 2μ.
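The critical values just found can be reproduced mechanically from the pairs (z_j − c_j, z′_j − c′_j) of the nonbasic columns. Since Table 3.28 is not reproduced here, the pairs used below are recomputed from the problem data and should be read as derived assumptions.

```python
from fractions import Fraction as F

def mu_range(pairs):
    """Range of mu for which (z_j - c_j) + mu*(z'_j - c'_j) >= 0 holds
    for every nonbasic j; pairs = [(z_j - c_j, z'_j - c'_j), ...].
    None in the result means unbounded on that side."""
    lo, hi = None, None
    for zc, zcp in pairs:
        if zcp > 0:                    # imposes a lower bound on mu
            b = -zc / zcp
            lo = b if lo is None else max(lo, b)
        elif zcp < 0:                  # imposes an upper bound on mu
            b = -zc / zcp
            hi = b if hi is None else min(hi, b)
    return (lo, hi)

# Pairs for j = 3, 4, 5 (recomputed, see lead-in): (6, -8),
# (10/3, 10/3) and (2/3, -4/3). Only j = 4 bounds mu from below and
# only j = 5 is binding from above, giving -1 <= mu <= 1/2.
rng = mu_range([(F(6), F(-8)), (F(10, 3), F(10, 3)), (F(2, 3), F(-4, 3))])
```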
For μ > 1/2, the relative profit coefficient of the non-basic variable s_{2} becomes negative and Table 3.28 no longer remains optimal. The regular simplex method is used to iterate towards optimality. The entering variable is s_{2} and the leaving variable is x_{2}.
TABLE 3.29
The above Table will be optimum if
(z_{j} − c_{j})(μ) ≥ 0 for j = 2, 3, 4.
Now, ( − )(μ) = −2 + 4μ ≥ 0 ∴μ ≥ 1/2
(z_{3} − c_{3})(μ) = 2 + 0μ ≥ 0, which is true for all μ
(z_{4} − c_{4})(μ) = 4 + 2μ ≥ 0 ∴ μ ≥ −1/2
∴ For all μ≥1/2, the optimal solution is given by
x_{1} = 3, x_{2} = x_{3} = s_{1} = 0, s_{2} = 6 and Maximum of z = 12 + 6μ.
For μ < −1, the relative profit coefficient of the non-basic variable s_{1} in Table 3.28 becomes negative. Then, Table 3.28 is not optimal.
s_{1} enters into the basis and x_{1} leaves the basis.
TABLE 3.30
The above Table will be optimal if
(z_{j} − c_{j}) (μ) ≥ 0 for j = 1, 3, 5.
Now,
∴ For all μ ≤ −1, the optimal solution is given by
x_{1} = 0, x_{2} = 9/4, x_{3} = 0, s_{1} = 3/4, s_{2} = 0 and maximum of z = 27/2 −9/2 μ.
Tables 3.28, 3.29 and 3.30 provide the families of optimal solutions for −1 ≤ μ ≤ 1/2, μ ≥ 1/2 and μ ≤ −1, respectively.
3.5.3 Parametric Right Hand Side Problem
The right hand side constants in a linear programming problem represent the limits on the resources and the outputs. In some practical problems all the resources are not independent of one another. A shortage of one resource may cause a shortage of the other resources at varying levels. The same is true for outputs also. For example, consider a firm manufacturing electrical appliances. A shortage in electric power will decrease the demand of all the electric items produced in varying degrees, depending upon the electric energy consumed by them. In all such problems, we are to consider simultaneous changes in the right hand side constants, which are functions of one parameter, and study how the optimal solution is affected by these changes.
Let the linear programming problem before parameterisation be
Maximise z = CX,
subject to
AX = b,
X ≥ 0
where b is the known requirement (right hand side) vector. Let this requirement vector be changed to b + μb′ so that parametric right hand side problem becomes
Maximise z = CX
subject to
AX = b + μb′, X ≥ 0
where b′ is the given and predetermined variation vector and μ is an unknown parameter. As μ changes, the right hand constants also change. We wish to determine the family of optimal solutions as μ changes from − ∞ to + ∞. When μ = 0, the parametric problem reduces to the original linear programming problem. Simplex method is used to find its optimal solution.
Let B and X_{B} represent the optimal basis matrix and the optimal feasible solution, respectively for μ = 0. Then, X_{B} = B^{−1}b. As μ changes from zero to a positive or negative value, the value of the basic variables change and the new values are given by
X_{B}(μ) = B^{−1}(b + μb′) = B^{−1}b + μB^{−1}b′
A change in μ has no effect on the values of the relative profit coefficients z_{j} − c_{j}, that is, the z_{j} − c_{j} values remain non-negative. For a given basis matrix B, the values of B^{−1}b and B^{−1}b′ can be calculated. The solution X_{B}(μ) = B^{−1}b + μB^{−1}b′ is feasible and optimal as long as B^{−1}b + μB^{−1}b′ ≥ 0. In other words, for a given solution we can determine the range for μ within which the solution remains optimal.
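This range computation can be sketched directly. The illustration uses B⁻¹b = (1, 2) and B⁻¹b′ = (5, −2), values consistent with the worked example that follows:

```python
from fractions import Fraction as F

def rhs_mu_range(xb, xb_prime):
    """Range of mu keeping X_B(mu) = B^{-1}b + mu*B^{-1}b' >= 0
    componentwise; within this range the current basis stays optimal.
    None in the result means unbounded on that side."""
    lo, hi = None, None
    for v, vp in zip(xb, xb_prime):
        if vp > 0:                     # imposes a lower bound on mu
            b = -v / vp
            lo = b if lo is None else max(lo, b)
        elif vp < 0:                   # imposes an upper bound on mu
            b = -v / vp
            hi = b if hi is None else min(hi, b)
    return (lo, hi)

# With B^{-1}b = (1, 2) and B^{-1}b' = (5, -2), the basic variables are
# 1 + 5*mu and 2 - 2*mu, so the basis stays feasible, hence optimal,
# for -1/5 <= mu <= 1.
rng = rhs_mu_range([F(1), F(2)], [F(5), F(-2)])
```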
Example
Consider the linear programming problem
Maximise z = 4x_{1} + 6x_{2} + 2x_{3}
x_{1} + x_{2} + x_{3} ≤ 3
x_{1} + 4x_{2} + 7x_{3} ≤ 9
x_{1}, x_{2}, x_{3} ≥ 0
If we solve this problem by simplex method by taking s_{1} and s_{2} as the slack variables then the optimal simplex table is
TABLE 3.31
Solve the problem if the variation right hand side vector is b′ = (3, −3). Perform the complete parametric analysis and identify all critical values of the parameter μ.
Solution: The given parametric right hand side problem is
Maximise z = 4x_{1} + 6x_{2} + 2x_{3} + 0s_{1} + 0s_{2}
subject to
x_{1} + x_{2} + x_{3} + s_{1} = 3 + 3μ
x_{1} + 4x_{2} + 7x_{3} + s_{2} = 9 − 3μ
x_{1}, x_{2}, x_{3}, s_{1}, s_{2} ≥ 0
When μ = 0, the problem reduces to the original linear programming problem whose optimal solution is given in Table 3.31. For values of μ other than zero, the values of the right hand side constants change because of the variation vector b′. The vectors B^{−1}b and B^{−1}b′ are computed as follows:
Hence, the new Table is
TABLE 3.32
For a fixed μ, the values of the basic variables in the above Table are given by
x_{1} = 1 + 5μ
x_{2} = 2 − 2μ
The z_{j} − c_{j} values are not affected as long as the basis consists of the variables x_{1} and x_{2}. As μ changes, the values of the basic variables x_{1} and x_{2} change, and the above Table remains optimal as long as the basis (x_{1}, x_{2}) remains feasible. In other words, the above Table remains optimal as long as
x_{1} = 1 + 5μ ≥ 0 or μ ≥ −1/5
x_{2} = 2 − 2μ ≥ 0 or μ ≤ 1.
Thus, the above Table remains optimal as μ varies from − 1/5 to 1. Thus, for all −1/5 ≤ μ ≤ 1, the optimal solution is given by
x_{1} = 1 + 5μ, x_{2} = 2 − 2μ, x_{3} = s_{1} = s_{2} = 0, and maximum of z = 16 + 8 μ.
For μ > 1, the basic variable x_{2} becomes negative. The dual simplex method can, therefore, be applied to find the new optimal solution for μ > 1. Evidently, x_{2} is the variable that leaves the basis. The ratios for the non-basic variables are −3, 10, −2. Thus, variable s_{1} is the entering variable. The next iteration is
TABLE 3.33
The basic solution given by the above Table is
x_{1} = 9 − 3 μ, x_{2} = 0, x_{3} = 0, s_{1} = −6 + 6 μ, s_{2} = 0, maximum of z = 36 − 12μ.
The solution is optimal as long as the basic variables x_{1} and s_{1} remain non-negative. That is, as long as
x_{1} = 9 − 3μ ≥ 0 or μ ≤ 3,
s_{1} = −6 + 6μ ≥ 0 or μ ≥ 1
The above solution is optimal for all 1 ≤ μ ≤ 3.
For μ > 3, the basic variable x_{1} becomes negative. As there is no negative coefficient in the first row, the solution is infeasible. Hence, there is no optimal solution to the problem for all μ > 3.
For μ ≤ −1/5, the basic variable x_{1} in Table 3.32 becomes negative. The dual simplex method can therefore be applied to find the new optimum solution for μ ≤ −1/5. Evidently, x_{1} is the variable that leaves the basis and s_{2} is the variable that enters into the basis.
TABLE 3.34
The basic solution given by the above Table is x_{1} = 0, x_{2} = 3 + 3μ, x_{3} = 0, s_{1} = 0, s_{2} = − 3 − 15μ and maximum of z = 18 + 18μ.
The solution is optimal as long as
x_{2} = 3 + 3μ ≥ 0 or μ ≥ −1
s_{2} = − 3 − 15μ ≥ 0 or μ ≤ −1/5.
The above solution is optimal for all −1 ≤ μ ≤ −1/5. For μ < −1, the basic variable x_{2} in the above Table becomes negative. As there is no negative coefficient in the second row, the solution is infeasible. Hence, there exists no optimal solution to the problem for all μ < −1.
3.6 GOAL PROGRAMMING
3.6.1 Introduction
A difficulty with a linear programming problem is that the objective function is measured in only one dimension such as profit or loss or production capacity and so on. It is impossible in linear programming to have multiple goals unless they can be measured in the same units.
The term goal programming was introduced by A. Charnes and W.W. Cooper in 1961. The concept emerged as a way of dealing with unsolvable linear programming problems. Organisational objectives vary according to the characteristics, types, philosophy of management, and particular environmental conditions of the organisation. Profit maximisation, which is regarded as the sole objective of the business firm in classical economic theory, is one of the most widely accepted goals of management. However, in the present dynamic business environment, profit maximisation is not the only objective of management. In fact, business firms do, on quite a few occasions, place higher priorities on goals other than profit maximisation. We have seen, for example, firms placing great emphasis on social responsibilities, social contribution, public relations, industrial and labour relations and so on. Such goals are sought because of outside pressure or voluntary management decisions.
Thus, we see management has multiple conflicting objectives to achieve in the present day business scenario. This implies that the decision criteria should be multi-dimensional and the decision involves multiple goals. In this context goal programming assumes greater importance as a powerful quantitative technique capable of handling multiple decision criteria.
3.6.2 Concepts of Goal Programming
The linear programming technique that has been applied quite extensively to various decision problems has a limited value for problems involving multiple goals. The difficulty in using linear programming lies in the uni-dimensionality of the objective function, which requires maximisation or minimisation of one objective function, an oversimplification of reality. However, in real life situations multiple goals and subgoals must be taken into account. Various goals may be expressed in different units of measurement such as rupees, hours, tons and others. Often, multiple goals of management are in conflict or are achievable only at the expense of other goals. Furthermore, these goals are incommensurable. Thus, the solution of the problem requires the establishment of a hierarchy of importance among these incompatible goals so that the lower-order goals are considered only after the higher-order goals are satisfied or have reached the point beyond which no further improvement is desired. If management can provide an ordinal ranking of goals in terms of their contributions or importance to the organisation and all goal constraints are in linear relationships, the problem can be solved by goal programming.
How is an objective function to be determined and expressed in algebraic form when there exist multiple, incommensurable, and conflicting goals? Goal programming provides an answer in this case. In goal programming, instead of trying to maximise or minimise the objective criterion directly as in linear programming, the deviations between goals and what can be achieved within the given set of constraints are minimised.
In goal programming, the objective function contains primarily the deviational variables that represent each goal or subgoal. The deviational variable is represented in two dimensions in the objective function, a positive and a negative deviation from each subgoal and/or constraint. Then, the objective function becomes the minimisation of these deviations, based on the relative importance or priority assigned to them.
The solution of linear programming is limited by quantification. Unless the management can accurately quantify the relationships of the variables in cardinal numbers, the solution is only as good as its inputs. The distinguishing characteristic of goal programming is that it allows for ordinal solution. Stated differently, management may be unable to specify the cost or utility of a goal or a subgoal, but often upper or lower limits can be stated for each subgoal. The manager is usually called upon to specify the priority of the desired attainment of each goal or subgoal and rank them in ordinal sequence for decision analysis. The true value of goal programming is, therefore, the solution of problems involving multiple, conflicting goals according to the manager's priority structure.
3.6.3 Formulation of Goal Programming Problem
It is important to consider formulation of the model before getting into the details of goal programming solution. In fact, the most difficult problem in the application of management science to decision analysis is the formulation of the model for the practical problem in question. Model formulation is the process of transforming a real-world decision problem into a management science model. While the advent of micro-computers with powerful software has now made the solution an easy task, the problem formulation requires a great deal of effort in terms of conceptualisation. One key to successful application of goal programming is the ability to recognise when a problem can be solved by goal programming and to formulate the corresponding model.
Steps of goal programming formulation
Step 1: Define variables and constants
Define the choice variables and the right hand side constants. The right hand side constants may be either available resources or specified goal levels. This requires a careful analysis of the problem in order to identify all relevant variables that have some effect on the set of goals stated by the decision-maker.
Step 2: Formulation of constraints
Through an analysis of the relationships among the choice variables and their relationships to the goals, a set of constraints should be formulated. A constraint may be either a system constraint representing a relationship among the variables, or a goal constraint that defines the relationship between the choice variables and the goals. It should be remembered that if there is no deviational variable to minimise in order to achieve a certain goal, a new constraint must be created. Also, if further refinement of goals and priorities is required, it may be facilitated by decomposing certain deviational variables.
Step 3: Developing of objective function
Through the analysis of the decision-maker's goal structure, the objective function must be developed. First, the pre-emptive priority factors should be assigned to certain deviational variables that are relevant to goal attainment. Second, if necessary, differential weights must be assigned to deviational variables at the same priority level. It is imperative that goals at the same priority level be commensurable.
Example 1
The manufacturing plant of an electronics firm produces two types of television sets: colour and black and white. According to past experience, production of either a colour or a black and white set requires an average of one hour in the plant. The plant has a normal production capacity of 40 hours a week. The marketing department reports that, because of limited sales opportunity, the maximum numbers of colour and black and white sets that can be sold are 24 and 30, respectively, for the week. The gross margin from the sale of a colour set is 80, whereas it is 40 from a black and white set.
The chairman of the company has set the following goals as arranged in the order of their importance to the organisation:
- First, he wants to avoid any underutilisation of normal production capacity (no layoffs of production workers).
- Second, he wants to sell as many television sets as possible. Since the gross margin from the sale of a colour television set is twice the amount from a black and white set, he has twice as much desire to achieve sales for colour television sets as for black and white sets.
- Third, the chairman wants to minimise the overtime operation of the plant as much as possible.
Formulate this as a goal programming problem.
Solution: First, we note that the gross margin shows that the differential weight of the colour set is two times that of the black and white set. Let x_{1} denote the number of colour television sets and x_{2} denote the number of black and white sets to be produced. Then,
- The production capacity of both types of sets is given as x_{1} + x_{2} + d_{1}^{−} − d_{1}^{+} = 40, where d_{1}^{−} and d_{1}^{+} are the deviational variables representing underutilisation of normal operation and overtime operation of the plant, respectively.
- The sales capacities for the two types of sets are
x_{1} + d_{2}^{−} − d_{2}^{+} = 24
x_{2} + d_{3}^{−} − d_{3}^{+} = 30
where d_{2}^{−} and d_{3}^{−} are the deviational variables representing underachievement of the sales goals, and d_{2}^{+} and d_{3}^{+} are the deviational variables representing overachievement of the sales goals.
- If p_{1}, p_{2}, and p_{3} designate the priorities of the goals, the complete formulation of the goal programme is as follows:
Minimise z = p_{1}d_{1}^{−} + 2p_{2}d_{2}^{−} + p_{2}d_{3}^{−} + p_{3}d_{1}^{+}
subject to the constraints
x_{1} + x_{2} + d_{1}^{−} − d_{1}^{+} = 40
x_{1} + d_{2}^{−} − d_{2}^{+} = 24
x_{2} + d_{3}^{−} − d_{3}^{+} = 30
x_{1}, x_{2}, d_{i}^{−}, d_{i}^{+} ≥ 0, i = 1, 2, 3
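Because the priorities are pre-emptive, the formulation can be sanity-checked by comparing deviation vectors lexicographically over a grid of production plans. The sketch below is illustrative only (it is not the simplex procedure used later in the text); it recovers the plan x_{1} = 24, x_{2} = 30 with d_{1}^{+} = 14 hours of overtime.

```python
# Lexicographic check of the TV-set goal programme (illustrative sketch).
def deviations(x1, x2):
    d1m = max(0, 40 - (x1 + x2))      # underutilisation of normal capacity
    d1p = max(0, (x1 + x2) - 40)      # overtime operation
    d2m = max(0, 24 - x1)             # unmet colour-set sales goal
    d3m = max(0, 30 - x2)             # unmet black-and-white sales goal
    return (d1m, 2 * d2m + d3m, d1p)  # priority vector (P1, weighted P2, P3)

# The sales goals cap the useful grid at x1 <= 24, x2 <= 30.
best = min((deviations(x1, x2), x1, x2)
           for x1 in range(25) for x2 in range(31))
print(best)  # ((0, 0, 14), 24, 30)
```

Tuple comparison in Python is lexicographic, so minimising the deviation tuple enforces the priority order P1 before P2 before P3.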
Example 2
A textile company produces two types of materials, A and B. Material A is produced according to direct orders from furniture manufacturers. Material B is distributed to retail stores. The average rates of production for materials A and B are identical at 1000 metres/hour. By running two shifts, the operational capacity of the plant is 80 hours per week.
The marketing department reports that the maximum estimated sales for the following week is 70,000 metres of material A and 45,000 metres of material B. According to the accounting department the profit from a metre of material A is 2.50 and from a metre of material B is 1.50.
The management of the company decides that a stable employment level is a primary goal for the firm. Therefore, whenever there is a demand exceeding normal production capacity, the management simply expands production capacity by providing overtime. However, management feels that overtime operation of the plant of more than 10 hours per week should be avoided because of the accelerating costs. The management has the following goals in the order of importance:
- The first goal is to avoid any underutilisation of production capacity (i.e. to maintain stable employment at normal capacity).
- The second goal is to limit the overtime operation of the plant to 10 hours.
- The third goal is to achieve the sales goals of 70,000 metres of material A and 45,000 metres of material B.
- The last goal is to minimise the overtime operation of the plant as much as possible.
Formulate this as a goal programming problem to help the management find the best solution.
Solution:
Constraint for production capacity:
The production capacity is limited to 80 hours at present by running two shifts. However, since overtime of the plant is allowed to a certain extent, the constraint may be written as
x_{1} + x_{2} + d_{1}^{−} − d_{1}^{+} = 80
where,
x_{1} — Number of hours used for producing material A.
x_{2} — Number of hours used for producing material B.
d_{1}^{−} — Underutilisation of production capacity as set at 80 hours of operation.
d_{1}^{+} — Overutilisation of normal production capacity beyond 80 hours.
The maximum sales for material A and material B are 70,000 and 45,000 metres, respectively. Hence, it is assumed that overachievement of sales beyond the maximum limits is impossible. Then, the sales constraints will be (as before, x_{1} and x_{2} are expressed in thousands of metres, which equal hours of production)
x_{1} + d_{2}^{−} = 70
x_{2} + d_{3}^{−} = 45
where,
d_{2}^{−} = underachievement of sales goal of material A.
d_{3}^{−} = underachievement of sales goal of material B.
Constraint for overtime operation:
From the case itself, only production and sales constraints can be formulated. However, the analysis of goals indicates that overtime operation of the plant is to be minimised to 10 hours or less. To solve the problem by goal programming, we need a deviational variable that represents the overtime operation of the plant beyond 10 hours. By minimising this deviational variable to zero we can achieve the goal. Since there is no such deviational variable in the three constraints presented above, we must create a new constraint.
The overtime operation of the plant should be limited to 10 hours or less. However, it may not be possible to limit the overtime operation to 10 hours or less in order to meet higher order goals. Therefore, d_{1}^{+} can be smaller than, equal to, or greater than 10. The constraint regarding overtime can be expressed as
d_{1}^{+} + d_{4}^{−} − d_{4}^{+} = 10
where,
d_{4}^{−} = negative deviation of overtime operation from 10 hours.
d_{4}^{+} = overtime operation beyond 10 hours.
One metre of material A gives a profit of 2.50 and one metre of B gives a profit of 1.50. Since the production rate is the same for both A and B, namely, 1000 metres per hour, the hourly profits of A and B are in the ratio 2.50 : 1.50, or 5 : 3. Hence, it is appropriate to assign these as differential weights to the third goal. The differential weights imply that management is relatively more concerned with the achievement of the sales goal for material A than with that for material B.
Hence, the problem is
Minimise z = p_{1}d_{1}^{−} + p_{2}d_{4}^{+} + p_{3}(5d_{2}^{−} + 3d_{3}^{−}) + p_{4}d_{1}^{+}
subject to
x_{1} + x_{2} + d_{1}^{−} − d_{1}^{+} = 80
x_{1} + d_{2}^{−} = 70
x_{2} + d_{3}^{−} = 45
d_{1}^{+} + d_{4}^{−} − d_{4}^{+} = 10
x_{1}, x_{2} and all deviational variables ≥ 0
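As with the previous example, this pre-emptive model can be checked by a lexicographic grid search over whole hours of production. The sketch below is illustrative (not the book's procedure); the hour bounds used for the grid come from the sales limits.

```python
# Lexicographic check of the textile goal programme (illustrative sketch).
def deviations(x1, x2):                # x1, x2 in hours of production
    hours = x1 + x2
    d1m = max(0, 80 - hours)           # underutilisation of capacity
    d1p = max(0, hours - 80)           # overtime
    d2m = max(0, 70 - x1)              # unmet sales goal for material A
    d3m = max(0, 45 - x2)              # unmet sales goal for material B
    d4p = max(0, d1p - 10)             # overtime beyond the 10-hour limit
    return (d1m, d4p, 5 * d2m + 3 * d3m, d1p)   # (P1, P2, P3, P4)

best = min((deviations(x1, x2), x1, x2)
           for x1 in range(71) for x2 in range(46))
print(best)  # ((0, 0, 75, 10), 70, 20)
```

That is, produce 70,000 metres of A and 20,000 metres of B, using the full 10 hours of permitted overtime; the sales goal for B is underachieved by 25,000 metres.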
The simplex method of goal programming The simplex method can be used to solve a goal programming problem.
Example 1
A company manufactures two products, radios and transistors, which must be processed through the assembly and finishing departments. The assembly department has 90 hours available; the finishing department can handle up to 72 hours of work. Manufacturing one radio requires 6 hours in assembly and 3 hours in finishing. Each transistor requires 3 hours in assembly and 6 hours in finishing. If the profit is 120 per radio and 90 per transistor, determine the best combination of radios and transistors to realise a target profit of 2,100.
Solution: This is a single goal (profit) problem. Let
d^{−} = amount by which profit goal is underachieved.
d^{+} = amount by which profit goal is overachieved.
The given problem can be expressed as the following goal problem:
Minimise z = d^{−} (under achievement of the profit target)
subject to
120x_{1} + 90x_{2} + d^{−} − d^{+} = 2100
(∵profit earned + underachievement − overachievement = target)
Adding slack variables, the assembly and finishing constraints can be expressed as
6x_{1} + 3x_{2} + s_{1} = 90
3x_{1} + 6x_{2} + s_{2} = 72,
x_{1}, x_{2}, s_{1}, s_{2} ≥ 0
If we convert the problem to maximisation type, the objective function becomes maximise z^{*} = −d^{−}.
Solution by simplex method:
Initial Table:
TABLE 3.35
First iteration: Introduce x_{1} and drop s_{1}.
TABLE 3.36
Second iteration: Introduce x_{2} and drop s_{2}.
TABLE 3.37
Since all z_{j} − c_{j} ≥ 0, the current solution is optimum and the optimal solution is x_{1} = 12, x_{2} = 6 and minimum of z = 120.
Minimum of z = 120 means that the target profit of 2,100 is underachieved by 120; that is, the profit actually earned (by manufacturing 12 radios and 6 transistors) is 2,100 − 120 = 1,980.
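The result of Tables 3.35–3.37 can be confirmed by brute force over integer production plans. This is a check of the arithmetic, not the simplex method itself.

```python
# Brute-force check of the single-goal programme (illustrative sketch).
def underachievement(x1, x2):
    if 6 * x1 + 3 * x2 > 90 or 3 * x1 + 6 * x2 > 72:
        return None                                  # violates departmental hours
    return max(0, 2100 - (120 * x1 + 90 * x2))       # d- for this plan

# Hours bounds cap the grid at x1 <= 15 and x2 <= 12.
best = min((underachievement(x1, x2), x1, x2)
           for x1 in range(16) for x2 in range(13)
           if underachievement(x1, x2) is not None)
print(best)  # (120, 12, 6): d- = 120 at 12 radios and 6 transistors
```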
Example 2
Suppose that in Example 1 the company sets two equally ranked goals: one to reach a profit goal of 1,500 and the other to meet a radio goal of 10 sets. Find the optimal solution.
Solution: Since the two goals are equally ranked, a deviation of 1 from the profit target is as important (in the goal programming model) as a deviation of 1 radio set. Let
d_{p}^{−} = amount by which the profit goal is underachieved.
d_{p}^{+} = amount by which the profit goal is overachieved.
d_{r}^{−} = amount by which the radio goal is underachieved.
d_{r}^{+} = amount by which the radio goal is overachieved.
Then, the goal programming model becomes
Minimise z = d_{p}^{−} + d_{r}^{−}
subject to
120x_{1} + 90x_{2} + d_{p}^{−} − d_{p}^{+} = 1500
x_{1} + d_{r}^{−} − d_{r}^{+} = 10
x_{1}, x_{2} and all deviational variables ≥ 0
Converting the problem to maximisation, the objective function becomes maximise z^{*} = −d_{p}^{−} − d_{r}^{−}.
Adding slack variables, the assembly and finishing constraints can be written as
6x_{1} + 3x_{2} + s_{1} = 90
3x_{1} + 6x_{2} + s_{2} = 72
Note that d_{p}^{−} and d_{r}^{−} are the basic variables corresponding to the first and second constraints, respectively.
TABLE 3.38
First iteration: Introduce x_{1} and drop d_{r}^{−}.
TABLE 3.39
Second iteration: Introduce d_{r}^{+} and drop d_{p}^{−}.
TABLE 3.40
It may be noted from the above Table that the goal of 10 radios is achieved and in fact improved upon (25/2 − 10 = 5/2); 5/2 appears in this Table as d_{r}^{+} (overachievement of radios). The profit goal of 1,500 is also reached, since both d_{p}^{−} and d_{p}^{+} are zero (they are not in the final solution); therefore, the profit is exactly 1,500.
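A quick arithmetic check of this final solution (using the values claimed above) confirms that both goals are attained with zero underachievement:

```python
# Check of the Table 3.40 solution: x1 = 25/2, x2 = 0, dr+ = 5/2.
x1, x2 = 25/2, 0
assert 6 * x1 + 3 * x2 <= 90 and 3 * x1 + 6 * x2 <= 72   # departmental hours respected
assert 120 * x1 + 90 * x2 == 1500    # profit goal met exactly, so dp- = dp+ = 0
assert x1 - 10 == 5/2                # radio goal overachieved by dr+ = 5/2
print("goal deviations dp- and dr- are both zero")
```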
3.6.4 Graphical Method of Goal Programming
The graphical method of solving goal programming problem is quite similar to the graphical method of linear programming. In linear programming, the method is used to maximise an objective function with one goal, whereas in goal programming, it is used to minimise the total deviation from a set of multiple goals. The deviation from the goal with the highest priority is minimised to the fullest extent possible before deviation from the next goal is minimised.
Example 1
A manufacturing firm produces two types of products, A and B. According to past experience, production of either product A or product B requires an average of one hour in the plant. The plant has a normal production capacity of 400 hours a month. The marketing department of the firm reports that, because of the limited market, the maximum numbers of product A and product B that can be sold in a month are 240 and 300, respectively. The net profits from the sale of product A and product B are 800 and 400, respectively. The manager of the firm has set the following goals, arranged in the order of importance.
p_{1}: He wants to avoid any underutilisation of normal production capacity.
p_{2}: He wants to sell the maximum possible units of product A and product B. Since the net profit from the sale of product A is twice the amount from that of product B, the manager has twice as much desire to achieve sales for product A as for product B.
p_{3}: He wants to minimise the overtime operation of the plants as much as possible.
Solution: Formulation:
Let x_{1} and x_{2} be the numbers of units of product A and product B to be produced, respectively. Since overtime operation is allowed, the plant capacity can be expressed as
x_{1} + x_{2} + d_{1}^{−} − d_{1}^{+} = 400
where d_{1}^{−} is the underutilisation of production capacity and d_{1}^{+} is the overtime operation of the normal production capacity. Since the sales goals given are the maximum possible sales volumes, positive deviations will not appear in the sales constraints. The sales constraints can be expressed as:
x_{1} + d_{2}^{−} = 240
x_{2} + d_{3}^{−} = 300
where,
d_{2}^{−} = underachievement of sales goal for product A
d_{3}^{−} = underachievement of sales goal for product B
Hence, the complete model is
Minimise z = p_{1}d_{1}^{−} + p_{2}(2d_{2}^{−} + d_{3}^{−}) + p_{3}d_{1}^{+}
subject to
x_{1} + x_{2} + d_{1}^{−} − d_{1}^{+} = 400
x_{1} + d_{2}^{−} = 240
x_{2} + d_{3}^{−} = 300
x_{1}, x_{2} and all deviational variables ≥ 0
First plot all the goal constraints on a graph as shown in Fig. 3.1. Since underutilisation and overutilisation of the plant capacity are permissible, the deviational variables d_{1}^{−} and d_{1}^{+} are indicated by arrows in Fig. 3.1. Similarly, d_{2}^{−} and d_{3}^{−} are also denoted by arrows in the diagram. The feasible region is shown shaded in Fig. 3.1.
FIGURE 3.1
The first goal is completely achieved at the line EF in the reduced feasible region EBF. Now, the manager would like to move to priority level 2, where the differential weight for product A is two times that of B (assigned in the ratio of the profits). The sales goal of product A is completely achieved at line EB in the feasible region EBF. So, we see that the first two goals are completely achieved at point B in the feasible region EBF. The third goal on overtime operation cannot be achieved without dispensing with the first two goals in the priority order. The optimal solution to the problem is
x_{1} = 240, x_{2} = 300, d_{1}^{−} = 0, d_{2}^{−} = 0, d_{3}^{−} = 0, d_{1}^{+} = 140
Example 2
A small furnishing company manufactures tables and chairs. Each table requires 4 man-hours of labour while each chair requires 5 man-hours of labour. Only 40 man-hours are available each week, and the owner of the company would neither hire additional labour nor utilise overtime. Each table and each chair fetches a profit of 100, and the owner would like to reach a profit goal of 2,000 per week. Also, he would like to supply 10 chairs, if possible, per week to a sister concern. Formulate the linear goal programming problem and solve it.
Solution: The goals of the company and their assigned priorities are
Priority | Goal |
---|---|
p_{1} | To avoid hiring extra labour or utilising overtime. |
p_{2} | To reach a profit goal of 2000 a week. |
p_{3} | To supply 10 chairs a week to the sister concern. |
Let x_{1} and x_{2} denote the number of tables and chairs to be produced per week. If d_{i}^{−} and d_{i}^{+} represent the amounts by which the i^{th} goal is underachieved and overachieved, respectively, then the goals are
g_{1}: 4x_{1} + 5x_{2} + d_{1}^{−} − d_{1}^{+} = 40 (man-hour goal)
g_{2}: 100x_{1} + 100x_{2} + d_{2}^{−} − d_{2}^{+} = 2,000 (profit goal)
g_{3}: x_{2} + d_{3}^{−} − d_{3}^{+} = 10 (supply of chairs goal)
To achieve the first goal completely, the overtime d_{1}^{+} must be minimised. Similarly, the second goal will be fulfilled when the underachievement in profit, d_{2}^{−}, is minimised, and the supply of chairs goal will be attained when the underachievement d_{3}^{−} is minimised. Therefore, the objective function of the problem may be written as
Minimise z = {p_{1}d_{1}^{+}, p_{2}d_{2}^{−}, p_{3}d_{3}^{−}}
subject to the goal constraints g_{1}, g_{2} and g_{3}, with x_{1}, x_{2}, d_{i}^{−}, d_{i}^{+} ≥ 0.
The three goal constraints are plotted as straight lines by setting each d_{i}^{−} = d_{i}^{+} = 0. Arrows (← or →) are then associated with each line to represent underachievement or overachievement of the goal.
FIGURE 3.2
Man-hour goal is the most important and is the first to be considered. Since this goal is achieved when d_{1}^{+} is minimised, we may set d_{1}^{+} = 0 as its minimum value. The region satisfying x_{1}, x_{2} ≥ 0 and 4x_{1} + 5x_{2} ≤ 40 is shown shaded in Fig. 3.2.
Any point within this region satisfies the man-hour goal and minimises d_{1}^{+} at priority level one.
The next important goal is the profit goal (p_{2}) and to attain it we must minimise d_{2}^{−}. Clearly, d_{2}^{−} cannot be driven down to zero, since this would degrade our previous solution, a higher priority goal. The minimum value of d_{2}^{−} without affecting the previously attained higher goal occurs at the point (x_{1} = 10, x_{2} = 0), which is marked in Fig. 3.2.
The last important goal (in the order of priority) is g_{3}. To satisfy this we must minimise d_{3}^{−}. However, any minimisation of d_{3}^{−} would require a movement from the solution previously determined and would degrade at least one higher priority goal already attained.
Therefore, the optimal solution to the problem is x_{1} = 10, x_{2} = 0 and minimum of z = (0, 1000, 10)
The optimum values of z simply indicate that goal g_{1} is completely attained, while g_{2} and g_{3} are not fully attained.
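The graphical solution can be double-checked by enumerating integer plans and comparing the deviation vectors (P1, P2, P3) in priority order, using the goal constraints exactly as stated above. This sketch is illustrative, not part of the original solution procedure.

```python
# Check of the graphical solution: x1 = 10, x2 = 0 with z = (0, 1000, 10).
def z(x1, x2):
    d1p = max(0, 4 * x1 + 5 * x2 - 40)           # overtime man-hours (goal g1)
    d2m = max(0, 2000 - (100 * x1 + 100 * x2))   # unmet profit (goal g2)
    d3m = max(0, 10 - x2)                        # unmet supply goal (goal g3)
    return (d1p, d2m, d3m)

# The 40 man-hour limit caps the useful grid at x1 <= 10, x2 <= 8.
best = min((z(x1, x2), x1, x2) for x1 in range(11) for x2 in range(9))
print(best)  # ((0, 1000, 10), 10, 0)
```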
FIGURE 3.3
3.7 INTEGER PROGRAMMING
3.7.1 Introduction
So far, we have considered decision variables that are allowed to take fractional (non-integer) values. In many practical problems, however, the decision variables make sense only if they have integer values. For example, it is often necessary to assign men, machines, and vehicles to activities in integer quantities. This restriction is a difficult one to handle mathematically. However, some progress has been made in developing solution procedures for linear programming problems subject to the additional restriction that the decision variables must have integer values.
A linear programming problem in which all or some of the decision variables are restricted to assume non-negative integer values is called an integer programming problem [or IPP or integer linear programming].
Importance of Integer Programming The integer programming problem is of particular importance in business and industry where, quite often, fractional solutions are unrealistic because the units are not divisible. For example, it is quite possible to use 2.34 kg of raw material, or 4.56 machine hours and so on, but it is meaningless to produce 5.67 chairs or 7.89 tables, or to open 4.23 shops and so on. Hence, new procedures have been developed for the case of LPPs subject to the additional restriction that the decision variables must have integer values.
Applications of Integer Programming Integer programming problems occur quite frequently in business and industry. Assignment problems, for instance, are integer programming problems, since their decision variables are either zero or one; transportation problems also have integer optimal solutions when the supplies and demands are integers.
Many other decision problems necessitate integer programming models. One category of such problems deals with sequencing, scheduling and routing decisions. An example is the travelling salesman problem, which aims at finding a least-distance route for a salesman who must visit each of n cities, starting and ending his journey at the home city. Another category is capital budgeting, where large expenditures of capital and resources are required; this is one of the main reasons why integer programming is so important for managerial decisions.
An optimal solution to a capital budgeting problem may yield considerably more profit to a firm than an approximate or guessed solution. For example, a fertilizer manufacturing firm with 15 plants may be able to substantially increase profits by cutting down to 10 plants or fewer, provided this reduction is planned optimally.
Rounding off Techniques and their Drawbacks An IPP can be solved by the usual simplex method by ignoring the integer restriction and then rounding off the non-integer values to integers in the optimal solution obtained by the simplex method. But there is no guarantee that the integer-valued solution thus obtained will satisfy all the constraints, and as such the new solution may not be feasible.
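To see the drawback concretely, consider a small hypothetical IPP (not from the text): maximise z = x1 + 5·x2 subject to x1 + 10·x2 ≤ 20 and x1 ≤ 2. Rounding the LP optimum up is infeasible, while rounding down is far from the true integer optimum.

```python
# Hypothetical example of why rounding the LP optimum fails.
# maximise z = x1 + 5*x2  s.t.  x1 + 10*x2 <= 20,  x1 <= 2,  x1, x2 >= 0.
# The LP optimum is x1 = 2, x2 = 1.8 (z = 11).
def feasible(x1, x2):
    return x1 + 10 * x2 <= 20 and 0 <= x1 <= 2 and x2 >= 0

assert not feasible(2, 2)   # rounding x2 up violates x1 + 10*x2 <= 20
assert feasible(2, 1)       # rounding down is feasible, but z = 7 only

# Brute-force the true integer optimum over a small grid.
best = max((x1 + 5 * x2, x1, x2)
           for x1 in range(3) for x2 in range(3)
           if feasible(x1, x2))
print(best)  # (10, 0, 2): the integer optimum is far from the rounded point
```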
Definition of Integer Programming Integer programming problem (IPP): The linear programming problem maximise z = CX, subject to AX ≤ b, X ≥ 0, where some x_{j} ∈ X are restricted to integers and A is an m × n real matrix, is called an integer programming problem.
All integer programming problem (All I.P.P. or pure I.P.P.) An integer programming problem is said to be an all integer programming problem if all x_{j} ∈ X are integers.
Mixed integer programming problem (Mixed I.P.P.) An integer programming problem is said to be mixed integer programming problem if not all x_{j} ∈ X are integers.
Zero-one programming problem or standard discrete problem: If all the variables in the optimum solution are allowed to take values either 0 or 1 as in ‘do’ or ‘not to do’ type decisions, then the problem is called the zero-one programming problem.
3.7.2 Formulation of Integer Programming Problem
Integer programming problem formulation does not pose a problem when the variables are inherently discrete in nature. Problems arise in situations where direct LP or IP formulations are not possible. First, we see a direct IP formulation of a problem situation.
Example 1
Suppose we have landed on a treasure island full of three types of valuable stones: emerald (E), ruby (R) and topaz (T). Each piece of E, R and T weighs 3, 2 and 2 kg, and is known to have a value of 4, 3 and 1 crore, respectively. We have a bag that can carry a maximum of 11 kg. The stones cannot be broken into smaller pieces. Formulate this situation as an IPP so as to maximise the total value carried.
Decision variables We have to decide how many pieces of each type of stone are to be carried so as to be within the capacity of the bag.
Let x_{1} denote the number of emeralds to be carried and let x_{2} be the number of rubies to be carried and let x_{3} be the number of topaz to be carried.
Objective function
The total value carried is 4x_{1} + 3x_{2} + x_{3} and so the objective function is maximise z = 4x_{1} + 3x_{2} + x_{3}.
Constraints: The total weight to be carried is 3x_{1} + 2x_{2} + 2x_{3}. Since the capacity of the bag is limited to 11 kg we have
3x_{1} + 2x_{2} + 2x_{3} ≤ 11
Also, x_{1}, x_{2}, x_{3} denote numbers of stones, which cannot be negative, and so x_{1}, x_{2}, x_{3} ≥ 0.
Hence, the complete IPP is
Maximise z = 4x_{1} + 3x_{2} + x_{3}
subject to
3x_{1} + 2x_{2} + 2x_{3} ≤ 11
x_{1}, x_{2}, x_{3} ≥ 0 and are integers.
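Because the problem is tiny, the formulation can be sanity-checked by complete enumeration. The sketch below is illustrative only; the loop bounds come from the 11 kg capacity.

```python
# Brute-force check of the treasure-island IPP.
# maximise 4*x1 + 3*x2 + x3  s.t.  3*x1 + 2*x2 + 2*x3 <= 11, integers >= 0.
best = None
for x1 in range(4):            # at most 11 // 3 = 3 emeralds
    for x2 in range(6):        # at most 11 // 2 = 5 rubies
        for x3 in range(6):    # at most 11 // 2 = 5 topaz
            if 3 * x1 + 2 * x2 + 2 * x3 <= 11:
                z = 4 * x1 + 3 * x2 + x3
                if best is None or z > best[0]:
                    best = (z, x1, x2, x3)
print(best)  # (16, 1, 4, 0): one emerald and four rubies, weight exactly 11 kg
```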
Example 2 Capital Budgeting Problem
Consider a company which would like to take up three projects. However, because of budget limitations, not all the projects can be selected. It is known that project j has a present value of c_{j} and would require an investment of a_{jt} in period t. The capital available in period t is b_{t}. The problem is to choose projects so as to maximise present value without violating the budget constraints. Formulate the problem as an IP.
Solution: For choice situations of the "yes-no" or "go-no-go" type, where the objective is to determine whether or not a particular activity is to be undertaken, binary integer variables that take the value 0 or 1 can be used as decision variables. Here, for each project, we want to find out whether it should be taken up or not. Define three decision variables x_{j} (j = 1, 2, 3), one for each project, with
x_{j} = 1, if project j is selected
= 0, otherwise
Then, the objective function and the constraints can be expressed in terms of the decision variables, to give the required formulation.
Maximise z = Σ_{j} c_{j}x_{j}
subject to
Σ_{j} a_{jt}x_{j} ≤ b_{t}, for all periods t
x_{j} = 1 or 0, for all projects j.
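With only three projects, the 0-1 formulation can be checked by enumerating all 2³ selections. The numbers below are made-up placeholders, since the text leaves c_{j}, a_{jt} and b_{t} symbolic.

```python
# Enumerating the 0-1 selections for a three-project capital budget.
from itertools import product

c = [60, 40, 30]                  # hypothetical present values of projects 1-3
a = [[30, 20, 10],                # a[j][t]: outlay of project j in period t
     [20, 10, 10],
     [10, 10, 20]]
b = [45, 30, 35]                  # hypothetical capital available per period

best = max(
    (sum(cj * xj for cj, xj in zip(c, x)), x)
    for x in product((0, 1), repeat=3)
    if all(sum(a[j][t] * x[j] for j in range(3)) <= b[t] for t in range(3))
)
print(best)  # (90, (1, 0, 1)): take projects 1 and 3
```

Real capital budgeting instances are solved with branch and bound rather than enumeration, since the number of selections grows as 2^n.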
Solution Methodologies of Integer Programming
Integer programming methods can be categorized as
- Cutting plane methods
- Search methods
Cutting plane methods: A systematic procedure for solving pure integer programming problems was first developed by R.E. Gomory in 1958. Later on, he extended the procedure to solve mixed IPPs. The method, known as the cutting plane algorithm, consists in first solving the IPP as an ordinary LPP by ignoring the integrality restriction, and then introducing additional constraints one after the other to cut (eliminate) certain parts of the solution space until an integral solution is obtained.
Search methods: These are enumeration methods in which feasible integer points are examined. The most widely used search method is the branch and bound technique. It also starts with the continuous optimum, but systematically partitions the solution space into subproblems, eliminating parts that contain no feasible integer solution. It was originally developed by A.H. Land and A.G. Doig.
3.7.3 Gomory’s all IPP Method (or) Cutting Plane Method or Gomory’s Fractional Cut Method
An optimal solution to an IPP is first obtained by the simplex method, ignoring the restriction of integral values. If in the optimum solution all the variables have integer values, the current solution is the desired optimum integer solution. Otherwise, we add a new constraint to the problem such that the new set of feasible solutions includes all the original feasible integer solutions but does not include the optimum non-integer solution initially found. We then solve the revised problem using the simplex method and see if we can get an integer solution. If not, we add another fractional cut and repeat the process until an integer solution is found. Since we never eliminate any feasible integer solution from consideration when we add fractional cuts, the integer solution ultimately found must be optimum.
Algorithm:
Step 1: Convert the problem into maximisation if it is in the minimisation form. All the coefficients and constants should be integers.
Step 2: Ignore the integrality condition and find the optimum solution of the resulting LPP by the simplex method.
Step 3: Test the integerality of the optimum solution:
- If all X_{Bi} ≥ 0 and are integers, an optimum integer solution is obtained.
- If all X_{Bi} ≥ 0 and at least one X_{Bi} is not an integer, then go to the next step.
Step 4: Rewrite each X_{Bi} as X_{Bi} = [X_{Bi}] + f_{i}, where [X_{Bi}] is the integral part of X_{Bi} and f_{i} is the positive fractional part of X_{Bi}, 0 ≤ f_{i} < 1. Choose the largest fraction f_{i}, that is, choose Max {f_{i}}. In case of a tie, select arbitrarily. Let Max {f_{i}} = f_{k}, corresponding to X_{Bk} (the k^{th} row is then called the source row).
Step 5: Express each of the negative fractions if any, in the k^{th} row (source row) of the optimum simplex Table as the sum of a negative integer and a non-negative fraction.
Step 6: Find the fractional cut constraint (Gomorian constraint or secondary constraint).
Writing the source row as X_{Bk} = [X_{Bk}] + f_{k} with coefficients a_{kj} = [a_{kj}] + f_{kj}, the cut is
Σ_{j} f_{kj}x_{j} ≥ f_{k}
that is,
−Σ_{j} f_{kj}x_{j} ≤ −f_{k}
or,
−Σ_{j} f_{kj}x_{j} + s_{1} = −f_{k}
where s_{1} is the Gomorian slack.
Step 7: Add the fractional cut constraint obtained in step 6 at the bottom of the optimum simplex table obtained in step 2. Find the new feasible optimum solution using dual simplex method.
Step 8: Go to step 3 and repeat the procedure until an optimum integer solution is obtained.
Remark: In the cutting plane method, the fractional cut constraints cut off the useless area of the feasible region in the graphical solution of the problem, that is, the area which contains no integer-valued feasible solution. Thus, these Gomorian constraints eliminate all the non-integral solutions without losing any integer-valued solution.
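The construction in Steps 4–6 can be sketched as a small helper that extracts fractional parts from a source row; the function name and interface here are illustrative assumptions, not part of the algorithm's standard statement.

```python
# Sketch of Steps 4-6: derive the fractional cut from a source row
# x_Bk + sum(a_kj * x_j) = b  (non-basic coefficients a_kj, RHS value b).
# The cut is  sum(frac(a_kj) * x_j) >= frac(b).
from math import floor

def gomory_cut(coeffs, rhs):
    frac = lambda v: v - floor(v)    # non-negative fractional part, 0 <= f < 1
    return [frac(a) for a in coeffs], frac(rhs)

# Source row of the first example below: x1 + (2/3) x2 = 5/3.
lhs, rhs = gomory_cut([2/3], 5/3)
print(lhs, rhs)   # about [0.667] and 0.667, i.e. the cut (2/3) x2 >= 2/3
```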
Example 1
Solve the following problem
Maximise z = 6x_{1} + 2x_{2}
subject to
3x_{1} + 2x_{2} = 5
x_{1}, x_{2} ≥ 0 and are integers.
Solution: (By Gomory’s cutting plane method)
The given LPP can be written as
Maximise z = 6x_{1} + 2x_{2}
subject to
x_{1} + 2/3 x_{2} = 5/3
x_{1}, x_{2} ≥ 0 and are integers.
By taking x_{1} as the basic variable the initial basic feasible solution is x_{1} = 5/3.
Initial table:
TABLE 3.41
Since all z_{j} − c_{j} ≥ 0, the current solution is optimum. The optimum solution is
x_{1} = 5/3, x_{2} = 0 and maximum of z = 10.
But the solution obtained is not an integer solution.
Construction of Gomory’s Constraint:
Now, the constraint equation (x_{1} − equation) can be written as 1 + 2/3 = (1 + 0)x_{1} +(0 + 2/3)x_{2} and so the Gomory’s constraint is
0x_{1} + 2/3x_{2} ≥ 2/3
−2/3x_{2} ≤ −2/3
or
−2/3 x_{2} + s_{1} = −2/3 where s_{1} is a Gomory’s slack variable.
We obtain s_{1} = −2/3. Since the solution is infeasible, apply the dual simplex method. Then, the next Table is
TABLE 3.42
Drop s_{1} from the basis and introduce x_{2}. Then, the next iteration is
TABLE 3.43
Since all X_{Bi} ≥ 0 and all z_{j} − c_{j} ≥ 0, the current solution is optimal. The solution is also an integer solution.
The optimum integer solution is
x_{1} = 1, x_{2} = 1 with maximum of z = 8
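Since the feasible set is tiny, the optimality of this integer solution is easy to confirm by enumeration (an illustrative check, not part of the cutting plane procedure):

```python
# Enumerate the non-negative integer solutions of 3*x1 + 2*x2 = 5
# and compare objective values z = 6*x1 + 2*x2.
sols = [(x1, x2) for x1 in range(2) for x2 in range(3) if 3 * x1 + 2 * x2 == 5]
best = max(sols, key=lambda p: 6 * p[0] + 2 * p[1])
print(sols, best, 6 * best[0] + 2 * best[1])  # [(1, 1)] (1, 1) 8
```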
Example 2
Solve the integer programming problem:
Maximise z = 7x_{1} + 9x_{2}
subject to
−x_{1} + 3x_{2} ≤ 6
7x_{1} + x_{2} ≤ 35
x_{1}, x_{2} ≥ 0 and are integers.
Solution: Ignore the integer restriction and solve the problem by the regular simplex method. Then, the optimal simplex table is
TABLE 3.44
Now
7/2 = 3 + 1/2(f_{1} = 1/2)
9/2 = 4 + 1/2(f_{2} = 1/2)
and maximum of {1/2, 1/2} = 1/2. Hence, arbitrarily choose the first row (x_{2} row) for constructing the Gomory's constraint.
Now, x_{2} equation is
3 + 1/2 = (0 + 0)x_{1} + (1 + 0)x_{2} + (0 + 7/22)s_{1} + (0 + 1/22)s_{2}.
Hence, the Gomory’s constraint is
7/22 s_{1} + 1/22 s_{2} ≥ 1/2
or
−7/22 s_{1} − 1/22 s_{2} ≤ −1/2
that is,
−7/22 s_{1} − 1/22 s_{2} + s_{3} = −1/2
where, s_{3} is the Gomory’s slack variable. Hence, s_{3} = −1/2. As the solution is infeasible apply dual simplex method after including the Gomory’s constraint equation in the above Table.
TABLE 3.45
Since maximum {(28/11)/(−7/22), (15/11)/(−1/22)} = max {−8, −30} = −8 corresponds to the column of s_{1}:
Introduce s_{1} and drop s_{3}.
Since all X_{Bi} ≥ 0 and all z_{j} − c_{j} ≥ 0 the current solution is optimum and the optimum non-integer solution is
x_{1} = 32/7, x_{2} = 3 with maximum of z = 59.
Since
32/7 = 4 + 4/7
and,
11/7 = 1 + 4/7
and maximum {4/7, 4/7} = 4/7.
Arbitrarily choose x_{1} equation for constructing the next Gomory’s constraint.
4 + 4/7 = (1 + 0)x_{1} + (0 + 0)x_{2} + (0 + 0)s_{1} + (0 + 1/7)s_{2} + (0 + 6/7)s_{3}
1/7 s_{2} + 6/7 s_{3} ≥ 4/7
i.e.
−1/7 s_{2} − 6/7 s_{3} ≤ −4/7
i.e.
−1/7 s_{2} − 6/7 s_{3} + s_{4} = −4/7, where s_{4} is a Gomory's slack variable. The next iteration is
TABLE 3.47
Since maximum {1/(−1/7), 8/(−6/7)} = max {−7, −28/3} = −7 corresponds to s_{2}, drop s_{4} and introduce s_{2}.