
3

Advanced Topics in Linear Programming

The dual of a linear programming problem finds a place in this chapter. The economic interpretation of dual variables, which can be used by management for planning its resources, is also explained. If an LPP involving a large number of variables and constraints is solved by the simplex method, it requires a large amount of storage space and time on a computer. Some computational techniques have been developed which require much less computer storage and time than the simplex method. Two important and efficient computational techniques are the revised simplex method (or simplex method with multipliers) and the bounded variables method. The revised simplex method is dealt with in this chapter. The sensitivity analysis of the problem, to study the effect of changes in various resource levels, is also discussed. This chapter also describes some integer programming formulation techniques and highlights their application in managerial problems where linear programming techniques fail. Further, two methods, namely, the cutting plane method and the branch and bound method, are discussed to solve integer programming problems. Some advantages and limitations of LPP are also discussed.

3.1 DUALITY THEORY

Every linear programming problem is associated with another unique linear programming problem. The original (given) problem is then called the primal problem while the other is called its dual problem. The relationship between the solutions of a primal problem and its dual problem proves to be extremely useful in a variety of ways. The solution of one of the problems readily provides the solution of the other. This fact is important because situations may arise where it is easier to solve the dual than to solve the primal.

3.1.1 The Dual Problem

There are two important forms of primal-dual pairs, namely, symmetric form and unsymmetric form.

Symmetric form

Let the primal problem be

 

Maximise z = c1x1 + c2x2 + … + cnxn

subject to

a11x1 + a12x2 + … + a1nxn ≤ b1
a21x1 + a22x2 + … + a2nxn ≤ b2
…
am1x1 + am2x2 + … + amnxn ≤ bm
and x1, x2, …, xn ≥ 0    (3.1.1)

In matrix notation, equation (3.1.1) can be written as

 

Maximise z = CX

subject to AX ≤ b

and,

X ≥ 0

where C = (c1, c2, …, cn) is the row vector of cost coefficients, X = (x1, x2, …, xn)T, b = (b1, b2, …, bm)T is the requirement vector, XT, C ∈ Rn, bT ∈ Rm, and A = [aij] is the m × n coefficient matrix.
Remark: It is not necessary that the primal is to be maximised. It can be minimised too.

Steps to be followed to convert a primal to a dual.

Step 1: The maximisation problem in the primal becomes the minimisation problem in the dual and vice versa.

Step 2: The maximisation problem has less than or equal to (≤) type of constraints while the minimisation problem has greater than or equal to (≥) type of constraints.

Step 3: If the primal contains m constraints and n variables, then the dual will contain n constraints and m variables. That is, the transpose of the body matrix of the primal problem gives the body matrix of the dual and vice versa.

Step 4: The constants c1, …, cn in the objective function of the primal appear in the right-hand sides of the constraints of the dual.

Step 5: The constants b1, …, bm in the constraints of the primal appear in the objective function of the dual.

Step 6: Both the primal and the dual variables are non-negative.
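Because these six steps are purely mechanical, the conversion is easy to automate. The following is a minimal sketch in Python with NumPy (the language, library and function name are assumptions of this illustration, not anything prescribed by the text); it returns the data of the dual of a symmetric-form maximisation primal simply by transposing the body matrix and exchanging the roles of C and b.

import numpy as np

def dual_of_symmetric_primal(C, A, b):
    # Maximise C x s.t. A x <= b, x >= 0   -->
    # Minimise b^T y s.t. A^T y >= C^T, y >= 0
    A = np.asarray(A, dtype=float)
    # dual objective coefficients, dual body matrix, dual right-hand sides
    return np.asarray(b, dtype=float), A.T, np.asarray(C, dtype=float)

# A 2-constraint, 3-variable primal yields a 3-constraint, 2-variable dual.
w, body, rhs = dual_of_symmetric_primal(C=[4, 6, 2], A=[[1, 1, 1], [1, 4, 7]], b=[3, 9])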

The relationship between the primal and the dual problems can best be seen by writing the two problems side by side.

The dual of equation (3.1.1) is

 

Minimise w = b1y1 + b2y2 + … + bmym

subject to

a11y1 + a21y2 + … + am1ym ≥ c1
a12y1 + a22y2 + … + am2ym ≥ c2
…
a1ny1 + a2ny2 + … + amnym ≥ cn
and y1, y2, …, ym ≥ 0    (3.1.2)

In matrix notation, equation (3.1.2) can be written as

 

Minimise w = bTY

subject to

ATY ≥ CT   and   Y ≥ 0

Equations (3.1.1) and (3.1.2) are called symmetric primal-dual form.

Unsymmetric duals

For primal problems in standard matrix form duals may be defined as follows:

Primal                                Dual
Minimise z = CTX                      Maximise w = bTY
subject to AX = b                     subject to ATY ≤ C
and X ≥ 0                             and Y unrestricted in sign
Maximise z = CTX                      Minimise w = bTY
subject to AX = b                     subject to ATY ≥ C
and X ≥ 0                             and Y unrestricted in sign

Example 1

Determine the dual of the problem

 

Minimise z = 5x1 + 2x2 + x3

subject to

2x1 + 3x2 + x3 ≥ 20
6x1 + 8x2 + 5x3 ≥ 30
7x1 + x2 + 3x3 ≥ 40
x1 + 2x2 + 4x3 ≥ 50
and x1, x2, x3 ≥ 0.

Solution: The given (primal) problem is in symmetric form. Since there are 4 constraints in the primal, there will be 4 variables in the dual. Let them be y1, y2, y3, y4.

The dual problem is

 

Maximise w = 20y1 + 30y2 + 40y3 + 50y4

subject to

2y1 + 6y2 + 7y3 + y4 ≤ 5
3y1 + 8y2 + y3 + 2y4 ≤ 2
y1 + 5y2 + 3y3 + 4y4 ≤ 1
y1, y2, y3, y4 ≥ 0.

Note: We observe that the dual problem has three constraints since the primal problem has three variables.
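The primal-dual pair of Example 1 can also be checked numerically with an off-the-shelf solver. The sketch below uses scipy.optimize.linprog (an assumption of this illustration; any LP solver would do) to solve the primal and the dual separately and confirm that their optimal objective values coincide, as the duality theorems of the next subsection guarantee.

import numpy as np
from scipy.optimize import linprog

A = np.array([[2, 3, 1], [6, 8, 5], [7, 1, 3], [1, 2, 4]], dtype=float)
b = np.array([20, 30, 40, 50], dtype=float)
c = np.array([5, 2, 1], dtype=float)

# Primal: minimise c x subject to A x >= b; linprog expects <=, so negate.
primal = linprog(c, A_ub=-A, b_ub=-b, method="highs")

# Dual: maximise b y subject to A^T y <= c, y >= 0; linprog minimises, so negate b.
dual = linprog(-b, A_ub=A.T, b_ub=c, method="highs")

print(primal.fun, -dual.fun)   # the two optimal values agree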

Example 2

Write the dual of the following LPP

 

Maximise z = 2x1 + 5x2 + 3x3

subject to

2x1 + 4x2 − 3x3 ≤ 8
−2x1 − 2x2 + 3x3 ≥ −7
x1 + 3x2 − 5x3 ≥ −2
4x1 + x2 + 3x3 ≤ 4
x1, x2, x3 ≥ 0.

Solution: First, we write the primal (given) problem in symmetric form.

 

Maximise z = 2x1 + 5x2 + 3x3

subject to

2x1 + 4x2 − 3x3 ≤ 8
2x1 + 2x2 − 3x3 ≤ 7   (multiplying both sides by −1)
−x1 − 3x2 + 5x3 ≤ 2   (multiplying both sides by −1)
4x1 + x2 + 3x3 ≤ 4
x1, x2, x3 ≥ 0.

Then, the dual problem is

 

Minimise w = 8y1 + 7y2 + 2y3 + 4y4

subject to

2y1 + 2y2 − y3 + 4y4 ≥ 2
4y1 + 2y2 − 3y3 + y4 ≥ 5
−3y1 − 3y2 + 5y3 + 3y4 ≥ 3
y1, y2, y3, y4 ≥ 0.

Unrestricted variables

We recall that an unrestricted variable is a variable that can take a positive, negative or zero value.

Example 3

Write the dual of the following LPP

 

Maximise z = 4x1 + 2x2

subject to

x1 − 2x2 ≥ 2
x1 + 2x2 = 8
x1x2 ≤ 10
x1 ≥ 0, x2 is unrestricted in sign.

Solution: The primal in symmetric form is

 

Maximise z = 4x1 + 2x2

subject to

−x1 + 2x2 ≤ −2
x1 + 2x2 ≤ 8
−x1 − 2x2 ≤ −8
x1 − x2 ≤ 10
x1 ≥ 0, x2 unrestricted in sign.    (3.1.3)

Since x2 is unrestricted, put x2 = x2′ − x2″ where x2′, x2″ ≥ 0. Then, (3.1.3) becomes

 

Maximise z = 4x1 + 2x2′ − 2x2″

subject to

−x1 + 2x2′ − 2x2″ ≤ −2
x1 + 2x2′ − 2x2″ ≤ 8
−x1 − 2x2′ + 2x2″ ≤ −8
x1 − x2′ + x2″ ≤ 10
x1, x2′, x2″ ≥ 0.

Then, the dual problem is

 

Minimise w = − 2y1 + 8y2 − 8y3 + 10y4

subject to

−y1 + y2 − y3 + y4 ≥ 4
2y1 + 2y2 − 2y3 − y4 ≥ 2
−2y1 − 2y2 + 2y3 + y4 ≥ −2
y1, y2, y3, y4 ≥ 0.

Put y2 − y3 = y. Then, the LPP becomes

 

Minimise w = − 2y1 + 8y + 10y4

subject to

−y1 + y + y4 ≥ 4
2y1 + 2y − y4 = 2

y1, y4 ≥ 0 and y is unrestricted in sign. This is the dual of the given problem.

Remark 1: If the kth constraint of the primal problem is an equality, then the corresponding dual variable yk is unrestricted in sign and vice versa.

Remark 2: If any variable of the primal problem is unrestricted in sign, the corresponding constraint in the dual problem will be equality and vice versa.

3.1.2 Duality Theorems

Theorem 3.1

The dual of the dual is the primal.

Proof: Let the primal problem be

Maximise z = CX    (3.1.4)

subject to

AX ≤ b

and

X ≥ 0

Then, the dual of (3.1.4) is

Minimise w = bTY    (3.1.5)

subject to

ATY ≥ CT

and

Y ≥ 0

Writing (3.1.5) in symmetric maximisation form, Maximise (−w) = −bTY subject to −ATY ≤ −CT, Y ≥ 0, the dual of (3.1.5) is

Minimise (−z) = −(CT)TU    (3.1.6)

subject to

−(AT)TU ≥ −b

and

U ≥ 0

By observing (CT)T = C and (AT)T = A, (3.1.6) becomes

Maximise z = CU    (3.1.7)

subject to

AU ≤ b

and

U ≥ 0

(3.1.4) and (3.1.7) represent the same problem. Hence, the dual of the dual is the primal.

Theorem 3.2

Let X0 be a feasible solution to the primal problem.

Maximise z = CX subject to AX ≤ b, X ≥ 0, where XT, C ∈ Rn, bT ∈ Rm and A is an m × n real matrix. If Y0 is a feasible solution to the dual of the primal, namely, Minimise w = bTY subject to ATY ≥ CT, Y ≥ 0, YT ∈ Rm, then CX0 ≤ bTY0.

Proof: Since X0 and Y0 are the feasible solutions to the primal and its dual, respectively, we have

AX0 ≤ b, X0 ≥ 0  and  ATY0 ≥ CT, Y0 ≥ 0.

Hence,

Y0T(AX0) ≤ Y0Tb = bTY0  and  Y0T(AX0) = (ATY0)TX0 ≥ (CT)TX0 = CX0

or,

CX0 ≤ bTY0

Theorem 3.3

Let X0 be a feasible solution to the primal problem, Maximise z = CX subject to AX ≤ b, X ≥ 0, and Y0 be a feasible solution to its dual, Minimise w = bTY subject to ATY ≥ CT, Y ≥ 0, where C, XT ∈ Rn, YT, bT ∈ Rm and A is an m × n real matrix. If CX0 = bTY0, then both X0 and Y0 are optimum solutions to the primal and dual, respectively.

Proof: Let X̄ be any other feasible solution to the primal problem. Then, Theorem 3.2 gives

 

CX̄ ≤ bTY0

Thus,

CX̄ ≤ bTY0 = CX0

and hence X0 is an optimum solution to the primal problem, because the primal is of maximisation type.

Similarly, if Ȳ is any other feasible solution to the dual problem, then bTY0 = CX0 ≤ bTȲ and thus Y0 is an optimum solution to the dual problem, because the dual is a minimisation problem.

Theorem 3.4 Basic Duality Theorem

Let a primal problem be Maximise z = CX subject to AX ≤ b, X ≥ 0, where C, XT ∈ Rn, and the associated dual be Minimise w = bTY subject to ATY ≥ CT, Y ≥ 0, YT, bT ∈ Rm. If X0 (Y0) is an optimum solution to the primal (dual), then there exists a feasible solution Y0 (X0) to the dual (primal) such that CX0 = bTY0.

Proof: The standard primal can be written as Maximise z = CX subject to AX + IXs = b, where Xs is the slack vector and I is the associated m × m identity matrix.

Let X0 = [XB, 0] be an optimum solution to the primal, where XB is the optimum basic feasible solution given by XB = B−1b, B being the optimal basis of A. Then, the optimal primal objective function value is

 

z = CX0 = CBXB where CB is the cost vector associated with XB.

Now, the net evaluations in the optimal simplex table are given by zj − cj = CBB−1aj − cj for the columns aj of A and by CBB−1ej for the columns ej of I.

Since XB is optimal, we must have zj − cj ≥ 0 for all j.

This gives CBB−1aj ≥ cj and CBB−1ej ≥ 0 for all j,

or,

CBB−1A ≥ C and CBB−1 ≥ 0 in matrix form.

Now, if we let (CBB−1)T = Y0, the above become

ATY0 ≥ CT and Y0 ≥ 0, Y0T ∈ Rm. This means that Y0 is a feasible solution to the dual problem. Moreover, the corresponding dual objective function value is bTY0 = CBB−1b = CBXB = CX0. Thus, given an optimal solution X0 to the primal, there exists a feasible solution Y0 to the dual such that CX0 = bTY0.

Similarly, starting with Y0, the existence of X0 can be proved.

Corollary 3.5

If X0 is an optimal solution to the primal, an optimal solution to the dual is given by Y0T = CBB−1, where B is the primal optimal basis and CB is the cost vector of the basic variables.

Theorem 3.6 Fundamental Theorem of Duality

If the primal or the dual has a finite optimum solution, then the other problem also possesses a finite optimum solution and the optimum values of the objective functions of the two problems are equal.

Proof: Consider the primal-dual pair:

Primal: Maximise z = CX, subject to AX ≤ b and X ≥ 0.

Dual: Minimise w = bTY, subject to ATY ≥ CT and Y ≥ 0.

Necessary condition: Let a feasible solution X0(Y0) be an optimum solution to the primal (dual) problem. It then follows from Theorem 3.4 that there exists a feasible solution Y0(X0) to the dual (primal) problem, such that CX0 = bT Y0.

It now follows from Theorem 3.3 that Y0 must be optimal. This proves the necessity of the condition. Sufficiency follows from Theorem 3.3.

Theorem 3.7 Existence Theorem

If either the primal or the dual problem has an unbounded objective function value, then the other problem has no feasible solution.

Proof: Let the given primal problem have an unbounded solution. Then, for any value of the objective function, say +∞, there exists a feasible solution, say X, yielding this value, that is, CX → ∞. If possible, let the dual problem have a feasible solution Y0. Then, from Theorem 3.2, for every feasible solution X of the primal, CX ≤ bTY0. That is, bTY0 → ∞. Now, as b is a constant vector and Y0 has to satisfy the constraint ATY0 ≥ CT, the dual objective function value bTY0 must be finite. This contradicts the result bTY0 → ∞. Hence, the dual problem has no feasible solution.

By a similar argument it can be shown that when the dual has an unbounded solution, the primal has no feasible solution.

Remark: The above result shows the following relation between the solutions of the primal and dual.

Dual problem              Primal problem
                          Feasible solution        No feasible solution
Feasible solution         Optimum                  Dual unbounded
No feasible solution      Primal unbounded         Unbounded or no feasible solution

Theorem 3.8 Complementary Slackness Theorem

Let X0 and Y0 be feasible solutions to the primal, Maximise z = CX, AX ≤ b, X ≥ 0, and its dual, Minimise w = bTY, ATY ≥ CT, Y ≥ 0, respectively. Then, a necessary and sufficient condition for X0 and Y0 to be optimal to their respective problems is that

Y0T(b − AX0) = 0  and  X0T(ATY0 − CT) = 0

Proof: Necessity: Let α = Y0T(b − AX0) and β = X0T(ATY0 − CT). Since X0, Y0 are feasible solutions to the primal and dual, respectively, we have α ≥ 0, β ≥ 0 and α + β = bTY0 − CX0.

Now, if X0, Y0 are optimal, then CX0 = bTY0 so that α + β = 0. But since α ≥ 0 and β ≥ 0, this gives α = 0, β = 0.

Thus, the conditions are necessary.

Sufficiency: Let the given conditions hold for the feasible solutions X0 and Y0. That is, α = 0 and β = 0. Then, α + β = bTY0 − CX0 = 0, so

CX0 = bTY0
∴ X0 and Y0 are optimal

Thus, the conditions are sufficient.
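Both conditions are easy to test numerically once a primal-dual pair of feasible solutions is at hand. A minimal sketch (Python with NumPy, assumed purely for illustration) of the check:

import numpy as np

def complementary_slackness_holds(A, b, c, x0, y0, tol=1e-9):
    # alpha = Y0^T (b - A X0): dual prices times primal slacks
    # beta  = X0^T (A^T Y0 - C^T): primal values times dual surpluses
    A, b, c = (np.asarray(m, dtype=float) for m in (A, b, c))
    x0, y0 = np.asarray(x0, dtype=float), np.asarray(y0, dtype=float)
    alpha = y0 @ (b - A @ x0)
    beta = x0 @ (A.T @ y0 - c)
    return abs(alpha) < tol and abs(beta) < tol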

3.1.3 Duality and Simplex Method

Since any LPP can be solved by using simplex method, the method is applicable to both the primal as well as its dual. The fundamental theorem of duality suggests that an optimum solution to the associated dual can be obtained from that of its primal and vice versa.

Properties of Primal and Dual Optimal Solutions

  1. If a primal (dual) variable corresponds to a slack variable in the dual (primal) problem, its optimum value is read off directly from the optimal dual (primal) simplex table as the net evaluation corresponding to that slack variable.
  2. If a primal (dual) variable corresponds to an artificial variable in the dual (primal) problem, its optimum value is read off directly from the optimal dual (primal) simplex table as the net evaluation corresponding to that artificial variable, after deleting the constant M.
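Modern solvers report the dual solution alongside the primal one, which is exactly the information the two properties above extract from the simplex table. As a hedged sketch: with recent SciPy versions, scipy.optimize.linprog with method="highs" exposes the dual values of the ≤ constraints as res.ineqlin.marginals (up to the sign convention of minimisation). Applied to the product-mix problem of Example 3 below:

from scipy.optimize import linprog

# Maximise 40x + 25y + 50z: negate the coefficients, since linprog minimises.
res = linprog(c=[-40, -25, -50],
              A_ub=[[1, 2, 1], [2, 1, 4], [2, 5, 1]],
              b_ub=[36, 60, 45],
              method="highs")

print(res.x)                   # optimal mix (20, 0, 5)
print(-res.ineqlin.marginals)  # dual values y1, y2, y3 = (0, 10, 10)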

Example 1

Use duality to solve the following LPP

 

Maximise z = 4x1 + 2x2

subject to the constraints

x1 + x2 ≥ 3
x1 + x2 ≤ 2
x1, x2 ≥ 0

Solution: The given problem can be rewritten as

Maximise z = 4x1 + 2x2    (3.1.8)

subject to

−x1 − x2 ≤ −3
x1 + x2 ≤ 2
x1, x2 ≥ 0

The dual of (3.1.8) is

Minimise w = −3y1 + 2y2    (3.1.9)

subject to

−y1 + y2 ≥ 4
−y1 + y2 ≥ 2
y1, y2 ≥ 0

Solution of (3.1.9) by Big M method:
Solution of (3.1.9) by Big M method:

Introduce the surplus variables s1, s2 ≥ 0 and the artificial variables A1, A2 ≥ 0 in the above constraints. Then, (3.1.9) can be written as

 

Minimise w = −3y1 + 2y2 + 0s1 + 0s2 + MA1 + MA2

subject to

−y1 + y2 − s1 + A1 = 4
−y1 + y2 − s2 + A2 = 2
y1, y2, s1, s2, A1, A2 ≥ 0.

An obvious IBFS is A1 = 4, A2 = 2

 

TABLE 3.1

We observe that the column corresponding to y1 is the pivotal column. Since there is no positive element in the pivotal column the dual problem has an unbounded solution. Hence, there exists no finite optimum solution to the primal problem.

Example 2

Use duality to solve the following LPP.

 

Maximise z = 2x1 + x2

subject to the constraints

x1 + 2x2 ≤ 10
x1 + x2 ≤ 6
x1x2 ≤ 2
x1 − 2x2 ≤ 1
x1, x2 ≥ 0.

Solution: The dual of the given problem is

 

Minimise w = 10y1 + 6y2 + 2y3 + y4

subject to the constraints

y1 + y2 + y3 + y4 ≥ 2
2y1 + y2 − y3 − 2y4 ≥ 1
y1, y2, y3, y4 ≥ 0.

Introducing surplus variables s1, s2 ≥ 0 and artificial variables A1, A2 ≥ 0 then the optimal simplex table is

 

TABLE 3.2

Since all zj − cj ≥ 0, the current solution is optimal. The optimum solution of the dual is

 

y1 = 0, y2 = 3/2, y3 = 1/2, y4 = 0 and minimum of w = 10.

To find the optimum solution of the primal:

Corresponding to the variable x1 we have the dual constraint y1 + y2 + y3 + y4 ≥ 2 and the basic variable associated with the constraint is A1. Hence, the optimal value of x1

= M − (net evaluation value of A1)
= M − (M − 4) = 4

Similarly, the optimal value of x2 is

= M − (M − 2) = 2

∴ The optimal solution of the primal is

x1 = 4, x2 = 2 with maximum of z = 10

Example 3

A company makes three products X, Y, Z out of three raw materials A, B and C. The number of units of raw material required to produce one unit of the products X, Y and Z is given in the following Table:

 

TABLE 3.3

The profit per unit on the products X, Y and Z is ₹40, ₹25 and ₹50, respectively. The numbers of units of raw materials available are 36, 60 and 45, respectively.

  1. Determine the product mix that will maximise the total profit.
  2. Through the final simplex table, write the solution to the dual problem and give the economic interpretation.

Solution: Formulation of the LPP

Let the company produce x, y and z units of the products X, Y and Z, respectively. Then, according to the given conditions, the objective function is Maximise z = 40x + 25y + 50z.

Also, from the Table it is clear that the material constraints are

x + 2y + z ≤ 36
2x + y + 4z ≤ 60
2x + 5y + z ≤ 45
and x, y, z ≥ 0    (3.1.10)
Hence, the complete LPP is

 

Maximise z = 40x + 25y + 50z

Subject to (3.1.10).

Solution by simplex method:

Introduce slack variables s1, s2, s3 ≥ 0 in the given constraints. The LPP becomes

 

Maximise z = 40x + 25y + 50z + 0s1 + 0s2 + 0s3

subject to

x + 2y + z + s1 = 36
2x + y + 4z + s2 = 60
2x + 5y + z + s3 = 45
and x, y, z, s1, s2, s3 ≥ 0.

If we solve the problem by regular simplex method the optimum simplex table is

 

TABLE 3.4

Since all zj − cj ≥ 0, the current solution is optimum for the primal.

The optimum solution is

 

x = 20, y = 0, z = 5 and maximum of z = 1050

The dual problem:

The dual of the problem is

 

Minimise w = 36y1 + 60y2 + 45y3

subject to

y1 + 2y2 + 2y3 ≥ 40
2y1 + y2 + 5y3 ≥ 25
y1 + 4y2 + y3 ≥ 50
and y1, y2, y3 ≥ 0    (3.1.11)
Solution of the dual from the optimum simplex table of the primal:

The dual variable y1 corresponds to the constraint x + 2y + z ≤ 36 and the initial slack variable corresponding to this constraint is s1. Hence, the optimum value of y1 = (zj − cj) value of s1 = 0.

Similarly, the optimum value of y2 = (zj − cj) value of s2 = 10

Optimum value of y3 = (zj − cj) value of s3 = 10. Hence, the optimum solution of (3.1.11) is

 

y1 = 0, y2 = 10, y3 = 10 and minimum of w = 1050

Economic interpretation: Suppose the manager of the company wishes to sell the three raw materials A, B and C instead of using them for production of the products X, Y and Z. Suppose the selling prices are y1, y2 and y3 per unit of raw materials A, B and C, respectively. The cost to the purchaser of all three raw materials would be 36y1 + 60y2 + 45y3. Of course, the purchaser would like the selling prices to be set in such a way as to minimise the total cost paid. Hence, the objective function becomes

 

Minimise w = 36y1 + 60y2 + 45y3

The final table of the primal problem indicates that the marginal value of raw material A is zero, of B is ₹10 per unit, and of C is ₹10 per unit. Thus, if the manager sells the raw materials A, B and C at prices ₹0, ₹10 and ₹10 per unit, respectively, he will get the same contribution of ₹1050 which he is going to fetch in case he utilises these resources for production of the three products X, Y and Z.

3.2 DUAL SIMPLEX METHOD

3.2.1 Introduction

In the simplex method we have already seen that every basic solution with all zj − cj ≥ 0 need not be feasible (since zj − cj = CBB−1aj − cj is independent of the right-hand side vector b for all j), but any basic feasible solution with all zj − cj ≥ 0 will certainly be an optimal solution. Such typical problems, for which it is possible to find an infeasible but better than optimal (all zj − cj ≥ 0) starting basic solution, can be solved more easily by the dual simplex method. Such a situation is recognised by first expressing the constraints in the (≤) form and the objective function in the maximisation form. After adding the slack variables and putting the problem in Table form, if any of the right-hand side elements is negative and the optimality condition is satisfied, then the problem can be solved by the dual simplex method. It is important to note that with this arrangement, a negative element on the right-hand side signifies that the corresponding slack variable is negative.

3.2.2 Difference between Regular Simplex Method and Dual Simplex Method

The regular simplex method starts with a basic feasible but a non-optimal solution and works towards optimality, whereas the dual simplex method starts with a basic infeasible but an optimal solution and works towards feasibility. Also, in regular simplex method we first determine the entering variable and then the leaving variable while in the case of dual simplex method we first determine the leaving variable and then the entering variable.

3.2.3 Dual Simplex Algorithm

Step 1: Convert the problem into maximisation type if it is of minimisation type.

Step 2: Convert ≥ type inequalities (if any) into ≤ type inequalities by multiplying both sides by −1.

Step 3: Express the LPP in standard form by introducing slack variables. Obtain the initial basic solution (not feasible) and express this information in the simplex table.

Step 4: (Optimality condition). Test the nature of zj − cj.

Case (i): If all (zj − cj) ≥ 0 and all XBi ≥ 0, then the current solution is an optimum basic feasible solution.

Case (ii): If all (zj − cj) ≥ 0 and at least one XBi < 0, then the current solution is not an optimum basic feasible solution. Go to the next step.

Case (iii): If any (zjcj) < 0 then this method fails.

Step 5: (Feasibility conditions).

  1. (Leaving variable): The leaving variable is the basic variable corresponding to the most negative value of XBi. Let xk be the leaving variable.
  2. (Entering variable): Compute the ratios between the (zj − cj) row and the key row, that is,

θ = max { (zj − cj)/arj : arj < 0 }

(Consider the ratios with negative denominators alone.) The entering variable is the non-basic variable corresponding to the maximum ratio θ. If there is no ratio with a negative denominator, then the problem has no feasible solution.

Step 6: Carry out the row operations as in the regular simplex method and repeat the procedure until either an optimum feasible solution is obtained or there is an indication of nonexistence of a feasible solution.
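The whole algorithm condenses into a few lines of matrix arithmetic. Below is a minimal sketch (Python with NumPy, assumed for illustration) of the dual simplex method for a problem already prepared as in Steps 1–3, that is, Maximise cx subject to Ax ≤ b, x ≥ 0 with c ≤ 0, so that the all-slack starting basis satisfies the optimality condition but may be infeasible.

import numpy as np

def dual_simplex(c, A, b, max_iter=100):
    A = np.asarray(A, dtype=float); b = np.asarray(b, dtype=float)
    c = np.asarray(c, dtype=float)
    m, n = A.shape
    T = np.hstack([A, np.eye(m)])               # columns of the x's and slacks
    cost = np.concatenate([c, np.zeros(m)])
    basis = list(range(n, n + m))               # start from the slack basis
    for _ in range(max_iter):
        B_inv = np.linalg.inv(T[:, basis])
        x_B = B_inv @ b
        if np.all(x_B >= -1e-9):                # feasible, hence optimal
            x = np.zeros(n + m); x[basis] = x_B
            return x[:n], cost @ x
        r = int(np.argmin(x_B))                 # leaving: most negative X_Bi
        row = B_inv[r] @ T                      # key row of the updated table
        z_minus_c = cost[basis] @ (B_inv @ T) - cost
        negs = np.where(row < -1e-9)[0]
        if negs.size == 0:                      # no negative denominator
            raise ValueError("no feasible solution")
        ratios = z_minus_c[negs] / row[negs]
        basis[r] = int(negs[np.argmax(ratios)]) # entering: maximum ratio
    raise RuntimeError("iteration limit reached")

# Example 2 below, with its constraints already converted to <= form:
x, z = dual_simplex(c=[-3, -2],
                    A=[[-1, -1], [1, 1], [-1, -2], [0, 1]],
                    b=[-1, 7, -10, 3])
print(x, z)   # (4, 3) with maximum z = -18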

Example 1

Use dual simplex method to solve the LPP

 

Maximise z = 2x1 + 2x2

subject to

2x1 − x2 − x3 ≥ 3
x1 − x2 + x3 ≥ 2
x1, x2, x3 ≥ 0.

Solution: First convert all the inequalities into ≤ type.

 

Maximise z = 2x1 + 2x2

subject to

− 2x1 + x2 + x3 ≤ − 3
−x1 + x2 − x3 ≤ −2
x1, x2, x3 ≥ 0.

Introduce slack variables s1, s2 ≥ 0; an obvious initial basic (non-feasible) solution is s1 = −3, s2 = −2.

Starting Table:

 

TABLE 3.5

Since there are some (zj − cj) < 0, this method fails. That is, we cannot solve this problem by the dual simplex method.

Remark: From the above example we observe that if an LPP is of maximisation type with positive objective function coefficients, then the LPP cannot be solved by the dual simplex method (the initial net evaluations zj − cj = −cj are then negative).

Example 2

Use dual simplex method to solve

 

Maximise z = − 3x1 − 2x2

subject to

x1 + x2 ≥ 1
x1 + x2 ≤ 7
x1 + 2x2 ≥ 10
x2 ≤ 3
x1, x2 ≥ 0.

Solution: First, we convert all the inequalities into ≤ type. The given LPP becomes

 

Maximise z = − 3x1 − 2x2

subject to

−x1 − x2 ≤ −1
x1 + x2 ≤ 7
−x1 − 2x2 ≤ −10
x2 ≤ 3.

Introducing slack variables s1, s2, s3, s4 ≥ 0 in the given constraints the LPP becomes

 

Maximise z = −3x1 − 2x2 + 0s1 + 0s2 + 0s3 + 0s4

subject to

−x1 − x2 + s1 = −1
x1 + x2 + s2 = 7
−x1 − 2x2 + s3 = −10
x2 + s4 = 3
x1, x2, s1, s2, s3, s4 ≥ 0.

An obvious initial basic non-feasible solution is

s1 = − 1, s2 = 7, s3 = − 10, s4 = 3.

TABLE 3.6

Since all zjcj ≥ 0 the current solution is optimal and since some XBi are negative the solution is non-feasible. From the Table we see that s3 is the leaving variable.

To find the entering vector:

Since the maximum of the ratios {3/(−1), 2/(−2)} = max{−3, −1} = −1 occurs for the column of x2, x2 enters into the basis.

First iteration: Introduce x2 and drop s3.

 

TABLE 3.7

From Table 3.7, it is clear that s4 is the leaving variable and x1 is the entering variable.

Second iteration: Introduce x1 and drop s4.

 

TABLE 3.8

Since all zj − cj ≥ 0, the optimality condition is satisfied. All XBi ≥ 0 implies that the feasibility condition is also satisfied. Hence, the optimum solution is

 

x1 = 4, x2 = 3 with maximum of z = −18

Example 3

Using dual simplex method solve the following LPP

 

Minimise z = x1 + x2

subject to

2x1 + 2x2 ≥ 2

−x1 − x2 ≥ 1

and,

x1, x2 ≥ 0

Solution: Convert the minimisation LPP into a maximisation LPP and convert all the inequalities into ≤ type.

 

Maximise z* = −x1 − x2 where z* = −z

subject to

−2x1 − 2x2 ≤ −2
x1 + x2 ≤ − 1

and,

x1, x2 ≥ 0

Introduce slack variables s1, s2 ≥ 0 in the given constraints. The LPP becomes

 

Maximise z* = −x1 − x2 + 0s1 + 0s2

subject to

−2x1 − 2x2 + s1 = −2
x1 + x2 + s2 = − 1

and,

x1, x2, s1, s2 ≥ 0

An obvious initial basic (non-feasible) solution is s1 = − 2, s2 = − 1.

Starting Table:

 

TABLE 3.9

s1 is the leaving variable since s1 is the variable with the most negative XBi. Also, the maximum of the ratios {1/(−2), 1/(−2)} = −1/2 occurs for the column of x1. Hence, x1 is the entering variable.

First iteration: Introduce x1 and drop s1.

 

TABLE 3.10

Since all zj − cj ≥ 0, the optimality condition is satisfied. Also, s2 is the only variable with negative XBi and so s2 is the leaving variable. But there are no negative entries in the key row. Hence, there exists no feasible solution to the given problem.

3.3 REVISED SIMPLEX METHOD

3.3.1 Introduction

The original simplex method is a straightforward algebraic procedure. However, this way of executing the algorithm is not the most efficient computational procedure for electronic digital computers. It computes and stores many numbers that are not needed at the current iteration and that may never become relevant for decision-making at subsequent iterations. The only pieces of information relevant at each iteration are the net evaluations of the non-basic variables, the coefficients of the entering basic variable in the other equations, and the right-hand sides of the equations. It would be very useful to have a procedure that could obtain this information efficiently without computing and storing all the other coefficients. These considerations motivated the development of the revised simplex method. This method was designed to accomplish exactly the same thing as the original method, but in a way that is more efficient for execution on a digital computer. Thus, it is a streamlined version of the original procedure: it computes the essential data in a more compact form.

3.3.2 Revised Simplex Algorithm

Step 1: Introduce slack or surplus variables and the artificial variables whenever necessary. Express the given LPP in standard form after converting it into maximisation type.

Step 2: Obtain an initial basic feasible solution with an initial basis B = Im (identity matrix of order m), find B−1 and then calculate XB = B−1b, where b is the right-hand side constant vector.

Step 3: Calculate the simplex multipliers M = (M1, M2, …, Mm) (where m denotes the number of constraints) using M = CBB−1, where CB is the cost vector associated with the basis B.

Step 4: Calculate the net evaluations of the non-basic variables using zi − ci = MPi − ci, where Pi is the column vector of the variable xi and ci is the cost of the variable xi in the objective function.

Case (i): If there is no negative zi − ci, then the current solution is optimum.

Case (ii): If some zk − ck is negative, then the non-basic variable xk with the most negative zk − ck enters into the basis. Proceed to Step 5.

Step 5: Calculate the pivotal column vector

Xk = B−1Pk

Case (i): If all entries xik ≤ 0, then there exists an unbounded solution to the given LPP.

Case (ii): If at least one xik > 0, compute the minimum ratio min{XBi/xik : xik > 0} over the current XB to determine the leaving variable. Proceed to Step 6.

Step 6: Write down the results obtained from step (2) to step (5) in the revised simplex table.

Step 7: Convert the leading element to unity and all other elements of the entering column to zero and update the current basic feasible solution.

Step 8: Go to step (4) and repeat the procedure until an optimum basic feasible solution is obtained or till there is an indication of an unbounded solution.
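A compact rendering of Steps 2–8 follows. It is a minimal sketch (Python with NumPy, assumed for illustration) restricted to problems with a slack starting basis (all constraints ≤ with b ≥ 0), so no artificial variables are needed; it carries only the basis, B−1 and the original data, in the spirit of the revised method.

import numpy as np

def revised_simplex(c, A, b, max_iter=100):
    A = np.asarray(A, dtype=float); b = np.asarray(b, dtype=float)
    c = np.asarray(c, dtype=float)
    m, n = A.shape
    cols = np.hstack([A, np.eye(m)])           # P_j columns of x's and slacks
    cost = np.concatenate([c, np.zeros(m)])
    basis = list(range(n, n + m))              # Step 2: initial basis B = I_m
    for _ in range(max_iter):
        B_inv = np.linalg.inv(cols[:, basis])
        x_B = B_inv @ b
        M = cost[basis] @ B_inv                # Step 3: simplex multipliers
        net = M @ cols - cost                  # Step 4: net evaluations
        k = int(np.argmin(net))
        if net[k] >= -1e-9:                    # no negative value: optimum
            x = np.zeros(n + m); x[basis] = x_B
            return x[:n], cost @ x
        col = B_inv @ cols[:, k]               # Step 5: pivotal column
        if np.all(col <= 1e-9):
            raise ValueError("unbounded solution")
        ratios = np.where(col > 1e-9, x_B / np.where(col > 1e-9, col, 1.0), np.inf)
        basis[int(np.argmin(ratios))] = k      # minimum ratio: leaving variable
    raise RuntimeError("iteration limit reached")

# Example 1 below:
x, z = revised_simplex(c=[6, -2, 3], A=[[2, -1, 2], [1, 0, 4]], b=[2, 4])
print(x, z)   # (4, 6, 0) with maximum z = 12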

Example 1

Use revised simplex method to solve the LPP.

Maximise z = 6x1 − 2x2 + 3x3

subject to

2x1x2 + 2x3 ≤ 2
x1 + 4x3 ≤ 4
x1, x2, x3 ≥ 0

Solution: Convert the LPP into standard form by introducing the slack variables x4 and x5.

Maximise z = 6x1 − 2x2 + 3x3 + 0x4 + 0x5

subject to

2x1 − x2 + 2x3 + x4 = 2
x1 + 0x2 + 4x3 + x5 = 4

and,

x1, x2, x3, x4, x5 ≥ 0

Let P1, …, P5 denote the column vectors corresponding to the variables x1, …, x5 and let b denote the vector of right-hand constants. Then,

P1 = (2, 1)T, P2 = (−1, 0)T, P3 = (2, 4)T, P4 = (1, 0)T, P5 = (0, 1)T and b = (2, 4)T

Now, (x4, x5) forms the initial basis with basis matrix B = (P4, P5) = I2.

Therefore, B−1 = B = I2.

Now, XB = B−1b = (2, 4)T.

Starting Table: (The last three columns are added later)

 

TABLE 3.11

The simplex multipliers are

 

M = (M1, M2) = CBB−1 = (0, 0)I2 = (0, 0)

Now, the net evaluations are

z1 − c1 = MP1 − c1 = −6, z2 − c2 = 2, z3 − c3 = −3

Since z1 − c1 = −6 is most negative, the variable to enter is x1. Hence, the pivotal column is

X1 = B−1P1 = (2, 1)T

After calculating the minimum ratio min{2/2, 4/1} = 1, we see that x4 leaves the basis. Now, the pivotal column must be reduced to (1, 0)T. This is obtained by dividing row 1 by 2 and then subtracting the new row 1 from row 2.

First iteration:

 

TABLE 3.12

The simplex multipliers corresponding to the first iteration are M = (M1, M2) = CBB−1 = (6, 0)B−1 = (3, 0).

Now, the net evaluations are

z2 − c2 = MP2 − c2 = −3 + 2 = −1, z3 − c3 = 3, z4 − c4 = 3

Since z2 − c2 < 0, x2 enters into the basis. The pivotal column is

X2 = B−1P2 = (−1/2, 1/2)T

The only positive entry lies in row 2, and the ratio 3/(1/2) = 6 shows that x5 leaves the basis. The pivotal column must be reduced to (0, 1)T.

Second iteration:

 

TABLE 3.13

The simplex multipliers corresponding to the second iteration are M = CBB−1 = (2, 2), and the net evaluations are z3 − c3 = 9, z4 − c4 = 2, z5 − c5 = 2. Since all zj − cj are non-negative, the current solution is optimum. The optimum solution is

 

x1 = 4, x2 = 6, x3 = 0 with maximum of z = 6 × 4 − 2 × 6 + 3 × 0 = 12

Example 2

Use revised simplex method to solve the following LPP.

 

Minimise z = x1 + 2x2

subject to

2x1 + 5x2 ≥ 6
x1 + x2 ≥ 2
x1, x2 ≥ 0

Solution: Convert the minimisation LPP into maximisation type and then introduce the surplus variables x3, x4 and the artificial variables x5, x6. The LPP becomes

Maximise z* = −x1 − 2x2 + 0x3 + 0x4 − Mx5 − Mx6, where z* = −z,

subject to

2x1 + 5x2 − x3 + x5 = 6
x1 + x2 − x4 + x6 = 2
x1, x2, x3, x4, x5, x6 ≥ 0

Let P1, P2, …, P6 denote the column vectors corresponding to the variables x1, …, x6 and let b denote the vector of right-hand constants.

Then,

P1 = (2, 1)T, P2 = (5, 1)T, P3 = (−1, 0)T, P4 = (0, −1)T, P5 = (1, 0)T, P6 = (0, 1)T and b = (6, 2)T

Now, (x5, x6) forms the initial basis with B = I2 = B−1 and XB = b = (6, 2)T.

Initial Table:

 

TABLE 3.14

The simplex multipliers for Table 3.14 are M = CBB−1 = (−M, −M), and the net evaluations are z1 − c1 = −3M + 1 and z2 − c2 = −6M + 2.

Hence, x2 enters into the basis and the pivotal column is

X2 = B−1P2 = (5, 1)T

After calculating the minimum ratio min{6/5, 2/1} = 6/5, we see that x5 leaves the basis. The pivotal column (5, 1)T must be reduced to (1, 0)T.

First iteration:

 

TABLE 3.15

The simplex multipliers for Table 3.15 are M = CBB−1 = ((M − 2)/5, −M).

Hence,

z1 − c1 = MP1 − c1 = (1 − 3M)/5 < 0

Hence, x1 enters. The pivotal column is

X1 = B−1P1 = (2/5, 3/5)T

and the minimum ratio min{(6/5)/(2/5), (4/5)/(3/5)} = 4/3 implies that x6 leaves the basis. We must reduce the pivotal column to (0, 1)T.

Second iteration:

 

TABLE 3.16

The simplex multipliers are M = CBB−1 = (−1/3, −1/3). Since all zj − cj are non-negative, the current solution is optimum. The optimum solution is

 

x1 = 4/3, x2 = 2/3, x3 = 0 and minimum of z = 4/3 + 2(2/3) = 8/3

3.3.3 Advantages of the Revised Simplex Method Over the Regular Simplex Method

  1. For LPPs in which the number of variables is considerably larger than the number of constraints, the amount of computation is very much reduced, because the revised simplex method works with a reduced Table whose size is determined by the number of constraints.
  2. The revised simplex method works with a reduced Table as it stores only the basic variables, the basis inverse and the constants. Hence, less amount of new information needs to be stored in the memory of the computer from one iteration to the other.
  3. Data can be stored in lesser space. The original data is usually given in fixed decimals of three or four digits. The storage of data is more accurate and compact since the revised simplex method works only with the original data.
  4. There is less accumulation of round off errors since no calculations are done on a column unless it is ready to enter the basis.
  5. The theory of the revised simplex method, especially the role of the basis inverse and the simplex multipliers, is quite helpful in understanding advanced topics in linear programming such as duality theory and sensitivity analysis.
3.4 SENSITIVITY ANALYSIS

3.4.1 Introduction

After a linear programming problem has been solved, it is useful to study the effect of changes in the parameters of the problem on the current optimal solution. The LP solution must provide dynamic information: a static optimum solution will become obsolete as soon as the conditions under which the model was constructed change. Sensitivity analysis is concerned with studying possible changes in the available optimal solution as a result of making changes in the original model.

The changes in the parameters of the problem may be discrete or continuous. The study of the effect of discrete changes in parameters on the optimal solution is called sensitivity analysis or post-optimality analysis, while that of continuous changes in parameters is called parametric programming. One way to determine the effects of parameter changes is to solve the new problem anew, which may be computationally inefficient. Alternatively, the current optimal solution may be investigated, making use of the properties of the simplex criteria. The second method reduces the additional computations considerably and hence forms the subject of the present discussion.

The changes in the parameters of a LPP include

  1. changes in the right hand side of the constraint bi
  2. changes in the cost coefficient cj
  3. addition of new variables
  4. changes in the coefficients of constraints arj
  5. addition of new constraints
  6. deletion of variables
  7. deletion of constraints.

Generally, these parameter changes result in one of the following three cases:

  1. The optimal solution does not change.
  2. The basic variables remains the same but their values change.
  3. The basic variables as well as their values are changed.

While dealing with these changes, one important objective is to find the maximum extent to which a parameter or a set of parameters can change so that the current optimal solution remains optimal. In other words, the objective is to determine how sensitive the optimal solution is to changes in those parameters. Such an analysis is called sensitivity analysis.

1. Variation in the right-hand side of constraints Since the optimality condition zj − cj ≥ 0 does not involve any of the bi, if any component bi of the requirement vector b = (b1, …, bm) is changed, then this change will not affect the condition of optimality. Hence, if any bi is changed to bi + Δbi, the solution thus obtained will remain optimal.

But XB = B−1b depends on b. Therefore, any change in b may affect the feasibility of the current solution.
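The feasibility check is a single matrix-vector product, XB = B−1(b + Δb). A minimal sketch follows (Python with NumPy, assumed for illustration); the basis inverse shown is the one belonging to the optimal basis (x1, x2) of Example 1 below, and the changed requirement vector (7, 2)T is inferred from the new basic values (11/5, 12/5) quoted in part 1 of that example, so treat both as reconstructions rather than data from the text.

import numpy as np

def new_basic_values(B_inv, b_new):
    # XB = B^{-1} b_new; the basis stays optimal iff every component is >= 0
    return np.asarray(B_inv, dtype=float) @ np.asarray(b_new, dtype=float)

B_inv = np.array([[1/5, 2/5], [2/5, -1/5]])    # reconstructed basis inverse
x_B = new_basic_values(B_inv, b_new=[7, 2])
print(x_B, bool(np.all(x_B >= 0)))             # [2.2 2.4] -> (11/5, 12/5), feasible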

Example 1

Consider the LPP

 

Maximise z = 5x1 + 12x2 + 4x3

subject to

x1 + 2x2 + x3 ≤ 5
2x1x2 + 3x3 = 2
x1, x2, x3 ≥ 0

  1. Discuss the effect of changing the requirement vector from to on the optimum solution.
  2. Discuss the effect of changing the requirement vector from to on the optimum solution.
  3. Which resource should be increased and by how much to achieve the best marginal increase in the value of the objective function?
  4. Which resource should be decreased and by how much to achieve the best marginal increase in the value of the objective function?

Solution: If we solve the given problem by the Big M method, the final simplex table is

 

TABLE 3.17

and the optimal solution is

 

x1 = 9/5, x2 = 8/5, x3 = 0 and maximum z = 141/5

  1. If the requirement vector changes to then the new values of the current basic variables are given by

    Hence, the new solution is optimal and feasible and the new optimum solution is

     

    x1 = 11/5, x2 = 12/5 with maximum of z = 5 × 11/5 + 12 × 12/5 + 4 × 0 = 199/5

  2. If the requirement vector changes to then the new values of the basic variables are given by

Hence, the current optimal solution becomes infeasible and we have to apply the dual simplex method.

     

    TABLE 3.18

Introduce x3 into the basis and drop x2 from the basis; the next iteration is

     

    TABLE 3.19

Since all zj − cj ≥ 0 and all XBi ≥ 0, the current solution is optimum. The optimal solution is

     

x1 = 0, x2 = 0, x3 = 3 with maximum of z = 5(0) + 12(0) + 4 × 3 = 12

  3. In order to find the resource that should be increased (or decreased), we shall write the final objective function in terms of the dual variables, which is w = 5y1 + 2y2, where y1 = 29/5 and y2 = −2/5 are the optimal dual variables. Thus, each additional unit of the first resource increases the objective function by 29/5. Next, we are to find how much the first resource can be increased so that each additional unit continues to increase the objective function by 29/5. This requirement will be met so long as the primal problem remains feasible. If Δ is the increase in the first resource, it can be determined from the condition

    x1 = 9/5 + Δ/5 ≥ 0 and x2 = 8/5 + 2Δ/5 ≥ 0

    As x1 and x2 remain feasible (≥0) for all values of Δ > 0, the first resource can be increased indefinitely while maintaining the condition that each additional unit will increase the objective function by 29/5.

  4. The second resource should be decreased, since each unit decrease in the second resource increases the objective function by 2/5. Let Δ be the decrease in the second resource. To find its extent, we make use of the condition that the current solution remains feasible so long as

    x1 = 9/5 − 2Δ/5 ≥ 0 and x2 = 8/5 + Δ/5 ≥ 0

    Evidently, x1 remains non-negative only so long as Δ ≤ 9/2. If Δ > 9/2, x1 becomes negative and must leave the solution.

2. Addition of a new constraint Addition of a new constraint may or may not affect the feasibility of the current optimal solution. The addition of a new constraint can result in one of the two conditions.

  1. If the new constraint is satisfied by the current optimal solution, then this new constraint is said to be redundant and its addition will not change the solution. That is, the current solution remains feasible as well as optimal.
  2. If the new constraint is not satisfied by the current optimal solution, the solution becomes infeasible. In this case the new solution is obtained by using the dual simplex method.

However, in general, whenever a new constraint is added to a linear programming problem, the old optimal value will always be better or at least equal to the new optimal value. In other words, the addition of a new constraint cannot improve the optimal value of any LPP.
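Deciding which of the two cases applies is a single substitution. A minimal sketch (Python with NumPy, assumed for illustration; the optimum vector below is a hypothetical stand-in, since the solution (3.1.13) is not reproduced here):

import numpy as np

def constraint_is_redundant(a_new, b_new, x_opt):
    # True if the current optimum already satisfies a_new @ x <= b_new
    return float(np.dot(a_new, x_opt)) <= b_new + 1e-9

# Hypothetical current optimum (1, 2, 0): 2(1) + 3(2) + 2(0) = 8 > 4, so the
# new constraint of Example 2 below is not redundant; dual simplex is needed.
print(constraint_is_redundant([2, 3, 2], 4, x_opt=[1, 2, 0]))   # False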

Example 2

Consider the LPP

 

Maximise z = 4x1 + 6x2 + 2x3,

subject to

If we solve the problem by simplex method then the optimal simplex table will be

 

TABLE 3.20

and the optimal solution will be

 

If we add the additional constraint 2x1 + 3x2 + 2x3 ≤ 4, the optimal solution given in (3.1.13) does not satisfy this constraint, so the current optimal solution is not feasible for the modified problem. Let s3 be the slack variable for this constraint; then the modified Table will be

 

TABLE 3.21

Since x1 and x2 are basic variables, the corresponding entries in the new (third) row should be zero. To achieve this, multiply the first row by −2 and the second row by −3 and add them to the third row; the above Table becomes

 

TABLE 3.22

Since all zj − cj ≥ 0, the optimality condition is satisfied, but the XB value for the third row is negative. If we now apply the dual simplex method, the optimal solution becomes

 

x1 = 5/3, x2 = 5/3 and x3 = 2/3 with maximum of z = 12

3.4.2 Changes Affecting Optimality

1. Changes in the objective function (or) changes in the cost coefficient cj:

Changes in the coefficients of the objective function may take place due to a change in cost or profit of either basic variables or non-basic variables.

If XB is the optimal basic feasible solution, then we have XB = B−1b. It is clear that XB is independent of C and, therefore, any change in some component cj of C will not change XB; XB will always remain a basic feasible solution.

However, the optimality condition zj − cj ≥ 0 for all j, which is satisfied by the current optimal solution, may be violated when some cj changes. So, any change in a component cj of C may affect the optimality of the current solution but will not affect its feasibility.

The variation in the price vector C may be made by the following two ways:

  1. Variation in cj ∉ CB, that is, variation in the coefficient cj of a non-basic variable xj in the objective function.

    If cj ∉ CB changes to (cj + Δcj) such that Δcj ≤ zj − cj, then the value of the objective function and the optimal solution of the problem remain unchanged. The range over which cj ∉ CB can vary while maintaining the optimality of the solution is given by −∞ < cj + Δcj ≤ cj + (zj − cj). Note that there is an upper bound but no lower bound to Δcj.

  2. Variation in ck ∈ CB, that is, variation in the coefficient ck of a basic variable xk in the objective function.

    If ck ∈ CB changes to (ck + Δck), then the range over which ck ∈ CB can vary so that the solution remains optimal is given by

    max{−(zj − cj)/akj : akj > 0} ≤ Δck ≤ min{−(zj − cj)/akj : akj < 0}

    where akj denotes the entry of the optimal table in the row of xk and the column of a non-basic variable xj. If there is no akj > 0, there is no lower bound to Δck, and if there is no akj < 0, there is no upper bound to Δck. The value of the objective function will change by the amount XBk · Δck,

    i.e.

     

    z* = z + XBk · Δck

Remark 1: Any change in cj ∉ CB will affect only its own net evaluation and no others.

Remark 2: Any change in ck ∈ CB will affect the net evaluations of the non-basic variables and the value of the objective function.
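The ranging computation for a basic cost is short enough to sketch in code (Python with NumPy, assumed for illustration). The inputs are the net evaluations of the non-basic columns and the entries akj of the xk-row of the optimal table; the numbers in the usage lines are those of part (c) of the example below.

import numpy as np

def basic_cost_range(z_minus_c, row_k):
    # Solve (z_j - c_j) + Delta * a_kj >= 0 over the non-basic columns j
    lower = max((-d / a for d, a in zip(z_minus_c, row_k) if a > 0),
                default=-np.inf)
    upper = min((-d / a for d, a in zip(z_minus_c, row_k) if a < 0),
                default=np.inf)
    return lower, upper

# Non-basic columns (x3, s1, s2) with c1 = 4: z_j - c_j = (6, 10/3, 2/3)
# and x1-row entries a_1j = (-1, 4/3, -1/3).
print(basic_cost_range([6, 10/3, 2/3], [-1, 4/3, -1/3]))
# (-2.5, 2.0): 4 - 5/2 <= c1 <= 4 + 2, i.e. 3/2 <= c1 <= 6, as in the example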

Example

Consider the LPP

 

Maximise z = 4x1 + 6x2 + 2x3

subject to

x1 + x2 + x3 ≤ 3
x1 + 4x2 + 7x3 ≤ 9
x1, x2, x3 ≥ 0

(a) Solve the LPP

(b) (i) Find the range on the value of the non-basic variable coefficient c3 such that the current solution still remains optimal.

(ii) What happens if c3 is increased to 12? What is the new optimal solution?

(c) (i) Find the range on basic variable coefficient c1 such that the current optimal solution remains optimal.

(ii) Find the effect of c1 = 8 on the optimal solution.

(d) Find the effect of changing the objective function to z = 2x1 + 8x2 + 4x3 on the current optimal solution.

Solution: (a) If we solve the LPP by simplex method (left as an exercise to the reader) the optimal simplex table is

 

TABLE 3.23

(Here, s1 and s2 are slack variables) and the optimal solution is

x1 = 1, x2 = 2, x3 = 0 and maximum of z = 4 × 1 + 6 × 2 + 2 × 0 = 16

(b) (i) The coefficient c3 corresponds to the non-basic variable x3, whose value is zero in the current optimal solution since its associated profit c3 = 2 is comparatively low. Clearly, if c3 decreases further, it will have no effect on the current optimal solution. However, if c3 is increased beyond a certain value, it may become profitable to produce the third product.

As a rule, the sensitivity of the current optimal solution is determined by studying how it changes as a result of changes in the input data. When the value of c3 changes, the net evaluation of x3 also changes. The current solution remains optimal as long as z3 − c3 remains non-negative.

From the optimal simplex table,

z3 − c3 = 8 − c3

For optimality,

8 − c3 ≥ 0

or,

c3 ≤ 8

Hence, as long as the coefficient of x3 is less than or equal to 8, it is not profitable to produce it.

(ii) If c3 = 12, z3 − c3 = 8 − 12 = −4.

Since z3 − c3 is negative, the current solution is no longer optimal. Since there is a negative net evaluation, we continue the iteration process [left as an exercise to the reader]. The new optimal solution is

 

x1 = 2, x2 = 0, x3 = 1 with maximum of z = 4 × 2 + 6 × 0 + 12 × 1 = 20

(c) (i) When c1 decreases below a certain level, it may no longer be profitable to produce the product corresponding to the decision variable x1. On the other hand, if c1 increases beyond a certain value, it may become so profitable that it pays most to produce only the product corresponding to x1. In either case the optimal solution will change, and hence there is a lower as well as an upper limit on c1 within which the optimal production mix will not be affected.

It is clear from the Table that any variation in c1 (and/or in c2) will not change z1 − c1 and z2 − c2 (∵ they remain zero), while z3 − c3, z4 − c4 and z5 − c5 will change. However, as long as zj − cj (j = 3, 4, 5) remain non-negative, the Table will remain optimal.

Now, in terms of c1,

z3 − c3 = −c1 + 10, z4 − c4 = (4/3)c1 − 2, z5 − c5 = −(1/3)c1 + 2

Hence,

−c1 + 10 ≥ 0 i.e. c1 ≤ 10
(4/3)c1 − 2 ≥ 0 i.e. c1 ≥ 3/2
−(1/3)c1 + 2 ≥ 0 i.e. c1 ≤ 6

∴ Range of c1 for the optimal solution to remain optimal is 3/2 ≤ c1 ≤ 6.

Thus, so long as c1 lies within these limits, the optimal solution in Table remains optimal. However, within this range, as the value of c1 is changed, maximum of z undergoes a change. For example, when c1 = 4, maximum of z = 4 × 1 + 6 × 2 = 16.

(ii) When c1 = 8,

z3c3 = 10 − 8 = 2

 

z4 − c4 = −2 + (4/3)(8) = 26/3
z5 − c5 = −(1/3)(8) + 2 = −2/3
z1c1 = 0
z2c2 = 0

As the net evaluation corresponding to the slack variable s2 is −2/3, the current solution is not optimum. We can continue the simplex iteration process by taking s2 as the entering variable. The new optimal solution is x1 = 3, x2 = 0, x3 = 0 with maximum of z = 24 [left as an exercise to the reader].

(d) The effect on the optimal solution can be determined by checking whether the net evaluation row remains non-negative.

 

z1 − c1 = 0
z2 − c2 = 0
z3 − c3 = 10
z4 − c4 = 0
z5 − c5 = 2

Hence, the optimal solution does not change. The optimal solution remains as x1 = 1, x2 = 2, x3 = 0 and maximum of z = 1 × 2 + 2 × 8 + 0 × 4 = 18.

Note that there is also an indication of alternate optimum solution since z4c4 = 0.

2. Variation in the coefficients aij of the constraints (or) variation in the component aij of the coefficient matrix A:

Case (i): If the coefficient aij of a non-basic variable xj (aij ∉ B) gets changed in the current optimal solution, then this change will not affect the feasibility of the current solution but may affect its optimality.

Case (ii): If the coefficient aij of a basic variable xj (aij ∈ B) gets changed in the current optimal solution, then this may affect both the feasibility and the optimality of the current solution. As the basis matrix is affected, the change may affect all the quantities given in the current optimal Table. Under this situation, it may be better to solve the problem again.

Example

Consider the LPP

 

Maximise z = 45x1 + 100x2 + 30x3 + 50x4

subject to

7x1 + 10x2 + 4x3 + 9x4 ≤ 1200
3x1 + 40x2 + x3 + x4 ≤ 800
x1, x2, x3, x4 ≥ 0

Find the effect of the following on the optimal solution:

  1. If the x1 column in the problem changes from to
  2. If the x1 column changes from to

Solution: (a) Now, x1 is a non-basic variable and

 

z1 − c1 = CBB−1P1 − c1
= MP1 − c1 [where M = CBB−1 is the vector of simplex multipliers]

Now, M = (22/3, 2/3), as s1, s2 form the initial basis. Hence, for the changed column P1,

z1 − c1 = (22/3, 2/3)P1 − 45 = 29/3

As this value is non-negative, the original optimum solution remains optimum for the new problem also.

(b) Similarly, for the changed column P1,

 

z1 − c1 = CBB−1P1 − c1
= MP1 − c1
= (22/3, 2/3)P1 − 45 = −3

Hence, the solution can be improved by taking the variable x1 as the entering variable. By applying the simplex iteration process the new optimum solution is

 

x1 = 2000/7, x3 = 5600/27, x2 = x4 = 0 and maximum of z = 86000/7

3. Addition of a new activity (or variable)

If a new variable is introduced in an LPP whose optimal solution has been obtained, then the current solution of the problem remains feasible. The addition of an extra variable xn+1 introduces an extra column, say Pn+1, into the coefficient matrix A, and an extra cost cn+1 into the cost vector C. Thus, the addition of this extra variable may affect the optimality of the problem. If zn+1 − cn+1 ≥ 0, then the solution of the original problem remains optimal for the new problem also. Otherwise, we use the simplex method to obtain the optimal solution to the new problem.

Further, it can be noted that the addition of a new variable can never worsen the value of the objective function.
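Pricing out a new column needs nothing but the simplex multipliers of the current optimal table: zn+1 − cn+1 = MPn+1 − cn+1. A minimal sketch (Python with NumPy, assumed for illustration; the column and cost in the usage lines are hypothetical numbers, not the elided data of the example below):

import numpy as np

def price_new_variable(M, P_new, c_new):
    # net evaluation z - c of the added variable under the current basis
    return float(np.dot(M, P_new)) - c_new

# Hypothetical new column (10, 5) with cost 120, priced with the multipliers
# M = (22/3, 2/3) of the preceding example:
net = price_new_variable([22/3, 2/3], [10, 5], 120)
print(net)   # about -43.3 < 0, so the variable enters and the optimum improves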

Example

Consider the LPP given in the above example. Then, the optimum simplex table is given by the Table above. If a new variable x5 is added to this problem with a column P5 and c5 = 120, find the change in the optimal solution.

Solution: z5 − c5 = CBB−1P5 − c5 = MP5 − c5 = −40, using the optimal Table.

Since z5 − c5 = −40, which is negative, the existing optimum solution can be improved. By applying the simplex iteration process (left as an exercise to the reader), the new optimum solution is

 

x3 = 400/3, x5 = 200/3 with maximum of z = 30 × 400/3 + 120 × 200/3 = 12000

4. Deletion of a variable

Deletion of a non-basic variable is a totally superfluous operation and does not affect the feasibility or the optimality of the current optimal solution. But the deletion of a basic variable may affect the optimality of the current solution. To find the new optimum solution, assign a large penalty −M to the variable under consideration and then apply the regular simplex method to the modified simplex table.

Example

Consider the optimal Table of a maximisation problem.

 

TABLE 3.24

Find the change in the optimal solution, when the basic variable x2 is deleted.

Solution: Since this problem is of maximisation type we assign a cost − M to the variable x2. The modified simplex table becomes

 

TABLE 3.25

First iteration: Introduce x4 and drop x2.

 

TABLE 3.26

Since all zj − cj ≥ 0, the current solution is optimum. The optimal solution to the new problem is

 

x1 = 0, x2 = 0 and x3 = 4 with maximum of z = 28.

5. Deletion of a constraint

The constraint to be deleted may be either binding or non-binding (redundant) on the optimal solution. The deletion of a non-binding constraint can only enlarge the feasible region and will not affect the optimal solution. Moreover, if the constraint under consideration has a slack or surplus variable at a positive value in the basis, it cannot be binding and hence its deletion will not affect the optimal solution.

The deletion of a binding constraint will, however, cause post-optimality problem. The simplest way to proceed in this case is via the addition of one or two new variables.

3.5 PARAMETRIC PROGRAMMING

3.5.1 Introduction

Sensitivity analysis discussed the effect of discrete changes in the input coefficients of the linear programming problem on its optimal solution. However, if there is a continuous change in the values of these coefficients, none of the results derived in that section is applicable. Parametric linear programming investigates the effect of predetermined continuous variations of these coefficients on the optimal solution. It is simply an extension of sensitivity analysis and aims at finding the various basic solutions that become optimal, one after the other, as the coefficients of the problem change continuously. The coefficients change as linear functions of a single parameter, hence the name parametric linear programming for this computational technique. As in sensitivity analysis, the purpose of this technique is to reduce the additional computations required to obtain the changes in the optimal solution. Here, we consider only two types of parametric problems.

  1. Parametric cost problem in which cost vector c varies linearly as a function of parameter μ.
  2. Parametric right hand side problem in which the requirement vector b varies linearly as a function of parameter μ.

3.5.2 Parametric Cost Problem

Consider the LPP

 

Minimise z = CX

subject to

 

AX = b,
X ≥ 0,

where C is the given cost vector.

Let this cost vector change to C + μC′. Then, the parametric cost problem is Minimise z(μ) = (C + μC′)X subject to

 

AX = b,
X ≥ 0

where C′ is the given predetermined cost variation vector and μ is an unknown (positive or negative) parameter. As μ changes, the coefficients of the variables in the objective function also change. We want to find the family of optimal solutions as μ changes from −∞ to +∞.

The problem can be converted into maximisation type and can be solved by using the simplex method and sensitivity analysis. When μ = 0, the parametric cost problem reduces to the original linear programming problem. Let B and XB represent the optimal basis matrix and the optimum basic feasible solution, respectively, for μ = 0. The net evaluations or relative cost coefficients are all non-negative and are given by zj − cj = CBāj − cj, where CB is the cost vector of the basic variables and āj is the jth updated column (corresponding to the variable xj) in the optimal Table.

As μ changes from zero to a positive or negative value, the feasible region and the values of the basic variables XB remain unaltered, but the relative cost coefficients change. For any variable xj, the relative cost coefficient is given by

(zj − cj)(μ) = (zj − cj) + μ(z′j − c′j)

Since the vectors C and C′ are known, (zj − cj) and (z′j − c′j) can be determined. For the solution to be optimum, (zj − cj)(μ) ≥ 0, that is, (zj − cj) + μ(z′j − c′j) ≥ 0. In other words, for a given solution, we can determine the range for μ within which the solution remains optimal.
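The critical values of μ are just the roots of the binding inequalities, so the optimality interval of a basis can be computed directly from the two rows of net evaluations. A minimal sketch (Python with NumPy, assumed for illustration), checked against the first basis of the example below:

import numpy as np

def mu_range(z_minus_c, zp_minus_cp):
    # interval of mu with (z_j - c_j) + mu (z'_j - c'_j) >= 0 for all non-basic j
    lo, hi = -np.inf, np.inf
    for d, dp in zip(z_minus_c, zp_minus_cp):
        if dp > 0:
            lo = max(lo, -d / dp)
        elif dp < 0:
            hi = min(hi, -d / dp)
        elif d < 0:
            return None                     # optimal for no mu at all
    return lo, hi

# Non-basic columns (x3, s1, s2) of the first optimal basis below:
print(mu_range([8, 10/3, 2/3], [-8, 10/3, -4/3]))   # (-1.0, 0.5)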

Example

Consider the linear programming problem

 

Maximise z = 4x1 + 6x2 + 2x3

subject to

x1 + x2 + x3 ≤ 3,
x1 + 4x2 + 7x3 ≤ 9
x1, x2, x3 ≥ 0

If we solve this problem by regular simplex method then the optimal simplex table is

 

TABLE 3.27

(where s1 and s2 are slack variables).

Solve this problem if the variation cost vector C′ = (2 −2 2 0 0). Identify all critical values of the parameter μ.

Solution: The given parametric cost problem is

 

Maximise z = (4 + 2μ)x1 + (6 − 2μ)x2 + (2 + 2μ)x3 + 0s1 + 0s2

subject to

 

x1 + x2 + x3 + s1 = 3,
x1 + 4x2 + 7x3 + s2 = 9,
x1, x2, x3, s1, s2 ≥ 0

When μ = 0, the problem reduces to the original linear programming problem, whose optimal solution is given in Table 3.27. For values of μ other than zero, the relative profit coefficients become linear functions of μ. To compute them, we first add a new relative profit row, (z′j − c′j), to Table 3.27. This is shown in the following Table.

 

TABLE 3.28

Note that z′j − c′j is calculated similarly to zj − cj, except that C is replaced by C′.

The above Table represents a basic feasible solution for the given parametric cost problem. It is given by x1 = 1, x2 = 2, x3 = s1 = s2 = 0 and the value of the objective function is z(μ) = z + μz′ = 16 − 2μ. The relative profit coefficients, which are linear functions of μ, are given by

(z3 − c3)(μ) = 8 − 8μ, (z4 − c4)(μ) = 10/3 + (10/3)μ, (z5 − c5)(μ) = 2/3 − (4/3)μ

The above Table will be optimal if

(zj − cj) + μ(z′j − c′j) ≥ 0 for j = 3, 4, 5. Thus, we can determine the range of μ for which the above Table remains optimal as follows:

8 − 8μ ≥ 0 or μ ≤ 1
10/3 + (10/3)μ ≥ 0 or μ ≥ −1
2/3 − (4/3)μ ≥ 0 or μ ≤ 1/2

Thus, x1 = 1, x2 = 2, x3 = s1 = s2 = 0 is an optimal solution of the given parametric problem for all values of μ between −1 and 1/2, with maximum of z = 16 − 2μ.

For μ > 1/2, the relative profit coefficient of the non-basic variable s2, namely (z5 − c5)(μ), becomes negative and Table 3.28 no longer remains optimal. The regular simplex method is used to iterate towards optimality. The entering variable is s2 and the leaving variable is x2.

 

TABLE 3.29

The above Table will be optimum if

 

(zj − cj)(μ) ≥ 0 for j = 2, 3, 4.

Now, (z2 − c2)(μ) = −2 + 4μ ≥ 0 ∴ μ ≥ 1/2

 

(z3 − c3)(μ) = 2 + 0μ ≥ 0, which is always true
(z4 − c4)(μ) = 4 + 2μ ≥ 0 ∴ μ ≥ −2

∴ For all μ≥1/2, the optimal solution is given by

 

x1 = 3, x2 = x3 = s1 = 0, s2 = 6 and Maximum of z = 12 + 6μ.

For μ < −1, the relative profit coefficient of the non-basic variable s1 in Table 3.28, namely (z4 − c4)(μ) = 10/3 + (10/3)μ, becomes negative. Then, Table 3.28 is no longer optimal.

s1 enters into the basis and x1 leaves the basis.

 

TABLE 3.30

The above Table will be optimal if

 

(zj − cj)(μ) ≥ 0 for j = 1, 3, 5.

Now,

 

(z1 − c1)(μ) = −5/2 − (5/2)μ ≥ 0 ∴ μ ≤ −1
(z3 − c3)(μ) = 17/2 − (11/2)μ ≥ 0 ∴ μ ≤ 17/11
(z5 − c5)(μ) = 3/2 − (1/2)μ ≥ 0 ∴ μ ≤ 3

∴ For all μ ≤ −1, the optimal solution is given by

 

x1 = 0, x2 = 9/4, x3 = 0, s1 = 3/4, s2 = 0 and maximum of z = 27/2 − (9/2)μ.

Tables 3.28, 3.29 and 3.30 provide the families of optimal solutions for −1 ≤ μ ≤ 1/2, μ ≥ 1/2 and μ ≤ −1, respectively.

3.5.3 Parametric Right Hand Side Problem

The right-hand side constants in a linear programming problem represent the limits on the resources and the outputs. In some practical problems, the resources are not all independent of one another: a shortage of one resource may cause shortages of other resources at varying levels. The same is true for outputs. For example, consider a firm manufacturing electrical appliances. A shortage of electric power will decrease the demand for all the electric items produced, in varying degrees depending upon the electric energy consumed by them. In all such problems, we are to consider simultaneous changes in the right-hand side constants, which are functions of one parameter, and study how the optimal solution is affected by these changes.

Let the linear programming problem before parameterisation be

 

Maximise z = CX,

subject to

AX = b,
X ≥ 0

where b is the known requirement (right-hand side) vector. Let this requirement vector be changed to b + μb′, so that the parametric right-hand side problem becomes

 

Maximise z = CX

subject to

AX = b + μb′, X ≥ 0

where b′ is the given and predetermined variation vector and μ is an unknown parameter. As μ changes, the right hand constants also change. We wish to determine the family of optimal solutions as μ changes from − ∞ to + ∞. When μ = 0, the parametric problem reduces to the original linear programming problem. Simplex method is used to find its optimal solution.

Let B and XB represent the optimal basis matrix and the optimal basic feasible solution, respectively, for μ = 0. Then, XB = B−1b. As μ changes from zero to a positive or negative value, the values of the basic variables change and the new values are given by

 

XB(μ) = B−1(b + μb′) = B−1b + μB−1b′ = X*B + μX′B

A change in μ has no effect on the values of the relative profit coefficients zj − cj; that is, the zj − cj values remain non-negative. For a given basis matrix B, the values of X*B = B−1b and X′B = B−1b′ can be calculated. The solution XB(μ) = X*B + μX′B is feasible and optimal as long as X*B + μX′B ≥ 0. In other words, for a given solution, we can determine the range for μ within which the solution remains optimal.
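The feasibility interval of a basis follows from X*B + μX′B ≥ 0 in the same way as in the parametric cost problem. A minimal sketch (Python with NumPy, assumed for illustration), checked against the first basis of the example below:

import numpy as np

def rhs_mu_range(x_star, x_prime):
    # interval of mu with x*_Bi + mu x'_Bi >= 0 for every basic variable i
    lo, hi = -np.inf, np.inf
    for s, p in zip(x_star, x_prime):
        if p > 0:
            lo = max(lo, -s / p)
        elif p < 0:
            hi = min(hi, -s / p)
        elif s < 0:
            return None                     # feasible for no mu at all
    return lo, hi

# X*_B = (1, 2) for the basis (x1, x2) and X'_B = (5, -2):
print(rhs_mu_range([1, 2], [5, -2]))        # (-0.2, 1.0), i.e. -1/5 <= mu <= 1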

Example

Consider the linear programming problem

 

Maximise z = 4x1 + 6x2 + 2x3

subject to

 

x1 + x2 + x3 ≤ 3
x1 + 4x2 + 7x3 ≤ 9
x1, x2, x3 ≥ 0

If we solve this problem by simplex method by taking s1 and s2 as the slack variables then the optimal simplex table is

 

TABLE 3.31

Solve the problem if the variation right-hand side vector is b′ = (3, −3)T (as can be read off the parametrised constraints below). Perform the complete parametric analysis and identify all critical values of the parameter μ.

Solution: The given parametric right hand side problem is

 

Maximise z = 4x1 + 6x2 + 2x3 + 0s1 + 0s2

subject to

x1 + x2 + x3 + s1 = 3 + 3μ
x1 + 4x2 + 7x3 + s2 = 9 − 3μ
x1, x2, x3, s1, s2 ≥ 0

When μ = 0, the problem reduces to the linear programming problem whose optimal solution is given in Table 3.31. For values of μ other than zero, the values of the right-hand side constants change because of the variation vector b′. The vectors X*B = B−1b and X′B = B−1b′ are computed as follows:

X*B = (1, 2)T  and  X′B = B−1(3, −3)T = (5, −2)T

Hence, the new Table is

 

TABLE 3.32

For a fixed μ, the values of the basic variables in the above Table are given by

 

x1 = 1 + 5μ
x2 = 2 − 2μ

The zj − cj values are not affected as long as the basis consists of the variables x1 and x2. As μ changes, the values of the basic variables x1 and x2 change, and the above Table remains optimal as long as the basis (x1, x2) remains feasible. In other words, the above Table remains optimal as long as

 

x1 = 1 + 5μ ≥ 0 or μ ≥ −1/5
x2 = 2 − 2μ ≥ 0 or μ ≤ 1.

Thus, the above Table remains optimal as μ varies from −1/5 to 1. Hence, for all −1/5 ≤ μ ≤ 1, the optimal solution is given by

 

x1 = 1 + 5μ, x2 = 2 − 2μ, x3 = s1 = s2 = 0, and maximum of z = 16 + 8 μ.

For μ > 1, the basic variable x2 becomes negative. The dual simplex method can, therefore, be applied to find the new optimal solution for μ > 1. Evidently, x2 is the variable that leaves the basis. The ratios for the non-basic variables are −3, 10, −2. Thus, variable s1 is the entering variable. The next iteration is

 

TABLE 3.33

The basic solution given by the above Table is

 

x1 = 9 − 3 μ, x2 = 0, x3 = 0, s1 = −6 + 6 μ, s2 = 0, maximum of z = 36 − 12μ.

The solution is optimal as long as the basic variables x1 and s1 remain non-negative. That is, as long as

 

x1 = 9 − 3μ ≥ 0 or μ ≤ 3,
s1 = −6 + 6μ ≥ 0 or μ ≥ 1

The above solution is optimal for all 1 ≤ μ ≤ 3.

For μ > 3, the basic variable x1 becomes negative. As there is no negative coefficient in the first row, the dual simplex method cannot proceed and the problem is infeasible. Hence, there is no optimal solution to the problem for any μ > 3.

For μ ≤ − 1/5, the basic variable x1 in Table 3.32 becomes negative. Dual simplex method can therefore be applied to find the new optimum solution for μ ≤ − 1/5. Evidently, x1 is the variable that leaves the basis and s2 is the variable that enters into the basis.

 

TABLE 3.34

The basic solution given by the above Table is x1 = 0, x2 = 3 + 3μ, x3 = 0, s1 = 0, s2 = − 3 − 15μ and maximum of z = 18 + 18μ.

The solution is optimal as long as

 

x2 = 3 + 3μ ≥ 0 or μ ≥ −1
s2 = −3 − 15μ ≥ 0 or μ ≤ −1/5.

The above solution is optimal for all −1 ≤ μ ≤ −1/5. For μ < −1, the basic variable x2 in the above Table becomes negative. As there is no negative coefficient in the second row, the problem is infeasible. Hence, there exists no optimal solution to the problem for any μ < −1.
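The complete analysis above can be verified numerically. A minimal sketch using SciPy's linprog (SciPy minimises, so the objective is negated) re-solves the problem at sample values of μ:

from scipy.optimize import linprog

for mu in (-2.0, -0.5, 0.5, 2.0, 4.0):
    res = linprog(c=[-4, -6, -2],
                  A_ub=[[1, 1, 1], [1, 4, 7]],
                  b_ub=[3 + 3 * mu, 9 - 3 * mu],
                  bounds=[(0, None)] * 3, method="highs")
    print(mu, None if not res.success else (-res.fun, res.x.round(4)))
# Expected: z = 18 + 18*mu on [-1, -1/5], 16 + 8*mu on [-1/5, 1],
# 36 - 12*mu on [1, 3]; infeasible for mu < -1 and for mu > 3.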

3.6 GOAL PROGRAMMING

3.6.1 Introduction

A difficulty with a linear programming problem is that the objective function is measured in only one dimension such as profit or loss or production capacity and so on. It is impossible in linear programming to have multiple goals unless they can be measured in the same units.

The term goal programming was introduced by A. Charnes and W.W. Cooper in 1961. The concept emerged as a way of dealing with otherwise unsolvable linear programming problems. Organisational objectives vary according to the characteristics, types and philosophy of management, and the particular environmental conditions of the organisation. Profit maximisation, which is regarded as the sole objective of the business firm in classical economic theory, is one of the most widely accepted goals of management. However, in the present dynamic business environment, profit maximisation is not the only objective of management. In fact, business firms do, on quite a few occasions, place higher priorities on goals other than profit maximisation. We have seen, for example, firms place great emphasis on social responsibilities, social contributions, public relations, and industrial and labour relations. Such goals are sought because of outside pressure or voluntary management decisions.

Thus, we see management has multiple conflicting objectives to achieve in the present day business scenario. This implies that the decision criteria should be multi-dimensional and the decision involves multiple goals. In this context goal programming assumes greater importance as a powerful quantitative technique capable of handling multiple decision criteria.

3.6.2 Concepts of Goal Programming

The linear programming technique, which has been applied quite extensively to various decision problems, has limited value for problems involving multiple goals. The difficulty in using linear programming lies in the uni-dimensionality of the objective function, which requires the maximisation or minimisation of one objective function, an oversimplification of reality. In real life situations, multiple goals and subgoals must be taken into account. Various goals may be expressed in different units of measurement, such as rupees, hours, tons and others. Often, multiple goals of management are in conflict or are achievable only at the expense of other goals. Furthermore, these goals are incommensurable. Thus, the solution of the problem requires the establishment of a hierarchy of importance among these incompatible goals, so that the lower-order goals are considered only after the higher-order goals are satisfied or have reached the point beyond which no further improvement is desired. If management can provide an ordinal ranking of goals in terms of their contributions or importance to the organisation, and all goal constraints are in linear relationships, the problem can be solved by goal programming.

How is an objective function to be determined and expressed in algebraic form when there exist multiple, incommensurable and conflicting goals? Goal programming provides an answer. In goal programming, instead of trying to maximise or minimise the objective criterion directly, as in linear programming, the deviations between the goals and what can be achieved within the given set of constraints are minimised.

In goal programming, the objective function contains primarily the deviational variables that represent each goal or subgoal. The deviational variable is represented in two dimensions in the objective function, a positive and a negative deviation from each subgoal and/or constraint. Then, the objective function becomes the minimisation of these deviations, based on the relative importance or priority assigned to them.

The solution of linear programming is limited by quantification. Unless management can accurately quantify the relationships of the variables in cardinal numbers, the solution is only as good as its inputs. The distinguishing characteristic of goal programming is that it allows for an ordinal solution. Stated differently, management may be unable to specify the cost or utility of a goal or a subgoal, but often upper or lower limits can be stated for each subgoal. The manager is usually called upon to specify the priority of the desired attainment of each goal or subgoal and to rank them in ordinal sequence for decision analysis. The true value of goal programming is, therefore, the solution of problems involving multiple, conflicting goals according to the manager's priority structure.

3.6.3 Formulation of Goal Programming Problem

It is important to consider the formulation of the model before getting into the details of the goal programming solution. In fact, the most difficult problem in the application of management science to decision analysis is the formulation of a model for the practical problem in question. Model formulation is the process of transforming a real-world decision problem into a management science model. While the advent of micro-computers with powerful software has made the solution an easy task, problem formulation requires a great deal of effort in terms of conceptualisation. One key to successful application of goal programming is the ability to recognise when a problem can be solved by goal programming and to formulate the corresponding model.

Steps of goal programming formulation

Step 1: Define variables and constants

Define the choice variables and the right hand side constants. The right hand side constants may be either available resources or specified goal levels. A careful analysis of the problem is required in order to identify all the relevant variables that have some effect on the set of goals stated by the decision-maker.

Step 2: Formulation of constraints

Through an analysis of the relationships among the choice variables and their relationships to the goals, a set of constraints should be formulated. A constraint may be either a system constraint representing a relationship among the variables, or a goal constraint that defines the relationship between the choice variables and the goals. It should be remembered that if there is no deviational variable to minimise in order to achieve a certain goal, a new constraint must be created. Also, if further refinement of goals and priorities is required, it may be facilitated by decomposing certain deviational variables.

Step 3: Developing of objective function

Through an analysis of the decision-maker's goal structure, the objective function must be developed. First, the pre-emptive priority factors should be assigned to the deviational variables that are relevant to goal attainment. Second, if necessary, differential weights must be assigned to deviational variables at the same priority level. It is imperative that goals at the same priority level be commensurable.

Example 1

The manufacturing plant of an electronics firm produces two types of television sets, colour and black and white. According to past experience, production of either a colour or a black and white set requires an average of one hour in the plant. The plant has a normal production capacity of 40 hours a week. The marketing department reports that, because of limited sales opportunities, the maximum numbers of colour and black and white sets that can be sold in the week are 24 and 30, respectively. The gross margin from the sale of a colour set is 80, whereas it is 40 from a black and white set.

The chairman of the company has set the following goals as arranged in the order of their importance to the organisation:

  1. First, he wants to avoid any underutilisation of normal production capacity (no layoffs of production workers).
  2. Second, he wants to sell as many television sets as possible. Since the gross margin from the sale of a colour television set is twice the amount from a black and white set, he has twice as much desire to achieve sales of colour television sets as of black and white sets.
  3. Third, the chairman wants to minimise the overtime operation of the plant as much as possible.

Formulate this as a goal programming problem.

Solution: First, we note that the gross margin shows that the colour set priority is two times that of the black and white set. Let x1 denote the number of colour television sets and x2 denote the black and white sets for production. Then,

  1. The production capacity of both types of sets is given by x1 + x2 + d1− − d1+ = 40, where d1− and d1+ are the deviational variables representing underutilisation of normal operation and overtime operation of the plant, respectively.
  2. The sales capacities for the two types of sets are

     x1 + d2− − d2+ = 24
     x2 + d3− − d3+ = 30

     where d2− and d3− are the deviational variables representing underachievement of the sales goals, and d2+ and d3+ stand for the deviational variables representing overachievement of the sales goals.

  3. If p1, p2 and p3 designate the priorities of the goals, the complete formulation of the goal programming problem (see the computational sketch below) is as follows:

     Minimise z = p1d1− + 2p2d2− + p2d3− + p3d1+

     subject to the constraints

     x1 + x2 + d1− − d1+ = 40
     x1 + d2− − d2+ = 24
     x2 + d3− − d3+ = 30
     x1, x2 and all deviational variables ≥ 0.
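The following Python sketch solves this pre-emptive formulation sequentially with SciPy's linprog: each priority level is minimised in turn, and its attainment is locked in as an equality before the next level is solved. The variable order and constraints are those reconstructed above.

from scipy.optimize import linprog

# Variable order: x = [x1, x2, d1m, d1p, d2m, d2p, d3m, d3p]
# (m = negative deviation, p = positive deviation)
A_eq = [[1, 1, 1, -1, 0, 0, 0, 0],   # x1 + x2 + d1m - d1p = 40  (capacity)
        [1, 0, 0, 0, 1, -1, 0, 0],   # x1 + d2m - d2p = 24       (colour sales)
        [0, 1, 0, 0, 0, 0, 1, -1]]   # x2 + d3m - d3p = 30       (b/w sales)
b_eq = [40, 24, 30]
bounds = [(0, None)] * 8

priorities = [[0, 0, 1, 0, 0, 0, 0, 0],   # P1: minimise d1m
              [0, 0, 0, 0, 2, 0, 1, 0],   # P2: minimise 2*d2m + d3m
              [0, 0, 0, 1, 0, 0, 0, 0]]   # P3: minimise d1p

extra_A, extra_b = [], []
for c in priorities:
    res = linprog(c, A_eq=A_eq + extra_A, b_eq=b_eq + extra_b,
                  bounds=bounds, method="highs")
    extra_A.append(c)                 # lock in this priority's attainment
    extra_b.append(res.fun)
print(res.x.round(4))   # -> x1 = 24, x2 = 30, d1p = 14, all other deviations 0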

Example 2

A textile company produces two types of materials, A and B. Material A is produced according to direct orders from furniture manufacturers. Material B is distributed to retail stores. The average rates of production of materials A and B are identical, at 1000 metres/hour. By running two shifts the operational capacity of the plant is 80 hours per week.

The marketing department reports that the maximum estimated sales for the following week is 70,000 metres of material A and 45,000 metres of material B. According to the accounting department the profit from a metre of material A is 2.50 and from a metre of material B is 1.50.

The management of the company decides that a stable employment level is a primary goal for the firm. Therefore, whenever there is a demand exceeding normal production capacity, the management simply expands production capacity by providing overtime. However, management feels that overtime operation of the plant of more than 10 hours per week should be avoided because of the accelerating costs. The management has the following goals in the order of importance:

  1. The first goal is to avoid any underutilisation of production capacity (i.e. to maintain stable employment at normal capacity).
  2. The second goal is to limit the overtime operation of the plant to 10 hours.
  3. The third goal is to achieve the sales goals of 70,000 metres of material A and 45,000 metres of material B.
  4. The last goal is to minimise the overtime operation of the plant as much as possible.

Formulate this is a goal programming problem to help the management for the best solution.

Solution:

Constraint for production capacity:

The production capacity is limited to 80 hours at present by running two shifts. However, since overtime operation of the plant is allowed to a certain extent, the constraint may be written as

x1 + x2 + d1− − d1+ = 80

where,

x1 — number of hours used for producing material A.
x2 — number of hours used for producing material B.
d1− — underutilisation of the production capacity set at 80 hours of operation.
d1+ — overutilisation of the normal production capacity beyond 80 hours.

Constraints for Sales:

The maximum sales for materials A and B are 70,000 and 45,000 metres, respectively. Hence, it is assumed that overachievement of sales beyond these maximum limits is impossible. Then, the sales constraints will be (as before, x1 and x2 are expressed in hours, i.e. thousands of metres)

x1 + d2− = 70
x2 + d3− = 45

where,

d2− = underachievement of the sales goal of material A.
d3− = underachievement of the sales goal of material B.

Constraint for overtime operation:

From the case itself, only the production and sales constraints can be formulated. However, the analysis of goals indicates that the overtime operation of the plant is to be limited to 10 hours or less. To solve the problem by goal programming, we need a deviational variable that represents the overtime operation of the plant beyond 10 hours. By minimising this deviational variable to zero we can achieve the goal. Since there is no such deviational variable in the three constraints presented above, we must create a new constraint.

The overtime operation of the plant should be limited to 10 hours or less. However, it may not be possible to limit the overtime operation to 10 hours in order to meet the higher order goals. Therefore, d1+ can be smaller than, equal to, or greater than 10, and the constraint regarding overtime can be expressed as

d1+ + d4− − d4+ = 10

where,

 

d4− — negative deviation of the overtime operation from 10 hours.
d4+ — overtime operation beyond 10 hours.

One metre of material A gives a profit of 2.50 and one metre of B gives a profit of 1.50. Since the production rate is the same for both A and B, namely 1000 metres per hour, the hourly profits of A and B are in the ratio 2.50 : 1.50, or 5 : 3. Hence, it is appropriate to assign these as differential weights in the third goal. The differential weights imply that management is relatively more concerned with the achievement of the sales goal for material A than with that for material B.

Hence, the problem is

Minimise z = p1d1− + p2d4+ + p3(5d2− + 3d3−) + p4d1+

subject to

x1 + x2 + d1− − d1+ = 80
x1 + d2− = 70
x2 + d3− = 45
d1+ + d4− − d4+ = 10
x1, x2 and all deviational variables ≥ 0

The simplex method of goal programming The simplex method can be used to solve a goal programming problem.

Example 1

A company manufactures two products, radios and transistors, which must be processed through the assembly and finishing departments. The assembly department has 90 hours available; the finishing department can handle up to 72 hours of work. Manufacturing one radio requires 6 hours in assembly and 3 hours in finishing. Each transistor requires 3 hours in assembly and 6 hours in finishing. If the profit is 120 per radio and 90 per transistor, determine the best combination of radios and transistors to realise a profit goal of 2,100.

Solution: This is a single goal (profit) problem. Let

d− = amount by which the profit goal is underachieved.

d+ = amount by which the profit goal is overachieved.

The given problem can be expressed as the following goal problem:

 

Minimise z = d− (underachievement of the profit target)

subject to

120x1 + 90x2 + d− − d+ = 2100
(∵profit earned + underachievement − overachievement = target)

Adding slack variables, the assembly and finishing constraints can be expressed as

 

6x1 + 3x2 + s1 = 90
3x1 + 6x2 + s2 = 72,
x1, x2, s1, s2 ≥ 0

If we convert the problem to the maximisation type, the objective function becomes maximise z* = −d−.

Solution by simplex method:

Initial Table:

 

TABLE 3.35

First iteration: Introduce x1 and drop s1.

 

TABLE 3.36

Second iteration: Introduce x2 and drop s2.

 

TABLE 3.37

Since all zjcj ≥ 0, the current solution is optimum and the optimal solution is x1 = 12, x2 = 6 and minimum of z = 120.

Minimum of z = 120 means that the target profit of 2,100 is underachieved by 120; that is, the profit actually earned (by manufacturing 12 radios and 6 transistors) is 2,100 − 120 = 1,980.
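This result is easy to check numerically. A minimal sketch using SciPy's linprog (which minimises directly, so no sign change is needed):

from scipy.optimize import linprog

# Variables: [x1, x2, dm, dp]
res = linprog(c=[0, 0, 1, 0],                       # minimise d-
              A_eq=[[120, 90, 1, -1]], b_eq=[2100], # profit + d- - d+ = 2100
              A_ub=[[6, 3, 0, 0], [3, 6, 0, 0]],    # assembly, finishing hours
              b_ub=[90, 72],
              bounds=[(0, None)] * 4, method="highs")
print(res.x.round(2), res.fun)   # -> x1 = 12, x2 = 6, d- = 120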

Example 2

Suppose that, in Example 1, the company sets two equally ranked goals: one to reach a profit goal of 1,500 and the other to meet a radio goal of 10 sets. Find the optimal solution.

Solution: Since the two goals are equally ranked, a deviation of 1 from the profit target is as important (in the goal programming model) as a deviation of 1 radio set. Let

dp− = amount by which the profit goal is underachieved.

dp+ = amount by which the profit goal is overachieved.

dr− = amount by which the radio goal is underachieved.

dr+ = amount by which the radio goal is overachieved.

Then, the goal programming model becomes

Minimise z = dp− + dr−

subject to

120x1 + 90x2 + dp− − dp+ = 1500
x1 + dr− − dr+ = 10
x1, x2, dp−, dp+, dr−, dr+ ≥ 0

Converting the problem to the maximisation type, the objective function becomes maximise z* = −dp− − dr−.

Adding slack variables, the assembly and finishing constraints can be written as

 

6x1 + 3x2 + s1 = 90
3x1 + 6x2 + s2 = 72

Note that dp− and dr− are the basic variables corresponding to the first and second constraints, respectively.

Initial table:

 

TABLE 3.38

First iteration: Introduce x1 and drop dr− (the minimum ratio, 10, occurs in the radio goal row).

 

TABLE 3.39

Second iteration: Introduce dr+ and drop dp−.

 

TABLE 3.40

It may be noted from the above Table that the goal of 10 radios is achieved and in fact improved upon (25/2 − 10 = 5/2); 5/2 appears in this Table as dr+ (overachievement of radios). The profit goal of 1500 is also reached, since both dp− and dp+ are zero (because they are not in the final solution); therefore, the profit is exactly 1500.
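The two-goal model can be checked the same way. A sketch, assuming the constraints reconstructed above; since more than one production plan attains both goals, the solver may return a different (x1, x2) than the Tables, but the objective value must be zero:

from scipy.optimize import linprog

# Variables: [x1, x2, dpm, dpp, drm, drp] (p = profit goal, r = radio goal)
res = linprog(c=[0, 0, 1, 0, 1, 0],                     # minimise dp- + dr-
              A_eq=[[120, 90, 1, -1, 0, 0],             # profit goal of 1500
                    [1, 0, 0, 0, 1, -1]],               # radio goal of 10
              b_eq=[1500, 10],
              A_ub=[[6, 3, 0, 0, 0, 0], [3, 6, 0, 0, 0, 0]],
              b_ub=[90, 72],
              bounds=[(0, None)] * 6, method="highs")
print(res.x.round(2), res.fun)   # objective 0: both goals fully met (dp- = dr- = 0)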

3.6.4 Graphical Method of Goal Programming

The graphical method of solving goal programming problem is quite similar to the graphical method of linear programming. In linear programming, the method is used to maximise an objective function with one goal, whereas in goal programming, it is used to minimise the total deviation from a set of multiple goals. The deviation from the goal with the highest priority is minimised to the fullest extent possible before deviation from the next goal is minimised.

Example 1

A manufacturing firm produces two types of products, A and B. According to past experience, production of either product A or product B requires an average of one hour in the plant. The plant has a normal production capacity of 400 hours a month. The marketing department of the firm reports that, because of the limited market, the maximum numbers of product A and product B that can be sold in a month are 240 and 300, respectively. The net profits from the sale of product A and product B are 800 and 400, respectively. The manager of the firm has set the following goals, arranged in the order of importance.

p1: He wants to avoid any underutilisation of normal production capacity.

p2: He wants to sell the maximum possible units of products A and B. Since the net profit from the sale of product A is twice that from product B, the manager has twice as much desire to achieve sales of product A as of product B.

p3: He wants to minimise the overtime operation of the plants as much as possible.

Solution: Formulation:

Let x1 and x2 be the numbers of units of product A and product B to be produced, respectively. Since overtime operation is allowed, the plant capacity constraint can be expressed as

x1 + x2 + d1− − d1+ = 400

where d1− is the underutilisation of the production capacity and d1+ is the overtime operation of the normal production capacity. Since the sales goals given are the maximum possible sales volumes, positive deviations will not appear in the sales constraints. The sales constraints can be expressed as:

x1 + d2− = 240
x2 + d3− = 300

where,

d2− = underachievement of the sales goal for product A
d3− = underachievement of the sales goal for product B

Hence, the complete model is

Minimise z = p1d1− + 2p2d2− + p2d3− + p3d1+

subject to

x1 + x2 + d1− − d1+ = 400
x1 + d2− = 240
x2 + d3− = 300
x1, x2 and all deviational variables ≥ 0

First plot all the goal constraints on a graph, as shown in Fig. 3.1. Since underutilisation and overutilisation of the plant capacity are permissible, the deviational variables d1− and d1+ are indicated by arrows in Fig. 3.1. Similarly, d2− and d3− are also denoted by arrows in the diagram. The feasible region is shown in Fig. 3.1.

FIGURE 3.1

The first goal is completely achieved at the line EF in the reduced feasible region EBF. Now, the manager would like to move to priority level 2, where the differential weight for product A is two times that of B (assigned in the ratio of the profits). The sales goal of product A is completely achieved at line EB in the feasible region EBF. So, we see that the first two goals are completely achieved at point B in the feasible region EBF. The third goal on overtime operation cannot be achieved without dispensing with the first two goals in the priority order. The optimal solution to the problem is

 

x1 = 240, x2 = 300, d1− = 0, d2− = 0, d3− = 0, d1+ = 140

Example 2

A small furnishing company manufactures tables and chairs. Each chair requires 4 man-hours of labour while each table requires 5 man-hours of labour. Only 40 man-hours are available each week, and the owner of the company will neither hire additional labour nor utilise overtime. Each table and each chair fetches a profit of 100, and the owner has set a profit goal of 2,000 per week. Also, he would like to supply 10 chairs, if possible, per week to a sister concern. Formulate the linear goal programming problem and solve it.

Solution: The goals of the company and their assigned priorities are

Priority Goal
p1 To avoid hiring extra labour or utilising overtime.
p2 To reach a profit goal of 2000 a week.
p3 To supply 10 chairs a week to the sister concern.

Let x1 and x2 denote the numbers of chairs and tables to be produced per week. If di− and di+ represent the amounts by which the ith goal is underachieved and overachieved, respectively, then the goals are

g1: 4x1 + 5x2 + d1− − d1+ = 40 (man-hour goal)

g2: 100x1 + 100x2 + d2− − d2+ = 2,000 (profit goal)

g3: x2 + d3− − d3+ = 10 (supply of chairs goal)

 

To achieve the first goal completely, the overtime d1+ must be minimised. Similarly, the second goal will be fulfilled when the underachievement in profit, d2−, is minimised, and the supply of chairs goal will be attained when the underachievement d3− is minimised. Therefore, the objective function of the problem may be written as

 

Minimise z = {d1+, d2−, d3−}

subject to the goal constraints g1, g2, g3 and x1, x2, di−, di+ ≥ 0.

The three goal constraints are plotted as straight lines by assuming each di− = di+ = 0. Arrows (← or →) are then associated with each line to represent underachievement or overachievement of the goal.

FIGURE 3.2

The man-hour goal is the most important and is the first to be considered. Since this goal is achieved when d1+ is minimised, we may set d1+ = 0 as its minimum value. The region satisfying x1, x2 ≥ 0 and d1+ = 0 is shown shaded in Fig. 3.2.

Any point within this region satisfies the man-hour goal and minimises d1+ at priority level one.

The next important goal is the profit goal (p2), and to attain it we must minimise d2−. Clearly, d2− cannot be driven down to zero, since this would degrade our previous solution, a higher priority goal. The minimum value of d2− without affecting the previously attained higher goal occurs at the point (x1 = 10, x2 = 0), marked in Fig. 3.2.

The last goal (in the order of priority) is g3. To satisfy it we must minimise d3−. However, any reduction of d3− would require a movement away from the solution previously determined and would degrade at least one higher priority goal already attained.

Therefore, the optimal solution to the problem is x1 = 10, x2 = 0 and minimum of z = (0, 1000, 10)

The optimum values of z simply indicate that goal g1 is completely attained, while g2 and g3 are only partially attained.

FIGURE 3.3

3.7 INTEGER PROGRAMMING

3.7.1 Introduction

So far, we have considered decision variables that can take any non-negative real values. In many practical problems, however, the decision variables make sense only if they have integer values. For example, it is often necessary to assign men, machines and vehicles to activities in integer quantities. This restriction is a difficult one to handle mathematically. However, some progress has been made in developing solution procedures for linear programming problems subject to the additional restriction that the decision variables must have integer values.

A linear programming problem in which all or some of the decision variables are restricted to assume non-negative integer values is called an integer programming problem [or IPP or integer linear programming].

Importance of Integer Programming The integer programming problem is of particular importance in business and industry where, quite often, fractional solutions are unrealistic because the units are not divisible. For example, it is quite possible to use 2.34 kg of raw material or 4.56 machine hours, and so on. But it is meaningless to produce 5.67 chairs or 7.89 tables, or to open 4.23 shops. Hence, new procedures have been developed for the case of an LPP subject to the additional restriction that the decision variables must have integer values.

Applications of Integer Programming Integer programming problems occur quite frequently in business and industry. Transportation and assignment problems are integer programming problems; in the assignment problem, for instance, the decision variables are either zero or one.

Many other decision problems can necessitate integer programming models. One category of such problems deals with sequencing, scheduling and routing decisions. An example is the travelling salesman problem, which aims at finding a least-distance route for a salesman who must visit each of n cities, starting and ending his journey at the home city. Large expenditures of capital and resources are involved in capital budgeting decisions. This is the main reason why integer programming is so important for such managerial decisions.

An optimal solution to a capital budgeting problem may yield considerably more profit to a firm than an approximate or guessed solution. For example, a fertilizer manufacturing firm with 15 plants may be able to increase profits substantially by cutting down to 10 plants or fewer, provided this reduction is planned optimally.

Rounding-off Technique and its Drawbacks An IPP can be solved by the usual simplex method by ignoring the integer restriction and then rounding off the non-integer values to integers in the optimal solution obtained by the simplex method. But there is no guarantee that the integer-valued solution thus obtained will satisfy all the constraints, and as such the new solution may not be feasible.

Definition of Integer Programming Integer programming problem (IPP): The linear programming problem, maximise z = CX, subject to AX ≤ b, X ≥ 0 and some xj ∈ X integers, where A is an m × n real matrix, is called an integer programming problem.

All integer programming problem (All I.P.P. or pure I.P.P.) An integer programming problem is said to be an all integer programming problem if all xjX are integers.

Mixed integer programming problem (Mixed I.P.P.) An integer programming problem is said to be mixed integer programming problem if not all xjX are integers.

Zero-one programming problem or standard discrete problem: If all the variables in the optimum solution are allowed to take values either 0 or 1 as in ‘do’ or ‘not to do’ type decisions, then the problem is called the zero-one programming problem.

3.7.2 Formulation of Integer Programming Problem

Integer programming formulations do not pose problems when the variables are inherently discrete in nature. Difficulties arise in situations where direct LP or IP formulations are not possible. First, we look at a direct IP formulation of a problem situation.

Example 1

Suppose we have landed on a treasure island full of three types of valuable stones: emerald (E), ruby (R) and topaz (T). Each piece of E, R and T weighs 3, 2 and 2 kg, and is known to have a value of 4, 3 and 1 crore, respectively. We have a bag that can carry a maximum of 11 kg. The stones cannot be broken into smaller pieces. Formulate this situation as an IPP so as to maximise the total value carried.

Solution:

Decision variables We have to decide how many pieces of each type of stone are to be carried so as to be within the capacity of the bag.

Let x1 denote the number of emeralds, x2 the number of rubies and x3 the number of topazes to be carried.

Objective function

The total value carried is 4x1 + 3x2 + x3 and so the objective function is maximise z = 4x1 + 3x2 + x3.

Constraints: The total weight to be carried is 3x1 + 2x2 + 2x3. Since the capacity of the bag is limited to 11 kg we have

 

3x1 + 2x2 + 2x3 ≤ 11

Also, x1, x2, x3 denote numbers of stones, which cannot be negative and must be whole; so x1, x2, x3 ≥ 0 and are integers.

Hence, the complete IPP is

 

Maximise z = 4x1 + 3x2 + x3

subject to

3x1 + 2x2 + 2x3 ≤ 11
x1, x2, x3 ≥ 0 and are integers.
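At this size the IPP can be verified by brute-force enumeration, as sketched below; larger instances require the cutting plane or branch and bound methods described later in this section.

# Enumerate every integer point within the bag's 11 kg capacity.
best = (-1, None)
for x1 in range(0, 11 // 3 + 1):            # emeralds: 3 kg each
    for x2 in range(0, 11 // 2 + 1):        # rubies: 2 kg each
        for x3 in range(0, 11 // 2 + 1):    # topaz: 2 kg each
            if 3 * x1 + 2 * x2 + 2 * x3 <= 11:
                z = 4 * x1 + 3 * x2 + x3
                best = max(best, (z, (x1, x2, x3)))
print(best)   # -> (16, (1, 4, 0)): one emerald and four rubies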

Example 2 Capital Budgeting Problem

Consider a company which would like to take up three projects. However, because of budget limitations not all the projects can be selected. It is known that project j has a present value of cj and would require an investment of ajt in period t. The capital available in period t is bt. The problem is to choose the projects so as to maximise the present value without violating the budget constraints. Formulate the problem as an IP.

Solution: For choice situations of the “yes-no”, “go-no-go” type, where the objective is to determine whether or not a particular activity is to be undertaken, binary integer variables that take a value of 0 or 1 can be used to represent the decisions. Here, for each project we want to find out whether it should be taken up or not. Define three decision variables xj (j = 1, 2, 3), one for each project, with

 

xj = 1, if project j is selected
= 0, otherwise

Then, the objective function and the constraints can be expressed in terms of the decision variables, to give the required formulation.

 

Maximise z = Σj cj xj

subject to

Σj ajt xj ≤ bt, for every period t
xj = 1 or 0, for all projects.
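With illustrative (hypothetical) data for cj, ajt and bt, the zero-one model can be solved by enumerating all eight yes-no choices:

from itertools import product

c = [10, 6, 4]                      # present values (assumed data)
a = [[5, 1], [4, 2], [3, 2]]        # a[j][t]: investment of project j in period t
b = [9, 4]                          # capital available in each period

best = (-1, None)
for x in product([0, 1], repeat=3):                     # every yes/no combination
    if all(sum(a[j][t] * x[j] for j in range(3)) <= b[t] for t in range(2)):
        best = max(best, (sum(c[j] * x[j] for j in range(3)), x))
print(best)   # -> (16, (1, 1, 0)) for the assumed data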

Solution Methodologies of Integer Programming

Integer programming methods can be categorized as

  1. Cutting plane methods
  2. Search methods

Cutting plane methods: A systematic procedure for solving pure integer programming problems was first developed by R.E. Gomory in 1958. Later on, he extended the procedure to solve mixed IPPs. The method, named the cutting plane algorithm, consists in first solving the IPP as an ordinary LPP by ignoring the integrality restriction, and then introducing additional constraints, one after the other, to cut off (eliminate) parts of the solution space until an integral solution is obtained.

Search methods: These are enumeration methods in which feasible integer points are enumerated. The most widely used search method is the branch and bound technique. It also starts with the continuous optimum, but systematically partitions the solution space into subproblems, eliminating parts that contain no feasible integer solution. It was originally developed by A.H. Land and A.G. Doig.

3.7.3 Gomory’s all IPP Method (or) Cutting Plane Method or Gomory’s Fractional Cut Method

An optimal solution to the IPP is first obtained by the simplex method, ignoring the restriction of integral values. If, in the optimum solution, all the variables have integer values, the current solution is the desired optimum integer solution. Otherwise, we add a new constraint to the problem such that the new set of feasible solutions includes all the original feasible integer solutions but does not include the optimum non-integer solution just found. We then solve the revised problem using the simplex method and see if we can get an integer solution. If not, we add another fractional cut and repeat the process until an integer solution is found. Since we never eliminate any feasible integer solution from consideration when we add fractional cuts, the integer solution ultimately found must be optimum.

Algorithm:

Step 1: Convert the problem into the maximisation form if it is in the minimisation form. All the coefficients and constants should be integers.

Step 2: Ignore the integrality condition and find the optimum solution of the resulting LPP by the simplex method.

Step 3: Test the integrality of the optimum solution:

  1. If all XBi ≥ 0 and are integers, an optimum integer solution is obtained.
  2. If all XBi ≥ 0 and at least one XBi is not an integer, then go to the next step.

Step 4: Rewrite each XBi as XBi = [XBi] + fi, where [XBi] is the integral part of XBi and fi is the positive fractional part of XBi, 0 ≤ fi < 1. Choose the largest fraction fi, that is, choose max {fi}. In case of a tie, select arbitrarily. Let max {fi} = fk, corresponding to XBk (the kth row is then called the source row).

Step 5: Express each of the negative fractions if any, in the kth row (source row) of the optimum simplex Table as the sum of a negative integer and a non-negative fraction.

Step 6: Find the fractional cut constraint (Gomorian constraint or secondary constraint).

From the source row

XBk = Σj akj xj, with each akj = [akj] + fkj, 0 ≤ fkj < 1,

that is,

Σj fkj xj ≥ fk

in the form

−Σj fkj xj ≤ −fk

or,

−Σj fkj xj + s1 = −fk

where s1 is the Gomorian slack.

Step 7: Add the fractional cut constraint obtained in step 6 at the bottom of the optimum simplex table obtained in step 2. Find the new feasible optimum solution using dual simplex method.

Step 8: Go to step 3 and repeat the procedure until an optimum integer solution is obtained.

Remark: In the cutting plane method, the fractional cut constraints cut off the useless area of the feasible region in the graphical solution of the problem, that is, the area which contains no integer-valued feasible solution. Thus, these Gomorian constraints eliminate all the non-integral solutions without losing any integer-valued solution.
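Steps 4 to 6 can be expressed compactly in code. A minimal sketch using exact fractions (the source-row data shown are those of Example 1 below):

from fractions import Fraction as F

def gomory_cut(rhs, row):
    """rhs: value of the basic variable in the source row; row: coefficients
    of the non-basic variables in that row. Returns (cut_coeffs, cut_rhs)
    for  -sum(f_j x_j) + s = -f  with a new Gomorian slack s."""
    frac = lambda v: v - (v.numerator // v.denominator)   # positive fractional part
    f = frac(rhs)
    return [-frac(aj) for aj in row], -f

# Source row of Example 1 below:  x1 + (2/3) x2 = 5/3
print(gomory_cut(F(5, 3), [F(1), F(2, 3)]))   # cut: -(2/3) x2 + s = -2/3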

Example 1

Solve the following problem

 

Maximise z = 6x1 + 2x2

subject to

3x1 + 2x2 = 5
x1, x2 ≥ 0 and are integers.

Solution: (By Gomory’s cutting plane method)

The given LPP can be written as

 

Maximise z = 6x1 + 2x2

subject to

 

x1 + 2/3 x2 = 5/3
x1, x2 ≥ 0 and are integers.

By taking x1 as the basic variable the initial basic feasible solution is x1 = 5/3.

Initial table:

TABLE 3.41

Since all zj − cj ≥ 0, the current solution is optimum. The optimum solution is

 

x1 = 5/3, x2 = 0 and maximum of z = 10.

But the solution obtained is not an integer solution.

Construction of Gomory’s Constraint:

Now, the constraint equation (the x1-equation) can be written as 5/3 = 1 + 2/3 = (1 + 0)x1 + (0 + 2/3)x2 and so the Gomory's constraint is

 

0x1 + 2/3 x2 ≥ 2/3, that is,
−2/3 x2 ≤ −2/3

or

 

−2/3 x2 + s1 = −2/3 where s1 is a Gomory’s slack variable.

We obtain s1 = −2/3. Since the solution is infeasible, apply the dual simplex method. Then, the next Table is

 

TABLE 3.42

Drop s1 from the basis and introduce x2. Then, the next iteration is

 

TABLE 3.43

Since all XBi ≥ 0 and all zjcj ≥ 0, the current solution is optimal. The solution is also an integer solution.

The optimum integer solution is

 

x1 = 1, x2 = 1 with maximum of z = 8

Example 2

Solve the integer programming problem:

 

Maximise z = 7x1 + 9x2

subject to

−x1 + 3x2 ≤ 6
7x1 + x2 ≤ 35
x1, x2 ≥ 0 and are integers.

Solution: Ignore the integer restriction and solve the problem by regular simplex method. Then, the optimal simplex table is

TABLE 3.44

Now

 

7/2 = 3 + 1/2(f1 = 1/2)
9/2 = 4 + 1/2(f2 = 1/2)

and maximum of {1/2, 1/2} = 1/2. Hence, arbitrarily choose the first row (the x2-row) for constructing the Gomory's constraint.

Now, the x2-equation is

 

3 + 1/2 = (0 + 0)x1 + (1 + 0)x2 + (0 + 7/22)s1 + (0 + 1/22)s2.

Hence, the Gomory’s constraint is

 

7/22 s1 + 1/22 s2 ≥ 1/2

or

−7/22 s1 − 1/22 s2 ≤ −1/2

that is,

−7/22 s1 − 1/22 s2 + s3 = −1/2

where s3 is the Gomory's slack variable. Hence, s3 = −1/2. As the solution is infeasible, apply the dual simplex method after including the Gomory's constraint equation in the above Table.

 

TABLE 3.45

Since maximum {(28/11)/(−7/22), (15/11)/(−1/22)} = max {−8, −30} = −8 corresponds to s1, introduce s1 and drop s3.

TABLE 3.46

Since all XBi ≥ 0 and all zjcj ≥ 0 the current solution is optimum and the optimum non-integer solution is

 

x1 = 32/7, x2 = 3 with maximum of z = 59.

Since

 

32/7 = 4 + 4/7

and,

 

11/7 = 1 + 4/7

and maximum {4/7, 4/7} = 4/7.

Arbitrarily choose the x1-equation for constructing the next Gomory's constraint.

 

4 + 4/7 = (1 + 0)x1 + (0 + 0)x2 + (0 + 0)s1 + (0 + 1/7)s2 + (0 + 6/7)s3

Hence, the Gomory's constraint is

1/7 s2 + 6/7 s3 ≥ 4/7

i.e.

−1/7 s2 − 6/7 s3 ≤ −4/7

i.e.

−1/7 s2 − 6/7 s3 + s4 = −4/7     where s4 is the Gomorian slack variable. The next iteration is

 

TABLE 3.47

Since maximum {1/(−1/7), 8/(−6/7)} = max {−7, −28/3} = −7 corresponds to s2, drop s4 and introduce s2.