
# 7 Constraint Optimization in the GEATbx

Constraint optimization is a broad and sometimes complex topic. However, in real-world optimization a few constraints are often present. In this chapter the handling of such constraints using the GEATbx is described.

Bounds on the variables are a simple method of constraint handling. Strictly speaking, these constraints have always been enforced by the GEATbx (from its first release many years ago). Section 7.1 describes the definition and application of this method.

A different class of constraints are functional constraints. The GEATbx supports these constraints by using additional objective values coupled with specific goal values. A description of this method, including many pointers to examples, is given in Section 7.2.

## 7.1 Constraining the variables

Constraints on the variables of a problem are always enforced. When you define/describe your problem in your objective function, you must provide a vector of the upper and lower bounds of the variables (called VLUB internally; GEATbx option: System.ObjFunVarBounds).

This vector of upper and lower bounds contains the boundaries for each variable. The variables are changed only inside these boundaries during the optimization.

For nearly all problems I recommend defining these boundaries inside the objective function. All example functions use this method. Inside the demo functions (or inside the main function of the GEATbx) these boundaries are then taken automatically from the objective function (using the utility function geaobjpara).

Please take your time to define appropriate values for the boundaries of the variables. If you are looking for good values in the range [0.1, 0.2], defining boundaries in the range [0, 1000] would produce a much more difficult optimization problem. An appropriate definition of the variable boundaries is one of the most important prerequisites for the successful solution of an optimization problem.
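As an illustration, such a boundary definition might look like the following sketch. The two-row layout (lower bounds in the first row, upper bounds in the second, one column per variable) follows the convention of the GEATbx example functions; check the example functions of your toolbox version for the exact format expected:

```matlab
% Hypothetical boundary definition for a problem with 3 variables,
% all expected to lie in the range [0.1, 0.2]:
%   row 1: lower bounds, row 2: upper bounds (one column per variable)
VLUB = [0.1 0.1 0.1;   % lower bounds
        0.2 0.2 0.2];  % upper bounds
```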

If you do not know meaningful boundaries for the variables of your problem, you may start with values wide enough to include all possible regions. A better way is to gain a deeper understanding of the problem to solve. Ways to achieve this are described in 'How to Approach new Optimization Problems', Chapter 9.

## 7.2 Functional constraints

Functional constraints come in many different forms. Here (and in the GEATbx) I concentrate on the following variants:

• inequality constraint(s): x2 + 9x1 >= 6, -x2 + 9x1 >= 1; example in mobjdebconstr.
• equality constraint(s): -2x1^4 + 2 - x2 = 0; example in mobjsoland.

to be extended.

The method used is nearly identical to COMOGA, published in [SRB1995].

### 7.2.1 Functional constraints using additional objectives and goals

Constraints which are not satisfied are set to the distance from the boundary (in the simple case). Thus, you get an additional objective value reflecting the violation of the constraint. The corresponding goal for this objective is set to zero. In this way the multi-objective optimization gets a hint to search for solutions with a smaller objective, thus minimizing the corresponding objective value (and thus the violation of the constraint).

As soon as a constraint is satisfied, the corresponding objective is set to zero. In this way, these objectives no longer influence the multi-objective optimization: the goal of zero is reached and does not change as long as the constraint is satisfied. (Setting the objective to zero is very important. Otherwise the optimization would still be influenced and the results would be considerably different. It took me some time to see and later understand this aspect.)

### 7.2.2 Implementation of functional constraints (larger than, >=)

This whole mechanism can be implemented in a few lines of Matlab code (using for loops and if statements) or with one of those powerful Matlab one-liners (which are not always obvious at first glance, but compute quickly). Here is a description of the one-liner method.
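For comparison, the loop-based variant mentioned above could be sketched as follows (an illustrative sketch only; the variable names GAll, BAll, and ObjAdd anticipate those used in the one-liner description):

```matlab
% Loop-based sketch of the constraint-to-objective mapping.
% GAll: Nind x Ncon matrix of constraint values (one row per individual)
% BAll: Nind x Ncon matrix of constraint boundaries
ObjAdd = zeros(size(GAll));
for ind = 1:size(GAll, 1)
   for con = 1:size(GAll, 2)
      if GAll(ind, con) >= BAll(ind, con)
         ObjAdd(ind, con) = 0;    % constraint satisfied
      else
         % violated: distance from the boundary, sign flipped
         % because the GEATbx minimizes all objectives
         ObjAdd(ind, con) = -(GAll(ind, con) - BAll(ind, con));
      end
   end
end
```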

Let's implement the constraints of Deb's constrained function inside the objective function (mobjdebconstr): x2 + 9x1 >= 6, -x2 + 9x1 >= 1.

Define the constraint boundaries:

`FunConstraints = [6, 1];`

Calculate the constraints:

```
G1 =  x2 + 9*x1;
G2 = -x2 + 9*x1;
GAll = [G1 G2];
```

Create a matrix of constraint boundary values to check for constraint violation:

`BAll = repmat(FunConstraints, [Nind, 1]);`

Set all constraint objectives which are satisfied to zero, and all others (violated constraints) to the distance from the boundary. As these constraints require values larger than the boundary (>= 6, >= 1) while our objectives are minimized, we multiply by -1:

`ObjAdd = ((GAll-BAll) >= 0).*0 + ((GAll-BAll) < 0).*(GAll-BAll).*-1;`
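Since the first term of this one-liner is always zero, an equivalent and somewhat shorter formulation is possible (this is just an equivalent rewrite for illustration, not taken from the toolbox sources):

```matlab
% Equivalent, shorter formulation: the satisfied part is zero anyway,
% so only the (sign-flipped) violation distance BAll - GAll remains.
ObjAdd = max(BAll - GAll, 0);
```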

The first part sets all satisfied constraints to zero (the constraint values G1 or G2 are larger than the defined boundaries BAll). The second part selects all violated constraints and sets them to the difference between constraint value and defined boundary. Additionally, the resulting value is multiplied by -1 (the constraints in this example must be larger than the boundary, but the GEATbx minimizes all the time).

That's it. Now run your multi-objective optimization and you get results where the constraints are satisfied (larger than or equal to the defined boundaries) and the (standard) objectives are minimized.
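Putting the pieces together, the inside of such an objective function might look roughly like the following sketch (hedged: the actual mobjdebconstr shipped with the GEATbx may differ in structure and in how the constraint objectives and goal values are returned; the standard objectives shown are those commonly given for Deb's constrained test function):

```matlab
% Illustrative sketch of an objective function with two constraint
% objectives; names and structure are not the toolbox original.
% Chrom: Nind x 2 matrix of individuals, one row per individual
x1 = Chrom(:, 1);
x2 = Chrom(:, 2);

% standard (minimized) objectives of Deb's constrained function
Obj1 = x1;
Obj2 = (1 + x2) ./ x1;

% constraint objectives: zero when satisfied, violation distance otherwise
FunConstraints = [6, 1];
GAll = [x2 + 9*x1, -x2 + 9*x1];
BAll = repmat(FunConstraints, [size(Chrom, 1), 1]);
ObjAdd = ((GAll-BAll) >= 0).*0 + ((GAll-BAll) < 0).*(GAll-BAll).*-1;

% return standard objectives plus constraint objectives;
% the goals for the constraint columns are set to zero
ObjV = [Obj1, Obj2, ObjAdd];
```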

### 7.2.3 Implementation of functional constraints (smaller than, <=)

Let's implement the constraints of Belegundu's constrained function inside the objective function (mobjbelegundu): -x1 + x2 - 1 <= 0, x1 + x2 - 7 <= 0.

Define the constraint boundaries:

`FunConstraints = [0, 0];`

Calculate the constraints:

```
G1 = -x1 + x2 - 1;
G2 =  x1 + x2 - 7;
GAll = [G1 G2];
```

Create a matrix of constraint boundary values to check for constraint violation:

`BAll = repmat(FunConstraints, [Nind, 1]);`

Set all constraint objectives which are satisfied to zero, and all others (violated constraints) to the distance from the boundary. As these constraints require values smaller than the boundary (<= 0, <= 0) and our objectives are minimized too, we do not need any further adjustment:

`ObjAdd = ((GAll-BAll) <= 0) .* 0 + ((GAll-BAll) > 0) .* (GAll-BAll);`
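As in the previous section, the first term of this one-liner is always zero, so an equivalent shorter form exists (again just an equivalent rewrite, not the toolbox original); note that here no sign flip is needed:

```matlab
% Equivalent formulation for the <= case: keep only the violation
% distance GAll - BAll, which is positive exactly when violated.
ObjAdd = max(GAll - BAll, 0);
```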

### 7.2.4 Implementation of functional constraints (equal to, ==)

Let's implement the equality constraint of Soland's function inside the objective function (mobjsoland): 0 = -2x1^4 + 2 - x2.

Calculate the constraint:

`G1 = abs(-2.*x1.^4 + 2 - x2);`

A true equality is nearly impossible to reach (except in very special cases). Thus, we nearly never find a satisfied (equality) constraint. There are two ways to solve this problem. First, we do not use a boundary of zero; instead we use a value very near zero (for instance 0.005 in this example) and continue as before. Alternatively, we set the corresponding goal value to a small value (for instance 0.001) instead of zero. In real-world applications each of these methods works reasonably well.

Define the constraint boundary:

`FunConstraints = [0.005];`

Set the constraint objective to zero when it is (reasonably) satisfied, and to the distance from the boundary otherwise:

`ObjAdd = (G1 <= FunConstraints) .* 0 + (G1 > FunConstraints) .* G1;`

In the end the equality constraint is transformed into an inequality constraint (and handled according to the previous examples).
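Summarized, the equality constraint handling of this section fits in three lines (a sketch using the illustrative tolerance from above; the term setting satisfied constraints to zero can be dropped, as it is zero anyway):

```matlab
% Equality constraint 0 = -2*x1^4 + 2 - x2, relaxed to |g(x)| <= tolerance
G1 = abs(-2.*x1.^4 + 2 - x2);          % distance from exact equality
FunConstraints = 0.005;                % small tolerance instead of zero
ObjAdd = (G1 > FunConstraints) .* G1;  % zero when within tolerance
```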

I am looking for a method to encapsulate this whole mechanism in a separate function. At the moment I am not sure about the best approach. Thus, use the described method. Later there might be a fully encapsulated function offering more clarity and comfort.


This document is part of version 3.8 of the GEATbx: Genetic and Evolutionary Algorithm Toolbox for use with Matlab - www.geatbx.com.
The Genetic and Evolutionary Algorithm Toolbox is not public domain.