Note

SimpleGADriver is based on a simple genetic algorithm implementation sourced from the lecture notes for the 2009 offering of AAE550, taught by Prof. William A. Crossley at Purdue University.

This genetic algorithm optimizer supports integer and continuous variables. It uses a binary encoding scheme to encode any continuous variables into a user-definable number of bits. The number of bits you choose should equal the base-2 logarithm of the number of discrete values you want between the min and max value. A higher value gives more resolution for that variable, but it also increases the number of generations (and hence total evaluations) required to find the minimum. If you do not specify a bits value for a variable, the variable is assumed to be an integer and is encoded as such. Note that if the range between the upper and lower bounds is not a power of two, the variable is discretized beyond the upper bound, and any points the GA generates that exceed the declared upper bound are discarded before evaluation.
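As a quick illustration of the bit-count rule above, the number of bits needed for a desired resolution can be computed directly (a small illustrative sketch, not part of the OpenMDAO API):

```python
import math

def bits_for_resolution(lower, upper, step):
    """Return the number of bits so that a binary encoding of
    [lower, upper] has at least one point per `step` interval."""
    n_points = int((upper - lower) / step) + 1
    return math.ceil(math.log2(n_points))

# Encoding a variable on [0, 15] with roughly 0.06 resolution needs 8 bits.
print(bits_for_resolution(0.0, 15.0, 0.06))  # 8
```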

The SimpleGADriver supports both constrained and unconstrained optimization.

The examples below show a mixed-integer problem to illustrate usage of this driver with both integer and continuous design variables.

import openmdao.api as om
from openmdao.test_suite.components.branin import Branin

prob = om.Problem()
model = prob.model

model.add_subsystem('comp', Branin(),
                    promotes_inputs=[('x0', 'xI'), ('x1', 'xC')])

model.add_design_var('xI', lower=-5.0, upper=10.0)
model.add_design_var('xC', lower=0.0, upper=15.0)
model.add_objective('comp.f')

prob.driver = om.SimpleGADriver()

prob.driver.options['bits'] = {'xC': 8}

prob.setup()

prob.set_val('xC', 7.5)
prob.set_val('xI', 0.0)

prob.run_driver()

False


| Option | Default | Acceptable Values | Acceptable Types | Description |
| ------ | ------- | ----------------- | ---------------- | ----------- |
| Pc | 0.1 | N/A | N/A | Crossover rate. |
| Pm | 0.01 | N/A | N/A | Mutation rate. |
| bits | {} | N/A | ['dict'] | Number of bits of resolution. Default is an empty dict, where every unspecified variable is assumed to be integer, and the number of bits is calculated automatically. If you have a continuous var, you should set a bits value as a key in this dictionary. |
| compute_pareto | False | N/A | ['bool'] | When True, compute a set of non-dominated points based on all given objectives and update it each generation. The multi-objective weight and exponents are ignored because the algorithm uses all objective values instead of a composite. |
| cross_bits | False | [True, False] | ['bool'] | If True, crossover swaps single bits instead of the default k-point crossover. |
| debug_print | [] | N/A | ['list'] | List of what type of Driver variables to print at each iteration. Valid items in list are 'desvars', 'ln_cons', 'nl_cons', 'objs', 'totals'. |
| elitism | True | [True, False] | ['bool'] | If True, replace worst performing point with best from previous generation each iteration. |
| gray | False | [True, False] | ['bool'] | If True, use Gray code for binary encoding. Gray coding makes the binary representation of adjacent integers differ by one bit. |
| max_gen | 100 | N/A | N/A | Number of generations before termination. |
| multi_obj_exponent | 1.0 | N/A | N/A | Multi-objective weighting exponent. |
| multi_obj_weights | {} | N/A | ['dict'] | Weights of objectives for multi-objective optimization. Weights are specified as a dictionary with the absolute names of the objectives. The same weights for all objectives are assumed, if not given. |
| penalty_exponent | 1.0 | N/A | N/A | Penalty function exponent. |
| penalty_parameter | 10.0 | N/A | N/A | Penalty function parameter. |
| pop_size | 0 | N/A | N/A | Number of points in the GA. Set to 0 and it will be computed as four times the number of bits. |
| procs_per_model | 1 | N/A | N/A | Number of processors to give each model under MPI. |
| run_parallel | False | [True, False] | ['bool'] | Set to True to execute the points in a generation in parallel. |
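To illustrate what the "gray" option provides, the standard binary-to-Gray conversion can be sketched in a couple of lines (an illustrative snippet, not OpenMDAO code):

```python
def to_gray(n):
    """Convert a non-negative integer to its Gray-code equivalent."""
    return n ^ (n >> 1)

# Adjacent integers differ by exactly one bit in Gray code:
for i in range(4):
    print(f"{i}: {to_gray(i):03b}")
# 0: 000, 1: 001, 2: 011, 3: 010
```

This single-bit property can help the GA, because a small change in the encoded bit string then corresponds to a small change in the decoded value.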

The call signature for the SimpleGADriver constructor is:

SimpleGADriver.__init__(**kwargs)

Initialize the driver.

You can change the number of generations to run the genetic algorithm by setting the “max_gen” option.

import openmdao.api as om
from openmdao.test_suite.components.branin import Branin

prob = om.Problem()
model = prob.model

model.add_subsystem('comp', Branin(),
                    promotes_inputs=[('x0', 'xI'), ('x1', 'xC')])

model.add_design_var('xI', lower=-5.0, upper=10.0)
model.add_design_var('xC', lower=0.0, upper=15.0)
model.add_objective('comp.f')

prob.driver = om.SimpleGADriver()

prob.driver.options['bits'] = {'xC': 8}
prob.driver.options['max_gen'] = 5

prob.setup()

prob.set_val('xC', 7.5)
prob.set_val('xI', 0.0)

prob.run_driver()

False


You can change the population size by setting the “pop_size” option. The default value for pop_size is 0, which means that the driver automatically computes a population size that is 4 times the total number of bits for all variables encoded.
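The default sizing is easy to reproduce by hand. For instance, with an 8-bit continuous variable and an integer variable whose bounds span 16 values (so 4 bits, an assumption for illustration), the automatic population size would be:

```python
# Default pop_size: 4x the total number of encoded bits across all variables.
bits = {'xC': 8, 'xI': 4}   # xI's 4 bits assume an integer range of 16 values
pop_size = 4 * sum(bits.values())
print(pop_size)  # 48
```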

import openmdao.api as om
from openmdao.test_suite.components.branin import Branin

prob = om.Problem()
model = prob.model

model.add_subsystem('comp', Branin(),
                    promotes_inputs=[('x0', 'xI'), ('x1', 'xC')])

model.add_design_var('xI', lower=-5.0, upper=10.0)
model.add_design_var('xC', lower=0.0, upper=15.0)
model.add_objective('comp.f')

prob.driver = om.SimpleGADriver()

prob.driver.options['bits'] = {'xC': 8}
prob.driver.options['pop_size'] = 10

prob.setup()

prob.set_val('xC', 7.5)
prob.set_val('xI', 0.0)

prob.run_driver()

False


If you have more than one objective, you can use the SimpleGADriver to compute a set of non-dominated candidate optima by setting the "compute_pareto" option to True. In this case, the final state of the model will be just one of the Pareto-optimal designs. The full set can be accessed via the 'desvar_nd' and 'obj_nd' attributes on the driver for the design variable values and their corresponding objectives.

import openmdao.api as om

class Box(om.ExplicitComponent):

    def setup(self):
        self.add_input('length', val=1.)
        self.add_input('width', val=1.)
        self.add_input('height', val=1.)

        self.add_output('front_area', val=1.0)
        self.add_output('top_area', val=1.0)
        self.add_output('area', val=1.0)
        self.add_output('volume', val=1.)

    def compute(self, inputs, outputs):
        length = inputs['length']
        width = inputs['width']
        height = inputs['height']

        outputs['top_area'] = length * width
        outputs['front_area'] = length * height
        outputs['area'] = 2*length*height + 2*length*width + 2*height*width
        outputs['volume'] = length*height*width

prob = om.Problem()
prob.model.add_subsystem('box', Box(), promotes=['*'])

prob.model.add_design_var('length', lower=0.1, upper=2.)
prob.model.add_design_var('width', lower=0.1, upper=2.)
prob.model.add_design_var('height', lower=0.1, upper=2.)

# maximize the two areas subject to a volume constraint
prob.model.add_objective('front_area', scaler=-1)
prob.model.add_objective('top_area', scaler=-1)
prob.model.add_constraint('volume', upper=1.)

# setup the optimization
prob.driver = om.SimpleGADriver()
prob.driver.options['max_gen'] = 20
prob.driver.options['bits'] = {'length': 8, 'width': 8, 'height': 8}
prob.driver.options['penalty_parameter'] = 10.
prob.driver.options['compute_pareto'] = True

prob.setup()

prob.set_val('length', 1.5)
prob.set_val('width', 1.5)
prob.set_val('height', 1.5)

prob.run_driver()

desvar_nd = prob.driver.desvar_nd
nd_obj = prob.driver.obj_nd

print(desvar_nd)

[[1.83607843 0.54705882 0.95686275]
[1.50823529 0.28627451 1.95529412]
[1.45607843 1.90313725 0.34588235]
[1.76156863 0.54705882 1.01647059]
[1.85098039 1.03137255 0.49490196]
[1.87333333 0.57686275 0.90470588]
[1.38156863 1.87333333 0.38313725]
[1.68705882 0.39803922 1.47098039]
[1.86588235 0.50980392 1.01647059]
[2.         0.42784314 1.13568627]
[1.99254902 0.69607843 0.70352941]
[1.74666667 0.39803922 1.40392157]
[1.99254902 0.30117647 1.38901961]
[1.97764706 0.20431373 1.95529412]
[1.99254902 0.57686275 0.84509804]
[1.9254902  0.30117647 1.50823529]
[1.75411765 0.57686275 0.97176471]
[1.94039216 0.92705882 0.5545098 ]
[1.74666667 0.81529412 0.70352941]
[1.75411765 0.42039216 1.35176471]]

sorted_obj = nd_obj[nd_obj[:, 0].argsort()]

print(sorted_obj)

[[-3.86688166 -0.40406044]
[-2.9490436  -0.43176932]
[-2.90409227 -0.57991234]
[-2.76768966 -0.60010888]
[-2.48163045 -0.67151557]
[-2.45218301 -0.69524183]
[-2.37115433 -0.7374173 ]
[-2.27137255 -0.85568627]
[-1.89661453 -0.95123414]
[-1.7905827  -0.96368166]
[-1.75687505 -1.00444291]
[-1.70458962 -1.01188512]
[-1.69481569 -1.08065621]
[-1.68389927 -1.1494273 ]
[-1.40181684 -1.3869704 ]
[-1.21024148 -1.40545716]
[-1.07596647 -1.79885767]
[-0.91605383 -1.90905037]
[-0.52933041 -2.58813856]
[-0.50363183 -2.77111711]]


## Constrained Optimization

The SimpleGADriver supports both constrained and unconstrained optimization. If you have constraints, the constraints are added to the objective after they have been weighted using a user-tunable penalty multiplier and exponent.

All constraints are converted to the form of $$g(x)_i \leq 0$$ for inequality constraints and $$h(x)_i = 0$$ for equality constraints. The constraint vector for inequality constraints is the following:

$$g = [g_1, g_2 \dots g_N], g_i \in R^{N_{g_i}}$$ $$h = [h_1, h_2 \dots h_N], h_i \in R^{N_{h_i}}$$

The total number of constraints:

$$N_g = \sum_{i=1}^N N_{g_i}, N_h = \sum_{i=1}^N N_{h_i}$$

The fitness function is constructed with the penalty parameter $$p$$ and the exponent $$\kappa$$:

$$\Phi(x) = f(x) + p \cdot \sum_{k=1}^{N_g}(\delta_k \cdot g_k^{\kappa}) + p \cdot \sum_{k=1}^{N_h}|h_k|^{\kappa}$$

where $$\delta_k = 0$$ if $$g_k$$ is satisfied, and $$\delta_k = 1$$ otherwise.
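As a concrete illustration, the penalized fitness above can be sketched in a few lines (function names here are illustrative, not part of the driver's API):

```python
import numpy as np

def penalized_fitness(f, g, h, p=10.0, kappa=1.0):
    """Exterior penalty: f(x) plus weighted constraint violations.

    f : objective value
    g : inequality constraint values, feasible when g_k <= 0
    h : equality constraint values, feasible when h_k == 0
    """
    g = np.asarray(g, dtype=float)
    h = np.asarray(h, dtype=float)
    # delta_k = 0 when g_k <= 0, so only positive (violated) terms contribute
    ineq_penalty = np.sum(np.maximum(g, 0.0) ** kappa)
    eq_penalty = np.sum(np.abs(h) ** kappa)
    return f + p * ineq_penalty + p * eq_penalty

# Feasible point: no penalty is added.
print(penalized_fitness(2.0, g=[-1.0], h=[0.0]))   # 2.0
# A violation of g = 0.5 with p = 10 and kappa = 1 adds 5.0.
print(penalized_fitness(2.0, g=[0.5], h=[0.0]))    # 7.0
```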

The following example shows how to set the penalty parameter $$p$$ and the exponent $$\kappa$$:

import openmdao.api as om

class Cylinder(om.ExplicitComponent):
    """Main class"""

    def setup(self):
        self.add_input('radius', val=1.0)
        self.add_input('height', val=1.0)

        self.add_output('Area', val=1.0)
        self.add_output('Volume', val=1.0)

    def compute(self, inputs, outputs):
        radius = inputs['radius']
        height = inputs['height']

        area = height * radius * 2 * 3.14 + 3.14 * radius ** 2 * 2
        volume = 3.14 * radius ** 2 * height
        outputs['Area'] = area
        outputs['Volume'] = volume

prob = om.Problem()
prob.model.add_subsystem('cylinder', Cylinder(), promotes=['*'])

prob.model.add_design_var('radius', lower=0.5, upper=5.)
prob.model.add_design_var('height', lower=0.5, upper=5.)
prob.model.add_objective('Area')
prob.model.add_constraint('Volume', lower=10.)

# setup the optimization
prob.driver = om.SimpleGADriver()
prob.driver.options['penalty_parameter'] = 3.
prob.driver.options['penalty_exponent'] = 1.
prob.driver.options['max_gen'] = 50
prob.driver.options['bits'] = {'radius': 8, 'height': 8}

prob.setup()

prob.set_val('radius', 2.)
prob.set_val('height', 3.)

prob.run_driver()

# These go to 0.5 for unconstrained problem. With constraint and penalty, they
# will be above 1.0 (actual values will vary.)
print(prob.get_val('radius'))
print(prob.get_val('height'))

[1.25882353]
[2.01764706]


## Running a GA in Parallel

If you have a model that doesn’t contain any distributed components or parallel groups, then the model evaluations for a new generation can be performed in parallel by turning on the “run_parallel” option:

%%px

import openmdao.api as om
from openmdao.test_suite.components.branin import Branin

prob = om.Problem()
model = prob.model

model.add_subsystem('comp', Branin(),
                    promotes_inputs=[('x0', 'xI'), ('x1', 'xC')])

model.add_design_var('xI', lower=-5.0, upper=10.0)
model.add_design_var('xC', lower=0.0, upper=15.0)
model.add_objective('comp.f')

prob.driver = om.SimpleGADriver()

prob.driver.options['bits'] = {'xC': 8}
prob.driver.options['max_gen'] = 10
prob.driver.options['run_parallel'] = True

prob.setup()

prob.set_val('xC', 7.5)
prob.set_val('xI', 0.0)

prob.run_driver()

# Optimal solution
print(prob.get_val('comp.f'))

[stdout:0] [1.25172426]
[stdout:1] [1.25172426]
[stdout:2] [1.25172426]
[stdout:3] [1.25172426]

%%px

print(prob.get_val('xI'))

[stdout:0] [9.]
[stdout:1] [9.]
[stdout:2] [9.]
[stdout:3] [9.]

%%px

print(prob.get_val('xC'))

[stdout:0] [2.11764706]
[stdout:1] [2.11764706]
[stdout:2] [2.11764706]
[stdout:3] [2.11764706]


## Running a GA on a Parallel Model in Parallel

If you have a model that does contain distributed components or parallel groups, you can also use SimpleGADriver to optimize it. If you have enough processors, you can also simultaneously evaluate multiple points in your population by turning on the “run_parallel” option and setting the “procs_per_model” to the number of processors that your model requires. Take care that you submit your parallel run with enough processors such that the number of processors the model requires divides evenly into it, as in this example, where the model requires 2 and we give it 4.
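The divisibility requirement can be expressed as a quick sanity check (an illustrative helper, not part of the driver):

```python
def models_per_generation(total_procs, procs_per_model):
    """How many model instances can run concurrently under MPI.

    procs_per_model must divide the total MPI process count evenly.
    """
    if total_procs % procs_per_model != 0:
        raise ValueError("total processor count must be divisible by procs_per_model")
    return total_procs // procs_per_model

# 4 total processes with 2 processors per model -> 2 concurrent evaluations.
print(models_per_generation(4, 2))  # 2
```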

%%px

import openmdao.api as om
from openmdao.test_suite.components.branin import Branin

prob = om.Problem()
model = prob.model

par = model.add_subsystem('par', om.ParallelGroup(),
                          promotes_inputs=['*'])

par.add_subsystem('comp1', Branin(),
                  promotes_inputs=[('x0', 'xI'), ('x1', 'xC')])
par.add_subsystem('comp2', Branin(),
                  promotes_inputs=[('x0', 'xI'), ('x1', 'xC')])

model.add_subsystem('comp', om.ExecComp('f = f1 + f2'))
model.connect('par.comp1.f', 'comp.f1')
model.connect('par.comp2.f', 'comp.f2')

model.add_design_var('xI', lower=-5.0, upper=10.0)
model.add_design_var('xC', lower=0.0, upper=15.0)
model.add_objective('comp.f')

prob.driver = om.SimpleGADriver()
prob.driver.options['bits'] = {'xC': 8}
prob.driver.options['max_gen'] = 10
prob.driver.options['pop_size'] = 25
prob.driver.options['run_parallel'] = True
prob.driver.options['procs_per_model'] = 2

# fix the random seed so the run is reproducible
prob.driver._randomstate = 1

prob.setup()

prob.set_val('xC', 7.5)
prob.set_val('xI', 0.0)

prob.run_driver()

# Optimal solution
print(prob.get_val('comp.f'))

[stdout:0] [0.98799098]
[stdout:1] [0.98799098]
[stdout:2] [0.98799098]
[stdout:3] [0.98799098]

%%px
print(prob.get_val('xI'))

[stdout:0] [-3.]
[stdout:1] [-3.]
[stdout:2] [-3.]
[stdout:3] [-3.]

%%px
print(prob.get_val('xC'))

[stdout:0] [11.94117647]
[stdout:1] [11.94117647]
[stdout:2] [11.94117647]
[stdout:3] [11.94117647]