Note

SimpleGADriver is based on a simple genetic algorithm implementation sourced from the lecture notes for the class 2009 AAE550 taught by Prof. William A. Crossley at Purdue University.

This genetic algorithm optimizer supports integer and continuous variables. It uses a binary encoding scheme to encode any continuous variables into a user-definable number of bits. The number of bits you choose should be the base-2 logarithm of the number of discrete values you want between the min and max value. A higher value means more accuracy for this variable, but it also increases the number of generations (and hence total evaluations) required to find the minimum. If you do not specify a bits value for a variable, then the variable is assumed to be integer and encoded as such. Note that if the range between the upper and lower bounds is not a power of two, then the variable is discretized beyond the upper bound, but any points the GA generates that exceed the declared upper bound are discarded before evaluation.
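As a rough guide, the bit count for a continuous variable can be chosen from the number of discrete levels you want between the bounds. The helper below is illustrative only (it is not part of OpenMDAO):

```python
import math

def bits_for_levels(n_levels):
    # ceil(log2(n)) bits are enough to distinguish n discrete values
    return math.ceil(math.log2(n_levels))

print(bits_for_levels(256))   # 8 bits give exactly 256 levels
print(bits_for_levels(1000))  # 10 bits (1024 levels) cover 1000 requested levels
```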

The SimpleGADriver supports both constrained and unconstrained optimization.

| Option | Default | Acceptable Values | Acceptable Types | Description |
| --- | --- | --- | --- | --- |
| Pc | 0.1 | N/A | N/A | Crossover rate. |
| Pm | 0.01 | N/A | N/A | Mutation rate. |
| bits | {} | N/A | ['dict'] | Number of bits of resolution. Default is an empty dict, where every unspecified variable is assumed to be integer, and the number of bits is calculated automatically. If you have a continuous variable, you should set a bits value as a key in this dictionary. |
| compute_pareto | False | N/A | ['bool'] | When True, compute a set of non-dominated points based on all given objectives and update it each generation. The multi-objective weights and exponents are ignored because the algorithm uses all objective values instead of a composite. |
| cross_bits | False | [True, False] | ['bool'] | If True, crossover swaps single bits instead of the default k-point crossover. |
| debug_print | [] | ['desvars', 'nl_cons', 'ln_cons', 'objs', 'totals'] | ['list'] | List of what type of Driver variables to print at each iteration. |
| elitism | True | [True, False] | ['bool'] | If True, replace the worst performing point with the best from the previous generation each iteration. |
| gray | False | [True, False] | ['bool'] | If True, use Gray code for binary encoding. Gray coding makes the binary representation of adjacent integers differ by one bit. |
| invalid_desvar_behavior | warn | ['warn', 'raise', 'ignore'] | N/A | Behavior of driver if the initial value of a design variable exceeds its bounds. The default value may be set using the OPENMDAO_INVALID_DESVAR_BEHAVIOR environment variable to one of the valid options. |
| max_gen | 100 | N/A | N/A | Number of generations before termination. |
| multi_obj_exponent | 1.0 | N/A | N/A | Multi-objective weighting exponent. |
| multi_obj_weights | {} | N/A | ['dict'] | Weights of objectives for multi-objective optimization. Weights are specified as a dictionary with the absolute names of the objectives. If not given, the same weight is assumed for all objectives. |
| penalty_exponent | 1.0 | N/A | N/A | Penalty function exponent. |
| penalty_parameter | 10.0 | N/A | N/A | Penalty function parameter. |
| pop_size | 0 | N/A | N/A | Number of points in the GA. Set to 0 to compute it automatically as four times the total number of bits. |
| procs_per_model | 1 | N/A | N/A | Number of processors to give each model under MPI. |
| run_parallel | False | [True, False] | ['bool'] | Set to True to execute the points in a generation in parallel. |

The SimpleGADriver constructor takes any of the options above as keyword arguments; for example, `om.SimpleGADriver(max_gen=50)` initializes the driver with a 50-generation limit.

The examples below show a mixed-integer problem to illustrate usage of this driver with both integer and continuous design variables. The driver's iter_count attribute reflects the number of times the model is evaluated over the course of the run.

```python
import openmdao.api as om
from openmdao.test_suite.components.branin import Branin

prob = om.Problem()
model = prob.model

model.add_subsystem('comp', Branin(),
                    promotes_inputs=[('x0', 'xI'), ('x1', 'xC')])

model.add_design_var('xI', lower=-5.0, upper=10.0)
model.add_design_var('xC', lower=0.0, upper=15.0)
model.add_objective('comp.f')

prob.driver = om.SimpleGADriver()
prob.driver.options['bits'] = {'xC': 8}

prob.setup()

prob.set_val('xC', 7.5)
prob.set_val('xI', 0.0)

prob.run_driver()
print(prob.driver.iter_count)
```

4849


You can change the number of generations to run the genetic algorithm by setting the “max_gen” option.

```python
import openmdao.api as om
from openmdao.test_suite.components.branin import Branin

prob = om.Problem()
model = prob.model

model.add_subsystem('comp', Branin(),
                    promotes_inputs=[('x0', 'xI'), ('x1', 'xC')])

model.add_design_var('xI', lower=-5.0, upper=10.0)
model.add_design_var('xC', lower=0.0, upper=15.0)
model.add_objective('comp.f')

prob.driver = om.SimpleGADriver()
prob.driver.options['bits'] = {'xC': 8}
prob.driver.options['max_gen'] = 5

prob.setup()

prob.set_val('xC', 7.5)
prob.set_val('xI', 0.0)

prob.run_driver()
print(prob.driver.iter_count)
```

289


You can change the population size by setting the “pop_size” option. The default value for pop_size is 0, which means that the driver automatically computes a population size that is 4 times the total number of bits for all variables encoded.

```python
import openmdao.api as om
from openmdao.test_suite.components.branin import Branin

prob = om.Problem()
model = prob.model

model.add_subsystem('comp', Branin(),
                    promotes_inputs=[('x0', 'xI'), ('x1', 'xC')])

model.add_design_var('xI', lower=-5.0, upper=10.0)
model.add_design_var('xC', lower=0.0, upper=15.0)
model.add_objective('comp.f')

prob.driver = om.SimpleGADriver()
prob.driver.options['bits'] = {'xC': 8}
prob.driver.options['pop_size'] = 10

prob.setup()

prob.set_val('xC', 7.5)
prob.set_val('xI', 0.0)

prob.run_driver()
print(prob.driver.iter_count)
```

1011
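The iteration counts printed in these examples follow a simple pattern: each generation evaluates pop_size points (with the initial population counting as a generation), plus one final execution of the best point. This relationship is inferred from the printed counts above, not from a documented formula, and it assumes the automatic population size works out to 48 for this problem (e.g. 4 bits for the integer variable plus the 8 bits for xC, times four):

```python
def expected_iter_count(pop_size, max_gen):
    # pop_size evaluations per generation, the initial population counting
    # as a generation, plus one final execution of the best point
    return pop_size * (max_gen + 1) + 1

print(expected_iter_count(48, 100))  # 4849: default max_gen, automatic pop_size
print(expected_iter_count(48, 5))    # 289: max_gen set to 5
print(expected_iter_count(10, 100))  # 1011: pop_size set to 10
```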


If you have more than one objective, you can use the SimpleGADriver to compute a set of non-dominated candidate optima by setting the "compute_pareto" option to True. In this case, the final state of the model will only be one of the Pareto-optimal designs. The full set can be accessed via the `desvar_nd` and `obj_nd` attributes on the driver for the design variable values and their corresponding objectives.

```python
import openmdao.api as om

class Box(om.ExplicitComponent):

    def setup(self):
        self.add_input('length', val=1.)
        self.add_input('width', val=1.)
        self.add_input('height', val=1.)

        self.add_output('front_area', val=1.0)
        self.add_output('top_area', val=1.0)
        self.add_output('area', val=1.0)
        self.add_output('volume', val=1.)

    def compute(self, inputs, outputs):
        length = inputs['length']
        width = inputs['width']
        height = inputs['height']

        outputs['top_area'] = length * width
        outputs['front_area'] = length * height
        outputs['area'] = 2*length*height + 2*length*width + 2*height*width
        outputs['volume'] = length*height*width

prob = om.Problem()
prob.model.add_subsystem('box', Box(), promotes=['*'])

# setup the optimization
prob.driver = om.SimpleGADriver()
prob.driver.options['max_gen'] = 20
prob.driver.options['bits'] = {'length': 8, 'width': 8, 'height': 8}
prob.driver.options['penalty_parameter'] = 10.
prob.driver.options['compute_pareto'] = True

prob.model.add_design_var('length', lower=0.1, upper=2.)
prob.model.add_design_var('width', lower=0.1, upper=2.)
prob.model.add_design_var('height', lower=0.1, upper=2.)

prob.model.add_objective('front_area', scaler=-1)  # maximize
prob.model.add_objective('top_area', scaler=-1)  # maximize

prob.model.add_constraint('volume', upper=1.)

prob.setup()

prob.set_val('length', 1.5)
prob.set_val('width', 1.5)
prob.set_val('height', 1.5)

prob.run_driver()

desvar_nd = prob.driver.desvar_nd
nd_obj = prob.driver.obj_nd

print(desvar_nd)
```

/usr/share/miniconda/envs/test/lib/python3.11/site-packages/openmdao/core/group.py:1098: DerivativesWarning:Constraints or objectives [box.front_area, box.top_area, box.volume] cannot be impacted by the design variables of the problem because no partials were defined for them in their parent component(s).

[[1.83607843 0.54705882 0.95686275]
[1.50823529 0.28627451 1.95529412]
[1.45607843 1.90313725 0.34588235]
[1.76156863 0.54705882 1.01647059]
[1.85098039 1.03137255 0.49490196]
[1.87333333 0.57686275 0.90470588]
[1.38156863 1.87333333 0.38313725]
[1.68705882 0.39803922 1.47098039]
[1.86588235 0.50980392 1.01647059]
[2.         0.42784314 1.13568627]
[1.99254902 0.69607843 0.70352941]
[1.74666667 0.39803922 1.40392157]
[1.99254902 0.30117647 1.38901961]
[1.97764706 0.20431373 1.95529412]
[1.99254902 0.57686275 0.84509804]
[1.9254902  0.30117647 1.50823529]
[1.75411765 0.57686275 0.97176471]
[1.94039216 0.92705882 0.5545098 ]
[1.74666667 0.81529412 0.70352941]
[1.75411765 0.42039216 1.35176471]]

```python
sorted_obj = nd_obj[nd_obj[:, 0].argsort()]

print(sorted_obj)
```

[[-3.86688166 -0.40406044]
[-2.9490436  -0.43176932]
[-2.90409227 -0.57991234]
[-2.76768966 -0.60010888]
[-2.48163045 -0.67151557]
[-2.45218301 -0.69524183]
[-2.37115433 -0.7374173 ]
[-2.27137255 -0.85568627]
[-1.89661453 -0.95123414]
[-1.7905827  -0.96368166]
[-1.75687505 -1.00444291]
[-1.70458962 -1.01188512]
[-1.69481569 -1.08065621]
[-1.68389927 -1.1494273 ]
[-1.40181684 -1.3869704 ]
[-1.21024148 -1.40545716]
[-1.07596647 -1.79885767]
[-0.91605383 -1.90905037]
[-0.52933041 -2.58813856]
[-0.50363183 -2.77111711]]


## Constrained Optimization

The SimpleGADriver supports both constrained and unconstrained optimization. If you have constraints, they are added to the objective as a penalty term, weighted by a user-tunable penalty multiplier and exponent.

All constraints are converted to the form $$g_i(x) \leq 0$$ for inequality constraints and $$h_i(x) = 0$$ for equality constraints. The constraint vectors are:

$$g = [g_1, g_2, \dots, g_N], \quad g_i \in \mathbb{R}^{N_{g_i}}$$ $$h = [h_1, h_2, \dots, h_N], \quad h_i \in \mathbb{R}^{N_{h_i}}$$

The total number of constraints of each type:

$$N_g = \sum_{i=1}^N N_{g_i}, N_h = \sum_{i=1}^N N_{h_i}$$

The fitness function is constructed with the penalty parameter $$p$$ and the exponent $$\kappa$$:

$$\Phi(x) = f(x) + p \cdot \sum_{k=1}^{N_g}(\delta_k \cdot g_k^{\kappa}) + p \cdot \sum_{k=1}^{N_h}|h_k|^{\kappa}$$

where $$\delta_k = 0$$ if $$g_k$$ is satisfied, and $$\delta_k = 1$$ otherwise.
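The penalty formula above can be sketched as a standalone Python function (this is an illustrative sketch, not OpenMDAO's internal implementation):

```python
def penalized_fitness(f, g, h, p=10.0, kappa=1.0):
    # Only violated inequality constraints (g_k > 0) contribute, which is
    # equivalent to the delta_k factor; equality constraints always contribute.
    ineq = sum(max(g_k, 0.0) ** kappa for g_k in g)
    eq = sum(abs(h_k) ** kappa for h_k in h)
    return f + p * ineq + p * eq

# satisfied constraints leave the objective untouched
print(penalized_fitness(1.0, g=[-0.5], h=[0.0]))        # 1.0
# a violated inequality adds p * g^kappa
print(penalized_fitness(1.0, g=[0.5], h=[0.0], p=3.0))  # 2.5
```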

The following example shows how to set the penalty parameter $$p$$ and the exponent $$\kappa$$:

```python
import openmdao.api as om

class Cylinder(om.ExplicitComponent):
    """Main class"""

    def setup(self):
        self.add_input('radius', val=2.)
        self.add_input('height', val=3.)

        self.add_output('Area', val=1.)
        self.add_output('Volume', val=1.)

    def compute(self, inputs, outputs):
        radius = inputs['radius']
        height = inputs['height']

        area = height * radius * 2 * 3.14 + 3.14 * radius ** 2 * 2
        volume = 3.14 * radius ** 2 * height
        outputs['Area'] = area
        outputs['Volume'] = volume

prob = om.Problem()
prob.model.add_subsystem('cylinder', Cylinder(), promotes=['*'])

# setup the optimization
prob.driver = om.SimpleGADriver()
prob.driver.options['penalty_parameter'] = 3.
prob.driver.options['penalty_exponent'] = 1.
prob.driver.options['max_gen'] = 50
prob.driver.options['bits'] = {'radius': 8, 'height': 8}

prob.model.add_design_var('radius', lower=0.5, upper=5.)
prob.model.add_design_var('height', lower=0.5, upper=5.)
prob.model.add_objective('Area')
prob.model.add_constraint('Volume', lower=10.)

prob.setup()

prob.set_val('radius', 2.)
prob.set_val('height', 3.)

prob.run_driver()

# These go to 0.5 for the unconstrained problem. With the constraint and
# penalty, they will be above 1.0 (actual values will vary).
print(prob.get_val('radius'))
print(prob.get_val('height'))
```

/usr/share/miniconda/envs/test/lib/python3.11/site-packages/openmdao/core/group.py:1098: DerivativesWarning:Constraints or objectives [cylinder.Area, cylinder.Volume] cannot be impacted by the design variables of the problem because no partials were defined for them in their parent component(s).

[1.25882353]
[2.01764706]


## Running a GA in Parallel

If you have a model that doesn’t contain any distributed components or parallel groups, then the model evaluations for a new generation can be performed in parallel by turning on the “run_parallel” option:

```python
%%px

import openmdao.api as om
from openmdao.test_suite.components.branin import Branin

prob = om.Problem()
model = prob.model

model.add_subsystem('comp', Branin(),
                    promotes_inputs=[('x0', 'xI'), ('x1', 'xC')])

model.add_design_var('xI', lower=-5.0, upper=10.0)
model.add_design_var('xC', lower=0.0, upper=15.0)
model.add_objective('comp.f')

prob.driver = om.SimpleGADriver()
prob.driver.options['bits'] = {'xC': 8}
prob.driver.options['max_gen'] = 10
prob.driver.options['run_parallel'] = True

prob.setup()

prob.set_val('xC', 7.5)
prob.set_val('xI', 0.0)

prob.run_driver()

if prob.comm.rank == 0:
    print("\nOptimal Solution:")
    print(f"{prob.get_val('comp.f')=}")
    print(f"{prob.get_val('xI')=}")
    print(f"{prob.get_val('xC')=}")
```

[stdout:0]
Optimal Solution:
prob.get_val('comp.f')=array([1.25172426])
prob.get_val('xI')=array([9.])
prob.get_val('xC')=array([2.11764706])


## Running a GA on a Parallel Model in Parallel

If you have a model that does contain distributed components or parallel groups, you can also use SimpleGADriver to optimize it. If you have enough processors, you can also simultaneously evaluate multiple points in your population by turning on the “run_parallel” option and setting the “procs_per_model” to the number of processors that your model requires. Take care that you submit your parallel run with enough processors such that the number of processors the model requires divides evenly into it, as in this example, where the model requires 2 and we give it 4.

Note

This feature requires MPI, and may not be able to be run on Colab or Binder.

```python
%%px

import openmdao.api as om
from openmdao.test_suite.components.branin import Branin

prob = om.Problem()
model = prob.model

par = model.add_subsystem('par', om.ParallelGroup(),
                          promotes_inputs=['*'])

par.add_subsystem('comp1', Branin(),
                  promotes_inputs=[('x0', 'xI'), ('x1', 'xC')])
par.add_subsystem('comp2', Branin(),
                  promotes_inputs=[('x0', 'xI'), ('x1', 'xC')])

model.add_subsystem('comp', om.ExecComp('f = f1 + f2'))
model.connect('par.comp1.f', 'comp.f1')
model.connect('par.comp2.f', 'comp.f2')

model.add_design_var('xI', lower=-5.0, upper=10.0)
model.add_design_var('xC', lower=0.0, upper=15.0)
model.add_objective('comp.f')

prob.driver = om.SimpleGADriver()
prob.driver.options['bits'] = {'xC': 8}
prob.driver.options['max_gen'] = 10
prob.driver.options['pop_size'] = 25
prob.driver.options['run_parallel'] = True
prob.driver.options['procs_per_model'] = 2

prob.driver._randomstate = 1

prob.setup()

prob.set_val('xC', 7.5)
prob.set_val('xI', 0.0)

prob.run_driver()

if prob.comm.rank == 0:
    print("\nOptimal Solution:")
    print(f"{prob.get_val('comp.f')=}")
    print(f"{prob.get_val('xI')=}")
    print(f"{prob.get_val('xC')=}")
```

[stdout:0]
Optimal Solution:
prob.get_val('comp.f')=array([0.98799098])
prob.get_val('xI')=array([-3.])
prob.get_val('xC')=array([11.94117647])