genetic_algorithm_driver.py

Driver for a simple genetic algorithm.

This is the Simple Genetic Algorithm implementation based on 2009 AAE550: MDO Lecture notes of Prof. William A. Crossley.

This basic GA is compartmentalized into the GeneticAlgorithm class so that it can be used in more complicated drivers.

The following reference is only for the automatic population sizing: Williams E.A., Crossley W.A. (1998) Empirically-Derived Population Size and Mutation Rate Guidelines for a Genetic Algorithm with Uniform Crossover. In: Chawdhry P.K., Roy R., Pant R.K. (eds) Soft Computing in Engineering Design and Manufacturing. Springer, London.

The following reference is only for the penalty function: Smith, A. E., Coit, D. W. (1995) Penalty functions. In: Handbook of Evolutionary Computation, 97(1).

The following reference is only for weighted sum multi-objective optimization: Sobieszczanski-Sobieski, J., Morris, A. J., van Tooren, M. J. L. (2015) Multidisciplinary Design Optimization Supported by Knowledge Based Engineering. John Wiley & Sons, Ltd.

class openmdao.drivers.genetic_algorithm_driver.GeneticAlgorithm(objfun, comm=None, model_mpi=None)[source]

Bases: object

Simple Genetic Algorithm.

This is the Simple Genetic Algorithm implementation based on 2009 AAE550: MDO Lecture notes of Prof. William A. Crossley. It can be used standalone or as part of the OpenMDAO Driver.

Attributes

comm

(MPI communicator or None) The MPI communicator that will be used for objective evaluation in each generation.

elite

(bool) Elitism flag.

gray_code

(bool) Gray code binary representation flag.

cross_bits

(bool) Flag indicating that crossover swaps individual bits instead of tails. Swapping bits is similar to mutation, so when this option is used, Pc should be increased and Pm reduced.

lchrom

(int) Chromosome length.

model_mpi

(None or tuple) If the model in objfun is also parallel, then this will contain a tuple with the total number of population points to evaluate concurrently, and the color of the point to evaluate on this rank.

npop

(int) Population size.

objfun

(function) Objective function callback.

__init__(self, objfun, comm=None, model_mpi=None)[source]

Initialize genetic algorithm object.

Parameters
objfun : function

Objective callback function.

comm : MPI communicator or None

The MPI communicator that will be used for objective evaluation in each generation.

model_mpi : None or tuple

If the model in objfun is also parallel, then this will contain a tuple with the total number of population points to evaluate concurrently, and the color of the point to evaluate on this rank.

crossover(self, old_gen, Pc)[source]

Apply crossover to the current generation.

Crossover swaps tails (k-point crossover) of two adjacent genes.

Parameters
old_gen : ndarray

Points in the current generation.

Pc : float

Probability of crossover.

Returns
ndarray

Current generation with crossovers applied.
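
For intuition, the sketch below shows a single-point tail swap between two bit-string parents. It is an illustrative standalone function, not the GeneticAlgorithm.crossover implementation; the helper name and the use of numpy's default_rng are assumptions made for the example.

    import numpy as np

    def tail_swap_crossover(parent_a, parent_b, rng):
        # Pick one crossover site and swap the tails of the two parents.
        # Illustrative only; the class applies this with probability Pc
        # across adjacent members of the population.
        point = rng.integers(1, parent_a.size)
        child_a = np.concatenate([parent_a[:point], parent_b[point:]])
        child_b = np.concatenate([parent_b[:point], parent_a[point:]])
        return child_a, child_b

    rng = np.random.default_rng(0)
    a = np.array([0, 0, 0, 0, 0, 0])
    b = np.array([1, 1, 1, 1, 1, 1])
    print(tail_swap_crossover(a, b, rng))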

decode(self, gen, vlb, vub, bits)[source]

Decode from binary array to real value array.

Parameters
gen : ndarray

Population of points, encoded.

vlb : ndarray

Lower bound array.

vub : ndarray

Upper bound array.

bits : ndarray(dtype=np.int)

Number of bits for decoding.

Returns
ndarray

Decoded design variable values.
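
A common way to picture the decoding step: read each chromosome segment as an unsigned integer and map it linearly onto [vlb, vub]. The helper below is a hypothetical sketch of that idea, not the decode() source.

    import numpy as np

    def decode_segment(bits_segment, lb, ub):
        # Interpret the bit pattern as an unsigned integer, then scale it
        # linearly onto the interval [lb, ub].
        n = bits_segment.size
        as_int = int("".join(str(b) for b in bits_segment), 2)
        return lb + as_int * (ub - lb) / (2**n - 1)

    print(decode_segment(np.array([1, 0, 1]), 0.0, 7.0))  # -> 5.0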

encode(self, x, vlb, vub, bits)[source]

Encode array of real values to array of binary arrays.

The array of arrays represents a single population member.

Parameters
x : ndarray

Design variable values.

vlb : ndarray

Lower bound array.

vub : ndarray

Upper bound array.

bits : ndarray(dtype=np.int)

Number of bits for encoding.

Returns
ndarray

Single population member, encoded.

execute_ga(self, x0, vlb, vub, vob, bits, pop_size, max_gen, random_state, Pm=None, Pc=0.5)[source]

Perform the genetic algorithm.

Parameters
x0 : ndarray

Initial design values.

vlb : ndarray

Lower bounds array.

vub : ndarray

Upper bounds array. This includes over-allocation so that every point falls on an integer value.

vob : ndarray

Outer bounds array. This is purely for a bounds check.

bits : ndarray

Number of bits to encode the design space for each element of the design vector.

pop_size : int

Number of points in the population.

max_gen : int

Number of generations to run the GA.

random_state : np.random.RandomState, int

Random state (or seed number) that controls the seed and random draws.

Pm : float or None

Mutation rate.

Pc : float

Crossover rate.

Returns
ndarray

Best design point.

float

Objective value at best design point.

int

Number of successful function evaluations.
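
A minimal standalone usage sketch follows. It assumes the objective callback uses the same (x, icase) -> (objective, success, icase) contract as SimpleGADriver.objective_callback, and it reuses the upper-bound array as the outer-bound array vob; both are assumptions made for the example.

    import numpy as np
    from openmdao.drivers.genetic_algorithm_driver import GeneticAlgorithm

    def objfun(x, icase):
        # Sphere function; returns (objective, success flag, case number).
        return np.sum(x**2), True, icase

    ga = GeneticAlgorithm(objfun)

    x0 = np.zeros(2)
    vlb = np.array([-5.0, -5.0])   # lower bounds
    vub = np.array([5.0, 5.0])     # upper bounds
    bits = np.array([8, 8])        # bits per design variable

    best_x, best_f, nfit = ga.execute_ga(
        x0, vlb, vub, vub, bits, pop_size=40, max_gen=50,
        random_state=np.random.RandomState(0))
    print(best_x, best_f, nfit)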

static from_gray(g)[source]

Convert a Gray coded binary array to normal binary coding.

The input and output arrays represent a single population member.

Parameters
g : binary array

Gray coded binary array, e.g. np.array([0, 0, 1, 1]).

Returns
ndarray

Binary array using normal coding, e.g. np.array([0, 0, 1, 0]).

mutate(self, current_gen, Pm)[source]

Apply mutations to the current generation.

A mutation flips the state of the gene from 0 to 1 or 1 to 0.

Parameters
current_gen : ndarray

Points in the current generation.

Pm : float

Probability of mutation.

Returns
ndarray

Current generation with mutations applied.
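
The bit-flip idea can be sketched on its own as below; this is an illustrative helper (the name and RNG choice are assumptions), not the method's actual code.

    import numpy as np

    def flip_bits(gen, Pm, rng):
        # Flip each bit in the population with probability Pm.
        flips = rng.random(gen.shape) < Pm
        return np.where(flips, 1 - gen, gen)

    rng = np.random.default_rng(1)
    pop = np.zeros((4, 6), dtype=int)   # 4 members, 6 bits each
    print(flip_bits(pop, Pm=0.1, rng=rng))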

shuffle(self, old_gen)[source]

Shuffle (reorder) the points in the population.

Used in tournament selection.

Parameters
old_gen : ndarray

Old population.

Returns
ndarray

New shuffled population.

ndarray(dtype=np.int)

Index array that maps the shuffle from old to new.

static to_gray(g)[source]

Convert a binary array representing a single population member to Gray code.

Parameters
g : binary array

Normal binary array, e.g. np.array([0, 0, 1, 0]).

Returns
ndarray

Binary array using Gray code, e.g. np.array([0, 0, 1, 1]).
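
The two conversions can be reproduced with a few lines of numpy, as sketched below; these standalone functions mirror the documented input/output examples but are not the class's own code.

    import numpy as np

    def to_gray(b):
        # Gray bit i is the XOR of binary bits i and i-1 (MSB copied as-is).
        g = b.copy()
        g[1:] = b[1:] ^ b[:-1]
        return g

    def from_gray(g):
        # Inverse: cumulative XOR from the most significant bit.
        return np.bitwise_xor.accumulate(g)

    print(to_gray(np.array([0, 0, 1, 0])))    # -> [0 0 1 1]
    print(from_gray(np.array([0, 0, 1, 1])))  # -> [0 0 1 0]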

tournament(self, old_gen, fitness)[source]

Apply tournament selection and keep the best points.

Parameters
old_gen : ndarray

Points in the current generation.

fitness : ndarray

Objective value of each point.

Returns
ndarray

New generation with best points.
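
The sketch below illustrates pairwise tournament selection: shuffle the population, compare adjacent members, and keep the one with the lower objective. It is a simplified illustration (the helper name and the single pass over the pairs are assumptions), not the method's implementation.

    import numpy as np

    def tournament_select(old_gen, fitness, rng):
        # Shuffle, then keep the fitter (lower objective) of each adjacent pair.
        # Running two such tournaments restores the original population size.
        idx = rng.permutation(old_gen.shape[0])
        shuffled, fit = old_gen[idx], fitness[idx]
        winners = [shuffled[i] if fit[i] <= fit[i + 1] else shuffled[i + 1]
                   for i in range(0, len(fit) - 1, 2)]
        return np.array(winners)

    rng = np.random.default_rng(2)
    pop = rng.integers(0, 2, size=(6, 4))   # 6 members, 4 bits each
    fit = rng.random(6)                     # lower is better
    print(tournament_select(pop, fit, rng))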

class openmdao.drivers.genetic_algorithm_driver.SimpleGADriver(**kwargs)[source]

Bases: openmdao.core.driver.Driver

Driver for a simple genetic algorithm.

__init__(self, **kwargs)[source]

Initialize the SimpleGADriver driver.

Parameters
**kwargs : dict of keyword arguments

Keyword arguments that will be mapped into the Driver options.
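
A minimal sketch of using SimpleGADriver on an OpenMDAO problem follows. The option names 'bits', 'max_gen', and 'pop_size' are standard driver options; the ExecComp model and the chosen values are arbitrary examples.

    import openmdao.api as om

    prob = om.Problem()
    prob.model.add_subsystem(
        'comp', om.ExecComp('f = (x - 3.0)**2 + x*y + (y + 4.0)**2 - 3.0'),
        promotes=['*'])
    prob.model.add_design_var('x', lower=-50.0, upper=50.0)
    prob.model.add_design_var('y', lower=-50.0, upper=50.0)
    prob.model.add_objective('f')

    prob.driver = om.SimpleGADriver()
    prob.driver.options['bits'] = {'x': 8, 'y': 8}   # bits per design variable
    prob.driver.options['max_gen'] = 100
    prob.driver.options['pop_size'] = 25

    prob.setup()
    prob.run_driver()
    print(prob.get_val('x'), prob.get_val('y'), prob.get_val('f'))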

add_recorder(self, recorder)

Add a recorder to the driver.

Parameters
recorder : CaseRecorder

A recorder instance.

cleanup(self)

Clean up resources prior to exit.

declare_coloring(self, num_full_jacs=3, tol=1e-25, orders=None, perturb_size=1e-09, min_improve_pct=5.0, show_summary=True, show_sparsity=False)

Set options for total deriv coloring.

Parameters
num_full_jacs : int

Number of times to repeat partial jacobian computation when computing sparsity.

tol : float

Tolerance used to determine if an array entry is nonzero during sparsity determination.

orders : int

Number of orders above and below the tolerance to check during the tolerance sweep.

perturb_size : float

Size of input/output perturbation during generation of sparsity.

min_improve_pct : float

If coloring does not improve (decrease) the number of solves more than the given percentage, coloring will not be used.

show_summary : bool

If True, display summary information after generating coloring.

show_sparsity : bool

If True, display sparsity with coloring info after generating coloring.

get_constraint_values(self, ctype='all', lintype='all', driver_scaling=True)

Return constraint values.

Parameters
ctype : string

Default is ‘all’. Optionally return just the inequality constraints with ‘ineq’ or the equality constraints with ‘eq’.

lintype : string

Default is ‘all’. Optionally return just the linear constraints with ‘linear’ or the nonlinear constraints with ‘nonlinear’.

driver_scaling : bool

When True, return values that are scaled according to either the adder and scaler or the ref and ref0 values that were specified when add_design_var, add_objective, and add_constraint were called on the model. Default is True.

Returns
dict

Dictionary containing values of each constraint.

get_design_var_values(self)

Return the design variable values.

Returns
dict

Dictionary containing values of each design variable.

get_objective_values(self, driver_scaling=True)

Return objective values.

Parameters
driver_scaling : bool

When True, return values that are scaled according to either the adder and scaler or the ref and ref0 values that were specified when add_design_var, add_objective, and add_constraint were called on the model. Default is True.

Returns
dict

Dictionary containing values of each objective.

property msginfo

Return info to prepend to messages.

Returns
str

Info to prepend to messages.

objective_callback(self, x, icase)[source]

Evaluate problem objective at the requested point.

In case of multi-objective optimization, a simple weighted sum method is used:

\[f = (\sum_{k=1}^{N_f} w_k \cdot f_k)^a\]

where \(N_f\) is the number of objectives and \(a>0\) is an exponential weight. Choosing \(a=1\) is equivalent to the conventional weighted sum method.

The weights given in the options are normalized, so:

\[\sum_{k=1}^{N_f} w_k = 1\]

If one of the objectives \(f_k\) is not a scalar, its elements will have the same weights, and it will be normed by the length of the vector.

Constraints are taken into account with a penalty function.

All constraints are converted to the form of \(g_i(x) \leq 0\) for inequality constraints and \(h_i(x) = 0\) for equality constraints. The constraint vector for inequality constraints is the following:

\[g = [g_1, g_2 \dots g_N], \quad g_i \in R^{N_{g_i}}\]
\[h = [h_1, h_2 \dots h_N], \quad h_i \in R^{N_{h_i}}\]

The number of all constraints:

\[N_g = \sum_{i=1}^N N_{g_i}, N_h = \sum_{i=1}^N N_{h_i}\]

The fitness function is constructed with the penalty parameter \(p\) and the exponent \(\kappa\):

\[\Phi(x) = f(x) + p \cdot \sum_{k=1}^{N_g}(\delta_k \cdot g_k)^{\kappa} + p \cdot \sum_{k=1}^{N_h}|h_k|^{\kappa}\]

where \(\delta_k = 0\) if \(g_k\) is satisfied, and 1 otherwise.

Note

The values of \(\kappa\) and \(p\) can be defined as driver options.
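
Put together, the fitness evaluated for each point can be sketched as below. This is an illustrative reimplementation of the formulas above (the helper name and the default p, kappa, and a values are assumptions), not the driver's internal code.

    import numpy as np

    def penalized_fitness(f_vals, weights, g_vals, h_vals, p=10.0, kappa=2.0, a=1.0):
        # Weighted-sum objective with exterior penalties, following the
        # formulas above. Inequalities are feasible when g_k <= 0 and
        # equalities when h_k == 0.
        w = np.asarray(weights, dtype=float)
        w = w / w.sum()                               # normalize the weights
        f = np.dot(w, np.asarray(f_vals)) ** a        # weighted-sum objective
        g_viol = np.maximum(np.asarray(g_vals), 0.0)  # delta_k * g_k
        pen = p * np.sum(g_viol ** kappa) + p * np.sum(np.abs(h_vals) ** kappa)
        return f + pen

    print(penalized_fitness([2.0, 4.0], [1.0, 1.0], g_vals=[0.5, -1.0], h_vals=[0.1]))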

Parameters
x : ndarray

Value of design variables.

icase : int

Case number, used for identification when run in parallel.

Returns
float

Objective value.

bool

Success flag, True if successful.

int

Case number, used for identification when run in parallel.

record_iteration(self)

Record an iteration of the current Driver.

run(self)[source]

Execute the genetic algorithm.

Returns
boolean

Failure flag; True if failed to converge, False if successful.

set_design_var(self, name, value)

Set the value of a design variable.

Parameters
name : str

Global pathname of the design variable.

value : float or ndarray

Value for the design variable.

use_fixed_coloring(self, coloring=<object object>)

Tell the driver to use a precomputed coloring.

Parameters
coloring : str

A coloring filename. If no arg is passed, filename will be determined automatically.