differential_evolution_driver.py

Driver for a differential evolution genetic algorithm.

TODO: add better references than: https://en.wikipedia.org/wiki/Differential_evolution

Most of this driver (except execute_ga) is based on SimpleGA, so the following references may still apply:

The following reference is only for the penalty function: Smith, A. E., Coit, D. W. (1995) Penalty functions. In: Handbook of Evolutionary Computation, 97(1).

The following reference is only for weighted sum multi-objective optimization: Sobieszczanski-Sobieski, J., Morris, A. J., van Tooren, M. J. L. (2015) Multidisciplinary Design Optimization Supported by Knowledge Based Engineering. John Wiley & Sons, Ltd.

class openmdao.drivers.differential_evolution_driver.DifferentialEvolution(objfun, comm=None, model_mpi=None)[source]

Bases: object

Differential Evolution Genetic Algorithm.

TODO: add better references than: https://en.wikipedia.org/wiki/Differential_evolution

Attributes

comm

(MPI communicator or None) The MPI communicator that will be used for objective evaluation for each generation.

lchrom

(int) Chromosome length.

model_mpi

(None or tuple) If the model in objfun is also parallel, then this will contain a tuple with the total number of population points to evaluate concurrently, and the color of the point to evaluate on this rank.

npop

(int) Population size.

objfun

(function) Objective function callback.

__init__(objfun, comm=None, model_mpi=None)[source]

Initialize genetic algorithm object.

Parameters
objfun : function

Objective callback function.

comm : MPI communicator or None

The MPI communicator that will be used for objective evaluation for each generation.

model_mpi : None or tuple

If the model in objfun is also parallel, then this will contain a tuple with the total number of population points to evaluate concurrently, and the color of the point to evaluate on this rank.

execute_ga(x0, vlb, vub, pop_size, max_gen, random_state, F=0.5, Pc=0.5)[source]

Perform the genetic algorithm.

Parameters
x0 : ndarray

Initial design values.

vlb : ndarray

Lower bounds array.

vub : ndarray

Upper bounds array.

pop_size : int

Number of points in the population.

max_gen : int

Number of generations to run the GA.

random_state : np.random.RandomState or int

Random state (or seed number) that controls the seed and random draws.

F : float

Differential rate (mutation factor).

Pc : float

Crossover rate.

Returns
ndarray

Best design point.

float

Objective value at best design point.

int

Number of successful function evaluations.
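The class can also be exercised directly, outside of an OpenMDAO Problem. The sketch below assumes the callback follows the (x, icase) -> (objective, success, icase) pattern documented for objective_callback below; the bounds, population size, and seed are arbitrary illustration values.

import numpy as np

from openmdao.drivers.differential_evolution_driver import DifferentialEvolution


def sphere(x, icase):
    # Toy unconstrained objective: sum of squares, minimum at the origin.
    # Returns (objective value, success flag, case number), mirroring
    # the objective_callback pattern described below.
    return np.sum(x ** 2), True, icase


ga = DifferentialEvolution(sphere)

x0 = np.array([1.0, 1.0, 1.0])   # initial design values
vlb = np.full(3, -5.0)           # lower bounds
vub = np.full(3, 5.0)            # upper bounds

best_x, best_f, nfit = ga.execute_ga(x0, vlb, vub, 20, 100,
                                     np.random.RandomState(0), F=0.5, Pc=0.5)
print(best_x, best_f, nfit)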

class openmdao.drivers.differential_evolution_driver.DifferentialEvolutionDriver(**kwargs)[source]

Bases: openmdao.core.driver.Driver

Driver for a differential evolution genetic algorithm.

This algorithm requires that inputs are floating point numbers.
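A minimal usage sketch with a single ExecComp paraboloid model. The model is illustrative, and the option names 'max_gen' and 'pop_size' are taken from the execute_ga signature above, so verify them against the options the driver actually declares in your OpenMDAO version.

import openmdao.api as om

prob = om.Problem()
prob.model.add_subsystem('paraboloid',
                         om.ExecComp('f = (x - 3.0)**2 + x*y + (y + 4.0)**2 - 3.0'),
                         promotes=['*'])

prob.model.add_design_var('x', lower=-50.0, upper=50.0)
prob.model.add_design_var('y', lower=-50.0, upper=50.0)
prob.model.add_objective('f')

prob.driver = om.DifferentialEvolutionDriver()
prob.driver.options['max_gen'] = 100   # option names assumed; check the driver's declared options
prob.driver.options['pop_size'] = 40

prob.setup()
prob.set_val('x', 1.0)
prob.set_val('y', 1.0)
prob.run_driver()

print(prob.get_val('x'), prob.get_val('y'), prob.get_val('f'))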

__init__(**kwargs)[source]

Initialize the DifferentialEvolutionDriver driver.

Parameters
**kwargs : dict of keyword arguments

Keyword arguments that will be mapped into the Driver options.

add_recorder(recorder)

Add a recorder to the driver.

Parameters
recorder : CaseRecorder

A recorder instance.
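For example, continuing the driver sketch above, an SqliteRecorder can be attached so each driver iteration is written to a case database (the filename is arbitrary):

import openmdao.api as om

prob.driver.add_recorder(om.SqliteRecorder('cases.sql'))
# prob.cleanup() closes attached recorders once the run is finished.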

cleanup()

Clean up resources prior to exit.

declare_coloring(num_full_jacs=3, tol=1e-25, orders=None, perturb_size=1e-09, min_improve_pct=5.0, show_summary=True, show_sparsity=False)

Set options for total deriv coloring.

Parameters
num_full_jacs : int

Number of times to repeat partial jacobian computation when computing sparsity.

tol : float

Tolerance used to determine if an array entry is nonzero during sparsity determination.

orders : int

Number of orders above and below the tolerance to check during the tolerance sweep.

perturb_size : float

Size of input/output perturbation during generation of sparsity.

min_improve_pct : float

If coloring does not improve (decrease) the number of solves more than the given percentage, coloring will not be used.

show_summary : bool

If True, display summary information after generating coloring.

show_sparsity : bool

If True, display sparsity with coloring info after generating coloring.

get_constraint_values(ctype='all', lintype='all', driver_scaling=True)

Return constraint values.

Parameters
ctype : string

Default is ‘all’. Optionally return just the inequality constraints with ‘ineq’ or the equality constraints with ‘eq’.

lintype : string

Default is ‘all’. Optionally return just the linear constraints with ‘linear’ or the nonlinear constraints with ‘nonlinear’.

driver_scaling : bool

When True, return values that are scaled according to either the adder and scaler or the ref and ref0 values that were specified when add_design_var, add_objective, and add_constraint were called on the model. Default is True.

Returns
dict

Dictionary containing values of each constraint.

get_design_var_values(get_remote=True)

Return the design variable values.

Parameters
get_remote : bool or None

If True, retrieve the value even if it is on a remote process. Note that if the variable is remote on ANY process, this function must be called on EVERY process in the Problem’s MPI communicator. If False, only retrieve the value if it is on the current process, or only the part of the value that’s on the current process for a distributed variable.

Returns
dict

Dictionary containing values of each design variable.

get_objective_values(driver_scaling=True)

Return objective values.

Parameters
driver_scaling : bool

When True, return values that are scaled according to either the adder and scaler or the ref and ref0 values that were specified when add_design_var, add_objective, and add_constraint were called on the model. Default is True.

Returns
dict

Dictionary containing values of each objective.
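After prob.run_driver() in the sketch above, these getters report the driver's current view of the problem; each returns a dictionary keyed by variable name:

desvars = prob.driver.get_design_var_values()   # dict keyed by design variable name
objs = prob.driver.get_objective_values()       # dict keyed by objective name
cons = prob.driver.get_constraint_values(ctype='ineq', lintype='nonlinear')

for name, val in desvars.items():
    print(name, val)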

property msginfo

Return info to prepend to messages.

Returns
str

Info to prepend to messages.

objective_callback(x, icase)[source]

Evaluate problem objective at the requested point.

In case of multi-objective optimization, a simple weighted sum method is used:

\[f = (\sum_{k=1}^{N_f} w_k \cdot f_k)^a\]

where \(N_f\) is the number of objectives and \(a>0\) is an exponential weight. Choosing \(a=1\) is equivalent to the conventional weighted sum method.

The weights given in the options are normalized, so:

\[\sum_{k=1}^{N_f} w_k = 1\]

If one of the objectives \(f_k\) is not a scalar, its elements all receive the same weight, and the result is normalized by the length of the vector.
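A small NumPy sketch of this scalarization. The helper below is illustrative, and averaging the elements of a vector objective is one reading of "normalized by the length of the vector":

import numpy as np

def weighted_sum(objectives, weights, exponent=1.0):
    # objectives and weights are dicts keyed by objective name.
    keys = list(objectives)
    w = np.array([weights[k] for k in keys], dtype=float)
    w = w / w.sum()                                   # normalize so the weights sum to 1
    terms = [np.mean(np.atleast_1d(objectives[k])) for k in keys]
    return np.dot(w, terms) ** exponent               # exponent a > 0; a = 1 is the plain weighted sum

f = weighted_sum({'f1': 3.0, 'f2': np.array([1.0, 2.0])},
                 {'f1': 2.0, 'f2': 1.0})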

Takes into account constraints with a penalty function.

All constraints are converted to the form \(g_i(x) \leq 0\) for inequality constraints and \(h_i(x) = 0\) for equality constraints. The constraint vectors are:

\[g = [g_1, g_2, \dots, g_N], \quad g_i \in R^{N_{g_i}}\]

\[h = [h_1, h_2, \dots, h_N], \quad h_i \in R^{N_{h_i}}\]

The total numbers of constraints are:

\[N_g = \sum_{i=1}^N N_{g_i}, \quad N_h = \sum_{i=1}^N N_{h_i}\]

The fitness function is constructed with the penalty parameter \(p\) and the exponent \(\kappa\):

\[\Phi(x) = f(x) + p \cdot \sum_{k=1}^{N_g}(\delta_k \cdot g_k)^{\kappa} + p \cdot \sum_{k=1}^{N_h}|h_k|^{\kappa}\]

where \(\delta_k = 0\) if \(g_k\) is satisfied and \(\delta_k = 1\) otherwise.

Note

The values of \(\kappa\) and \(p\) can be defined as driver options.
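A NumPy sketch of this penalty formulation. The function below is illustrative and does not rely on the driver option names that hold \(p\) and \(\kappa\):

import numpy as np

def penalized_fitness(f, g, h, p=10.0, kappa=1.0):
    # f: scalar objective value
    # g: inequality constraint values, feasible when g_k <= 0
    # h: equality constraint values, feasible when h_k == 0
    g = np.atleast_1d(g)
    h = np.atleast_1d(h)
    viol = np.where(g > 0.0, g, 0.0)   # delta_k = 0 when g_k is satisfied, 1 otherwise
    return f + p * np.sum(viol ** kappa) + p * np.sum(np.abs(h) ** kappa)

phi = penalized_fitness(1.5, g=[-0.2, 0.3], h=[0.05])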

Parameters
x : ndarray

Value of design variables.

icase : int

Case number, used for identification when run in parallel.

Returns
float

Objective value.

bool

Success flag, True if successful.

int

Case number, used for identification when run in parallel.

record_iteration()

Record an iteration of the current Driver.

run()[source]

Execute the genetic algorithm.

Returns
boolean

Failure flag; True if failed to converge, False if successful.

set_design_var(name, value, set_remote=True)

Set the value of a design variable.

Parameters
name : str

Global pathname of the design variable.

value : float or ndarray

Value for the design variable.

set_remote : bool

If True, set the global value of the variable (value must be of the global size). If False, set the local value of the variable (value must be of the local size).
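The driver itself calls this to push each candidate design into the model; a direct call, reusing the design variable name 'x' from the sketch above, looks like:

import numpy as np

prob.driver.set_design_var('x', np.array([2.0]))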

use_fixed_coloring(coloring=<object object>)

Tell the driver to use a precomputed coloring.

Parameters
coloring : str

A coloring filename. If no arg is passed, filename will be determined automatically.