differential_evolution_driver.py

Driver for a differential evolution genetic algorithm.

TODO: add better references than: https://en.wikipedia.org/wiki/Differential_evolution

Most of this driver (except execute_ga) is based on SimpleGA, so the following may still apply:

The following reference is only for the penalty function: Smith, A. E., Coit, D. W. (1995) Penalty functions. In: Handbook of Evolutionary Computation, 97(1).

The following reference is only for weighted sum multi-objective optimization: Sobieszczanski-Sobieski, J., Morris, A. J., van Tooren, M. J. L. (2015) Multidisciplinary Design Optimization Supported by Knowledge Based Engineering. John Wiley & Sons, Ltd.

class openmdao.drivers.differential_evolution_driver.DifferentialEvolution(objfun, comm=None, model_mpi=None)

Bases: object

Differential Evolution Genetic Algorithm.

TODO : add better references than: https://en.wikipedia.org/wiki/Differential_evolution

Parameters:

objfun : function

Objective callback function.

comm : MPI communicator or None

The MPI communicator that will be used for objective evaluation in each generation.

model_mpi : None or tuple

If the model in objfun is also parallel, then this will contain a tuple with the total number of population points to evaluate concurrently, and the color of the point to evaluate on this rank.

Attributes:

comm : MPI communicator or None

The MPI communicator that will be used for objective evaluation in each generation.

lchrom : int

Chromosome length.

model_mpi : None or tuple

If the model in objfun is also parallel, then this will contain a tuple with the total number of population points to evaluate concurrently, and the color of the point to evaluate on this rank.

npop : int

Population size.

objfun : function

Objective function callback.

_lhs : function

The lazily-imported pyDOE3 Latin hypercube sampling function.
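The `_lhs` attribute refers to pyDOE3's Latin hypercube sampling function, used to seed the initial population. As a rough illustration of what Latin hypercube sampling does, here is a stdlib-only sketch (not the pyDOE3 implementation; the name `latin_hypercube` is made up for this example):

```python
import random

def latin_hypercube(n_samples, n_dims, rng=None):
    """Draw a Latin hypercube sample in the unit cube.

    Each dimension is split into n_samples equal strata; one point is
    drawn uniformly inside each stratum, then the strata are shuffled
    independently per dimension so the pairing across dimensions is random.
    """
    rng = rng or random.Random()
    columns = []
    for _ in range(n_dims):
        # one point per stratum i: (i + u) / n_samples with u in [0, 1)
        col = [(i + rng.random()) / n_samples for i in range(n_samples)]
        rng.shuffle(col)
        columns.append(col)
    # transpose: list of n_samples points, each with n_dims coordinates
    return [list(point) for point in zip(*columns)]
```

Samples like these, drawn in the unit cube, are then scaled into the design-variable bounds to form the initial population.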

Methods

execute_ga(x0, vlb, vub, pop_size, max_gen, ...)

Perform the genetic algorithm.

__init__(objfun, comm=None, model_mpi=None)

Initialize genetic algorithm object.

execute_ga(x0, vlb, vub, pop_size, max_gen, random_state, F=0.5, Pc=0.5)

Perform the genetic algorithm.

Parameters:

x0 : ndarray

Initial design values.

vlb : ndarray

Lower bounds array.

vub : ndarray

Upper bounds array.

pop_size : int

Number of points in the population.

max_gen : int

Number of generations to run the GA.

random_state : int

Seed number that controls the random draws.

F : float

Differential rate.

Pc : float

Crossover rate.

Returns:
ndarray

Best design point.

float

Objective value at best design point.

int

Number of successful function evaluations.
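The roles of F and Pc can be seen in a single generation of the classic DE/rand/1/bin scheme that this algorithm is built around. Below is a minimal stdlib sketch (illustrative only; `de_step` is not part of the driver's API, and the real implementation also handles bounds and parallel evaluation):

```python
import random

def de_step(pop, f_vals, objfun, F=0.5, Pc=0.5, rng=None):
    """One generation of DE/rand/1/bin.

    pop    : list of candidate points (lists of floats)
    f_vals : objective value of each point in pop
    F      : differential rate (scale of the difference vector)
    Pc     : crossover rate
    """
    rng = rng or random.Random()
    n, dim = len(pop), len(pop[0])
    next_pop, next_f = [], []
    for i in range(n):
        # mutation: combine three distinct members other than i
        a, b, c = rng.sample([j for j in range(n) if j != i], 3)
        mutant = [pop[a][k] + F * (pop[b][k] - pop[c][k]) for k in range(dim)]
        # binomial crossover; j_rand guarantees at least one gene from the mutant
        j_rand = rng.randrange(dim)
        trial = [mutant[k] if (rng.random() < Pc or k == j_rand) else pop[i][k]
                 for k in range(dim)]
        # greedy selection: keep whichever of parent and trial is better
        f_trial = objfun(trial)
        if f_trial < f_vals[i]:
            next_pop.append(trial)
            next_f.append(f_trial)
        else:
            next_pop.append(pop[i])
            next_f.append(f_vals[i])
    return next_pop, next_f
```

Because selection is greedy, the best objective value in the population never gets worse from one generation to the next.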

class openmdao.drivers.differential_evolution_driver.DifferentialEvolutionDriver(**kwargs)

Bases: Driver

Driver for a differential evolution genetic algorithm.

This algorithm requires that inputs are floating point numbers.

Parameters:

**kwargs : dict of keyword arguments

Keyword arguments that will be mapped into the Driver options.

Attributes:

_problem_comm : MPI.Comm or None

The MPI communicator for the Problem.

_concurrent_pop_size : int

Number of points to run concurrently when the model is a parallel one.

_concurrent_color : int

Color of the current rank when running a parallel model.

_desvar_idx : dict

Keeps track of the indices for each desvar, since DifferentialEvolution sees an array of design variables.

_ga : DifferentialEvolution

The main genetic algorithm object.

_nfit : int

Number of successful function evaluations.

_randomstate : int

Seed number that controls the random draws.

Methods

add_recorder(recorder)

Add a recorder to the driver.

check_relevance()

Check if there are constraints that don't depend on any design vars.

cleanup()

Clean up resources prior to exit.

compute_lagrange_multipliers([...])

Get the approximated Lagrange multipliers of one or more constraints.

declare_coloring([num_full_jacs, tol, ...])

Set options for total deriv coloring.

get_coloring_fname([mode])

Get the filename for the coloring file.

get_constraint_values([ctype, lintype, ...])

Return constraint values.

get_design_var_values([get_remote, ...])

Return the design variable values.

get_driver_derivative_calls()

Return number of derivative evaluations made during a driver run.

get_driver_objective_calls()

Return number of objective evaluations made during a driver run.

get_exit_status()

Return exit status of driver run.

get_objective_values([driver_scaling])

Return objective values.

get_reports_dir()

Get the path to the directory where the report files should go.

objective_callback(x, icase)

Evaluate problem objective at the requested point.

record_derivatives()

Record the current total jacobian.

record_iteration()

Record an iteration of the current Driver.

run()

Execute the genetic algorithm.

scaling_report([outfile, title, ...])

Generate a self-contained html file containing a detailed connection viewer.

set_design_var(name, value[, set_remote])

Set the value of a design variable.

use_fixed_coloring([coloring])

Tell the driver to use a precomputed coloring.

__init__(**kwargs)

Initialize the DifferentialEvolutionDriver driver.

objective_callback(x, icase)

Evaluate problem objective at the requested point.

In case of multi-objective optimization, a simple weighted sum method is used:

\[f = (\sum_{k=1}^{N_f} w_k \cdot f_k)^a\]

where \(N_f\) is the number of objectives and \(a>0\) is an exponential weight. Choosing \(a=1\) is equivalent to the conventional weighted sum method.

The weights given in the options are normalized, so:

\[\sum_{k=1}^{N_f} w_k = 1\]

If one of the objectives \(f_k\) is not a scalar, its elements all receive the same weight, and the result is normalized by the length of the vector.
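Concretely, the scalarization described above amounts to the following (a sketch for scalar objectives; `weighted_sum` is a hypothetical helper, not a driver method):

```python
def weighted_sum(f_vals, weights, a=1.0):
    """Collapse multiple objective values into one scalar.

    Weights are normalized so they sum to 1; a > 0 is the exponential
    weight, with a = 1 giving the conventional weighted sum.
    """
    total = sum(weights)
    norm_w = [w / total for w in weights]
    return sum(w * f for w, f in zip(norm_w, f_vals)) ** a
```

For example, `weighted_sum([2.0, 4.0], [1.0, 1.0])` returns 3.0, the plain average of the two objectives.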

Constraints are taken into account with a penalty function.

All constraints are converted to the form of \(g_i(x) \leq 0\) for inequality constraints and \(h_i(x) = 0\) for equality constraints. The constraint vector for inequality constraints is the following:

\[g = [g_1, g_2 \dots g_N], \quad g_i \in R^{N_{g_i}}\]

\[h = [h_1, h_2 \dots h_N], \quad h_i \in R^{N_{h_i}}\]

The number of all constraints:

\[N_g = \sum_{i=1}^N N_{g_i}, \quad N_h = \sum_{i=1}^N N_{h_i}\]

The fitness function is constructed with the penalty parameter \(p\) and the exponent \(\kappa\):

\[\Phi(x) = f(x) + p \cdot \sum_{k=1}^{N_g}(\delta_k \cdot g_k)^{\kappa} + p \cdot \sum_{k=1}^{N_h}|h_k|^{\kappa}\]

where \(\delta_k = 0\) if constraint \(g_k\) is satisfied, and \(\delta_k = 1\) otherwise.

Note

The values of \(\kappa\) and \(p\) can be defined as driver options.
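The penalized fitness \(\Phi(x)\) can be sketched as follows (illustrative stdlib code, assuming the constraints have already been converted to the \(g_i(x) \leq 0\) and \(h_i(x) = 0\) forms; `penalized_fitness` and the default `p` and `kappa` values here are not part of the driver's API):

```python
def penalized_fitness(f, g, h, p=10.0, kappa=2.0):
    """Exterior penalty fitness.

    f : objective value at x
    g : flattened inequality constraint values, feasible when g_k <= 0
    h : flattened equality constraint values, feasible when h_k == 0
    p, kappa : penalty parameter and exponent
    """
    # delta_k * g_k equals g_k when violated (g_k > 0) and 0 when satisfied
    pen_g = sum(max(g_k, 0.0) ** kappa for g_k in g)
    pen_h = sum(abs(h_k) ** kappa for h_k in h)
    return f + p * pen_g + p * pen_h
```

A feasible point incurs no penalty, so \(\Phi(x) = f(x)\) there; violations are penalized more steeply as p or kappa grows.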

Parameters:

x : ndarray

Value of design variables.

icase : int

Case number, used for identification when run in parallel.

Returns:
float

Objective value.

bool

Success flag, True if successful.

int

Case number, used for identification when run in parallel.

run()

Execute the genetic algorithm.

Returns:
bool

Failure flag; True if the algorithm failed to converge, False if successful.