genetic_algorithm_driver.py¶
Driver for a simple genetic algorithm.
This is the Simple Genetic Algorithm implementation based on the 2009 AAE550: MDO lecture notes of Prof. William A. Crossley.
This basic GA algorithm is compartmentalized into the GeneticAlgorithm class so that it can be used in more complicated drivers.
The following reference is only for the automatic population sizing: Williams E.A., Crossley W.A. (1998) Empirically-Derived Population Size and Mutation Rate Guidelines for a Genetic Algorithm with Uniform Crossover. In: Chawdhry P.K., Roy R., Pant R.K. (eds) Soft Computing in Engineering Design and Manufacturing. Springer, London.
The following reference is only for the penalty function: Smith, A. E., Coit, D. W. (1995) Penalty Functions. In: Handbook of Evolutionary Computation, 97(1).
The following reference is only for weighted sum multiobjective optimization: Sobieszczanski-Sobieski, J., Morris, A. J., van Tooren, M. J. L. (2015) Multidisciplinary Design Optimization Supported by Knowledge Based Engineering. John Wiley & Sons, Ltd.

class openmdao.drivers.genetic_algorithm_driver.GeneticAlgorithm(objfun, comm=None, model_mpi=None)[source]¶
Bases: object
Simple Genetic Algorithm.
This is the Simple Genetic Algorithm implementation based on the 2009 AAE550: MDO lecture notes of Prof. William A. Crossley. It can be used standalone or as part of the OpenMDAO Driver.
Attributes
comm (MPI communicator or None) The MPI communicator that will be used for objective evaluation in each generation.
elite (bool) Elitism flag.
lchrom (int) Chromosome length.
model_mpi (None or tuple) If the model in objfun is also parallel, then this will contain a tuple with the total number of population points to evaluate concurrently, and the color of the point to evaluate on this rank.
npop (int) Population size.
objfun (function) Objective function callback.
__init__(objfun, comm=None, model_mpi=None)[source]¶ Initialize genetic algorithm object.
Parameters:  objfun : function
Objective callback function.
 comm : MPI communicator or None
The MPI communicator that will be used for objective evaluation in each generation.
 model_mpi : None or tuple
If the model in objfun is also parallel, then this will contain a tuple with the total number of population points to evaluate concurrently, and the color of the point to evaluate on this rank.

crossover(old_gen, Pc)[source]¶ Apply crossover to the current generation.
Crossover flips two adjacent genes.
Parameters:  old_gen : ndarray
Points in the current generation.
 Pc : float
Probability of crossover.
Returns:  ndarray
Current generation with crossovers applied.
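To make the crossover step concrete, here is a minimal sketch of uniform crossover (the variant named in the Williams & Crossley population-sizing reference above): adjacent pairs of parents exchange each bit with a fixed probability. This is an illustration only, not OpenMDAO's exact implementation; the function name and `rng` argument are invented for the example.

```python
import numpy as np

def uniform_crossover(old_gen, Pc, rng=None):
    """Sketch of uniform crossover on a binary population.

    For each adjacent pair of parents, every bit position is swapped
    between the two parents with probability Pc.
    """
    rng = np.random.default_rng(0) if rng is None else rng
    new_gen = old_gen.copy()
    for i in range(old_gen.shape[0] // 2):
        a, b = 2 * i, 2 * i + 1
        # Bit positions selected for exchange between the two parents.
        swap = rng.random(old_gen.shape[1]) < Pc
        new_gen[a, swap] = old_gen[b, swap]
        new_gen[b, swap] = old_gen[a, swap]
    return new_gen
```

Because bits are swapped rather than created, every column of a mated pair conserves its bit values: crossing an all-zeros parent with an all-ones parent yields two complementary children.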

decode(gen, vlb, vub, bits)[source]¶ Decode from binary array to real value array.
Parameters:  gen : ndarray
Population of points, encoded.
 vlb : ndarray
Lower bound array.
 vub : ndarray
Upper bound array.
 bits : ndarray
Number of bits for decoding.
Returns:  ndarray
Decoded design variable values.

encode(x, vlb, vub, bits)[source]¶ Encode array of real values to array of binary arrays.
Parameters:  x : ndarray
Design variable values.
 vlb : ndarray
Lower bound array.
 vub : ndarray
Upper bound array.
 bits : int
Number of bits for encoding.
Returns:  ndarray
Population of points, encoded.
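The encode/decode pair maps each design variable onto a uniform grid of 2^bits - 1 intervals between its bounds. The following round-trip sketch shows the idea, assuming big-endian bit order; it is not necessarily the exact OpenMDAO code, and the function names shadow the methods above for illustration only.

```python
import numpy as np

def encode(x, vlb, vub, bits):
    """Snap real values to the integer grid, then unpack to a bit string."""
    interval = (vub - vlb) / (2 ** bits - 1)   # grid spacing per variable
    ints = np.round((x - vlb) / interval).astype(int)
    gene = []
    for val, nbits in zip(ints, bits):
        gene.extend(int(c) for c in np.binary_repr(val, width=nbits))
    return np.array(gene)

def decode(gene, vlb, vub, bits):
    """Inverse of encode: pack bit groups to integers, then scale to reals."""
    interval = (vub - vlb) / (2 ** bits - 1)
    x = np.empty(len(bits))
    start = 0
    for j, nbits in enumerate(bits):
        val = int("".join(str(b) for b in gene[start:start + nbits]), 2)
        x[j] = vlb[j] + val * interval[j]
        start += nbits
    return x
```

Decoding an encoded point recovers the original value to within one grid interval, which is the resolution limit of a binary-coded GA.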

execute_ga(x0, vlb, vub, vob, bits, pop_size, max_gen, random_state, Pm=None, Pc=0.5)[source]¶ Perform the genetic algorithm.
Parameters:  x0 : ndarray
Initial design values.
 vlb : ndarray
Lower bounds array.
 vub : ndarray
Upper bounds array. This includes overallocation so that every point falls on an integer value.
 vob : ndarray
Outer bounds array. This is used purely for bounds checking.
 bits : ndarray
Number of bits to encode the design space for each element of the design vector.
 pop_size : int
Number of points in the population.
 max_gen : int
Number of generations to run the GA.
 random_state : np.random.RandomState, int
Random state (or seed number) which controls the seed and random draws.
 Pm : float or None
Mutation rate.
 Pc : float
Crossover rate.
Returns:  ndarray
Best design point.
 float
Objective value at best design point.
 int
Number of successful function evaluations.
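Putting the pieces together, a generational GA of this kind loops over decode, evaluate, select, crossover, mutate, with elitism carrying the best point forward. The sketch below shows one such loop under stated assumptions (tournament selection, uniform crossover against a reversed mate, bit-flip mutation); it mirrors execute_ga in spirit only, and every name in it is invented for the example.

```python
import numpy as np

def run_ga(objfun, vlb, vub, bits, pop_size, max_gen, Pc=0.5, Pm=0.05, seed=0):
    """Minimal binary-coded GA sketch: minimize objfun over [vlb, vub]."""
    rng = np.random.default_rng(seed)
    bits = np.asarray(bits)
    lchrom = int(bits.sum())                       # total chromosome length
    interval = (vub - vlb) / (2.0 ** bits - 1)     # grid spacing per variable

    def decode(gen):
        x = np.empty((gen.shape[0], bits.size))
        start = 0
        for j, nb in enumerate(bits):
            weights = 2 ** np.arange(nb - 1, -1, -1)   # big-endian bit weights
            x[:, j] = vlb[j] + gen[:, start:start + nb] @ weights * interval[j]
            start += nb
        return x

    pop = rng.integers(0, 2, size=(pop_size, lchrom))
    best_x, best_f = None, np.inf
    for _ in range(max_gen):
        xs = decode(pop)
        fitness = np.array([objfun(xi) for xi in xs])
        i_best = int(np.argmin(fitness))
        if fitness[i_best] < best_f:
            best_f, best_x = fitness[i_best], xs[i_best]
        elite = pop[i_best].copy()
        # Tournament selection: each slot takes the fitter of two random parents.
        a = rng.integers(0, pop_size, pop_size)
        b = rng.integers(0, pop_size, pop_size)
        parents = pop[np.where(fitness[a] < fitness[b], a, b)]
        # Uniform crossover against a reversed mate, then bit-flip mutation.
        cross = rng.random(parents.shape) < Pc
        children = np.where(cross, parents[::-1], parents)
        flip = rng.random(children.shape) < Pm
        pop = np.where(flip, 1 - children, children)
        pop[0] = elite                              # elitism: keep the best point
    return best_x, best_f
```

Run on a one-variable quadratic, the loop homes in on the minimizer to within the encoding resolution.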

mutate(current_gen, Pm)[source]¶ Apply mutations to the current generation.
A mutation flips the state of the gene from 0 to 1 or 1 to 0.
Parameters:  current_gen : ndarray
Points in the current generation.
 Pm : float
Probability of mutation.
Returns:  ndarray
Current generation with mutations applied.
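Bit-flip mutation as described above can be sketched in a few lines; each gene flips independently with probability Pm. The function name and `rng` argument are invented for this illustration, which is not the exact OpenMDAO code.

```python
import numpy as np

def mutate(current_gen, Pm, rng=None):
    """Flip each bit of the population independently with probability Pm."""
    rng = np.random.default_rng(42) if rng is None else rng
    flip = rng.random(current_gen.shape) < Pm
    # 1 - gene maps 0 -> 1 and 1 -> 0 wherever flip is True.
    return np.where(flip, 1 - current_gen, current_gen)
```

At Pm = 0 the population is unchanged; at Pm = 1 every bit is flipped.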


class openmdao.drivers.genetic_algorithm_driver.SimpleGADriver(**kwargs)[source]¶
Bases: openmdao.core.driver.Driver
Driver for a simple genetic algorithm.

__init__(**kwargs)[source]¶ Initialize the SimpleGADriver.
Parameters:  **kwargs : dict of keyword arguments
Keyword arguments that will be mapped into the Driver options.

add_recorder(recorder)¶ Add a recorder to the driver.
Parameters:  recorder : CaseRecorder
A recorder instance.

cleanup()¶ Clean up resources prior to exit.

get_constraint_values(ctype='all', lintype='all', unscaled=False, filter=None, ignore_indices=False)¶ Return constraint values.
Parameters:  ctype : string
Default is ‘all’. Optionally return just the inequality constraints with ‘ineq’ or the equality constraints with ‘eq’.
 lintype : string
Default is ‘all’. Optionally return just the linear constraints with ‘linear’ or the nonlinear constraints with ‘nonlinear’.
 unscaled : bool
Set to True if unscaled (physical) constraint values are desired.
 filter : list
List of constraint names used by recorders.
 ignore_indices : bool
Set to True if the full array is desired, not just those indicated by indices.
Returns:  dict
Dictionary containing values of each constraint.

get_design_var_values(filter=None, unscaled=False, ignore_indices=False)¶ Return the design variable values.
This is called to gather the initial design variable state.
Parameters:  filter : list
List of desvar names used by recorders.
 unscaled : bool
Set to True if unscaled (physical) design variables are desired.
 ignore_indices : bool
Set to True if the full array is desired, not just those indicated by indices.
Returns:  dict
Dictionary containing values of each design variable.

get_objective_values(unscaled=False, filter=None, ignore_indices=False)¶ Return objective values.
Parameters:  unscaled : bool
Set to True if unscaled (physical) objective values are desired.
 filter : list
List of objective names used by recorders.
 ignore_indices : bool
Set to True if the full array is desired, not just those indicated by indices.
Returns:  dict
Dictionary containing values of each objective.

get_response_values(filter=None)¶ Return response values.
Parameters:  filter : list
List of response names used by recorders.
Returns:  dict
Dictionary containing values of each response.

objective_callback(x, icase)[source]¶ Evaluate problem objective at the requested point.
In case of multiobjective optimization, a simple weighted sum method is used:
\[f = (\sum_{k=1}^{N_f} w_k \cdot f_k)^a\]where \(N_f\) is the number of objectives and \(a>0\) is an exponential weight. Choosing \(a=1\) is equivalent to the conventional weighted sum method.
The weights given in the options are normalized, so:
\[\sum_{k=1}^{N_f} w_k = 1\]If one of the objectives \(f_k\) is not a scalar, its elements are given the same weight, and it is normalized by the length of the vector.
Takes into account constraints with a penalty function.
All constraints are converted to the form of \(g_i(x) \leq 0\) for inequality constraints and \(h_i(x) = 0\) for equality constraints. The constraint vector for inequality constraints is the following:
\[ \begin{align}\begin{aligned}g = [g_1, g_2 \dots g_N], g_i \in R^{N_{g_i}}\\h = [h_1, h_2 \dots h_N], h_i \in R^{N_{h_i}}\end{aligned}\end{align} \]The number of all constraints:
\[N_g = \sum_{i=1}^N N_{g_i}, \quad N_h = \sum_{i=1}^N N_{h_i}\]The fitness function is constructed with the penalty parameter \(p\) and the exponent \(\kappa\):
\[\Phi(x) = f(x) + p \cdot \sum_{k=1}^{N_g}(\delta_k \cdot g_k)^{\kappa} + p \cdot \sum_{k=1}^{N_h}h_k^{\kappa}\]where \(\delta_k = 0\) if \(g_k\) is satisfied, and 1 otherwise.
Note
The values of \(\kappa\) and \(p\) can be defined as driver options.
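The penalized fitness above can be written out directly. The sketch below combines the normalized weighted sum with the exterior penalty terms; the function name and argument order are invented for this example and mirror the formulas rather than a specific OpenMDAO API.

```python
import numpy as np

def penalized_fitness(f, g, h, weights, p=10.0, kappa=2.0, a=1.0):
    """Weighted-sum objective plus exterior penalties.

    f: objective values f_k; g: inequality constraints (feasible when
    g_i <= 0); h: equality constraints (feasible when h_i = 0).
    """
    w = np.asarray(weights, dtype=float)
    w = w / w.sum()                       # weights are normalized to sum to 1
    obj = np.dot(w, f) ** a               # weighted-sum objective
    delta = np.asarray(g) > 0             # delta_k = 1 only for violated g_k
    pen_g = p * np.sum((delta * np.asarray(g)) ** kappa)
    pen_h = p * np.sum(np.asarray(h) ** kappa)
    return obj + pen_g + pen_h
```

A feasible point pays no penalty, so its fitness equals the weighted objective; a violated inequality adds p times the violation raised to kappa.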
Parameters:  x : ndarray
Value of design variables.
 icase : int
Case number, used for identification when run in parallel.
Returns:  float
Objective value.
 bool
Success flag; True if successful.
 int
Case number, used for identification when run in parallel.

record_iteration()¶ Record an iteration of the current Driver.

run()[source]¶ Execute the genetic algorithm.
Returns:  boolean
Failure flag; True if failed to converge, False if successful.

set_design_var(name, value)¶ Set the value of a design variable.
Parameters:  name : str
Global pathname of the design variable.
 value : float or ndarray
Value for the design variable.

set_simul_deriv_color(simul_info)¶ Set the coloring (and possibly the subjac sparsity) for simultaneous total derivatives.
Parameters:  simul_info : str or dict
Information about simultaneous coloring for design vars and responses. If a string, then simul_info is assumed to be the name of a file that contains the coloring information in JSON format. If a dict, the structure looks like this:

{
    "fwd": [
        # First, a list of column index lists, each index list representing columns
        # having the same color, except for the very first index list, which contains
        # indices of all columns that are not colored.
        [
            [i1, i2, i3, ...],  # list of noncolored columns
            [ia, ib, ...],      # list of columns in first color
            [ic, id, ...],      # list of columns in second color
            ...                 # remaining color lists, one list of columns per color
        ],
        # Next is a list of lists, one for each column, containing the nonzero rows
        # for that column. If a column is not colored, then it will have a None entry
        # instead of a list.
        [
            [r1, rn, ...],  # list of nonzero rows for column 0
            None,           # column 1 is not colored
            [ra, rb, ...],  # list of nonzero rows for column 2
            ...
        ],
    ],
    # This example is not a bidirectional coloring, so the opposite direction, "rev"
    # in this case, has an empty row index list. It could also be removed entirely.
    "rev": [[[]], []],
    # The sparsity entry can be absent, indicating that no sparsity structure is
    # specified, or it can be a nested dictionary where the outer keys are response
    # names, the inner keys are design variable names, and the value is a tuple of
    # the form (row_list, col_list, shape).
    "sparsity": {
        resp1_name: {
            dv1_name: (rows, cols, shape),  # for subjac d_resp1/d_dv1
            dv2_name: (rows, cols, shape),
            ...
        },
        resp2_name: {
            ...
        },
        ...
    }
}
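As a concrete instance of this structure, here is a forward coloring for a hypothetical 3-column Jacobian in which column 0 is uncolored and columns 1 and 2 share one color (all indices are invented for illustration; the key layout follows the description above).

```python
# Forward coloring for a 2-row x 3-column Jacobian (indices are illustrative).
simul_info = {
    "fwd": [
        [
            [0],        # noncolored columns
            [1, 2],     # columns in the first (and only) color
        ],
        [
            None,       # column 0 is not colored, so no row list
            [0],        # nonzero rows of column 1
            [1],        # nonzero rows of column 2
        ],
    ],
    "rev": [[[]], []],  # not a bidirectional coloring
}
```

Columns sharing a color must have disjoint nonzero row sets, which is what lets their derivatives be computed in a single combined pass.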

set_total_jac_sparsity(sparsity)¶ Set the sparsity of subjacobians of the total jacobian.
Note: This currently will have no effect if you are not using the pyOptSparseDriver.
Parameters:  sparsity : str or dict
Sparsity is a nested dictionary where the outer keys are response names, the inner keys are design variable names, and the value is a tuple of the form (row_list, col_list, shape).

{
    resp1: {
        dv1: (rows, cols, shape),  # for subjac d_resp1/d_dv1
        dv2: (rows, cols, shape),
        ...
    },
    resp2: {
        ...
    },
    ...
}
