pymoo_driver.py

OpenMDAO Wrapper for the pymoo optimization library.

pymoo offers state-of-the-art single- and multi-objective optimization algorithms, along with many additional features for multi-objective optimization such as visualization and decision making.

Available Optimizers

Single-Objective:
  • GA: Genetic Algorithm

  • DE: Differential Evolution

  • BRKGA: Biased Random Key Genetic Algorithm

  • NelderMead: Nelder-Mead simplex algorithm

  • PatternSearch: Pattern search algorithm

  • CMAES: Covariance Matrix Adaptation Evolution Strategy

  • ES: Evolution Strategy

  • SRES: Stochastic Ranking Evolution Strategy

  • ISRES: Improved Stochastic Ranking Evolution Strategy

  • NRBO: Newton-Raphson Based Optimizer

  • DIRECT: DIRECT (Dividing RECTangles) deterministic global optimizer

  • G3PCX: Generalized Generation Gap with Parent-Centric Crossover (no constraint support)

  • NicheGA: Niching Genetic Algorithm

  • PSO: Particle Swarm Optimization (no constraint support)

  • EPPSO: Extended and Parallelised PSO (no constraint support)

  • RandomSearch: Random Search (no constraint support)

  • Optuna: Optuna-based optimizer (requires the optuna package)

  • MixedVariableGA: Genetic Algorithm with support for discrete (integer) and mixed integer design variables

Multi-Objective:
  • NSGA2: Non-dominated Sorting Genetic Algorithm II

  • RNSGA2: Reference Point Based NSGA-II

  • PINSGA2: Pareto-Improving NSGA-II

  • NSGA3: Non-dominated Sorting Genetic Algorithm III

  • UNSGA3: Unified NSGA-III

  • RNSGA3: Reference Point Based NSGA-III

  • MOEAD: Multi-Objective Evolutionary Algorithm Based on Decomposition

  • AGEMOEA: Adaptive Geometry Estimation based MOEA

  • AGEMOEA2: Improved Adaptive Geometry Estimation based MOEA (no constraint support)

  • CTAEA: Constrained Two-Archive Evolutionary Algorithm

  • SMSEMOA: S-Metric Selection EMOA

  • RVEA: Reference Vector Guided Evolutionary Algorithm

  • CMOPSO: Constrained Multi-Objective Particle Swarm Optimization

  • MOPSO_CD: Multi-Objective Particle Swarm Optimization with Crowding Distance

  • DNSGA2: Dynamic NSGA-II (unconstrained only)

  • KGB: KGB-DMOEA (unconstrained only)

  • SPEA2: Strength Pareto Evolutionary Algorithm 2

Notes

Algorithm-specific hyperparameters (e.g. population size, mutation and crossover operators) are passed via the alg_settings dict, which is unpacked into the algorithm constructor.

Run-level settings accepted by pymoo’s algorithm.setup() (e.g. seed, verbose, termination, callback, save_history) are passed via the run_settings dict, which is unpacked into pymoo.optimize.minimize().
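The split between the two dicts can be sketched with stand-ins. The stub class and function below only mimic the shape of pymoo's GA constructor and minimize() signature (pop_size, seed, and verbose are real pymoo argument names, but StubGA and stub_minimize are illustrative, not the library):

```python
# Sketch of how the two dicts are consumed. StubGA and stub_minimize are
# illustrative stand-ins, NOT pymoo's real GA / minimize.

class StubGA:
    """Mimics an algorithm constructor that accepts hyperparameters."""
    def __init__(self, pop_size=100, eliminate_duplicates=True):
        self.pop_size = pop_size
        self.eliminate_duplicates = eliminate_duplicates

def stub_minimize(problem, algorithm, seed=None, verbose=False):
    """Mimics pymoo.optimize.minimize() accepting run-level settings."""
    return {"seed": seed, "verbose": verbose, "pop_size": algorithm.pop_size}

alg_settings = {"pop_size": 50}               # unpacked into the constructor
run_settings = {"seed": 1, "verbose": True}   # unpacked into minimize()

algorithm = StubGA(**alg_settings)
result = stub_minimize("problem", algorithm, **run_settings)
```

In user code the same shapes apply: assign driver.alg_settings and driver.run_settings before calling run_driver().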

For multi-objective optimizations the Pareto front is stored on the driver in driver.pareto after run_driver() completes. driver.pareto['X'] and driver.pareto['F'] are dicts keyed by design variable and objective name respectively, so individual variables can be extracted by name.
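A sketch of the resulting structure for a hypothetical two-variable, two-objective run (the variable names and values are invented for illustration; in practice they come from run_driver()):

```python
import numpy as np

# Illustrates the layout of driver.pareto after a multi-objective run.
X_raw = np.array([[0.1, 0.9], [0.5, 0.5], [0.9, 0.1]])   # (n_solutions, n_vars)
F_raw = np.array([[1.0, 4.0], [2.0, 3.0], [4.0, 1.0]])   # (n_solutions, n_objs)

pareto = {
    'X': {'x1': X_raw[:, 0], 'x2': X_raw[:, 1]},  # keyed by design variable
    'F': {'obj1': F_raw[:, 0], 'obj2': F_raw[:, 1]},  # keyed by objective
    'X_raw': X_raw,  # raw arrays, e.g. for pymoo visualization utilities
    'F_raw': F_raw,
}

# Extract one objective's values across all Pareto solutions by name:
obj1 = pareto['F']['obj1']
```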

Population-level MPI parallelism is enabled automatically when more than one MPI rank is available. Ranks are divided into groups of procs_per_model (default 1), where each group cooperates on a single model evaluation.

For additional processing, the pymoo results object can be accessed at the pymoo_results attribute on the driver.

See the pymoo documentation at https://pymoo.org/index.html for detailed information on algorithm-specific options and capabilities.

class openmdao.drivers.pymoo_driver.MPIElementwiseRunner(comm, procs_per_model=1)[source]

Bases: object

Elementwise evaluation runner that distributes population members across MPI ranks.

Background

pymoo’s ElementwiseProblem._evaluate evaluates one individual at a time. The elementwise runner is what calls _evaluate repeatedly to cover the whole population. The default runner, LoopedElementwiseEvaluation, does this sequentially on a single process:

return [f(x) for x in X]

This runner replaces that loop with a parallel pattern using MPI.

How it works

The total MPI ranks are divided into groups of procs_per_model ranks each. With 8 total ranks and procs_per_model=2, there are 4 groups:

  • Group 0: ranks 0 and 4

  • Group 1: ranks 1 and 5

  • Group 2: ranks 2 and 6

  • Group 3: ranks 3 and 7

The group a rank belongs to is called its color: color = rank % n_groups. All ranks in the same group share a model sub-communicator (set up in _setup_comm before Problem.setup() runs) so that models with parallel components (e.g. ParallelGroup) receive the right communicator.

At each generation, individuals in the population are distributed round-robin across groups. With 100 individuals and 4 groups:

  • Group 0 (ranks 0, 4) evaluates individuals 0, 4, 8, …, 96

  • Group 1 (ranks 1, 5) evaluates individuals 1, 5, 9, …, 97

  • Group 2 (ranks 2, 6) evaluates individuals 2, 6, 10, …, 98

  • Group 3 (ranks 3, 7) evaluates individuals 3, 7, 11, …, 99

All ranks in a group call f(x) for the same individual, cooperating through the model sub-communicator. After evaluation, only the root rank of each group (i.e. rank < n_groups) contributes its results to the allgather, which broadcasts the complete population results to all ranks. pymoo then runs its selection/crossover/mutation identically on all ranks and moves to the next generation.

When procs_per_model=1 (the default), every rank is its own group — this reduces to simple round-robin across all ranks with no sub-communicator overhead.
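The grouping and round-robin assignment above can be reproduced with plain integer arithmetic; no MPI is needed for the bookkeeping itself (rank and size are ordinary ints in this sketch):

```python
# Reproduce the 8-rank, procs_per_model=2 example from the text.
size, procs_per_model = 8, 2
n_groups = size // procs_per_model          # 4 evaluation groups

def color(rank):
    # Group index for a rank: color = rank % n_groups.
    return rank % n_groups

# Ranks sharing a color form one model sub-communicator group.
groups = {c: [r for r in range(size) if color(r) == c] for c in range(n_groups)}

# Round-robin: group c evaluates individuals c, c + n_groups, c + 2*n_groups, ...
n_individuals = 100
assignments = {c: list(range(c, n_individuals, n_groups)) for c in range(n_groups)}
```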

Parameters:
comm : MPI.Comm

The full problem-level communicator (not the model sub-communicator).

procs_per_model : int

Number of MPI ranks that cooperate on a single model evaluation.

Attributes:
comm : MPI.Comm

The full problem-level communicator.

n_groups : int

Number of parallel evaluation groups (comm.size // procs_per_model).

color : int

The group index this rank belongs to (comm.rank % n_groups).

Methods

__call__(f, X)

Evaluate each individual in X, distributing work across MPI groups.

__init__(comm, procs_per_model=1)[source]

Initialize the MPIElementwiseRunner.

Parameters:
comm : MPI.Comm

The full problem-level communicator (not the model sub-communicator).

procs_per_model : int

Number of MPI ranks that cooperate on a single model evaluation.

class openmdao.drivers.pymoo_driver.pymooDriver(**kwargs)[source]

Bases: Driver

Driver wrapper for the pymoo optimization library.

Supports both single- and multi-objective gradient-free optimization using evolutionary and swarm-based algorithms. For single-objective problems the model is set to the optimal point after run_driver() completes. For multi-objective problems the full Pareto front is stored in driver.pareto.

Discrete (integer) and mixed integer design variables are supported via the MixedVariableGA optimizer, which uses pymoo’s mixed-variable-aware sampling and mating operators. All other optimizers require continuous design variables only.

Population-level MPI parallelism is enabled automatically when more than one MPI rank is available. Ranks are divided into groups of procs_per_model (default 1). Each group cooperates on one model evaluation, enabling models with parallel components (e.g. ParallelGroup) to each receive their own sub-communicator. With 8 ranks and procs_per_model=2, 4 individuals are evaluated simultaneously.

pymooDriver supports the following:

  • equality_constraints (algorithm-dependent)

  • inequality_constraints (algorithm-dependent)

  • two_sided_constraints (algorithm-dependent)

  • linear_constraints (algorithm-dependent)

  • multiple_objectives (algorithm-dependent)

  • integer_design_vars (MixedVariableGA only)

Parameters:
**kwargs : dict of keyword arguments

Keyword arguments that will be mapped into the Driver options.

Attributes:
alg_settings : dict

Algorithm-specific hyperparameters passed to the algorithm constructor (e.g. pop_size, mutation and crossover operators).

run_settings : dict

Run-level settings passed to pymoo.optimize.minimize() and forwarded to algorithm.setup() (e.g. seed, verbose, termination).

pymoo_results : pymoo.core.result.Result

The result object returned by pymoo.optimize.minimize() after the optimization completes.

pareto : dict

Pareto front results for multi-objective optimizations. Contains keys ‘X’ (dict mapping each design variable name to its values across all Pareto solutions), ‘F’ (dict mapping each objective name to its values across all Pareto solutions), ‘X_raw’ (raw design variable array from pymoo, shape (n_solutions, n_vars)), and ‘F_raw’ (raw objective array from pymoo, shape (n_solutions, n_objs)). Populated only when a multi-objective optimizer is used. Individual variables are accessed as pareto['X']['dv_name'] and pareto['F']['obj_name']. The raw arrays are intended for use with pymoo visualization utilities.

alg_class : type

The pymoo algorithm class resolved from the ‘optimizer’ option.

_model_ran : bool

Flag indicating whether the model has been run at least once, used to control relevance filtering on subsequent evaluations.

_moo_prob : pymooProblem or None

The pymoo problem wrapper built during run().

_problem_comm : MPI.Comm or None

The full problem-level communicator across all ranks. Stored in _setup_comm before Problem.setup() runs. Used by the MPI runner to coordinate population distribution across all ranks.

Methods

add_recorder(recorder)

Add a recorder to the driver.

check_relevance()

Check if there are constraints that don't depend on any design vars.

cleanup()

Clean up resources prior to exit.

compute_lagrange_multipliers([...])

Get the approximated Lagrange multipliers of one or more constraints.

declare_coloring([num_full_jacs, tol, ...])

Set options for total deriv coloring.

get_algorithm(alg_name)

Return the pymoo algorithm class for the given algorithm name.

get_coloring_fname([mode])

Get the filename for the coloring file.

get_constraint_values([ctype, lintype, ...])

Return constraint values.

get_design_var_values([get_remote, ...])

Return the design variable values.

get_driver_derivative_calls()

Return number of derivative evaluations made during a driver run.

get_driver_objective_calls()

Return number of objective evaluations made during a driver run.

get_exit_status()

Return exit status of driver run.

get_objective_values([driver_scaling])

Return objective values.

get_reports_dir()

Get the path to the directory where the report files should go.

record_derivatives()

Record the current total jacobian.

record_iteration()

Record an iteration of the current Driver.

run()

Optimize the problem using the selected pymoo optimizer.

scaling_report([outfile, title, ...])

Generate a self-contained html file containing a detailed driver scaling report.

set_design_var(name, value[, set_remote, ...])

Set the value of a design variable.

use_fixed_coloring([coloring])

Tell the driver to use a precomputed coloring.

__init__(**kwargs)[source]

Initialize the pymooDriver.

Parameters:
**kwargs : dict of keyword arguments

Keyword arguments that will be mapped into the Driver options.

get_algorithm(alg_name)[source]

Return the pymoo algorithm class for the given algorithm name.

Parameters:
alg_name : str

Name of the algorithm; must be a member of _all_optimizers.

Returns:
type

The pymoo algorithm class corresponding to alg_name.

run()[source]

Optimize the problem using the selected pymoo optimizer.

Returns:
bool

Failure flag; True if the optimization failed to find a feasible solution, False if successful.

class openmdao.drivers.pymoo_driver.pymooProblem(driver, x_info, ieq_con_info, eq_con_info, obj_info, runner=None)[source]

Bases: ElementwiseProblem

Pymoo ElementwiseProblem that delegates function evaluation to an OpenMDAO driver.

Wraps an OpenMDAO problem as a pymoo optimization problem, translating between pymoo’s interface and OpenMDAO’s driver interface. Inequality constraints are converted to the pymoo convention (g <= 0) and equality constraints to (h == 0). When MixedVariableGA is selected or discrete (integer) design variables are present, pymoo’s vars dict interface is used so that pymoo samples and mutates variable types correctly.
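The sign conventions can be sketched as follows; the helper names are illustrative, not the wrapper's internal functions:

```python
# OpenMDAO-style bounds mapped onto pymoo's conventions:
#   inequality: g(x) <= 0  (positive values mean "violated")
#   equality:   h(x) == 0

def to_pymoo_ineq(value, bound, is_upper):
    # upper bound  value <= bound  ->  g = value - bound
    # lower bound  value >= bound  ->  g = bound - value
    return value - bound if is_upper else bound - value

def to_pymoo_eq(value, equals):
    # equality  value == equals  ->  h = value - equals
    return value - equals
```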

Parameters:
driver : pymooDriver

The OpenMDAO driver managing the optimization.

x_info : dict

Design variable metadata with keys ‘vars’, ‘indices’, ‘lower’, ‘upper’.

ieq_con_info : dict

Inequality constraint metadata with keys ‘vars’, ‘indices’, ‘is_upper’, ‘bound’.

eq_con_info : dict

Equality constraint metadata with keys ‘vars’, ‘indices’, ‘equals’.

obj_info : dict

Objective metadata with keys ‘vars’, ‘indices’, ‘size’.

runner : callable or None

Pymoo elementwise runner. Pass an MPIElementwiseRunner instance for population-level MPI parallelism. If None, uses pymoo’s default sequential LoopedElementwiseEvaluation.

Attributes:
driver : pymooDriver

Reference to the OpenMDAO driver.

x_info : dict

Design variable metadata.

ieq_con_info : dict

Inequality constraint metadata.

eq_con_info : dict

Equality constraint metadata.

obj_info : dict

Objective metadata.

fail : bool

Flag set to True if an exception occurred during evaluation.

Methods

bounds

do

evaluate

has_bounds

has_constraints

ideal_point

nadir_point

name

pareto_front

pareto_set

__init__(driver, x_info, ieq_con_info, eq_con_info, obj_info, runner=None)[source]

Initialize the pymooProblem.

Parameters:
driver : pymooDriver

The OpenMDAO driver managing the optimization.

x_info : dict

Design variable metadata with keys ‘vars’, ‘indices’, ‘lower’, ‘upper’.

ieq_con_info : dict

Inequality constraint metadata with keys ‘vars’, ‘indices’, ‘is_upper’, ‘bound’.

eq_con_info : dict

Equality constraint metadata with keys ‘vars’, ‘indices’, ‘equals’.

obj_info : dict

Objective metadata with keys ‘vars’, ‘indices’, ‘size’.

runner : callable or None

Pymoo elementwise runner. Pass an MPIElementwiseRunner instance for population-level MPI parallelism. If None, uses pymoo’s default sequential LoopedElementwiseEvaluation.