pyOptSparseDriver
pyOptSparseDriver wraps the optimizer package pyOptSparse, which provides a common interface to 11 optimizers, some of which are included in the package (e.g., SLSQP and NSGA2), and some of which are commercial products that must be obtained from their respective authors (e.g., SNOPT). The pyOptSparse package is based on pyOpt, but adds support for sparse specification of constraint Jacobians. Most of the sparsity features are only applicable when using the SNOPT optimizer.
Note
The pyOptSparse package does not come included with the OpenMDAO installation. It is a separate optional package that can be obtained from mdolab.
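If you want to confirm that the package is available before building a model, a minimal check (assuming the package is importable as pyoptsparse, which is the package that pyOptSparseDriver wraps) looks like this:
try:
    import pyoptsparse  # the underlying package wrapped by pyOptSparseDriver
except ImportError:
    print("pyOptSparse is not installed; obtain it from mdolab before using pyOptSparseDriver.")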
In this simple example, we use the SLSQP optimizer to minimize the objective of SellarDerivativesGrouped.
import numpy as np
from openmdao.api import Problem, pyOptSparseDriver
from openmdao.test_suite.components.sellar import SellarDerivativesGrouped
prob = Problem()
model = prob.model = SellarDerivativesGrouped()
prob.driver = pyOptSparseDriver()
prob.driver.options['optimizer'] = "SLSQP"
model.add_design_var('z', lower=np.array([-10.0, 0.0]), upper=np.array([10.0, 10.0]))
model.add_design_var('x', lower=0.0, upper=10.0)
model.add_objective('obj')
model.add_constraint('con1', upper=0.0)
model.add_constraint('con2', upper=0.0)
prob.set_solver_print(level=0)
prob.setup(check=False, mode='rev')
prob.run_driver()
Optimization Problem -- Optimization using pyOpt_sparse
================================================================================
    Objective Function: _objfunc

    Solution:
--------------------------------------------------------------------------------
    Total Time:                    0.0818
       User Objective Time :       0.0285
       User Sensitivity Time :     0.0465
       Interface Time :            0.0060
       Opt Solver Time:            0.0008
    Calls to Objective Function :       6
    Calls to Sens Function :            6

    Objectives
        Index  Name          Value          Optimum
            0  obj_cmp.obj   3.183394E+00   0.000000E+00

    Variables (c - continuous, i - integer, d - discrete)
        Index  Name     Type   Lower Bound       Value            Upper Bound     Status
            0  pz.z_0    c     -1.000000E+01      1.977639E+00    1.000000E+01
            1  pz.z_1    c      0.000000E+00     -5.387670E-15    1.000000E+01    l
            2  px.x_0    c      0.000000E+00      8.372099E-15    1.000000E+01    l

    Constraints (i - inequality, e - equality)
        Index  Name            Type   Lower             Value            Upper           Status
            0  con_cmp1.con1    i     -1.797693E+308    -1.251692E-10    0.000000E+00    u
            1  con_cmp2.con2    i     -1.797693E+308    -2.024472E+01    0.000000E+00
--------------------------------------------------------------------------------
print(prob['z'][0])
1.97763888351
pyOptSparseDriver Options
Option | Default | Acceptable Values | Acceptable Types | Description |
---|---|---|---|---|
debug_print | [] | N/A | [‘list’] | List of what type of Driver variables to print at each iteration. Valid items in list are ‘desvars’, ‘ln_cons’, ‘nl_cons’, ‘objs’, ‘totals’ |
dynamic_derivs_repeats | 3 | N/A | [‘int’] | Number of compute_totals calls during dynamic computation of simultaneous derivative coloring or derivatives sparsity |
dynamic_derivs_sparsity | False | N/A | [‘bool’] | Compute derivative sparsity dynamically if True |
dynamic_simul_derivs | False | N/A | [‘bool’] | Compute simultaneous derivative coloring dynamically if True |
gradient method | openmdao | [{‘snopt_fd’, ‘openmdao’, ‘pyopt_fd’}] | N/A | Finite difference implementation to use |
optimizer | SLSQP | [‘ALPSO’, ‘CONMIN’, ‘FSQP’, ‘IPOPT’, ‘NLPQLP’, ‘NSGA2’, ‘PSQP’, ‘SLSQP’, ‘SNOPT’, ‘NLPY_AUGLAG’, ‘NOMAD’] | N/A | Name of optimizer to use |
print_results | True | N/A | [‘bool’] | Print pyOpt results if True |
title | Optimization using pyOpt_sparse | N/A | N/A | Title of this optimization run |
Note: Options can be passed as keyword arguments at initialization.
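As an illustrative sketch (not a complete run script), the options above can be set either as keyword arguments when the driver is created or afterward through the options dictionary; the option names and valid values here come from the table above, and the title string is just an example:
from openmdao.api import pyOptSparseDriver

driver = pyOptSparseDriver(optimizer='SLSQP', title='Sellar optimization')  # options as keyword arguments
driver.options['debug_print'] = ['desvars', 'nl_cons', 'objs']  # echo these quantities at each driver iteration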
pyOptSparseDriver has a small number of unified options that can be specified as keyword arguments when it is instantiated or by using the “options” dictionary. We have already shown how to set the optimizer option. Next, we use the print_results option to turn the echoing of results on or off when the optimization finishes. The default is True, but here we turn it off.
import numpy as np
from openmdao.api import Problem, pyOptSparseDriver
from openmdao.test_suite.components.sellar import SellarDerivativesGrouped
prob = Problem()
model = prob.model = SellarDerivativesGrouped()
prob.driver = pyOptSparseDriver(optimizer='SLSQP')
prob.driver.options['print_results'] = False
model.add_design_var('z', lower=np.array([-10.0, 0.0]), upper=np.array([10.0, 10.0]))
model.add_design_var('x', lower=0.0, upper=10.0)
model.add_objective('obj')
model.add_constraint('con1', upper=0.0)
model.add_constraint('con2', upper=0.0)
prob.set_solver_print(level=0)
prob.setup(check=False, mode='rev')
prob.run_driver()
print(prob['z'][0])
1.97763888351
Every optimizer also has its own specialized settings that allow you to fine-tune the algorithm that it uses. You can access these through the opt_settings dictionary. These options differ for each optimizer, so to find out what they are, you need to read your optimizer’s documentation. We present a few common ones here.
SLSQP-Specific Settings
Here, we set a convergence tolerance for SLSQP:
import numpy as np
from openmdao.api import Problem, pyOptSparseDriver
from openmdao.test_suite.components.sellar import SellarDerivativesGrouped
prob = Problem()
model = prob.model = SellarDerivativesGrouped()
prob.driver = pyOptSparseDriver()
prob.driver.options['optimizer'] = "SLSQP"
prob.driver.opt_settings['ACC'] = 1e-9
model.add_design_var('z', lower=np.array([-10.0, 0.0]), upper=np.array([10.0, 10.0]))
model.add_design_var('x', lower=0.0, upper=10.0)
model.add_objective('obj')
model.add_constraint('con1', upper=0.0)
model.add_constraint('con2', upper=0.0)
prob.set_solver_print(level=0)
prob.setup(check=False, mode='rev')
prob.run_driver()
Optimization Problem -- Optimization using pyOpt_sparse
================================================================================
    Objective Function: _objfunc

    Solution:
--------------------------------------------------------------------------------
    Total Time:                    0.0881
       User Objective Time :       0.0297
       User Sensitivity Time :     0.0515
       Interface Time :            0.0061
       Opt Solver Time:            0.0008
    Calls to Objective Function :       6
    Calls to Sens Function :            6

    Objectives
        Index  Name          Value          Optimum
            0  obj_cmp.obj   3.183394E+00   0.000000E+00

    Variables (c - continuous, i - integer, d - discrete)
        Index  Name     Type   Lower Bound       Value            Upper Bound     Status
            0  pz.z_0    c     -1.000000E+01      1.977639E+00    1.000000E+01
            1  pz.z_1    c      0.000000E+00     -5.387670E-15    1.000000E+01    l
            2  px.x_0    c      0.000000E+00      8.372099E-15    1.000000E+01    l

    Constraints (i - inequality, e - equality)
        Index  Name            Type   Lower             Value            Upper           Status
            0  con_cmp1.con1    i     -1.797693E+308    -1.251692E-10    0.000000E+00    u
            1  con_cmp2.con2    i     -1.797693E+308    -2.024472E+01    0.000000E+00
--------------------------------------------------------------------------------
print(prob['z'][0])
1.97763888351
Similarly, we can set an iteration limit. Here, we set it to just a few iterations, and don’t quite reach the optimum.
import numpy as np
from openmdao.api import Problem, pyOptSparseDriver
from openmdao.test_suite.components.sellar import SellarDerivativesGrouped
prob = Problem()
model = prob.model = SellarDerivativesGrouped()
prob.driver = pyOptSparseDriver()
prob.driver.options['optimizer'] = "SLSQP"
prob.driver.opt_settings['MAXIT'] = 3
model.add_design_var('z', lower=np.array([-10.0, 0.0]), upper=np.array([10.0, 10.0]))
model.add_design_var('x', lower=0.0, upper=10.0)
model.add_objective('obj')
model.add_constraint('con1', upper=0.0)
model.add_constraint('con2', upper=0.0)
prob.set_solver_print(level=0)
prob.setup(check=False, mode='rev')
prob.run_driver()
Optimization Problem -- Optimization using pyOpt_sparse
================================================================================
    Objective Function: _objfunc

    Solution:
--------------------------------------------------------------------------------
    Total Time:                    0.0546
       User Objective Time :       0.0201
       User Sensitivity Time :     0.0298
       Interface Time :            0.0042
       Opt Solver Time:            0.0005
    Calls to Objective Function :       4
    Calls to Sens Function :            4

    Objectives
        Index  Name          Value          Optimum
            0  obj_cmp.obj   3.203561E+00   0.000000E+00

    Variables (c - continuous, i - integer, d - discrete)
        Index  Name     Type   Lower Bound       Value            Upper Bound     Status
            0  pz.z_0    c     -1.000000E+01      1.983377E+00    1.000000E+01
            1  pz.z_1    c      0.000000E+00     -7.880881E-13    1.000000E+01    l
            2  px.x_0    c      0.000000E+00     -7.583490E-15    1.000000E+01    l

    Constraints (i - inequality, e - equality)
        Index  Name            Type   Lower             Value            Upper           Status
            0  con_cmp1.con1    i     -1.797693E+308    -2.043382E-02    0.000000E+00
            1  con_cmp2.con2    i     -1.797693E+308    -2.023325E+01    0.000000E+00
--------------------------------------------------------------------------------
print(prob['z'][0])
1.98337708331
SNOPT-Specific Settings
SNOPT has many customizable settings. Here we show two common ones.
Setting the convergence tolerance:
import numpy as np
from openmdao.api import Problem, pyOptSparseDriver
from openmdao.test_suite.components.sellar import SellarDerivativesGrouped
prob = Problem()
model = prob.model = SellarDerivativesGrouped()
prob.driver = pyOptSparseDriver()
prob.driver.options['optimizer'] = "SNOPT"
prob.driver.opt_settings['Major feasibility tolerance'] = 1e-9
model.add_design_var('z', lower=np.array([-10.0, 0.0]), upper=np.array([10.0, 10.0]))
model.add_design_var('x', lower=0.0, upper=10.0)
model.add_objective('obj')
model.add_constraint('con1', upper=0.0)
model.add_constraint('con2', upper=0.0)
prob.set_solver_print(level=0)
prob.setup(check=False, mode='rev')
prob.run_driver()
Optimization Problem -- Optimization using pyOpt_sparse
================================================================================
    Objective Function: _objfunc

    Solution:
--------------------------------------------------------------------------------
    Total Time:                    0.0855
       User Objective Time :       0.0307
       User Sensitivity Time :     0.0472
       Interface Time :            0.0057
       Opt Solver Time:            0.0018
    Calls to Objective Function :       7
    Calls to Sens Function :            6

    Objectives
        Index  Name          Value          Optimum
            0  obj_cmp.obj   3.183394E+00   0.000000E+00

    Variables (c - continuous, i - integer, d - discrete)
        Index  Name     Type   Lower Bound       Value            Upper Bound     Status
            0  pz.z_0    c     -1.000000E+01      1.977639E+00    1.000000E+01
            1  pz.z_1    c      0.000000E+00      0.000000E+00    1.000000E+01    l
            2  px.x_0    c      0.000000E+00      0.000000E+00    1.000000E+01    l

    Constraints (i - inequality, e - equality)
        Index  Name            Type   Lower             Value            Upper           Status
            0  con_cmp1.con1    i     -1.797693E+308    -1.489888E-07    0.000000E+00    u
            1  con_cmp2.con2    i     -1.797693E+308    -2.024472E+01    0.000000E+00
--------------------------------------------------------------------------------
print(prob['z'][0])
1.97763892535
Setting a limit on the number of major iterations; here, we set it to just a few iterations and don’t quite reach the optimum:
import numpy as np
from openmdao.api import Problem, pyOptSparseDriver
from openmdao.test_suite.components.sellar import SellarDerivativesGrouped
prob = Problem()
model = prob.model = SellarDerivativesGrouped()
prob.driver = pyOptSparseDriver()
prob.driver.options['optimizer'] = "SNOPT"
prob.driver.opt_settings['Major iterations limit'] = 4
model.add_design_var('z', lower=np.array([-10.0, 0.0]), upper=np.array([10.0, 10.0]))
model.add_design_var('x', lower=0.0, upper=10.0)
model.add_objective('obj')
model.add_constraint('con1', upper=0.0)
model.add_constraint('con2', upper=0.0)
prob.set_solver_print(level=0)
prob.setup(check=False, mode='rev')
prob.run_driver()
Optimization Problem -- Optimization using pyOpt_sparse
================================================================================
    Objective Function: _objfunc

    Solution:
--------------------------------------------------------------------------------
    Total Time:                    0.0808
       User Objective Time :       0.0313
       User Sensitivity Time :     0.0422
       Interface Time :            0.0056
       Opt Solver Time:            0.0016
    Calls to Objective Function :       6
    Calls to Sens Function :            5

    Objectives
        Index  Name          Value          Optimum
            0  obj_cmp.obj   3.184748E+00   0.000000E+00

    Variables (c - continuous, i - integer, d - discrete)
        Index  Name     Type   Lower Bound       Value            Upper Bound     Status
            0  pz.z_0    c     -1.000000E+01      1.978025E+00    1.000000E+01
            1  pz.z_1    c      0.000000E+00      0.000000E+00    1.000000E+01    l
            2  px.x_0    c      0.000000E+00      0.000000E+00    1.000000E+01    l

    Constraints (i - inequality, e - equality)
        Index  Name            Type   Lower             Value            Upper           Status
            0  con_cmp1.con1    i     -1.797693E+308    -1.372125E-03    0.000000E+00
            1  con_cmp2.con2    i     -1.797693E+308    -2.024395E+01    0.000000E+00
--------------------------------------------------------------------------------
print(prob['z'][0])
1.97802478193
You can learn more about the available options in the SNOPT manual.
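For example, SNOPT’s print and summary output can also be routed to named files through opt_settings; the option names below (‘Print file’ and ‘Summary file’) are taken from the SNOPT documentation rather than from the runs above, and the file names are just placeholders:
prob.driver.opt_settings['Print file'] = 'sellar_SNOPT_print.out'      # detailed SNOPT print output
prob.driver.opt_settings['Summary file'] = 'sellar_SNOPT_summary.out'  # condensed iteration summary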