NewtonSolver#

NewtonSolver implements Newton’s method to solve the system that contains it. This is the most general solver in OpenMDAO, in that it can solve any topology including cyclic connections and implicit states in the system or subsystems. Newton’s method requires derivatives, so a linear solver can also be specified. By default, NewtonSolver uses the linear solver that is slotted in the containing system.

OpenMDAO’s implementation is very close to pure Newton’s method, but it adds line searches, bounds enforcement on state variables, and techniques to unstick the solver if it stalls.
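
As a quick refresher, the scalar form of Newton's method updates a guess with x_new = x - f(x) / f'(x) until the residual f(x) is small. The sketch below is purely illustrative (the function, starting point, and tolerances are arbitrary choices, not anything from OpenMDAO); the full solver applies the same update to the entire vector of residuals, using the linear solver to apply the inverse Jacobian.

def newton_scalar(f, dfdx, x0, atol=1e-10, maxiter=10):
    # Illustrative scalar Newton iteration: x <- x - f(x) / f'(x).
    x = x0
    for _ in range(maxiter):
        r = f(x)
        if abs(r) < atol:
            break
        x -= r / dfdx(x)
    return x

# Solve x**2 - 2 = 0 starting from x = 1.5; the root is sqrt(2).
print(newton_scalar(lambda x: x**2 - 2.0, lambda x: 2.0 * x, 1.5))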

import numpy as np
import openmdao.api as om

from openmdao.test_suite.components.sellar import SellarDis1withDerivatives, SellarDis2withDerivatives

prob = om.Problem()
model = prob.model

model.add_subsystem('d1', SellarDis1withDerivatives(), promotes=['x', 'z', 'y1', 'y2'])
model.add_subsystem('d2', SellarDis2withDerivatives(), promotes=['z', 'y1', 'y2'])

model.add_subsystem('obj_cmp', om.ExecComp('obj = x**2 + z[1] + y1 + exp(-y2)',
                                           z=np.array([0.0, 0.0]), x=0.0),
                    promotes=['obj', 'x', 'z', 'y1', 'y2'])

model.add_subsystem('con_cmp1', om.ExecComp('con1 = 3.16 - y1'), promotes=['con1', 'y1'])
model.add_subsystem('con_cmp2', om.ExecComp('con2 = y2 - 24.0'), promotes=['con2', 'y2'])

model.linear_solver = om.DirectSolver()

model.nonlinear_solver = om.NewtonSolver(solve_subsystems=False)

prob.setup()

prob.set_val('x', 1.)
prob.set_val('z', np.array([5.0, 2.0]))

prob.run_model()

print(prob.get_val('y1'))
print(prob.get_val('y2'))
NL: Newton Converged in 3 iterations
[25.58830237]
[12.05848815]

Most of the solvers in OpenMDAO operate hierarchically, in that you can use solvers on subgroups to subdivide the calculation effort. However, NewtonSolver is an exception. It does not call solve_nonlinear on its subsystems, nor does it pass data along the connections. Instead, the Newton solver sets all inputs in all systems and subsystems that it contains as it steps along the Newton direction, driving the residuals to convergence. After each iteration, the iteration count and the residual norm are checked to see if the termination criteria have been satisfied.
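
Conceptually, that check combines the maxiter option with the atol and rtol tolerances listed in the options table below. The function here is only an illustrative sketch of the test, not OpenMDAO's internal implementation.

def newton_should_stop(iter_count, norm, norm0, maxiter=10, atol=1e-10, rtol=1e-10):
    # Illustrative termination test: stop at the iteration limit, or when the
    # absolute or relative residual norm drops below its tolerance.
    return iter_count >= maxiter or norm < atol or norm / norm0 < rtol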

NewtonSolver Options#

| Option | Default | Acceptable Values | Acceptable Types | Description |
| --- | --- | --- | --- | --- |
| atol | 1e-10 | N/A | N/A | absolute error tolerance |
| cs_reconverge | True | [True, False] | ['bool'] | When True, when this driver solves under a complex step, nudge the solution vector by a small amount so that it reconverges. |
| debug_print | False | [True, False] | ['bool'] | If true, the values of input and output variables at the start of iteration are printed and written to a file after a failure to converge. |
| err_on_non_converge | False | [True, False] | ['bool'] | When True, AnalysisError will be raised if we don't converge. |
| iprint | 1 | N/A | ['int'] | whether to print output |
| max_sub_solves | 10 | N/A | ['int'] | Maximum number of subsystem solves. |
| maxiter | 10 | N/A | ['int'] | maximum number of iterations |
| reraise_child_analysiserror | False | [True, False] | ['bool'] | When the option is true, a solver will reraise any AnalysisError that arises during subsolve; when false, it will continue solving. |
| restart_from_successful | False | [True, False] | ['bool'] | If True, the states are cached after a successful solve and used to restart the solver in the case of a failed solve. |
| rtol | 1e-10 | N/A | N/A | relative error tolerance |
| solve_subsystems | **Required** | [True, False] | ['bool'] | Set to True to turn on sub-solvers (Hybrid Newton). |
| stall_limit | 0 | N/A | N/A | Number of iterations after which, if the residual norms are identical within the stall_tol, then terminate as if max iterations were reached. Default is 0, which disables this feature. |
| stall_tol | 1e-12 | N/A | N/A | When stall checking is enabled, the threshold below which the residual norm is considered unchanged. |
| stall_tol_type | rel | ['abs', 'rel'] | N/A | Specifies whether the absolute or relative norm of the residual is used for stall detection. |
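
The restart_from_successful option is not shown in the examples below; it can be useful alongside err_on_non_converge when a driver runs the model repeatedly. The snippet that follows is a minimal sketch and assumes a model group like the ones in the examples on this page.

# Sketch: cache the states after each successful solve so that a later failed
# solve restarts from the last good point, and raise AnalysisError on failure.
newton = model.nonlinear_solver = om.NewtonSolver(solve_subsystems=False)
newton.options['restart_from_successful'] = True
newton.options['err_on_non_converge'] = True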

NewtonSolver Constructor#

The call signature for the NewtonSolver constructor is:

NewtonSolver.__init__(**kwargs)

Initialize all attributes.
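
Options can generally be passed as keyword arguments at construction, as the examples on this page do with solve_subsystems; the equivalent post-construction form sets entries on the options dictionary. The option values below are chosen only for illustration.

# Options passed as keyword arguments at construction time...
newton = om.NewtonSolver(solve_subsystems=False, maxiter=20, iprint=2)

# ...are equivalent to setting them on the options dictionary afterward.
newton = om.NewtonSolver(solve_subsystems=False)
newton.options['maxiter'] = 20
newton.options['iprint'] = 2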

NewtonSolver Option Examples#

maxiter

maxiter lets you specify the maximum number of Newton iterations to apply. In this example, we cut it back from the default of 10 down to 2, so that the solver terminates before it reaches the specified absolute or relative tolerance.

from openmdao.test_suite.components.sellar import SellarDis1withDerivatives, SellarDis2withDerivatives

prob = om.Problem()
model = prob.model

model.add_subsystem('d1', SellarDis1withDerivatives(), promotes=['x', 'z', 'y1', 'y2'])
model.add_subsystem('d2', SellarDis2withDerivatives(), promotes=['z', 'y1', 'y2'])

model.add_subsystem('obj_cmp', om.ExecComp('obj = x**2 + z[1] + y1 + exp(-y2)',
                                        z=np.array([0.0, 0.0]), x=0.0),
                    promotes=['obj', 'x', 'z', 'y1', 'y2'])

model.add_subsystem('con_cmp1', om.ExecComp('con1 = 3.16 - y1'), promotes=['con1', 'y1'])
model.add_subsystem('con_cmp2', om.ExecComp('con2 = y2 - 24.0'), promotes=['con2', 'y2'])

model.linear_solver = om.DirectSolver()

newton = model.nonlinear_solver = om.NewtonSolver(solve_subsystems=False)
newton.options['maxiter'] = 2

prob.setup()

prob.set_val('x', 1.)
prob.set_val('z', np.array([5.0, 2.0]))

prob.run_model()

print(prob.get_val('y1'))
print(prob.get_val('y2'))
NL: NewtonSolver 'NL: Newton' on system '' failed to converge in 2 iterations.
[25.58785168]
[12.06074161]

atol

Here, we set the absolute tolerance to a looser value so that the solver can terminate earlier. After each iteration, the norm of the residuals is calculated by calling apply_nonlinear on all of the components. If this norm is lower than the absolute tolerance atol, the iteration terminates.

from openmdao.test_suite.components.sellar import SellarDis1withDerivatives, SellarDis2withDerivatives

prob = om.Problem()
model = prob.model

model.add_subsystem('d1', SellarDis1withDerivatives(), promotes=['x', 'z', 'y1', 'y2'])
model.add_subsystem('d2', SellarDis2withDerivatives(), promotes=['z', 'y1', 'y2'])

model.add_subsystem('obj_cmp', om.ExecComp('obj = x**2 + z[1] + y1 + exp(-y2)',
                                        z=np.array([0.0, 0.0]), x=0.0),
                    promotes=['obj', 'x', 'z', 'y1', 'y2'])

model.add_subsystem('con_cmp1', om.ExecComp('con1 = 3.16 - y1'), promotes=['con1', 'y1'])
model.add_subsystem('con_cmp2', om.ExecComp('con2 = y2 - 24.0'), promotes=['con2', 'y2'])

model.linear_solver = om.DirectSolver()

newton = model.nonlinear_solver = om.NewtonSolver(solve_subsystems=False)
newton.options['atol'] = 1e-4

prob.setup()

prob.set_val('x', 1.)
prob.set_val('z', np.array([5.0, 2.0]))

prob.run_model()

print(prob.get_val('y1'))
print(prob.get_val('y2'))
NL: Newton Converged in 3 iterations
[25.58830237]
[12.05848815]

rtol

Here, we set the relative tolerance to a looser value that will trigger an earlier termination. After each iteration, the norm of the residuals is calculated by calling apply_nonlinear on all of the components. If the ratio of the currently calculated norm to the initial residual norm is lower than the relative tolerance rtol, the iteration will terminate.

from openmdao.test_suite.components.sellar import SellarDis1withDerivatives, SellarDis2withDerivatives

prob = om.Problem()
model = prob.model

model.add_subsystem('d1', SellarDis1withDerivatives(), promotes=['x', 'z', 'y1', 'y2'])
model.add_subsystem('d2', SellarDis2withDerivatives(), promotes=['z', 'y1', 'y2'])

model.add_subsystem('obj_cmp', om.ExecComp('obj = x**2 + z[1] + y1 + exp(-y2)',
                                        z=np.array([0.0, 0.0]), x=0.0),
                    promotes=['obj', 'x', 'z', 'y1', 'y2'])

model.add_subsystem('con_cmp1', om.ExecComp('con1 = 3.16 - y1'), promotes=['con1', 'y1'])
model.add_subsystem('con_cmp2', om.ExecComp('con2 = y2 - 24.0'), promotes=['con2', 'y2'])

model.linear_solver = om.DirectSolver()

newton = model.nonlinear_solver = om.NewtonSolver(solve_subsystems=False)
newton.options['rtol'] = 1e-3

prob.setup()

prob.set_val('x', 1.)
prob.set_val('z', np.array([5.0, 2.0]))

prob.run_model()

print(prob.get_val('y1'))
print(prob.get_val('y2'))
NL: Newton Converged in 2 iterations
[25.58785168]
[12.06074161]

solve_subsystems

If you set this option to True, NewtonSolver will call solve_nonlinear on all of its subsystems. You can use this to solve difficult multi-level problems by attaching solvers to subsystems. This ensures that those subsystems are already in an internally solved state whenever the outer Newton solver evaluates them.

This example shows two instances of the Sellar model, which we have connected together to form a larger cycle. We specify a Newton solver in each Sellar subgroup as well as a top-level Newton solver, which we tell to solve its subsystems.

from openmdao.test_suite.components.double_sellar import DoubleSellar

prob = om.Problem(model=DoubleSellar())
model = prob.model

g1 = model.g1
g1.nonlinear_solver = om.NewtonSolver(solve_subsystems=False)
g1.nonlinear_solver.options['rtol'] = 1.0e-5
g1.linear_solver = om.DirectSolver()

g2 = model.g2
g2.nonlinear_solver = om.NewtonSolver(solve_subsystems=False)
g2.nonlinear_solver.options['rtol'] = 1.0e-5
g2.linear_solver = om.DirectSolver()

model.nonlinear_solver = om.NewtonSolver()
model.linear_solver = om.ScipyKrylov()

model.nonlinear_solver.options['solve_subsystems'] = True

prob.setup()
prob.run_model()
==
g1
==
NL: Newton Converged in 2 iterations

==
g2
==
NL: Newton Converged in 2 iterations

==
g1
==
NL: Newton Converged in 2 iterations

==
g2
==
NL: Newton Converged in 2 iterations

==
g1
==
NL: Newton Converged in 1 iterations

==
g2
==
NL: Newton Converged in 1 iterations

==
g1
==
NL: Newton Converged in 0 iterations

==
g2
==
NL: Newton Converged in 0 iterations
NL: Newton Converged in 3 iterations
print(prob.get_val('g1.y1'))
print(prob.get_val('g1.y2'))
print(prob.get_val('g2.y1'))
print(prob.get_val('g2.y2'))
[0.64]
[0.8]
[0.64]
[0.8]

max_sub_solves

This option is used in conjunction with the “solve_subsystems” option. It controls the number of iterations for which NewtonSolver will allow subsystems to solve themselves; once the iteration count exceeds max_sub_solves, Newton reverts to its default behavior and no longer calls solve_nonlinear on its subsystems.

For example, if you set max_sub_solves to zero, then the solvers on subsystems are executed during the initial evaluation, but not during any subsequent iteration.

from openmdao.test_suite.components.double_sellar import SubSellar

prob = om.Problem()
model = prob.model

model.add_subsystem('g1', SubSellar())
model.add_subsystem('g2', SubSellar())

model.connect('g1.y2', 'g2.x')
model.connect('g2.y2', 'g1.x')


g1 = model.g1
g1.nonlinear_solver = om.NewtonSolver(solve_subsystems=False)
g1.nonlinear_solver.options['rtol'] = 1.0e-5
g1.linear_solver = om.DirectSolver()

g2 = model.g2
g2.nonlinear_solver = om.NewtonSolver(solve_subsystems=False)
g2.nonlinear_solver.options['rtol'] = 1.0e-5
g2.linear_solver = om.DirectSolver()

model.nonlinear_solver = om.NewtonSolver()
model.linear_solver = om.ScipyKrylov()

model.nonlinear_solver.options['solve_subsystems'] = True
model.nonlinear_solver.options['max_sub_solves'] = 0

prob.setup()
prob.run_model()
==
g1
==
NL: Newton Converged in 2 iterations

==
g2
==
NL: Newton Converged in 2 iterations
NL: Newton Converged in 3 iterations

err_on_non_converge

If you set this option to True, then when the solver fails to converge, either by hitting the iteration limit without meeting the tolerance criteria or by encountering a NaN or inf, it will raise an AnalysisError exception. This is mainly useful when the solver is coupled with a higher-level solver or driver (e.g., pyOptSparseDriver) that can catch the AnalysisError, adapt the step size, and retry. This feature can be set on any iterative nonlinear or linear solver.

from openmdao.test_suite.components.sellar import SellarDis1withDerivatives, SellarDis2withDerivatives

prob = om.Problem()
model = prob.model

model.add_subsystem('d1', SellarDis1withDerivatives(), promotes=['x', 'z', 'y1', 'y2'])
model.add_subsystem('d2', SellarDis2withDerivatives(), promotes=['z', 'y1', 'y2'])

model.add_subsystem('obj_cmp', om.ExecComp('obj = x**2 + z[1] + y1 + exp(-y2)',
                                        z=np.array([0.0, 0.0]), x=0.0),
                    promotes=['obj', 'x', 'z', 'y1', 'y2'])

model.add_subsystem('con_cmp1', om.ExecComp('con1 = 3.16 - y1'), promotes=['con1', 'y1'])
model.add_subsystem('con_cmp2', om.ExecComp('con2 = y2 - 24.0'), promotes=['con2', 'y2'])

model.linear_solver = om.DirectSolver()

newton = model.nonlinear_solver = om.NewtonSolver(solve_subsystems=False)
newton.options['maxiter'] = 1
newton.options['err_on_non_converge'] = True

prob.setup()

prob.set_val('x', 1.)
prob.set_val('z', np.array([5.0, 2.0]))

try:
    prob.run_model()
except om.AnalysisError:
    pass
NL: NewtonSolver 'NL: Newton' on system '' failed to converge in 1 iterations.
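
As a rough illustration only (not a pattern from OpenMDAO itself; a real driver such as pyOptSparseDriver adapts its own step instead), a caller could catch the AnalysisError raised above and retry with a relaxed iteration limit.

# Hedged sketch continuing the example above, where prob and newton already exist:
# escalate maxiter and re-run until the solver converges or the limits run out.
for limit in (1, 5, 20):
    newton.options['maxiter'] = limit
    try:
        prob.run_model()
        break  # converged; stop escalating
    except om.AnalysisError:
        print(f"No convergence with maxiter={limit}; retrying with a larger limit.")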

stall_limit, stall_tol, and stall_tol_type

In some cases, a nonlinear solver can stall, with the norm of the residual no longer changing from one iteration to the next. This can happen for a couple of reasons: numerical noise can leave the solver wandering around the same point, or the solver can get stuck on a bound, with the line search returning to the same spot every iteration. Either way, if maxiter is 100 and the solver stalls at iteration 15, a lot of compute time is wasted. To remedy this, you can turn on stall detection in any nonlinear solver by setting the “stall_limit” option to a number greater than zero.

In this example, we set stall_limit to 3. While the solver iterates, it will compare the value of the residual norm to the value computed in the previous iteration. If the value matches for three iterations in a row, then iteration will terminate due to detection of a stall. If “err_on_non_converge” is set to True, then an AnalysisError will be raised just as if we had reached the iteration count limit.

We also set stall_tol to 1e-8, the threshold below which a change in the residual norm is treated as no change at all.

The option stall_tol_type is used to specify whether the absolute or relative norm of the residual is used to detect a stall condition. If an outer solver is used in conjunction with solve_subsystems, the relative change in the residual norm will be small for those residuals converged by the inner solver. In such cases, setting this option to ‘abs’ may more reliably detect a stall.

prob = om.Problem()

prob.model.add_subsystem('comp', om.ExecComp('y=3*x+1'), promotes=['*'])

balance = prob.model.add_subsystem('balance', om.BalanceComp(),
                                   promotes=['*'])
balance.add_balance('x', lower=-.1, upper=10, rhs_val=0, lhs_name='y')

newton = prob.model.nonlinear_solver = om.NewtonSolver()
newton.options['solve_subsystems'] = True
newton.options['stall_limit'] = 3
newton.options['stall_tol'] = 1e-8
newton.options['stall_tol_type'] = 'abs'
newton.options['maxiter'] = 100

prob.model.linear_solver = om.DirectSolver()

prob.setup()
prob.set_solver_print()

prob.run_model()
NL: Newton 0 ; 4 1
|  LS: BCHK 0 ; 0.7 0.175
NL: Newton 1 ; 0.7 0.175
|  LS: BCHK 0 ; 0.7 1
NL: Newton 2 ; 0.7 0.175
|  LS: BCHK 0 ; 0.7 1
NL: Newton 3 ; 0.7 0.175
|  LS: BCHK 0 ; 0.7 1
NL: Newton 4 ; 0.7 0.175
NL: NewtonSolver 'NL: Newton' on system '' stalled after 4 iterations.

Specifying a Linear Solver#

We can choose a different linear solver for calculating the Newton step by setting the Newton solver’s linear_solver attribute. By default, Newton uses the linear solver that was specified on the containing system. In the following example, the model’s linear solver is LinearBlockGS, while the Newton solver is assigned a DirectSolver to compute its step.

from openmdao.test_suite.components.sellar import SellarDis1withDerivatives, SellarDis2withDerivatives

prob = om.Problem()
model = prob.model

model.add_subsystem('d1', SellarDis1withDerivatives(), promotes=['x', 'z', 'y1', 'y2'])
model.add_subsystem('d2', SellarDis2withDerivatives(), promotes=['z', 'y1', 'y2'])

model.add_subsystem('obj_cmp', om.ExecComp('obj = x**2 + z[1] + y1 + exp(-y2)',
                                        z=np.array([0.0, 0.0]), x=0.0),
                    promotes=['obj', 'x', 'z', 'y1', 'y2'])

model.add_subsystem('con_cmp1', om.ExecComp('con1 = 3.16 - y1'), promotes=['con1', 'y1'])
model.add_subsystem('con_cmp2', om.ExecComp('con2 = y2 - 24.0'), promotes=['con2', 'y2'])

model.linear_solver = om.LinearBlockGS()

newton = model.nonlinear_solver = om.NewtonSolver(solve_subsystems=False)

newton.linear_solver = om.DirectSolver()

prob.setup()

prob.set_val('x', 1.)
prob.set_val('z', np.array([5.0, 2.0]))

prob.run_model()

print(prob.get_val('y1'))
print(prob.get_val('y2'))
NL: Newton Converged in 3 iterations
[25.58830237]
[12.05848815]

Specifying a Line Search Algorithm#

NewtonSolver has a linesearch attribute, which accepts a supplemental algorithm that searches for a better point along the Newton search direction. This is typically used when upper or lower bounds have been declared on some of the model outputs and we want to prevent Newton from moving into that infeasible region during iteration. An algorithm that does this is called a line search.

By default, NewtonSolver does not perform a line search. We will show how to specify one. First, let’s set up a problem that has implicit bounds on one of its states.

from math import atan


class CompAtan(om.ImplicitComponent):
    """
    A simple implicit component with the following equation:

    F(x, y) = (33.0 * atan(y-20))**2 + x

    x is an input, y is the state to be solved.
    For x = -100, y should be 19.68734033.

    This equation poses a challenge because a guess that is far from the solution yields large
    gradients and divergence. Additionally, the jacobian becomes singular at y = 20. To address
    this, a lower and upper bound are added on y so that a solver with a BoundsEnforceLS does not
    allow it to stray into problematic regions.
    """

    def setup(self):
        self.add_input('x', 1.0)
        self.add_output('y', 1.0, lower=1.0, upper=19.9)

        self.declare_partials(of='y', wrt='x')
        self.declare_partials(of='y', wrt='y')

    def apply_nonlinear(self, inputs, outputs, residuals):
        x = inputs['x']
        y = outputs['y']

        residuals['y'] = (33.0 * atan(y-20.0))**2 + x

    def linearize(self, inputs, outputs, jacobian):
        y = outputs['y']

        jacobian['y', 'y'] = 2178.0*atan(y-20.0) / (y**2 - 40.0*y + 401.0)
        jacobian['y', 'x'] = 1.0

This equation poses a challenge because a guess that is far from the solution yields large gradients, and the solution will diverge. Additionally, the jacobian becomes singular at y = 20. To address both of these problems, a lower and upper bound are added on y so that a solver with a BoundsEnforceLS does not allow it to stray into problematic regions. Without the line search, Newton is unable to solve this problem unless you start very close to the solution.

Here, we specify BoundsEnforceLS as our line search algorithm, and we get the expected solution for “y”.

from openmdao.solvers.linesearch.tests.test_backtracking import CompAtan

prob = om.Problem()
model = prob.model

model.add_subsystem('comp', CompAtan(), promotes_inputs=['x'])

prob.setup()

prob.set_val('x', -100.0)

# Initial value for the state:
prob.set_val('comp.y', 12.0)

# You can change the om.NewtonSolver settings after setup is called
newton = prob.model.nonlinear_solver = om.NewtonSolver()
prob.model.linear_solver = om.DirectSolver()
newton.options['iprint'] = 2
newton.options['rtol'] = 1e-8
newton.options['solve_subsystems'] = True

newton.linesearch = om.BoundsEnforceLS()
newton.linesearch.options['iprint'] = 2

prob.run_model()

print(prob.get_val('comp.y'))
NL: Newton 0 ; 2178.39766 1
|  LS: BCHK 0 ; 89.1820479 0.0409392874
NL: Newton 1 ; 89.1820479 0.0409392874
|  LS: BCHK 0 ; 146.249915 1.63990308
NL: Newton 2 ; 146.249915 0.0671364633
|  LS: BCHK 0 ; 14.6043983 0.0998591916
NL: Newton 3 ; 14.6043983 0.00670419295
|  LS: BCHK 0 ; 0.371568958 0.0254422641
NL: Newton 4 ; 0.371568958 0.000170569848
|  LS: BCHK 0 ; 0.000278617482 0.000749840578
NL: Newton 5 ; 0.000278617482 1.27900193e-07
|  LS: BCHK 0 ; 1.58237867e-10 5.67939478e-07
NL: Newton 6 ; 1.58237867e-10 7.2639569e-14
NL: Newton Converged
[19.68734036]