# NewtonSolver

NewtonSolver implements Newton's method to solve the system that contains it. It is the most general solver in OpenMDAO, in that it can handle any topology, including cyclic connections and implicit states in the system or its subsystems. Newton's method requires derivatives, so a linear solver is also needed to compute each step. By default, NewtonSolver uses the linear solver that is slotted in the containing system, but a different one can be specified directly on the Newton solver.

import numpy as np

from openmdao.api import Problem, IndepVarComp, NewtonSolver, LinearBlockGS, ExecComp
from openmdao.test_suite.components.sellar import SellarDis1withDerivatives, SellarDis2withDerivatives

prob = Problem()
model = prob.model

model.add_subsystem('px', IndepVarComp('x', 1.0), promotes=['x'])
model.add_subsystem('pz', IndepVarComp('z', np.array([5.0, 2.0])), promotes=['z'])

model.add_subsystem('d1', SellarDis1withDerivatives(), promotes=['x', 'z', 'y1', 'y2'])
model.add_subsystem('d2', SellarDis2withDerivatives(), promotes=['z', 'y1', 'y2'])

model.add_subsystem('obj_cmp', ExecComp('obj = x**2 + z[1] + y1 + exp(-y2)',
                                        z=np.array([0.0, 0.0]), x=0.0),
                    promotes=['obj', 'x', 'z', 'y1', 'y2'])

model.add_subsystem('con_cmp1', ExecComp('con1 = 3.16 - y1'), promotes=['con1', 'y1'])
model.add_subsystem('con_cmp2', ExecComp('con2 = y2 - 24.0'), promotes=['con2', 'y2'])

model.linear_solver = LinearBlockGS()

model.nonlinear_solver = NewtonSolver()

prob.setup()

prob.run_model()

|  LN: LNBGS Failed to Converge in 10 iterations
|  LN: LNBGS Converged in 7 iterations
|  LN: LNBGS Converged in 7 iterations
NL: Newton Converged in 3 iterations
print(prob['y1'])

[ 25.58830237]
print(prob['y2'])

[ 12.05848815]

Most of the solvers in OpenMDAO operate hierarchically, in that you can use solvers on subgroups to subdivide the calculation effort. NewtonSolver is an exception: it does not call solve_nonlinear on its subsystems, nor does it pass data along the connections. Instead, the Newton solver sets all inputs in all systems and subsystems that it contains as it follows the gradient, driving the residuals to zero. After each iteration, the iteration count and the residual norm are checked against the termination criteria.
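Conceptually, each Newton iteration linearizes the residual equations and takes a step toward the root. The sketch below (plain NumPy, not OpenMDAO's implementation) shows the essential loop; the `np.linalg.solve` call plays the role of the slotted linear solver:

```python
import numpy as np

def newton_solve(residual, jacobian, u0, atol=1e-10, maxiter=10):
    """Drive the residual vector R(u) to zero with Newton steps."""
    u = np.asarray(u0, dtype=float).copy()
    for k in range(maxiter):
        r = residual(u)
        if np.linalg.norm(r) < atol:
            return u, k  # converged
        # Each iteration solves a linear system for the step; in OpenMDAO
        # this is the job of the linear solver slotted on the system.
        u += np.linalg.solve(jacobian(u), -r)
    return u, maxiter

# A small coupled 2-state system: u0 = cos(u1), u1 = u0 / 2
residual = lambda u: np.array([u[0] - np.cos(u[1]), u[1] - 0.5 * u[0]])
jacobian = lambda u: np.array([[1.0, np.sin(u[1])], [-0.5, 1.0]])
u, iters = newton_solve(residual, jacobian, np.zeros(2))
```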

## NewtonSolver Options

| Option | Default | Acceptable Values | Acceptable Types | Description |
| --- | --- | --- | --- | --- |
| atol | 1e-10 | N/A | N/A | absolute error tolerance |
| debug_print | False | N/A | ['bool'] | If True, the values of input and output variables at the start of iteration are printed and written to a file after a failure to converge. |
| err_on_maxiter | False | N/A | ['bool'] | When True, AnalysisError will be raised if we don't converge. |
| iprint | 1 | N/A | ['int'] | whether to print output |
| max_sub_solves | 10 | N/A | ['int'] | Maximum number of subsystem solves. |
| maxiter | 10 | N/A | ['int'] | maximum number of iterations |
| rtol | 1e-10 | N/A | N/A | relative error tolerance |
| solve_subsystems | False | N/A | ['bool'] | Set to True to turn on sub-solvers (hybrid Newton). |

Note: Options can be passed as keyword arguments at initialization.

## NewtonSolver Option Examples

### maxiter

maxiter lets you specify the maximum number of Newton iterations to apply. In this example, we cut it back from the default of ten down to two, so the solver terminates a couple of iterations earlier, before reaching the specified absolute or relative tolerance.

import numpy as np

from openmdao.api import Problem, Group, IndepVarComp, NewtonSolver, LinearBlockGS, ExecComp
from openmdao.test_suite.components.sellar import SellarDis1withDerivatives, SellarDis2withDerivatives

prob = Problem()
model = prob.model

model.add_subsystem('px', IndepVarComp('x', 1.0), promotes=['x'])
model.add_subsystem('pz', IndepVarComp('z', np.array([5.0, 2.0])), promotes=['z'])

model.add_subsystem('d1', SellarDis1withDerivatives(), promotes=['x', 'z', 'y1', 'y2'])
model.add_subsystem('d2', SellarDis2withDerivatives(), promotes=['z', 'y1', 'y2'])

model.add_subsystem('obj_cmp', ExecComp('obj = x**2 + z[1] + y1 + exp(-y2)',
                                        z=np.array([0.0, 0.0]), x=0.0),
                    promotes=['obj', 'x', 'z', 'y1', 'y2'])

model.add_subsystem('con_cmp1', ExecComp('con1 = 3.16 - y1'), promotes=['con1', 'y1'])
model.add_subsystem('con_cmp2', ExecComp('con2 = y2 - 24.0'), promotes=['con2', 'y2'])

model.linear_solver = LinearBlockGS()

newton = model.nonlinear_solver = NewtonSolver()
newton.options['maxiter'] = 2

prob.setup()

prob.run_model()

|  LN: LNBGS Failed to Converge in 10 iterations
|  LN: LNBGS Converged in 7 iterations
NL: Newton Failed to Converge in 2 iterations
print(prob['y1'])

[ 25.58785168]
print(prob['y2'])

[ 12.06074161]

### atol

Here, we set the absolute tolerance to a looser value that will trigger an earlier termination. After each iteration, the norm of the residuals is calculated by calling apply_nonlinear on implicit components and evaluating explicit components. If this norm value is lower than the absolute tolerance atol, the iteration will terminate.

import numpy as np

from openmdao.api import Problem, Group, IndepVarComp, NewtonSolver, LinearBlockGS, ExecComp
from openmdao.test_suite.components.sellar import SellarDis1withDerivatives, SellarDis2withDerivatives

prob = Problem()
model = prob.model

model.add_subsystem('px', IndepVarComp('x', 1.0), promotes=['x'])
model.add_subsystem('pz', IndepVarComp('z', np.array([5.0, 2.0])), promotes=['z'])

model.add_subsystem('d1', SellarDis1withDerivatives(), promotes=['x', 'z', 'y1', 'y2'])
model.add_subsystem('d2', SellarDis2withDerivatives(), promotes=['z', 'y1', 'y2'])

model.add_subsystem('obj_cmp', ExecComp('obj = x**2 + z[1] + y1 + exp(-y2)',
                                        z=np.array([0.0, 0.0]), x=0.0),
                    promotes=['obj', 'x', 'z', 'y1', 'y2'])

model.add_subsystem('con_cmp1', ExecComp('con1 = 3.16 - y1'), promotes=['con1', 'y1'])
model.add_subsystem('con_cmp2', ExecComp('con2 = y2 - 24.0'), promotes=['con2', 'y2'])

model.linear_solver = LinearBlockGS()

newton = model.nonlinear_solver = NewtonSolver()
newton.options['atol'] = 1e-4

prob.setup()

prob.run_model()

|  LN: LNBGS Failed to Converge in 10 iterations
|  LN: LNBGS Converged in 7 iterations
|  LN: LNBGS Converged in 7 iterations
NL: Newton Converged in 3 iterations
print(prob['y1'])

[ 25.58830237]
print(prob['y2'])

[ 12.05848815]

### rtol

Here, we set the relative tolerance to a looser value that will trigger an earlier termination. After each iteration, the norm of the residuals is calculated by calling apply_nonlinear on implicit components and evaluating explicit components. If the ratio of the currently calculated norm to the initial residual norm is lower than the relative tolerance rtol, the iteration will terminate.
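Taken together with atol and maxiter, the per-iteration termination test can be sketched as follows (a simplification for illustration, not OpenMDAO's actual source):

```python
def check_termination(norm, norm0, iter_count, atol=1e-10, rtol=1e-10, maxiter=10):
    """Sketch of the per-iteration checks: norm is the current residual
    norm, norm0 the initial residual norm, iter_count the Newton count."""
    if norm < atol:
        return 'converged (atol)'
    if norm / norm0 < rtol:
        return 'converged (rtol)'
    if iter_count >= maxiter:
        return 'failed (maxiter)'
    return 'continue'
```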

import numpy as np

from openmdao.api import Problem, Group, IndepVarComp, NewtonSolver, LinearBlockGS, ExecComp
from openmdao.test_suite.components.sellar import SellarDis1withDerivatives, SellarDis2withDerivatives

prob = Problem()
model = prob.model

model.add_subsystem('px', IndepVarComp('x', 1.0), promotes=['x'])
model.add_subsystem('pz', IndepVarComp('z', np.array([5.0, 2.0])), promotes=['z'])

model.add_subsystem('d1', SellarDis1withDerivatives(), promotes=['x', 'z', 'y1', 'y2'])
model.add_subsystem('d2', SellarDis2withDerivatives(), promotes=['z', 'y1', 'y2'])

model.add_subsystem('obj_cmp', ExecComp('obj = x**2 + z[1] + y1 + exp(-y2)',
                                        z=np.array([0.0, 0.0]), x=0.0),
                    promotes=['obj', 'x', 'z', 'y1', 'y2'])

model.add_subsystem('con_cmp1', ExecComp('con1 = 3.16 - y1'), promotes=['con1', 'y1'])
model.add_subsystem('con_cmp2', ExecComp('con2 = y2 - 24.0'), promotes=['con2', 'y2'])

model.linear_solver = LinearBlockGS()

newton = model.nonlinear_solver = NewtonSolver()
newton.options['rtol'] = 1e-3

prob.setup()

prob.run_model()

|  LN: LNBGS Failed to Converge in 10 iterations
|  LN: LNBGS Converged in 7 iterations
NL: Newton Converged in 2 iterations
print(prob['y1'])

[ 25.58785168]
print(prob['y2'])

[ 12.06074161]

### solve_subsystems

If you set this option to True, NewtonSolver will call solve_nonlinear on all of its subsystems. You can use this to solve difficult multi-level problems by attaching solvers to subsystems; it ensures that those subsystems are already in an internally solved state when the outer Newton solver evaluates them.

This example shows two instances of the Sellar model, which we have connected together to form a larger cycle. We specify a Newton solver in each Sellar subgroup as well as a top-level Newton solver, which we tell to solve its subsystems.

from openmdao.api import Problem, NewtonSolver, DirectSolver, ScipyKrylov
from openmdao.test_suite.components.double_sellar import DoubleSellar

prob = Problem(model=DoubleSellar())
model = prob.model

g1 = model.g1
g1.nonlinear_solver = NewtonSolver()
g1.nonlinear_solver.options['rtol'] = 1.0e-5
g1.linear_solver = DirectSolver()

g2 = model.g2
g2.nonlinear_solver = NewtonSolver()
g2.nonlinear_solver.options['rtol'] = 1.0e-5
g2.linear_solver = DirectSolver()

model.nonlinear_solver = NewtonSolver()
model.linear_solver = ScipyKrylov()

model.nonlinear_solver.options['solve_subsystems'] = True

prob.setup()
prob.run_model()

+
+  ==
+  g1
+  ==
+  NL: Newton Converged in 2 iterations
+
+  ==
+  g2
+  ==
+  NL: Newton Converged in 2 iterations
+
+  ==
+  g1
+  ==
+  NL: Newton Converged in 2 iterations
+
+  ==
+  g2
+  ==
+  NL: Newton Converged in 2 iterations
+
+  ==
+  g1
+  ==
+  NL: Newton Converged in 1 iterations
+
+  ==
+  g2
+  ==
+  NL: Newton Converged in 1 iterations
+
+  ==
+  g1
+  ==
+  NL: Newton Converged in 0 iterations
+
+  ==
+  g2
+  ==
+  NL: Newton Converged in 0 iterations
NL: Newton Converged in 3 iterations
print(prob['g1.y1'])

[ 0.64]
print(prob['g1.y2'])

[ 0.8]
print(prob['g2.y1'])

[ 0.64]
print(prob['g2.y2'])

[ 0.8]

### max_sub_solves

This option is used in conjunction with the solve_subsystems option. It controls the number of iterations for which NewtonSolver allows subsystems to solve themselves; once the iteration count exceeds max_sub_solves, Newton reverts to its default behavior.

For example, if you set max_sub_solves to zero, the solvers on subsystems are executed during the initial evaluation, but not during any subsequent iteration.
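The gating described above can be pictured as follows; this is a simplified sketch of the documented behavior, not the solver's actual source:

```python
def subsystems_may_solve(iter_count, solve_subsystems, max_sub_solves):
    """Return True if subsystem solvers run on this Newton iteration.

    iter_count is 0 for the initial evaluation, 1 for the first
    Newton iteration, and so on.
    """
    return solve_subsystems and iter_count <= max_sub_solves

# With max_sub_solves=0, only the initial evaluation (iter_count=0)
# lets the subsystem solvers run.
```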

from openmdao.api import Problem, NewtonSolver, DirectSolver, ScipyKrylov
from openmdao.test_suite.components.double_sellar import SubSellar

prob = Problem()
model = prob.model

model.add_subsystem('g1', SubSellar())
model.add_subsystem('g2', SubSellar())

model.connect('g1.y2', 'g2.x')
model.connect('g2.y2', 'g1.x')

g1 = model.g1
g1.nonlinear_solver = NewtonSolver()
g1.nonlinear_solver.options['rtol'] = 1.0e-5
g1.linear_solver = DirectSolver()

g2 = model.g2
g2.nonlinear_solver = NewtonSolver()
g2.nonlinear_solver.options['rtol'] = 1.0e-5
g2.linear_solver = DirectSolver()

model.nonlinear_solver = NewtonSolver()
model.linear_solver = ScipyKrylov()

model.nonlinear_solver.options['solve_subsystems'] = True
model.nonlinear_solver.options['max_sub_solves'] = 0

prob.setup()
prob.run_model()

+
+  ==
+  g1
+  ==
+  NL: Newton Converged in 2 iterations
+
+  ==
+  g2
+  ==
+  NL: Newton Converged in 2 iterations
NL: Newton Converged in 3 iterations

### err_on_maxiter

If you set this to True, then when the solver hits the iteration limit without meeting the tolerance criteria, it will raise an AnalysisError exception. This is mainly useful when the solver is coupled with a higher-level solver or driver (e.g., pyOptSparseDriver) that can handle the AnalysisError by adapting the step size and retrying.

import numpy as np

from openmdao.api import Problem, Group, IndepVarComp, NewtonSolver, LinearBlockGS, ExecComp, AnalysisError
from openmdao.test_suite.components.sellar import SellarDis1withDerivatives, SellarDis2withDerivatives

prob = Problem()
model = prob.model

model.add_subsystem('px', IndepVarComp('x', 1.0), promotes=['x'])
model.add_subsystem('pz', IndepVarComp('z', np.array([5.0, 2.0])), promotes=['z'])

model.add_subsystem('d1', SellarDis1withDerivatives(), promotes=['x', 'z', 'y1', 'y2'])
model.add_subsystem('d2', SellarDis2withDerivatives(), promotes=['z', 'y1', 'y2'])

model.add_subsystem('obj_cmp', ExecComp('obj = x**2 + z[1] + y1 + exp(-y2)',
                                        z=np.array([0.0, 0.0]), x=0.0),
                    promotes=['obj', 'x', 'z', 'y1', 'y2'])

model.add_subsystem('con_cmp1', ExecComp('con1 = 3.16 - y1'), promotes=['con1', 'y1'])
model.add_subsystem('con_cmp2', ExecComp('con2 = y2 - 24.0'), promotes=['con2', 'y2'])

model.linear_solver = LinearBlockGS()

newton = model.nonlinear_solver = NewtonSolver()
newton.options['maxiter'] = 1
newton.options['err_on_maxiter'] = True

prob.setup()

try:
    prob.run_model()
except AnalysisError as err:
    print(str(err))

This feature can be set on any iterative nonlinear or linear solver.

## Specifying a Linear Solver

We can choose a different linear solver for calculating the Newton step by setting the linear_solver attribute. The default is to use the linear solver that was specified on the containing system, which by default is LinearBlockGS. In the following example, we modify the model to use DirectSolver instead.

import numpy as np

from openmdao.api import Problem, Group, IndepVarComp, NewtonSolver, LinearBlockGS, \
ExecComp, DirectSolver
from openmdao.test_suite.components.sellar import SellarDis1withDerivatives, \
SellarDis2withDerivatives

prob = Problem()
model = prob.model

model.add_subsystem('px', IndepVarComp('x', 1.0), promotes=['x'])
model.add_subsystem('pz', IndepVarComp('z', np.array([5.0, 2.0])), promotes=['z'])

model.add_subsystem('d1', SellarDis1withDerivatives(), promotes=['x', 'z', 'y1', 'y2'])
model.add_subsystem('d2', SellarDis2withDerivatives(), promotes=['z', 'y1', 'y2'])

model.add_subsystem('obj_cmp', ExecComp('obj = x**2 + z[1] + y1 + exp(-y2)',
                                        z=np.array([0.0, 0.0]), x=0.0),
                    promotes=['obj', 'x', 'z', 'y1', 'y2'])

model.add_subsystem('con_cmp1', ExecComp('con1 = 3.16 - y1'), promotes=['con1', 'y1'])
model.add_subsystem('con_cmp2', ExecComp('con2 = y2 - 24.0'), promotes=['con2', 'y2'])

model.linear_solver = LinearBlockGS()

newton = model.nonlinear_solver = NewtonSolver()

newton.linear_solver = DirectSolver()

prob.setup()

prob.run_model()

NL: Newton Converged in 3 iterations
print(prob['y1'])

[ 25.58830237]
print(prob['y2'])

[ 12.05848815]

## Specifying a Line Search Algorithm

NewtonSolver has a linesearch attribute, which supports specification of a supplemental algorithm that can find a better point along the Newton search direction. This is typically used when we have declared upper or lower bounds on some of the model outputs and want to prevent Newton from moving into this infeasible space during the iteration; an algorithm that does this is called a line search.
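As a rough illustration (plain Python, not the ArmijoGoldsteinLS implementation), a backtracking line search shrinks the Newton step until the residual norm actually decreases:

```python
def backtracking_step(u, du, norm_func, rho=0.5, maxiter=10):
    """Scale the Newton step du back until the residual norm drops."""
    norm0 = norm_func(u)
    alpha = 1.0
    for _ in range(maxiter):
        if norm_func(u + alpha * du) < norm0:
            break
        alpha *= rho  # backtrack along the search direction
    return u + alpha * du

# Example: near x=0.1 the full Newton step for f(x) = x**3 - 1 overshoots
# wildly because f'(x) is tiny there; backtracking tempers it.
f = lambda x: x**3 - 1.0
norm_func = lambda x: abs(f(x))
x = 0.1
du = -f(x) / (3 * x**2)  # full Newton step, roughly +33
x_new = backtracking_step(x, du, norm_func)
```

OpenMDAO's bounds-enforcing line searches additionally clip the step so that declared output bounds are respected.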

By default, NewtonSolver does not perform a line search. We will show how to specify one. First, let’s set up a problem that has implicit bounds on one of its states.

from openmdao.api import ImplicitComponent


class ImplCompTwoStates(ImplicitComponent):
    """
    A Simple Implicit Component with an additional output equation.

    f(x,z) = xz + z - 4
    y = x + 2z

    Sol: when x = 0.5, z = 2.666
    Sol: when x = 2.0, z = 1.333

    Coupled derivs:

    y = x + 8/(x+1)
    dy_dx = 1 - 8/(x+1)**2 = -2.5555555555555554

    z = 4/(x+1)
    dz_dx = -4/(x+1)**2 = -1.7777777777777777
    """

    def setup(self):
        self.add_input('x', 0.5)
        self.add_output('y', 0.0)
        self.add_output('z', 2.0, lower=1.5, upper=2.5)

        self.maxiter = 10
        self.atol = 1.0e-12

        self.declare_partials(of='*', wrt='*')

    def apply_nonlinear(self, inputs, outputs, residuals):
        """
        Don't solve; just calculate the residual.
        """
        x = inputs['x']
        y = outputs['y']
        z = outputs['z']

        residuals['y'] = y - x - 2.0*z
        residuals['z'] = x*z + z - 4.0

    def linearize(self, inputs, outputs, jac):
        """
        Analytical derivatives.
        """
        # Output equation
        jac[('y', 'x')] = -1.0
        jac[('y', 'y')] = 1.0
        jac[('y', 'z')] = -2.0

        # State equation
        jac[('z', 'z')] = inputs['x'] + 1.0
        jac[('z', 'x')] = outputs['z']

In this component, the state "z" is only valid between 1.5 and 2.5, while the other state, "y", is valid everywhere. You can verify that if NewtonSolver is used with no line search specified, the solution violates the bounds on "z". Here, we specify ArmijoGoldsteinLS as our line search algorithm, and we get a solution at the lower bound for "z".

from openmdao.api import Problem, Group, IndepVarComp, NewtonSolver, ScipyKrylov, ArmijoGoldsteinLS
from openmdao.test_suite.components.implicit_newton_linesearch import ImplCompTwoStates

top = Problem()
top.model = Group()
top.model.add_subsystem('px', IndepVarComp('x', 1.0))
top.model.add_subsystem('comp', ImplCompTwoStates())
top.model.connect('px.x', 'comp.x')

top.model.nonlinear_solver = NewtonSolver()
top.model.nonlinear_solver.options['maxiter'] = 10
top.model.linear_solver = ScipyKrylov()

ls = top.model.nonlinear_solver.linesearch = ArmijoGoldsteinLS()
ls.options['maxiter'] = 10

top.setup(check=False)

top['px.x'] = 2.0
top['comp.y'] = 0.0
top['comp.z'] = 1.6
top.run_model()

NL: Newton Failed to Converge in 10 iterations
print(top['comp.z'])

[ 1.5]
