PETScKrylov

PETScKrylov is an iterative linear solver that wraps the linear solution methods available in PETSc via petsc4py. The default method is “fgmres” (the Flexible Generalized Minimal RESidual method), though you may choose any of the other Krylov methods PETSc provides. This solver can handle any system topology. It solves the linear system for all subsystems below it in the hierarchy, so assigning different linear solvers to those subsystems has no effect on the solution at this level.

This solver works under MPI, so it is a good alternative to ScipyKrylov. This solver is also re-entrant, so there are no problems if it is nested during preconditioning.

Here, we calculate the total derivatives across the Sellar system.

import numpy as np

from openmdao.api import Problem, Group, IndepVarComp, NonlinearBlockGS, PETScKrylov, ExecComp
from openmdao.test_suite.components.sellar import SellarDis1withDerivatives, SellarDis2withDerivatives

prob = Problem()
model = prob.model = Group()

model.add_subsystem('px', IndepVarComp('x', 1.0), promotes=['x'])
model.add_subsystem('pz', IndepVarComp('z', np.array([5.0, 2.0])), promotes=['z'])

model.add_subsystem('d1', SellarDis1withDerivatives(), promotes=['x', 'z', 'y1', 'y2'])
model.add_subsystem('d2', SellarDis2withDerivatives(), promotes=['z', 'y1', 'y2'])

model.add_subsystem('obj_cmp', ExecComp('obj = x**2 + z[1] + y1 + exp(-y2)',
                                        z=np.array([0.0, 0.0]), x=0.0),
                    promotes=['obj', 'x', 'z', 'y1', 'y2'])

model.add_subsystem('con_cmp1', ExecComp('con1 = 3.16 - y1'), promotes=['con1', 'y1'])
model.add_subsystem('con_cmp2', ExecComp('con2 = y2 - 24.0'), promotes=['con2', 'y2'])

model.nonlinear_solver = NonlinearBlockGS()

model.linear_solver = PETScKrylov()

prob.setup()
prob.run_model()
# NL: NLBGS Converged in 7 iterations
wrt = ['z']
of = ['obj']

J = prob.compute_totals(of=of, wrt=wrt, return_format='flat_dict')
print(J['obj', 'z'][0][0])  # 9.61001055699
print(J['obj', 'z'][0][1])  # 1.78448533563

PETScKrylov Options

assemble_jac (default: False; type: bool)
    Activates use of an assembled Jacobian by this solver.
atol (default: 1e-10)
    Absolute error tolerance.
err_on_maxiter (default: False; type: bool)
    When True, AnalysisError will be raised if the solver fails to converge.
iprint (default: 1; type: int)
    Level of solver output to print.
ksp_type (default: ‘fgmres’; acceptable values: ‘richardson’, ‘chebyshev’, ‘cg’, ‘groppcg’, ‘pipecg’, ‘pipecgrr’, ‘cgne’, ‘nash’, ‘stcg’, ‘gltr’, ‘fcg’, ‘pipefcg’, ‘gmres’, ‘pipefgmres’, ‘fgmres’, ‘lgmres’, ‘dgmres’, ‘pgmres’, ‘tcqmr’, ‘bcgs’, ‘ibcgs’, ‘fbcgs’, ‘fbcgsr’, ‘bcgsl’, ‘cgs’, ‘tfqmr’, ‘cr’, ‘pipecr’, ‘lsqr’, ‘preonly’, ‘qcg’, ‘bicg’, ‘minres’, ‘symmlq’, ‘lcd’, ‘python’, ‘gcr’, ‘pipegcr’, ‘tsirm’, ‘cgls’)
    KSP algorithm to use.
maxiter (default: 100; type: int)
    Maximum number of iterations.
precon_side (default: ‘right’; acceptable values: ‘left’, ‘right’)
    Preconditioner side.
restart (default: 1000; type: int)
    Number of iterations between restarts. Larger values increase iteration cost but may be necessary for convergence.
rtol (default: 1e-10)
    Relative error tolerance.

Note: Options can be passed as keyword arguments at initialization.
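As a minimal configuration sketch of that note (assuming OpenMDAO with petsc4py installed, as in the examples on this page), the two forms below are equivalent:

```python
from openmdao.api import PETScKrylov

# Passing options as keyword arguments at initialization...
solver = PETScKrylov(maxiter=50, atol=1e-12)

# ...is equivalent to setting them on the options dictionary afterward.
solver = PETScKrylov()
solver.options['maxiter'] = 50
solver.options['atol'] = 1e-12
```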

PETScKrylov Option Examples

maxiter

maxiter lets you specify the maximum number of GMRES (or other algorithm) iterations to apply. The default maximum is 100, which is much higher than the other linear solvers because each multiplication by the system Jacobian is considered to be an iteration. You may have to decrease this value if you have a coupled system that is converging very slowly. (Of course, in such a case, it may be better to add a preconditioner.) Alternatively, you may have to raise maxiter if you have an extremely large number of components in your system (a 1000-component ring would need 1000 iterations just to make it around once.)

This example shows what happens if you set maxiter too low: the solver stops before it has converged, so the computed derivatives are inaccurate.

import numpy as np

from openmdao.api import Problem, Group, IndepVarComp, NonlinearBlockGS, PETScKrylov, ExecComp
from openmdao.test_suite.components.sellar import SellarDis1withDerivatives, SellarDis2withDerivatives

prob = Problem()
model = prob.model = Group()

model.add_subsystem('px', IndepVarComp('x', 1.0), promotes=['x'])
model.add_subsystem('pz', IndepVarComp('z', np.array([5.0, 2.0])), promotes=['z'])

model.add_subsystem('d1', SellarDis1withDerivatives(), promotes=['x', 'z', 'y1', 'y2'])
model.add_subsystem('d2', SellarDis2withDerivatives(), promotes=['z', 'y1', 'y2'])

model.add_subsystem('obj_cmp', ExecComp('obj = x**2 + z[1] + y1 + exp(-y2)',
                                        z=np.array([0.0, 0.0]), x=0.0),
                    promotes=['obj', 'x', 'z', 'y1', 'y2'])

model.add_subsystem('con_cmp1', ExecComp('con1 = 3.16 - y1'), promotes=['con1', 'y1'])
model.add_subsystem('con_cmp2', ExecComp('con2 = y2 - 24.0'), promotes=['con2', 'y2'])

model.nonlinear_solver = NonlinearBlockGS()

model.linear_solver = PETScKrylov()
model.linear_solver.options['maxiter'] = 3

prob.setup()
prob.run_model()
# NL: NLBGS Converged in 7 iterations
wrt = ['z']
of = ['obj']

J = prob.compute_totals(of=of, wrt=wrt, return_format='flat_dict')
print(J['obj', 'z'][0][0])  # 9.26540544431
print(J['obj', 'z'][0][1])  # 1.87246623559

atol

atol is the absolute convergence tolerance: iteration terminates once the absolute size of the (possibly preconditioned) residual norm drops below this value.

You may need to adjust this setting if you have abnormally large or small values in your global Jacobian.

import numpy as np

from openmdao.api import Problem, Group, IndepVarComp, NonlinearBlockGS, PETScKrylov, ExecComp
from openmdao.test_suite.components.sellar import SellarDis1withDerivatives, SellarDis2withDerivatives

prob = Problem()
model = prob.model = Group()

model.add_subsystem('px', IndepVarComp('x', 1.0), promotes=['x'])
model.add_subsystem('pz', IndepVarComp('z', np.array([5.0, 2.0])), promotes=['z'])

model.add_subsystem('d1', SellarDis1withDerivatives(), promotes=['x', 'z', 'y1', 'y2'])
model.add_subsystem('d2', SellarDis2withDerivatives(), promotes=['z', 'y1', 'y2'])

model.add_subsystem('obj_cmp', ExecComp('obj = x**2 + z[1] + y1 + exp(-y2)',
                                        z=np.array([0.0, 0.0]), x=0.0),
                    promotes=['obj', 'x', 'z', 'y1', 'y2'])

model.add_subsystem('con_cmp1', ExecComp('con1 = 3.16 - y1'), promotes=['con1', 'y1'])
model.add_subsystem('con_cmp2', ExecComp('con2 = y2 - 24.0'), promotes=['con2', 'y2'])

model.nonlinear_solver = NonlinearBlockGS()

model.linear_solver = PETScKrylov()
model.linear_solver.options['atol'] = 1.0e-20

prob.setup()
prob.run_model()
# NL: NLBGS Converged in 7 iterations
wrt = ['z']
of = ['obj']

J = prob.compute_totals(of=of, wrt=wrt, return_format='flat_dict')
print(J['obj', 'z'][0][0])  # 9.61001055699
print(J['obj', 'z'][0][1])  # 1.78448533563

rtol

rtol is the relative convergence tolerance: iteration terminates once the (possibly preconditioned) residual norm has decreased by this factor relative to its starting value.

import numpy as np

from openmdao.api import Problem, Group, IndepVarComp, NonlinearBlockGS, PETScKrylov, ExecComp
from openmdao.test_suite.components.sellar import SellarDis1withDerivatives, SellarDis2withDerivatives

prob = Problem()
model = prob.model = Group()

model.add_subsystem('px', IndepVarComp('x', 1.0), promotes=['x'])
model.add_subsystem('pz', IndepVarComp('z', np.array([5.0, 2.0])), promotes=['z'])

model.add_subsystem('d1', SellarDis1withDerivatives(), promotes=['x', 'z', 'y1', 'y2'])
model.add_subsystem('d2', SellarDis2withDerivatives(), promotes=['z', 'y1', 'y2'])

model.add_subsystem('obj_cmp', ExecComp('obj = x**2 + z[1] + y1 + exp(-y2)',
                                        z=np.array([0.0, 0.0]), x=0.0),
                    promotes=['obj', 'x', 'z', 'y1', 'y2'])

model.add_subsystem('con_cmp1', ExecComp('con1 = 3.16 - y1'), promotes=['con1', 'y1'])
model.add_subsystem('con_cmp2', ExecComp('con2 = y2 - 24.0'), promotes=['con2', 'y2'])

model.nonlinear_solver = NonlinearBlockGS()

model.linear_solver = PETScKrylov()
model.linear_solver.options['rtol'] = 1.0e-20

prob.setup()
prob.run_model()
# NL: NLBGS Converged in 7 iterations
wrt = ['z']
of = ['obj']

J = prob.compute_totals(of=of, wrt=wrt, return_format='flat_dict')
print(J['obj', 'z'][0][0])  # 9.61001055699
print(J['obj', 'z'][0][1])  # 1.78448533563

ksp_type

You can specify which PETSc algorithm to use in place of ‘fgmres’ by setting “ksp_type” in the options dictionary. Here, we use ‘gmres’ instead.

import numpy as np

from openmdao.api import Problem, Group, IndepVarComp, NonlinearBlockGS, PETScKrylov, \
     ExecComp
from openmdao.test_suite.components.sellar import SellarDis1withDerivatives, \
     SellarDis2withDerivatives

prob = Problem()
model = prob.model = Group()

model.add_subsystem('px', IndepVarComp('x', 1.0), promotes=['x'])
model.add_subsystem('pz', IndepVarComp('z', np.array([5.0, 2.0])), promotes=['z'])

model.add_subsystem('d1', SellarDis1withDerivatives(), promotes=['x', 'z', 'y1', 'y2'])
model.add_subsystem('d2', SellarDis2withDerivatives(), promotes=['z', 'y1', 'y2'])

model.add_subsystem('obj_cmp', ExecComp('obj = x**2 + z[1] + y1 + exp(-y2)',
                                        z=np.array([0.0, 0.0]), x=0.0),
                    promotes=['obj', 'x', 'z', 'y1', 'y2'])

model.add_subsystem('con_cmp1', ExecComp('con1 = 3.16 - y1'), promotes=['con1', 'y1'])
model.add_subsystem('con_cmp2', ExecComp('con2 = y2 - 24.0'), promotes=['con2', 'y2'])

model.nonlinear_solver = NonlinearBlockGS()

model.linear_solver = PETScKrylov()
model.linear_solver.options['ksp_type'] = 'gmres'

prob.setup()
prob.run_model()
# NL: NLBGS Converged in 7 iterations
wrt = ['z']
of = ['obj']

J = prob.compute_totals(of=of, wrt=wrt, return_format='flat_dict')
print(J['obj', 'z'][0][0])  # 9.61001055699
print(J['obj', 'z'][0][1])  # 1.78448533563

Specifying a Preconditioner

You can specify a preconditioner to improve the convergence of the iterative linear solution by setting the precon attribute. The motivation for using a preconditioner is that iterative methods converge faster when the linear system has a smaller condition number, so the goal of the preconditioner is to reduce the condition number of part or all of the Jacobian.
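To make that motivation concrete, here is a small standalone sketch using SciPy’s gmres (rather than PETSc or OpenMDAO) on an invented ill-conditioned diagonal system; a simple Jacobi (diagonal) preconditioner sharply cuts the Krylov iteration count:

```python
import numpy as np
from scipy.sparse import diags
from scipy.sparse.linalg import LinearOperator, gmres

# An ill-conditioned diagonal system: cond(A) = 1e4.
n = 100
d = np.linspace(1.0, 1.0e4, n)
A = diags(d).tocsr()
b = np.ones(n)

counts = {}

def make_counter(key):
    """Build a GMRES callback that counts inner iterations."""
    counts[key] = 0
    def cb(pr_norm):
        counts[key] += 1
    return cb

# Unpreconditioned GMRES.
x_plain, info = gmres(A, b, callback=make_counter('plain'),
                      callback_type='pr_norm')

# Jacobi preconditioner: M approximates inv(A), so the preconditioned
# operator M @ A is (here, exactly) the identity.
M = LinearOperator((n, n), matvec=lambda v: v / d, dtype=float)
x_pre, info = gmres(A, b, M=M, callback=make_counter('precon'),
                    callback_type='pr_norm')

# The preconditioned solve needs far fewer iterations.
print(counts['precon'], '<', counts['plain'])
```

In this toy case the preconditioner is exact, so the preconditioned solve converges almost immediately; a practical preconditioner such as the LinearBlockGS example below is only approximate, so it reduces rather than eliminates the iteration count.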

Here, we add a Gauss-Seidel preconditioner to the simple Sellar solution with Newton. Note that the number of GMRES iterations is lower when using the preconditioner.

import numpy as np

from openmdao.api import Problem, Group, IndepVarComp, LinearBlockGS, PETScKrylov, \
     NewtonSolver, ExecComp
from openmdao.test_suite.components.sellar import SellarDis1withDerivatives, \
     SellarDis2withDerivatives

prob = Problem()
model = prob.model = Group()

model.add_subsystem('px', IndepVarComp('x', 1.0), promotes=['x'])
model.add_subsystem('pz', IndepVarComp('z', np.array([5.0, 2.0])), promotes=['z'])

model.add_subsystem('d1', SellarDis1withDerivatives(), promotes=['x', 'z', 'y1', 'y2'])
model.add_subsystem('d2', SellarDis2withDerivatives(), promotes=['z', 'y1', 'y2'])

model.add_subsystem('obj_cmp', ExecComp('obj = x**2 + z[1] + y1 + exp(-y2)',
                                        z=np.array([0.0, 0.0]), x=0.0),
                    promotes=['obj', 'x', 'z', 'y1', 'y2'])

model.add_subsystem('con_cmp1', ExecComp('con1 = 3.16 - y1'), promotes=['con1', 'y1'])
model.add_subsystem('con_cmp2', ExecComp('con2 = y2 - 24.0'), promotes=['con2', 'y2'])

model.nonlinear_solver = NewtonSolver()
model.linear_solver = PETScKrylov()

model.linear_solver.precon = LinearBlockGS()
model.linear_solver.precon.options['maxiter'] = 2

prob.setup()
prob.run_model()
# |  | precon:LN: LNBGS Failed to Converge in 2 iterations
# |  | precon:LN: LNBGS Failed to Converge in 2 iterations
# |  | precon:LN: LNBGS Failed to Converge in 2 iterations
# |  | precon:LN: LNBGS Failed to Converge in 2 iterations
# |  | precon:LN: LNBGS Failed to Converge in 2 iterations
# |  | precon:LN: LNBGS Failed to Converge in 2 iterations
# NL: Newton Converged in 3 iterations
# (The "Failed to Converge" messages are expected: the preconditioner is
# deliberately limited to 2 iterations per application, since it only
# needs to be approximate.)
print(prob['y1'])  # [ 25.58830237]
print(prob['y2'])  # [ 12.05848815]

While the default preconditioning “side” is right-preconditioning, you can also use left-preconditioning provided that you choose a “ksp_type” that supports it. Here we solve the same problem with left-preconditioning using the Richardson method and a DirectSolver.

import numpy as np

from openmdao.api import Problem, Group, IndepVarComp, DirectSolver, PETScKrylov, \
     NewtonSolver, ExecComp
from openmdao.test_suite.components.sellar import SellarDis1withDerivatives, \
     SellarDis2withDerivatives

prob = Problem()
model = prob.model = Group()

model.add_subsystem('px', IndepVarComp('x', 1.0), promotes=['x'])
model.add_subsystem('pz', IndepVarComp('z', np.array([5.0, 2.0])), promotes=['z'])

model.add_subsystem('d1', SellarDis1withDerivatives(), promotes=['x', 'z', 'y1', 'y2'])
model.add_subsystem('d2', SellarDis2withDerivatives(), promotes=['z', 'y1', 'y2'])

model.add_subsystem('obj_cmp', ExecComp('obj = x**2 + z[1] + y1 + exp(-y2)',
                                        z=np.array([0.0, 0.0]), x=0.0),
                    promotes=['obj', 'x', 'z', 'y1', 'y2'])

model.add_subsystem('con_cmp1', ExecComp('con1 = 3.16 - y1'), promotes=['con1', 'y1'])
model.add_subsystem('con_cmp2', ExecComp('con2 = y2 - 24.0'), promotes=['con2', 'y2'])

model.nonlinear_solver = NewtonSolver()
model.linear_solver = PETScKrylov()

model.linear_solver.precon = DirectSolver()
model.linear_solver.options['precon_side'] = 'left'
model.linear_solver.options['ksp_type'] = 'richardson'

prob.setup()
prob.run_model()
# NL: Newton Converged in 3 iterations
print(prob['y1'])  # [ 25.58830237]
print(prob['y2'])  # [ 12.05848815]