PETScDirectSolver

PETScDirectSolver is a linear solver that assembles the system Jacobian and solves the linear system with LU factorization and back substitution using the petsc4py library. It can handle any system topology. Since it assembles a global Jacobian for all of its subsystems, any linear solvers assigned to those subsystems do not participate in this calculation (though they may still be used in other ways, such as in subsystem Newton solves). Unlike the standard DirectSolver, which always uses SuperLU (for sparse matrices) or LAPACK (for dense) through scipy, the PETScDirectSolver has access to multiple direct solvers available in PETSc.

PETSc is more general and offers more direct-solver options than scipy, but that generality comes at the cost of a thicker wrapper around its solvers. When deciding between DirectSolver and PETScDirectSolver, consider the size and sparsity pattern of the problem. PETSc is typically more beneficial for larger matrices, where the factorization and solve speedup from a different solver outweighs the overhead of the heavier wrapper. For more information about the solvers exposed by the PETScDirectSolver, see the summary table of direct solvers in the PETSc documentation.
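
As a rough illustration of that tradeoff, the sketch below switches between the two solvers behind a single flag; the flag name and the 'klu' choice are assumptions made for illustration, and timing both solvers on your own model is the better guide.

import openmdao.api as om

# Minimal sketch: choosing between the scipy-based DirectSolver and the
# PETSc-based PETScDirectSolver. The flag and solver choice are illustrative.
use_petsc_solver = True

if use_petsc_solver:
    # Larger sparse Jacobians may benefit from one of PETSc's factorizations.
    linear_solver = om.PETScDirectSolver(sparse_solver_name='klu')
else:
    # Smaller problems are often served well by the lighter scipy-based solver.
    linear_solver = om.DirectSolver()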

Of the available sparse direct solvers, SuperLU, KLU, UMFPACK, and PETSc can only be used in serial. The MUMPS and SuperLU-Dist solvers can be used in serial, but will run distributed if the script is launched with MPI. For dense matrices, the solver ignores the sparse algorithms and automatically uses LAPACK, which only runs in serial.
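
For example, a distributed-capable factorization can be selected as in the sketch below, which reuses the Sellar model purely as a stand-in; launching the script with MPI (e.g. mpirun -n 2 python script.py) exercises the distributed path, while the same script still runs in serial.

import openmdao.api as om
from openmdao.test_suite.components.sellar_feature import SellarDerivatives

# Sketch: select a solver (MUMPS here) that can also run distributed under MPI.
prob = om.Problem()
prob.model = SellarDerivatives()
prob.model.nonlinear_solver = om.NonlinearBlockGS()
prob.model.linear_solver = om.PETScDirectSolver(sparse_solver_name='mumps')

prob.setup()
prob.run_model()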

As an example, here we calculate the total derivatives of the Sellar system objective with respect to the design variable ‘z’, using the “klu” direct solver in the PETScDirectSolver.

import openmdao.api as om
from openmdao.test_suite.components.sellar_feature import SellarDerivatives

prob = om.Problem()
model = prob.model = SellarDerivatives()

model.nonlinear_solver = om.NonlinearBlockGS()
model.linear_solver = om.PETScDirectSolver(sparse_solver_name='klu')

prob.setup()
prob.run_model()

wrt = ['z']
of = ['obj']

J = prob.compute_totals(of=of, wrt=wrt, return_format='flat_dict')
print(J['obj', 'z'][0][0])
print(J['obj', 'z'][0][1])

NL: NLBGS Converged in 8 iterations
9.610010556989947
1.7844853356313648

PETScDirectSolver Options

| Option | Default | Acceptable Values | Acceptable Types | Description |
|---|---|---|---|---|
| assemble_jac | True | [True, False] | ['bool'] | Activates use of assembled jacobian by this solver. |
| err_on_singular | True | [True, False] | ['bool'] | Raise an error if LU decomposition is singular. Must always be 'True' for the PETScDirectSolver. This option is only maintained for compatibility with parent solver methods. |
| iprint | 1 | N/A | ['int'] | Whether to print output. |
| rhs_checking | False | N/A | ['bool', 'dict'] | If True, check RHS vs. cache and/or zero to avoid some solves. Can also be set to a dict of options for the LinearRHSChecker to allow finer control over it. Allowed options are: ('check_zero', 'rtol', 'atol', 'max_cache_entries', 'collect_stats', 'auto', 'verbose'). |
| sparse_solver_name | superlu | ['superlu', 'klu', 'umfpack', 'petsc', 'mumps', 'superlu_dist'] | N/A | Direct solver algorithm from PETSc that will be used for the LU factorization and solve if the matrix is sparse. For a dense matrix, this option will be ignored and LAPACK will be automatically used. |
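
For instance, the rhs_checking option accepts either a bool or a dict of LinearRHSChecker options; the sketch below passes a dict, with values chosen arbitrarily just to show the syntax.

import openmdao.api as om

# Sketch: passing rhs_checking as a dict of LinearRHSChecker options.
# The specific values here are illustrative only.
solver = om.PETScDirectSolver(sparse_solver_name='umfpack',
                              rhs_checking={'check_zero': True,
                                            'max_cache_entries': 4})

# Options can also be set after construction through the options dictionary.
solver.options['iprint'] = 1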

PETScDirectSolver Constructor

The call signature for the PETScDirectSolver constructor is:

PETScDirectSolver.__init__(**kwargs)

Declare the solver options.
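
Since all options are passed as keyword arguments, a typical instantiation looks like the following sketch (the 'klu' choice is just an example):

import openmdao.api as om

# Options from the table above are passed as keyword arguments.
solver = om.PETScDirectSolver(sparse_solver_name='klu', assemble_jac=True)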