# In-Memory Assembly of Jacobians¶

When you have groups, or entire models, that are small enough that the entire Jacobian fits in memory, you can have OpenMDAO assemble the partial-derivative Jacobian in memory. In many cases this yields a substantial speedup over OpenMDAO's default, matrix-free implementation.
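To see why assembly can pay off, here is a standalone SciPy sketch (illustrative only, not OpenMDAO's internals): a matrix-free linear solve only ever touches the Jacobian through matrix-vector products inside an iterative solver, while an assembled Jacobian can be handed directly to a sparse LU factorization.

```python
import numpy as np
import scipy.sparse as sp
import scipy.sparse.linalg as spla

n = 200
rng = np.random.default_rng(0)

# A sparse, diagonally dominant stand-in for a model-level Jacobian.
J = sp.random(n, n, density=0.02, random_state=0, format='csc')
J = J + 10.0 * sp.eye(n, format='csc')
b = rng.standard_normal(n)

# Matrix-free: the iterative solver sees J only through matvec products.
op = spla.LinearOperator((n, n), matvec=lambda v: J @ v)
x_iter, info = spla.gmres(op, b)

# Assembled: hand the explicit matrix to a direct sparse LU factorization,
# which can then be reused for many right-hand sides.
x_direct = spla.splu(J).solve(b)
```

Both approaches return the same solution; the assembled form trades memory for the ability to factorize once and back-substitute cheaply.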

Note

Assembled Jacobians are especially effective when you have a deeply-nested hierarchy with a large number of components and/or variables. See the Theory Manual entry on assembled Jacobians for more details on how to best select which type of Jacobian to use.

To use an assembled Jacobian, set the `assemble_jac` option of the linear solver that will use it to `True`. The type of the assembled Jacobian is determined by the value of `options['assembled_jac_type']` on the solver's containing system. There are two choices for `assembled_jac_type`: `'dense'` and `'csc'`. For example:

```python
model.options['assembled_jac_type'] = 'dense'
model.linear_solver = DirectSolver(assemble_jac=True)
```

`'csc'` is the default, and you should try it first if you're not sure which one to use. Most problems, even when each component contributes a dense sub-Jacobian, are fairly sparse at the model level, and the DirectSolver will usually be much faster with a sparse factorization.
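As a rough illustration of why the sparse option usually wins (again a standalone SciPy sketch, not OpenMDAO code), factoring a mostly-zero model-level matrix in CSC form is far cheaper than factoring the same matrix stored densely:

```python
import time
import numpy as np
import scipy.linalg as la
import scipy.sparse as sp
import scipy.sparse.linalg as spla

n = 1500
# Mostly-zero stand-in for a model-level Jacobian, stored in CSC form.
A = sp.random(n, n, density=0.005, random_state=1, format='csc')
A = A + 10.0 * sp.eye(n, format='csc')
b = np.ones(n)

t0 = time.perf_counter()
x_sparse = spla.splu(A).solve(b)          # sparse LU ('csc'-style)
t_sparse = time.perf_counter() - t0

t0 = time.perf_counter()
x_dense = la.lu_solve(la.lu_factor(A.toarray()), b)  # dense LU
t_dense = time.perf_counter() - t0

print(f"sparse LU: {t_sparse:.4f} s, dense LU: {t_dense:.4f} s")
assert np.allclose(x_sparse, x_dense)
```

The two factorizations give the same answer; the timing gap grows quickly with problem size, which is why `'dense'` is only worth trying for small, genuinely dense models.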

Note

You are allowed to use multiple assembled Jacobians at different levels of your model hierarchy. This may be useful if you have nested nonlinear solvers converging very difficult problems.

## Usage Example¶

In the following example, borrowed from the Newton solver tutorial, we assemble the Jacobian at the same level of the model hierarchy as the NewtonSolver and DirectSolver. In general, you should always do the assembly at the same level as the linear solver that will make use of the Jacobian matrix.

```python
from openmdao.api import Group, NewtonSolver, DirectSolver, Problem, IndepVarComp

from openmdao.test_suite.test_examples.test_circuit_analysis import Resistor, Diode, Node


class Circuit(Group):

    def setup(self):
        # instantiate the components (restored from the Newton solver
        # tutorial this example is borrowed from)
        self.add_subsystem('n1', Node(n_in=1, n_out=2),
                           promotes_inputs=[('I_in:0', 'I_in')])
        self.add_subsystem('n2', Node())  # leaving defaults

        self.add_subsystem('R1', Resistor(R=100.), promotes_inputs=[('V_out', 'Vg')])
        self.add_subsystem('R2', Resistor(R=10000.))
        self.add_subsystem('D1', Diode(), promotes_inputs=[('V_out', 'Vg')])

        self.connect('n1.V', ['R1.V_in', 'R2.V_in'])
        self.connect('R1.I', 'n1.I_out:0')
        self.connect('R2.I', 'n1.I_out:1')

        self.connect('n2.V', ['R2.V_out', 'D1.V_in'])
        self.connect('R2.I', 'n2.I_in:0')
        self.connect('D1.I', 'n2.I_out:0')

        self.nonlinear_solver = NewtonSolver()
        self.nonlinear_solver.options['iprint'] = 2
        self.nonlinear_solver.options['maxiter'] = 20
        ##################################################################
        # Assemble at the group level. Default assembled jac type is 'csc'
        ##################################################################
        self.options['assembled_jac_type'] = 'csc'
        self.linear_solver = DirectSolver(assemble_jac=True)


p = Problem()
model = p.model

model.add_subsystem('ground', IndepVarComp('V', 0., units='V'))
model.add_subsystem('source', IndepVarComp('I', 0.1, units='A'))
model.add_subsystem('circuit', Circuit())

model.connect('source.I', 'circuit.I_in')
model.connect('ground.V', 'circuit.Vg')

p.setup()

# set some initial guesses
p['circuit.n1.V'] = 10.
p['circuit.n2.V'] = 1.

p.run_model()

print(p['circuit.n1.V'])
print(p['circuit.n2.V'])
print(p['circuit.R1.I'])
print(p['circuit.R2.I'])
print(p['circuit.D1.I'])

# sanity check: should sum to .1 Amps
print(p['circuit.R1.I'] + p['circuit.D1.I'])
```
```
=======
circuit
=======
NL: Newton 0 ; 21.5152153 1
NL: Newton 1 ; 8.236021 0.382799841
NL: Newton 2 ; 3.02968362 0.140815863
NL: Newton 3 ; 1.11434147 0.0517931826
NL: Newton 4 ; 0.40971227 0.0190429082
NL: Newton 5 ; 0.150488221 0.00699450221
NL: Newton 6 ; 0.0551231446 0.00256205406
NL: Newton 7 ; 0.0200406862 0.000931465752
NL: Newton 8 ; 0.0071383729 0.000331782546
NL: Newton 9 ; 0.00240366903 0.000111719497
NL: Newton 10 ; 0.00069274325 3.21978303e-05
NL: Newton 11 ; 0.000129518518 6.01985693e-06
NL: Newton 12 ; 7.66202506e-06 3.56121236e-07
NL: Newton 13 ; 3.16310628e-08 1.4701718e-09
NL: Newton 14 ; 1.15199744e-12 5.35433844e-14
NL: Newton Converged
[ 9.90830282]
[ 0.73858486]
[ 0.09908303]
[ 0.00091697]
[ 0.00091697]
[ 0.1]
```