Declaring Partial Derivatives
If you know additional information about the structure of partial derivatives in your component (for example, if an output does not depend on a particular input), you can use the declare_partials() method to inform the framework. This will allow the framework to be more efficient in terms of memory and computation (especially if using a sparse AssembledJacobian). This information should be declared in the setup_partials method of your component.
- Component.declare_partials(of, wrt, dependent=True, rows=None, cols=None, val=None, method='exact', step=None, form=None, step_calc=None, minimum_step=None)

  Declare information about this component’s subjacobians.

  - Parameters:
    - of : str or iter of str
      The name of the residual(s) that derivatives are being computed for. May also contain a glob pattern.
    - wrt : str or iter of str
      The name of the variables that derivatives are taken with respect to. This can contain the name of any input or output variable. May also contain a glob pattern.
    - dependent : bool (True)
      If False, specifies no dependence between the output(s) and the input(s). This is only necessary in the case of a sparse global jacobian, because if 'dependent=False' is not specified and declare_partials is not called for a given pair, then a dense matrix of zeros will be allocated in the sparse global jacobian for that pair. In the case of a dense global jacobian it doesn’t matter, because the space for a dense subjac will always be allocated for every pair.
    - rows : ndarray of int or None
      Row indices for each nonzero entry. For sparse subjacobians only.
    - cols : ndarray of int or None
      Column indices for each nonzero entry. For sparse subjacobians only.
    - val : float or ndarray of float or scipy.sparse
      Value of subjacobian. If rows and cols are not None, this will contain the values found at each (row, col) location in the subjac.
    - method : str
      The type of approximation that should be used. Valid options include: 'fd': Finite Difference, 'cs': Complex Step, 'exact': use the component-defined analytic derivatives. Default is 'exact'.
    - step : float
      Step size for approximation. Defaults to None, in which case the approximation method provides its default value.
    - form : str
      Form for finite difference; can be 'forward', 'backward', or 'central'. Defaults to None, in which case the approximation method provides its default value.
    - step_calc : str
      Step type for computing the size of the finite difference step. It can be 'abs' for absolute, 'rel_avg' for a size relative to the absolute value of the vector input, or 'rel_element' for a size relative to each value in the vector input. In addition, it can be 'rel_legacy' for a size relative to the norm of the vector. For backwards compatibility, it can be 'rel', which is now equivalent to 'rel_avg'. Defaults to None, in which case the approximation method provides its default value.
    - minimum_step : float
      Minimum step size allowed when using one of the relative step_calc options.
  - Returns:
    - dict
      Metadata dict for the specified partial(s).
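As a quick illustration of the approximation-related arguments, the sketch below shows how a component could request a finite-difference approximation for one sub-Jacobian while keeping analytic derivatives for another. This is a minimal sketch, not part of the API reference above; the variable names 'area', 'length', and 'width' are assumptions made for illustration.

def setup_partials(self):
    # Sketch only: 'area', 'length', and 'width' are hypothetical variable names.
    # Analytic derivative, to be supplied in compute_partials (this is the default method).
    self.declare_partials('area', 'length', method='exact')
    # Central finite difference with a relative step size for this pair only.
    self.declare_partials('area', 'width', method='fd',
                          form='central', step=1e-6, step_calc='rel_avg')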
Usage
Specifying that a variable does not depend on another. Note that this is not typically required, because by default OpenMDAO assumes that all variables are independent. However, in some cases it might be needed if a previous glob pattern matched a large set of variables and some subset of those needs to be marked as independent. (A sketch of the corresponding setup_partials follows the code block below.)
def setup(self):
    self.add_input('x', shape=1)
    self.add_input('y1', shape=2)
    self.add_input('y2', shape=2)
    self.add_input('y3', shape=2)
    self.add_input('z', shape=(2, 2))

    self.add_output('f', shape=1)
    self.add_output('g', shape=(2, 2))
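Only the variable declarations are shown above. A minimal sketch of the accompanying setup_partials is shown below; it is an assumption modeled on the SimpleCompConst example later in this section, using dependent=False to mark the pairs that have no dependence.

def setup_partials(self):
    # Sketch (mirrors SimpleCompConst below): mark these pairs as having no
    # dependence so no sub-jacobian storage is allocated for them.
    self.declare_partials('f', ['y1', 'y2', 'y3'], dependent=False)
    self.declare_partials('g', 'z', dependent=False)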
Declaring multiple derivatives using glob patterns (see https://docs.python.org/3.6/library/fnmatch.html). (A sketch of a matching setup_partials follows the code block below.)
def setup(self):
    self.add_input('x', shape=1)
    self.add_input('y1', shape=2)
    self.add_input('y2', shape=2)
    self.add_input('y3', shape=2)
    self.add_input('z', shape=(2, 2))

    self.add_output('f', shape=1)
    self.add_output('g', shape=(2, 2))
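Again, only the variable declarations are shown above. The sketch below declares several partials at once with glob patterns; the particular patterns used here are illustrative assumptions, not taken from the original example.

def setup_partials(self):
    # Sketch only: 'y*' matches y1, y2, and y3; 'y[13]' matches only y1 and y3.
    self.declare_partials('f', 'y*')
    self.declare_partials('g', 'y[13]')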
Using the val argument to set a constant partial derivative. Note that this is intended for cases when the derivative value is constant, and hence the derivatives do not ever need to be recomputed in compute_partials. Here are several examples of how you can specify derivative values for differently-shaped partial derivative sub-Jacobians.

- Scalar (see \(\frac{\partial f}{\partial x}\))
- Dense Array (see \(\frac{\partial f}{\partial z}\))
- Nested List (see \(\frac{\partial g}{\partial y_1}\) and \(\frac{\partial g}{\partial y_3}\))
- Sparse Matrix (see the Sparse Partial Derivatives doc for more details) (see \(\frac{\partial g}{\partial y_2}\) and \(\frac{\partial g}{\partial x}\))
import numpy as np
import scipy as sp

import openmdao.api as om


class SimpleCompConst(om.ExplicitComponent):
    def setup(self):
        self.add_input('x', shape=1)
        self.add_input('y1', shape=2)
        self.add_input('y2', shape=2)
        self.add_input('y3', shape=2)
        self.add_input('z', shape=(2, 2))

        self.add_output('f', shape=1)
        self.add_output('g', shape=(2, 2))

    def setup_partials(self):
        # Declare derivatives
        self.declare_partials('f', ['y1', 'y2', 'y3'], dependent=False)
        self.declare_partials('g', 'z', dependent=False)

        self.declare_partials('f', 'x', val=1.)
        self.declare_partials('f', 'z', val=np.ones((1, 4)))
        # y[13] is a glob pattern for ['y1', 'y3']
        self.declare_partials('g', 'y[13]', val=[[1, 0], [1, 0], [0, 1], [0, 1]])
        self.declare_partials('g', 'y2', val=[1., 1., 1., 1.], cols=[0, 0, 1, 1], rows=[0, 2, 1, 3])
        self.declare_partials('g', 'x', val=sp.sparse.coo_matrix(((1., 1.), ((0, 3), (0, 0)))))

    def compute(self, inputs, outputs):
        outputs['f'] = np.sum(inputs['z']) + inputs['x']
        outputs['g'] = np.outer(inputs['y1'] + inputs['y3'], inputs['y2']) + inputs['x'] * np.eye(2)

    def compute_partials(self, inputs, partials):
        # note: all the partial derivatives are constant, so no calculations happen here.
        pass
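Since the partials above are supplied as constant values, a quick way to see them in action is to run the component inside a Problem and check its derivatives. The sketch below is a minimal usage example, not part of the original listing; the subsystem name 'comp' and the promotes setting are illustrative choices.

# Minimal usage sketch (assumed setup, not from the original example).
prob = om.Problem()
prob.model.add_subsystem('comp', SimpleCompConst(), promotes=['*'])
prob.setup()
prob.run_model()

# check_partials compares the declared (constant) values against finite-difference estimates.
data = prob.check_partials(compact_print=True)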