Approximating Partial Derivatives

OpenMDAO allows you to specify analytic derivatives for your models, but it is not a requirement. If certain partial derivatives are not available, you can ask the framework to approximate them by calling the declare_partials method inside setup and passing it a method that is either ‘fd’ for finite difference or ‘cs’ for complex step.
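A minimal sketch of what that looks like (the component below is an illustrative placeholder, not one of the examples on this page):

from openmdao.api import ExplicitComponent

class ApproxPartialsComp(ExplicitComponent):
    def setup(self):
        self.add_input('x', val=0.0)
        self.add_output('f', val=0.0)

        # Ask the framework to approximate every partial of this component
        # with finite difference; method='cs' would request complex step.
        self.declare_partials('*', '*', method='fd')

    def compute(self, inputs, outputs):
        outputs['f'] = inputs['x'] ** 2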

Component.declare_partials(of, wrt, dependent=True, rows=None, cols=None, val=None, method='exact', step=None, form=None, step_calc=None)

Declare information about this component’s subjacobians.

Parameters:
of : str or list of str

The name of the residual(s) that derivatives are being computed for. May also contain a glob pattern.

wrt : str or list of str

The name of the variables that derivatives are taken with respect to. This can contain the name of any input or output variable. May also contain a glob pattern.

dependent : bool(True)

If False, specifies no dependence between the output(s) and the input(s). This is only necessary in the case of a sparse global jacobian, because if dependent=False is not specified and declare_partials is not called for a given pair, a dense matrix of zeros will be allocated in the sparse global jacobian for that pair. In the case of a dense global jacobian it does not matter, because space for a dense subjac is always allocated for every pair.

rows : ndarray of int or None

Row indices for each nonzero entry. For sparse subjacobians only.

cols : ndarray of int or None

Column indices for each nonzero entry. For sparse subjacobians only.

val : float or ndarray of float or scipy.sparse

Value of subjacobian. If rows and cols are not None, this will contain the values found at each (row, col) location in the subjac.

method : str

The type of approximation that should be used. Valid options include: ‘fd’: Finite Difference, ‘cs’: Complex Step, ‘exact’: use the component defined analytic derivatives. Default is ‘exact’.

step : float

Step size for approximation. Defaults to None, in which case the approximation method provides its default value.

form : str

Form for finite difference, can be ‘forward’, ‘backward’, or ‘central’. Defaults to None, in which case the approximation method provides its default value.

step_calc : str

Step type for finite difference, can be ‘abs’ for absolute or ‘rel’ for relative. Defaults to None, in which case the approximation method provides its default value.
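To illustrate the sparse and dependence-related arguments, here is a minimal sketch; the SparseDeclComp class and its variables are illustrative placeholders, not part of the examples that follow:

import numpy as np

from openmdao.api import ExplicitComponent

class SparseDeclComp(ExplicitComponent):
    def setup(self):
        self.add_input('x', shape=(3,))
        self.add_input('p', shape=(3,))
        self.add_output('f', shape=(3,))

        # f[i] = 2*x[i], so the subjacobian of f with respect to x is diagonal
        # with a constant value of 2 at each nonzero location.
        self.declare_partials('f', 'x', rows=np.arange(3), cols=np.arange(3), val=2.0)

        # 'p' never affects 'f'; declaring dependent=False avoids allocating
        # space for that pair when a sparse global jacobian is used.
        self.declare_partials('f', 'p', dependent=False)

    def compute(self, inputs, outputs):
        outputs['f'] = 2.0 * inputs['x']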

Usage

  1. You may use glob patterns as arguments to of and wrt. In the example below, the pattern ‘y*’ matches both y and y2.
import numpy as np

from openmdao.api import Problem, Group, IndepVarComp, ExplicitComponent

class FDPartialComp(ExplicitComponent):
    def setup(self):
        self.add_input('x', shape=(4,))
        self.add_input('y', shape=(2,))
        self.add_input('y2', shape=(2,))
        self.add_output('f', shape=(2,))

        self.declare_partials('f', 'y*', method='fd')
        self.declare_partials('f', 'x', method='fd')

    def compute(self, inputs, outputs):
        f = outputs['f']

        x = inputs['x']
        y = inputs['y']

        f[0] = x[0] + y[0]
        f[1] = np.dot([0, 2, 3, 4], x) + y[1]

model = Group()
comp = IndepVarComp()
comp.add_output('x', np.ones(4))
comp.add_output('y', np.ones(2))

model.add_subsystem('input', comp)
model.add_subsystem('example', FDPartialComp())

model.connect('input.x', 'example.x')
model.connect('input.y', 'example.y')

problem = Problem(model=model)
problem.setup(check=False)
problem.run_model()
totals = problem.compute_totals(['example.f'], ['input.x', 'input.y'])

print(totals['example.f', 'input.x'])
[[ 1. -0. -0. -0.]
 [-0.  2.  3.  4.]]
print(totals['example.f', 'input.y'])
[[ 1. -0.]
 [-0.  1.]]
  2. For finite difference approximations (method='fd'), there are three optional parameters: the form, the step size, and the step_calc. The form should be one of the following:
    • form='forward' (default): Approximates the derivative as \(\displaystyle\frac{\partial f}{\partial x} \approx \frac{f(x+\delta, y) - f(x,y)}{||\delta||}\). Error scales like \(||\delta||\).
    • form='backward': Approximates the derivative as \(\displaystyle\frac{\partial f}{\partial x} \approx \frac{f(x,y) - f(x-\delta, y) }{||\delta||}\). Error scales like \(||\delta||\).
    • form='central': Approximates the derivative as \(\displaystyle\frac{\partial f}{\partial x} \approx \frac{f(x+\delta, y) - f(x-\delta,y)}{2||\delta||}\). Error scales like \(||\delta||^2\), but requires an extra function evaluation.

The step size can be any nonzero number, but it should be positive (use form='backward' rather than a negative step if you want to step backwards), small enough to limit truncation error, and large enough to avoid round-off error. Choosing a step size can be highly problem dependent, but for double-precision floating point numbers and reasonably bounded derivatives, \(10^{-6}\) is a good place to start. The step_calc can be either ‘abs’ for absolute or ‘rel’ for relative; it determines whether the step size is applied as-is or scaled relative to the input value. A standalone numerical check of the error behavior described above follows the example below.

import numpy as np

from openmdao.api import Problem, Group, IndepVarComp, ExplicitComponent

class FDPartialComp(ExplicitComponent):

    def setup(self):
        self.add_input('x', shape=(4,))
        self.add_input('y', shape=(2,))
        self.add_input('y2', shape=(2,))
        self.add_output('f', shape=(2,))

        self.declare_partials('f', 'y*', method='fd', form='backward', step=1e-6)
        self.declare_partials('f', 'x', method='fd', form='central', step=1e-4)

    def compute(self, inputs, outputs):
        f = outputs['f']

        x = inputs['x']
        y = inputs['y']

        f[0] = x[0] + y[0]
        f[1] = np.dot([0, 2, 3, 4], x) + y[1]

model = Group()
comp = IndepVarComp()
comp.add_output('x', np.ones(4))
comp.add_output('y', np.ones(2))

model.add_subsystem('input', comp)
model.add_subsystem('example', FDPartialComp())

model.connect('input.x', 'example.x')
model.connect('input.y', 'example.y')

problem = Problem(model=model)
problem.setup(check=False)
problem.run_model()
totals = problem.compute_totals(['example.f'], ['input.x', 'input.y'])

print(totals['example.f', 'input.x'])
[[ 1. -0. -0. -0.]
 [-0.  2.  3.  4.]]
print(totals['example.f', 'input.y'])
[[ 1. -0.]
 [-0.  1.]]
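The error estimates quoted above for the forward and central forms can be checked outside of OpenMDAO with a few lines of plain numpy (this standalone sketch is not part of the original example):

import numpy as np

# f(x) = exp(x), so the exact derivative at x = 1 is also exp(1).
f = np.exp
x = 1.0
exact = np.exp(x)

for delta in [1e-2, 1e-4, 1e-6]:
    forward = (f(x + delta) - f(x)) / delta
    central = (f(x + delta) - f(x - delta)) / (2.0 * delta)
    # The forward error shrinks roughly linearly with delta, the central error
    # roughly quadratically, until round-off error starts to dominate.
    print(delta, abs(forward - exact), abs(central - exact))

Inside a component, the same trade-offs are expressed declaratively, for example self.declare_partials('f', 'x', method='fd', form='central', step=1e-3, step_calc='rel').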

Complex Step

If you have a pure Python component (or an external code that can support complex inputs and outputs), then you can also choose to use complex step to calculate the Jacobian of that component. This will give more accurate derivatives that are insensitive to the step size. Like finite difference, complex step runs your component using the apply_nonlinear or solve_nonlinear functions, but it applies a step in the complex direction. You can activate it using the declare_partials method inside setup and giving it a method of ‘cs’. In many cases this will require no other changes to your code, as long as all of the calculations in your solve_nonlinear and apply_nonlinear support complex numbers. During a complex step, the incoming inputs vector will hold a complex value for the variable being stepped. Likewise, the outputs and residuals vectors will accept complex values. If you are allocating temporary numpy arrays, remember to conditionally set their dtype based on the dtype in the outputs vector.
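For example, a compute method that needs scratch storage can size it from the outputs vector so that it automatically becomes complex during a complex step. This is a minimal sketch; ScratchArrayComp and its variables are illustrative placeholders, not part of the Sellar example that follows:

import numpy as np

from openmdao.api import ExplicitComponent

class ScratchArrayComp(ExplicitComponent):
    def setup(self):
        self.add_input('x', shape=(4,))
        self.add_output('f', val=0.0)

        self.declare_partials('f', 'x', method='cs')

    def compute(self, inputs, outputs):
        # Taking the dtype from the outputs vector keeps this temporary array
        # real during normal execution and complex during a complex step.
        work = np.zeros(4, dtype=outputs['f'].dtype)
        work[:] = 2.0 * inputs['x']
        outputs['f'] = work.sum()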

Here is how to turn on complex step for all input/output pairs in the Sellar problem:

class SellarDis1CS(ExplicitComponent):
    """
    Component containing Discipline 1 -- no derivatives version.
    Uses Complex Step
    """

    def setup(self):

        # Global Design Variable
        self.add_input('z', val=np.zeros(2))

        # Local Design Variable
        self.add_input('x', val=0.)

        # Coupling parameter
        self.add_input('y2', val=1.0)

        # Coupling output
        self.add_output('y1', val=1.0)

        # Complex step all partials.
        self.declare_partials('*', '*', method='cs')

    def compute(self, inputs, outputs):
        """
        Evaluates the equation
        y1 = z1**2 + z2 + x1 - 0.2*y2
        """
        z1 = inputs['z'][0]
        z2 = inputs['z'][1]
        x1 = inputs['x']
        y2 = inputs['y2']

        outputs['y1'] = z1**2 + z2 + x1 - 0.2*y2
class SellarDis2CS(ExplicitComponent):
    """
    Component containing Discipline 2 -- no derivatives version.
    Uses Complex Step
    """

    def setup(self):
        # Global Design Variable
        self.add_input('z', val=np.zeros(2))

        # Coupling parameter
        self.add_input('y1', val=1.0)

        # Coupling output
        self.add_output('y2', val=1.0)

        # Complex step all partials.
        self.declare_partials('*', '*', method='cs')

    def compute(self, inputs, outputs):
        """
        Evaluates the equation
        y2 = y1**(.5) + z1 + z2
        """

        z1 = inputs['z'][0]
        z2 = inputs['z'][1]
        y1 = inputs['y1']

        # Note: this may cause some issues. However, y1 is constrained to be
        # above 3.16, so let's just let it converge, and the optimizer will
        # throw it out
        if y1.real < 0.0:
            y1 *= -1

        outputs['y2'] = y1**.5 + z1 + z2
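To actually exercise these components, they still need to be wired into a model and converged. The driver sketch below is an illustrative assumption rather than part of this example: the group layout, solver choices, and variable values are placeholders.

import numpy as np

from openmdao.api import Problem, Group, IndepVarComp, NonlinearBlockGS, ScipyKrylov

model = Group()

# Independent variables feeding both disciplines.
comp = model.add_subsystem('des_vars', IndepVarComp())
comp.add_output('x', 1.0)
comp.add_output('z', np.array([5.0, 2.0]))

model.add_subsystem('d1', SellarDis1CS())
model.add_subsystem('d2', SellarDis2CS())

model.connect('des_vars.x', 'd1.x')
model.connect('des_vars.z', ['d1.z', 'd2.z'])

# Close the y1 <-> y2 coupling loop; Gauss-Seidel converges the nonlinear
# system and a Krylov solver handles the coupled linear system for totals.
model.connect('d1.y1', 'd2.y1')
model.connect('d2.y2', 'd1.y2')
model.nonlinear_solver = NonlinearBlockGS()
model.linear_solver = ScipyKrylov()

problem = Problem(model=model)
problem.setup(check=False)
problem.run_model()

# The total derivatives are assembled from the complex-step partials.
totals = problem.compute_totals(['d1.y1', 'd2.y2'], ['des_vars.x', 'des_vars.z'])
print(totals['d1.y1', 'des_vars.x'])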