Checking Partial Derivatives with Finite Difference#

In addition to using approximations to estimate partial derivatives, you can use approximations to check the partial derivatives you have implemented for a component.

Problem has a method, check_partials, that checks partial derivatives comprehensively for all Components in your model. To do this check, the framework compares the analytic result against a finite difference result. This means that the check_partials function can be quite computationally expensive. So use it to check your work, but don’t leave the call in your production run scripts.
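For example, a minimal sketch of keeping the check out of production runs is to guard it behind a debug switch; the CHECK_PARTIALS flag and the ExecComp model below are just illustrations, not OpenMDAO features.

import openmdao.api as om

# Hypothetical debug flag; an environment variable, command-line option, or
# test-only code path serves the same purpose.
CHECK_PARTIALS = False

prob = om.Problem()
prob.model.add_subsystem('comp', om.ExecComp('y = 3.0*x'))  # placeholder model
prob.setup()
prob.run_model()

if CHECK_PARTIALS:
    # Expensive: the framework perturbs inputs to build a finite difference
    # comparison for every component being checked.
    prob.check_partials(compact_print=True)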

Problem.check_partials(out_stream=DEFAULT_OUT_STREAM, includes=None, excludes=None, compact_print=False, abs_err_tol=1e-06, rel_err_tol=1e-06, method='fd', step=None, form='forward', step_calc='abs', minimum_step=1e-12, force_dense=True, show_only_incorrect=False)

Check partial derivatives comprehensively for all components in your model.

Parameters:
out_stream : file-like object

Where to send human readable output. By default it goes to stdout. Set to None to suppress.

includes : None or list_like

List of glob patterns for pathnames to include in the check. Default is None, which includes all components in the model.

excludes : None or list_like

List of glob patterns for pathnames to exclude from the check. Default is None, which excludes nothing.

compact_print : bool

Set to True to just print the essentials, one line per input-output pair.

abs_err_tol : float

Threshold value for absolute error. Errors above this value will have a ‘*’ displayed next to them in the output, making them easy to search for. Default is 1.0E-6.

rel_err_tol : float

Threshold value for relative error. Errors above this value will have a ‘*’ displayed next to them in the output, making them easy to search for. Note that at times there may be a significant relative error due to a minor absolute error. Default is 1.0E-6.

method : str

Method, ‘fd’ for finite difference or ‘cs’ for complex step. Default is ‘fd’.

step : None, float, or list/tuple of float

Step size(s) for approximation. Default is None, which means 1e-6 for ‘fd’ and 1e-40 for ‘cs’.

form : str

Form for finite difference; can be ‘forward’, ‘backward’, or ‘central’. Default is ‘forward’.

step_calc : str

Step type for computing the size of the finite difference step. It can be ‘abs’ for absolute, ‘rel_avg’ for a size relative to the absolute value of the vector input, or ‘rel_element’ for a size relative to each value in the vector input. In addition, it can be ‘rel_legacy’ for a size relative to the norm of the vector. For backwards compatibility, it can be ‘rel’, which is now equivalent to ‘rel_avg’. Defaults to None, in which case the approximation method provides its default value.

minimum_step : float

Minimum step size allowed when using one of the relative step_calc options.

force_dense : bool

If True, analytic derivatives will be coerced into arrays. Default is True.

show_only_incorrect : bool, optional

Set to True if output should print only the subjacs found to be incorrect.

Returns:
dict of dicts of dicts

First key is the component name. Second key is the (output, input) tuple of strings. Third key is one of [‘rel error’, ‘abs error’, ‘magnitude’, ‘J_fd’, ‘J_fwd’, ‘J_rev’, ‘rank_inconsistent’]. For ‘rel error’, ‘abs error’, and ‘magnitude’ the value is a tuple containing norms for forward - fd, adjoint - fd, forward - adjoint. For ‘J_fd’, ‘J_fwd’, ‘J_rev’ the value is a numpy array representing the computed Jacobian for the three different methods of computation. The boolean ‘rank_inconsistent’ indicates if the derivative wrt a serial variable is inconsistent across MPI ranks.

Note

For components that provide their partials directly (from the compute_partials or linearize methods), only information about the forward derivatives is shown. For components that are matrix-free, both forward and reverse derivative information is shown.

Implicit components are matrix-free if they define an apply_linear method. Explicit components are matrix-free if they define a compute_jacvec_product method.
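As a sketch of how some of these arguments combine, the call below restricts the check to components matching a glob pattern and switches the check method to complex step; the ExecComp model and the 'comp*' pattern are placeholders. Note that 'cs' generally requires complex vectors to be allocated at setup time and only works for components that can handle complex inputs.

import openmdao.api as om

prob = om.Problem()
prob.model.add_subsystem('comp', om.ExecComp('y = 2.0*x + 3.0*z'))  # placeholder model

prob.setup(force_alloc_complex=True)  # needed before checking with method='cs'
prob.run_model()

data = prob.check_partials(method='cs',             # complex step instead of finite difference
                           includes=['comp*'],      # glob pattern: only check matching components
                           compact_print=True,      # one line per input-output pair
                           show_only_incorrect=False)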

Basic Usage#

When the difference between the FD derivative and the provided derivative is larger (in either a relative or absolute sense) than 1e-6, that partial derivative will be marked with a ‘*’.

import numpy as np

import openmdao.api as om

class MyComp(om.ExplicitComponent):
    def setup(self):
        self.add_input('x1', 3.0)
        self.add_input('x2', 5.0)

        self.add_output('y', 5.5)

        self.declare_partials(of='*', wrt='*')

    def compute(self, inputs, outputs):
        outputs['y'] = 3.0*inputs['x1'] + 4.0*inputs['x2']

    def compute_partials(self, inputs, partials):
        """Intentionally incorrect derivative."""
        J = partials
        J['y', 'x1'] = np.array([4.0])
        J['y', 'x2'] = np.array([40])

prob = om.Problem()

prob.model.add_subsystem('comp', MyComp())

prob.set_solver_print(level=0)

prob.setup(mode='rev')
prob.run_model()

data = prob.check_partials()
------------------------
Component: MyComp 'comp'
------------------------

  comp: 'y' wrt 'x1'
     Forward Magnitude: 4.000000e+00
          Fd Magnitude: 3.000000e+00 (fd:forward)

    Absolute Error (Jfor - Jfd) : 1.000000e+00 *

    Relative Error (Jfor - Jfd) / Jfd : 3.333333e-01 *

    Raw Forward Derivative (Jfor)
    [[4.]]

    Raw FD Derivative (Jfd)
    [[3.]]

 - - - - - - - - - - - - - - - - - - - - - - - - - - - - - -
  comp: 'y' wrt 'x2'
     Forward Magnitude: 4.000000e+01
          Fd Magnitude: 4.000000e+00 (fd:forward)

    Absolute Error (Jfor - Jfd) : 3.600000e+01 *

    Relative Error (Jfor - Jfd) / Jfd : 9.000000e+00 *

    Raw Forward Derivative (Jfor)
    [[40.]]

    Raw FD Derivative (Jfd)
    [[4.]]

 - - - - - - - - - - - - - - - - - - - - - - - - - - - - - -
x1_error = data['comp']['y', 'x1']['abs error']

print(x1_error.forward)
1.0000000004688445
x2_error = data['comp']['y', 'x2']['rel error']

print(x2_error.forward)
8.99999999860222

Turn off standard output and just view the derivatives in the returned dictionary:

import numpy as np

import openmdao.api as om

class MyComp(om.ExplicitComponent):
    def setup(self):
        self.add_input('x1', 3.0)
        self.add_input('x2', 5.0)

        self.add_output('y', 5.5)

        self.declare_partials(of='*', wrt='*')

    def compute(self, inputs, outputs):
        outputs['y'] = 3.0*inputs['x1'] + 4.0*inputs['x2']

    def compute_partials(self, inputs, partials):
        """Intentionally incorrect derivative."""
        J = partials
        J['y', 'x1'] = np.array([4.0])
        J['y', 'x2'] = np.array([40])

prob = om.Problem()

prob.model.add_subsystem('comp', MyComp())

prob.set_solver_print(level=0)

prob.setup()
prob.run_model()
data = prob.check_partials(out_stream=None, compact_print=True)
print(data)
{'comp': {('y', 'x1'): {'J_fwd': array([[4.]]), 'J_fd': array([[3.]]), 'abs error': ErrorTuple(forward=1.0000000004688445, reverse=None, forward_reverse=None), 'rel error': ErrorTuple(forward=0.3333333335417087, reverse=None, forward_reverse=None), 'magnitude': MagnitudeTuple(forward=4.0, reverse=None, fd=2.9999999995311555)}, ('y', 'x2'): {'J_fwd': array([[40.]]), 'J_fd': array([[4.]]), 'abs error': ErrorTuple(forward=35.99999999944089, reverse=None, forward_reverse=None), 'rel error': ErrorTuple(forward=8.99999999860222, reverse=None, forward_reverse=None), 'magnitude': MagnitudeTuple(forward=40.0, reverse=None, fd=4.000000000559112)}}}
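Since the return value is a nested dictionary, it can also be checked programmatically. The following sketch continues from the example above and assumes the structure shown there (component name, then (output, input) pair, then an 'abs error' ErrorTuple); the tolerance is just an illustration.

# Flag any subjacobian whose forward absolute error exceeds a tolerance.
tol = 1e-6
for comp_name, subjacs in data.items():
    for (of, wrt), info in subjacs.items():
        abs_err = info['abs error'].forward
        if abs_err is not None and abs_err > tol:
            print(f"{comp_name}: d{of}/d{wrt} abs error {abs_err:.3e} exceeds tolerance")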

Show Only Incorrect Printing Option#

If you are only concerned with seeing the partial derivative calculations that are incorrect, set show_only_incorrect to True. This works whether compact_print is True or False.

import numpy as np
import openmdao.api as om

class MyCompGoodPartials(om.ExplicitComponent):
    def setup(self):
        self.add_input('x1', 3.0)
        self.add_input('x2', 5.0)
        self.add_output('y', 5.5)
        self.declare_partials(of='*', wrt='*')

    def compute(self, inputs, outputs):
        outputs['y'] = 3.0 * inputs['x1'] + 4.0 * inputs['x2']

    def compute_partials(self, inputs, partials):
        """Correct derivative."""
        J = partials
        J['y', 'x1'] = np.array([3.0])
        J['y', 'x2'] = np.array([4.0])

class MyCompBadPartials(om.ExplicitComponent):
    def setup(self):
        self.add_input('y1', 3.0)
        self.add_input('y2', 5.0)
        self.add_output('z', 5.5)
        self.declare_partials(of='*', wrt='*')

    def compute(self, inputs, outputs):
        outputs['z'] = 3.0 * inputs['y1'] + 4.0 * inputs['y2']

    def compute_partials(self, inputs, partials):
        """Intentionally incorrect derivative."""
        J = partials
        J['z', 'y1'] = np.array([33.0])
        J['z', 'y2'] = np.array([40.0])

prob = om.Problem()
prob.model.add_subsystem('good', MyCompGoodPartials())
prob.model.add_subsystem('bad', MyCompBadPartials())
prob.model.connect('good.y', 'bad.y1')

prob.set_solver_print(level=0)
prob.setup()
prob.run_model()

prob.check_partials(compact_print=True, show_only_incorrect=True)
prob.check_partials(compact_print=False, show_only_incorrect=True)
** Only writing information about components with incorrect Jacobians **

----------------------------------
Component: MyCompBadPartials 'bad'
----------------------------------

+-----------------+------------------+-------------+-------------+-------------+-------------+--------------------+
| of '<variable>' | wrt '<variable>' |   calc mag. |  check mag. |  a(cal-chk) |  r(cal-chk) | error desc         |
+=================+==================+=============+=============+=============+=============+====================+
| 'z'             | 'y1'             |  3.3000e+01 |  3.0000e+00 |  3.0000e+01 |  1.0000e+01 |  >ABS_TOL >REL_TOL |
+-----------------+------------------+-------------+-------------+-------------+-------------+--------------------+
| 'z'             | 'y2'             |  4.0000e+01 |  4.0000e+00 |  3.6000e+01 |  9.0000e+00 |  >ABS_TOL >REL_TOL |
+-----------------+------------------+-------------+-------------+-------------+-------------+--------------------+

#################################################################
Sub Jacobian with Largest Relative Error: MyCompBadPartials 'bad'
#################################################################
+-----------------+------------------+-------------+-------------+-------------+-------------+
| of '<variable>' | wrt '<variable>' |   calc mag. |  check mag. |  a(cal-chk) |  r(cal-chk) |
+=================+==================+=============+=============+=============+=============+
| 'z'             | 'y1'             |  3.3000e+01 |  3.0000e+00 |  3.0000e+01 |  1.0000e+01 |
+-----------------+------------------+-------------+-------------+-------------+-------------+

** Only writing information about components with incorrect Jacobians **

----------------------------------
Component: MyCompBadPartials 'bad'
----------------------------------

  bad: 'z' wrt 'y1'
     Forward Magnitude: 3.300000e+01
          Fd Magnitude: 3.000000e+00 (fd:forward)

    Absolute Error (Jfor - Jfd) : 3.000000e+01 *

    Relative Error (Jfor - Jfd) / Jfd : 1.000000e+01 *

    Raw Forward Derivative (Jfor)
    [[33.]]

    Raw FD Derivative (Jfd)
    [[3.00000001]]

 - - - - - - - - - - - - - - - - - - - - - - - - - - - - - -
  bad: 'z' wrt 'y2'
     Forward Magnitude: 4.000000e+01
          Fd Magnitude: 4.000000e+00 (fd:forward)

    Absolute Error (Jfor - Jfd) : 3.600000e+01 *

    Relative Error (Jfor - Jfd) / Jfd : 9.000000e+00 *

    Raw Forward Derivative (Jfor)
    [[40.]]

    Raw FD Derivative (Jfd)
    [[4.]]

 - - - - - - - - - - - - - - - - - - - - - - - - - - - - - -
{'good': {('y', 'x1'): {'J_fwd': array([[3.]]),
   'J_fd': array([[3.]]),
   'abs error': ErrorTuple(forward=4.688445187639445e-10, reverse=None, forward_reverse=None),
   'rel error': ErrorTuple(forward=1.5628150627907208e-10, reverse=None, forward_reverse=None),
   'magnitude': MagnitudeTuple(forward=3.0, reverse=None, fd=2.9999999995311555)},
  ('y', 'x2'): {'J_fwd': array([[4.]]),
   'J_fd': array([[4.]]),
   'abs error': ErrorTuple(forward=5.591118679149076e-10, reverse=None, forward_reverse=None),
   'rel error': ErrorTuple(forward=1.3977796695918903e-10, reverse=None, forward_reverse=None),
   'magnitude': MagnitudeTuple(forward=4.0, reverse=None, fd=4.000000000559112)}},
 'bad': {('z', 'y1'): {'J_fwd': array([[33.]]),
   'J_fd': array([[3.00000001]]),
   'abs error': ErrorTuple(forward=29.999999993363417, reverse=None, forward_reverse=None),
   'rel error': ErrorTuple(forward=9.999999975665864, reverse=None, forward_reverse=None),
   'magnitude': MagnitudeTuple(forward=33.0, reverse=None, fd=3.000000006636583)},
  ('z', 'y2'): {'J_fwd': array([[40.]]),
   'J_fd': array([[4.]]),
   'abs error': ErrorTuple(forward=35.999999995888174, reverse=None, forward_reverse=None),
   'rel error': ErrorTuple(forward=8.999999989720436, reverse=None, forward_reverse=None),
   'magnitude': MagnitudeTuple(forward=40.0, reverse=None, fd=4.0000000041118255)}}}

Running With Multiple FD Step Sizes#

If the step argument is provided as a list of values instead of a single value, the FD partial derivatives will be evaluated and displayed for each given step size. This can be useful when complex step checks are not possible and the component(s) being checked are expensive to execute. Supplying multiple FD step sizes in a single call computes the analytic derivatives only once and compares them against each FD result, which is less expensive than making a separate call to check_partials for each step size.
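A minimal sketch of such a call, using a placeholder ExecComp model and illustrative step sizes:

import openmdao.api as om

prob = om.Problem()
prob.model.add_subsystem('comp', om.ExecComp('y = 3.0*x'))  # placeholder model
prob.setup()
prob.run_model()

# Check partials at several FD step sizes in one call; the analytic derivatives
# are computed once and compared against the FD result for each step size.
prob.check_partials(step=[1e-4, 1e-6, 1e-8], compact_print=True)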