MetaModelStructuredComp

MetaModelStructuredComp is a smooth interpolation component for data that exists on a regular, structured grid. This differs from MetaModelUnStructuredComp, which accepts unstructured data as collections of points.

Note

OpenMDAO contains two components that perform interpolation: SplineComp and MetaModelStructuredComp. While they provide access to mostly the same algorithms, their usage is subtly different. The fundamental differences between them are as follows:

MetaModelStructuredComp is used when you have a set of known data values y on a structured grid x and want to interpolate a new y value at a new x location that lies inside the grid. In this case, you generally start with a known set of fixed “training” values and their locations.

SplineComp is used when you want to create a smooth curve with a large number of points, but you want to control the shape of the curve with a small number of control points. The x locations of the interpolated points (and where applicable, the control points) are fixed and known, but the y values at the control points vary as the curve shape is modified by an upstream connection.

MetaModelStructuredComp can be used for multi-dimensional design spaces, whereas SplineComp is restricted to one dimension.

MetaModelStructuredComp produces smooth fits through provided training data using polynomial splines of various orders. The interpolation methods include three that wrap methods in scipy.interpolate, as well as seven methods that are written in pure Python. For all methods, derivatives are automatically computed. The following table summarizes the methods and gives the order of each.

Method          Order   Description
slinear         1       Basic linear interpolation
lagrange2       2       Second order Lagrange polynomial
lagrange3       3       Third order Lagrange polynomial
akima           3       Interpolation using Akima splines
cubic           3       Cubic spline, with continuity of derivatives between segments
scipy_slinear   1       Scipy linear interpolation. Same as slinear, though slower
scipy_cubic     3       Scipy cubic interpolation. More accurate than cubic, but slower
scipy_quintic   5       Scipy quintic interpolation. Most accurate, but slowest
trilinear       1       Linear interpolation on a fixed 3D table
akima1D         3       Akima interpolation on a fixed 1D table
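For intuition, the order-1 behavior of 'slinear' on a one-dimensional grid is ordinary piecewise-linear interpolation, which can be sketched with numpy alone. This sketch is purely illustrative and is not part of the OpenMDAO API:

```python
import numpy as np

# Structured 1-D grid and training values (y = x**2 sampled at 6 points)
x_train = np.linspace(0.0, 5.0, 6)
y_train = x_train ** 2

# Piecewise-linear interpolation at a new point inside the grid,
# equivalent in spirit to the order-1 'slinear' method
x_new = 2.5
y_new = np.interp(x_new, x_train, y_train)
print(y_new)  # 6.5: midpoint of y(2)=4 and y(3)=9 on the bracketing segment
```

The higher-order methods in the table replace this straight-line segment with a low-order polynomial fit through neighboring points.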

Note that MetaModelStructuredComp only accepts scalar inputs and outputs. If you have a multivariable function, each input variable needs its own named OpenMDAO input.

For multi-dimensional data, fits are computed on a separable per-axis basis. A single interpolation method is used for all dimensions, so every table dimension must contain enough training points to support the chosen method. However, if you choose one of the scipy methods, automatic order reduction is supported. In that case, if a particular dimension does not have enough training points for the selected spline order (e.g., 3 sample points, but the order-5 'scipy_quintic' spline is specified), the order of the fitted spline is automatically reduced to one of the lower-order scipy methods ('scipy_cubic' or 'scipy_slinear') for that dimension alone.

The available methods include two (“trilinear” and “akima1D”) that operate on a fixed number of dimensions in the table. These methods sacrifice flexibility in favor of computational efficiency. Some of the efficiency is gained by caching coefficients in a bin once they are computed, so these methods also assume that the table values are fixed.

Extrapolation is supported, but disabled by default. It can be enabled via the extrapolate option (see below).

MetaModelStructuredComp Options

MetaModelStructuredComp Constructor

The call signature for the MetaModelStructuredComp constructor is:

MetaModelStructuredComp.__init__(**kwargs)

Initialize all attributes.

MetaModelStructuredComp Examples

A simple quick-start example is fitting the exclusive-or (“XOR”) operator between two inputs, x and y:

import numpy as np
import openmdao.api as om

# Create regular grid interpolator instance
xor_interp = om.MetaModelStructuredComp(method='scipy_slinear')

# set up inputs and outputs
xor_interp.add_input('x', 0.0, training_data=np.array([0.0, 1.0]), units=None)
xor_interp.add_input('y', 1.0, training_data=np.array([0.0, 1.0]), units=None)


xor_interp.add_output('xor', 1.0, training_data=np.array([[0.0, 1.0], [1.0, 0.0]]), units=None)

# Set up the OpenMDAO model
model = om.Group()
model.add_subsystem('comp', xor_interp, promotes=["*"])
prob = om.Problem(model)
prob.setup()

prob.set_val('x', 0)

# Now test out a 'fuzzy' XOR
prob.set_val('x', 0.9)
prob.set_val('y', 0.001242)

prob.run_model()

print(prob.get_val('xor'))

# we can verify all gradients by checking against finite-difference
prob.check_partials(compact_print=True);
[0.8990064]

An important consideration for multi-dimensional input is that the order in which the input variables are added sets the expected dimension of the output training data. For example, if inputs x, y, and z are added to the component in that order, with training data arrays of lengths 5, 12, and 20 respectively, then the output training data must be an ndarray with shape (5, 12, 20).

This is illustrated by the example:

# create input param training data, of sizes 25, 5, and 10 points resp.
p1 = np.linspace(0, 100, 25)
p2 = np.linspace(-10, 10, 5)
p3 = np.linspace(0, 1, 10)

# can use meshgrid to create a 3D array of test data
P1, P2, P3 = np.meshgrid(p1, p2, p3, indexing='ij')
f = np.sqrt(P1) + P2 * P3

# verify the shape matches the order and size of the input params
print(f.shape)
(25, 5, 10)
# Create regular grid interpolator instance
interp = om.MetaModelStructuredComp(method='scipy_cubic')
interp.add_input('p1', 0.5, training_data=p1)
interp.add_input('p2', 0.0, training_data=p2)
interp.add_input('p3', 3.14, training_data=p3)

interp.add_output('f', 0.0, training_data=f)

# Set up the OpenMDAO model
model = om.Group()
model.add_subsystem('comp', interp, promotes=["*"])
prob = om.Problem(model)
prob.setup()

# set inputs
prob.set_val('p1', 55.12)
prob.set_val('p2', -2.14)
prob.set_val('p3', 0.323)

prob.run_model()

print(prob.get_val('f'))

# we can verify all gradients by checking against finite-difference
prob.check_partials(compact_print=True);
[6.73306472]

You can also predict multiple independent output points by setting the vec_size argument to be equal to the number of points you want to predict. Here, we set it to 2 and predict 2 points with MetaModelStructuredComp:

# create input param training data, of sizes 25, 5, and 10 points resp.
p1 = np.linspace(0, 100, 25)
p2 = np.linspace(-10, 10, 5)
p3 = np.linspace(0, 1, 10)

# can use meshgrid to create a 3D array of test data
P1, P2, P3 = np.meshgrid(p1, p2, p3, indexing='ij')
f = np.sqrt(P1) + P2 * P3

# Create regular grid interpolator instance
interp = om.MetaModelStructuredComp(method='scipy_cubic', vec_size=2)
interp.add_input('p1', 0.5, training_data=p1)
interp.add_input('p2', 0.0, training_data=p2)
interp.add_input('p3', 3.14, training_data=p3)

interp.add_output('f', 0.0, training_data=f)

# Set up the OpenMDAO model
model = om.Group()
model.add_subsystem('comp', interp, promotes=["*"])
prob = om.Problem(model)
prob.setup()

# set inputs
prob.set_val('p1', np.array([55.12, 12.0]))
prob.set_val('p2', np.array([-2.14, 3.5]))
prob.set_val('p3', np.array([0.323, 0.5]))

prob.run_model()

print(prob.get_val('f'))
[6.73306472 5.21186454]

Dynamic Training Points

Finally, it is possible for MetaModelStructuredComp to be passed the values at each point on the grid from other components in a model. To enable this feature, set the option training_data_gradients to True. Every time add_output is called to add an output "f", an additional corresponding input named "f_train" is added to the component. This input can be connected to an output elsewhere in your model. That output should be a multi-dimensional array whose shape tuple contains the lengths of each dimension of the grid as declared when calling add_input on the MetaModelStructuredComp.

Note that the grid point locations are always fixed and cannot be provided dynamically. Only the values at those grid points can be provided this way.

When training_data_gradients is set to True, the partial derivative of the interpolated output with respect to the training inputs is also computed by the metamodel component. This may have a small performance impact for some of the interpolation methods. Again, the grid point locations are fixed, so the derivative of the interpolated output with respect to those is not computed.

The following example shows the use of dynamic training. Here, the component TableGen computes the values of the table each iteration. The formula from the previous example is slightly modified to add an input variable “k” that adjusts the table values. This input could be declared as a design variable and adjusted by an optimizer. At the end of the example, we run the model and do a check_totals to verify that we are computing correct derivatives across the training point inputs (“comp.f” with respect to “tab.k”).

import numpy as np

import openmdao.api as om


# Define grids of sizes 25, 5, and 10 points for the dimensions of our training data.
p1 = np.linspace(0, 100, 25)
p2 = np.linspace(-10, 10, 5)
p3 = np.linspace(0, 1, 10)


class TableGen(om.ExplicitComponent):

    def setup(self):
        self.add_input('k', 1.0)
        self.add_output('values', np.zeros((25, 5, 10)))

        # These are fixed locations.
        self.P1, self.P2, self.P3 = np.meshgrid(p1, p2, p3, indexing='ij')

        self.declare_partials('values', 'k')

    def compute(self, inputs, outputs):
        P1, P2, P3 = self.P1, self.P2, self.P3
        k = inputs['k']

        outputs['values'] = np.sqrt(P1) + P2 * P3 * k

    def compute_partials(self, inputs, partials):
        P2, P3 = self.P2, self.P3
        partials['values', 'k'] = P2 * P3


prob = om.Problem()
model = prob.model

model.add_subsystem('tab', TableGen())

interp = om.MetaModelStructuredComp(method='lagrange3', training_data_gradients=True)
interp.add_input('p1', 0.5, p1)
interp.add_input('p2', 0.0, p2)
interp.add_input('p3', 3.14, p3)

# No need to pass training data into this call, because it is provided by 'tab'.
interp.add_output('f', 0.0)

model.add_subsystem('comp', interp)

# The new input for the training data is called "f_train".
model.connect('tab.values', 'comp.f_train')

prob.setup(force_alloc_complex=True)

# set inputs
prob.set_val('comp.p1', 55.12)
prob.set_val('comp.p2', -2.14)
prob.set_val('comp.p3', 0.323)

prob.run_model()

print(prob.get_val('comp.f'))

# we can verify all gradients by checking against finite-difference
prob.check_totals(of='comp.f', wrt=['tab.k', 'comp.p1', 'comp.p2', 'comp.p3'],
                  method='cs', compact_print=True);
[6.73306794]
-------------------------
Group: Group 'Full Model'
-------------------------
'<output>'                     wrt '<variable>'                   | calc mag.  | check mag. | a(cal-chk) | r(cal-chk)
---------------------------------------------------------------------------------------------------------------------

'comp.f'                       wrt 'comp.p1'                      | 6.7349e-02 | 6.7349e-02 | 6.6169e-14 | 9.8248e-13
'comp.f'                       wrt 'comp.p2'                      | 3.2300e-01 | 3.2300e-01 | 6.6613e-16 | 2.0623e-15
'comp.f'                       wrt 'comp.p3'                      | 2.1400e+00 | 2.1400e+00 | 1.7675e-13 | 8.2592e-14
'comp.f'                       wrt 'tab.k'                        | 6.9122e-01 | 6.9122e-01 | 2.2204e-16 | 3.2124e-16

Standalone Interface for Table Interpolation

The underlying interpolation algorithms can be used standalone (i.e., outside of the MetaModelStructuredComp) through the InterpND class. This can be useful for inclusion in another component. The following component shows how to perform interpolation on the same table as in the previous example using standalone code. This time, we choose ‘lagrange3’ as the interpolation algorithm.

from openmdao.components.interp_util.interp import InterpND

# create input param training data, of sizes 25, 5, and 10 points resp.
p1 = np.linspace(0, 100, 25)
p2 = np.linspace(-10, 10, 5)
p3 = np.linspace(0, 1, 10)

# can use meshgrid to create a 3D array of test data
P1, P2, P3 = np.meshgrid(p1, p2, p3, indexing='ij')
f = np.sqrt(P1) + P2 * P3

interp = InterpND(method='lagrange3', points=(p1, p2, p3), values=f)

x = np.array([55.12, -2.14, 0.323])
f, df_dx = interp.interpolate(x, compute_derivative=True)

print(f)
print(df_dx)
[6.73306794]
[[ 0.06734927  0.323      -2.14      ]]