Raising an AnalysisError#
This example demonstrates the effect of raising an AnalysisError in your Component’s compute function. The result depends on which driver and optimizer are used. The SNOPT and IPOPT optimizers, used in conjunction with pyOptSparseDriver, are good options if your model has invalid regions.
Model#
For this somewhat contrived case, we will assume some range of input values to our Component is invalid and raise an AnalysisError if those inputs are encountered. We will use the Paraboloid as the basis for our example, modifying it so that it will raise an AnalysisError if the x or y inputs are within a specified range.
Paraboloid class definition
class Paraboloid(om.ExplicitComponent):
    """
    Evaluates the equation f(x,y) = (x-3)^2 + xy + (y+4)^2 - 3

    This version of Paraboloid optionally raises an analysis error when the
    design variables x and y are in an invalid region defined by the specified
    "invalid_x" and "invalid_y" ranges.

    The path of evaluated points to the optimized solution is recorded, as
    well as the number of analysis errors raised.

    Parameters
    ----------
    invalid_x : tuple of float or None
        The range of values for x which will trigger an AnalysisError.
    invalid_y : tuple of float or None
        The range of values for y which will trigger an AnalysisError.
    func : str, 'compute' or 'compute_partials'
        The function that will raise the AnalysisError (compute or compute_partials).

    Attributes
    ----------
    invalid_x : tuple of float or None
        The range of values for x which will trigger an AnalysisError.
    invalid_y : tuple of float or None
        The range of values for y which will trigger an AnalysisError.
    func : str, 'compute' or 'compute_partials'
        The function that will raise the AnalysisError (compute or compute_partials).
    """

    def __init__(self, invalid_x=None, invalid_y=None, func='compute'):
        super().__init__()

        self.invalid_x = invalid_x
        self.invalid_y = invalid_y
        self.func = func

        self.eval_count = -1
        self.eval_history = []
        self.raised_eval_errors = []

        self.grad_count = -1
        self.grad_history = []
        self.raised_grad_errors = []

    def setup(self):
        self.add_input('x', val=0.0)
        self.add_input('y', val=0.0)

        self.add_output('f_xy', val=0.0)

        self.declare_partials('*', '*')

    def compute(self, inputs, outputs):
        """
        f(x,y) = (x-3)^2 + xy + (y+4)^2 - 3
        """
        self.eval_count += 1

        x = inputs['x']
        y = inputs['y']

        f_xy = outputs['f_xy'] = (x-3.0)**2 + x*y + (y+4.0)**2 - 3.0

        self.eval_history.append((x.item(), y.item(), f_xy.item()))

        if self.invalid_x and self.func == 'compute':
            beg, end = self.invalid_x
            if x > beg and x < end:
                self.raised_eval_errors.append(self.eval_count)
                raise om.AnalysisError(f'Invalid x: {beg} < {x.item():8.4f} < {end}.')

        if self.invalid_y and self.func == 'compute':
            beg, end = self.invalid_y
            if y > beg and y < end:
                self.raised_eval_errors.append(self.eval_count)
                raise om.AnalysisError(f'Invalid y: {beg} < {y.item():8.4f} < {end}.')

    def compute_partials(self, inputs, partials):
        """
        Partial derivatives.
        """
        self.grad_count += 1

        x = inputs['x']
        y = inputs['y']

        partials['f_xy', 'x'] = 2.0*x - 6.0 + y
        partials['f_xy', 'y'] = 2.0*y + 8.0 + x

        self.grad_history.append((x.item(), y.item()))

        if self.invalid_x and self.func == 'compute_partials':
            beg, end = self.invalid_x
            if x > beg and x < end:
                self.raised_grad_errors.append(self.grad_count)
                raise om.AnalysisError(f'Invalid x: {beg} < {x.item():8.4f} < {end}.')

        if self.invalid_y and self.func == 'compute_partials':
            beg, end = self.invalid_y
            if y > beg and y < end:
                self.raised_grad_errors.append(self.grad_count)
                raise om.AnalysisError(f'Invalid y: {beg} < {y.item():8.4f} < {end}.')
First, we will define a function to create a Problem instance while allowing us to specify the optimizer and the invalid region:
import openmdao.api as om
from openmdao.test_suite.components.paraboloid_invalid_region import Paraboloid
def setup_problem(optimizer, invalid_x=None, invalid_y=None):
    # Paraboloid model with optional AnalysisErrors
    model = om.Group()
    model.add_subsystem('p1', om.IndepVarComp('x', 50.0), promotes=['*'])
    model.add_subsystem('p2', om.IndepVarComp('y', 50.0), promotes=['*'])
    comp = model.add_subsystem('comp',
                               Paraboloid(invalid_x, invalid_y),
                               promotes=['*'])
    model.add_subsystem('con', om.ExecComp('c = - x + y'), promotes=['*'])

    model.add_design_var('x', lower=-50.0, upper=50.0)
    model.add_design_var('y', lower=-50.0, upper=50.0)
    model.add_objective('f_xy')
    model.add_constraint('c', upper=-15.)

    # pyOptSparseDriver with selected optimizer
    driver = om.pyOptSparseDriver(optimizer=optimizer)
    if optimizer == 'IPOPT':
        driver.opt_settings['file_print_level'] = 5
    driver.options['print_results'] = False
    driver.options['output_dir'] = None  # will put the optimizer output file in the current directory

    # setup problem & initialize values
    prob = om.Problem(model, driver)
    prob.setup()

    prob.set_val('x', 50)
    prob.set_val('y', 50)

    return prob, comp
Example using IPOPT#
First we will run the Paraboloid optimization as normal, without raising any errors. In doing this, we can see the nominal path that the optimizer follows through the solution space to arrive at the optimum. For this initial case, we will use the IPOPT optimizer:
prob, comp = setup_problem('IPOPT')
prob.run_driver()

for (x, y, f_xy) in comp.eval_history:
    print(f"x: {x:9.5f} y: {y:9.5f} f_xy: {f_xy:10.5f}")
x: 49.50000 y: 49.50000 f_xy: 7471.75015
x: 1.91747 y: -25.87525 f_xy: 427.08359
x: 10.04882 y: -10.64684 f_xy: -16.12184
x: 9.14631 y: -9.47131 f_xy: -21.91518
x: 7.28379 y: -7.95036 f_xy: -26.95255
x: 7.18531 y: -7.85195 f_xy: -27.06436
x: 7.16735 y: -7.83402 f_xy: -27.08265
x: 7.16667 y: -7.83333 f_xy: -27.08333
x: 7.16667 y: -7.83333 f_xy: -27.08333
x: 7.16667 y: -7.83333 f_xy: -27.08333
Now, we will define our invalid region as x between 7.2 and 10.2 and y between -50 and -40. This region was chosen because it is crossed in the course of the nominal optimization from our chosen starting point at x=50, y=50.
invalid_x = (7.2, 10.2)
invalid_y = (-50., -40.)
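The component raises an error when either coordinate falls strictly inside its range. We can check points from the nominal history above against the same open-interval test to confirm that the nominal path does cross this region (a stdlib-only sketch; `in_invalid_region` is a helper name introduced here, not part of the model):

```python
def in_invalid_region(x, y, invalid_x=(7.2, 10.2), invalid_y=(-50., -40.)):
    """Mirror the component's check: error if x OR y is strictly inside its range."""
    return (invalid_x[0] < x < invalid_x[1]) or (invalid_y[0] < y < invalid_y[1])

# Point 2 of the nominal IPOPT history lands inside the x range...
print(in_invalid_region(10.04882, -10.64684))  # True
# ...while the optimum itself sits just outside it.
print(in_invalid_region(7.16667, -7.83333))    # False
```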
We will recreate the problem using this invalid region and see that the optimizer must now route around the invalid values on its path to the optimum. It will take many more iterations to reach the solution, but IPOPT still gets there in the end:
prob, comp = setup_problem('IPOPT', invalid_x, invalid_y)
prob.run_driver()

for i, (x, y, f_xy) in enumerate(comp.eval_history):
    print(f"{i:2d} x: {x:9.5f} y: {y:9.5f} f_xy: {f_xy:10.5f}")
0 x: 49.50000 y: 49.50000 f_xy: 7471.75015
1 x: 1.91747 y: -25.87525 f_xy: 427.08359
2 x: 10.04882 y: -10.64684 f_xy: -16.12184
3 x: 5.98314 y: -18.26105 f_xy: 100.01816
4 x: 9.26294 y: -9.65590 f_xy: -21.22839
5 x: 7.62304 y: -13.95847 f_xy: 11.13770
6 x: 6.80309 y: -16.10976 f_xy: 48.51363
7 x: 8.56687 y: -8.93122 f_xy: -24.20564
8 x: 7.68498 y: -12.52049 f_xy: -4.67191
9 x: 7.24404 y: -14.31513 f_xy: 17.71439
10 x: 7.02356 y: -15.21244 f_xy: 32.06239
11 x: 8.01980 y: -8.42815 f_xy: -25.78517
12 x: 7.52168 y: -11.82030 f_xy: -10.30586
13 x: 7.27262 y: -13.51637 f_xy: 7.51714
14 x: 7.14809 y: -14.36441 f_xy: 18.94949
15 x: 7.63794 y: -8.12375 f_xy: -26.53291
16 x: 7.39302 y: -11.24408 f_xy: -14.35239
17 x: 7.27056 y: -12.80424 f_xy: -0.34162
18 x: 7.20932 y: -13.58433 f_xy: 8.64389
19 x: 7.17871 y: -13.97437 f_xy: 13.63168
20 x: 7.40133 y: -7.95811 f_xy: -26.86226
21 x: 7.29002 y: -10.96624 f_xy: -16.01136
22 x: 7.23436 y: -12.47030 f_xy: -3.53886
23 x: 7.20654 y: -13.22233 f_xy: 4.45916
24 x: 7.19262 y: -13.59835 f_xy: 8.89860
25 x: 7.26649 y: -7.87450 f_xy: -27.00529
26 x: 7.22956 y: -10.73642 f_xy: -17.35104
27 x: 7.21109 y: -12.16739 f_xy: -6.30064
28 x: 7.20186 y: -12.88287 f_xy: 0.78038
29 x: 7.19724 y: -13.24061 f_xy: 4.70984
30 x: 7.19562 y: -7.85876 f_xy: -27.05540
31 x: 7.17091 y: -7.82964 f_xy: -27.08301
32 x: 7.16638 y: -7.83363 f_xy: -27.08333
33 x: 7.16667 y: -7.83333 f_xy: -27.08333
34 x: 7.16667 y: -7.83333 f_xy: -27.08333
35 x: 7.16667 y: -7.83333 f_xy: -27.08333
We can see how many times our Component raised an AnalysisError and at which iterations the errors occurred:
print(f"Number of errors: {len(comp.raised_eval_errors)}")
print(f"Iterations:{comp.raised_eval_errors}")
Number of errors: 21
Iterations:[2, 4, 5, 7, 8, 9, 11, 12, 13, 15, 16, 17, 18, 20, 21, 22, 23, 25, 26, 27, 28]
Looking at the IPOPT output file (IPOPT.out) will reveal what happened when the optimizer encountered these bad points. Here we just show a relevant subsection of the file:
with open(prob.get_outputs_dir() / "IPOPT.out", encoding="utf-8") as f:
    IPOPT_history = f.read()

beg = IPOPT_history.find("iter objective")
end = IPOPT_history.find("(scaled)", beg)
print(IPOPT_history[beg:end])
iter objective inf_pr inf_du lg(mu) ||d|| lg(rg) alpha_du alpha_pr ls
0 7.4717501e+03 1.50e+01 1.52e+02 0.0 0.00e+00 - 0.00e+00 0.00e+00 0
1 4.2708359e+02 0.00e+00 3.76e+01 1.3 7.54e+01 - 8.53e-03 1.00e+00f 1
Warning: Cutting back alpha due to evaluation error
2 1.0001816e+02 0.00e+00 1.84e+01 1.1 1.52e+01 - 9.05e-01 5.00e-01f 2
Warning: Cutting back alpha due to evaluation error
Warning: Cutting back alpha due to evaluation error
3 4.8513633e+01 0.00e+00 1.41e+01 -0.2 8.61e+00 - 9.89e-01 2.50e-01f 3
Warning: Cutting back alpha due to evaluation error
Warning: Cutting back alpha due to evaluation error
Warning: Cutting back alpha due to evaluation error
4 3.2062390e+01 0.00e+00 1.24e+01 -0.9 7.18e+00 - 1.00e+00 1.25e-01f 4
Warning: Cutting back alpha due to evaluation error
Warning: Cutting back alpha due to evaluation error
Warning: Cutting back alpha due to evaluation error
5 1.8949488e+01 0.00e+00 1.09e+01 -2.0 6.78e+00 - 1.00e+00 1.25e-01f 4
Warning: Cutting back alpha due to evaluation error
Warning: Cutting back alpha due to evaluation error
Warning: Cutting back alpha due to evaluation error
Warning: Cutting back alpha due to evaluation error
6 1.3631680e+01 0.00e+00 1.02e+01 -2.2 6.24e+00 - 1.00e+00 6.25e-02f 5
Warning: Cutting back alpha due to evaluation error
Warning: Cutting back alpha due to evaluation error
Warning: Cutting back alpha due to evaluation error
Warning: Cutting back alpha due to evaluation error
7 8.8986042e+00 0.00e+00 9.58e+00 -2.3 6.02e+00 - 1.00e+00 6.25e-02f 5
Warning: Cutting back alpha due to evaluation error
Warning: Cutting back alpha due to evaluation error
Warning: Cutting back alpha due to evaluation error
Warning: Cutting back alpha due to evaluation error
8 4.7098388e+00 0.00e+00 8.98e+00 -2.3 5.72e+00 - 1.00e+00 6.25e-02f 5
9 -2.7055396e+01 0.00e+00 3.56e-02 -2.4 5.41e+00 - 1.00e+00 9.96e-01f 1
iter objective inf_pr inf_du lg(mu) ||d|| lg(rg) alpha_du alpha_pr ls
10 -2.7083014e+01 0.00e+00 1.63e-02 -4.2 5.58e-02 - 1.00e+00 9.65e-01f 1
11 -2.7083330e+01 0.00e+00 9.09e-04 -6.1 4.68e-03 - 1.00e+00 9.67e-01f 1
12 -2.7083333e+01 0.00e+00 3.13e-08 -8.0 2.95e-04 - 1.00e+00 1.00e+00f 1
13 -2.7083333e+01 0.00e+00 1.50e-11 -11.0 2.10e-08 - 1.00e+00 1.00e+00f 1
Number of Iterations....: 13
Specifically, we can see the following message when IPOPT changes its search in response to the bad point:
Warning: Cutting back alpha due to evaluation error
count = 0
for line in IPOPT_history.split('\n'):
    if 'Cutting back alpha' in line:
        print(line)
        count = count + 1

print("\nNumber of times IPOPT encountered an evaluation error:", count)
Warning: Cutting back alpha due to evaluation error
Warning: Cutting back alpha due to evaluation error
Warning: Cutting back alpha due to evaluation error
Warning: Cutting back alpha due to evaluation error
Warning: Cutting back alpha due to evaluation error
Warning: Cutting back alpha due to evaluation error
Warning: Cutting back alpha due to evaluation error
Warning: Cutting back alpha due to evaluation error
Warning: Cutting back alpha due to evaluation error
Warning: Cutting back alpha due to evaluation error
Warning: Cutting back alpha due to evaluation error
Warning: Cutting back alpha due to evaluation error
Warning: Cutting back alpha due to evaluation error
Warning: Cutting back alpha due to evaluation error
Warning: Cutting back alpha due to evaluation error
Warning: Cutting back alpha due to evaluation error
Warning: Cutting back alpha due to evaluation error
Warning: Cutting back alpha due to evaluation error
Warning: Cutting back alpha due to evaluation error
Warning: Cutting back alpha due to evaluation error
Warning: Cutting back alpha due to evaluation error
Number of times IPOPT encountered an evaluation error: 21
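The alpha_pr column in the IPOPT trace above shows this mechanism at work: the step fraction drops from 5.00e-01 to 2.50e-01 to 1.25e-01 to 6.25e-02 as IPOPT repeatedly halves its step after failed trial points. The behavior can be sketched in plain Python (a simplified stand-in for IPOPT's actual line search, which also enforces sufficient-decrease conditions; `evaluate` and `cut_back_line_search` are hypothetical names for this sketch):

```python
class AnalysisError(Exception):
    """Stand-in for om.AnalysisError in this self-contained sketch."""

def evaluate(x):
    # Hypothetical 1-D model that is invalid for x > 4.
    if x > 4.0:
        raise AnalysisError(f'Invalid x: {x}')
    return (x - 3.0) ** 2

def cut_back_line_search(x0, dx, alpha=1.0, max_cutbacks=10):
    """Halve alpha each time the trial point raises an evaluation error."""
    for cutbacks in range(max_cutbacks + 1):
        try:
            return alpha, evaluate(x0 + alpha * dx), cutbacks
        except AnalysisError:
            print("Warning: Cutting back alpha due to evaluation error")
            alpha *= 0.5
    raise RuntimeError('line search failed: too many cutbacks')

# Full step lands at x=10 (invalid), half step at x=5 (invalid),
# quarter step at x=2.5 succeeds with f = 0.25.
alpha, f, cutbacks = cut_back_line_search(x0=0.0, dx=10.0)
print(alpha, f, cutbacks)  # 0.25 0.25 2
```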
Example using SNOPT#
We can exercise the same model using SNOPT as our optimizer, with similar results. First we will run the nominal case, and then again with the invalid region:
prob, comp = setup_problem('SNOPT')
prob.run_driver()

for i, (x, y, f_xy) in enumerate(comp.eval_history):
    print(f"{i:2d} x: {x:9.5f} y: {y:9.5f} f_xy: {f_xy:10.5f}")
0 x: 50.00000 y: 50.00000 f_xy: 7622.00000
1 x: 49.99997 y: 49.99997 f_xy: 7621.99165
2 x: 50.00000 y: 49.00000 f_xy: 7465.00000
3 x: 50.00000 y: 48.00000 f_xy: 7310.00000
4 x: 17.50000 y: -12.75000 f_xy: 60.68750
5 x: 8.77419 y: -9.05425 f_xy: -23.55699
6 x: 7.15642 y: -7.84358 f_xy: -27.08302
7 x: 7.16633 y: -7.83367 f_xy: -27.08333
8 x: 7.16667 y: -7.83333 f_xy: -27.08333
9 x: 7.16667 y: -7.83333 f_xy: -27.08333
prob, comp = setup_problem('SNOPT', invalid_x, invalid_y)
prob.run_driver()

for i, (x, y, f_xy) in enumerate(comp.eval_history):
    print(f"{i:2d} x: {x:9.5f} y: {y:9.5f} f_xy: {f_xy:10.5f}")
0 x: 50.00000 y: 50.00000 f_xy: 7622.00000
1 x: 49.99997 y: 49.99997 f_xy: 7621.99165
2 x: 50.00000 y: 49.00000 f_xy: 7465.00000
3 x: 50.00000 y: 48.00000 f_xy: 7310.00000
4 x: 17.50000 y: -12.75000 f_xy: 60.68750
5 x: 8.77419 y: -9.05425 f_xy: -23.55699
6 x: 16.62742 y: -12.38043 f_xy: 47.08356
7 x: 7.15642 y: -7.84358 f_xy: -27.08302
8 x: 7.16637 y: -7.83363 f_xy: -27.08333
9 x: 7.16667 y: -7.83333 f_xy: -27.08333
10 x: 7.16667 y: -7.83333 f_xy: -27.08333
print(f"Number of errors: {len(comp.raised_eval_errors)}")
print(f"Iterations:{comp.raised_eval_errors}")
Number of errors: 1
Iterations:[5]
In this case we can see that we raised a single AnalysisError. We can again find evidence of SNOPT encountering this evaluation error in the SNOPT_print.out file, though it still finds the solution. For SNOPT, we are looking for the D code at the end of an iteration line. Here again we just show a relevant subsection of the file:
with open(prob.get_outputs_dir() / "SNOPT_print.out", encoding="utf-8", errors='ignore') as f:
    SNOPT_history = f.read()

beg = SNOPT_history.find(" Itns Major Minor")
end = SNOPT_history.find("Problem name", beg)
print(SNOPT_history[beg:end])
Itns Major Minors Step nCon Feasible Optimal MeritFunction L+U BSwap nS condZHZ Penalty
1 0 1 1 3.0E-01 1.1E+00 7.6220000E+03 1 _ r
1 1 0 6.7E-02 2 2.8E-01 1.1E+00 2.1353000E+04 1 1.3E+02 _ rl
3 2 2 7.1E-02 3 2.6E-01 7.4E-01 2.0440000E+04 1 2 1.1E+01 1.3E+02 _s l
4 3 1 1.0E+00 4 (0.0E+00) 1.4E+00 6.0687500E+01 1 1 2 1.1E+00 1.1E+01 _
6 4 2 1.0E-01 6 (0.0E+00) 1.4E+00 4.7083562E+01 1 1 6.2E+00 1.1E+01 _ D
7 5 1 1.0E+00 7 (2.3E-16) 3.1E-02 -2.7083019E+01 1 1 6.2E+00 1.1E+01 _
8 6 1 1.0E+00 8 (2.3E-16) 9.0E-04 -2.7083333E+01 1 1 6.0E+00 1.1E+01 _
8 7 0 1.0E+00 9 (2.3E-16)(1.8E-15)-2.7083333E+01 1 1 6.0E+00 1.1E+01 _
1
SNOPTC EXIT 0 -- finished successfully
SNOPTC INFO 1 -- optimality conditions satisfied
count = 0
for line in SNOPT_history.split('\n'):
    if line.endswith(' D'):
        print(line)
        count = count + 1

print("\nNumber of times SNOPT encountered an evaluation error:", count)
6 4 2 1.0E-01 6 (0.0E+00) 1.4E+00 4.7083562E+01 1 1 6.2E+00 1.1E+01 _ D
Number of times SNOPT encountered an evaluation error: 1
Note
Not all optimizers will respond as nicely to an AnalysisError as the two demonstrated here (IPOPT
and SNOPT
). Some optimizers may fail to navigate around the bad region and find a solution at all. Other may find an incorrect solution. It is important to understand the capabilities of your chosen optimizer when working with a model that may raise an AnalysisError.