

OpenMDAO Version 0.12.0

One of OpenMDAO’s core goals is to support tight integration of high-fidelity codes into design optimization problems. With the release of version 0.12.0 we’ve taken a big step toward realizing that goal. This release includes the most significant refactor of our code base since the initial release in 2010. It also represents the largest break in backward compatibility we’ve ever made. Breaking existing models for our users is not something we take lightly, but we feel that the major improvements in efficiency, scalability, and support for parallel computing justify the cost. So, let us tell you about all the new things you get in exchange for upgrading your models. Then we’ll talk about the things that have changed and the features that have been removed.

Data Passing

The most important aspect of the refactor is how we handle data passing between components. Previously, we treated each component as its own individual entity, and the framework made copies of the data to pass between components. To integrate CFD and FEA codes, we needed to handle hundreds of millions of variables distributed across different machines. At that scale, copying data to pass between codes simply does not work. Instead, we needed to re-engineer things so that multiple components can work with shared data without copying it; the sketch below illustrates the idea.
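As a rough illustration only (this is plain NumPy, not OpenMDAO’s actual internals), the difference is between copying arrays from one component to the next and letting every component operate on views into a single shared vector:

```python
# Conceptual sketch: components share views into one flat vector instead of
# exchanging copies. All names here are purely illustrative.
import numpy as np

# One flat vector holds every variable in the model.
shared = np.zeros(6)

# Each "component" gets a view (a slice) into that vector, not a copy.
comp_a_out = shared[0:3]   # component A writes its outputs here
comp_b_in = shared[0:3]    # component B reads the very same memory as inputs
comp_b_out = shared[3:6]

# Component A produces data...
comp_a_out[:] = [1.0, 2.0, 3.0]

# ...and component B sees it immediately, with no copy or transfer step.
comp_b_out[:] = 2.0 * comp_b_in

print(shared)  # [1. 2. 3. 2. 4. 6.]
```

Because nothing is copied, the cost of "passing" data stays flat no matter how large the arrays get, which is what makes the approach viable for very large coupled models.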

Furthermore, we’ve built the data-passing system with PETSc distributed arrays at its core, so the framework itself can run efficiently in parallel across a cluster. To date, we’ve used the new data-passing system on problems with as many as seventeen million variables with great success. You can see these results in a recent paper we published on the design optimization of a small satellite platform. We’re also really excited about some initial results we’ve seen running OpenMDAO in parallel. Although the parallel capability is not in the version 0.12.0 release, we could not run in parallel without this refactor.

Backwards Compatibility Issues

Of course, nothing in life is free! We had no choice but to break backwards compatibility to make this all work. We now require you to initialize all your array variables before you run your model.  We expect this issue to have the most widespread impact, so it’s worth discussing in a bit of detail.

In older versions of our software, you could declare array variables without giving them an initial value. You could then set an array at the beginning of your model and rely on data connections to cascade that array sizing down through the rest of the model. However, OpenMDAO now needs to know the sizes of all your arrays, for all components, as part of its initial setup. In a distributed-computing context this makes perfect sense: we must be able to allocate distributed arrays of fixed sizes and then map each component’s I/O to those arrays, and we can’t do that unless we know how big the arrays are. So many of you are going to get an error about improperly sized arrays the first time you try to run your older models in version 0.12.0. We suggest adding an argument to the __init__ method of your components that tells them how big to make your arrays, as in the sketch below. If that doesn’t work for you, let us know, and we’ll help you figure out how to manage this new model requirement.
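Here is a minimal sketch of that pattern. The Doubler component and its size argument are hypothetical, and the import paths and Array declarations follow the classic OpenMDAO component API as we recall it; adjust them to match your own components and installed version.

```python
# Minimal sketch (illustrative names, assumed classic OpenMDAO imports):
# pass the array size into __init__ and initialize the arrays there so the
# framework can size its data vectors during setup.
import numpy as np

from openmdao.main.api import Component
from openmdao.lib.datatypes.api import Array


class Doubler(Component):
    """Hypothetical component that doubles an input array."""

    x = Array(iotype='in')
    y = Array(iotype='out')

    def __init__(self, size):
        super(Doubler, self).__init__()
        # Give every array a concrete size up front.
        self.x = np.zeros(size)
        self.y = np.zeros(size)

    def execute(self):
        self.y = 2.0 * self.x
```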

If you’ve been using any of our RemoteAllocator features, especially if you were using them to run things in parallel, then you should be aware that they no longer work in version 0.12.0. To get a much more powerful and efficient MPI-based parallel capability, we needed to sacrifice the RemoteAllocator features. Depending on how you were using them, we might be able to help you find a solution. But if you were using them to parallelize model executions, then you should hold off upgrading until we release our MPI capability in the next few months. For now, if you need this feature, we suggest you stick with version 0.10.3.

Removal of GUI

The last major change in version 0.12.0 is the removal of the GUI. To be honest, we were never 100% happy with the user experience of our GUI, and supporting an interactive running mode turned out to be a huge source of code complexity and, as a result, bugs. We found this out the hard way when working on the original derivatives implementation in version 0.10.0. So we decided to remove the GUI and its requirement for interactive execution.

We’re planning to re-introduce some form of GUI in the next year, but it will look and operate much differently. If you have a GUI model that you want to try in 0.12.0, let us know, and we’ll help you convert it to a text-based model.

What’s Coming

Ok. Back to the good news! We’re really excited about version 0.12.0 because it is paving the way for true high-fidelity integration. We’re not quite there yet, but the way forward is clear. The framework now scales, in a distributed sense, to the order of magnitude needed to attack high-fidelity MDO problems. But more importantly, OpenMDAO’s ability to automatically implement a coupled adjoint has the potential to make efficient code coupling much easier to implement. In the coming year we’ll be working on high-fidelity integration and distributed execution. We’ll be ready to release some initial MPI capability in the next few months.

Acknowledgements

Before we finish up, it’s important to recognize the significant contributions of our collaborators. First, we’d like to acknowledge some external code contributions to the OpenMDAO project from Rémi Lafage of ONERA and Rémi Vauclin of ISAE. They spent a considerable amount of time and effort working with the MetaModel capabilities of the framework, and recently contributed a major upgrade back to the community. They built a multi-fidelity version of the class and also built a multi-fidelity Kriging surrogate model to go with it. They graciously worked with us through a couple of iterations of their code so we could put it into the main code base.

Next we’d like to acknowledge Dr. Andrew Ning (Brigham Young University) for bravely being the very first external user of our derivatives system. Dr. Ning led his research group in implementing analytic derivatives for an extremely complex aero-structural wind turbine design optimization with over 100 components in it. He suffered through more bugs than we’d care to admit. Without his help, we wouldn’t have the extensive test set needed to give us confidence that the new code in version 0.12.0 is working well.

Last, we’d like to thank Dr. John Hwang and Dr. Joaquim Martins (MDO Lab, University of Michigan). They helped our team to understand the application of MDO at scales that are orders of magnitude larger than we had initially been thinking about. But more importantly, they developed the fundamental theory and provided us with several prototype implementations of the data-passing scheme, which were pivotal in ensuring that the OpenMDAO code was well designed and efficient. We simply could not have gotten through this refactor without their invaluable help.

