
An open-source MDAO framework written in Python.

February 6, 2015
by admin

OpenMDAO, Now In Anaconda

A lot of our users also use Anaconda, and until now, we didn’t support installation into it. But as of today, we are rolling out our first effort at making OpenMDAO installable into an Anaconda environment, part of a broader attempt to make our installation process faster and easier. Since we have only tried and tested it on the Windows, Linux, and Mac platforms we have available, you may encounter some troubles (or not), and we encourage you to come to the forums to let us know what setup you have and what degree of success you have with the Anaconda installation.

Caveat: Thanks to a bug in Traits (we’ve put in a bug report and await a new version), we can’t currently support an Anaconda install on a Windows2012_64bit installation unless you’re using a 32-bit Anaconda installed on the 64-bit architecture.  We will announce when that problem is resolved for our 64-bit Windows users.

As of this posting, you can choose to use Anaconda to install the development version of OpenMDAO (for the brave, this method may have some kinks), or the 0.12.0 release version (the more stable of the two methods).  Documentation for attempting this install is currently only in our dev-docs:

Anaconda Installation

We wish you luck, and hope that this makes OpenMDAO easier to install and use than ever.

January 27, 2015
by admin

OpenMDAO Version 0.12.0

One of OpenMDAO’s core goals is to support tight integration of high-fidelity codes into design optimization problems. With the release of version 0.12.0 we’ve taken a big step toward realizing that goal. This release includes the most significant refactor of our code base since the initial release in 2010. It also represents the largest break in backward compatibility we’ve ever done. Breaking existing models for our users is not something we take lightly, but we feel that the great improvements in efficiency, scalability, and support for parallel computing justified the cost. So, let us tell you about all the new things you get for upgrading your models. Then we’ll talk about the things that have changed and the features that have been removed.

Data Passing

The most important aspect of the refactor is how we handle data passing between components.  Previously, the framework treated each component as its own individual entity and made copies of data to pass between them. In order to integrate CFD and FEA codes, we needed to handle hundreds of millions of variables distributed across different machines. At that scale, copying data to pass between codes simply does not work. Instead, we needed to re-engineer things so that multiple components share data without copying it.
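To give a flavor of the idea, here is a minimal sketch (illustrative only, with made-up names, and not the actual OpenMDAO internals) of sharing data through views of one vector instead of copying it between components:

```python
import numpy as np

# Illustrative sketch only -- not the real OpenMDAO data-passing code.
# Rather than copying an output array from one component to the input
# of the next, both components hold views into one shared vector, so
# "passing" the data costs nothing, no matter how large the arrays are.
unknowns = np.zeros(6)

aero_output = unknowns[0:3]    # a view into the shared vector (no copy)
struct_input = unknowns[0:3]   # the downstream component sees the same memory

aero_output[:] = [1.0, 2.5, 3.0]   # the aero component writes its result

# the structures component now sees the data without any transfer
print(struct_input)   # [1.  2.5 3. ]
```

The same slicing idea extends naturally to PETSc distributed vectors, where each process owns one contiguous chunk of the global array.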

Furthermore, we’ve built the data-passing system with PETSc distributed arrays at its core so that the framework itself can run efficiently in parallel across a cluster. To date, we’ve used the new data-passing system on problems with as many as seventeen million variables with great success. You can see these results in a recent paper we published on the design optimization of a small satellite platform. We’re also really excited about some initial results we’ve seen running OpenMDAO in parallel. Although the parallel capability is not in the V 0.12.0 release, we could not run in parallel without this refactor.

Backwards Compatibility Issues

Of course, nothing in life is free! We had no choice but to break backwards compatibility to make this all work. We now require you to initialize all your array variables before you run your model.  We expect this issue to have the most widespread impact, so it’s worth discussing in a bit of detail.

In older versions of our software, you could declare array variables without giving an initial value. You could then set an array at the beginning of your model and rely on data connections to cascade that array sizing down through the rest of the model. However, OpenMDAO now needs to know the sizes of all your arrays for all components as part of its initial setup. In a distributed-computing context, this makes perfect sense: we must be able to allocate distributed arrays of fixed sizes and then map the components’ I/O to those arrays, and we can’t do that unless we know how big the arrays are! So, many of you are going to get an error about improperly sized arrays the first time you try to run your older models in version 0.12.0. We suggest that you use an argument to the __init__ method of your components to tell them how big to make your arrays. If that doesn’t work for you, let us know, and we’ll help you figure out how to manage this new model requirement.
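As a concrete illustration of that pattern (plain Python with hypothetical names, not the actual OpenMDAO component API), the size comes in through __init__ and the arrays are allocated immediately, so every shape is known before setup:

```python
import numpy as np

# Hypothetical component sketch (names are illustrative): the array
# size is an __init__ argument, so the input and output arrays are
# fully allocated, with known shapes, before the model ever runs.
class PressureComp(object):
    def __init__(self, num_points):
        # allocate fixed-size arrays up front instead of leaving them
        # unsized and hoping a connection will shape them later
        self.x = np.zeros(num_points)         # input array
        self.pressure = np.zeros(num_points)  # output array

comp = PressureComp(num_points=100)
print(comp.pressure.shape)   # (100,)
```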

If you’ve been using any of our RemoteAllocator features, especially if you were using them to run things in parallel, then you should be aware that this no longer works in version 0.12.0. To get a much more powerful and efficient MPI-based parallel capability, we needed to sacrifice the RemoteAllocator features. Depending on how you were using it, we might be able to help you find a solution. But if you were using it for parallelizing model executions, then you should hold off upgrading until we release our MPI capability in the next few months. For now, if you need this feature, we suggest you stick with V 0.10.3.

Removal of GUI

The last major change in version 0.12.0 is the removal of the GUI. To be honest, we were never 100% happy with the user experience from our GUI and supporting an interactive running mode turned out to be a huge source of code complexity and, as a result, bugs. We found this out the hard way, when working on the original derivatives implementation in version 0.10.0. So we decided to remove the GUI and its requirements for interactive execution.

We’re planning to re-introduce some form of GUI in the next year, but it will look and operate much differently. If you have a GUI model that you want to try in 0.12.0, let us know, and we’ll help you convert it to a text-based model.

What’s Coming

Ok. Back to the good news! We’re really excited about version 0.12.0 because it is paving the way for true high-fidelity integration. We’re not quite there yet, but the way forward is clear. The framework now scales well, and in a distributed sense, to the order of magnitude needed to attack high fidelity MDO problems. But more importantly, OpenMDAO’s ability to automatically implement a coupled adjoint has the potential to make efficient code coupling much easier to implement. In the coming year we’ll be working on high fidelity integration and distributed execution. We’ll be ready to release some initial MPI capability in the next few months.


Before we finish up, it’s important to recognize the significant contributions of our collaborators. First, we’d like to acknowledge some external code contributions to the OpenMDAO project from Rémi Lafage of ONERA and Rémi Vauclin of ISAE. They spent a considerable amount of time and effort working with the MetaModel capabilities of the framework, and recently contributed a major upgrade for it back to the community. They built a multi-fidelity version of the class and also built a multi-fidelity Kriging surrogate model to go with it. They graciously worked with us on a couple of iterations of their code so we could put it into the main code base.

Next we’d like to acknowledge Dr. Andrew Ning (Brigham Young University) for bravely being the very first external user of our derivatives system. Dr. Ning led his research group in implementing analytic derivatives for an extremely complex aero-structural wind turbine design optimization with over 100 components in it. He suffered through more bugs than we’d care to admit. Without his help, we wouldn’t have the extensive test set needed to give us confidence that the new code in version 0.12.0 is working well.

Last, we’d like to thank Dr. John Hwang and Dr. Joaquim Martins (MDO Lab, University of Michigan). They helped our team to understand the application of MDO at scales that are orders of magnitude larger than we had initially been thinking about. But more importantly, they developed the fundamental theory and provided us with several prototype implementations of the data-passing scheme, which were pivotal in ensuring that the OpenMDAO code was well designed and efficient. We simply could not have gotten through this refactor without their invaluable help.

December 12, 2014
by admin

2015 Wind Energy Systems Engineering Workshop

If you’re at all interested in systems analysis, systems engineering, wind turbines, or MDAO, you should consider going to the 2015 Wind Energy Systems Engineering Workshop hosted by our colleagues at the National Renewable Energy Lab (NREL). The workshop is held only once every two years, and it’s well worth attending. You’d be hard pressed to find a crowd more committed to looking at the wind energy space from a systems perspective than this group. Go to their website to find details and register.

November 13, 2014
by admin

OpenMDAO released!

OpenMDAO has been released today.  As you can see from the news article before this one, it will be the last release of its kind.  Grab it here, and read about what’s new here.

November 12, 2014
by admin

Coming Soon: OpenMDAO V 0.12

Version 0.12? Where did v 0.11 go? Well, we were talking to our good friends at Microsoft and they suggested that we should skip a number in our versioning to help distinguish the newer product more. So, Windows 8.1 -> Windows 10 and OpenMDAO 0.10.3.x -> OpenMDAO 0.12. Expect this to be released in late November or early December of 2014.

All kidding aside though, the next release of OpenMDAO is a really big one. It’s the culmination of 10 months of work to make OpenMDAO more efficient, more scalable, and to prepare us for some really cool MPI-based capability in the future. On a small satellite design problem with 25,000 design variables, the new version of OpenMDAO is 20% faster. Most of these changes are under-the-hood things that users won’t ever have to look at. But there are some major impacts to this upgrade that need to be discussed.

  1. The GUI won’t work in v0.12.0 and greater
  2. All array variables must now be fully initialized before running your model
  3. Running a DOE or CaseIteratorDriver in parallel is going to change in a big way
  4. A single instance of a component can no longer exist in multiple workflows
  5. The geometry interface is changing

Before getting into the details, let’s start with this: if any of these issues is a show stopper for you, then DON’T upgrade to the new version yet, at least not for your current models. We do suggest you build any new models with the newer version, since we’re not going to provide any serious support for the older versions going forward. Ok, now onto the more detailed discussion of each point.

The GUI is going away

It’s not going away forever, but just for now. The way the GUI worked added a significant amount of complexity and overhead to the framework, so we needed to remove the GUI’s plumbing to get things working better. We’ve figured out how to put the GUI back in, without hurting our performance. In fact, the end result is going to be a lot better. But we wanted to get the new framework out there for users to start working with and didn’t want you to have to wait for us to re-build the GUI. So for now, no more GUI. If anyone wants to try V0.12 with a model that was previously in the GUI, then get in touch with us. We’ll help you port the model out of the GUI so you can try things out!

All Array variables need to be initialized

Some of you might have made models with array inputs or outputs where you didn’t set a default size or provide an initial value for the arrays, relying instead on the connections you made to propagate the array values downstream to other components. However, in order to make things work properly in the new version, the framework needs to know the shape of all arrays before execution starts. We’ve run into a few of our own models where we didn’t bother setting initial values for arrays, so it’s expected that you’ll run into this as well. We’re looking for a way to work around this, but for now, it is a requirement.

CaseIterDriver and Parallel Execution

We’re not sure how many folks have actually tried out the “sequential=False” option for DOEDriver and CaseIteratorDriver. However, if you have tried it, expect major changes here. In v 0.10.3 and older, this used something called a ResourceAllocationManager (RAM) to shell out jobs to workers. It could be set up to work locally without much effort on your part, or, with some work, you could point a RAM at a remote cluster. However, in V 0.12 and up, this is all going to work very differently. The RAM is going away, in favor of a more direct approach using an MPI-based implementation. Those are all the details we wish to share on this subject at the moment, but just know that you can look forward to a lot of MPI-based awesomeness from us in the near future. For now, if you need “sequential=False”, stick with 0.10.3 or older.
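For the curious, here is a hypothetical sketch (not the actual implementation, and the function name is made up) of the kind of case distribution an MPI-based driver might do: each process picks out its own share of the DOE cases by rank, round-robin.

```python
# Hypothetical sketch of dividing DOE cases among MPI processes.
# Each rank takes every size-th case starting at its own rank, so
# the full case list is covered exactly once with no coordination.
def my_cases(cases, rank, size):
    return cases[rank::size]

all_cases = list(range(10))           # ten DOE cases, e.g. design points
print(my_cases(all_cases, 0, 4))      # rank 0 runs [0, 4, 8]
print(my_cases(all_cases, 1, 4))      # rank 1 runs [1, 5, 9]
```

In a real MPI program, `rank` and `size` would come from the communicator (e.g. `MPI.COMM_WORLD.rank` and `.size` in mpi4py), and each process would then run only its own slice of cases.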

One Component in Multiple Workflows

This was a neat idea that we had early on in OpenMDAO. It sounded great at the time, and we did find an occasional application for it. The two cases of note were our implementation of Efficient Global Optimization (EGO) and our implementation of the BLISS architecture. In 0.10.0 we changed the whole MetaModel interface, and EGO no longer required a single component in multiple workflows. That left only BLISS. For a number of reasons, we didn’t feel that this use case warranted the complexity that came with supporting the feature, so we removed it and simplified the code base a bunch. If this feature was something you were actually using a lot, please get in touch with us. We really want to talk to you about it and see how best to port your models to the latest version.

The Geometry Interface is Changing

Given that the GUI is going away, this should not be a huge surprise. But the change is a bit more fundamental than just not having a native viewer anymore. We wanted to take a clean look at how geometry interfaces with the framework, and especially with the derivatives system. Plus, truth be told, our old geometry API was really clunky, so we’ve cleaned it up and simplified it a lot. If you happen to have written something to our old geometry API and would like some help moving it to the new one, just let us know!

October 14, 2014
by admin

Design Optimization: Quiet Aircraft Wing Slat With Abaqus

During low speed maneuvers associated with approach and landing, typical transport-class aircraft deploy high lift devices to improve stall and lift characteristics. Unfortunately, this also results in increased airframe noise, of which the leading-edge slat is a significant component. A proposed solution to mitigate slat noise is the development of a slat-cove filler (SCF). The SCF design considered in a current study incorporates shape memory alloys (SMAs), which are a class of active material that undergoes a solid-to-solid phase transformation allowing for large recoverable deformations. SMAs are considered in the current work in order to satisfy three conflicting design requirements: 1) stiffness under aerodynamic loads, 2) compliance to accommodate slat movement, and 3) low overall weight. The research is being done by W. Scholten and D. Hartl from the Texas Institute for Intelligent Materials and Structures associated with the Texas A&M University Department of Aerospace Engineering in close collaboration with T. Turner at NASA Langley. They are using OpenMDAO to perform a structural design optimization of the SMA-based SCF. The goal of the optimization is to minimize the actuation torque needed to retract the slat and attached SCF. The optimization process considers the highly nonlinear SCF structural response associated with aerodynamic loads and slat retraction/deployment. Structural analysis of SCF design configurations is performed using the Abaqus Unified FEA suite in combination with custom material models.

OpenMDAO managed the integration of the material and FEA modeling and performed the optimization. The animation above shows three of the hundreds of designs iteratively considered during optimization, including both feasible and infeasible solutions. But lest you think this is merely a very interesting optimization problem, it turns out they have built and tested a number of demonstration prototypes too!

August 28, 2014
by admin

OpenMDAO v0.10.1 Released

OpenMDAO has a new release out, 0.10.1. It’s a minor release. You can grab the new code here, and read about it here.

April 1, 2014
by admin

Aero-Structural Optimization of Wind Turbine Blades

A team of researchers from the Technical University of Denmark (F. Zahle, D. Verelst, F. Bertagnolio, and C. Bak) are using OpenMDAO to perform an aero-structural optimization of wind turbine blades. Their goal is to design airfoils that are more effective over the varied wind conditions seen by wind turbines in real-world conditions. They performed an airfoil optimization that considered aerodynamics at multiple wind conditions with clean and rough blade surfaces. They also considered the structural needs of the blades in order to retain structural integrity. Aerodynamics computations are handled by XFOIL or their in-house CFD code, EllipSys2D, and the structural calculations are handled by BECAS. OpenMDAO is managing the interdisciplinary coupling between the aerodynamics and the structures and is facilitating the switching between XFOIL and EllipSys2D. The researchers have made a nice animation of the optimization process.
