I'm currently working on a project to run a parallelized OpenMDAO optimization problem. The problem I would like to run is the vehicle tutorial: http://openmdao.org/releases/0.1.5/docs/user-guide/example.html. I will be running this optimization on a cluster of 4 Raspberry Pi machines. I've got the Pis talking properly to each other, and each can run OpenMDAO correctly, but I'm not sure how to allocate resources to them through the resource allocators. I have looked at the API for ClusterAllocator, but I'm not sure how to use it properly. So, I have one main question:

What are the steps required to set up the resource allocator and use it to allocate tasks to different machines?

Any inputs are greatly appreciated. Thank you very much.


asked 14 Nov '14, 00:29


Shreyyas V

Wow, you either have the best timing ever or the worst! I'm not sure which. First, have you seen this recent post I made about upcoming changes to OpenMDAO? If not, please take a moment to read it. It's got some information relevant to you.

Second, we were just talking the other day about totally re-writing this car optimization tutorial. The way it is written right now is not very clear and could be done much better. So, as a warning, I'm not sure it's the best example of how to do vehicle design with OpenMDAO. We have recently done some other work with time integration and discrete optimization that I want to port over to this problem to make it much more interesting and useful. The update to this example is going to happen in December, when we get an intern to work on it.

Given the above two points, I'm not confident it's worth the effort to write a Resource Allocation Manager for your application, because it could be obsolete soon. However, our new MPI capabilities won't be ready for you to test for at least a month. So, in case you don't want to wait, I need a bit more information before I can point you in the right direction.

How did you link up the Pis? Are they set up as an MPI cluster, or just a loose network of computers? Is there some kind of queuing system that needs to be dealt with?


answered 14 Nov '14, 08:03


justingray ♦♦

Oh wow, haha, hopefully great timing! Yep, I saw that; it's really interesting, and switching over to MPI sounds like a great idea. The Pis I'm using right now all use MPI, so that would be very convenient. For now, though, if it's possible, I would like to work with the resources we have. This is a project being performed for a university, and I'm expecting to complete it by the end of this month. So if you don't mind pointing me in the right direction, that would be great.

In terms of the actual project, the idea is to run an OpenMDAO optimization problem on the 4 RPis and compare the performance to running the same problem on a standard PC. I hope to scale up to more RPis if the optimization works properly on these 4 first. If you think there is a better optimization to run than the vehicle one, kindly let me know too. I was also looking into the Atlas plugin: https://github.com/OpenMDAO-Plugins/Atlas. Here's some information about the cluster:

The RPi cluster is composed of 4 Model B RPis. Each has MPICH2 and MPI for Python installed to pass information between them, and running test examples like `mpiexec -f machinefile -n 4 ~/mpi-build/examples/cpi` shows the multiple Pis generating output properly. (If it helps, I based this MPI cluster on the 64-node RPi cluster from the University of Southampton: https://www.southampton.ac.uk/~sjc/raspberrypi/pi_supercomputer_southampton.htm.) There is no queuing system that needs to be dealt with. Physically, they are simply linked together by Ethernet through a router.
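For readers unfamiliar with MPICH setups, the machinefile referenced above is just a plain list of hostnames, one per line. A hypothetical sketch (the actual hostnames of this cluster are not given, so `pi01`–`pi04` are placeholders):

```shell
# Write a hypothetical machinefile for a 4-node cluster.
# The hostnames below are placeholders; substitute the real
# hostnames or IP addresses of the Pis.
cat > machinefile <<'EOF'
pi01
pi02
pi03
pi04
EOF
```

`mpiexec -f machinefile -n 4 <program>` then launches one process on each listed host.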

Truly appreciate your inputs!

(14 Nov '14, 13:51) Shreyyas V

OK, so you mostly want to do embarrassingly parallel stuff? How were you expecting to parallelize an optimization? None of the example optimizations we have work in parallel right now. That is something that will be working in the near future with the MPI stuff, but not yet.

For now, if you wanted to run a DOE or a CaseIteratorDriver in parallel, even around an optimization, that's about the only thing I can think of that will work for you. In that case, you really are treating your Pi cluster as a local network of disconnected computers, on each of which you'll want to start up a special server, and then treat them as dumb compute resources. Does that sound like what you want to do?
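The "embarrassingly parallel cases" idea can be sketched in plain Python with the standard library's `multiprocessing` module. This is a conceptual stand-in, not the actual OpenMDAO CaseIteratorDriver API; `run_case` and its `bore` argument are made up for illustration:

```python
# Conceptual sketch: evaluate independent "cases" in parallel,
# analogous to what a parallel CaseIteratorDriver would do.
# This is plain Python, not the OpenMDAO API.
from multiprocessing import Pool

def run_case(bore):
    """Stand-in for one expensive model evaluation; 'bore' is a
    made-up design variable used only for illustration."""
    return bore, bore ** 2  # pretend this is the objective value

if __name__ == '__main__':
    cases = [0.06, 0.07, 0.08, 0.09]  # e.g. one case per Pi
    with Pool(processes=4) as pool:   # 4 workers, like 4 Pis
        results = pool.map(run_case, cases)
    print(results)
```

The key property is that each case is independent, so the workers never need to talk to each other while running.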

If you're looking for something that is fairly compute-intensive and well sorted out, I suggest the CADRE satellite problem (https://github.com/openmdao-plugins/cadre), especially if you're just going to treat it as a black box for benchmarking.

(14 Nov '14, 14:03) justingray ♦♦

I see. Yep, the original intention was to run an embarrassingly parallel optimization problem on the Pis. That would be ideal, since it would truly show the power of Raspberry Pi parallelization compared to a normal PC. But as we saw with the vehicle example, there are numerous dependencies among the three components. I was still deciding which components could be parallelized. Once I decided which pieces could be isolated in the optimization, I would have used the resource allocator to assign those tasks. In the end, we can imagine one Pi doing more work and the other Pis contributing pieces where parallelization could help performance. What steps do you think are needed to parallelize an example optimization? Do you think such a parallelization would be possible with CADRE?

Considering the other option you mentioned, would you happen to have any advice on running DOEs or a CaseIteratorDriver around an optimization in parallel on these Pis? For the DOEs/CaseIteratorDriver, I believe the idea would be to run a problem on each of the Pis, but with varying parameter values.



(14 Nov '14, 21:09) Shreyyas V

CADRE actually has some opportunity for parallelization as you described, where different parts would run on different Pis. But getting that to work isn't going to happen in any current release of OpenMDAO. You'll need the MPI capability that we're just developing now. We'll be very happy to have you as an early beta tester, but we just need a bit more time.

The CADRE problem has 6 independent orbits that it analyzes, which could all be run in parallel. You could easily modify it to have only 4 orbits to match your current hardware. I've never set up a Pi cluster before, but you'll need it to look like a regular cluster interface, where you can issue an mpirun command with 4 processors and have each process land on its own Pi. CADRE would be a decent choice, since each of the parallel sub-problems is fairly large, so the MPI communication overhead should be low relative to the compute. Since you only have Ethernet connections between the Pis, MPI overhead will be a bit heavier. Regardless, this is a really neat idea. Once we have our parallel capability up and running, I think we'll be excited to see if you can get this working!

Running things in parallel with a CaseIteratorDriver, using the RAM (Resource Allocation Manager), is possible right now. What you would want to do is pick one of the design variables and remove it from the optimizer. Then pick 4 values for it, and have the CaseIteratorDriver run 4 separate optimizations, one for each of those values. If you're going to take this route, I suggest you get it running in serial first. Second, set it to run in parallel on your local multi-core machine. Lastly, use the RAM to run on the Pis. I think the RAM will be fairly easy to set up. You should be able to use an existing allocator class we have, ClusterAllocator, to let your OpenMDAO process delegate work to the Pis. You could run from one of the Pis, or even your laptop, and just allocate to the Pis.
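The recipe above (remove one design variable from the optimizer, then run one independent optimization per fixed value) can be sketched in plain Python. The toy objective and the simple golden-section minimizer below are stand-ins for the real model and optimizer, chosen only to make the pattern concrete:

```python
# Sketch of the suggested pattern: fix one design variable at 4
# values and run an independent optimization for each, in
# parallel. The toy model and 1-D minimizer are illustrative only.
from multiprocessing import Pool

def model(x, fixed):
    # Stand-in objective; its minimum over x sits at x == fixed.
    return (x - fixed) ** 2 + fixed

def optimize_for(fixed):
    """Golden-section search over x with 'fixed' held constant."""
    lo, hi = -10.0, 10.0
    phi = 0.6180339887498949
    for _ in range(80):            # shrink the bracket
        a = hi - phi * (hi - lo)
        b = lo + phi * (hi - lo)
        if model(a, fixed) < model(b, fixed):
            hi = b
        else:
            lo = a
    x = 0.5 * (lo + hi)
    return fixed, x, model(x, fixed)

if __name__ == '__main__':
    fixed_values = [1.0, 2.0, 3.0, 4.0]   # one case per Pi
    with Pool(processes=4) as pool:
        for fixed, x, f in pool.map(optimize_for, fixed_values):
            print(fixed, round(x, 4), round(f, 4))
```

In the real setup, `optimize_for` would be a whole OpenMDAO optimization run on a remote Pi, and the pool would be replaced by the CaseIteratorDriver plus the RAM.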


answered 15 Nov '14, 07:53


justingray ♦♦

Got it, that sounds good. I'll go ahead with your suggestions: run the serial version first, then in parallel on my multi-core PC, and finally on the Pis using the RAM. I just had a couple of questions on CADRE and the RAM delegation.

First, I was looking at the docs for CADRE, and it looks like I need a SNOPT license to run the full optimization with all the design points. Is that actually necessary, or can I run the optimizations you mentioned above without a SNOPT license? If I don't require it, then I simply run the lines mentioned in http://openmdao-plugins.github.io/CADRE/full.html, making the following changes, correct?

    top = CADRE_Optimization(n=1500, m=300, npts=4)
    ['Data', 'ConCh', 'ConDs', 'SOC']:

Second, to use the ClusterAllocator, I've made the following test.py script. Is this the proper way to create a ClusterAllocator and assign machines to it? Currently, it makes a connection to pi04, then stalls, and then reports that no hosts could be connected. Is there anything you think I'm missing or could improve here?

(screenshot of test.py — image missing)

Thanks a bunch!


answered 15 Nov '14, 15:03


Shreyyas V

You will need SNOPT to run the full problem. We don't know of any open-source optimizers that can handle it. You could maybe scale the problem way down and try SLSQP, but I'm not optimistic about that option. If you contact the SNOPT folks, I'm pretty sure you can get an academic license.

Take a look at this test to see how to use ClusterAllocator. If you can't get it working from that example, I'll take a more detailed look.

(15 Nov '14, 17:45) justingray ♦♦

Definitely. I'll call them up and see if I can get a license.

Will do. Which example test should I be looking at btw?

(15 Nov '14, 19:07) Shreyyas V

@Shreyyas V How did you install OpenMDAO on the Raspberry Pi? I am using Python 3.


answered 01 Aug '15, 09:25





Seen: 5,380 times

Last updated: 01 Aug '15, 09:25
