# mpi.py

A collection of MPI utilities.

class openmdao.utils.mpi.FakeComm[source]

Bases: object

Fake MPI communicator class used if mpi4py is not installed.

Attributes
rank : int

Index of the current proc; value is 0 because there is only 1 proc.

size : int

Number of procs in the comm; value is 1 since MPI is not available.

__init__()[source]

Initialize attributes.
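FakeComm exposes the communicator attributes serial code reads most often, so the same code path works whether or not mpi4py is importable. A minimal sketch of that fallback pattern (the class body below is an illustration of the documented attributes, not the actual OpenMDAO source):

```python
# Sketch of the fallback pattern: a fake communicator exposing the
# attributes serial code needs, used when mpi4py cannot be imported.
class FakeComm(object):
    """Fake MPI communicator used if mpi4py is not installed."""

    def __init__(self):
        self.rank = 0  # only one proc, so its index is 0
        self.size = 1  # a single-process "communicator"

try:
    from mpi4py import MPI          # real MPI if available
    comm = MPI.COMM_WORLD
except ImportError:
    comm = FakeComm()               # serial fallback

# downstream code can read comm.rank / comm.size either way
```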

openmdao.utils.mpi.check_mpi_env()[source]

Determine if the environment variable governing MPI usage is set.

Returns
bool

True if MPI is required, False if it’s to be skipped, None if not set.
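The tri-state return (True / False / None) lets callers distinguish "MPI explicitly required", "explicitly skipped", and "unset, decide automatically". A hedged sketch of that logic (the variable name `OPENMDAO_USE_MPI` and the accepted value strings are assumptions here, not taken from the source):

```python
import os

def check_mpi_env():
    # The variable name and accepted values are assumptions for
    # illustration; only the tri-state contract comes from the docs.
    val = os.environ.get('OPENMDAO_USE_MPI')
    if val is None:
        return None                 # not set: caller decides
    if val.lower() in ('1', 'true', 'yes', 'always'):
        return True                 # MPI is required
    return False                    # anything else: skip MPI
```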

openmdao.utils.mpi.check_mpi_exceptions(fn)[source]

Wrap a function in multi_proc_exception_check.

This does nothing if not running under MPI.

Parameters
fn : function

The function to be wrapped.

Returns
function

A wrapper for fn that raises an exception on all procs if one is raised on any.
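As a decorator, check_mpi_exceptions lets a whole method be marked "must fail on every rank if it fails on one". A serial sketch of how such a wrapper can be built (the `.comm` attribute lookup and the no-op stand-in context manager are assumptions; the real decorator defers to multi_proc_exception_check):

```python
import contextlib
import functools

@contextlib.contextmanager
def _exception_check(comm):
    # Serial stand-in: outside MPI, nothing is intercepted.
    yield

def check_mpi_exceptions(fn):
    """Sketch: run fn's body inside the exception-check context."""
    @functools.wraps(fn)
    def wrapper(self, *args, **kwargs):
        # Assumes the owning object carries its communicator as .comm.
        with _exception_check(getattr(self, 'comm', None)):
            return fn(self, *args, **kwargs)
    return wrapper

class Solver(object):
    comm = None  # serial: no MPI communicator

    @check_mpi_exceptions
    def solve(self):
        return 42
```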

openmdao.utils.mpi.debug(*msg)[source]

Print debug message to stdout.

Parameters
*msg : tuple of str

Strings to be printed.

openmdao.utils.mpi.multi_proc_exception_check(comm)[source]

Raise an exception on all procs if it is raised on one.

The exception raised will be the one from the lowest rank on which an exception occurred.

Wrap this around code that you want to globally fail if it fails on any MPI process in comm. If not running under MPI, don’t handle any exceptions.

Parameters
comm : MPI communicator or None

Communicator from the ParallelGroup that owns the calling solver.

Yields
None
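The context-manager form makes the "fail everywhere or nowhere" contract visible at the call site. Below is a sketch of the documented semantics: the serial branch matches "if not running under MPI, don't handle any exceptions", while the allgather-based MPI branch is an assumed shape, not the actual implementation:

```python
import contextlib

@contextlib.contextmanager
def multi_proc_exception_check(comm):
    if comm is None or getattr(comm, 'size', 1) == 1:
        # Not under MPI: let exceptions propagate untouched.
        yield
        return
    exc = None
    try:
        yield
    except Exception as e:
        exc = e
    # Assumed MPI branch: gather one entry per rank, then re-raise the
    # exception from the lowest failing rank on every process.
    for err in comm.allgather(exc):
        if err is not None:
            raise err

# Usage: globally fail the block if it fails on any process in comm.
with multi_proc_exception_check(None):
    result = 2 + 2
```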

openmdao.utils.mpi.multi_proc_fail_check(comm)[source]

Raise an AnalysisError on all procs if it is raised on one.

Wrap this around code that you want to globally fail if it fails on any MPI process in comm. If not running under MPI, don’t handle any exceptions.

Parameters
comm : MPI communicator or None

Communicator from the ParallelGroup that owns the calling solver.

Yields
None

openmdao.utils.mpi.use_proc_files()[source]

Cause stdout/err from each MPI process to be written to [rank].out.
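Per-rank output files keep the interleaved stdout of many processes readable. A serial sketch of the redirection (taking rank as a parameter and returning the file handle purely for illustration; the real function takes no arguments and derives the rank from the MPI communicator):

```python
import sys

def use_proc_files(rank=0):
    # Redirect this process's stdout/stderr to '<rank>.out'.
    # The rank parameter and return value are sketch conveniences so a
    # caller can restore and close the stream; the real function
    # gets the rank from MPI and returns nothing.
    ofile = open('%d.out' % rank, 'w')
    sys.stdout = sys.stderr = ofile
    return ofile
```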