Package openmdao.util

This package contains a number of utilities that are used inside of openmdao. It does not depend on any other openmdao package.

addreqs.py

A script to add a group of required packages to the current python environment.

openmdao.util.addreqs.add_reqs(argv=None, default_flink=None)[source]

debug.py

Routines to help out with obtaining debugging information

openmdao.util.debug.dumpit(obj, stream=sys.stdout, recurse=True, ignore_address=True)[source]

Try to dump out the guts of an object, and optionally its children.

openmdao.util.debug.print_funct_call(funct, *args, **kwargs)[source]
openmdao.util.debug.traceit(frame, event, arg)[source]

A function useful for tracing Python execution. Wherever you want the tracing to start, insert a call to sys.settrace(traceit).
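
For example (a minimal sketch; the function and file names are illustrative), tracing can be switched on just before the code of interest and off again afterward:

import sys
from openmdao.util.debug import traceit

def run_model():
    sys.settrace(traceit)   # every call/line/return event from here on is reported
    # ... code to be traced ...
    sys.settrace(None)      # stop tracing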

decorators.py

Some useful decorators

openmdao.util.decorators.add_delegate(*delegates)[source]

A class decorator that takes delegate classes or (name,delegate) tuples as args. For each tuple, an instance with the given name will be created in the wrapped __init__ method of the class. If only the delegate class is provided, then the instance created in the wrapped __init__ method will be named using an underscore (_) followed by the lower case name of the class. All of the public methods from the delegate classes will be added to the class unless there is an attribute or method in the class with the same name. In that case the delegate method will be ignored.
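
As a minimal sketch (the Saver delegate and its save method are hypothetical), decorating a class forwards the delegate's public methods onto it:

from openmdao.util.decorators import add_delegate

class Saver(object):          # hypothetical delegate class
    def save(self, path):
        print('saving to %s' % path)

@add_delegate(Saver)          # an instance is created as self._saver in __init__
class Model(object):
    pass

m = Model()
m.save('out.dat')             # forwarded to m._saver.save('out.dat')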

openmdao.util.decorators.forwarder(cls, fnc, delegatename)[source]

Returns a method that forwards calls on the scoping object to calls on the delegate object. The signature of the delegate method is preserved in the forwarding method.

openmdao.util.decorators.print_timing(func)[source]
openmdao.util.decorators.replace_funct(fnc, body)[source]

Returns a function with a new body that replaces the given function. The signature of the original function is preserved in the new function.

openmdao.util.decorators.stub_if_missing_deps(*deps)[source]

A class decorator that will try to import the specified modules and, in the event of failure, will stub out the class, raising a RuntimeError that explains the missing dependencies whenever an attempt is made to instantiate the class.

deps: str args
args in deps may have the form a.b.c or a.b.c:attr, where attr would be searched for within the module a.b.c after a.b.c is successfully imported.
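
A minimal sketch (the dependency names and the wrapped class are illustrative):

from openmdao.util.decorators import stub_if_missing_deps

@stub_if_missing_deps('numpy', 'scipy.sparse:csr_matrix')
class SparseSolver(object):
    # If numpy or scipy.sparse.csr_matrix cannot be imported, instantiating
    # SparseSolver raises a RuntimeError naming the missing dependencies.
    pass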

dep.py

Routines analyzing dependencies (class and module) in Python source.

class openmdao.util.dep.ClassInfo(name, bases, decorators, impls)[source]

Bases: object

resolve_true_basenames(finfo)[source]

Take module pathnames of base classes that may be indirect names and convert them to their true absolute module pathnames. For example, “from openmdao.main.api import Component” implies the module path of Component is openmdao.main.api.Component, but inside of openmdao.main.api is the import statement “from openmdao.main.component import Component,” so the true module pathname of Component is openmdao.main.component.Component.

finfo: list

class openmdao.util.dep.PythonSourceFileAnalyser(fname, stop_classes=None)[source]

Bases: ast.NodeVisitor

Collects info about imports and class inheritance from a Python file.

translate(finfo)[source]

Take module pathnames of classes that may be indirect names and convert them to their true absolute module pathnames. For example, “from openmdao.main.api import Component” implies the module path of Component is openmdao.main.api.Component, but inside of openmdao.main.api is the import statement “from openmdao.main.component import Component,” so the true module pathname of Component is openmdao.main.component.Component.

visit_ClassDef(node)[source]

This executes every time a class definition is parsed.

visit_Import(node)[source]

This executes every time an “import foo” style import statement is parsed.

visit_ImportFrom(node)[source]

This executes every time a “from foo import bar” style import statement is parsed.

class openmdao.util.dep.PythonSourceTreeAnalyser(startdir=None, exclude=None)[source]

Bases: object

find_inheritors(base)[source]

Returns a list of names of classes that inherit from the given base class.
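
For example (a sketch; the source directory and base class path are illustrative), the analyser can list all classes in a source tree that derive from a given base:

from openmdao.util.dep import PythonSourceTreeAnalyser

psta = PythonSourceTreeAnalyser(startdir='src/mypackage')
for klass in psta.find_inheritors('openmdao.main.component.Component'):
    print(klass)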

class openmdao.util.dep.StrVisitor[source]

Bases: ast.NodeVisitor

get_value()[source]
visit_Name(node)[source]
visit_Str(node)[source]

distutils_fix.py

Importing this file will fix problems we’ve found in distutils.

Current fixes are:

Update the library_dir_option function in MSVCCompiler to add quotes around /LIBPATH entries.

doctools.py

A utility to extract Traits information from the code and get it into the Sphinx documentation.

Note

No traits docs will be generated unless the class containing the traits has a doc string!

openmdao.util.doctools.get_traits_info(app, what, name, obj, options, lines)[source]

Gets traits info.

openmdao.util.doctools.setup(app)[source]

Connect the doctools to the process-docstring hook.

dumpdistmeta.py

If run as main, dumpdistmeta.py will print out either a pretty-printed dict full of the metadata found in the specified distribution or just the value of a single piece of metadata if metadata-item is specified on the command line. The distribution can be in the form of an installed egg, a zipped egg, or a gzipped tar file containing a distutils distribution.

usage: dumpdistmeta.py distribution [metadata-item]

Example output:

$ dumpdistmeta.py pyparsing-1.5.1-py2.5.egg
{'SOURCES': ['README',
             'pyparsing.py',
             'setup.py',
             'pyparsing.egg-info/PKG-INFO',
             'pyparsing.egg-info/SOURCES.txt',
             'pyparsing.egg-info/dependency_links.txt',
             'pyparsing.egg-info/top_level.txt'],
 'author': 'Paul McGuire',
 'author-email': 'ptmcg@users.sourceforge.net',
 'classifier': 'Programming Language :: Python',
 'dependency_links': [],
 'description': 'UNKNOWN',
 'download-url': 'http://sourceforge.net/project/showfiles.php?group_id=97203',
 'entry_points': {},
 'home-page': 'http://pyparsing.wikispaces.com/',
 'license': 'MIT License',
 'metadata-version': '1.0',
 'name': 'pyparsing',
 'platform': None,
 'py_version': '2.5',
 'summary': 'Python parsing module',
 'top_level': ['pyparsing'],
 'version': '1.5.1',
 'zip-safe': False}

Example output:

$ dumpdistmeta.py pyparsing-1.5.1-py2.5.egg license
MIT License
openmdao.util.dumpdistmeta.get_dist_metadata(dist, dirname='')[source]

Retrieve metadata from within a distribution. Returns a dict.

openmdao.util.dumpdistmeta.get_metadata(path)[source]

Retrieve metadata from a file or directory specified by path, or from the name of a distribution that happens to be installed. path can be an installed egg, a zipped egg file, or a zipped or unzipped tar file of a python distutils or setuptools source distribution.

Returns a dict.

openmdao.util.dumpdistmeta.get_resource_files(dist, exList=None, incList=None, dirname='')[source]

A generator that retrieves resource file pathnames from within a distribution.

eggloader.py

Egg loading utilities.

openmdao.util.eggloader.load(instream, fmt=4, package=None, logger=None)[source]

Load object(s) from an input stream (or filename). If instream is a string that is not an existing filename or absolute path, then it is searched for using pkg_resources. Returns the root object.

instream: file or string
Stream or filename to load from.
fmt: int
Format of state data.
package: string
Name of package to use.
logger: Logger
Used for recording progress, etc.
openmdao.util.eggloader.load_from_eggfile(filename, entry_group, entry_name, logger=None, observer=None)[source]

Extracts files in egg to a subdirectory matching the saved object name. Then loads object graph state by invoking the given entry point. Returns the root object.

filename: string
Name of egg file.
entry_group: string
Name of group.
entry_name: string
Name of entry point in group.
logger: Logger
Used for recording progress, etc.
observer: callable
Called via an EggObserver.
openmdao.util.eggloader.load_from_eggpkg(package, entry_group, entry_name, instance_name=None, logger=None, observer=None)[source]

Load object graph state by invoking the given package entry point. Returns the root object.

package: string
Name of package to load from.
entry_group: string
Name of group.
entry_name: string
Name of entry point in group.
instance_name: string
Name for instance loaded.
logger: Logger
Used for recording progress, etc.
observer: callable
Called via an EggObserver.
openmdao.util.eggloader.check_requirements(required, logger=None, indent_level=0)[source]

Display requirements (if logger debug level enabled) and note conflicts. Returns a list of unavailable requirements.

required: list
List of package requirements.
logger: Logger
Used for recording progress, etc.
indent_level: int
Used to improve readability of log messages.

eggobserver.py

class openmdao.util.eggobserver.EggObserver(observer, logger)[source]

Bases: object

Provides a convenient API for calling an observer of egg operations. observer will be called with:

  • ('analyze', filename, -1, -1) during module analysis.
  • ('add', filename, file_fraction, byte_fraction) while writing files.
  • ('copy', filename, file_fraction, byte_fraction) while copying files.
  • ('extract', filename, file_fraction, byte_fraction) while extracting files.
  • ('complete', egg_name, 1, 1) when complete.
  • ('except', message, -1, -1) when an exception occurs.
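
A minimal observer might look like the following sketch (assuming the four values listed above are passed as separate positional arguments; returning False aborts the operation with a RuntimeError):

def my_observer(event, name, file_fraction, byte_fraction):
    if file_fraction >= 0:
        print('%s %s (%3.0f%% of files)' % (event, name, 100.0 * file_fraction))
    else:
        print('%s %s' % (event, name))
    return True    # keep going; False makes EggObserver raise RuntimeError
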
add(path, file_fraction, byte_fraction)[source]

Observe add of file. If observer returns False, raises RuntimeError.

path: string
Name of file being added.
file_fraction: float
Fraction of total files processed.
byte_fraction: float
Fraction of total bytes processed.
analyze(path)[source]

Observe analysis of file. If observer returns False, raises RuntimeError.

path: string
Name of file being analyzed.
complete(path)[source]

Observe operation complete.

path: string
Name of file saved/loaded.
copy(path, file_fraction, byte_fraction)[source]

Observe copy of file. If observer returns False, raises RuntimeError.

path: string
Name of file being copied.
file_fraction: float
Fraction of total files processed.
byte_fraction: float
Fraction of total bytes processed.
exception(msg)[source]

Observe exception.

msg: string
Exception message.
extract(path, file_fraction, byte_fraction)[source]

Observe extraction of file. If observer returns False, raises RuntimeError.

path: string
Name of file being extracted.
file_fraction: float
Fraction of total files processed.
byte_fraction: float
Fraction of total bytes processed.

eggsaver.py

Egg save utilities.

Note that pickle can’t save references to functions that aren’t defined at the top level of a module, and there doesn’t appear to be a viable workaround. Normally pickle won’t handle instance methods either, but there is code in place to work around that.

When saving to an egg, the module named __main__ changes when reloading. This requires finding the real module name and munging references to __main__. References to old-style class types can’t be restored correctly.

openmdao.util.eggsaver.save(root, outstream, fmt=4, proto=-1, logger=None)[source]

Save the state of root and its children to an output stream (or filename). If outstream is a string, then it is used as a filename. The format can be supplied in case something other than cPickle is needed. For the pickle formats, a proto of -1 means use the highest protocol.

root: object
The root of the object tree to save
outstream: file or string
Stream or filename to save to.
fmt: int
What format to save the object state in.
proto: int
What protocol to use when pickling.
logger: Logger
Used for recording progress, etc.
openmdao.util.eggsaver.save_to_egg(entry_pts, version=None, py_dir=None, src_dir=None, src_files=None, dst_dir=None, logger=None, observer=None, need_requirements=True)[source]

Save state and other files to an egg. Analyzes the objects saved for distribution dependencies. Modules not found in any distribution are recorded in an egg-info/openmdao_orphans.txt file. Also creates and saves loader scripts for each entry point.

entry_pts: list
List of (obj, obj_name, obj_group) tuples. The first of these specifies the root object and package name.
version: string
Defaults to a timestamp of the form ‘YYYY.MM.DD.HH.mm’.
py_dir: string
The (root) directory for local Python files. It defaults to the current directory.
src_dir: string
The root of all (relative) src_files.
dst_dir: string
The directory to write the egg in.
observer: callable
Will be called via an EggObserver intermediary.
need_requirements: bool
If True, distributions required by the egg will be determined. This can be set False if the egg is just being used for distribution within the local host.

Returns (egg_filename, required_distributions, orphan_modules).

eggwriter.py

Writes Python egg files. Supports what’s needed for saving and loading components/simulations.

openmdao.util.eggwriter.egg_filename(name, version)[source]

Returns name for egg file as generated by setuptools.

name: string
Must be alphanumeric.
version: string
Must be alphanumeric.
openmdao.util.eggwriter.write(name, version, doc, entry_map, src_files, distributions, modules, dst_dir, logger, observer=None, compress=True)[source]

Write egg in the manner of setuptools, with some differences:

  • Writes directly to the zip file, avoiding some intermediate copies.
  • Doesn’t compile any Python modules.
name: string
Must be an alphanumeric string.
version: string
Must be an alphanumeric string.
doc: string
Used for the Summary and Description entries in the egg’s metadata.
entry_map: dict
A pkg_resources EntryPoint map: a dictionary mapping group names to dictionaries mapping entry point names to EntryPoint objects.
src_files: list
List of non-Python files to include.
distributions: list
List of Distributions this egg depends on. It is used for the Requires entry in the egg’s metadata.
modules: list
List of module names not found in a distribution that this egg depends on. It is used for the Requires entry in the egg’s metadata and is also recorded in the ‘openmdao_orphans.txt’ resource.
dst_dir: string
The directory to write the egg to.
logger: Logger
Used for recording progress, etc.
observer: callable
Will be called via an EggObserver intermediary.

Returns the egg’s filename.

envirodump.py

Usage: python envirodump.py. A text file of environment information is written in the current directory.

TO DO:

  • get package versions (attempt to use pip freeze if available)
  • fix aliases problem
  • do a WHICH on compilers?
  • deal with sys.exit on bad Python version?
  • make docstrings consistent (done)
  • make exception handling consistent (done)

Reference: http://www.opensource.apple.com/source/python/python-3/python/Tools/freeze/modulefinder.py

openmdao.util.envirodump.callit(f, funct)[source]
openmdao.util.envirodump.envdump()[source]
openmdao.util.envirodump.find_compiler(f, name, ext='')[source]

This function will find compilers, print their paths, and reveal their versions.

openmdao.util.envirodump.get_aliases(f)[source]

Gets aliases on Unix/Mac

openmdao.util.envirodump.get_compiler_info(f)[source]

This function will call out for specific information on each compiler.

openmdao.util.envirodump.get_dump_time(f)[source]

Writes the time and date of the system dump

openmdao.util.envirodump.get_pkg_info(f)[source]

This function will list python packages found on sys.path

openmdao.util.envirodump.get_platform_info(f)[source]

This function will capture specific platform information, such as OS name, architecture, and linux distro.

openmdao.util.envirodump.get_python_info(f)[source]

This function will capture specific python information, such as version number, compiler, and build.

openmdao.util.envirodump.get_system_env(f)[source]

This function captures the values of a system’s environment variables at the time of the run, and presents them in alphabetical order.

openmdao.util.envirodump.system_info_dump(f)[source]

This function runs the rest of the utility. It creates an object in which to write and passes it to each separate function. Finally, it writes the value of that object into a dumpfile.

fileutil.py

Misc. file utility routines

class openmdao.util.fileutil.DirContext(destdir)[source]

Bases: object

Supports using the ‘with’ statement in place of try-finally for entering a directory, executing a block, then returning to the original directory.
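
A minimal sketch (the directory name is illustrative):

from openmdao.util.fileutil import DirContext

with DirContext('scratch'):          # chdir into 'scratch'
    with open('results.txt', 'w') as out:
        out.write('generated inside scratch\n')
# back in the original directory here, even if the block raised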

openmdao.util.fileutil.build_directory(dct, force=False, topdir='.')[source]

Create a directory structure based on the contents of a nested dict. The directory is created in the specified top directory, or in the current working directory if one isn’t specified. If a file being created already exists, a warning will be issued and the file will not be changed if force is False. If force is True, the file will be overwritten.

The structure of the dict is as follows: if the value at a key is a dict, then that key is used to create a directory. Otherwise, the key is used to create a file and the value stored at that key is written to the file. All keys must be relative names or a RuntimeError will be raised.
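
For example (a sketch; the names and contents are illustrative), a nested dict maps directly onto directories and files:

from openmdao.util.fileutil import build_directory

structure = {
    'mypkg': {                          # dict value -> directory
        '__init__.py': '',              # non-dict value -> file contents
        'data': {
            'readme.txt': 'placeholder\n',
        },
    },
}
build_directory(structure, topdir='.')  # force=False leaves existing files alone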

openmdao.util.fileutil.cleanup(*fnames, **kwargs)[source]

Delete the given files or directories if they exist.

openmdao.util.fileutil.copy(src, dest)[source]

Copy a file or directory.

openmdao.util.fileutil.expand_path(path)[source]
openmdao.util.fileutil.find_files(start, match=None, exclude=None, nodirs=True)[source]

Return filenames (using a generator).

start: str or list of str
Starting directory or list of directories.
match: str or predicate funct
Either a string containing a glob pattern to match or a predicate function that returns True on a match.
exclude: str or predicate funct
Either a string containing a glob pattern to exclude or a predicate function that returns True to exclude.
nodirs: bool
If False, return names of files and directories.

Walks all subdirectories below each specified starting directory.
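
A minimal sketch (the patterns are illustrative):

from openmdao.util.fileutil import find_files

# all Python sources below the current directory, skipping test modules
for path in find_files('.', match='*.py', exclude='test_*.py'):
    print(path)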

openmdao.util.fileutil.find_in_dir_list(fname, dirlist, exts=('', ))[source]

Search the given list of directories for the specified file. Return the absolute path of the file if found, or None otherwise.

fname: str
Base name of file.
dirlist: list of str
List of directory paths, relative or absolute.
exts: tuple of str
Tuple of extensions (including the ‘.’) to try appending to fname during the search, e.g., (‘.exe’, ‘.bat’).
openmdao.util.fileutil.find_in_path(fname, pathvar=None, sep=':', exts=('', ))[source]

Search for a given file in all of the directories given in the pathvar string. Return the absolute path to the file if found, None otherwise.

fname: str
Base name of file.
pathvar: str
String containing search paths. Defaults to $PATH.
sep: str
Delimiter used to separate paths within pathvar.
exts: tuple of str
Tuple of extensions (including the ‘.’) to try appending to fname during the search, e.g., (‘.exe’, ‘.bat’).
openmdao.util.fileutil.find_module(name, path=None)[source]

Return the pathname of the file corresponding to the given module name, or None if it can’t be found. If path is set, search in path for the file; otherwise, search in sys.path.

openmdao.util.fileutil.find_up(name, path=None)[source]

Search upward from the starting path (or the current directory) until the given file or directory is found. The given name is assumed to be a basename, not a path. Returns the absolute path of the file or directory if found, None otherwise.

name: str
Base name of the file or directory being searched for
path: str (optional)
Starting directory. If not supplied, current directory is used.
openmdao.util.fileutil.get_ancestor_dir(path, num_levels=1)[source]

Return the name of the directory that is ‘num_levels’ levels above the specified path. If num_levels is larger than the number of members in the path, then the root directory name will be returned.

openmdao.util.fileutil.get_cfg_file()[source]

Attempts to read /OpenMDAO-Framework/config/testhosts.cfg first, falling back to ~/.openmdao/testhosts.cfg.

openmdao.util.fileutil.get_module_path(fpath)[source]

Given a module filename, return its full Python name including enclosing packages, based on the existence of __init__.py files.

filewrap.py

A collection of utilities for file wrapping.

Note: This is a work in progress.

class openmdao.util.filewrap.FileParser(end_of_line_comment_char=None, full_line_comment_char=None)[source]

Bases: object

Utility to locate and read data from a file.

mark_anchor(anchor, occurrence=1)[source]

Marks the location of a landmark, which lets you describe data by relative position. Note that a forward search begins at the old anchor location. If you want to restart the search for the anchor at the file beginning, then call reset_anchor() before mark_anchor.

anchor: str
The text you want to search for.
occurrence: integer
Find nth instance of text; default is 1 (first). Use -1 to find last occurrence. Reverse searches always start at the end of the file no matter the state of any previous anchor.
reset_anchor()[source]

Resets anchor to the beginning of the file.

set_delimiters(delimiter)[source]

Lets you change the delimiter that is used to identify field boundaries.

delimiter: str
A string containing characters to be used as delimiters. The default value is ‘ ‘, which means that spaces and tabs are not taken as data but instead mark the boundaries. Note that the parser is smart enough to recognize characters within quotes as non-delimiters.
set_file(filename)[source]

Set the name of the file to be parsed.

filename: str
Name of the input file to be read.
transfer_2Darray(rowstart, fieldstart, rowend, fieldend=None)[source]

Grabs a 2D array of variables relative to the current anchor. Each line of data is placed in a separate row.

rowstart: integer
Row number to start, relative to the current anchor
fieldstart: integer
field number to start.
rowend: integer
row number to end relative to current anchor.
fieldend: integer (optional)
field number to end. If not specified, grabs all fields up to the end of the line.

If the delimiter is set to ‘columns’, then the values contained in fieldstart and fieldend should be the column number instead of the field number.

transfer_array(rowstart, fieldstart, rowend=None, fieldend=None)[source]

Grabs an array of variables relative to the current anchor.

rowstart: integer
Row number to start, relative to the current anchor
fieldstart: integer
field number to start
rowend: integer (optional)
row number to end. If not set, then only one row is grabbed.

Setting the delimiter to ‘columns’ elicits some special behavior from this method. Normally, the extraction process wraps around at the end of a line and continues grabbing each field at the start of the next line. When the delimiter is set to columns, the parameters (rowstart, fieldstart, rowend, fieldend) mark out a box, and all values in that box are retrieved. Note that standard whitespace is the secondary delimiter in this case.

transfer_keyvar(key, field, occurrence=1, rowoffset=0)[source]

Searches for a key relative to the current anchor and then grabs a field from that line.

field: integer
Which field to transfer. Field 0 is the key.
occurrence: integer
Find nth instance of text; default is 1 (first value field). Use -1 to find last occurrence. Position 0 is the key field, so it should not be used as a value for occurrence.
rowoffset: integer (optional)
Optional row offset from the occurrence of key. This can also be negative.

You can do the same thing with a call to mark_anchor and transfer_var. This function just combines them for convenience.

transfer_line(row)[source]

Returns a whole line, relative to current anchor.

row: integer
number of lines offset from anchor line (0 is anchor line). This can be negative.
transfer_var(row, field, fieldend=None)[source]

Grabs a single variable relative to the current anchor.

— If the delimiter is a set of chars (e.g., ', ') —

row: integer
number of lines offset from anchor line (0 is anchor line). This can be negative.
field: integer
which word in line to retrieve.

fieldend - IGNORED

— If the delimiter is “columns” —

row: integer
number of lines offset from anchor line (0 is anchor line). This can be negative.
field: integer
character position to start
fieldend: integer (optional)
position of last character to return. If omitted, the end of the line is used
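
Putting the pieces together, a typical read looks like the following sketch (the file name, anchor text, and offsets are illustrative):

from openmdao.util.filewrap import FileParser

parser = FileParser()
parser.set_file('results.out')            # output file to be parsed
parser.mark_anchor('FINAL RESULTS')       # first occurrence of the landmark
drag = parser.transfer_var(2, 3)          # 3rd field, 2 lines below the anchor
line = parser.transfer_line(1)            # whole line just below the anchor
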
class openmdao.util.filewrap.InputFileGenerator[source]

Bases: object

Utility to generate an input file from a template. Substitution of values is supported. Data is located with a simple API.

clearline(row)[source]

Replace the contents of a row with the newline character.

row: integer
row number to clear, relative to current anchor.
generate()[source]

Use the template file to generate the input file.

mark_anchor(anchor, occurrence=1)[source]

Marks the location of a landmark, which lets you describe data by relative position. Note that a forward search begins at the old anchor location. If you want to restart the search for the anchor at the file beginning, then call reset_anchor() before mark_anchor.

anchor: str
The text you want to search for.
occurrence: integer
Find nth instance of text; default is 1 (first). Use -1 to find last occurrence. Reverse searches always start at the end of the file no matter the state of any previous anchor.
reset_anchor()[source]

Resets anchor to the beginning of the file.

set_delimiters(delimiter)[source]

Lets you change the delimiter that is used to identify field boundaries.

delimiter: str
A string containing characters to be used as delimiters.
set_generated_file(filename)[source]

Set the name of the file that will be generated.

filename: str
Name of the input file to be generated.
set_template_file(filename)[source]

Set the name of the template file to be used. The template file is also read into memory when this method is called.

filename: str
Name of the template file to be used.
transfer_2Darray(value, row_start, row_end, field_start, field_end, sep=', ')[source]

Changes the values of a 2D array in the template relative to the current anchor. This method is specialized for 2D arrays, where each row of the array is on its own line.

value: ndarray
array of values to insert.
row_start: integer
Starting row for inserting the array. This is relative to the anchor, and can be negative.
row_end: integer
Final row for the array, relative to the anchor.
field_start: integer
starting field in the given row_start as denoted by delimiter(s).
field_end: integer
The final field the array uses in row_end. This is needed to determine whether the template is too small or too large.
sep: str (optional) (currently unsupported)
Separator to append between values if we go beyond the template
transfer_array(value, row_start, field_start, field_end, row_end=None, sep=', ')[source]

Changes the values of an array in the template relative to the current anchor. This should generally be used for one-dimensional or free form arrays.

value: float, integer, bool, str
array of values to insert.
row_start: integer
starting row for inserting the array. This is relative to the anchor, and can be negative.
field_start: integer
starting field in the given row_start as denoted by delimiter(s).
field_end: integer
The final field the array uses in row_end. This is needed to determine whether the template is too small or too large.
row_end: integer (optional)
Use if the array wraps to cover additional lines.
sep: str (optional)
Separator to use if we go beyond the template.
transfer_var(value, row, field)[source]

Changes a single variable in the template relative to the current anchor.

row: integer
Number of lines offset from the anchor line (0 is the anchor line). This can be negative.
field: integer
Which word in the line to replace, as denoted by delimiter(s).
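
A typical template substitution, as a sketch (file names, anchor text, and values are illustrative):

from openmdao.util.filewrap import InputFileGenerator

gen = InputFileGenerator()
gen.set_template_file('input.template')   # template is read into memory here
gen.set_generated_file('input.dat')       # file to be written
gen.mark_anchor('FLOW CONDITIONS')
gen.transfer_var(0.8, 1, 2)               # set 2nd field, 1 line below the anchor
gen.generate()                            # write the completed input file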

class openmdao.util.filewrap.ToFloat(expr, savelist=False)[source]

Bases: pyparsing.TokenConverter

Converter for PyParsing that is used to turn a token into a float.

postParse(instring, loc, tokenlist)[source]

Converter to make token into a float.

class openmdao.util.filewrap.ToInf(expr, savelist=False)[source]

Bases: pyparsing.TokenConverter

Converter for PyParsing that is used to turn a token into Python inf.

postParse(instring, loc, tokenlist)[source]

Converter to make token into Python inf.

class openmdao.util.filewrap.ToInteger(expr, savelist=False)[source]

Bases: pyparsing.TokenConverter

Converter for PyParsing that is used to turn a token into an int.

postParse(instring, loc, tokenlist)[source]

Converter to make token into an integer.

class openmdao.util.filewrap.ToNan(expr, savelist=False)[source]

Bases: pyparsing.TokenConverter

Converter for PyParsing that is used to turn a token into Python nan.

postParse(instring, loc, tokenlist)[source]

Converter to make token into Python nan.

filexfer.py

openmdao.util.filexfer.filexfer(src_server, src_path, dst_server, dst_path, mode='')[source]

Transfer a file from one place to another.

If src_server or dst_server is None, then the os module is used for the source or destination respectively. Otherwise the respective object must support open(), stat(), and chmod().

After the copy has completed, permission bits from stat() are set via chmod().

src_server: Proxy
Host to get file from.
src_path: string
Path to file on src_server.
dst_server: Proxy
Host to put file to.
dst_path: string
Path to file on dst_server.
mode: string
Mode settings for open(), not including ‘r’ or ‘w’.
openmdao.util.filexfer.pack_zipfile(patterns, filename, logger=None)[source]

Create ‘zip’ file filename of files in patterns. Returns (nfiles, nbytes).

patterns: list
List of fnmatch style patterns.
filename: string
Name of zip file to create.
logger: Logger
Used for recording progress.

Note

The code uses glob.glob() to process patterns. It does not check for the existence of any matches.

openmdao.util.filexfer.translate_newlines(filename)[source]

Translate the newlines of filename to the local standard.

filename: string
Name of the file to be translated. The translated file will replace this file.
openmdao.util.filexfer.unpack_zipfile(filename, logger=None, textfiles=None)[source]

Unpack ‘zip’ file filename. Returns (nfiles, nbytes).

filename: string
Name of zip file to unpack.
logger: Logger
Used for recording progress.
textfiles: list
List of fnmatch style patterns specifying which unpacked files are text files possibly needing newline translation. If not supplied, the first 4KB of each file is scanned for a zero byte; if none is found, the file is assumed to be a text file.

git.py

openmdao.util.git.download_github_tar(org_name, repo_name, version, dest='.')[source]

Pull a tarfile of a github repo and place it in the specified destination directory. ‘version’ can be a tag or a commit id.

grab_distrib.py

openmdao.util.grab_distrib.grab_distrib(req, index=None, dest='.', search_pypi=True)[source]

Downloads a distribution from the given package index(es) based on the given requirement string(s). Downloaded distributions are placed in the specified destination or the current directory if no destination is specified. If a distribution cannot be found in the given index(es), the Python Package Index will be searched as a last resort unless search_pypi is False. This does NOT install the distribution.

Requirements may be supplied as strings or as Requirement objects.

log.py

This is just a wrapper for the logging module. Messages can be routed to the console via enable_console(). If the file logger.cfg exists, it can be used to configure logging. See the Python documentation for logging.config for details. The example below is equivalent to calling enable_console():

[loggers]
keys=root

[handlers]
keys=consoleHandler

[formatters]
keys=consoleFormatter

[logger_root]
level=DEBUG
handlers=consoleHandler

[handler_consoleHandler]
class=StreamHandler
level=DEBUG
formatter=consoleFormatter
args=(sys.stderr,)

[formatter_consoleFormatter]
format=%(levelname)s %(name)s: %(message)s
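
The same effect can be had programmatically; a minimal sketch (the logger name is illustrative):

from openmdao.util.log import enable_console, getLogger

enable_console()                  # route log messages to the console
logger = getLogger('mycomp')
logger.info('starting analysis')
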
class openmdao.util.log.Logger(name, level=None)[source]

Bases: object

Pickle-able logger. Mostly a pass-through to a real logger.

critical(msg, *args, **kwargs)[source]

Log a critical message.

debug(msg, *args, **kwargs)[source]

Log a debug message.

error(msg, *args, **kwargs)[source]

Log an error message.

exception(msg, *args, **kwargs)[source]

Log an exception.

info(msg, *args, **kwargs)[source]

Log an information message.

log(level, msg, *args, **kwargs)[source]

Log a message at a specified level.

rename(name)[source]

Change name reported in log.

warning(msg, *args, **kwargs)[source]

Log a warning message.

level

Logging message level.

class openmdao.util.log.NullLogger[source]

Bases: object

Can be useful when no logger has been supplied to a routine. It produces no output.

critical(msg, *args, **kwargs)[source]

Log a critical message.

debug(msg, *args, **kwargs)[source]

Log a debug message.

error(msg, *args, **kwargs)[source]

Log an error message.

exception(msg, *args, **kwargs)[source]

Log an exception.

info(msg, *args, **kwargs)[source]

Log an information message.

log(level, msg, *args, **kwargs)[source]

Log a message at a specified level.

warning(msg, *args, **kwargs)[source]

Log a warning message.

openmdao.util.log.getLogger(name)[source]

Return the named logger.

openmdao.util.log.enable_console()[source]

Configure logging to receive log messages at the console.

openmdao.util.log.disable_console()[source]

Stop receiving log messages at the console.

openmdao.util.log.enable_trace(stream=None)[source]

Enable iteration tracing.

stream: file or string
File object or filename for trace output. Only used on first enable, default sys.stderr.
openmdao.util.log.disable_trace()[source]

Disable iteration tracing.

mkpseudo.py

openmdao.util.mkpseudo.mkpseudo(argv=None)[source]

A command line script (mkpseudo) points to this. It generates a source distribution package that’s empty aside from having a number of dependencies on other packages.

usage: make_pseudopkg <pkg_name> <version> [-d <dest_dir>] [-l <links_url>] [-r req1] ... [-r req_n]

If pkg_name contains dots, a namespace package will be built.

Required dependencies are specified using the same notation used by setuptools/easy_install/distribute/pip.

Note

If your required dependencies use the “<” or “>” characters, you must put the entire requirement in quotes to avoid misinterpretation by the shell.

namelist_util.py

Utilities for reading and writing Fortran namelists.

class openmdao.util.namelist_util.Card(name, value, is_comment=0)[source]

Bases: object

Data object that stores the value of a single card for a namelist.

class openmdao.util.namelist_util.Namelist(comp)[source]

Bases: object

Utility to ease the task of constructing a formatted output file.

add_comment(comment)[source]

Add a comment in the namelist.

comment: string
Comment text to be added. Text should include comment character if one is desired. (Note that a comment character isn’t always needed in a Namelist. It seems to figure out whether something is a comment without it.)
add_container(varpath='', skip=None)[source]

Add every variable in an OpenMDAO container to the namelist. This can be used if your component has containers of variables.

varpath: string
dotted path of container in the data hierarchy
skip: list of str
list of variables to skip printing to the file
add_group(name)[source]

Add a new group to the namelist. Any variables added after this are added to this new group.

name: string
Group name to be added.
add_newvar(name, value)[source]

Add a new variable to the namelist.

name: string
Name of the variable to be added.
value: int, float, string, ndarray, list, bool
Value of the variable to be added.
add_var(varpath)[source]

Add an openmdao variable to the namelist.

varpath: string
Dotted path of the variable (e.g., comp1.container1.var1).

generate()[source]

Generates the input file. This should be called after all cards and groups are added to the namelist.
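
A minimal write sketch (the component stand-in, group, and variable names are illustrative; Namelist takes the owning OpenMDAO component as its constructor argument):

from openmdao.util.namelist_util import Namelist

class MyComp(object):                    # stand-in for an OpenMDAO component
    def write_input(self):
        nml = Namelist(self)
        nml.set_filename('case.nml')
        nml.set_title('Demo case')
        nml.add_group('FLOWCONDITIONS')
        nml.add_newvar('mach', 0.8)
        nml.add_newvar('alpha', 2.5)
        nml.generate()                   # write case.nml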

load_model(rules=None, ignore=None, single_group=-1)[source]

Loads the current deck into an OpenMDAO component.

rules: dict of lists of strings (optional)
An optional dictionary of rules can be passed if the component has a hierarchy of containers for its input variables. If no rules dictionary is passed, load_model will attempt to find each namelist variable in the top level of the model hierarchy.
ignore: list of strings (optional)
List of variable names that can safely be ignored.
single_group: integer (optional)

Group id number to use for processing one single namelist group. Useful if extra processing is needed, or if multiple groups have the same name.

Returns a tuple containing the following values: (empty_groups, unlisted_groups, unlinked_vars). These need to be examined after calling load_model to make sure you loaded every variable into your model.

empty_groups: ordereddict( integer : string )
Names and ID number of groups that don’t have cards. This includes strings found at the top level that aren’t comments; these need to be processed by your wrapper to determine how the information fits into your component’s variable hierarchy.
unlisted_groups: ordereddict( integer : string )
This dictionary includes the names and ID number of groups that have variables that couldn’t be loaded because the group wasn’t mentioned in the rules dictionary.

unlinked_vars: list containing all variable names that weren’t found in the component.

parse_file()[source]

Parses an existing namelist file and creates a deck of cards to hold the data. After this is executed, you need to call the load_model() method to extract the variables from this data structure.

set_filename(filename)[source]

Set the name of the file that will be generated or parsed.

filename: string
Name of the file to be written.
set_title(title)[source]

Sets the title for the namelist. Note that a title is not required.

title: string
The title card in the namelist - generally optional.
class openmdao.util.namelist_util.ToBool(expr, savelist=False)[source]

Bases: pyparsing.TokenConverter

Converter for PyParsing that is used to turn a token into a Boolean.

postParse(instring, loc, tokenlist)[source]

Converter to make token into a bool.

network.py

Routines to help out with obtaining network information.

openmdao.util.network.get_unused_ip_port()[source]

Find an unused IP port. (Reference: http://code.activestate.com/recipes/531822-pick-unused-port/) Note: use the port before it is taken by some other process!

parse_phoenixwrapper.py

Parses the variable definition section of a Phoenix Integration ModelCenter component wrapper and generates an OpenMDAO component stub.

openmdao.util.parse_phoenixwrapper.parse_phoenixwrapper(infile, outfile, compname)[source]

Generates a dummy component given a Phoenix Integration Modelcenter script wrapper. The first section of this wrapper is parsed, and the appropriate variables and containers are placed in the new OpenMDAO component.

infile - ModelCenter scriptwrapper.

outfile - File containing new OpenMDAO component skeleton.

compname - Name for new component.

publickey.py

Support for generation, use, and storage of public/private key pairs. The pk_encrypt(), pk_decrypt(), pk_sign(), and pk_verify() functions provide a thin interface over Crypto.PublicKey.RSA methods for easier use and to work around some issues found with some keys read from ssh id_rsa files.

openmdao.util.publickey.decode_public_key(text)[source]

Return public key from text representation.

text: string
base64 encoded key data.
openmdao.util.publickey.encode_public_key(key)[source]

Return base64 text representation of public key key.

key: public key
Public part of key pair.
openmdao.util.publickey.get_key_pair(user_host, logger=None, overwrite_cache=False, ignore_ssh=False)[source]

Returns RSA key containing both public and private keys for the user identified in user_host. This can be an expensive operation, so we avoid generating a new key pair whenever possible. If ~/.ssh/id_rsa exists and is private, that key is returned.

user_host: string
Format user@host.
logger: logging.Logger
Used for debug messages.
overwrite_cache: bool
If True, a new key is generated and forced into the cache of existing known keys. Used for testing.
ignore_ssh: bool
If True, ignore any existing ssh id_rsa key file. Used for testing.

Note

To avoid unnecessary key generation, the public/private key pair for the current user is stored in the private file ~/.openmdao/keys. On Windows this requires the pywin32 extension. Also, the public key is stored in ssh form in ~/.openmdao/id_rsa.pub.

openmdao.util.publickey.is_private(path)[source]

Return True if path is accessible only by ‘owner’.

path: string
Path to file or directory to check.

Note

On Windows this requires the pywin32 extension.

openmdao.util.publickey.make_private(path)[source]

Make path accessible only by ‘owner’.

path: string
Path to file or directory to be made private.

Note

On Windows this requires the pywin32 extension.

openmdao.util.publickey.pk_decrypt(encrypted, private_key)[source]

Return encrypted decrypted by private_key as a string.

encrypted: list
Chunks of encrypted data returned by pk_encrypt().
private_key: Crypto.PublicKey.RSA
Private portion of key pair.
openmdao.util.publickey.pk_encrypt(data, public_key)[source]

Return list of chunks of data encrypted by public_key.

data: string
The message to be encrypted.
public_key: Crypto.PublicKey.RSA
Public portion of key pair.
openmdao.util.publickey.pk_sign(hashed, private_key)[source]

Return signature for hashed using private_key.

hashed: string
A hash value of the data to be signed.
private_key: Crypto.PublicKey.RSA
Private portion of key pair.
openmdao.util.publickey.pk_verify(hashed, signature, public_key)[source]

Verify hashed based on signature and public_key.

hashed: string
A hash for the data that is signed.
signature: tuple
Value returned by pk_sign().
public_key: Crypto.PublicKey.RSA
Public portion of key pair.
openmdao.util.publickey.read_authorized_keys(filename=None, logger=None)[source]

Return dictionary of public keys, indexed by user, read from filename. The file must be in ssh format, and only RSA keys are processed. If the file is not private, then no keys are returned.

filename: string
File to read from. The default is ~/.ssh/authorized_keys.
logger: logging.Logger
Used for log messages.
openmdao.util.publickey.write_authorized_keys(allowed_users, filename, logger=None)[source]

Write allowed_users to filename in ssh format. The file will be made private if supported on this platform.

allowed_users: dict
Dictionary of public keys indexed by user.
filename: string
File to write to.
logger: logging.Logger
Used for log messages.

shellproc.py

exception openmdao.util.shellproc.CalledProcessError(returncode, cmd, errormsg)[source]

Bases: subprocess.CalledProcessError

subprocess.CalledProcessError plus errormsg attribute.

class openmdao.util.shellproc.ShellProc(args, stdin=None, stdout=None, stderr=None, env=None)[source]

Bases: subprocess.Popen

A slight modification to subprocess.Popen. If args is a string then the shell argument is set True. Updates a copy of os.environ with env, and opens files for any stream which is a basestring.

args: string or list
If a string, then this is the command line to execute and the subprocess.Popen shell argument is set True. Otherwise this is a list of arguments, the first is the command to execute.
stdin, stdout, stderr: string, file, or int
Specify handling of corresponding stream. If a string, a file of that name is opened. Otherwise see the subprocess documentation.
env: dict
Environment variables for the command.
close_files()[source]

Closes files that were implicitly opened.

error_message(return_code)[source]

Return the error message for return_code. The error messages are derived from the operating system definitions, but some programs don’t necessarily return exit codes conforming to these definitions.

return_code: int
Return code from poll().
terminate(timeout=None)[source]

Stop child process. If timeout is specified then wait() will be called to wait for the process to terminate.

timeout: float (seconds)
Maximum time to wait for the process to stop. A value of zero implies an infinite maximum wait.
wait(poll_delay=0.0, timeout=0.0)[source]

Polls for command completion or timeout. Closes any files implicitly opened. Returns (return_code, error_msg).

poll_delay: float (seconds)
Time to delay between polling for command completion. A value of zero uses an internal default.
timeout: float (seconds)
Maximum time to wait for command completion. A value of zero implies an infinite maximum wait.
openmdao.util.shellproc.call(args, stdin=None, stdout=None, stderr=None, env=None, poll_delay=0.0, timeout=0.0)[source]

Run command with arguments. Returns (return_code, error_msg).

args: string or list
If a string, then this is the command line to execute and the subprocess.Popen shell argument is set True. Otherwise this is a list of arguments, the first is the command to execute.
stdin, stdout, stderr: string, file, or int
Specify handling of corresponding stream. If a string, a file of that name is opened. Otherwise see the subprocess documentation.
env: dict
Environment variables for the command.
poll_delay: float (seconds)
Time to delay between polling for command completion. A value of zero uses an internal default.
timeout: float (seconds)
Maximum time to wait for command completion. A value of zero implies an infinite maximum wait.
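
A minimal sketch (the command line and file names are illustrative):

from openmdao.util.shellproc import call

return_code, error_msg = call('mysolver input.dat', stdout='solver.out',
                              stderr='solver.err', timeout=120.0)
if return_code:
    print('solver failed (%s): %s' % (return_code, error_msg))
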
openmdao.util.shellproc.check_call(args, stdin=None, stdout=None, stderr=None, env=None, poll_delay=0.0, timeout=0.0)[source]

Run command with arguments. If non-zero return_code, raises CalledProcessError.

args: string or list
If a string, then this is the command line to execute and the subprocess.Popen shell argument is set True. Otherwise this is a list of arguments, the first is the command to execute.
stdin, stdout, stderr: string, file, or int
Specify handling of corresponding stream. If a string, a file of that name is opened. Otherwise see the subprocess documentation.
env: dict
Environment variables for the command.
poll_delay: float (seconds)
Time to delay between polling for command completion. A value of zero uses an internal default.
timeout: float (seconds)
Maximum time to wait for command completion. A value of zero implies an infinite maximum wait.

stream.py

class openmdao.util.stream.Stream(file_obj, binary=False, big_endian=False, single_precision=False, integer_8=False, unformatted=False, recordmark_8=False)[source]

Bases: object

Wrapper of standard Python file object. Supports reading/writing int and float arrays in various formats.

file_obj: file
File object opened for reading or writing.
binary: bool
If True, the data is in binary, not text, form.
big_endian: bool
If True, the data bytes are in ‘big-endian’ order. Only meaningful if binary.
single_precision: bool
If True, floating-point data is 32 bits, not 64. Only meaningful if binary.
integer_8: bool
If True, integer data is 64 bits, not 32. Only meaningful if binary.
unformatted: bool
If True, the data is surrounded by Fortran record length markers. Only meaningful if binary.
recordmark_8: bool
If True, the record length markers are 64 bits, not 32. Only meaningful if unformatted.
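
A read sketch for a Fortran unformatted file (the file name and record layout are illustrative):

from openmdao.util.stream import Stream

with open('grid.bin', 'rb') as inp:
    stream = Stream(inp, binary=True, unformatted=True, big_endian=True)
    npts = stream.read_int(full_record=True)                  # header record
    xyz = stream.read_floats((npts, 3), order='Fortran',
                             full_record=True)                # coordinate record
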
close()[source]

Close underlying file.

read_float(full_record=False)[source]

Returns next float.

full_record: bool
If True, then read surrounding recordmarks. Only meaningful if unformatted.
read_floats(shape, order='C', full_record=False)[source]

Returns floats as a numpy array of shape.

shape: tuple(int)
Dimensions of returned array.
order: string
If ‘C’, the data is in row-major order. If ‘Fortran’, the data is in column-major order.
full_record: bool
If True, then read surrounding recordmarks. Only meaningful if unformatted.
read_int(full_record=False)[source]

Returns next integer.

full_record: bool
If True, then read surrounding recordmarks. Only meaningful if unformatted.
read_ints(shape, order='C', full_record=False)[source]

Returns integers as a numpy array of shape.

shape: tuple(int)
Dimensions of returned array.
order: string
If ‘C’, the data is in row-major order. If ‘Fortran’, the data is in column-major order.
full_record: bool
If True, then read surrounding recordmarks. Only meaningful if unformatted.
read_recordmark()[source]

Returns value of next recordmark.

reclen_floats(count)[source]

Returns record length for count floats.

count: int
Number of floats in record.
reclen_ints(count)[source]

Returns record length for count ints.

count: int
Number of ints in record.
write_array(data, order='C', fmt='%s', sep=' ', linecount=0)[source]

Writes array as text.

data: numpy.ndarray
Data array.
order: string
If ‘C’, the data is written in row-major order. If ‘Fortran’, the data is written in column-major order.
fmt: string
Format specifier for each item.
sep: string
Separator between items.
linecount: int
If > zero, then at most linecount values are written per line.
write_float(value, fmt='%.16g', sep='', full_record=False)[source]

Writes a float.

value: float
Value to be written.
fmt: string
Format to use when writing as text.
sep: string
Appended to stream after writing value.
full_record: bool
If True, then write surrounding recordmarks. Only meaningful if unformatted.
write_floats(data, order='C', fmt='%.16g', sep=' ', linecount=0, full_record=False)[source]

Writes a float array.

data: numpy.ndarray
Float data array.
order: string
If ‘C’, the data is written in row-major order. If ‘Fortran’, the data is written in column-major order.
fmt: string
Format specifier for each item.
sep: string
Separator between items.
linecount: int
If > zero, then at most linecount values are written per line.
full_record: bool
If True, then write surrounding recordmarks. Only meaningful if unformatted.
write_int(value, fmt='%d', sep='', full_record=False)[source]

Writes an integer.

value: int
Value to be written.
fmt: string
Format to use when writing as text.
sep: string
Appended to stream after writing value.
full_record: bool
If True, then write surrounding recordmarks. Only meaningful if unformatted.
write_ints(data, order='C', fmt='%d', sep=' ', linecount=0, full_record=False)[source]

Writes an integer array.

data: numpy.ndarray
Integer data array.
order: string
If ‘C’, the data is written in row-major order. If ‘Fortran’, the data is written in column-major order.
fmt: string
Format specifier for each item.
sep: string
Separator between items.
linecount: int
If > zero, then at most linecount values are written per line.
full_record: bool
If True, then write surrounding recordmarks. Only meaningful if unformatted.
write_recordmark(length)[source]

Writes recordmark.

length: int
Length of record (bytes).

testutil.py

Utilities for the OpenMDAO test process.

openmdao.util.testutil.assertRaisesError(test_case_instance, code, err_type, err_msg)[source]

Determine that code raises err_type with err_msg.

openmdao.util.testutil.assert_raises(test_case, code, globals, locals, exception, msg, use_exec=False)[source]

Determine that code raises exception with msg.

test_case: unittest.TestCase
TestCase instance used for assertions.
code: string
Statement to be executed.
globals, locals: dict
Arguments for eval().
exception: Exception
Exception that should be raised.
msg: string
Expected message from exception.
use_exec: bool
If True, then evaluate code with exec() rather than eval(). This is necessary for testing statements that are not expressions.
openmdao.util.testutil.assert_rel_error(test_case, actual, desired, tolerance)[source]

Determine that the relative error between actual and desired is within tolerance. If desired is zero then use absolute error.

test_case: unittest.TestCase
TestCase instance used for assertions.
actual: float
The value from the test.
desired: float
The value expected.
tolerance: float
Maximum relative error (actual - desired) / desired.
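
A minimal sketch inside a test case (the values are illustrative):

import unittest
from openmdao.util.testutil import assert_rel_error

class TestAero(unittest.TestCase):
    def test_lift_coefficient(self):
        actual = 0.50012
        # passes if |(actual - 0.5) / 0.5| <= 0.001
        assert_rel_error(self, actual, 0.5, 0.001)
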
openmdao.util.testutil.find_python()[source]

Return path to the OpenMDAO python command

openmdao.util.testutil.make_protected_dir()[source]

Returns the absolute path of an inaccessible directory. Files cannot be created in it, it cannot be entered via os.chdir(), etc. Not supported on Windows.

typegroups.py

view_docs.py

openmdao.util.view_docs.view_docs(browser=None)[source]

A script (openmdao docs) points to this. It just pops up a browser to view the openmdao Sphinx docs. If the docs are not already built, it builds them before viewing; but if the docs already exist, it’s not smart enough to rebuild them if they’ve changed since the last build.

If this is run from a non-developer install, i.e., there is no local copy of the docs, it just looks for the docs on the openmdao.org website.

wrkpool.py

class openmdao.util.wrkpool.WorkerPool[source]

Bases: object

Pool of worker threads; grows as necessary.

static cleanup()[source]

Cleanup resources (worker threads).

static get(one_shot=False)[source]

Get a worker queue from the pool. Work requests should be of the form:

(callable, *args, **kwargs, reply_queue)

Work replies are of the form:

(queue, retval, exc, traceback)

one_shot: bool
If True, the worker will self-release after processing one request.
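
A minimal sketch (assuming the request tuple is laid out as (callable, args, kwargs, reply_queue), matching the description above):

import Queue    # Python 2 standard library queue
from openmdao.util.wrkpool import WorkerPool

def add(a, b):
    return a + b

reply_q = Queue.Queue()
worker_q = WorkerPool.get(one_shot=True)       # worker releases itself afterwards
worker_q.put((add, (2, 3), {}, reply_q))
queue, retval, exc, traceback = reply_q.get()  # reply: (queue, retval, exc, traceback)
print(retval)                                  # -> 5
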
static get_instance()[source]

Return singleton instance.

static release(queue)[source]

Release a worker queue back to the pool.

queue: Queue
Worker queue previously obtained from get().