We are excited to announce that Enthought is undertaking a multi-year project to bring the strengths of NumPy to high-performance distributed computing. The goal is to provide a more intuitive and user-friendly interface to both distributed array computing and high-performance parallel libraries. We will release the project as open source, providing another tool in our toolbox for data processing, modeling, and simulation in the realm of big data. The project is funded under a Phase II grant from the DOE SBIR program [0] [1], and is headed by Kurt Smith.
The project will develop three packages designed to work in concert to provide a high-performance computing framework. To maximize interoperability and extensibility, the project will design a distributed array protocol akin to the Python PEP 3118 buffer protocol [2], making it possible for other libraries and projects to easily interoperate with ODIN and PyTrilinos distributed data structures. The protocol will allow interoperability with the Global Arrays and Global Arrays in NumPy (GAIN) projects based at Pacific Northwest National Laboratory (PNNL). Computational scientist Jeff Daily, who leads GAIN development at PNNL, will help in this effort.
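As a rough illustration of what such a protocol enables, the sketch below shows how an exporting library might describe its local block of a distributed array so that a consumer can interpret it without copying data. The protocol is still being designed, so the method name `__distarray__` and the dictionary keys here are assumptions for illustration only, not the final specification.

```python
# Hypothetical sketch of a distributed array protocol export.
# The method name and dictionary keys are assumptions; the real
# protocol is still under design.
import numpy as np

class LocalBlock:
    """One process's local piece of a distributed array."""

    def __init__(self, local_data, global_shape, offset):
        self.local_data = np.ascontiguousarray(local_data)
        self.global_shape = global_shape   # shape of the full, global array
        self.offset = offset               # where this block starts globally

    def __distarray__(self):
        # Analogous in spirit to PEP 3118: expose the raw local buffer
        # plus enough metadata for a consumer to reconstruct the global
        # decomposition without copying the data.
        return {
            "buffer": self.local_data,          # zero-copy local block
            "global_shape": self.global_shape,  # e.g. (1000, 1000)
            "offset": self.offset,              # e.g. (250, 0) for this rank
        }
```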
The three components are described in more detail below.
Optimized Distributed NumPy (ODIN)
ODIN provides a NumPy-like interface for distributed array computations. It provides
- distributed parallel computing on array expressions;
- specification of an array’s domain decomposition, whether for processing or for storage across files, with sensible defaults;
- specification of the processes involved in specific array computations;
- features for specifying the locality of computations, whether global or local;
- support for out-of-core computations;
- interoperability with existing NumPy-based packages.
Expressions involving ODIN arrays will allow users to perform sophisticated array computations in a distributed fashion, including basic array computations, array slicing and fancy-indexing computations, finite-difference-style computations, and more. ODIN's road map includes array expression analysis and loop fusion to optimize distributed computations. ODIN will provide built-in capabilities for distributed UFunc calculations as well as reduction- and accumulation-type computations. ODIN is designed to be extensible and adaptable to existing libraries, and will allow domain experts to make their distributed algorithms easily available to a much wider audience based on a common platform. The package builds on existing technologies and takes inspiration from several distributed array libraries and languages, including Chapel, X10, Fortress, High Performance Fortran, and Julia. ODIN will interoperate with the Trilinos suite of HPC solvers via PyTrilinos, and will provide a high-level interface that makes Trilinos and PyTrilinos easier to use.
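To give a feel for the intended programming model, here is a minimal sketch of the kind of NumPy-style expression ODIN aims to distribute. The `odin` module, the `dist` keyword, and the function names in the commented lines are assumptions made for illustration; the released API may differ.

```python
# Illustration of an ODIN-style distributed expression.
# The plain-NumPy version runs today on a single process; the commented
# lines sketch a hypothetical distributed equivalent.
import numpy as np
# import odin  # not yet released; shown for illustration only

def fused_expression(a, b):
    """Evaluate sin(a) + b**2 element-wise.

    With ODIN, `a` and `b` would be distributed arrays and the same
    expression would execute block-by-block on each process, with
    expression analysis and loop fusion avoiding temporaries.
    """
    return np.sin(a) + b ** 2

# Plain NumPy today (single process):
a = np.linspace(0.0, 1.0, 1_000_000)
b = np.linspace(1.0, 2.0, 1_000_000)
result = fused_expression(a, b)

# Hypothetical ODIN equivalent (illustrative only):
# a = odin.linspace(0.0, 1.0, 1_000_000, dist='b')   # block-distributed
# b = odin.linspace(1.0, 2.0, 1_000_000, dist='b')
# result = odin.sin(a) + b ** 2                       # evaluated in parallel
```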
ODIN will be tested on the Texas Advanced Computing Center’s Stampede supercomputer, and scaling tests will be run on Stampede’s Intel Xeon Phi coprocessors.
PyTrilinos improvements and enhancements
Trilinos is a suite of dozens of HPC packages that provide access to state-of-the-art distributed solvers, and PyTrilinos is the Python interface to several of the Trilinos packages. The Trilinos packages, developed primarily at Sandia National Laboratories, allow scientists to solve partial differential equations and large linear, nonlinear, and optimization problems in parallel, from desktops to distributed clusters to supercomputers, with active research on modern architectures such as GPUs. Bill Spotz, senior research scientist at Sandia, will lead the PyTrilinos portion of the project, improving and expanding the PyTrilinos interfaces to make Trilinos easier to use.
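To give a flavor of the existing PyTrilinos interface, here is a minimal sketch, based on the standard Epetra/AztecOO examples, that assembles and solves a distributed diagonal system. Exact call signatures can vary between Trilinos releases, so treat this as illustrative rather than definitive.

```python
# Minimal PyTrilinos sketch: assemble and solve the distributed system
# A x = b with Epetra and AztecOO. Run under MPI, e.g.
#   mpiexec -n 4 python this_script.py
from PyTrilinos import Epetra, AztecOO

comm = Epetra.PyComm()                  # MPI communicator (or serial fallback)
n = 1000                                # global problem size
row_map = Epetra.Map(n, 0, comm)        # rows distributed across processes

A = Epetra.CrsMatrix(Epetra.Copy, row_map, 1)
for gid in row_map.MyGlobalElements():  # only the rows owned by this process
    A.InsertGlobalValues(gid, [2.0], [gid])   # diagonal matrix: A[i, i] = 2
A.FillComplete()

x = Epetra.Vector(row_map)              # solution vector, initialized to zero
b = Epetra.Vector(row_map)
b.PutScalar(1.0)                        # right-hand side of all ones

solver = AztecOO.AztecOO(A, x, b)       # iterative Krylov solver
solver.Iterate(100, 1e-8)               # max iterations, convergence tolerance
```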
Seamless
Seamless provides functionality to speed up Python via JIT compilation and makes integration between Python and other languages nearly effortless. Built on LLVM, Seamless uses LLVM’s introspection capabilities to wrap existing C and C++ (and eventually Fortran) libraries with minimal code duplication, combining many of the best features of Cython, ctypes, SWIG, and PyPy.
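Since Seamless is still in early development, the workflow below is a purely hypothetical illustration of the two use cases described above; the `seamless` module, `jit` decorator, and `wrap` call are assumptions, not a released API.

```python
# Purely hypothetical illustration of the intended Seamless workflow;
# the `seamless` module, `jit` decorator, and `wrap` function are
# assumptions, not a released API.
# import seamless

# 1. JIT-compile a numeric Python function to machine code via LLVM:
# @seamless.jit
def squared_distance(x, y):
    total = 0.0
    for xi, yi in zip(x, y):
        d = xi - yi
        total += d * d
    return total

# 2. Wrap an existing C library without hand-written glue code
#    (library and header names are made up for this sketch):
# fastlib = seamless.wrap("libfast.so", header="fast.h")
# result = fastlib.compute(3.14)
```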
We are very excited to have the opportunity to work on this Python HPC framework, and look forward to working with the Scientific Python community to move NumPy into the next age of distributed scientific computing. We will post project progress and updates on Enthought’s website. We would like to thank the Department of Energy’s SBIR program for the opportunity to develop these packages, and the collaborators and industry partners whose support made this possible.
[0] http://science.energy.gov/sbir/awards/
[1] http://science.energy.gov/~/media/sbir/excel/2013_Phase_II_Release_1.xlsx
[2] http://www.python.org/dev/peps/pep-3118/