We are excited to announce that Enthought is undertaking a multi-year project to bring the strengths of NumPy to high-performance distributed computing. The goal is to provide a more intuitive and user-friendly interface to both distributed array computing and high-performance parallel libraries. We will release the project as open source, adding another tool to our toolbox for data processing, modeling, and simulation in the realm of big data. The project is funded under a Phase II grant from the DOE SBIR program [0] [1] and is headed by Kurt Smith.
The project will develop three packages designed to work in concert as a high-performance computing framework. To maximize interoperability and extensibility, the project will design a distributed array protocol, akin to Python's PEP 3118 buffer protocol [2], making it possible for other libraries and projects to interoperate easily with ODIN and PyTrilinos distributed data structures. The protocol will also allow interoperability with the Global Arrays and Global Arrays in NumPy (GAIN) projects based at Pacific Northwest National Laboratory (PNNL). Computational scientist Jeff Daily, who leads GAIN development at PNNL, will help in this effort.
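As a rough analogy: where PEP 3118 lets two libraries share a block of memory by exchanging a common description of its shape, strides, and type, a distributed array protocol would exchange a description of how a global array is partitioned across processes. The sketch below is purely illustrative; the method name `__distarray__` and the metadata keys are our assumptions for this post, not the finalized protocol.

```python
import numpy as np

class DistributedArraySketch:
    """One process's view of its local block of a global array.

    Hypothetical sketch only: the real protocol's names and fields
    are still being designed.
    """

    def __init__(self, local_block, global_shape, grid_rank, grid_shape):
        self.local_block = np.asarray(local_block)  # local data lives here
        self.global_shape = tuple(global_shape)     # shape of the global array
        self.grid_rank = tuple(grid_rank)           # this process's grid position
        self.grid_shape = tuple(grid_shape)         # shape of the process grid

    def __distarray__(self):
        # By analogy with PEP 3118: a version tag, the local buffer (itself
        # exportable via the buffer protocol), and per-dimension metadata
        # describing how the global index space is partitioned.
        return {
            "__version__": "0.1-sketch",
            "buffer": self.local_block,
            "dim_data": tuple(
                {
                    "dist_type": "b",                      # 'b' = block distribution
                    "size": self.global_shape[dim],        # global extent
                    "proc_grid_size": self.grid_shape[dim],
                    "proc_grid_rank": self.grid_rank[dim],
                }
                for dim in range(len(self.global_shape))
            ),
        }
```

A consumer such as GAIN could then construct its own distributed structure from this description without funneling data through a single process, just as PEP 3118 consumers avoid copies for local arrays.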
The three components are described in more detail below.
Optimized Distributed NumPy (ODIN)
ODIN provides a NumPy-like interface for distributed array computations. Its features (see the sketch following this list) include
- distributed parallel computing on array expressions;
- specification of an array’s domain decomposition, whether for processing or for storage across files, with sensible defaults;
- specification of the processes involved in specific array computations;
- features for specifying the locality of computations, whether global or local;
- support for out-of-core computations;
- interoperability with existing NumPy-based packages.
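To make the list above concrete, here is a minimal mpi4py/NumPy sketch of the manual bookkeeping, explicit block decomposition, process-local computation, and a global reduction, that ODIN's NumPy-like interface is intended to subsume. (mpi4py is used purely for illustration; it is not part of ODIN.)

```python
import numpy as np
from mpi4py import MPI

# Run with, e.g., `mpiexec -n 4 python odin_motivation.py`.
comm = MPI.COMM_WORLD
rank = comm.Get_rank()
nprocs = comm.Get_size()

# Block decomposition of a global index space of size N: each process owns
# one contiguous slab. ODIN would let you specify (or default) this.
N = 1_000_000
counts = [N // nprocs + (1 if r < N % nprocs else 0) for r in range(nprocs)]
start = sum(counts[:rank])
stop = start + counts[rank]

# Each process constructs and computes on only its local block.
local_x = (np.arange(start, stop, dtype=np.float64) + 0.5) / N
local_sum = np.sum(np.sin(local_x) ** 2)

# A global reduction assembles the distributed result on every process.
total = comm.allreduce(local_sum, op=MPI.SUM)
if rank == 0:
    print("global sum:", total)
```

The intent is that the decomposition, locality, and reduction above collapse to a few NumPy-like lines under ODIN.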
Expressions involving ODIN arrays will allow users to perform sophisticated array computations in a distributed fashion, including basic array arithmetic, slicing and fancy-indexing computations, finite-difference-style computations, and more. ODIN's road map includes array expression analysis and loop fusion to optimize distributed computations. ODIN will provide built-in capabilities for distributed ufunc calculations as well as reduction- and accumulation-type computations. ODIN is designed to be extensible and adaptable to existing libraries, allowing domain experts to make their distributed algorithms easily available to a much wider audience on a common platform. The package will build on existing technologies, taking inspiration from several distributed array libraries and languages already in existence, including Chapel, X10, Fortress, High Performance Fortran, and Julia. ODIN will interoperate with the Trilinos suite of HPC solvers via PyTrilinos and will provide a high-level interface that makes Trilinos and PyTrilinos easier to use.
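For readers unfamiliar with loop fusion, the single-node numexpr package illustrates the payoff (this comparison is our illustration, not part of ODIN): evaluated eagerly, each NumPy operator allocates a temporary array and makes its own pass over memory, whereas a fused evaluation compiles the whole expression into one loop. In a distributed setting, fusion can additionally eliminate intermediate communication and synchronization steps.

```python
import numpy as np
import numexpr as ne

a = np.random.rand(5_000_000)
b = np.random.rand(5_000_000)

# Eager NumPy: four operators, each allocating a full-size temporary and
# streaming over the data separately.
eager = 2.0 * a + 3.0 * b - a * b

# Fused: numexpr parses the expression string and evaluates it in a single
# pass over a and b, with no intermediate temporaries.
fused = ne.evaluate("2.0 * a + 3.0 * b - a * b")

assert np.allclose(eager, fused)
```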
ODIN will be tested on the Texas Advanced Computing Center's Stampede supercomputer, and scaling tests will be run on Stampede's Intel Xeon Phi coprocessors.
PyTrilinos improvements and enhancements
Trilinos is a suite of dozens of HPC packages that provide access to state-of-the-art distributed solvers; PyTrilinos is the Python interface to several of them. The Trilinos packages, developed primarily at Sandia National Laboratories, allow scientists to solve partial differential equations and large linear, nonlinear, and optimization problems in parallel, from desktops to distributed clusters to supercomputers, with active research on modern architectures such as GPUs. Bill Spotz, a senior research scientist at Sandia, will lead the PyTrilinos portion of the project, improving and expanding the PyTrilinos interfaces to make Trilinos easier to use.
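As a taste of the current PyTrilinos style this work will build on, the sketch below uses the Epetra package to create a vector distributed across MPI processes and take a global norm. Exact call signatures vary across Trilinos versions, so treat this as illustrative rather than definitive.

```python
from PyTrilinos import Epetra

# Run under MPI, e.g. `mpiexec -n 4 python epetra_demo.py`.
comm = Epetra.PyComm()   # wraps MPI_COMM_WORLD (serial if MPI is absent)
n_global = 1_000_000

# An Epetra.Map describes how global indices are divided among processes;
# this constructor yields an even block distribution with index base 0.
vec_map = Epetra.Map(n_global, 0, comm)

# The vector stores only this process's share of the entries; in PyTrilinos
# it also behaves like a NumPy array over that local block.
x = Epetra.Vector(vec_map)
x.PutScalar(2.0)

# Norm2 is a collective operation: all processes participate, and all
# receive the global 2-norm.
print("process %d of %d: ||x|| = %g" % (comm.MyPID(), comm.NumProc(), x.Norm2()))
```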
Seamless
Seamless provides functionality to speed up Python via JIT compilation and makes integration between Python and other languages nearly effortless. Built on LLVM, Seamless uses the compiler infrastructure's introspection capabilities to wrap existing C and C++ (and eventually Fortran) libraries while minimizing code duplication, combining many of the best features of Cython, ctypes, SWIG, and PyPy.
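Seamless is not yet released, so the sketch below is entirely hypothetical (the names `seamless.jit` and `seamless.wrap` are placeholders for this post), but it conveys the intended workflow: decorate a Python function to JIT-compile it, and wrap a C library from its header with no hand-written interface code.

```python
import seamless  # hypothetical package name; not yet available

# JIT-compile a numeric Python function to machine code via LLVM
# (placeholder decorator name).
@seamless.jit
def saxpy(alpha, x, y):
    for i in range(len(x)):
        y[i] = alpha * x[i] + y[i]
    return y

# Wrap an existing C library by introspecting its header, with no
# hand-written wrapper code (placeholder function name).
libm = seamless.wrap("math.h", lib="m")
print(libm.cos(0.0))
```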
We are very excited to have the opportunity to work on this Python HPC framework, and we look forward to working with the scientific Python community to move NumPy into the next age of distributed scientific computing. We will post project progress and updates on Enthought's website. We would like to thank the Department of Energy's SBIR program for the opportunity to develop these packages, as well as the collaborators and industry partners whose support made this possible.
[0] http://science.energy.gov/sbir/awards/
[1] http://science.energy.gov/~/media/sbir/excel/2013_Phase_II_Release_1.xlsx
[2] http://www.python.org/dev/peps/pep-3118/