Join the 2021 GSH Geophysics in the Cloud competition. Build a novel seismic inversion app and access all the data on demand with serverless cloud storage. Example notebooks show how to access the data and use AWS SageMaker to build your ML models. With prizes.
Author: Ben Lasscock, Ph.D.
Geophysics in the Cloud Competition
The 2021 Houston GSH Geophysics in the Cloud Competition is sponsored by AWS Energy and Enthought. The competition will allow teams and individuals to develop new and innovative solutions for seismic inversion. We’re going all in on the cloud: you will be provided with the latest technologies for serverless access to big data, example AWS SageMaker notebooks showing how to build ML models in the cloud, and a gather.town space where we can work and collaborate in 8-bit.
Access All the Data
A common theme when discussing AI/ML in exploration geophysics has been that only a very small percentage of available data is used in analysis and decision making. One of the goals of this competition is to make ALL data available to the participants, on demand.
This competition presents logistical and technical challenges for organizers and participants alike. The seismic datasets are large: downloading them would typically take hours, a cost multiplied across every participant. As organizers, we don’t want to see the work of loading and manipulating large SEG-Y files replicated across the teams. More overhead loading data means less time (and less fun) developing ML for seismic inversion.
While we want participants to have access to ALL the data, we expect they will use only what they find relevant to solving the competition problem. This detail is important when using specialized GPU instances and tools like AWS SageMaker to build models: we don’t want to waste valuable compute time doing I/O.
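To make that concrete, here is a minimal sketch of launching a GPU training job with the SageMaker Python SDK. The bucket path, entry-point script, framework choice, and hyperparameters are illustrative assumptions, not part of the competition setup.

```python
# Sketch: a single-GPU training job with the SageMaker Python SDK (v2).
# The entry-point script, S3 prefix, and hyperparameters are hypothetical.
import sagemaker
from sagemaker.pytorch import PyTorch

session = sagemaker.Session()
role = sagemaker.get_execution_role()  # IAM role of the notebook environment

estimator = PyTorch(
    entry_point="train_inversion.py",   # hypothetical training script
    role=role,
    framework_version="1.8.1",
    py_version="py3",
    instance_count=1,
    instance_type="ml.p3.2xlarge",      # single-GPU instance
    sagemaker_session=session,
    hyperparameters={"epochs": 10},
)

# Point the job at only the slice of data you actually need (hypothetical S3 prefix).
estimator.fit({"train": "s3://example-bucket/competition/train/"})
```

Because the training channel is just an S3 prefix, narrowing it to the relevant subset of the data is how you keep the GPU busy with modeling rather than I/O.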
Going Serverless
Competition datasets will be made available to the participants through a convenient API, with the data reformatted for efficient serverless access. Serverless means the data can be accessed directly from blob storage (S3). For the organizers, there is no extra server to manage just to provide access to the data. For the participants, it means efficient access to the parts of the dataset they want, on demand.
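As an illustration of what serverless access means in practice, the sketch below reads a byte range directly from an object in S3 with boto3. The bucket and key names are hypothetical placeholders, not the competition dataset.

```python
# Sketch: reading only the bytes you need directly from S3, with no data server in between.
# Bucket and key names are hypothetical placeholders.
import boto3

s3 = boto3.client("s3")

# Fetch just the first megabyte of a (hypothetical) object via an HTTP range request.
response = s3.get_object(
    Bucket="example-competition-bucket",
    Key="seismic/volume.vds",
    Range="bytes=0-1048575",
)
chunk = response["Body"].read()
print(f"Fetched {len(chunk)} bytes on demand")
```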
One such efficient format for seismic data is OpenVDS. OpenVDS provides fast access to slices (inline, crossline, and time) and to 3D chunks. The upcoming release of OpenVDS+, by Bluware, provides an easy, pip-installable library that participants can use in their notebooks. OpenVDS is also part of the OSDU Data Platform, so we should be seeing a lot more of it in the future.
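A rough sketch of what slice access with the openvds Python bindings can look like is given below. The URL, connection string, voxel ranges, and exact call signatures are assumptions (they vary between OpenVDS releases), so treat this as an outline; the competition notebooks will show the authoritative usage.

```python
# Rough sketch of reading one inline slice with the openvds Python bindings.
# URL, connection string, and exact signatures are assumptions and may differ
# between OpenVDS releases.
import openvds

url = "s3://example-bucket/volume.vds"  # hypothetical VDS object
connection_string = ""                  # credentials/region taken from the environment

handle = openvds.open(url, connection_string)
layout = openvds.getLayout(handle)
access_manager = openvds.getAccessManager(handle)

n_samples = layout.getDimensionNumSamples(0)  # time/depth axis
n_xlines = layout.getDimensionNumSamples(1)   # crossline axis

# Request a single inline slice: all samples and crosslines at inline index 0.
request = access_manager.requestVolumeSubset(
    (0, 0, 0),                  # min voxel (sample, crossline, inline)
    (n_samples, n_xlines, 1),   # max voxel (exclusive)
    format=openvds.VolumeDataChannelDescriptor.Format.Format_R32,
)
request.waitForCompletion()     # block until the subset has been fetched
inline_slice = request.data.reshape(n_xlines, n_samples)  # axis order may need adjusting
```

The point is that only the requested slice is pulled from S3, rather than the whole volume.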
Get Started
The problem of assembling an AI/ML-ready dataset has been solved by using a serverless model, making the most of the scarce resources available for the competition.
This story really isn’t too different from what we see in the industry at large: how to get the most innovation with the least expenditure while making highly efficient use of expert time.
Let the competition begin. Entries close 26 March, and the competition begins 1 April. No foolin’.
Visit the website to learn more and enter the competition.
About the Author
Ben Lasscock holds a Ph.D. and a B.Sc. in theoretical physics, as well as a B.Sc. in physics and theoretical physics, from the University of Adelaide. Before coming to geoscience, Ben worked as a portfolio manager at a large hedge fund in Australia. He has publications in the areas of high energy physics, Bayesian time series analysis, and geophysics.