Label Data Exponentially Faster to Train and Apply Machine Learning Models
Clients have proprietary seismic data and develop their own proprietary neural network models. Enthought collaboratively builds customized software – in this case, for seismic data labeling – and a deep learning model-building toolkit which, when combined, create an entirely new interpretation capability.
The Enthought seismic deep learning model building toolkit allows you to quickly label data, collaboratively interpret with models developed with Enthought experts, and use software technology designed for managing AI systems, models and their outputs.
Expertise and infrastructure technology are also available to assist in implementation, where existing licensed software packages and IT systems can limit the potential of AI/Machine Learning techniques.
Client seismic data and proprietary neural networks are complemented by Enthought's customized data labeling software and deep learning toolkit. The result is order-of-magnitude improvements in expert time usage, interpretation quality, and efficiency.
Accelerating Seismic Interpretation with Machine Learning
Seismic data presents multiple challenges for interpreters, in particular large data volumes and the often awkward SEG-Y format. Advanced digital technologies offer significant potential to increase efficiency in handling, integrating, and analyzing data at multiple points in existing workflows, as well as to create a new generation of more efficient and collaborative workflows built around shared machine learning models.
This case study presents a partnership with a major independent where Enthought successfully researched the efficacy of using machine learning techniques and custom processing workflows to automatically delineate measured patterns throughout a seismic volume.
Enthought collaborates with clients to build customized software that creates new possibilities in geoscience.
Automating Sequence Stratigraphy Using Deep Learning
In a typical workflow, a sequence stratigrapher interprets a seismic volume by analyzing a series of 2D cross-sections sampled regularly throughout it. This typically involves interpreting 1 in every 10 or more of the possible cross-sections to produce a consistent volume, an arduous and labor-intensive process.
In this proof-of-concept project using AI, the interpreter needed to label fewer than 1 in every 1,000 possible cross-sections from the publicly available Poseidon data set, training a machine that then interpreted the remaining volume.
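To put the sampling difference in concrete terms, here is a short, hypothetical sketch of how sparse training lines might be chosen. The inline count of 3,000 is illustrative only, not a figure from the Poseidon survey:

```python
import numpy as np

# Hypothetical sketch: pick a handful of evenly spaced inlines to label,
# leaving the rest for the trained model to predict.
n_inlines = 3000                  # candidate cross-sections in the volume
n_labeled = 3                     # about 1 in every 1,000

train_idx = np.linspace(0, n_inlines - 1, n_labeled, dtype=int)
predict_idx = np.setdiff1d(np.arange(n_inlines), train_idx)

print(train_idx)                  # [   0 1499 2999]
print(len(predict_idx))           # 2997 sections left to the model
```

Compare this with the traditional workflow's 1-in-10 sampling, which for the same volume would require roughly 300 hand-interpreted sections instead of 3.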
Machine learning and AI can alleviate the drudgery of interpreting large seismic volumes and allow more time for experts to focus on quality and value.
You Know Seismic. We Know Python.
There are two major barriers today to achieving the potential of deep learning in seismic interpretation: fast, consistent labeling of the seismic data, and a deep learning toolkit that enables quick experimentation with models, while iterating and managing all the data associated with your model and results. Enthought has developed scientific software technology that solves both.
Geophysicists need to produce enough labeled data, fast enough, to train deep learning models that will have business impact, both in terms of interpretation quality and expert efficiency.
The seismic toolkit’s labeling tool enables fast, efficient creation of the significant amount of labeled training data that deep learning models need in order to perform. As the labeling process iterates, a combination of low-cost, fast-to-train deep learning models and proprietary post-processing methods accelerates the rate of label creation.
The value comes from combining a geoscientist operator’s deep expertise in geophysics and subsurface understanding with Enthought’s expertise in AI, machine learning, software development, and foundational geophysics.
Plug and Play Models Optimized for Limited Training Data
The figure above shows cross-sections through the F3 3D seismic volume with sequence labels predicted using the trained AI, overlaid in color. Deep learning was used to derive a horizon on top of the “Salt” label, detailing the complicated geology immediately bounding the intrusion of the salt. The AI is initially trained with the interpretation of 5 inlines (out of 650). An initial prediction is then made on an additional 5 lines; any errors are corrected, and the AI continues training on the updated lines. Horizons are then extracted automatically from the resulting 3D volume(s) of predicted label probability.
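The train/predict/correct loop described above can be sketched as follows. This is a hedged illustration only: the `model` interface (`fit`, `predict_proba`) and the `correct_labels` expert-correction callback are hypothetical stand-ins, not Enthought's toolkit API.

```python
import numpy as np

def interpret_volume(volume, model, correct_labels, seed_lines=5, rounds=2):
    """Iteratively train a 2D segmentation model on a few inlines.

    volume: (n_inlines, depth, xlines) array of seismic amplitudes.
    correct_labels(i, pred): expert supplies or corrects labels for inline i.
    Returns a (n_inlines, depth, xlines, n_classes) label-probability volume.
    """
    n_inlines = volume.shape[0]
    # Start with a few evenly spaced, hand-labeled inlines.
    labeled = list(np.linspace(0, n_inlines - 1, seed_lines, dtype=int))
    labels = {i: correct_labels(i, None) for i in labeled}

    for _ in range(rounds):
        model.fit([volume[i] for i in labeled], [labels[i] for i in labeled])
        # Predict a few new lines; the expert fixes any errors, and the
        # corrected lines join the training set for the next round.
        new = [i for i in np.linspace(0, n_inlines - 1,
                                      2 * len(labeled), dtype=int)
               if i not in labeled][:seed_lines]
        for i in new:
            pred = model.predict_proba(volume[i]).argmax(axis=-1)
            labels[i] = correct_labels(i, pred)
            labeled.append(i)

    # Final pass: predict label probabilities for every inline. Horizons
    # can then be extracted from this probability volume.
    return np.stack([model.predict_proba(volume[i]) for i in range(n_inlines)])
```

The key design point is that each round's model only has to be good enough for the expert to correct quickly, so labeling accelerates even while the model is still rough.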
Revolutionize the Way You Interpret Seismic Data
The figure above shows the corresponding Shannon-entropy volume, which is derived from the prediction probability. Hot colors indicate locations where we cannot confidently assign a single label. Naturally, there is a transition at the sequence boundaries, so these regions have higher entropy. There also seems to be a giant “bird” buried underground; or is it just the salt?
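The entropy volume above follows directly from the per-voxel label probabilities. A minimal sketch, assuming the model outputs a probability vector per voxel (`shannon_entropy` is an illustrative helper, not toolkit code):

```python
import numpy as np

def shannon_entropy(probs, eps=1e-12):
    """Shannon entropy (in bits) along the last axis of a probability array.

    probs has shape (..., n_classes) and sums to 1 along the last axis;
    high entropy flags voxels where the model cannot commit to one label.
    """
    p = np.clip(probs, eps, 1.0)   # avoid log(0)
    return -np.sum(p * np.log2(p), axis=-1)

# A confident voxel vs. an ambiguous one near a sequence boundary:
confident = shannon_entropy(np.array([0.98, 0.01, 0.01]))   # ~0.16 bits
ambiguous = shannon_entropy(np.array([0.4, 0.3, 0.3]))      # ~1.57 bits
```

Applied to the full (n_inlines, depth, xlines, n_classes) probability volume, this yields the entropy volume in one vectorized call, with the maximum possible value being log2(n_classes).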
Enthought Presentation at SPE
Enthought's Ben Lasscock and Brendon Hall presented a poster at the 2019 SPE Data Science Convention. The poster is titled "Deep Learning Augments Seismic Interpretation". Seismic interpretation requires the repetitive application of pattern and texture recognition of seismic images, informed by the geologic understanding of a skilled interpreter. AI/Machine Learning (ML) promises to alleviate the repetitive nature of this task.