For those just getting started with advanced scientific computing techniques, here are four steps to efficiently turn data into decisions with business value.
Author: Ryan Swindeman, Scientific Software Developer
In the 4-minute video below, Enthought scientist Ryan Swindeman explains why putting data into context is foundational to any digital transformation initiative, setting out four fundamental steps for applying data science to business problems.
1. Data Preparation (or Data Conditioning): This is the essential first step in a digital project. Data must be clean and accessible, and access to it must be quick and reliable. The data must be cataloged or categorized so that there is consistency in how it is reached and integrated into projects. Data preparation must be in service of a business need or objective, to solve a specific problem, and not simply a case of 'we need to organize our data'.
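A minimal sketch of this conditioning step, using pandas on a hypothetical sensor table (the column names, sites, and values are illustrative, not from the article): normalize inconsistent labels, drop records that cannot serve the question at hand, and fill small gaps from within each category.

```python
import pandas as pd
import numpy as np

# Hypothetical raw readings with a gap and inconsistent site labels.
raw = pd.DataFrame({
    "site": ["A", "a", "B", "B", None],
    "pressure_kPa": [101.2, np.nan, 99.8, 100.4, 101.0],
})

# Normalize categorical labels so records integrate consistently.
raw["site"] = raw["site"].str.upper()

# Drop records that cannot answer the business question (unknown site),
# then fill small measurement gaps with the per-site median.
clean = raw.dropna(subset=["site"]).copy()
clean["pressure_kPa"] = clean.groupby("site")["pressure_kPa"].transform(
    lambda s: s.fillna(s.median())
)

print(clean)
```

The point is not the specific transforms but that each one is tied to how the data will be used downstream, not to tidiness for its own sake.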
2. Data Visualization: Visualizing data is an important starting point for understanding a problem. This involves looking at the data in its native domain, identifying trends, and from there possibly transforming it to a different domain, cross-plotting to look for relationships, or running statistics to discover features. Visualization is also a reliable way to increase efficiency in problem-solving. The understanding gained through visualization is essential for deep learning: if you do not understand the underlying trends or relationships in the data, you will not understand the outcomes produced by any AI/ML/Deep Learning.
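Alongside plotting (with a library such as Matplotlib), the "cross-plot and run statistics" part of this step can be quantified numerically. A sketch on synthetic data, assuming a hypothetical depth/velocity relationship purely for illustration:

```python
import numpy as np

rng = np.random.default_rng(0)

# Synthetic example: a noisy measurement that trends with depth.
depth = np.linspace(0, 100, 200)
velocity = 1500 + 4.0 * depth + rng.normal(0, 20, depth.size)

# Cross-plot relationship: quantify it with a correlation coefficient...
r = np.corrcoef(depth, velocity)[0, 1]

# ...and a linear trend fit, the numeric counterpart of eyeballing a plot.
slope, intercept = np.polyfit(depth, velocity, 1)

print(f"correlation={r:.3f}, slope={slope:.2f}, intercept={intercept:.1f}")
```

Numbers like these confirm (or challenge) what the eye sees in a plot, which is exactly the understanding later AI/ML work depends on.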
3. Modeling and Optimization: This step uses the underlying dynamics or physics of the problem, and the applications are endless. (In geophysics, this is often called forward modeling and inversion.) Most critically, modeling and optimization allow scientists to prove (or disprove) hypotheses very quickly, enabling teams to test, iterate, and change strategy, often resulting in problems being solved quickly.
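The forward-modeling-and-inversion pattern can be sketched in a few lines with SciPy. Here the "physics" is a hypothetical exponential decay chosen only for illustration: the forward model predicts observations from parameters, and the inversion recovers the parameters that best explain noisy data.

```python
import numpy as np
from scipy.optimize import curve_fit

# Forward model: exponential decay, standing in for the known physics
# of a hypothetical problem (names and values are illustrative).
def forward(t, amplitude, decay_rate):
    return amplitude * np.exp(-decay_rate * t)

rng = np.random.default_rng(1)
t = np.linspace(0, 5, 50)
true_params = (2.0, 0.8)
observed = forward(t, *true_params) + rng.normal(0, 0.02, t.size)

# Inversion: recover the model parameters that best explain the data.
estimated, _ = curve_fit(forward, t, observed, p0=(1.0, 1.0))

print("estimated amplitude, decay_rate:", estimated)
```

Because each fit runs in milliseconds, a hypothesis encoded as a forward model can be tested and discarded in one sitting, which is the speed advantage the article describes.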
4. AI/ML/Deep Learning: These advanced computing techniques are related but differ in important ways. Unlike modeling and optimization, or inversion (which is a physics-based approach), AI/ML/Deep Learning is a data-driven approach. These techniques are beneficial when forward modeling and optimization are not possible because the underlying physics is not well understood, or because the physics requires too many approximations. The problem-solving and analytical power of AI/ML/Deep Learning becomes obvious in pattern recognition or texture analysis.
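To make the physics-based vs. data-driven contrast concrete, here is a deliberately tiny data-driven classifier: a nearest-neighbor rule that labels a sample by resemblance to previously seen examples, with no physical model anywhere. The two "texture" classes and their feature vectors are synthetic assumptions for illustration; real applications would use richer features and models such as deep networks.

```python
import numpy as np

rng = np.random.default_rng(2)

# Two synthetic "texture" classes described only by data (feature
# vectors), with no physics model behind them: the data-driven setting.
smooth = rng.normal(loc=[0.2, 0.1], scale=0.1, size=(50, 2))
rough = rng.normal(loc=[0.8, 0.9], scale=0.1, size=(50, 2))
X = np.vstack([smooth, rough])
y = np.array([0] * 50 + [1] * 50)

def nearest_neighbor_predict(X_train, y_train, x):
    """Classify x by the label of its closest training example."""
    distances = np.linalg.norm(X_train - x, axis=1)
    return y_train[np.argmin(distances)]

# Classify a new, unseen sample near the 'rough' cluster.
label = nearest_neighbor_predict(X, y, np.array([0.75, 0.85]))
print("predicted class:", label)
```

Everything the classifier "knows" comes from the examples themselves, which is exactly why these methods shine when the governing physics is unknown or too approximate to model forward.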
These four steps provide a robust sequence for solving problems with data, whether the data set is small or large, and are fundamental to digital transformation projects.
About the Author
Ryan Swindeman, Scientific Software Developer, holds an M.S. in geophysics from the University of Texas at Austin and a B.S. in physics from the University of Illinois at Urbana-Champaign, with graduate research in computational seismology.