Small Data, Big Value

Author: Mason Dykstra, Ph.D., VP Energy Solutions 

Enthought welcomes Mason as VP, Energy Solutions. His background at Anadarko, Statoil, and as a professor at the Colorado School of Mines qualifies him to make the case for ensuring 'Small Data' is equally part of the Fourth Industrial Revolution. This is the first in a Small Data series.

The origin of the term Big Data will likely never be agreed upon. In the world of science and computing, however, a case can be made that it originated at Silicon Graphics in the 1990s, whose work in video, for surveillance and Hollywood special effects, confronted it with orders of magnitude more data than ever before. Recent advances in scientific computing technology and techniques, together with the massive generation of data, particularly by consumers and from social media, have put the term Big Data at center stage.

However, in many scientific fields, Big Data does not exist. There, it's all about getting the most from 'Small Data' and ensuring that scientific challenges with minimal data also benefit from the 'Fourth Industrial Revolution'. In many natural sciences and engineering disciplines, large volumes of data can be difficult or very expensive to generate. The reality is that these datasets are often limited in size, poorly curated, and bespoke to particular problems. So either the fields lacking Big Data will be left out of the 'Revolution', or we need to work on ways of unleashing the power of Small Data.

Scientists are particularly adept at teasing meaning out of Small Data and drawing important conclusions with limited datasets. The future will be a collaboration between humans and machines, but clearly we don’t only want to solve the problems that have Big Data behind them. In cases where datasets are relatively small, or important pieces of information are missing, how can we develop this type of ‘intelligence’ in machines? 

We need to engineer applications that can approach problems the way a scientist would. Scientists typically hypothesize as they go; that is, they don't wait until they have enough data to draw conclusions, but generate, evolve, and discard hypotheses along the way. While gathering data, they are already engaged in problem-solving.

For example, when a geologist is creating a map of the geologic layers and faults beneath the Earth's surface, they continually make educated guesses about what some of the map features will look like before they have gathered all the data. Not only does this give the geologist something early on paper (ok, on screen), it also provides a basis for hypothesis testing and can help steer the subsequent data-gathering steps. Think of this as akin to coming into a new town for the first time: even though you may never have been to that particular town before, all towns share certain traits, which we can use to imagine the parts we haven't yet seen. This kind of intuitive thinking and rule-of-thumb-based guessing, although critical for many sciences, has not been the realm of computers. Yet.
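As a rough illustration (not drawn from the article itself), one way to encode that kind of rule-of-thumb prior in software is a Gaussian process: with only a handful of measurements, the kernel supplies the "all towns are similar" assumption, and the model's uncertainty suggests where to gather data next. The transect, measurements, and length scale below are invented purely for the sketch.

```python
# Toy sketch: hypothesize-as-you-go with very little data.
# The hypothetical scenario: five sparse "field measurements" of some
# subsurface property along a transect.
import numpy as np
from sklearn.gaussian_process import GaussianProcessRegressor
from sklearn.gaussian_process.kernels import RBF

x_obs = np.array([[0.0], [1.2], [2.5], [4.0], [5.0]])   # measurement locations (km)
y_obs = np.array([0.1, 0.9, 0.3, -0.4, 0.2])            # measured values (made up)

# The kernel plays the role of the scientist's prior: nearby locations
# are expected to behave similarly.
gp = GaussianProcessRegressor(kernel=RBF(length_scale=1.0), alpha=1e-3)
gp.fit(x_obs, y_obs)

# Predict between the observations, with uncertainty. The most uncertain
# location is a natural candidate for the next measurement, i.e. the
# evolving hypothesis steers the data gathering.
x_new = np.linspace(0.0, 5.0, 51).reshape(-1, 1)
mean, std = gp.predict(x_new, return_std=True)
print("Most uncertain location along the transect:", x_new[np.argmax(std), 0])
```

The point is not this particular model; it is that the prediction and its uncertainty together play the role of the geologist's evolving, testable guess.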

So the real question is: can we capture the essential parts of that rule-making process and combine it with 'machine reasoning' to develop Small Data approaches akin to the way a scientist would approach a problem, but much faster and more consistently? This is one of the major challenges facing many scientists today, whether they recognize it yet or not.

One thing we do know, paraphrasing Antonio Di Ieva in The Lancet: 'Machines will not replace scientists, but scientists using AI will soon replace those not using it.'

About the Author

Mason Dykstra, Ph.D., VP Energy Solutions at Enthought, holds a PhD from the University of California Santa Barbara, an MS from the University of Colorado Boulder, and a BS from Northern Arizona University, all in the Geosciences. Mason has worked in Oil and Gas exploration, development, and production for over twenty years, split between oil industry-focused applied research at the Colorado School of Mines and the University of California, Santa Barbara, and work within companies including Anadarko Petroleum Corporation and Statoil (Equinor).
