Retuning the Heavens: Machine Learning and Ancient Astronomy

What can we learn about machine learning from ancient astronomy?

When thinking about machine learning, it is easy to be model-centric and get caught up in the details of getting a new model up and running: preparing a dataset, partitioning the training and test data, engineering and selecting features, finding an appropriate metric, choosing a model, and tuning the hyper-parameters. Being model-centric is reinforced by the fact that we don’t always control the data or how it was collected. In most cases, we are handed a dataset collected by someone else and asked what we can make of it. As a result, it is easy to simply accept the data and over-fit your thinking about machine learning to the specifics of your own modeling process and experience. Sometimes it is a good idea to step away from these details and remind yourself of the basic components of a model and its data, how they interact with each other, and how they evolve.

Ask yourself, have I considered all aspects of the model and the data driving it? Does my overall machine learning process generalize well?

It is also useful to think about where your machine learning model fits into the field you are modeling. Is this an area with a long history of research and modeling, one that can be well-informed by science? Or is this an area where your modeling efforts are a first exploration and no one is sure how the data are related? The answers to these questions will help determine how much statistical and mathematical rigor you should expect to see in your model.

One way to gain perspective when thinking about machine learning processes is to look at other people’s processes and the models developed from them. In this article we are going to examine what could be called one of the first machine learning models in history: ancient astronomy, from Babylonian times through the Newtonian age.

How could ancient astronomy be viewed as a machine learning system?

In a classic supervised learning paradigm we have four basic components: observed data; a prediction to be made; a model (which processes data into predictions); and a metric. In our ancient astronomy example, the observed data consist of Babylonian star catalogs: the time of year, which stars (including the planets) are seen at sunrise, and which stars are seen at sunset. What we want to predict is the time of year in the cycle of seasons, given the stars seen at sunrise and/or sunset. In Babylonian times, the primary purpose of these predictions was to decide whether it was a good time to plant the staple crop of winter wheat (planted in the autumn to over-winter, and then harvested in spring). Our model in this case is a lookup table (essentially a decision tree) that converts astronomical observations into a fairly precise time of year, which can then drive planting and harvesting decisions. Finally, the metric for success was simple: starvation or plenty. Bad predictions led to low yields: crops that did not survive winter, or crops that did not get the full benefit of spring rains before being stunted by the heat of summer. Good predictions generally yielded healthy crops and plentiful harvests.
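
To make the analogy concrete, here is a minimal sketch of that lookup-table model. The star names and season labels below are illustrative stand-ins, not entries from an actual Babylonian catalog:

```python
# A toy version of the Babylonian "model": a lookup table mapping a dawn
# observation (which star is first visible before sunrise) to a time of
# year. The star/season pairs are illustrative, not historical.
heliacal_risings = {
    "Sirius": "midsummer",
    "Arcturus": "early autumn -- time to plant the winter wheat",
    "Pleiades": "late spring -- time to harvest",
}

def predict_season(star_at_dawn: str) -> str:
    """Convert an astronomical observation into a time of year."""
    return heliacal_risings.get(star_at_dawn, "unknown -- the table needs re-tuning")

print(predict_season("Arcturus"))
```

Note the built-in failure mode: when an observation falls outside the table, the model simply has no answer. That fragility is exactly what kept the system in constant maintenance.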

While the “learning” part of this system was not automated the way a modern machine learning model is, it was still a data-driven system built on a series of observations. Over the system’s roughly 2,500 years in production, it had to be re-tuned periodically to account for the shifting correlations among the observations, the seasons, and the geographic location where it was used. As such, it was a fragile system: it had narrow applicability in both time and space, and it needed routine maintenance to keep its predictions reliable and useful.

Over the centuries, the model (the lookup table) was greatly improved with new data acquired across time and space. After about 1,000 years, it had evolved into what we now call Ptolemaic astronomy. In this system, there was an underlying theory that the stars moved in perfectly circular orbits around the Earth. In addition, the planets (the “wandering stars”) were better represented in the model, using nested circular motions (cycles and epicycles). This accounted for the times when, as observed from the Earth, a planet apparently moved in reverse for a short time (retrograde motion) before returning to its forward motion through the sky.
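
A short numerical sketch shows how nested circles produce retrograde motion. This is a toy deferent-and-epicycle model with made-up radii and speeds, not Ptolemy's actual parameters:

```python
import numpy as np

# Toy deferent-and-epicycle model: the planet rides a small circle (the
# epicycle) whose center rides a large circle (the deferent) around the
# Earth at the origin. All values are illustrative, not Ptolemy's.
R, omega = 1.0, 1.0   # deferent radius and angular speed
r, Omega = 0.4, 5.0   # epicycle radius and angular speed

t = np.linspace(0, 2 * np.pi, 1000)
x = R * np.cos(omega * t) + r * np.cos(Omega * t)
y = R * np.sin(omega * t) + r * np.sin(Omega * t)

# Apparent position on the sky, as seen from the Earth at the origin.
longitude = np.unwrap(np.arctan2(y, x))

# Retrograde motion is simply the stretches where the apparent
# longitude decreases instead of increasing.
retrograde = np.diff(longitude) < 0
print(f"fraction of time spent in retrograde: {retrograde.mean():.2f}")
```

With enough nested circles and carefully tuned radii and speeds, this family of models can approximate almost any periodic motion, which helps explain why the Ptolemaic model stayed useful for so long.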

This more complex machine learning model could make usefully accurate predictions about more astronomical phenomena. With this model, instead of simply looking up the season, it was possible to use geometry to calculate the season from the positions of the stars. The theory was successful enough that it could be embodied in a mechanical device, using gears to let users make predictions about star positions, planetary positions, and even eclipses days, months, or even years into the future (one such device, the Antikythera mechanism, was found in an ancient Greek shipwreck in 1901).

Take a Look at the Data

The Ptolemaic model remained in use until about 1600, with few changes other than re-tuning to account for new locations and the slow, long-term shift of star positions. The data used to run it were mostly unchanged; few astronomers after the Babylonians added new observations. The model was simply assumed to be a valid representation of the way the stars and planets moved.

This situation started to change in the early 1500s with the work of Nicolaus Copernicus. Copernicus’s main innovation was an attempt to make the Ptolemaic model heliocentric instead of geocentric. However, he focused exclusively on improving the Ptolemaic model and, like everyone else at the time, accepted Ptolemy’s curated dataset as is. In the end, he was mainly tinkering with the existing model (we can think of this as manually tuning a “kernel” hyper-parameter that specified the center of the model’s geometry). One thing that limited the acceptance of Copernicus’s ideas was that the tuning that best simplified the calculations was not strictly heliocentric, but one in which the center of the system was slightly offset from the sun’s position.
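
Copernicus's tinkering maps surprisingly well onto a modern hyper-parameter search. The sketch below is entirely synthetic, with a made-up circular "orbit" observed through noise, and is meant only to illustrate why the best-fitting center offset was not exactly zero:

```python
import numpy as np

# Synthetic "observations": a circular orbit around a center slightly
# offset from the sun, plus measurement noise. All values are made up.
rng = np.random.default_rng(0)
t = np.linspace(0, 2 * np.pi, 200)
true_offset = 0.1
observed = np.cos(t) + true_offset + rng.normal(0, 0.01, t.size)

def model(t, offset):
    """A circular orbit whose center sits `offset` away from the sun."""
    return np.cos(t) + offset

# Grid search over the "center offset" hyper-parameter, scored by
# mean squared error against the observations.
offsets = np.linspace(0.0, 0.2, 21)
errors = [np.mean((model(t, o) - observed) ** 2) for o in offsets]
best = offsets[int(np.argmin(errors))]
print(f"best-fitting offset: {best:.2f}")  # lands near 0.1, not at 0
```

The tuned value that minimizes the error is not zero, just as Copernicus's best-performing configuration was not strictly heliocentric.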

Copernicus’s main legacy was the questions he raised about the underlying Ptolemaic model. His work encouraged people to ask whether it was a valid representation of what we now call the solar system (back then it was called the “heavens”). In the late 1500s this inspired Tycho Brahe to collect data to prove the heliocentric model correct. Brahe suspected that noisy and imprecise Babylonian data had thrown off Copernicus’s calculations, and that if he could improve the data, the heliocentric model would fall into place and work. This was the first time in about 2,000 years that serious attention was paid to the data and how to collect it properly. To accomplish this, Brahe and his team invented and built new astronomical instruments that allowed them to systematically collect new, more precise data. While Brahe was never able to prove the validity of Copernicus’s model (after all, it was not an accurate representation of planetary orbits), his data changed everything.

Brahe’s data enabled his younger colleague Johannes Kepler to work out a more accurate orbit for Mars. Eventually Kepler realized that the Martian orbit could not be adequately explained with perfectly circular orbits in either the geocentric or heliocentric models. When he realized that Brahe’s extensive and precise data showed Mars in an elliptical orbit around the sun (explaining the need for an offset center in the earlier purely heliocentric-circular model), he had his breakthrough. Not only could he explain the motion around the sun, but he could also begin to see how all the planets sped up when near the sun and slowed down when farther away. By the late 1600s, Isaac Newton was able to convert this updated, but still ancient, astronomical model into mathematical equations with his new notion of gravity. Now, given a set of initial conditions (the positions of the stars and planets) and an elapsed time, scientists could calculate new positions for the astronomical bodies in question with great precision.
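
To see what Kepler and Newton bought us, here is a sketch of that "initial conditions plus elapsed time" calculation: solving Kepler's equation for a planet on an elliptical orbit. The orbital elements are roughly Mars-like but chosen for illustration, not taken from a real ephemeris:

```python
import numpy as np

# Roughly Mars-like orbital elements (illustrative, not an ephemeris).
a = 1.524         # semi-major axis in astronomical units
e = 0.0934        # eccentricity of the ellipse
period = a**1.5   # orbital period in years, via Kepler's third law

def position(t_years):
    """Heliocentric (x, y) at time t, measured from perihelion."""
    M = 2 * np.pi * t_years / period   # mean anomaly
    E = M                              # initial guess for eccentric anomaly
    for _ in range(20):                # Newton's method on M = E - e*sin(E)
        E -= (E - e * np.sin(E) - M) / (1 - e * np.cos(E))
    x = a * (np.cos(E) - e)
    y = a * np.sqrt(1 - e**2) * np.sin(E)
    return x, y

print(position(0.5))   # position half a year after perihelion
```

The varying speed Kepler noticed falls out of the same geometry: sweeping out equal areas in equal times means the planet moves fastest at perihelion, closest to the sun.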

Machine Learning and the Scientific Process

We started this article with an observation about how easy it is to get lost in the technical details of machine learning. At this point we may be equally lost in the historical details of some old, obsolete astronomy. Let’s back out of that with some generalizations that connect the two strands of this article and show where machine learning fits (or should fit) into the scientific process:

  1. Observe.  Many scientific enterprises start with a simple observation of nature. In Babylonia this started with the observation that the stars had regular cycles, returning eventually to their starting points and repeating their movements.
  2. Correlate. The next step is to note a correlation between what we observe and something that we want to predict. In our example, we have the correlation between the stars and the seasons.
  3. Predict.  In the simplest of cases we can create an algorithm that takes our observations and makes predictions. With modern machine learning we can do the same for even very complex systems. Even without all of the modern tooling, for some 2,500 years astronomers made useful predictions with the Ptolemaic model of the heavens. They got these useful predictions without really understanding the underlying mechanisms (or causes) in nature. It was a long path from predictability to understanding. As a result, successful models, whether ancient or modern, don’t have to be “true”; they just have to be “useful”.
  4. Update.  Once we have a working model, we can improve it. We can do this in two basic ways. First is a data-centric approach: by improving our collection of data (minimizing noise, increasing precision, collecting more samples) and our understanding of what the data actually represent, we improve our ability to detect patterns in the data (Brahe). Second is a model-centric approach: we can try different models and see which ones work best given the data we have (Ptolemy, Copernicus); see the sketch after this list. Nowadays we are not tied to just decision trees or circular models. There are many models to choose from, and we can inform our model choices based on our understanding of the data (Kepler). Keep in mind that whatever model we work with, it is inherently limited by the data we feed into it. Be data-centric. A well-curated dataset can last a long time, even 2,500 years! Collecting data is hard, time-consuming, expensive, and not as fun as tinkering with models. Nevertheless, one of the most important things you can do for any model is to collect the best data you can, and to revisit the data when instruments and processes improve.
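
Here is a minimal sketch of that model-centric comparison, assuming scikit-learn is available and using synthetic data as a stand-in for real observations:

```python
import numpy as np
from sklearn.linear_model import LinearRegression
from sklearn.model_selection import cross_val_score
from sklearn.tree import DecisionTreeRegressor

# Synthetic stand-in for real observations: a noisy periodic signal,
# loosely in the spirit of "position in the cycle predicts the season".
rng = np.random.default_rng(0)
X = rng.uniform(0, 2 * np.pi, size=(300, 1))
y = np.sin(X[:, 0]) + rng.normal(0, 0.1, size=300)

# The model-centric step: score several candidate models on the same data.
for model in (LinearRegression(), DecisionTreeRegressor(max_depth=4)):
    scores = cross_val_score(model, X, y, cv=5, scoring="r2")
    print(f"{type(model).__name__}: mean R^2 = {scores.mean():.2f}")
```

Swapping in a different model is a one-line change; improving the dataset itself, as Brahe did, is a much larger investment, and often the one that matters most.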

In science and engineering, building a machine learning (or ancient astronomical) model is just a starting point.

As we understand both our data and our models better, we can move beyond making useful predictions. We can add the rigor of statistical modeling and see whether parts of the model might represent something real in the world. If we are lucky, we might even start seeing patterns that allow us to build mathematical models that give us new knowledge about the way the world really works and hint at other patterns we should be looking for.

About the Author

Eric Olsen holds a Ph.D. in history from the University of Pennsylvania, an M.S. in software engineering from Pennsylvania State University, and a B.A. in computer science from Utah State University. Eric spent three decades working in software development in a variety of fields, including atmospheric physics research, remote sensing and GIS, retail, and banking. In each of these fields, Eric focused on building software systems to automate and standardize the many repetitive, time-consuming, and unstable processes he encountered.
