Climate models can run on supercomputers for months, but my new algorithm can make them ten times faster

Climate models are among the most complex pieces of software ever written, capable of simulating many different parts of the overall system, such as the atmosphere or the ocean. Many have been developed by hundreds of scientists over decades and are constantly being added to and refined. They can span more than a million lines of computer code – tens of thousands of printed pages.

Not surprisingly, these models are expensive. The simulations take time, often several months, and the supercomputers on which the models run consume a lot of energy. But a new algorithm I developed promises to make many of these climate model simulations ten times faster, and could ultimately be an important tool in the fight against climate change.

One reason climate modeling takes so long is that some of the processes simulated are intrinsically slow. The ocean is a good example. It takes a few thousand years for water to circulate from the surface to the deep ocean and back (the atmosphere, on the other hand, has a ‘mixing time’ of weeks).

Scientists have known this would be a problem since the first climate models were developed in the 1970s. To simulate climate change, a model must start from a stable state representative of the period before industrialization began releasing greenhouse gases into the atmosphere.

To achieve such a stable equilibrium, scientists "spin up" their model by essentially running it until it stops changing (the system is so complex that, just like in the real world, some fluctuations always remain).

An initial condition with minimal 'drift' is essential for accurately simulating the effects of human-induced factors on the climate. But thanks to the ocean and other slow components, this spin-up can take several months even on large supercomputers. No wonder climate scientists rank this bottleneck among the 'grand challenges' of their field.

You can’t just throw more computers at the problem

You might be wondering, "Why don't we just use an even bigger machine?" Unfortunately, it wouldn't help. Simply put, a supercomputer is just thousands of individual computer chips, each containing dozens of processing units ('cores'), all connected together by a high-speed network.

One of the machines I use has over 300,000 cores and can perform almost 20 quadrillion arithmetic operations per second. (Obviously it’s shared by hundreds of users and each simulation will only use a small part of the machine.)

A climate model takes advantage of this by dividing the planet's surface into smaller regions – subdomains – with the calculations for each region running simultaneously on a different core. Basically, the more subdomains you have, the less time it takes to perform the calculations.

That is true, but only up to a point. The problem is that the different subdomains need to 'know' what is happening in adjacent ones, which requires sending information between chips. That is much slower than the speed at which modern chips can perform arithmetic, what computer scientists call a 'bandwidth limitation'. (Anyone who has tried to stream a video over a slow internet connection knows what that means.) So there are diminishing returns as more computing power is thrown at the problem. Ocean models in particular suffer from this poor 'scaling'.
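
To get a feel for why this happens, here is a rough back-of-the-envelope sketch in Python (my own illustration, not code from any climate model). It assumes a hypothetical 3,600 x 3,600 global grid and estimates what fraction of each subdomain's cells are 'halo' cells – boundary values that must be fetched from neighboring subdomains – as the grid is carved into more and more pieces:

def halo_fraction(n_global=3600, splits_per_side=10, halo_width=1):
    """Fraction of a square subdomain's cells that are halo (communicated) cells."""
    n_local = n_global // splits_per_side          # cells per side of one subdomain
    interior = n_local ** 2                        # cells this core computes itself
    with_halo = (n_local + 2 * halo_width) ** 2    # cells it needs, including neighbors' data
    return (with_halo - interior) / with_halo

for splits in (10, 30, 100, 300):
    frac = halo_fraction(splits_per_side=splits)
    print(f"{splits * splits:6d} subdomains -> {frac:5.1%} of each subdomain is halo")

With 100 subdomains only about 1% of each subdomain is data copied from its neighbors; with 90,000 subdomains it is more than a quarter, so the cores spend a growing share of their time communicating rather than computing.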

Ten times faster

This is where the new computer algorithm I developed and published in Science Advances comes into the picture. It promises to dramatically reduce the spin-up time of the ocean and other components of Earth system models. When tested against typical climate models, the algorithm was on average ten times faster than current approaches, reducing time from many months to a week.

The time and energy that this can save climate scientists is valuable in itself. But being able to spin up models quickly also means that scientists can calibrate them against what we know actually happened in the real world, making them more accurate or better defining the uncertainty in their climate projections. Spin-ups are so time-consuming that neither is currently feasible.

The new algorithm will also allow us to perform simulations with more spatial detail. Currently, ocean models generally tell us nothing about features smaller than 1° in latitude and longitude (about 110 km at the equator). But many critical phenomena in the ocean occur on much smaller scales – tens of meters to several kilometers – and higher spatial resolution will certainly lead to more accurate climate projections, for example of sea level rise, storm surges and hurricane intensity.

How it works

Like so much 'new' research, it is based on an old idea, in this case one that goes back centuries to the Swiss mathematician Leonhard Euler. It is called 'sequence acceleration', and you can think of it as using information from the past to extrapolate to a 'better' future.

Among other things, it is widely used by chemists and materials scientists to calculate the structure of atoms and molecules, a problem that happens to take up more than half of the world’s supercomputer resources.

Sequence acceleration is useful when a problem is iterative in nature, which is exactly what climate model spin-up is: you feed the model output back as input to the model. Rinse and repeat until the output equals the input and you have found your equilibrium solution.
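
In code, that spin-up loop is just a fixed-point iteration. Here is a minimal Python sketch (my own illustration, with a trivial stand-in for a real climate model):

import numpy as np

EQUILIBRIUM = np.array([1.0, 2.0, 3.0])     # the steady state (unknown in reality)

def model_year(state):
    # Toy stand-in for one simulated year: relax the state slowly towards equilibrium.
    return state + 0.01 * (EQUILIBRIUM - state)

def spin_up(state, tol=1e-8, max_years=1_000_000):
    """Plain spin-up: keep feeding the output back in until nothing changes."""
    for year in range(1, max_years + 1):
        new_state = model_year(state)
        if np.linalg.norm(new_state - state) < tol:
            return new_state, year
        state = new_state
    return state, max_years

state, years = spin_up(np.zeros(3))
print(f"plain spin-up converged after {years} model years")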

In the 1960s, the Harvard mathematician D.G. Anderson came up with a clever way to combine several previous outputs into a single input so that you arrive at the final solution with far fewer repetitions of the procedure. About ten times fewer, as I discovered when I applied his scheme to the spin-up problem.
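
In a nutshell, Anderson's trick is to keep a short history of recent inputs and outputs and blend them, via a small least-squares problem, into a much better next guess. Below is a minimal sketch of such an Anderson-accelerated loop; it reuses the toy model_year function from the previous sketch and is a textbook version of the method, not the actual code behind the paper:

import numpy as np

def anderson_spin_up(model_year, state, memory=5, tol=1e-8, max_years=100_000):
    """Anderson-accelerated spin-up: blend several past iterates into one new guess."""
    dx_hist, df_hist = [], []           # recent changes in the state and in the residual
    g = model_year(state)
    f = g - state                       # residual: how much one model year changes the state
    for year in range(1, max_years + 1):
        if np.linalg.norm(f) < tol:
            return state, year
        if dx_hist:
            X = np.column_stack(dx_hist)
            F = np.column_stack(df_hist)
            # Least-squares weights that best cancel the current residual
            gamma, *_ = np.linalg.lstsq(F, f, rcond=None)
            new_state = state + f - (X + F) @ gamma
        else:
            new_state = g               # first step: an ordinary fixed-point update
        g_new = model_year(new_state)
        f_new = g_new - new_state
        dx_hist.append(new_state - state)
        df_hist.append(f_new - f)
        if len(dx_hist) > memory:       # keep only the last few differences
            dx_hist.pop(0)
            df_hist.pop(0)
        state, g, f = new_state, g_new, f_new
    return state, max_years

state, years = anderson_spin_up(model_year, np.zeros(3))
print(f"accelerated spin-up converged after {years} model years")

On the toy problem above, the accelerated loop converges in a handful of iterations where the plain loop needs well over a thousand; the real algorithm applies the same idea to the vastly larger state of an ocean model.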

Developing a new algorithm is the easy part. Getting others to use it is often the bigger challenge. It is therefore promising that the UK Met Office and other climate modeling centers are trying it out.

The next major IPCC report is due in 2029. That still seems far away, but given the time it takes to develop models and carry out simulations, preparations are already in full swing. Coordinated through an international collaboration known as the Coupled Model Intercomparison Project, these simulations will form the basis for the report. It is exciting to think that my algorithm and software can make a contribution.


This article is republished from The Conversation under a Creative Commons license. Read the original article.

Samar Khatiwala receives funding from UKRI.
