DeepMind

Assembly Line

Probabilistic weather forecasting with machine learning

📅 Date:

✍️ Authors: Ilan Price, Alvaro Sanchez-Gonzalez, Ferran Alet

🔖 Topics: Forecasting

🏢 Organizations: DeepMind


Weather forecasts are fundamentally uncertain, so predicting the range of probable weather scenarios is crucial for important decisions, from warning the public about hazardous weather to planning renewable energy use. Traditionally, weather forecasts have been based on numerical weather prediction (NWP)1, which relies on physics-based simulations of the atmosphere. Recent advances in machine learning (ML)-based weather prediction (MLWP) have produced ML-based models with less forecast error than single NWP simulations2,3. However, these advances have focused primarily on single, deterministic forecasts that fail to represent uncertainty and estimate risk. Overall, MLWP has remained less accurate and reliable than state-of-the-art NWP ensemble forecasts. Here we introduce GenCast, a probabilistic weather model with greater skill and speed than the top operational medium-range weather forecast in the world, ENS, the ensemble forecast of the European Centre for Medium-Range Weather Forecasts4. GenCast is an ML weather prediction method, trained on decades of reanalysis data. GenCast generates an ensemble of stochastic 15-day global forecasts, at 12-h steps and 0.25° latitude–longitude resolution, for more than 80 surface and atmospheric variables, in 8 min. It has greater skill than ENS on 97.2% of 1,320 targets we evaluated and better predicts extreme weather, tropical cyclone tracks and wind power production. This work helps open the next chapter in operational weather forecasting, in which crucial weather-dependent decisions are made more accurately and efficiently.
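GenCast's forecasts are structured as an ensemble of stochastic trajectories rolled forward autoregressively in 12-hour steps over 15 days. The sketch below illustrates only that rollout structure, with a placeholder `step` function standing in for the learned model and a grid shrunk far below the real 0.25° resolution; it is not GenCast's architecture or API.

```python
import numpy as np

def step(state, rng):
    """Hypothetical stand-in for a learned stochastic 12-hour forecast step."""
    noise = rng.standard_normal(state.shape)
    return state + 0.1 * noise  # placeholder dynamics, not a weather model

def forecast_ensemble(initial_state, n_members=50, n_steps=30, seed=0):
    """Roll out n_members stochastic trajectories of n_steps x 12 h (15 days)."""
    members = []
    for m in range(n_members):
        rng = np.random.default_rng(seed + m)  # different noise seed per member
        state = initial_state.copy()
        trajectory = []
        for _ in range(n_steps):
            state = step(state, rng)
            trajectory.append(state)
        members.append(np.stack(trajectory))
    return np.stack(members)  # shape: (members, steps, *grid)

# Toy grid, shrunk from the real 0.25-degree 721 x 1440 grid: lat x lon x variables
ens = forecast_ensemble(np.zeros((72, 144, 6)), n_members=8)
print(ens.shape)               # (8, 30, 72, 144, 6)
print(ens.std(axis=0).mean())  # ensemble spread, a crude measure of forecast uncertainty
```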

Read more at Nature

The Inside Story of Google’s Quiet Nuclear Quest

📅 Date:

✍️ Author: Ross Koningstein

🔖 Topics: Machine Learning

🏢 Organizations: Google, TAE Technologies, DeepMind, Commonwealth Fusion Systems


The first research effort came from a proposal by my colleague Ted Baltz, a senior Google engineer, who wanted to bring the company’s computer-science expertise to fusion experiments at TAE Technologies in Foothill Ranch, Calif. He believed machine learning could improve plasma performance for fusion.

In 2014, TAE was experimenting with a warehouse-size plasma machine called C-2U. This machine heated hydrogen gas to over a million degrees Celsius and created two rings of plasma, which were slammed together at a speed of more than 960,000 kilometers per hour. Powerful magnets compressed the combined plasma rings, with the goal of fusing the hydrogen and producing energy. The challenge for TAE, as for all other companies trying to build commercial fusion reactors, was how to heat, contain, and control the plasma long enough to achieve real energy output, without damaging its machine.

A nice side benefit from our multiyear collaboration with TAE was that people within the company—engineers and executives—became knowledgeable about fusion. And that resulted in Alphabet investing in two fusion companies in 2021, TAE and Commonwealth Fusion Systems. By then, my colleagues at Google DeepMind were also using deep reinforcement learning for plasma control within tokamak fusion reactors.

Read more at IEEE Spectrum

How DeepMind is Reinventing the Robot

📅 Date:

✍️ Author: Tom Chivers

🔖 Topics: Robotics, Artificial Intelligence, Robotic Arm, Computer Vision

🏢 Organizations: DeepMind


For training a robot, though, such huge data sets are unavailable. “This is a problem,” notes Hadsell. You can simulate thousands of games of Go in a few minutes, run in parallel on hundreds of CPUs. But if it takes 3 seconds for a robot to pick up a cup, then you can only do it 20 times per minute per robot. What’s more, if your image-recognition system gets the first million images wrong, it might not matter much. But if your bipedal robot falls over the first 1,000 times it tries to walk, then you’ll have a badly dented robot, if not worse.
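The data-rate gap is easy to put in numbers using the figures quoted above; the simulation throughput below is an illustrative assumption, not a measured benchmark.

```python
# Back-of-the-envelope data-rate comparison, using the figures quoted above.
SECONDS_PER_GRASP = 3                               # one real-robot cup pickup
attempts_per_minute = 60 / SECONDS_PER_GRASP        # 20 per robot
attempts_per_day = attempts_per_minute * 60 * 24    # ~28,800 per robot, running nonstop

# A simulator parallelized across hundreds of CPUs; the per-minute figure is
# an illustrative assumption, not a benchmark.
sim_episodes_per_minute = 1_000
sim_episodes_per_day = sim_episodes_per_minute * 60 * 24

print(f"real robot: {attempts_per_day:,.0f} attempts/day")
print(f"simulation: {sim_episodes_per_day:,.0f} episodes/day")
print(f"simulation collects roughly {sim_episodes_per_day / attempts_per_day:.0f}x more experience")
```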

There are more profound problems. The one that Hadsell is most interested in is that of catastrophic forgetting: When an AI learns a new task, it has an unfortunate tendency to forget all the old ones. “One of our classic examples was training an agent to play Pong,” says Hadsell. You could get it playing so that it would win every game against the computer 20 to zero, she says; but if you perturb the weights just a little bit, such as by training it on Breakout or Pac-Man, “then the performance will—boop!—go off a cliff.” Suddenly it will lose 20 to zero every time.
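The Pong-to-Breakout example involves full reinforcement-learning agents, but the same collapse can be reproduced in miniature with a small supervised network: train it on one synthetic task, then on a second, and performance on the first typically falls off sharply. A minimal sketch in PyTorch (the data and names are synthetic illustrations, not DeepMind's setup):

```python
import torch
import torch.nn as nn

torch.manual_seed(0)

def make_task(projection):
    """Synthetic binary task: the label is the sign of a fixed random projection."""
    x = torch.randn(2000, 20)
    y = (x @ projection > 0).float().unsqueeze(1)
    return x, y

task_a = make_task(torch.randn(20))
task_b = make_task(torch.randn(20))  # a different projection, i.e. a different task

net = nn.Sequential(nn.Linear(20, 64), nn.ReLU(), nn.Linear(64, 1))
loss_fn = nn.BCEWithLogitsLoss()

def train(task, epochs=200):
    opt = torch.optim.Adam(net.parameters(), lr=1e-3)
    x, y = task
    for _ in range(epochs):
        opt.zero_grad()
        loss_fn(net(x), y).backward()
        opt.step()

def accuracy(task):
    x, y = task
    with torch.no_grad():
        return ((net(x) > 0).float() == y).float().mean().item()

train(task_a)
print("task A accuracy after training on A:", accuracy(task_a))  # high
train(task_b)  # sequential training, no replay of task A
print("task A accuracy after training on B:", accuracy(task_a))  # typically collapses
print("task B accuracy after training on B:", accuracy(task_b))
```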

There are ways around the problem. An obvious one is to simply silo off each skill. Train your neural network on one task, save its weights to storage, then train it on a new task, saving those weights elsewhere. Then the system need only recognize the type of challenge at the outset and apply the proper set of weights.
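A minimal sketch of that siloing strategy, assuming a PyTorch model and leaving the task-recognition step out of scope (`recognize_task` and the task names are hypothetical):

```python
import copy
import torch.nn as nn

net = nn.Sequential(nn.Linear(20, 64), nn.ReLU(), nn.Linear(64, 1))
weight_store = {}  # one saved set of weights per skill

def save_skill(name):
    # Deep-copy so later training on other tasks cannot overwrite the stored weights.
    weight_store[name] = copy.deepcopy(net.state_dict())

def load_skill(name):
    net.load_state_dict(weight_store[name])

# ...train net on cup grasping, then:
save_skill("pick_up_cup")
# ...train net on a second task, then:
save_skill("open_drawer")

# At run time the system must first recognize which challenge it is facing,
# e.g. task = recognize_task(observation)  # hypothetical classifier
# and only then swap in the matching weights:
load_skill("pick_up_cup")
```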

But that strategy is limited. For one thing, it’s not scalable. If you want to build a robot capable of accomplishing many tasks in a broad range of environments, you’d have to train it on every single one of them. And if the environment is unstructured, you won’t even know ahead of time what some of those tasks will be. Another problem is that this strategy doesn’t let the robot transfer the skills that it acquired solving task A over to task B. Such an ability to transfer knowledge is an important hallmark of human learning.

Hadsell’s preferred approach is something called “elastic weight consolidation.” The gist is that, after learning a task, a neural network will assess which of the synapselike connections between the neuronlike nodes are the most important to that task, and it will partially freeze their weights.
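A hedged sketch of that idea, following the standard elastic weight consolidation recipe: estimate a diagonal importance (Fisher) term for each parameter on the old task, then add a quadratic penalty that anchors the important weights while training on the new task. It reuses the toy `net`, `task_a`, and `loss_fn` names from the forgetting sketch above and is an illustration, not DeepMind's implementation.

```python
import torch

def fisher_diagonal(net, task, loss_fn, n_samples=1000):
    """Per-parameter importance: mean squared gradient of the old-task loss."""
    fisher = {name: torch.zeros_like(p) for name, p in net.named_parameters()}
    x, y = task
    count = min(n_samples, len(x))
    for i in range(count):
        net.zero_grad()
        loss_fn(net(x[i:i + 1]), y[i:i + 1]).backward()
        for name, p in net.named_parameters():
            fisher[name] += p.grad.detach() ** 2
    return {name: f / count for name, f in fisher.items()}

def ewc_penalty(net, fisher, anchor, lam=1000.0):
    """Quadratic pull toward the old-task weights, scaled by their importance."""
    penalty = 0.0
    for name, p in net.named_parameters():
        penalty = penalty + (fisher[name] * (p - anchor[name]) ** 2).sum()
    return 0.5 * lam * penalty

# After training on task A:
# fisher = fisher_diagonal(net, task_a, loss_fn)
# anchor = {name: p.detach().clone() for name, p in net.named_parameters()}
# Then, while training on task B, minimize
#   loss_fn(net(x_b), y_b) + ewc_penalty(net, fisher, anchor)
# so that weights deemed important for task A are only partially free to move.
```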

Read more at IEEE Spectrum