Picture it - winter, somewhere in the northeastern US, two to three years from now. The National Weather Service is predicting a ‘historic’, ‘crippling’ blizzard: snow accumulations of more than 18”, and coastal storm surges that could top 4 feet, destroying homes and blocking evacuation routes.
Everyone sees all these warnings and thinks, “Yeah, right - just like that ‘crippling’ storm we had in January 2015, the one that dropped all of 8” of snow in NYC - the one the National Weather Service actually apologized for.” Public officials, still stinging from accusations of overreacting with transit shutdowns and travel bans in 2015, decide to keep the roads open and mass transit running. Residents in coastal communities decide against making storm preparations.
And then the unthinkable happens - the storm exceeds the forecasts. Snow accumulations top two feet. Storm surges crest at 6 feet, and high tide brings massive flooding from Atlantic City to Cape Cod. Thousands of people are stuck on stalled subways and trains for hours without power, in temperatures far below freezing. Others are trapped in their homes without power or adequate supplies. First responders are overwhelmed and can’t reach all those impacted by the storm, and the death toll climbs to over 50.
This is the devastating impact of bad weather forecasting - when a storm warning doesn’t pan out, people are less likely to heed the next one. We already see this in Gulf Coast communities that are repeatedly warned of impending hurricanes.
But it doesn’t have to be this bad. We have the technology, today, to make weather forecasting phenomenally better. There are two main avenues to better forecasting: better sensing data and added compute power for modeling.
Sensing data is the bedrock of forecasting: it supplies the initial conditions for forecast models, and because those models are acutely sensitive to their starting state, the more data you have on initial conditions, the better your forecast will be. But the state of the art in sensing is still ridiculously close to what we did 30 years ago. Worldwide, about 800 weather balloons are launched each day - meaning each balloon is responsible for covering about 200,000 square miles of ground area.
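As a quick sanity check on that coverage figure, here’s a back-of-envelope sketch. The Earth-surface figure and the per-balloon arithmetic are my assumptions, not numbers from any weather agency, and the exact answer depends on whether you count total surface or land area:

```python
# Back-of-envelope: how much area each daily balloon launch must "cover".
# Assumptions (not from the article): Earth's total surface area is about
# 196.9 million square miles; the launch count is the ~800/day cited above.
EARTH_SURFACE_SQ_MI = 196.9e6
BALLOON_LAUNCHES_PER_DAY = 800

area_per_balloon = EARTH_SURFACE_SQ_MI / BALLOON_LAUNCHES_PER_DAY
print(f"~{area_per_balloon:,.0f} square miles per balloon")  # ~246,125
```

However you slice it, each balloon stands in for an area bigger than most US states - which is the point: the observation network is astonishingly sparse.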
If we really want great data on initial conditions, we need to make massive investments in weather-sensing drones - sensing platforms that can be deployed on demand and piloted into storm systems for fine-grained data on pressure and temperature. And we need to replace the costly system of oceanic weather buoys with autonomously deployed sensing platforms, like those being developed by Liquid Robotics. Within five years, we could increase our sensing capacity by 10x simply through the cost reductions possible with autonomous deployment.
Once you have all this data, what are you going to do with it? NOAA’s current approach is to make massive investments in single-purpose computing platforms. Earlier this month, the agency announced a $45 million investment aimed at bringing its compute capacity for modeling up to 5 petaflops. It’s worth examining whether single-purpose computing is still the right approach in a world where massive numbers of cores are available on demand.
On a sunny July day, 5 petaflops is almost certainly far more power than you need to produce the forecast “sunny and mid-80s today”. But in the face of extreme weather, scaling up to twice that capacity could enable much more accurate forecasting - why not lease 600,000 cores from Google or Amazon for a day and run the severe weather model at 10 petaflops?
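To see where a number like 600,000 comes from, here’s a rough, hedged bit of arithmetic. The per-core throughput is my assumption - a ballpark for mid-2010s server CPU cores, not a figure from NOAA or the cloud providers:

```python
# Rough sanity check on bursting a severe-weather model to the cloud.
# Assumption (not from the article): each leased core sustains roughly
# 17 gigaflops, a ballpark figure for mid-2010s server CPU cores.
TARGET_PETAFLOPS = 10
ASSUMED_GFLOPS_PER_CORE = 17

# 1 petaflop = 1e6 gigaflops
cores_needed = TARGET_PETAFLOPS * 1e6 / ASSUMED_GFLOPS_PER_CORE
print(f"~{cores_needed:,.0f} cores")  # ~588,235 - roughly the 600,000 cited above
```

This is peak-throughput arithmetic only; production weather models are tightly coupled and communication-bound, so sustained performance on loosely connected cloud cores would be lower - treat the core count as a floor, not a guarantee.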
The bottom line: ending bad severe-weather forecasts is within our technical capacity in the next 5-10 years, but it is going to require some radical new approaches to sensing and computing.