Over the past month or so, I’ve been keeping my eye on the news about hurricanes Harvey, Irma, Jose, and Maria. It’s hard not to get caught up in all the dramatic scenes, the rescues, and the incredible amount of information that comes pouring in. While the forecasts and models remain somewhat imprecise – think of the predictions of which areas in Florida would be hardest hit, and what the storm surges would be – they have certainly improved over the years. Today, even if the information doesn’t have pinpoint accuracy, people are no longer caught unawares by a major storm.
What’s most interesting to me in all this is, of course, the underlying science and technology of weather forecasting, and ZDNet recently had a good article on it.
The article featured atmospheric scientist Scott Capps.
Weather models are at the heart of what Capps does. They are used both for forecasting and to recreate historical data. Increasingly, however, over the last decade machine learning (ML) has come to be applied in atmospheric science.
“Machine learning takes weather data and builds relationships between that data and whatever predictors we are interested in,” Capps explains. ML in atmospheric science is in its infancy, but it’s seeing exponential growth – and in that respect the field is no different from anywhere else, as ML is being adopted pretty much everywhere these days.
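To make “building relationships between data and predictors” concrete, here is a minimal sketch – not Capps’s actual pipeline, just a toy illustration on synthetic data – of the simplest such relationship builder, an ordinary least-squares fit relating two made-up weather features to a predictand:

```python
import numpy as np

# Toy illustration (NOT an actual atmospheric-science pipeline):
# learn a linear relationship between synthetic weather features
# and a quantity we want to predict.
rng = np.random.default_rng(42)

# Hypothetical features: pressure anomaly and humidity anomaly (standardized)
X = rng.normal(size=(200, 2))
true_weights = np.array([3.0, -1.5])

# Predictand: e.g. a wind-speed anomaly, linear in the features plus noise
y = X @ true_weights + rng.normal(scale=0.1, size=200)

# Ordinary least squares with an intercept column
X1 = np.column_stack([X, np.ones(len(X))])
weights, *_ = np.linalg.lstsq(X1, y, rcond=None)

# The fit recovers the underlying relationship from the data alone --
# which is the point: it finds *that* the relationship holds, not *why*.
print(np.round(weights[:2], 1))
```

That last comment is the crux of Capps’s caveat below: the fitted weights describe the relationship without explaining the physics behind it.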
The problem is, ML won’t necessarily tell you why. As a scientist, Capps is always interested in knowing the whys – not just to satisfy his curiosity, but for very pragmatic reasons as well. Knowing the whys would allow improving physically grounded models as well as providing clearer explanations. (Source: ZDNet)
Those physically grounded models use satellite imagery and sensor data – primarily satellite imagery. Sensor data is “mostly used when doing predictions at a local, granular level to ground truth weather models, when using reliable equipment.”
However reliable the equipment and the forecasts are, weather systems are incredibly complex, and there’s still a lot of uncertainty around predicting the weather. That’s why we heard a lot about the “cone of uncertainty” around those hurricane forecasts.
Another interesting write-up on weather forecasting was on Motherboard.
Among the other topics hit on was the difference in accuracy between the US Global Forecast System and the European Center for Medium-Range Weather Forecasts. When they’re tracking storms and projecting their trajectories, the meteorologists typically show both models.
These models take environmental data collected by remote sensors in the ocean, hurricane-tracking aircraft, and satellites as input for algorithms that create simulations of weather patterns on a global scale. (Source: Motherboard)
Five years ago, it was the European system that did a far better job predicting Hurricane Sandy. So, in Sandy’s wake, Congress decided to invest in some “serious computing power,” authorizing
… spending $48 million to improve weather forecasting to make sure that such a deadly and expensive miscalculation like Sandy didn’t happen again. Of this, $25 million was devoted to a massive $44 million upgrade of NOAA’s computing architecture that was completed in January of 2016.
This upgrade—which added the supercomputers Luna and Surge to NOAA’s weather modeling arsenal in Virginia and Florida, respectively—represented a nearly tenfold increase in NOAA’s computing power to 5.78 petaflops, or 5,780 million million operations per second.
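The “5,780 million million operations per second” figure checks out against the petaflops number, since a petaflop is 10¹⁵ floating-point operations per second and a “million million” is 10¹²:

```python
# Sanity-check the quoted arithmetic: 5.78 petaflops expressed in
# "million million" (10^12) operations per second.
petaflop = 10**15          # floating-point operations per second
million_million = 10**12

total_ops = 5.78 * petaflop
in_million_millions = total_ops / million_million

print(round(in_million_millions))  # 5780, matching the article
```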
To make all this power worthwhile, NOAA’s adding a “revolutionary” modeling algorithm.
…the finite-volume cubed-sphere dynamical core, or FV3. This algorithm, which NOAA approved last year as the replacement for the core at the heart of the current Global Forecast System, allows for unprecedentedly high-resolution simulations and far more localized forecasts, all while generating a global forecast four times a day.
It does this by essentially creating a 3D grid around the Earth and partitioning the atmosphere into smaller boxes, simulating the atmospheric conditions in each one, and then integrating the conditions from these boxes to give a comprehensive report of global weather patterns. Moreover, these boxes can be nested in one another, meaning that in future iterations of FV3, researchers will effectively be able to zoom in on local weather events and get accurate forecasts.
Fascinating stuff, all this. And as it improves, it will certainly save lives. Even if hurricane forecasting becomes 100% accurate, I still have to say that I’m just as happy to be watching hurricane season from afar. They’re not something we typically have to deal with in Upstate NY. (Just don’t get me going on lake effect snow…)