Weird History

The Actual Reasons Meteorologists Get The Weather Wrong So Often

August 9, 2021

The last thing anyone wants is to haul around an umbrella and rain boots on a beautiful sunny day, or to be caught completely unprepared for a blizzard. "Dammit!" you might think to yourself, "that weather forecaster should get fired - they're always wrong." Or: "My weather app told me it wasn't going to rain today, so why are my feet soaking wet?"

It's human nature to point a finger at someone or something when things don't go as expected. And in the case of predicting the weather, it's often the meteorologist who bears the brunt of the blame.

So why do forecasters seem to get the weather wrong so often? Well, truthfully, they don't. At least when it comes to short-term forecasts, the predictions are correct far more often than not. But when they do get it wrong, there are many reasons, including how much data is collected, the ways in which it's collected and processed, computer error, and the simple fact that Mother Nature is chaotic and really, really hard to predict.

Below are some of the reasons forecasts miss - and why meteorologists get blamed when the weather doesn't do what it was predicted to do.

  • Five- And Seven-Day Forecasts Are Actually Accurate More Than 80% Of The Time

    It can be easy to joke about meteorologists and wonder how they keep their jobs when they seem to get their predictions wrong so often.

    But studies have shown that - at least when it comes to short-term predictions - they actually get it right the majority of the time. A five-day forecast will be accurate about 90% of the time, while a seven-day forecast will have around 80% accuracy.

  • Longer-Term Forecasts Are More Difficult To Get Right

    While short-term forecasts are fairly reliable, accuracy drops significantly over longer periods: 10-day (and longer) forecasts tend to be only about 50% accurate.

    This is because the computer programs (called weather models) that calculate the forecasts don't have data from the future, so they have to rely on assumptions and estimates to make the predictions. The atmosphere is constantly changing, so these estimates become less reliable the further into the future one projects.

  • Meteorologists Follow Two Major Models That Predict Differently

    While several different models try to predict the weather, the two major ones used by most meteorologists are the American and European models.

    The consensus is that the European model is more accurate. In March 1993, it correctly forecast the path and intensity of "The Storm of the Century," a major weather event along the eastern US coast. Because the storm was predicted five days in advance, officials were able to prepare and declare states of emergency.

    Nineteen years later, the European model correctly predicted Hurricane Sandy's unusual westward turn seven days before it made landfall, while the American model incorrectly predicted that Sandy would take an eastern path, out to sea.

    The European model is thought to surpass the American model for several reasons: (1) the European model runs on a more powerful supercomputer; (2) it uses a more sophisticated mathematical method for estimating the initial conditions of the atmosphere; and (3) it was developed at an institute whose singular focus is medium-range weather prediction.

    The medium-range American model, meanwhile, is part of a suite of several other models, including some short-range prediction systems that run as frequently as every hour; the American model isn't as singularly focused as its European counterpart.

    When the American and European models differ in their predictions, forecasters often have to choose one over the other. Choosing wrong means a wrong forecast, which in turn can result in disaster if the area is unprepared for a major weather event.

  • The Atmosphere Is Chaotic And Random, Which Makes It Difficult To Predict

    Scientists consider meteorology and the prediction of weather and climate a prime example of a "chaotic system" - one that is sensitive to its initial conditions but follows mathematical laws, even though its behavior may appear random.

    In the late 1950s and early '60s, Edward Lorenz stumbled onto the idea of a chaotic system during a study that modeled the weather with a dozen differential equations. On one occasion he restarted the program from the middle of a previous run, entering the stored values rounded to three decimal places rather than the usual six. Lorenz expected a close approximation of his earlier results; instead, the new run turned out quite different. (A short simulation sketch below reproduces the effect.)

    In his 1963 paper "Deterministic Nonperiodic Flow" (considered the start of chaos theory), the scientist concluded that a small change in the initial conditions can drastically change the long-term behavior of a meteorological system. The phenomenon later became known as “the butterfly effect.”

    Based on his results, Lorenz stated that it is impossible to predict the weather accurately. In the years since his study, of course, supercomputers and other technological advances have turned that impossibility into a possibility - at least for short- and medium-range forecasts.
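
    To see the butterfly effect in action, here is a minimal Python sketch of Lorenz's rounding experiment. It uses the simple three-variable system from the 1963 paper rather than the original 12-equation weather model, and the parameter values, step size, and starting point below are illustrative choices, not figures from Lorenz's study. One run starts from a "full precision" state; the other starts from the same state rounded to three decimal places.

        def lorenz(state, sigma=10.0, rho=28.0, beta=8.0 / 3.0):
            # Right-hand side of the Lorenz system: (dx/dt, dy/dt, dz/dt).
            x, y, z = state
            return (sigma * (y - x), x * (rho - z) - y, x * y - beta * z)

        def rk4_step(state, dt=0.01):
            # Advance one time step with the classical fourth-order Runge-Kutta method.
            k1 = lorenz(state)
            k2 = lorenz([s + 0.5 * dt * k for s, k in zip(state, k1)])
            k3 = lorenz([s + 0.5 * dt * k for s, k in zip(state, k2)])
            k4 = lorenz([s + dt * k for s, k in zip(state, k3)])
            return [s + dt / 6.0 * (a + 2 * b + 2 * c + d)
                    for s, a, b, c, d in zip(state, k1, k2, k3, k4)]

        # Two runs: a "full precision" start, and the same start rounded to three
        # decimal places - mimicking Lorenz restarting from truncated printout values.
        full = [1.000001, 1.000001, 1.000001]
        rounded = [round(v, 3) for v in full]

        dt = 0.01
        for n in range(1, 3001):
            full = rk4_step(full, dt)
            rounded = rk4_step(rounded, dt)
            if n % 500 == 0:
                gap = sum((a - b) ** 2 for a, b in zip(full, rounded)) ** 0.5
                print(f"t = {n * dt:5.1f}   separation = {gap:.6f}")

    Although the two starting points differ by only about a millionth, the printed separation grows by orders of magnitude until the two simulated trajectories have nothing in common - exactly the kind of sensitivity Lorenz observed.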