The levees of the Red River in Grand Forks, North Dakota, are built to withstand 51-ft water levels. In 1997, the National Weather Service predicted a flood, but despite a 35% margin of error for previous estimates, it emphasized that the river would crest at 49 ft at most. When the waters rose to 54 ft, wreaking havoc on the area, local inhabitants were shocked and angry. Why had forecasters projected such confidence in their prediction? According to Nate Silver, who describes the incident in The Signal and the Noise, “The forecasters later told researchers that they were afraid the public might lose confidence in the forecast if they had conveyed any uncertainty in the outlook.”1
In hindsight, it's easy to criticize the forecasters. Not only were they wrong, but their unwillingness to admit to uncertainty had grave consequences. Silver suggests that the flood was largely preventable: sandbags could have augmented the levees, and water could have been diverted from populated areas. Looking back, it's hard to see any downside to admitting that the prediction could be off by 9 ft either way. But forecasters faced a trade-off: communicating uncertainty often undermines perceived expertise, but if you don't communicate uncertainty and end up being wrong, you risk losing even more credibility. Management of the Ebola “crisis” in the United States has crystallized this dilemma.
The complete article is "Communicating Uncertainty — Ebola, Public Health, and the Scientific Process," by Lisa Rosenbaum, M.D., N Engl J Med 2015;372:7-9, January 1, 2015. DOI: 10.1056/NEJMp1413816