Friday 3 April 2020

CORONAVIRUS LESSONS FROM THE ASTEROID THAT DIDN'T HIT EARTH

Benny Peiser & Andrew Montford, The Wall Street Journal, 2 April 2020
 
Scary projections based on faulty data can put policy makers under pressure to adopt draconian measures.

London: The coronavirus pandemic has dramatically demonstrated the limits of scientific modeling in predicting the future. The most consequential coronavirus model, produced by a team at Imperial College London, tipped the British government, which had until then pursued a cautious strategy, into precipitate action, culminating in the lockdown under which we are all currently laboring.
 
With the Imperial team projecting 250,000 to 510,000 deaths in the U.K. and social media aflame with demands that something be done, Prime Minister Boris Johnson had little option but to act.

But last week, a team from Oxford University put forward an alternative model of how the pandemic might play out, suggesting a much less frightening future and a speedy end to the current nightmare.

How is the government to know who is right? It is quite possible that both teams are wrong. Academic studies often suffer from a lack of quality control, as peer review is usually brief and cursory. In normal times this doesn’t matter much, but it’s different when studies find their way into the policy world. In the current emergency, it is vital to check that the epidemiological models have been correctly assembled and that they contain no inadvertent mistakes.

Several researchers have apparently asked to see Imperial’s calculations, but Prof. Neil Ferguson, the man leading the team, has said that the computer code is 13 years old and thousands of lines of it “undocumented,” making it hard for anyone to work with, let alone take it apart to identify potential errors. He has promised that it will be published in a week or so, but in the meantime reasonable people might wonder whether something made with 13-year-old, undocumented computer code should be used to justify shutting down the economy. Meanwhile, the authors of the Oxford model have promised that their code will be published “as soon as possible.”
 
It isn’t only the U.K. that’s plagued by inscrutable models that describe very different futures. It’s a problem that governments around the world now face. Is there anything that can be done to make the predictions put in front of policy makers more reliable?
 
Reforming peer review isn’t the answer, because there are simply too few people around with the expertise and time to do comprehensive reviews. It would be much simpler to require publicly funded academics to publish their data and code as a matter of course; the possibility of competing teams checking their work might encourage development of the quality-control culture that seems lacking within the academy. It would also mean that in a crisis, when traditional academic peer review would move too slowly to be useful, a crowdsourced review process could take place.
 
In this way, the combined intellects of experts among the general public could be brought to bear on the problem, rapidly identifying errors and challenging assumptions. This sort of crowdsourced review would provide the manpower to take apart the abstruse models that are all too common in many academic fields. The authors of the Imperial model have argued that they don’t have time to explain to people how to get their 13-year-old computer code running. But getting computer code running is usually a problem that can be solved in a day or two when you throw enough brain power at it.
 
Calculations aren’t the only problem. Only a few weeks into the pandemic, we don’t have enough data to feed into the models. In particular, information about how many people are infected but remain asymptomatic is highly tentative. This means that there are a huge number of mathematical models that might explain what has happened so far, each extrapolating a very different future. New data can change predictions considerably.
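
To see why, consider a toy sketch, emphatically not the Imperial or Oxford model: two simple SIR simulations, with made-up parameter values, that reproduce the same early confirmed-case curve yet assume different fractions of infections are ever detected. Both fit the data to hand; they imply very different epidemics.

    # Toy sketch only: two SIR runs that match the same early confirmed-case
    # counts but assume different detection rates. The population, R0,
    # infectious period and seed below are illustrative assumptions.

    N = 66_000_000                    # approximate U.K. population
    R0, infectious_days = 2.4, 7.0    # assumed transmission parameters
    beta, gamma = R0 / infectious_days, 1.0 / infectious_days

    def run_sir(detected_fraction, days=120, seed_confirmed=100.0):
        """Daily-step SIR; returns per-day cumulative (confirmed, true) infections."""
        # The same confirmed seed implies far more true infections when
        # detection is rare.
        infectious = seed_confirmed / detected_fraction
        susceptible = N - infectious
        confirmed, true_cases = [], []
        for _ in range(days):
            new_infections = beta * susceptible * infectious / N
            recoveries = gamma * infectious
            susceptible -= new_infections
            infectious += new_infections - recoveries
            cumulative = N - susceptible
            confirmed.append(detected_fraction * cumulative)
            true_cases.append(cumulative)
        return confirmed, true_cases

    conf_a, true_a = run_sir(detected_fraction=0.5)    # few hidden infections
    conf_b, true_b = run_sir(detected_fraction=0.01)   # mostly undetected

    for day in (7, 14, 21):  # early confirmed counts are nearly identical
        print(f"day {day:>2}: {conf_a[day]:>9,.0f} vs {conf_b[day]:>9,.0f} confirmed")
    # ...yet the implied true epidemics soon diverge enormously
    print(f"day 40 true infections: {true_a[40]:,.0f} vs {true_b[40]:,.0f}")

Only data that pins down the detected fraction, above all antibody surveys showing how many people have already been infected, can discriminate between such scenarios.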
 
Take an example from astronomy. On March 12, 1998, media around the world announced that a mile-wide asteroid was on a possible collision course with Earth in 2028. Only a day later, the global asteroid scare was over, as additional observational data showed it would miss by 600,000 miles. The initial calculations weren’t inaccurate given the data available, but that data was limited and the result hadn’t been properly scrutinized, making the announcement premature. A short delay while new information was collated was all it took to show that there was no risk at all.

After this scare, the international astronomical community agreed on a robust warning system based on the Torino Impact Hazard Scale, a tool for categorizing and communicating potential asteroid impact risks. Out of a scientific fiasco, a successful risk-communication tool was developed. It has since prevented many false alarms and taught the public to understand and live with the comparatively small risk of asteroid impacts. Covid-19 is no false alarm, but public health could benefit from a similar warning system, which would help governments and health officials communicate uncertainties and risks to the public.
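
To make the idea concrete, here is a toy lookup based on the Torino scale’s five color-coded bands; the wording paraphrases the published categories, and a pandemic analogue would of course need its own inputs, such as transmissibility and severity.

    # Toy illustration: the Torino scale compresses impact probability and
    # consequence into a single integer from 0 to 10. The wording below
    # paraphrases the published category descriptions.

    TORINO_CATEGORIES = [
        (range(0, 1),  "White: no hazard; effectively zero chance of collision"),
        (range(1, 2),  "Green: normal; a routine discovery of no public concern"),
        (range(2, 5),  "Yellow: merits attention by astronomers"),
        (range(5, 8),  "Orange: threatening; a close encounter posing a real risk"),
        (range(8, 11), "Red: certain collision"),
    ]

    def describe(level: int) -> str:
        for levels, description in TORINO_CATEGORIES:
            if level in levels:
                return description
        raise ValueError("Torino levels run from 0 to 10")

    print(describe(0))  # where the 1998 asteroid scare would have ended up
                        # once the follow-up observations were in

The point of such a scale is that a single, pre-agreed number with fixed public meanings replaces ad hoc headlines.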
 
When competing models are giving wildly different, and in some cases frightening, predictions, the pressure on governments to adopt a draconian approach can be overwhelming. But, as we are seeing, the costs of such measures are extraordinarily high. Nations cannot afford to lock down their economies every time a potentially devastating new virus emerges. Setting up an effective pandemic hazard scale would inform policy makers and the public, helping fend off media demands for “something to be done” until the right decisions can be made at the right time.
 
Messrs. Peiser and Montford are, respectively, director and deputy director of the Global Warming Policy Forum.
