Early in the morning on April 5, 2020, an article appeared on the website Medium with the title “Covid-19 had us all fooled, but now we might have finally found its secret.” The article claimed that the pathology of COVID-19 was completely different from what public health authorities, such as the World Health Organization, had previously described. According to the author, COVID-19 strips the body’s hemoglobin of iron, preventing red blood cells from delivering oxygen and damaging the lungs in the process. The article also claimed to explain why hydroxychloroquine, an experimental treatment often hyped by President Trump, should be effective.
The article was published under a pseudonym—libertymavenstock—but the associated account was linked to a Chicagoland man working in finance, with no medical expertise. (His father is a retired M.D., and in a follow-up note posted on a blog called “Small Dead Animals,” the author claimed that the original article was a collaboration between the two of them.) Although no source was cited, the claims were apparently based on a single scientific article that has not yet undergone peer review or been accepted for publication, along with “anecdotal evidence” scraped from social media.1
While Medium allows anyone to post on its site and does not attempt to fact-check content, this article remained up for less than 24 hours before it was removed for violating Medium’s COVID-19 content policy. Removing the article, though, has not stopped it from making a splash. The original text continues to circulate widely on social media, with users tweeting or sharing versions archived by the Wayback Machine and re-published by a right-wing blog. As of April 12, the article had been tweeted thousands of times.
There is a pandemic of misinformation about COVID-19 spreading on social media sites. Some of this misinformation takes well-understood forms: baseless rumors, intentional disinformation, and conspiracy theories. But much of it seems to have a different character. In recent months, claims with some scientific legitimacy have spread so far, so fast, that even if it later becomes clear they are false or unfounded, they cannot be laid to rest. Instead, they become information zombies, continuing to shamble on long after they should be dead.
It is not uncommon for media sources like Medium to retract articles or claims that turn out to be false or misleading. Neither are retractions limited to the popular press. In fact, they are common in the sciences, including the medical sciences. Every year, hundreds of papers are retracted, sometimes because of fraud, but more often due to genuine errors that invalidate study findings.2 (The blog Retraction Watch does an admirable job of tracking these.)
Reversing mistakes is a key part of the scientific process. Science proceeds in stops and starts. Given the inherent uncertainty in creating new knowledge, errors will be made, and have to be corrected. Even in cases where findings are not officially retracted, they are sometimes reversed—definitively shown to be false, and thus no longer valid pieces of scientific information.3
Researchers have found, however, that the process of retraction or reversal does not always work the way it should. Retracted papers are often cited long after problems are identified,4 sometimes at a rate comparable to that before retraction. And in the vast majority of these cases, the authors citing retracted findings treat them as valid.5 (It seems that many of these authors pull information directly from colleagues’ papers, and trust that it is current without actually checking.) Likewise, medical researchers have bemoaned the fact that reversals in practice sometimes move at a glacial pace, with doctors continuing to use contraindicated therapies even though better practices are available.6
For example, in 2010, the anesthesiologist Scott Reuben was convicted of health care fraud for fabricating data and publishing it without having performed the reported research. Twenty-one of Reuben’s articles were ultimately retracted. And yet, an investigation four years later found half of these articles were still consistently cited, and that only one-fourth of these citations mentioned that the original work was fraudulent.7 Given that Reuben’s work focused on the use of anesthetics, this failure of retraction is seriously disturbing.
But why don’t scientific retractions always work? At the heart of the matter lies the fact that information takes on a life of its own. Facts, beliefs, and ideas are transmitted socially, from person to person to person. This means that the originator of an idea soon loses control over it. In an age of instant reporting and social media, this can happen at lightning speed.
The first models of the social spread of information were actually epidemiological models, developed to track the spread of disease. (Yes, these are the very same models now being used to predict the spread of COVID-19.) These models treat individuals as nodes in a network and suppose that information (or disease) can propagate between connected nodes.
Recently, one of us, along with co-authors Travis LaCroix and Anders Geil, repurposed these models to think specifically about failures of retraction and reversal.8 A general feature of retracted information, understood broadly, is that it is less catchy than novel information in the following way. People tend to care about reversals or retractions only when they have already heard the original, false claim. And they tend to share retractions only when those around them are continuing to spread the false claim. This means that retractions actually depend on the spread of false information.
We built a contagion model where novel ideas and retractions can spread from person to person, but where retractions only “infect” those who have already heard something false. Across many versions of this model, we find that while a false belief spreads quickly and indiscriminately, its retraction can only follow in the path of its spread, and typically fails to reach many individuals. To quote a saying often attributed to Mark Twain, “A lie can travel halfway around the world while the truth is putting on its shoes.” In these cases it’s because the truth can’t go anywhere until the lie has gotten there first.
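The dynamic described above can be illustrated with a minimal simulation. The sketch below is not the model from the paper; it is a toy version, with hypothetical parameters (network size, edge probability, sharing probability), built only to show the key asymmetry: the false claim can pass to anyone, while the retraction can pass only to someone who has already heard the false claim.

```python
import random

random.seed(42)

# Hypothetical parameters, chosen for illustration only.
N = 200          # number of individuals (nodes)
P_EDGE = 0.04    # probability that any two nodes are connected
P_SHARE = 0.3    # chance a belief passes along an edge per round
ROUNDS = 30
RETRACTION_DELAY = 5  # rounds before the source issues a retraction

# Build a random (Erdos-Renyi-style) social network.
neighbors = {i: set() for i in range(N)}
for i in range(N):
    for j in range(i + 1, N):
        if random.random() < P_EDGE:
            neighbors[i].add(j)
            neighbors[j].add(i)

heard_false = {0}        # the false claim starts at a single node
heard_retraction = set()

for t in range(ROUNDS):
    if t == RETRACTION_DELAY:
        # The retraction starts at the original source, a few rounds later.
        heard_retraction.add(0)
    new_false, new_retraction = set(), set()
    for node in heard_false:
        for nb in neighbors[node]:
            # The false claim spreads indiscriminately.
            if random.random() < P_SHARE:
                new_false.add(nb)
    for node in heard_retraction:
        for nb in neighbors[node]:
            # Key asymmetry: the retraction only "infects" those
            # who have already heard the false claim.
            if nb in heard_false and random.random() < P_SHARE:
                new_retraction.add(nb)
    heard_false |= new_false
    heard_retraction |= new_retraction

print(f"heard the false claim: {len(heard_false)} of {N}")
print(f"heard the retraction:  {len(heard_retraction)} of {N}")
```

By construction, everyone who hears the retraction has already heard the false claim, so the retraction's reach can never exceed the falsehood's, and the head start compounds the gap.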
Another problem for retractions and reversals is that it can be embarrassing to admit one was wrong, especially where false claims can have life or death consequences. While scientists are expected to regularly update their views under normal circumstances, under the heat of media and political scrutiny during a pandemic they too may be less willing to publicize reversals of opinion.
The COVID-19 pandemic has changed lives around the world at a startling speed—and scientists have raced to keep up. Academic journals, accustomed to a comparatively glacial pace of operations, have faced a torrent of new papers to evaluate and process, threatening to overwhelm a peer-review system built largely on volunteer work and the honor system.9 Meanwhile, an army of journalists and amateur epidemiologists scour preprint archives and university press releases for any whiff of the next big development in our understanding of the virus. This has created a perfect storm for information zombies—and although it also means erroneous work is quickly scrutinized and refuted, this often makes little difference to how those ideas spread.
Many examples of COVID-19 information zombies look like standard cases of retraction in science, only on steroids. They originate with journal articles written by credentialed scientists that are later retracted, or withdrawn after being refuted by colleagues. For instance, in a now-retracted paper, a team of biologists based in New Delhi, India, suggested that the novel coronavirus shared some features with HIV and was likely engineered.10 It appeared on an online preprint archive, where scientists post articles before they have undergone peer review, on January 31; it was withdrawn only two days later, following intense critique of its methods and interpretation by scientists from around the world. Days later, a detailed analysis refuting the article was published in the peer-reviewed journal Emerging Microbes & Infections.11 But a month afterward, the retracted paper was still so widely discussed on social media and elsewhere that it had the highest Altmetric score—a measure of general engagement with scientific research—of any scientific article published in the previous eight years. Despite a thorough rejection of the research by the scientific community, the dead information keeps walking.
Other cases are more subtle. One major question with far-reaching implications for the future development of the pandemic is to what extent asymptomatic carriers are able to transmit the virus. The first article reporting on asymptomatic transmission was a letter published in the prestigious New England Journal of Medicine claiming that a traveler from China to Germany transmitted the disease to four Germans before her symptoms appeared.12 Within four days, Science reported that the article was flawed because the authors of the letter had not actually spoken with the Chinese traveler, and a follow-up phone call by public health authorities confirmed that she had had mild symptoms while visiting Germany after all.13 Even so, the article has subsequently been cited nearly 500 times, according to Google Scholar, and tweeted nearly 10,000 times, according to Altmetric.
Despite the follow-up reporting on this article’s questionable methods, the New England Journal of Medicine did not officially retract it. Instead, a week after publishing the letter, the journal added a supplemental appendix describing the progression of the patient’s symptoms while in Germany, leaving it to the reader to determine whether the patient’s mild early symptoms should truly count. Meanwhile, subsequent research14, 15 involving different cases has suggested that asymptomatic transmission may be possible after all—though as of April 13, the World Health Organization considers the risk of infection from asymptomatic carriers to be “very low.” It may turn out that transmission of the virus can occur before any symptoms appear, or while only mild symptoms are present, or even in patients who will never go on to present symptoms. Even untangling these questions is difficult, and the jury is still out on their answers. But the original basis for claims of confirmed asymptomatic transmission was invalid, and those sharing them are not typically aware of the fact.
Another widely discussed article, which claims that the antimalarial drug hydroxychloroquine and the antibiotic azithromycin, when administered together, are effective treatments for COVID-19, has drawn enormous attention to these particular treatments, fueled in part by President Trump.16 These claims, too, may or may not turn out to be true—but the article with which they apparently originated has since received a statement of concern from its publisher, noting that its methodology was problematic. Again, we have a claim that rests on shoddy footing, but which is spreading much farther than the objections can.17 And in the meantime, the increased demand for these medications has led to dangerous shortages for patients who have an established need for them.18
The fast-paced and highly uncertain nature of research on COVID-19 has also created the possibility of different kinds of information zombies, which follow a pattern similar to that of retracted or refuted articles but have different origins. There have been a number of widely discussed arguments to the effect that the true fatality rate associated with COVID-19 may be ten or even a hundred times lower than early estimates from the World Health Organization, which pegged the so-called “case fatality rate” (CFR)—the number of fatalities per detected case of COVID-19—at 3.4 percent.19-21
Some of these arguments have noted that the case fatality rate in certain countries with extensive testing, such as Iceland, Germany, and Norway, is substantially lower. References to the low CFR in these countries have continued to circulate on social media, even though the CFR in all of these locations has crept up over time. In the academic realm, John Ioannidis, a Stanford professor and epidemiologist, noted in an editorial, “The harms of exaggerated information and non‐evidence‐based measures,” published on March 19 in the European Journal of Clinical Investigation, that Germany’s CFR in early March was only 0.2 percent.21 But by mid-April it had climbed to 2.45 percent, far closer to the original WHO estimate. (Ioannidis has not updated the editorial to reflect the changing numbers.) Even Iceland, which has tested more extensively than any other nation, had a CFR of 0.47 percent on April 13, more than four times higher than a month earlier. None of this means that the WHO figure was correct—but it does mean some arguments that it is wildly incorrect must be revisited.
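The arithmetic behind the CFR is simple, and the simplicity is part of why early snapshots mislead: the figure depends entirely on which deaths and which detected cases have accumulated at the moment of measurement. The sketch below uses made-up numbers, not the actual German or Icelandic counts, purely to show how the same formula yields a climbing CFR as an outbreak progresses.

```python
def case_fatality_rate(deaths: int, detected_cases: int) -> float:
    """CFR: fatalities per detected case, expressed as a percentage."""
    return 100 * deaths / detected_cases

# Hypothetical early snapshot: many fresh cases, few deaths so far.
early = case_fatality_rate(deaths=10, detected_cases=5000)    # 0.2

# Hypothetical later snapshot: deaths from earlier cases accrue
# faster than new cases are detected.
later = case_fatality_rate(deaths=500, detected_cases=20000)  # 2.5

print(f"early CFR: {early:.2f}%, later CFR: {later:.2f}%")
```

Because deaths lag infections by weeks, a low CFR measured early in an outbreak can rise sharply even with no change in the virus itself, which is exactly the pattern the German and Icelandic figures in the text exhibit.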
What do we do about false claims that refuse to die? Especially when these claims have serious implications for decision-making in light of a global pandemic? To some degree, we have to accept that in a world with rapid information sharing on social media, information zombies will appear. Still, we must combat them. Science journals and science journalists rightly recognize that there is intense interest in COVID-19 and that the science is evolving rapidly. But that does not obviate the risks of spreading information that is not properly vetted or failing to emphasize when arguments depend on data that is very much in flux.
Wherever possible, media reporting on COVID-19 developments should be linked to authoritative sources of information that are updated as the information changes. The Oxford-based Centre for Evidence-Based Medicine maintains several pages that review the current evidence on rapidly evolving questions connected to COVID-19—including whether current data supports the use of hydroxychloroquine and the current best estimates for COVID-19 fatality rates. Authors and platforms seeking to keep the record straight should not just remove or revise now-false information, but should clearly state what has changed and why. Platforms such as Twitter should provide authors, especially scientists and members of the media, the ability to explain why Tweets that may be referenced elsewhere have been deleted. Scientific preprint archives should encourage authors to provide an overview of major changes when articles are revised.
And we should all become more active sharers of retraction. It may be embarrassing to shout one’s errors from the rooftops, but that is what scientists, journals, and responsible individuals must do to slay the information zombies haunting our social networks.
Cailin O’Connor and James Owen Weatherall are an associate professor and professor of logic and philosophy at the University of California, Irvine. They are coauthors of The Misinformation Age: How False Beliefs Spread.
Lead image: nazareno / Shutterstock
1. Liu, W. & Li, H. COVID-19 attacks the 1-beta chain of hemoglobin and captures the porphyrin to inhibit human heme metabolism. ChemRxiv (2020).
2. Wager, E. & Williams, P. Why and how do journals retract articles? An analysis of Medline retractions 1988-2008. Journal of Medical Ethics 37, 567-570 (2011).
3. Prasad, V., Gall, V., & Cifu, A. The frequency of medical reversal. Archives of Internal Medicine 171, 1675-1676 (2011).
4. Budd, J.M., Sievert, M., & Schultz, T.R. Phenomena of retraction: Reasons for retraction and citations to the publications. The Journal of the American Medical Association 280, 296-297 (1998).
5. Madlock-Brown, C.R. & Eichmann, D. The (lack of) impact of retraction on citation networks. Science and Engineering Ethics 21, 127-137 (2015).
6. Prasad, V. & Cifu, A. Medical reversal: Why we must raise the bar before adopting new technologies. Yale Journal of Biology and Medicine 84, 471-478 (2011).
7. Bornemann-Cimenti, H., Szilagyi, I.S., & Sandner-Kiesling, A. Perpetuation of retracted publications using the example of the Scott S. Reuben case: Incidences, reasons and possible improvements. Science and Engineering Ethics 22, 1063-1072 (2016).
8. LaCroix, T., Geil, A., & O’Connor, C. The dynamics of retraction in epistemic networks. Preprint. (2019).
9. Jarvis, C. Journals, peer reviewers cope with surge in COVID-19 publications. The Scientist (2020).
10. Pradhan, P., et al. Uncanny similarity of unique inserts in the 2019-nCoV spike protein to HIV-1 gp120 and Gag. bioRxiv (2020).
11. Xiao, C. HIV-1 did not contribute to the 2019-nCoV genome. Emerging Microbes & Infections 9, 378-381 (2020).
12. Rothe, C., et al. Transmission of 2019-nCoV infection from an asymptomatic contact in Germany. New England Journal of Medicine 382, 970-971 (2020).
13. Kupferschmidt, K. Study claiming new coronavirus can be transmitted by people without symptoms was flawed. Science (2020).
14. Hu, Z., et al. Clinical characteristics of 24 asymptomatic infections with COVID-19 screened among close contacts in Nanjing, China. Science China Life Sciences (2020). doi:10.1007/s11427-020-1661-4.
15. Bai, R., et al. Presumed asymptomatic carrier transmission of COVID-19. The Journal of the American Medical Association 323, 1406-1407 (2020).
16. Gautret, P., et al. Hydroxychloroquine and azithromycin as a treatment of COVID-19: results of an open-label non-randomized clinical trial. International Journal of Antimicrobial Agents (2020).
17. Ferner, R.E. & Aronson, J.K. Hydroxychloroquine for COVID-19: What do the clinical trials tell us? The Centre for Evidence-Based Medicine (2020).
18. The Arthritis Foundation. Hydroxychloroquine (Plaquenil) shortage causing concern. Arthritis.org (2020).
19. Oke, J. & Heneghan, C. Global COVID-19 case fatality rates. The Centre for Evidence-Based Medicine (2020).
20. Bendavid, E. & Bhattacharya, J. Is the coronavirus as deadly as they say? The Wall Street Journal (2020).
21. Ioannidis, J.P.A. Coronavirus disease 2019: The harms of exaggerated information and non-evidence-based measures. European Journal of Clinical Investigation 50, e13222 (2020).