The following post was submitted by Elisa (EJ) Sobo, Professor of Anthropology, San Diego State University.
The New York Times recently featured an op-ed piece titled ‘Academics seek a big splash.’ In it, Noam Scheiber assesses recent changes in how scholars relate to the media. Concurrently, Huffington Post published ‘An anthropological approach to California’s vaccination problem,’ which concerned a forthcoming peer-reviewed anthropological article of mine regarding vaccine refusal. The essay, and news of it, spread quickly over the Web.
As Scheiber notes in the ‘big splash’ piece, although academics “once regarded the ability to attract attention with suspicion,” we “increasingly reward it.” Our newfound interest in cultivating mass publicity stems in part from the fact that funding agencies like to see the work they sponsor in the news. This “has led to a new model of disseminating social science research through the media.” When journalists cover academic work, academics in turn publicize their publicity via social media, university websites, and the like.
It’s a win-win situation, right? Not always.
The coverage my vaccination project received celebrated anthropology’s applied potential, which is wonderful. The original report contained some factually accurate statements about the research. So far so good. However, it also included a number of errors—which multiplied and deepened as reviews drawing on the original coverage spread (details regarding the debacle are forthcoming in the newsletter of the Society for Medical Anthropology).
As Scheiber observes, “Many journalists are not equipped to distinguish good science from shoddy science. That is a particular risk when the work does not wend its way through the usual academic channels before entering the news media’s consciousness.” Beyond this, journalists may also be pushed toward sensationalism. They may not be equipped to set aside their biases. They may not be allowed the space it takes to explain things fully and clearly to lay readers. Accuracy may be compromised by deadline pressures or infrastructural conditions that limit fact checking. In these ways, even reporting on research that has been through the peer review process—as mine had—can be problematic.
In my case, a quick content analysis of reader comments was illuminating. Many rebutted assertions misattributed to me by the reporter without questioning whether he had summarized my work accurately or taken anything out of context.
In fairness, access to academic articles can be hard to come by. So can the kind of background needed to read and interpret academic writing. However, not one commenter voiced an intention to read my journal article. Many seemed content to dismiss or argue with “Sabo’s” [sic] research on the basis of what the reporter told them it encompassed.
Readers are right to think critically about reported findings. Many past expert safety assurances (e.g., regarding DDT, asbestos, cigarettes, frequent antibiotic use) have been upended. Furthermore, funding bias has been shown, scientifically, to be a problem in some research trials. However, and in addition to the problem of training that Scheiber highlighted, reporters also filter things, even if unconsciously.
Further, reporters do not always bear in mind the receiving context. As Dan Kahan warned in 2013 in Science, “inattention to the quality of the science communication environment” can contribute to misunderstanding. Reporting that polarizes—even unintentionally—is beyond unhelpful: it is dangerous. It feeds the destructive side-taking that most medical anthropology works against. It crushes attempts to build bridges.
Given the need for constructive dialog, how can we ensure that the media splashes our research may make are the right kinds of splashes? To start, improved scientific and information literacy training in school is necessary. Such training must not, however, suggest to learners that basic scientific and information literacy equal expertise.
Making peer-reviewed research articles available to all also is crucial. If we do not find ways to support open access, we cannot complain when members of the lay public rely on secondary sources for information.
More immediately, as Kahan’s research suggests, we must exert better control over how journalists present our work to lay audiences. Writing for the public ourselves is one way to do this. We also can use the science of science communication in drafting editorials and so on, and we can seek feedback on our messaging, for instance by carefully considering rather than arrogantly dismissing reader comments.
The fact that the public is interested in our work should give us hope as we seek solutions to the problem of dissemination. Making media splashes may be part of the new academic job description, but that does not mean we should neglect our higher aspirations.