Mistrust of Science

DANGEROUS MEDICINE in a Time of Vaccine Hesitancy

Originally published December 27, 2021.

A thirty-year program of experiments spanning World War II and the early Cold War in which scientists deliberately infected people with hepatitis. Ten groups of researchers from elite universities and laboratories conducting hundreds of virus transmission studies with support from the U.S. government. More than 3,700 people used as human subjects, all members of marginalized groups: conscientious objectors to the military draft, prison inmates, mental patients, and adults and children in institutions for the developmentally disabled. Four deaths from acute hepatitis during experiments and a larger number of subjects left as hepatitis carriers, at risk for cirrhosis and liver cancer decades later.

It’s a very troubling history. How can material of this kind be presented today without contributing to widespread mistrust of science? Here, author Sydney Halpern is in conversation about her book Dangerous Medicine and about public perceptions of biomedicine.   

 

Do you think your book will encourage distrust of biomedical science in general?

That’s certainly not my intention. But I do think the book invites conversations about why there is widespread mistrust of science today and what about the history of U.S. biomedicine allowed these and other morally questionable experiments to flourish not so long ago. It also invites conversation about what biomedical researchers can do to address public mistrust. 

 

Let’s start with the first of these: why do you think there’s so much distrust of science today?

It’s fueled in part by the currency of populist political sentiments. One feature of populism is suspicion of elites and the authority of elites. People with populist sentiments don’t want to defer to authorities who seem to them both disrespectful and threatening. Stories about vaccines for COVID-19 containing microchips encapsulate this anxiety. There’s an underlying fear that experts with power will use it to control people who feel they have no power. But that’s only part of the answer.

With biomedicine, there’s a long history of resistance to experiments on people. It stretches back to the first medical interventions designed in scientific laboratories. During the late nineteenth and early twentieth centuries, antivivisectionists objected to human experimentation—this movement is best known for protesting animal research, but human research was also a focus. These activists thought that scientific medicine would undermine physicians’ primary concern with patients and that doctors would put their commitment to science ahead of the people they were treating.  

Then in the late 1960s and 1970s, Americans began hearing revelations about research abuses—experiments where researchers recruited vulnerable subjects, exposed them to undue risks, in some cases deceived them, and failed to get voluntary and informed consent. The abuses included the infamous Tuskegee syphilis study, in which researchers from the U.S. Public Health Service withheld treatment from more than 400 African American men with tertiary syphilis. Other studies reported to have mistreated subjects also drew participants from marginalized groups. This history has generated a lot of mistrust.

Still, while accounts of research abuses continue to circulate, so do stories about experiments producing remarkable life-enhancing medical breakthroughs. Public narratives about biomedical experiments have often bounced between these two extremes.

 

What accounts for this polarity? 

There seems to be a human affinity for stories about heroes and villains—and about great accomplishments and terrible missteps or malfeasance. This certainly plays out in popularized stories about biomedicine. Sensational reports in the press—split between lionizing and derogating—cater to and amplify a predilection for drama. The wildly discordant accounts contribute to confusion and misunderstanding of science. 

 

What are the biggest public misconceptions of science?   

I think there are three major misconceptions. One is the notion of the stand-alone researcher. We hear stories about individual scientists, or pairs of collaborators, who make especially impactful contributions. Examples are Jonas Salk, who introduced the first licensed polio vaccine, or James Watson and Francis Crick, who are credited with discovering the structure of DNA. When scientific achievements are celebrated, it’s often through awards given to lone researchers or pairs of investigators. But these days, a lot of scientific studies are large-scale, multi-institutional projects. Even more to the point, science takes place in communities. Members of these communities share information and evaluate research evidence; it is their collective work that makes the achievements of individual scientists—or circumscribed groups of them—possible.

Another misunderstanding is that there is or should be a straight path to scientific certainty.  Science is a process; it’s provisional. Communities of researchers come up with provisional agreements about the nature of things based on a preponderance of evidence. Available evidence changes over time, particularly at the cutting edge of science. As a result, scientific truths are mutable. This has become starkly evident over the last two years as researchers scrambled to understand COVID-19. There’s been a lot of confusion about changing information about the SARS-CoV-2 virus and changing advice on how best to protect against infection. These shifts have occurred in large measure because new empirical evidence has emerged and because the population of viruses has evolved. While the changing advice can be frustrating, it actually reflects the fact that science is working. 

Finally, we shouldn’t consider science as independent from its social context. Scientists like to think of science as a world unto itself. But as a historian, I’m very aware that the social context powerfully shapes what research problems scientists pursue—as well as how they go about conducting their research.  

 

How did the social context shape the history of hepatitis research?

Hepatitis outbreaks were recurrent among soldiers during and after World War II. At the war’s end, investigators discovered that hepatitis B was contaminating the blood supply, a major threat to public health. Policy makers considered solutions to these problems crucial to national security. Researchers were unable to find an animal susceptible to hepatitis for use in transmission studies. Meanwhile, the prevailing social ethos valued both ends over means and sacrifices for the common good. These conditions emboldened medical researchers.

Scientists drew on available cultural imagery to build narratives justifying human experiments with hepatitis viruses. In their accounts, prisoners and conscientious objectors who served as subjects were making patriotic contributions to science and country. Researchers advanced a different rationale for enrolling psychiatric patients and people with developmental disabilities. With these populations, they argued that their experimental interventions were therapeutic; they also appealed to the ethos and management concerns of the physicians who oversaw custodial facilities for the impaired.

The social context doesn’t excuse what went on in America’s hepatitis program. But understanding the historical environment does help to explain what unfolded.

 

Did racial dynamics underlie the recruitment of subjects for the U.S. hepatitis program?

I found no evidence that hepatitis researchers targeted African Americans—or any other minority group—for recruitment, as researchers did in the Tuskegee syphilis experiment. However, the scientists certainly used populations of convenience in which persons of color were overrepresented. Information on subjects’ race is in patient records which, for the most part, are not available in the archival repositories where I gathered most of the material for the book. However, I did occasionally find a revealing document. Of the approximately 3,700 hepatitis subjects, somewhat over 50% were prison inmates. It was in this population that the recruitment of African Americans was most common. I estimate that, depending on the prison, between 25% and 45% of the inmates enrolled in hepatitis experiments were black. I found very little information about the use of other minority groups.

 

What motivated researchers to expose vulnerable subjects to grave risks? Should we consider them culpable?  And, if so, of what?

They were deeply committed to scientific medicine and believed their work would produce tools for disease prevention and treatment.  

Researchers can get carried away by big ideas. They become so wedded to realizing the potential of these ideas that they downplay the difficulties ahead of them. It can take decades to translate a powerful concept in biomedicine into safe and effective medical interventions. In the meantime, researchers can overestimate the value and safety of their experimental interventions. There are numerous examples of this in the history of medicine. Here’s just one of them. 

In 1882, Robert Koch announced he had isolated the bacterium that causes tuberculosis—then a major cause of disease and death. A year earlier, Louis Pasteur had shown that a vaccine developed in his laboratory from the recently isolated anthrax bacillus protected sheep against anthrax infection. It was a revolutionary idea that scientists could isolate and modify a disease pathogen and then use it to induce immunity. Koch’s discovery of the new microbe generated tremendous excitement that a vaccine to prevent tuberculosis in humans was in sight. This excitement extended from the research community to segments of the public at large. Three weeks after Koch’s announcement, the London Times ran an editorial saying that an immunizing agent for TB was “within our reach” and would likely be available at “a not distant date.” A similar piece appeared in the New York Tribune. But in fact, it was a full half-century before a workable vaccine for tuberculosis—called the BCG vaccine—was widely available. And in 1930, a poorly prepared batch of the immunizing agent used in Germany transmitted tuberculosis to more than 250 babies.

Hepatitis researchers saw outbreaks of the disease as an urgent problem and were convinced that experimental medicine would provide a speedy route to prevention. Enthralled by the potential of powerful ideas, they were impatient for results and shortsighted about risks. They were willing to push forward with experimental interventions that posed very significant dangers to participants and to do so using subjects from populations of convenience. I think the researchers’ most serious flaws were overconfidence and hubris. 

 

Did you make moral judgments about the hepatitis experiments when working on Dangerous Medicine?  And did you include those opinions in the book?

Some of my reactions are plainly evident in the book. But to a large extent, I held myself in check. One of my goals was to explain how researchers and their government sponsors built and sustained a moral framework that supported these and other problematic experiments. I thought that voicing my personal sentiments would muddy that account. Also, I didn’t want to get in the way of readers making their own moral judgments.

 

Are people still participating in hazardous medical research that offers them no benefit? Who are they, and how do scientists persuade them to enroll?

Studies exposing subjects to risks without possible benefit to them can yield useful scientific information. Researchers continue to seek participants for human infection studies with disease pathogens—for the most part, these are smaller in scale and less dangerous than the experiments I focus on in Dangerous Medicine. They also seek subjects for Phase 1 clinical trials, where medical interventions are in the earliest stage of human testing. Today, participants in these types of studies are not members of the vulnerable groups used in mid-century hepatitis experiments—current regulations prohibit it. Instead, research organizations now pay people to enroll, and the subjects are mostly individuals from the low end of the economic hierarchy. Alternatively, studies of this type are exported to developing countries. For nontherapeutic experiments conducted in the U.S., people who are income-insecure, often disproportionately people of color, have replaced members of identified vulnerable groups—including children and prisoners—as the mainstay of enrollees.

It’s very unfortunate that so many subjects in nontherapeutic studies are from the economic underclass. It exploits existing social inequalities. I believe that more can be done to encourage broader participation. One route would be to put greater effort into valorizing people who enroll in medical studies that have the potential to benefit others. This would mean acknowledging and supporting research participation as an important and consequential public service.

I also suggest that biomedical scientists seriously consider strategies for making research subjects feel they are partners in a common enterprise. This would involve, among other things, providing prospective and enrolled subjects—as well as the broader public—with as much information as possible about study logic, design, and goals without compromising study outcomes.  

 

What can and should medical scientists do to build public trust, particularly considering wrenching stories like the ones you tell in Dangerous Medicine?

The research community needs to be honest and open about mistakes from the past and transparent going forward, particularly about how they are weighing risks and about the characteristics of people they are recruiting as subjects.

I encourage researchers to take a sober look at how long it has often taken to translate big ideas into safe and effective interventions for patients. Familiarity with the many episodes in biomedicine’s history where human studies did not lead to clinical advances—and, in the meantime, injured subjects—might dampen overconfidence in experimental outcomes.

Finally, I’d like to see concrete steps taken in the policy arena. Investigators and their sponsors have a responsibility to provide care and redress for human subjects harmed in medical studies, whether those harms are immediate or delayed. Researchers consider a human study to be over when results have been tabulated and publications submitted. But it may not be over for the research subjects; some may be left with lingering and debilitating effects of experimental interventions. The U.S. does not have, and badly needs, a system for compensating research injuries, particularly where participants expected no medical benefit and were asked to contribute to the greater good. As of now, we leave these matters to lawsuits—a route that individual subjects are not equipped to pursue. Scientific leaders would go far in fostering public trust by being at the forefront of advocacy for a compensation system.
