It’s sometimes easy to forget, but when covid first engulfed the planet in 2020, vaccines weren’t even on the horizon. The typical development process for a new vaccine was something like a decade. The fastest the world had ever produced a vaccine was 4 years. If we were still operating on that timeline, we’d have another year or two to wait to get our first jab.
What happened instead was a scientific miracle that saved the lives of millions upon millions of people. By December of 2020 multiple vaccines had been demonstrated safe and effective in large-scale human trials. It was like a gift from God, except God had nothing to do with it. The research and expertise of a small number of people more or less saved the world.
Then we faced the problem of actually getting people to take the damn things. Even in relatively high-trust, scientifically-literate societies like Australia, that proved difficult, and a small number of people never got vaccinated at all. In low-trust societies like the United States, the vaccine-hesitant population was even harder to reach, and larger.
At the peak of this standoff, the whole gamut of advocacy groups and public health organisations mobilised to inform and educate their communities about the safety of the vaccine. This outreach was often targeted and tailored, designed to meet people where they are, as the organising ethos goes. It sought to overcome the politicisation of scientific literacy – a phenomenon in which heeding public health messaging became associated with liberal politics. Ignoring public health warnings became a badge of honour for the reactionary right.
But the hard work of many of those organisations was swamped by a different message that emerged in the public sphere. This emergent, collective message had no ear for reaching people most resistant to public health information. Instead, the public health message that dominated our societies was an exhortation to ‘trust the science’.
I think most thoughtful people could quickly see, or at least intuit, that ‘trust the science’ was a self-serving and myopic political tool rather than a sincere attempt at persuasion or trust-building. Appeals to authority, especially the finger-wagging type, are not well suited to building trust in that authority. They are more likely to trigger reaction – and fairly so. As a rhetorical device it’s all very circular. Why should I trust the science? they asked. Because you should trust the science, we answered. You’d dig your heels in, too, if your doubts were so readily dismissed.
One reason this message came to dominate public discourse, I think, is simply that it’s simple. Our discursive cultures are unable to metabolise or propagate anything more nuanced.
Another reason is that our discourse is heavily dominated by a certain type of liberal person – a person more invested in political culture war than effective public health messaging.
A third reason, closely associated with the second, is a sense of panic among liberals who have found, first with climate change, and later with medical science, that expertise itself has come to be distrusted by the right. This is a terrible and terrifying phenomenon in which conservatives seek to erode the epistemic authority of experts exactly because intellectual progress is not conservative. The right-wing assault on universities; the undermining of institutional media; the anti-intellectualism of conservatism – these are all strategies of a movement that views the development of knowledge as a threat. Liberals, apparently unable to mount a defensive strategy against these attacks, fall back upon feeble exhortations like ‘trust the science’ out of fear and desperation.
But the most worrying reason is that many people simply don’t know what science is. ‘Trust the science’ takes “science” to be a body of knowledge and dogmatically insists the listener accept that knowledge to be true. But science is not a body of knowledge, it’s a method of inquiry, and that method functions only when beliefs are not protected by dogma. It feels silly, very high school, to have to say this, but there you go. Leaning on people to believe some body of knowledge isn’t just ineffective – it’s bad science.
And things get really bad when the body of knowledge you want people to believe turns out to be wrong. It puts you at risk of delegitimising the entire scientific project in the eyes of the listener. Those who have been told to ‘trust the science’ won’t react well when “the science” turns out to be rubbish. And that happens all the time.
To see what I mean, consider three examples from just the last two weeks.
July 19 – no evidence for nudge theory
If you’ve seen The Big Short, you probably know that Richard Thaler won a Nobel Prize for his work in behavioural economics, and, in particular, his theory of “nudges”. A nudge, say Thaler and coauthor Cass Sunstein, is
any aspect of the choice architecture that alters people's behavior in a predictable way without forbidding any options or significantly changing their economic incentives.
The idea that we could improve people’s choices by reimagining their environment was very attractive to governments, and accordingly governmental “nudge units” have sprung up across the world to design policy with nudge theory in mind. Billions of dollars have been spent on associated policies.
But last week Maier et al. published a review of the literature titled ‘No evidence for nudging after adjusting for publication bias’. It found, uh, that there’s no evidence for nudging once you adjust for publication bias. Oops.
July 21 – foundational Alzheimer’s research may have been fabricated
In 2006 researchers published a piece in Nature that changed the game in Alzheimer’s research. ‘The study,’ says Jessica Glenza in The Guardian, ‘…proposed that a specific amyloid protein may be responsible for cognitive decline.’
That hypothesis took root and, for more than a decade, has led the field, directing billions of dollars in research funding towards understanding the role of this protein. Some began to hope that a promising treatment for Alzheimer’s might be around the corner. And yet, over 15 years later, such a treatment eludes us.
One reason may be that parts of the original study were manipulated and doctored to prop up research results. It’s unclear just how devastating this fraud will be. In the worst case scenario, a cure for Alzheimer’s may have been delayed by a decade or more. Ah, well.
July 22 – no evidence for the chemical imbalance theory of depression
For decades, psychiatry has been dominated by something called the chemical imbalance theory of depression. I’m sure you’re familiar with it. It says that people become depressed when they experience a deficiency in serotonin. SSRIs, or selective serotonin reuptake inhibitors, are said to work by correcting that imbalance.
If you believe something like this, you’re not alone. A 2013 study found that upwards of 80% of Australians believe it. And with good reason: it receives qualified support from the major textbooks, including New Oxford Textbook of Psychiatry and Kaplan & Sadock’s Comprehensive Textbook of Psychiatry, as well as the endorsement of a range of leading researchers. You may have even found your doctor saying as much to you: practitioners regularly report that they communicate this theory to their patients.
The only problem with the chemical imbalance theory of depression is that it appears to be entirely wrong. A new, exhaustive review in Molecular Psychiatry ought to put the theory to bed. After a massive survey of the literature, the authors are unequivocal: ‘there is no evidence’, they write, ‘of a connection between reduced serotonin levels or activity and depression.’ SSRIs remain moderately effective – we just don’t know why. Alas!
How bad is it?
At first glance it looks like July was a pretty devastating month for science. To sum up:
a Nobel Prize-winning centrepiece of behavioural economics is mostly rubbish;
we may have wasted billions of dollars on a fraudulent red herring in Alzheimer’s research; and
everything you know about depression is a lie.
And there’s a lot to worry about in each example. The nudge theorists seem to have got way too far out over their skis, making claims that weren’t justified by the evidence. An Alzheimer’s researcher seems to have actually made evidence up. And our entire popular understanding of depression appears to have been shaped by drug company advertising. Science, you might think, cannot be trusted.
But a better way to think about it is that July was a win for science, not a loss. This is what it looks like when research and inquiry work – bad theories are overturned, errors are rooted out, and fraud is uncovered.
We can, and should, demand better from our scientific community. Doing so would require overturning a publish-or-perish labour market, realigning incentives so researchers are encouraged to report negative results, and providing funding security so scientists can follow their own hypotheses, rather than those of funders.
When we’re done, we’d do well to start demystifying science in the popular imagination. I’ve been contrasting the pro-science liberal with the anti-scientific communities they scorn, and I want to end by suggesting that these groups are more similar than they appear. What they have in common is an overestimation of science, and a sense that in science something mystical is going on.
On the one hand is a type of person who more or less worships some thing they call “science”, but which is basically mysterious to them. And on the other hand is a type of person who is very skeptical of science, exactly because it is mysterious to them. For the former, science’s mystical powers allow it unique, uncorrupted insight; for the latter, science has a mystical ability to lead us astray.
We’d do well to remind everyone that science is really just the systematic application of the epistemological tools we use every day. The epistemic tools we apply in science are universal and commonplace.
The banal reality is that the scientist and the anti-scientist are equipped with the same toolbox. It is the most common skillset known to man. In fact there is nothing that the scientist has that you do not have; there is nothing that you have that the scientist does not have. We are simply, all of us, in the business of coming to the world with hypotheses and finding those hypotheses confirmed or disconfirmed by experience. Some people are more careful – what philosophers would call epistemically virtuous – than others, but all humans are united in this simple act. It’s just about all we have.
Demystifying science should allow our communities a more thoughtful and less charged relationship with the scientific world. Liberals might remember that scientists are regular people making faulty and clumsy theories about the world; science skeptics might remember that scientists are regular people earnestly and sincerely doing their best to push forward the frontiers of human knowledge. Both groups could recognise, without defensiveness or exaggeration, that our scientific communities can be improved; that the beliefs they produce may turn out to be wrong; that, until then, they are the best we’ve got.
Do you see any limitations in canvassing the world in a dichotomy of left and right, liberal and conservative? For years I haven't identified with that spectrum, and I've found it to generate the political culture war that you write about. As a notion it arose out of the reprisal violence of the French Revolution, an origin story that should leave us suspicious, at the very least. It's so vulnerable to dualistic 'divide and conquer' profiteering, so easy to pit one side against the other and make them exert their energy in that struggle. There can be no dialogue or communal acts of thinking when one side is trying to convince the other of what they believe. And being offered only two categories means the foundations of dialogue are never there, so the voices who don't identify with those categories – who we most need – never show up to the public conversation.
On the topic of health, the real swings in policy and public health messaging are not originating at the level of the people's partisan politics. They arise from globalist, corporatist organisations that treat nation-state governors as policy enactors. Owning the media within and across states means that you can feed populations only that which represents the science you want, demonise everything else and create an echo chamber resounding with the policy line being handed down. Dialogue, even debate, is quashed, hence the logical dissemination of the completely antiscientific phrase 'trust the science'. Who in their right mind would pit their job, their social life, their position in mainstream society, against that charge? A decade or three later, after a lot of money has been made from an obedient population and publishing trend, as you cite in your examples, the tide may come back in to shore.
It sounds so beautifully idealistic and redemptive of humanity to believe that a scientific miracle the likes of which we've never seen has benevolently happened and millions of lives have been saved. But I've seen too much credible scientific evidence as well as corruption to believe that. Much of it can be found through this uncensored platform.