The Right to Be Wrong Isn’t The Freedom From Consequences
And From Where Consequences Might Arise
I don’t know why I didn’t publish this at the time, probably because I thought it wouldn’t help, or I got distracted. It was all blurted out at once, and I put it aside for editing that would never come — because when you write something all at once, it is usually irretrievably topical, and that makes playing with it later impossible.
At least, it does for me. I can never re-create the murderous thoughts that created it. So be it.
It has some features of interest:
(1) it’s the first time I used the term ‘forensic metascience’, to my recollection. So, I’m reasonably certain that was coined here.
(2) this is how I felt in ~September 2021, so almost exactly two years ago, and it feels reasonably predictive of how The Big Stupid is going for us - what happens when you combine existing institutional mistrust with the hostile and feverish environment of COVID, followed by the social normalisation of ridiculous ideas. Never underestimate the power of learning it’s all someone else’s fault.
**************************************************************************************************
This morning was the most depressed I have been with Pandemic Science.
Hopefully I can finish this before I cheer up. I would prefer not to waste a mood this black on morning bagatelles like ‘coffee’ and ‘having feelings’.
I have taken to capitalizing Pandemic Science not simply because I promiscuously capitalize things (I do), and certainly not because the proper noun is in common usage elsewhere (it is not).
Rather, because it feels like the rules have changed, and we have rapidly entered a new and exceedingly unpleasant era of producing knowledge about the natural world.
A hook has been slipped, a catch has been missed, an up has been fucked.
I will explain with examples.
Example One
This was a reanalysis of CDC data attached to a New England Journal of Medicine paper on COVID vaccination and miscarriage during pregnancy. The original paper concluded there was no real increase in risk. The reanalysis concluded that miscarriage rates for the pregnant and vaccinated were over 80%, maybe as high as 90%.
Just one titchy problem with that: the cohort was collectively pregnant between December 2020 and February 2021, and the paper analysed ‘completed pregnancies’.
If a pregnancy is nine months, and the sample study period is three months, then the only way to ‘complete’ a pregnancy is:
Be at least six months pregnant when the study starts, then give birth, or
Miscarry.
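To make that arithmetic concrete, here is a minimal toy simulation of the selection artifact, using entirely invented numbers (a hypothetical cohort size and an assumed background miscarriage rate, not the CDC data): with a three-month follow-up, essentially the only early pregnancies that can ‘complete’ are the ones that end in loss, so dividing miscarriages by completed pregnancies produces a monstrous rate even when the underlying risk is perfectly ordinary.

```python
import numpy as np

rng = np.random.default_rng(1)

N = 10_000                 # hypothetical cohort size (illustrative, not the CDC cohort)
TRUE_LOSS_RATE = 0.13      # assumed background miscarriage risk, for illustration only
FOLLOW_UP_MONTHS = 3       # length of the study window
TERM = 9                   # months to a full-term birth

# Gestational age (in months) when each pregnancy enters the cohort
gest_age = rng.uniform(0, TERM, N)

# Treat miscarriage as an early-pregnancy event: in this toy model, only
# pregnancies enrolled before month 3 are still at risk, and any loss
# happens within the follow-up window.
at_risk_early = gest_age < 3
miscarries = at_risk_early & (rng.random(N) < TRUE_LOSS_RATE)

# A pregnancy 'completes' inside the window only if it reaches term or is lost
births = (~miscarries) & (gest_age + FOLLOW_UP_MONTHS >= TERM)
completed = births | miscarries

# The naive calculation: losses divided by completed pregnancies, restricted
# to those vaccinated early in pregnancy (here, before ~5 months)
early = gest_age < 5
naive_rate = miscarries[early].sum() / completed[early].sum()
print(f"assumed true risk: {TRUE_LOSS_RATE:.0%}, naive 'completed pregnancy' rate: {naive_rate:.0%}")
# No early pregnancy can reach nine months inside a three-month window, so the
# denominator is almost entirely miscarriages and the rate rockets towards 100%.
```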
Now, the reason this obviously silly calculation was included was, ostensibly, to make a point about the calculations in the original paper. At least, this was the story told by the authors after the most utterly predictable thing happened: it immediately became a loud and violent talking point for the yammering anti-vaccine NPCs of the internet.
This analysis was withdrawn by the authors from a journal which is… well, not so much a journal as a blog that tripped over the PDF format.
Have a look at the last issue, featuring:
(1) an incoherent editorial on pharmacovigilance
(2) a study by Walach and pals (those of you with sharp eyes will recognise Walach as the lead author of a JAMA Pediatrics letter that I yelled at earlier in the year, which was retracted)
(3) a VAERS paper (eyeroll emoji)
(4) this paper, now withdrawn
A complete and utter waste of time, another turd in the groaning sewage pipe of misinformation. It will be forgotten soon, but it sucked while it lasted.
Example Two
This paper, published in March, might be the most effective single piece of misinformation in the whole pandemicals.
It was a mathematical model purporting to show that lockdowns confer absolutely no health benefits whatsoever, built by comparing publicly available information on mobility to mortality.
The only minor issue was: if you feed their analysis with simulated data which unambiguously shows lockdowns DO work, the model still concludes they don’t.
In other words, if you have an unconstrained population getting ill, footling around in society at large and making each other sick and dead, and then you suddenly jam these people in hermetically sealed containers with no human contact (and absolutely no possible way for the virus to propagate between people), and have their food delivered by adorable model railway and their water piped in through a goddamn sterile hamster bottle, this model still wouldn’t show that lockdowns ‘work’.
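For anyone who wants to see what that kind of check looks like, here is a hedged sketch of the general idea, a positive-control simulation: generate data in which staying home unambiguously prevents deaths, then see whether an analysis recovers the effect. The `analysis_under_test` function below is a placeholder (an ordinary regression), not the method from the retracted paper, and every number is invented for illustration.

```python
import numpy as np

rng = np.random.default_rng(42)

def simulate_regions(n_regions=200):
    """Synthetic regional data with a deliberately strong, built-in lockdown effect."""
    mobility_drop = rng.uniform(0.0, 0.9, n_regions)      # fraction of movement removed
    baseline_deaths = rng.uniform(50, 500, n_regions)      # deaths with no restrictions
    # Deaths scale directly with remaining mobility, plus modest noise
    deaths = baseline_deaths * (1 - mobility_drop) * rng.lognormal(0.0, 0.1, n_regions)
    return mobility_drop, deaths

def analysis_under_test(mobility_drop, deaths):
    """Placeholder analysis: slope of log-deaths against mobility reduction."""
    slope, _intercept = np.polyfit(mobility_drop, np.log(deaths), 1)
    return slope

mobility_drop, deaths = simulate_regions()
effect = analysis_under_test(mobility_drop, deaths)
print(f"estimated effect of mobility reduction on log-deaths: {effect:.2f}")  # clearly negative
# Any method that returns 'no effect' on data built like this cannot, by
# construction, detect a lockdown benefit, no matter how large that benefit is.
```

That is the whole trick: if a method cannot find an effect you deliberately put there, it cannot be trusted to find one in the wild.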
Published in March last year, it was retracted a little ways back.
In response, the author said:
This was a terrible decision made by the editors. The basis for the retraction was based on simulated data, they changed the methodology and the reviewers.
This is categorically stupid. For ‘simulated data’, read ‘data simulated by very simple methods which unambiguously contains the effect of interest, which the analysis method could not detect’. This is like testing a gasoline pipe with water, and when it comes squirting out at all angles, claiming victory because water isn’t gasoline.
The Point
There are plenty of differences between these papers. One was published in a run-of-the-mill but still very real journal, the other was published in a digital toilet. One was retracted, the other was withdrawn by the authors. One was reanalysed and found wanting, the other was just visibly silly from across the room.
But the commonalities are more important:
both received global publicity, to the extent that their conclusions appear in government press conferences; they have hundreds of thousands of reads/downloads, and people in my own life have quoted the results at me
both papers are not simply wrong, they are dumb — the basic observational mechanics of both studies are irreparably compromised
both papers were ostensibly peer-reviewed
A word on that last point, before we get to the crux here: regular peer review often does a poor job of telling these apart. Peer review usually works something like this:
(1) invite about twenty scientists, chosen more or less at random from within the relevant area of research, to read a new scientific paper
(2) give the paper to the two to four of them who say yes
(3) hope to hell they know what they’re talking about
(4) make a decision based on their considered opinions
When it works, it’s fine. It often doesn’t. Peer review has the potential to be stochastic, unreliable, astonishingly petty, strangely hostile, and completely counterintuitive.
Now, let’s contrast that to what I’ll call forensic metascience. Ready?
(1) download everything to do with the paper, especially any extra resources and appendices
(2) pay particular attention to the data and the analysis methods involved — if these are not available, immediately request them from the authors. Quote the journal’s policy on ‘open data’ or ‘open methods’ if you have to. Follow up with the authors regularly until they help you. Write to their institution if it’s really important. Write to anyone else who might help you.
(3) pull any data you have apart like you’re at a goddamn alien autopsy. Recreate the analysis presented in the paper. If you don’t have the raw data, analyse the statistical conclusions. If you do, analyse the digits for manipulation patterns (a minimal example of one such check is sketched after this list). Visualise how the numbers are distributed. Check for missing data, outliers, exclusions, weirdness, miscoded variables, and more.
(4) do the same to the analysis methods. Start by downloading any extra software you need to make it run, and make that work first. Then, step through the statistical mechanics. Check the assumptions. Feed contrasting data into the same method. Poke and prod.
(5) look at the methods more broadly. And the study registration, if there is one. Is it all normative, supportable, or … possible?
(6) if serious problems are located, raise hell. Sometimes for years.
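As one concrete example of the digit-level checks in step (3), here is a minimal sketch of a GRIM-style consistency test, which asks whether a reported mean is even arithmetically possible given N integer-valued responses. The function and the example values below are invented for illustration and have nothing to do with either paper above.

```python
def grim_consistent(reported_mean: float, n: int, decimals: int = 2) -> bool:
    """Return True if `reported_mean` could have come from n integer values."""
    # The true mean of n integers must be (some integer) / n; take the closest
    # such value and check whether it rounds to the reported mean.
    # (For n large relative to the reported precision, the test stops being informative.)
    closest_total = round(reported_mean * n)
    possible_mean = closest_total / n
    return round(possible_mean, decimals) == round(reported_mean, decimals)

# Hypothetical usage: a reported mean of 3.47 from 18 integer-valued responses
# is impossible, because no multiple of 1/18 rounds to 3.47.
print(grim_consistent(3.47, 18))   # False -> worth a closer look
print(grim_consistent(3.44, 18))   # True  -> 62/18 = 3.444...
```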
There are reasons science doesn’t support doing the above as a cultural standard: it is incredibly time-consuming, hostile, and often fruitless. Until you make this someone’s job, rather than their work-adjacent hobby, it is massively disincentivized.
Even if an incentive to be this scrupulous existed, without a serious program of digital tool development, and a complete change in norms around data availability, forensic metascientific investigation as a replacement for peer review is a non-starter.
And even with both of the above in place, it still isn’t easy, because it a) requires serious local expertise, b) requires you to take personal risks, and c) is wildly non-scalable. If experts being critical of published science are poorly received, wait and see what people say when machines do it.
The Real Point
Scientists reserve the right to be wrong. Most observations, most conclusions are at least somewhat tentative. All science is in immediate danger of being overwritten.
And, of course, most of it is complex. Even papers I call ‘very simple’ are not actually simple. This is shorthand for ‘if you’re very familiar with the study area, the analysis methods, and reading papers in general, the methods and results are easy to understand’. Which isn’t simple in any normative sense.
Thus, the mendacious, the stupid, the ridiculous, the robustly fucked are mixed into one nearly homogeneous solution, which is itself tipped into a big bucket marked ‘science is hard’, a bucket which also contains the slightly wrong, the moderately misconceived, the important typo screwing up the whole paper, etc.
Notice I said nearly.
What has changed here is the immediacy of attention, and the necessary persistence of information. The sociocultural policing of scientific standards relies heavily on factors which are no longer true, specifically:
first, that a paper will remain in existence forever in formal paper copy (which is historically what a journal always was);
second, that reputation is a sufficiently valuable currency that scientists will not want to screw around with misinformation or silliness too much;
third, that the research world is very small, and that if you do something truly egregious, you will be remembered by your peers.
Pandemic Science feels like the point where The Bastards have figured out, correctly, that these norms are stone dead. Scientists are proud of their system being trust-based, and it was always porous. And we’ve really, finally opened the gate to The Big Stupid, and let everyone in. Trust is fine when everyone plays the same reputation game as you, but we have new players now, and there is money and love in The Big Stupid.
This is going to get worse, I repeat: this is going to get worse.