I’ve been pondering quite a bit lately how most of the scientific literature has turned into hot garbage in the last two years, driven by COVID panic. Early on, especially in the mask debate (masking was pretty much decreed to NOT work before the pandemic), there were all of a sudden a couple of key papers, as well as supporting papers posing mostly as literature reviews, aggregators, etc., that came out declaring “masks work!”
Early on, I found myself shaking my head, not so much in negative disbelief. More like I had a gnat in my ear — something was just kind of wrong with all of this. The first tip-off inevitably was the long list of authors attached to the various papers. Anyone who knows anything about academic collaborations knows that they take basically forever to establish, and then, once on the move, an equivalent forever to coordinate research, write the paper(s), and do the back-and-forth of argumentation (people outside the academy simply have no idea how academics can pick nits to establish status over each other; you have to attend your share of faculty meetings to really get this), and then actually send the paper into the review process.
Papers like this one in the Lancet (an ostensibly ‘distinguished’ journal) are great exemplars of the utter nonsense that dominated early publication, and remains to this day. The authors declare no competing interests, but anyone who believes that scientists in Hong Kong are going to publish anything that goes against the Chinese Communist Party’s (CCP) narrative needs to have their head examined. These people got out early in front of the media wave (this paper was published in May 2020), and it’s far from the only paper claiming territory. Even looking at the author list for this bogus modeling paper causes anyone inside the academic sausage-making machine to blanch. How would the scientists on that list even find each other? And yes — I pulled the bios from the list — most are in AI/ML.
That stack of crazy led to even bigger stacks of crazy, including some extremely famous crazy like this paper, which purported to aggregate all the bullshit being produced in the academic sausage factory. The author list for this paper is even MORE extreme in range than the various purely Hong Kong papers, primarily indicating that the production of mask-supporting literature is a bigger problem, and arguably a memetic one keyed to how various academic brains work, than anything having to do with reality. Even the National Academy of Sciences got roped into this one. And yes — there are more arguably irrelevant Hong Kong-based authors on this paper. You can be sure they’re not going to sign on to anything contradicting the eminence-uber-alles views of the CCP. The mind reels. This paper, according to The Google, has already been cited 264 times (and is likely still climbing). Once again, for those of us familiar with academic systems, both the coordination and the number of post-publication citations are utterly insane for something like masking, which prima facie really doesn’t have much hope of working.
I would also note, most importantly, that the scientists on this list, the ones producing most of this bullshit, have gone on to be key commentators on COVID, exclusively on the side of socially destructive, agency-destroying NPIs, often surfing the front of the wave of popular opinion, because it’s so obvious, from a grounding validity perspective, that the NPIs don’t work. And as far as long-term integrity goes, I’ve heard no apologies from this crowd, only more doubling down on the nonsense that is basically destroying people’s lives around the world while having no effect on COVID whatsoever. Virus gonna virus.
What’s going on?
One of the things that I started doing myself (here is a confession of sins) was to subscribe to MedPage Today, a news aggregator for the medical community that, while not exclusively covering COVID, has probably devoted about 80% of its coverage to blurbs regarding COVID research and status checks around the world. It’s been very convenient (there it is with my morning coffee) to read up on whatever the latest COVID research is. MedPage Today does not screen itself into blandness, and often will reproduce histrionic pronouncements from the various researchers. It will also announce pre-prints not yet finished with review, which is interesting for a scientific publication. I am not one of those scientists who will totally die on the hill of peer review — it has its own unique set of problems in how papers are adjudicated, as well as with work that has any cross-disciplinary boundary implications. But it’s the best we’ve got for now, and those hordes of graduate students diligently poring through papers assigned to them by their advisors do add some value.
What I do know about MedPage Today is that it allows release of information into the meme sphere far more quickly than at any time in the past — likely by two, or potentially three, orders of magnitude. Historically, when I published my own fundamental work at the beginning of my career, on chaos theory and fractals, getting a paper through to publication was truly a herculean task. Review took at least six months, and by the time you got the draftsperson to do the figures, edited the text, and resolved all the comments, you were lucky to get your paper published in less than two years.
And after you published your precious gem — well then people (and other graduate students) had to find it in the library stacks, read it, and potentially incorporate the findings in their own work. It was ALL SLOW.
What’s the implication of slow? Though not guaranteed, there was a far greater chance of statistical independence in the conclusions. And even if the work built on prior work, the timescales were such that the potential for confirmation-bias stacking was greatly minimized. The information system simply didn’t allow it. Yes, we had various rock stars and such, but you had to go to that key conference and argue with a bunch of other academics. The timescales were long, and the work benefited.
None of this is true anymore. Pre-printed work is sensationalized, and researchers in the various fields can quickly curate results that fuel their own confirmation bias. This steers the work away from any independence of thought or search for nuance, and allows geometric/Pareto stacking of results — not unlike what we’ve seen in politics! Worse, it draws people hunting for status inside the COVID research bubble. One can almost generalize this as a social/collective assault on reason along the lines of Kahneman’s Thinking, Fast and Slow. If you get your work tied quickly into a simpler, less complex knowledge structure, you’re far more likely to win both the Researcher Internet and the Public Relations machine. Your simpler explanations are more viral, and memetically replicate far more rapidly than more nuanced versions. But they’ll come out of the social network’s limbic centers, and, not surprisingly, on a hot-button issue like COVID they’ll drive more fear and rancor than reality.
I wrote about this around a year ago in this piece on the triumvirate of Drs. Bhattacharya, Kulldorff, and Gupta. These three august scientists, the authors of the Great Barrington Declaration, got double-bushwhacked by both their professional communities and the Medium paper propagated by Tomas Pueyo, with his “hammer and dance” nonsense. The mismatch in timescales created the deep memetic conflicts I talk about here. Why do the memetics matter? Because once these deep currents are tapped, you don’t need to get on the phone and organize. The behavior of dichotomous limbic attack, attack, and attack becomes emergent. Nuance and multiple-solution thinking get lost. And the press, already aligned with limbic response and the spreading of terror, having long ago lost its cultural mission of speaking truth to power, lined up behind the confirmation-bias crowd. Dr. Kulldorff, a preeminent Harvard faculty member, was even banned from Twitter for going against the limbic crowd.
What’s the upshot of all of this? Short timescales in information systems like MedPage Today lead to Pareto cascades, statistical dependence and confirmation bias in work, and poorly thought-out belief systems that cater to the needs of those in power. Displacing those long slow days in the library, Internet search allows researchers almost instantaneous access to confirmation-bias-accelerating work. And the various citation indices create even more Pareto amplification. Inevitably, the entire system is biased toward producing statistically dependent, shoddy work. And it has.
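The Pareto amplification described above is well modeled by preferential attachment, the standard “rich get richer” mechanism: a paper that already has citations is more likely to attract the next one. The sketch below is purely illustrative and not from the original piece; the paper counts, citation counts, and seed are hypothetical, chosen only to contrast preferential attachment with uniform citation.

```python
import random

def allocate_citations(n_papers=100, n_citations=10_000,
                       preferential=True, seed=42):
    """Hand out citations one at a time. With preferential=True, a paper's
    chance of being cited is proportional to (its citations + 1), i.e.
    preferential attachment; otherwise every paper is equally likely."""
    rng = random.Random(seed)
    counts = [0] * n_papers
    papers = range(n_papers)
    for _ in range(n_citations):
        if preferential:
            weights = [c + 1 for c in counts]
            paper = rng.choices(papers, weights=weights)[0]
        else:
            paper = rng.randrange(n_papers)
        counts[paper] += 1
    return counts

def top_share(counts, fraction=0.2):
    """Fraction of all citations held by the top `fraction` of papers."""
    ranked = sorted(counts, reverse=True)
    k = int(len(ranked) * fraction)
    return sum(ranked[:k]) / sum(ranked)

pref = top_share(allocate_citations(preferential=True))
flat = top_share(allocate_citations(preferential=False))
print(f"top 20% share, preferential attachment: {pref:.2f}")
print(f"top 20% share, uniform citation:        {flat:.2f}")
```

Under uniform citation, the top 20% of papers end up with roughly their proportional 20% share; under preferential attachment, the same 20% capture a large majority of all citations. That concentration is the cascade effect: early, fast-moving papers like the one cited 264 times keep compounding their lead regardless of quality.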
At some level, other scientists are recognizing the problem. But it’s incorrectly framed, and the proposed fix, more data mining, is very likely to make things worse: it will amplify researcher bias, not reduce it. People stuck in status-driven systems are far more likely to use these tools to create even more elite opinion, even when it’s wrong.
This is not a simple problem. But grandma had the best advice in all of this — don’t be jumpin’ to conclusions, son. That’s what gets you into trouble.