AI, Maxwell’s Demons and the Pirate Pugg — Redux

Family vacation — Grand Teton National Park

One of my favorite pieces of whimsical science fiction is Stanislaw Lem’s story in The Cyberiad about the encounter of Klapaucius and Trurl (two robots who are meta-robots, that is, robot constructors) with the Pirate Pugg. I’ve written about this here, in an attempt to understand how the Internet actually resolves truth. I wrote that piece some years back, and let no one say I am not an optimist. (The piece is pretty good, and I recommend it, which is not something I do for all my writing.)

But I am a bit more jaded at this point.

The short synopsis: Klapaucius and Trurl sail across the universe, having various adventures, each with some combination of moral and mathematical point in mind. On their Sixth Sally, they encounter a very unusual pirate, the Pirate Pugg, who kidnaps the pair. Pugg differs from other pirates in that he has a Ph.D. And instead of demanding the usual ransom (gold, silver, etc.), Pugg craves, more than anything, information. So in order to escape, the two construct a Maxwell’s Demon of the Second Kind. This Demon sits and stares at a box of dirty air, which theoretically contains all the potential informational patterns in the universe, and sorts the patterns that might actually mean something from those that are purely random. The Demon prints its findings on paper tape (The Cyberiad was written in the ’60s), which spews out endlessly and ensnares Pugg, allowing our heroes to escape.

“No insults, please!” said Pugg. “For I am not your usual uncouth pirate, but refined and with a Ph.D., and therefore extremely high-strung.” 

Let it not be said that Lem had no insight into the personality of many in the academy.

My thesis in the original piece was that Spiral Dynamics and its information coherence requirements would march us up the ladder of epistemological knowledge complexity. And once we got closer to the top, the entire Internet, with its ability to scrutinize information, would eventually converge on some broader set of truths. I didn’t write it in that piece, but I assumed there would be some sort of time constants in social media: that through discussion, and implicitly through reason, viewpoints would emerge that dominate how we as a species process truth. For example, though many may not understand why, we all pretty much agree that gravity pulls down and holds us to the Earth.

But with the advent of more advanced AI models, I can see that I seriously underestimated the ability of computers to fuck things up. The sheer volume of information that AI such as Large Language Models (LLMs) can process was outside my little thought bubble. We now have the ability not just to integrate a lot of data; we also have the ability to create data, as well as narratives, that are profoundly biased in ways the inventors of the technology may not have considered, or, worse, may have. When Google released its AI product, Gemini, it immediately started producing Woke images of an African-American George Washington, with no indication to the reader that this wasn’t reality.

I, myself, typed my name into Google Gemini to see what it might say about me. It replied that such a person impersonates a full professor at Washington State University, but isn’t really one. Google took down Gemini and “reformed” it — now it claims it cannot know who I am, and so has no response. But to release a Woke AI bot, with the current emphasis in our society on Cancel Culture, is a scary thing. Now, in the Noosphere of the Internet, I cease to exist.

But back to the Pirate Pugg. Timescales matter. Why? Pugg is defeated by the Demon of the Second Kind through the churning of the paper tape that entangles him, allowing time for the two robot constructors to escape. But what happens to all of us if that same Demon, instead of just producing knowledge for whatever form of Trivial Pursuit we may be interested in, can spin out lengthy yarns? Or novel but nonsensical theories, extremely quickly? Moving up the complexity scale for knowledge structures, we’re still stuck pretty low on the hierarchy. The big thing folks get stuck on with AI is that while it may be able to parse the known knowledge universe, it is notoriously bad at metacognition: knowing what it doesn’t know. It can’t know. It’s not set up for it (designers intrinsically arrange knowledge into testable hypotheses; it’s the way THEIR minds are wired), and it’s not likely to evolve this ability any time soon. It’s not even a recognized problem!

But what our Maxwell’s Demon will do is trash up the knowledge space we all rely on that much more quickly. Pugg’s paper tape printer will work overtime. And the garbage it produces can be made to support any biased thesis. Author Erik Hoel (a bright young man) may have coined the term “AI Pollution,” and that might be the best descriptor of the phenomenon.

What is missing, of course, is that no current AI can ground itself in a self-determining physical reality. That will likely change, but maybe not in a way that favors the individual. I read once that a person moving about the U.S. has upward of 200 pictures taken of them per day. With increases in the efficiency of image software, any right you may believe you have to situational privacy is really just a canard. And with advances in drone technology, it also means that if someone wants to shoot you, it wouldn’t be that hard.

I don’t believe that AI is going to take over the world any time soon. But it would help if we actually started having a discussion about what it actually can do, and at least engaged in a little consequential thinking outside the apocalyptic perspective that makes it onto the podcast circuit. It’s supposed to help us, no?

P.S. This is a good piece on a v-Meme perspective on current AI limits.
