The Memetics of ChatGPT

A different world — off Anini Beach, Kauai, Hawaii

Yesterday morning, I had the pleasure of showing up (uninvited) in a Twitter Space with three very smart doctors — Anish Koka, Venk Murthy, and Sai Medi. I have quite a bit of fun with Sai on Twitter, but only see posts from the first two occasionally. The subject of the Twitter Space was AI in medicine, and whether it will be helpful or not. All three docs are deeply intuitive, and at some level, I'm writing this for them so that they can calibrate their intuition better, as they move in far different social circles than I do.

The subject of the hour was the recent emergence of ChatGPT, which, as far as academia is concerned, should be heralded as an asteroid in the sky, with us playacting the role of the dinosaurs. ChatGPT is really the first useful, nontrivial chatbot I've seen, and it has already created waves in our world by doing such tasks as writing computer code and solving thermodynamics problems, as well as the more widely recognized feats of writing papers and abstracts.

But what is poorly understood in the AI community (usually met with handwaving, if understood at all) is the set of implications of how an AI is created, and what we can reasonably predict a given level of AI can successfully execute. Before this moment, none of it worked well enough to be of serious interest to anyone — it was easily chalked up to arbitrary responses, or mimicry baked into computer code. That has changed.

Any AI that actually works, including ChatGPT, inherently must reflect the v-meme development/mindset of the designers. And it is highly unlikely, especially considering how modern AI is trained, that we are going to end up with something people would recognize as generalized intelligence — primarily because humans don’t understand generalized intelligence themselves.

But if we go back to the knowledge structure stack, some light glimmers in the darkness. See below.

ChatGPT is inherently constrained by the social structures creating it — at least as far as knowledge evolution goes. Though it is also true that when it comes to aggregation of already extant knowledge, something like ChatGPT has obvious advantages over any human brain on the planet. ChatGPT does not, and cannot yet, operate in any knowledge structure above the Trust Boundary. What that means is that it's constrained to algorithms of increasing complexity, with defined or stochastic inputs, as well as factoids, documented opinions, and foundational stories/myths. But ChatGPT is fast, and can run constantly, learning and aggregating knowledge set in social structures at or below the level of complexity it can understand. Isolated in its own inland sea of knowledge, it can do many useful tasks — most of our undergraduate engineering curricula operate in the bottom four knowledge structures as well. Stanislaw Lem wrote about ChatGPT over 60 years ago, with his character the Pirate Pugg — a pirate with a Ph.D., known for his hunger for facts of any kind, relevant or not.

But it cannot parse its own experience and extract larger lessons. It cannot look at others and realize they might (at least at this point) know something more than it knows, unless it is explicitly instructed. What this means is that it can have no metacognition, other than what has been granted to it by its outside trainer. It cannot know that it doesn't know what the surface of Pluto looks like unless someone has programmed (or trained) it to know that there is a characteristic list of things all planets have, and that it has a hole in its database.
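To make the point concrete, here's a toy sketch in Python. It is purely hypothetical: the checklist, the database, and the function are all invented, and this is not how ChatGPT actually works. The point is that the "known unknown" is only detectable because the checklist was handed in from outside.

```python
# Hypothetical illustration only: a system can flag a "known unknown"
# solely because an outside trainer supplied the checklist of things
# every planet is supposed to have.

# Granted from outside -- the system did not derive this itself.
REQUIRED_PLANET_FACTS = ["mass", "orbital_period", "surface_imagery"]

knowledge_base = {
    "Mars":  {"mass": "6.4e23 kg", "orbital_period": "687 days",
              "surface_imagery": "extensive"},
    "Pluto": {"mass": "1.3e22 kg", "orbital_period": "248 years"},
    # Pluto has no "surface_imagery" entry: a hole in the database.
}

def known_unknowns(body):
    """Report which required facts are missing for a given body."""
    facts = knowledge_base.get(body, {})
    return [fact for fact in REQUIRED_PLANET_FACTS if fact not in facts]

print(known_unknowns("Pluto"))  # ['surface_imagery']
```

Take away the checklist, and the hole in the database becomes invisible. That's what granted metacognition means.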

When it comes to medicine, indeed, given a well-characterized list of diagnostic characteristics, it can go through the list. I suspect it will also be possible to search/scan case studies with standardized terminology (e.g., count the number of cases with metabolic syndrome, or people suffering from a heart attack). I would also suspect that, in the presence of standardization, it might expand that list with other categories, and it's not a huge leap to have it note changes in training data.
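As a rough sketch of what that kind of scan might look like (the case notes, vocabulary, and synonym mapping below are all invented for illustration; real clinical text mining is far messier):

```python
# Hypothetical sketch of the algorithmic scan described above:
# counting case studies that mention standardized terms.

from collections import Counter

case_studies = [
    "58M presenting with metabolic syndrome and hypertension.",
    "72F, prior myocardial infarction, now stable.",
    "45M with metabolic syndrome; no cardiac history.",
]

# Standardized vocabulary, including a synonym mapping ("heart attack"
# maps onto the standard term "myocardial infarction").
STANDARD_TERMS = {
    "metabolic syndrome": "metabolic syndrome",
    "myocardial infarction": "myocardial infarction",
    "heart attack": "myocardial infarction",
}

counts = Counter()
for note in case_studies:
    text = note.lower()
    # Collect each canonical term at most once per case study.
    hits = {canonical for term, canonical in STANDARD_TERMS.items()
            if term in text}
    counts.update(hits)

print(counts)  # Counter({'metabolic syndrome': 2, 'myocardial infarction': 1})
```

Note that everything doing the work here is standardization someone else supplied — exactly the bottom-of-the-stack knowledge where this kind of tool shines.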

But in order to evolve in sentience, just like other sentient agents (like humans), it needs others. It would have to be able to vet that outside sentience for reliability, as well as validity. An AI is constrained in its attempts at sentience exactly as we are. It's easy to see how an AI like ChatGPT could come up with its own refined, data-driven estimates. But it's above ChatGPT's level to synergize results with others — it intrinsically has to have an authority stack in order to pay attention. And it will have a very difficult time figuring out the truth when anyone attempts to gaslight it.
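One way to picture that authority stack is as a hard-coded ranking of sources, standing in for the vetting a more evolved agent would do for itself. The sketch below is purely illustrative; the sources, weights, and estimates are all invented:

```python
# Hypothetical sketch of an "authority stack": a granted ranking of
# sources that substitutes for genuine vetting of reliability/validity.

authority_stack = [
    ("curated_guidelines", 0.6),
    ("peer_reviewed_corpus", 0.3),
    ("open_web", 0.1),
]

estimates = {
    "curated_guidelines": 0.82,
    "peer_reviewed_corpus": 0.74,
    "open_web": 0.31,  # possibly a gaslighting attempt; the stack can't tell
}

# The system cannot vet sources on its own; it just applies the weights
# its designers granted it.
blended = sum(weight * estimates[src] for src, weight in authority_stack)
print(round(blended, 3))  # 0.745
```

The trouble shows in the last entry: a gaslighting source still gets whatever weight it was granted, because the system has no independent way to test reliability or validity.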

The more interesting point is to understand that many doctors have problems with these things too, and to start the larger conversation about how to make medicine more empathetic. That's a big lift — it will involve changing the very status-centered authority structures already present in the profession. And those folks aren't going down without a fight. They simply have too much to lose.

To sum up — algorithmic knowledge and below, ChatGPT will own the space. Heuristics and above, evolved humans still have a large contribution to make. What's not clear with medicine is what happens with what I've been calling the McKinsey-ization of medicine — efficiency and siloing of procedures, and elimination of cross-doctor consultation, relying only on scribbled notes — hey, ChatGPT is gonna own that. All the more reason for docs to sit down and review cases together, and argue them out. In the process, they're not just serving their patients better. They're also evolving their own knowledge structures, increasing their own metacognition, and tuning up their nuance. They can then turn to ChatGPT to do the fact aggregation that it inherently will do better.

P.S. This was a tricky post to write. One of the things I’ve found is that any system with enough validity grounding/contact with reality can get to a reasonable, proximal truth with enough fractal branching. Down and down in detail we go. Higher knowledge structures can get to big-picture views and truths far more quickly. Think of it this way. Let’s say every time you move to a new locale, you have to test for gravity to make sure it’s there. That’s pure Legalistic/Absolutistic thinking. But if you’re armed with Guiding Principles knowledge, you can right away assume you know the answer, and you’ll be correct. As the old masters used to say, Enlightenment cuts like a knife.

5 thoughts on “The Memetics of ChatGPT”

  1. An excellent piece. ChatGPT has been at the forefront of my thoughts amidst all the geopolitical clowning of the past few weeks, particularly with Buzzfeed’s announcement.

    What I find the more critical part of the ChatGPT story, at least currently, is one you touch on only lightly at the beginning of your piece: “ChatGPT, which, as far as academia is concerned, should be heralded as an asteroid in the sky, with us playacting the role of the dinosaurs.”

    For certain problem sets, it doesn’t matter at all that ChatGPT cannot truly replicate or give rise to GAI. But in the practical world, all that really matters is that it’s good enough for markets.

    I predicted that 4th Industrial Revolution job loss would be bottom-up, was still effectively a few decades off, and that other shitshows would be more dire problems. Thanks to ChatGPT, it looks like it will be top-down; the consolidation of wealth is gonna be bloody and doesn’t bode well: lots of academics and technorati used to mistaking linguistic skill for reality are gonna start demanding unemployment from national governments that have already impoverished themselves with everything else they’ve been up to lately.

    Wow, this is getting too long. Sorry.


    1. It took me about a minute to get it — when a colleague said he had put in that thermo problem and got the correct answer, that was it. Algorithmic mastery in modestly fuzzy domains, with constrained boundaries, was the top of the stack, given the developmental level of the AI community. Without a greater understanding of what makes sentience, that still takes you a long way. And that is a LOT of jobs.


      1. Exactly. “Good enough” is a surprisingly big delta in a lot of well-paid, highly-educated disciplines.

        And that lost wealth ain’t trickling down, kids.


  2. So of course, given my age, my first thought is “do we want SkyNet? Because this is how we get SkyNet.” My second thought is “oh, my goodness. This has such potential to reduce medicine to the lowest possible common denominator, which is frightening.” I hope that the excellent gentlemen you reference from Twitter will read this and lead a charge, as it were.

