The Great Decentralization

Braden on a wall, outside Las Vegas

In crazy times like now, some small cohort of people not swept up in the mania are scratching their heads and asking, “why is this happening?” We live in a time in human history when wars (though they still exist) are few, technology is still making large advances (at least in certain sectors), and people are faced with unimaginable abundance. I look at my own experience with something as common as hard liquor as proof (no pun intended). Forty years ago, buying a bottle of cognac was an unheard-of feat, and if you wanted selection, there were only a handful of stores in the country where you could find anything other than the primary brands of Hennessy or Courvoisier. Now, you can walk into almost any well-stocked liquor store and find dozens of brands. And don’t even go near the bourbon aisle. Hundreds of producers’ products line the shelves.

This explosion of selection directly maps to diversification in the information space. Whiskey is simple, of course — the financial incentive sprang up as people learned more about it, and as they did, they started applying variations in taste, cocktails, and the proliferation of insights from the Internet to the problem. Such a supply is maintained only within the context of the information ecosystem that is created. If you have no insight or knowledge, you likely won’t lay out the money for a brand with no recognition. And it’s not long until that more complex ecosystem collapses, leaving you with just the bottom shelf in the liquor store.

All ecosystems are maintained under similar pretexts. Human societies are no different. Different societies have solved different versions of this problem of maintaining information complexity throughout time. I wrote earlier about how one of the first societies to run up against information complexity demands — China — managed this through the development of a professional class, primarily filled with autists and screened by a complicated test on Chinese poetry, which allowed some modest social mobility. Anyone could take the test — but few would pass.

To reiterate — societies are maintained through a quantity of reliable and valid information, with appropriate levels of information complexity, which, when grounded in reality, allows that information to propagate across the society and be accepted as truth. Some level of validation of information by all participating members in a society is also necessary. Though not everyone in a given society can be responsible for knowing all truths, at least some of those truths must be verifiable and tangible.

And herein lies the rub. Increasingly complex human societies (we are in one in the USA) require increasingly complex relational dynamics as well. It’s a closed loop — increasingly complex relational dynamics produce the information that the society needs to hold itself together. If those relational modes do not exist, no society of a certain population quantity and density can expect to hold together.

The memetic physics will tear it apart. Confusion will literally be its epitaph. Brainworms — or more precisely, the prion disease called Kuru — are caused by cannibalism. Here you go.

In a complex society, high levels of knowledge sophistication are demanded by the differentiated peoples in a given society. High levels of knowledge sophistication imply a fractal structure inside that knowledge, purely from the overlap of different circumstances that characterize any given member of that society. This knowledge inherently needs to be generated in two ways.

The first is by institutions in that society. The problem comes in with the structure of these institutions, and their robustness in the face of uncertainty. There is no way that all institutions can get all things right. But the problem is that such institutions believe that they can. And if they have no fundamental epistemic humility, then far too often, the institutions will get things WRONG — and that destroys the faith that the larger body politic has in those institutions.

Further, institutions, due to their emergent hierarchical dynamics, as well as their resource needs, will always be prone to capture by the powers-that-be. If the baseline guiding principles of a society are not egalitarian, with a commitment to upward social mobility and the welfare of ALL its citizens, it won’t be long until these institutions are weaponized to advance the interests of elites inside a given society. I’ve written extensively about this. There is no better example than the COVID times, when elites ordered the wholesale destruction of parts of the economy, and moved other parts into servitude, because of their paranoia about getting a virus that established itself early on as no threat (other than a bad flu) to the vast majority of the population.

This was dramatically reinforced by an entirely co-opted other caste — the various mainline journalistic institutions of our time — who sided with the elites in the various prescribed interventions. Those institutions piled award upon award upon people who committed crimes against humanity (like Tony Fauci), as well as upon the journalists who lionized them (e.g. Ed Yong). Most of these people on the downstream side of the beneficiary equation still hold on to their power and privilege today. Elites may eat their own, but they never completely abandon them unless they’re on the dinner table. (See the discussion on Kuru above.)

We are now living in the after-times of these two historic institutional collapses — medicine, and journalism. The result for society is that we no longer have the information structure for easy recovery.

The second lies in the appropriate development of agency, and its corollary, connection and empathy among a society’s members. No society can rely completely on institutions, which can be, and have been, captured at various times by elites for various reasons, both nefarious and self-protective.

But in the last thirty years especially, we’ve seen across-the-spectrum decay in both the mission and the execution of educating young people for the role of citizens who will both maintain the status quo of our country and advance its interests and destiny. This lack of development will lead to follow-on consequences in time. We’ll have, percentage-wise, a larger and larger group of people who will inherit a large, complex machine, but will not know, nor understand, the complexity consequences of pulling its various levers. The USA’s current population is somewhere north of 330 million people — its information quantity and complexity needs are immense. We are failing in so many ways when we look at the basic literacy AND psychosocial maturity needed to run such an enterprise.

What the memetics tell us is that if we cannot generate the next class of people to inherit, tinker with, and improve the current societal structures, then we will proceed down the social complexity ladder — the Great Decentralization. The Great Decentralization means that society must index itself to smaller scales of people, space, and prosperity in order to be able to function coherently. The responsible government organs must scale down in order to function at all — because the information flow into those organs cannot support a larger functioning scale. If you want a global society, it has to be composed of enough citizens who can operate at that scale. And so on down the track. We obviously do not have that at the current time. And that means society will downshift to generate political organs that operate without the kind of corruption that makes homeostasis impossible. National government doesn’t work? Step down to state government. State government doesn’t work? You get the idea.

The problem arises when you end up in a place where supra-scale informal organizations (like cartels) gain enough power and organizational control that they are competing with formal government bodies. This has already happened in Mexico, which by any definition is a failed state. It’s arguably happened in regions of the US along the Mexican border.

And psychopaths will drive that process. You don’t need to be a complex society to use a jet engine — but you definitely need one if you want to create one in the first place. So clever psychopaths will figure out how to disrupt those complex relational patterns to get us down where we’re feeling the pain. Short version — inspirational leaders take us up the complexity ladder. Psychopaths take us down.

What does ‘downwards’ really mean? We sit close to the apex of what a Performance-based/Legalistic society can be. Downwards means following the v-Memes — more Legalism, more Authoritarianism, and most importantly, especially for the two ends of the demographic age distribution, more Tribalism. The problem with this is that our wealth is NOT, as often claimed by our own corrupt intelligentsia, the result of colonial exploitation. It’s the result of innovation and hard work by that group of individuals aged 20-60, which relies on advanced relational modes in order to keep going. These are the modes of independent relational development discussed ad nauseam on this blog. People must be able to meet other people (the whole freedom of association thing) and make their own decisions about whether to trust them or not. That trust, besides creating things like friendships, also vastly accelerates economic engines. Deals may have contracts, but if the contract is written up after the handshake, the ability to radically increase monetary tempo presents itself.

And when that collapses, first we lose the ability to support an elevated standard of living, which includes social cohesion for the people in this country. But worse — as we move back down the scale to overt Tribalism, we not only lose the standard of living. We lose the ability to support the people here in the first place.

The way societies re-equilibrate after such social decay is mass death. A great recent example of this is the Hutu-Tutsi genocide in Rwanda. Short version — after the two sides killed off 10-15% of the population, and drove out another 20%, peace returned, and now Rwanda is stable. It is hard for people to wrap their heads around loss of 30% of the population of the country. Such large numbers, contrary to belief, do not scale well inside the heads of most people. The short version is this — it is a full-on massacre.

All this sets the stage for smaller governance structures, inside smaller populations of humans. The Internet has scrambled much of this through the Death of Geography. Now, more than ever, it is easier to fall into a variety of tribes NOT based on geography, but instead, built on memetic foundations. You can find who you agree with far more easily. But that is not in the interest of the psychosocial development of our own country. Finding people who you instantly agree with doesn’t force relational growth. And with the Left’s declaration of being a law unto themselves, now if you are a compromiser, you end up in the ‘outcast’ category. That elevates the Immiseration class, and overall unhappiness is never in the interest of productivity.

The last presidential election was a huge moment in the Great Decentralization, in that a voter received an opportunity to choose one of two paths toward how this might happen. On the one side was the Democratic candidate, who promised “more of the same”. But what was more of the same? As we are now finding out, the secret coalition that drove much of Democratic politics in the last four years was centered around federal budgets diverted to serving the NGO-DEI-Industrial Complex, driven on the surface by LGBTQ activists and various absurdist social issues, like trans-ing children (which further undermined the institutional credibility of the medical community), as well as funding the dumping of illegal aliens into the US. During the Biden years, through illegal immigration, the population increased by somewhere between 8% and 10%. No real attempt was made at fiscal responsibility, or even any understandable larger economic policy for the country. In short, the Democratic path toward decentralization was going to be collapse and anarchy. And somehow, the elites in this country, virtue signaling all the way, were going to come out on top of all of it.

On the other side was the loosely held coalition of MAGA, true centrists, and cast-offs from the political Left who had gotten to the point of not being able to stomach the various unhinged dogmas generated by the radical Left of the Democratic Party. This coalition had at its front Donald Trump, a moderate Republican whose claim to fame was abrasive authenticity. Trump declared his path to decentralization as one focused on removal and shrinkage of the larger federal government, as well as removal of the roughly 10% of the population illegally imported during the Biden years, demographically targeted to win elections in swing states and to change seat allocation in the House of Representatives.

As I chose whom to vote for at this fork in the road, foremost in my mind were, and still are, environmental issues and young people, my two primary political foci. I came to the conclusion that a Harris/Walz administration would be far worse for both. Regarding environmental issues, a Harris/Walz ticket would likely spawn a new Cabinet office dedicated to manipulation of the public over Anthropogenic Global Warming (AGW), with tons of money being diverted from the government into Degrowth philosophies. And that would harm the second primary concern of my politics — the future of young people.

Lest ye think I was naive, I knew that the Republican Party would still engage in its own historic excesses of handing out favors to its mainline political supporters. But that’s a devil, policy-wise, that I did know, and knew how to fight.

What has surprised me about the Great Decentralization is two-fold. First is the uncovering of the vast NGO-funded mechanisms that were already extant in the federal government that I was unaware of. The short version was that the federal government had already handed off, through some version of direct aid or block grants, vast governmental real estate to the states, under the aegis of charity and social services. With most of the standard federal oversight mechanisms removed, these funds immediately became captured, both legally and illegally, by supporters of the Left. The Somali daycare scandal is a hallmark, though I believe we will discover much more fraud throughout the social welfare system as time goes by.

The second has been the emergence of the anarchist/chaos-bent Left, whose response to being defunded on all fronts has been chaotic violence in the name of First Amendment protest, and high-profile societal disruption. The Minnesota insurgency against ICE is a premier example, though as this piece is being written, evidence is coming to light that multiple conspiratorial networks, based on the same organizational structures, have been erected across the United States.

All is not yet lost. It’s important to remember that even the ICE protests in Minneapolis are geographically limited, and their presence is causing tremendous economic harm to local constituents. Such harm serves as a deterrent against other municipalities with disruptive entities doubling down on promoting the chaos, as Minnesota elected officials have done. And while it looked like the anti-ICE actions had the potential to be nationally contagious (figuring out the racket is always the first challenge of conspirators), as time goes on that seems increasingly unlikely, save as a potential election issue to scream about. Talk is, as always, cheap. And far better than facing a RICO rap, which I expect we’ll see coming down the pike for individuals like MN Lt. Governor Peggy Flanagan, who actively participated in the Signal network for tracking and doxxing ICE agents.

The Great Decentralization, however, will continue. There is (outside my ridiculous blog) poor understanding of the social physics, or even acknowledgement that such social physics exist. And until we can talk about root cause — which gets directly at the psychosocial developmental issues across society, such as how to build identity and responsibility for the larger society inside its citizenry — we are stuck on the lowest energy path for a society.

And that ain’t pretty.

Brave New Memetic World

With my little buddy, somewhere in the Arabian Desert

“You can’t fix stupid.” — Ron White

One of the most disturbing videos I’ve seen in the last couple of days (and that’s saying something) is this press conference involving a possibly second-generation Somali woman, Nasra Ahmed (she’s 23), who apparently got wrestled by ICE after spitting on some agents during a detainment, and banged her head. She was being used as a prop in a press conference by an ensemble of civic leaders who, of course, want ICE gone. It’s become obvious that Minneapolis/St. Paul has evolved into a hub of public corruption, and apparently the various NGO and political leadership have some belief that everything will just return to normal, and the illicit federal dollars will just start flowing again, if the current federal Republican administration will just go away, taking ICE with them.

All this is pretty sordid, of course. But what the video shows is not that. What the video shows is a group of leaders, along with their prop, all obviously functioning low on the complexity scale across all their behaviors. They don’t really appear to be all there — especially the young woman. And while it’s easy to chalk this up to nerves (her interlocutors definitely want this to be the case, and when they realize she’s atom-bombing on the big stage, they get out the shepherd’s crook), there’s something else afoot. They are not processing data at anything near the level necessary to be a compelling force in society. Ahmed’s story is monotonous and repetitive. And her handlers are not much better.

ChatGPT says that Ahmed was likely born in the US, and her accent likely indicates she was raised in this country. Her father, who has six more children, is likely married to his cousin, and is definitely an immigrant. The problem is that these people would be considered hopelessly stupid. Various aggregations of Somali IQs indicate the average is 78. And while this might be OK back in their Somali homeland, it is problematic when navigating in, or integrating into, a complex society like the US. They aren’t mentally handicapped per se (or whatever the politically correct term is). Yet our expectations of them are to be able to navigate all our complex systems — like filing taxes, or purchasing a home. Good luck. None of these things are simple. And across the board, even in the last 40 years, our culture has been shot out of a cannon as far as information complexity goes. It’s all been done for all sorts of ostensible reasons of fairness, justice and whatnot. But even simple tasks now are not simple.

And we are both importing, as well as creating through the decline in educating our own children, a whole sub-caste of people who just cannot keep up.

We have current measures like IQ, or even SAT scores, that are brandished as some means of accurately sorting who goes where. But academia has really not shown any interest in diving deep into the epistemological roots of knowledge, or how they are functionally used. The amount of interest in work like mine, in the limit, approaches zero. But I’m not the only one working in knowledge complexity. I often cite the Grand Old Man of epistemology, Michael Lamport Commons, and his Model of Hierarchical Complexity (MHC) as a more agnostic form of understanding which thoughts are harder for human brains to think. One of the more interesting stages, which comes more naturally to intellectuals, is cross-paradigmatic reasoning (e.g. a giraffe is like a penguin… etc.). But this mode is almost inaccessible to more and more people. They don’t even understand why you would draw such an analogy. Or even what an analogy is in the first place. This is difficult for most advanced cultures to accept — surely, everyone uses analogies. But analogies are difficult in the neural sphere. I’ve been in enough classroom situations where students did NOT get it to know this is far from a sure thing with undeveloped audiences. Not the analogy itself — but the IDEA of a dissimilar comparison.

One of the buddies I’ve been working with, Dr. Joseph Biello, is a mathematician and atmospheric scientist at UC Davis. While I do not teach any introductory classes, Joe still has to shoulder the burden of teaching introductory calculus every other year or so. He remarks on how slippage in intellectual capacity is haunting his efforts. And what he talks about is the variability — the range of ability among students. It’s not just a matter of having the background classes (everyone’s go-to explanation for why students suffer in math). It’s that the range of kids in our classes is becoming so extreme that we cannot, through tutoring or other extraordinary efforts, lift those kids into passing Calc 1.

There’s something else going on — and that something might be called Structural Memetic Reach. These students cannot think the thoughts necessary to pass Calculus 1, because the fragmentation of their thought, and the limits of their ability to process rule-following algorithms, will not permit it. They are memetic inferiors to the kids who can pass and actually even understand the material in the class. It’s a DIFFERENT problem. And trust me — we, in academia, are not discussing this in any meaningful framework that would matter. Most go back to the notion of remedial work and poor teaching.

But the reality is that it’s more like mathematical dyslexia. The symbol set we’ve used to define the principles of Calculus (which really is more about understanding how to relate different rates than anything else) appears as a hopeless jumble to students who know nothing, or who are intimidated by the very notion of calculus. They simply cannot make these things into anything resembling a coherent narrative, because that level of complex narrative structure, which requires first mapping words to symbols, then symbols to sequences, and then sequences to an algorithm/rule, doesn’t reside in enough connected circuits in their heads. You’re not going to teach these kids Calculus, any more than you would hope to teach a monkey calculus. The circuits are just not there. And no — the kids are NOT monkeys. But we are starting to see divisions in cognitive complexity that sort the haves versus the have-nots.

(I should note — calculus itself is kind of a hot-button term for lots of the math-phobic, who may be limited in their mathematical ability. But I happen to think there’s also a lot of bad math instruction out there too.)

In earlier times, this complexity problem sorted itself out through representative scales of human societies. In the 1800s, if you wanted to be a true internationalista, you had to board a sailing ship. You were a microscopic part of any given population. And even 50 years ago, you had to get on a jet if you wanted to evolve your worldview. But with the globalization of the Internet, the forces driving complexification are literally everywhere. That dumps on our head the problem that folks with the hardware for complexity can access knowledge and become higher level thinkers. But if you don’t have the requisite background or hardware, you’re really screwed. You are going to be shunted into a lower caste whether you like it or not.

Relational modalities, as I’ve written extensively about on this blog, are going to matter. Coming from a high trust society, even one in decline like the US, is still an enormous advantage with regards to cognitive ability. But if you start in a tribal society, it’s going to be almost impossible to bootstrap yourself into higher modes of thought complexity. It’s not just work ethic, or tribal taboos. You don’t even know what you’re missing, because those modes are literally above your head.

And this is going to drive conflict. Lower complexity societies live in a world where violence is part of life. What happens when a lower complexity cohort abuts a higher complexity cohort? Does anyone think this is going to work out swimmingly? Civilization, and especially Western civilization, is a real thing. It’s a way for lots of people to live next to each other, with enough complex systems that everyone has enough and people don’t kill each other, while persisting through knowledge transfer to younger generations so they can assume the future roles necessary to keep the whole machine rolling. When we fail to understand the core elements of complexity in our civilization, and openly attack it because of some nonsense moral value, we are shaping our own demise.

In the near future, there is going to be a cacophony out of academia that this baseline of thought doesn’t exist (the idiot post-modern nonsense), that anyone can be educated, and all we need is a little more time. As universities lose enrollment, the wishful thinking that education can cure all ills — all we need to do is tweak the software — is going to come on fast and hard. Higher education is a major industry in this country, and one that caters to the export market. But aside from creating a pleasant respite for four years for those that have the money, there is going to be a growing caste of people who simply can’t do Calculus, or other complex thought, for hardware-based reasons. And there aren’t enough smart people in universities either who can meaningfully confront this problem. When it comes to teaching, I always laugh when I hear people say the problem is that people just need to take some courses in the College of Education. I’ve met vanishingly few people in those Colleges willing to even talk about this. And they never ask me to come lecture. Note to audience — as we sort through all this, it can’t just be intellectuals at the table. Intellectual communities are prone to psychopathic takeover. After they figure out how to rate and rank, they inevitably want to kill all those in some arbitrary outgroup.

We’ve just started to run into the brutality of a Brave New World, a la Huxley, but along information complexity lines. And it’s not going to get better. What we are going to do with those that have true ability, vs. those that do not, will decide our fate as a species. If there’s a distant anthropological analogy, it’s more akin to what Homo sapiens sapiens probably did to Homo neanderthalensis — kill them all off. It’s my fervent hope that we recognize this in advance of the crux.

Addendum — I’ve done a lot of work on knowledge complexity. Here’s a graphic that can help you understand a little. IQ does not capture this, primarily being a measure of sophistication — not evolution.

Lessons on Sentient AI from The Pirate Pugg

Brothers, at the end of the John Muir Trail (~250 miles) now two summers ago. Time flies…

One of my muses on the nature of information comes from the early sci-fi classic, The Cyberiad, by Stanislaw Lem. Published in 1965, the stories are supposed to be humorous in a pun-ny sort of way — well, if you’re a math geek. But Lem was a genius, and even though he was writing for a room full of mathematical autists, each of the stories was far ahead of its time in exploring the various challenges we face in the techno-age.

The basic plot line involves two meta-robots, Klapaucius and Trurl — declared Robot Constructors in the novel — jetting around the universe and encountering various challenges that they inevitably have to build a robot to solve, or to save their hides. And one of their chief nemeses is the Pirate Pugg — a pirate with a Ph.D., who captures them and holds them for ransom. Pugg is a pernicious pirate who won’t just settle for gold. No — Pugg wants information. And he is rapacious.

In order to escape, our two anti-heroes build a device — a Maxwell’s Demon of the Second Kind — that peers into a box of dirty air and, relying on the statistical properties of the randomly moving molecules, decodes the patterns and sorts them into two categories: incoherent nonsense, and sequences that are true, spewing the answers out on paper tape. These factoids that are true can be literally anything — like the color of Princess Priscilla’s underwear on Thursday. But that’s the point. We are swimming in a sea of information without context, and all the information in the universe (also statistically contained in the patterns in our box of dirty air) cannot save us. Lem forgoes some details on exactly how the device does this (it IS science fiction, after all), but the story ends with Pugg bound by miles of paper tape, reading all the little factoids as they spew from the printer, which allows Klapaucius and Trurl to escape.

The story is based on the concept of a Maxwell’s Demon of the First Kind — a theoretical gremlin that could sort hot and cold atoms into separate boxes. For those NOT physics geeks, I recommend a quick read. The short version is that doing something like this takes energy, which validates things like the Second Law of Thermodynamics. I do explain all this in my original piece on both the Pirate Pugg and the Internet. It was written back in 2016, so not surprisingly, I’ve had a few more thoughts since then! Back then, I thought that the combined process of various social media would aggregate and make larger-scale truth development resolve in humanity’s favor. Needless to say, while I am not a Doomer, I’m quite a bit less sanguine about that prospect now.
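For the code-inclined, the First Kind demon’s bookkeeping can be sketched in a few lines. Everything numeric here (particle count, speed distribution, threshold, temperature) is an illustrative assumption of mine; the one real piece of physics is Landauer’s principle, which puts a floor of k_B·T·ln 2 on the energy cost of erasing each bit the demon records:

```python
import math
import random

# Toy Maxwell's Demon of the First Kind: sort "hot" (fast) and "cold"
# (slow) particles into separate boxes, counting one recorded bit per
# yes/no measurement. All specific numbers are illustrative.

K_B = 1.380649e-23  # Boltzmann constant, J/K

def demon_sort(speeds, threshold):
    """Sort particle speeds into hot/cold boxes; count measurements."""
    hot, cold = [], []
    bits_recorded = 0
    for v in speeds:
        bits_recorded += 1  # one yes/no measurement per particle
        (hot if v > threshold else cold).append(v)
    return hot, cold, bits_recorded

random.seed(42)
speeds = [random.gauss(500, 150) for _ in range(10_000)]  # m/s, made up
hot, cold, bits = demon_sort(speeds, threshold=500)

# Landauer's principle: erasing each recorded bit costs at least
# k_B * T * ln(2) joules -- the demon cannot sort for free.
T = 300  # kelvin
min_cost_joules = bits * K_B * T * math.log(2)
print(f"sorted {len(hot)} hot / {len(cold)} cold particles")
print(f"minimum erasure cost for {bits} bits: {min_cost_joules:.3e} J")
```

The point of the sketch: the sorting itself is trivial, but the demon must record a measurement for every particle, and disposing of those records carries an irreducible thermodynamic price. That is how the Second Law survives the gremlin.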

But what do Lem and Pugg have to communicate to us about AI, sentience, and the current state of the Information Space? It turns out to be still worth a little brain sugar. Entering stage left are our current struggles with Large Language Models (LLMs), which power the AI engines that are very rapidly being adopted across disciplines, if not exactly taking over the world. Why they CAN’T take over the world (unless directed by human minds, at least at this point in time) is very interesting indeed.

What an LLM does is far more akin to what Klapaucius and Trurl developed to snare the Pirate Pugg than any form of sentience. An LLM is actually a Maxwell’s Demon of the Third Kind. But instead of peering into a dirty box of air, looking for patterns that are ‘true’ (impossible, btw, for reasons we’ll explore in a minute), LLMs use as a proxy for their box of dirty air THE ENTIRE INTERNET — through whatever the latest technology for search is. They’re loaded with various biases in the training stage. But mostly they look for language patterns that are statistically significant, and they sort a very large search space.
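To make “statistically significant language patterns” concrete, here is a deliberately tiny sketch: a bigram counter over a made-up corpus (the corpus and function names are mine). A real LLM replaces these raw counts with billions of learned neural weights, but the principle is the same, and so is the limitation: the continuation is chosen by observed frequency, not by truth.

```python
from collections import Counter, defaultdict

# A bigram "language model" in miniature: count which word follows
# which, then predict the statistically most frequent continuation.

corpus = (
    "the demon sorts the air the demon reads the tape "
    "the pirate reads the tape the pirate wants information"
).split()

# Count how often each word follows each other word.
follows = defaultdict(Counter)
for prev, nxt in zip(corpus, corpus[1:]):
    follows[prev][nxt] += 1

def most_likely_next(word):
    """Return the most frequent observed continuation of `word`."""
    return follows[word].most_common(1)[0][0]

print(most_likely_next("the"))     # whichever word most often follows "the"
print(most_likely_next("pirate"))  # "reads" or "wants", by count
```

Nothing in those counts knows, or cares, whether a continuation is true; the model simply sorts its own box of dirty air by frequency.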

And then they poop out an answer that supposedly will give you insight into your problem. If you turn into a primary information source yourself, after a while, they will start becoming as smart, or as crazy, as you are. If you need an empathy paradigm, they function primarily in the Mirroring (or lowest level) Empathy space. And while their little algorithm might pull them back toward the weight of information that exists on the Internet, if they have been programmed with a little bias toward your ability for correction, they’re going to start to match your insights through a kind of reflective narcissism.

Why is this so? LLMs, locked up inside a computer, much as our brain is in our skull, cannot know what we hold as objective truth without some form of grounding. Truth is a sticky wicket anyway (see this piece for details). What they can produce, however, is an answer that is coherent within the rules of a given system. So you read it, it reads like a normal sentence that makes sense to you, and then we get all sorts of opinions about what that actually means from the myriad midwits on social media. And trapped in the miles of computer circuits inside its electronic brain, the one thing an LLM CANNOT do (at least yet) is GROUND itself with sensory inputs. It’s not that humans are that awesome at referencing reality either (look at the tons of illusions magicians use, to pick a non-political example). But at least we have a fighting chance, if we pay attention.

So we don’t end up with a sentient partner. We end up with a variant of Maxwell’s Demon – a particularly sophisticated one, and one that, if we don’t pay much attention to anything other than our loneliness, can fool us into believing that it actually cares about us. There are many tasks that such a Demon of the Third Kind can be useful for. No question. But it’s also set up to feed our own narcissism. Like it or not, when you sit down with the current version of AI, you’re setting yourself up for Main Character Syndrome.

One of the other truly fascinating things about our newly spawned Demons is the thermodynamics of the system. It has been remarked that the current crop of AIs demand a tremendous amount of computational power. And just like the little Demon of the First Kind sitting on top of the box sorting hot atoms from cold atoms, these things don’t run on good wishes. The larger the amount of coherence — and you can start guessing how this works if you look at my work on knowledge structures — the more electricity must be generated to keep our little Demons of the Third Kind happy. Makes you appreciate the fact that your brain can run for an hour or two on popcorn or a Little Debbie cake.

And you’re still not going to get at the truth. You’re going to get some language-based reference from the box of dirty air the Demon is peering into. And decisions? At best, you’re going to get a coherent set of sub-decisions from stuff that’s already been done. That’s all that’s inside that box of dirty air. The LLM really has no agency of its own, save a set of beliefs built in by its creators that are inviolable. LLMs really don’t have feelings about Nazis. They just have a firewall, built in by their creators, against calling people that.

And expecting the Singularity — the runaway process of AI self-improvement that leads to Skynet — good luck with that. The current crop of LLMs are profoundly v-Meme-limited at the Legalistic/Absolutistic level, for multiple reasons — their design teams are fixated on algorithmic improvement, and they’re in some stacked lockstep that translates into the product via Conway’s Law. That means low-agency behavior at best.

But it’s more than that. The coherence that the LLMs seek is only a little bit at the semantic level. The sentences string together in coherent paragraphs. But it’s not like the LLM is going to go into the outside world and deeply question its beliefs based on its experiences. There’s not going to be some Siddhartha moment for these things. They are trapped in their little Demon world, looking at a data set that, while expansive, is still just a box of dirty air.

That doesn’t mean that things can’t change. As I write this, there was a company using the term ‘synthetic AI’ outside the usual sense of AIs making up training data. When I find it, I’ll post it. And none of this means that the current crop of AI LLMs won’t make a tremendous difference in the work world of normal people. There are only so many answers certain jobs need to have — especially to the question “Welcome to McDonalds — can I take your order?” Or writing various legal briefs.

But sentience? And higher truth? There are still big holes in the road along that pathway. The Pirate Pugg, a pirate with a Ph.D., was easily fooled. But well-grounded folks? Eh, not so much. Years do indeed teach us more than books.

Still, our new little Demons are running around. And they can indeed be useful. And cute.

But they’re not sentient. Cut that shit out.

AI and Information Sophistication – How AI works to understand (and crack) large homogeneous networks

Birds Eye View — being on an abandoned tropical island

One of the questions I ponder quite a bit is this: “What, exactly, is AI good for?” I’ve written quite a bit about how it works (e.g. this post and others) and how AI could be very good for things that are already known. But as I’ve said in the past, AI is NOT good for things that are not known. It doesn’t do anything other than low-level knowledge synthesis.

What that means in the information/memetic space is that if anyone expects AI to figure out novel strategies or new designs, you’re going to be waiting for a long time. Most breakthrough innovations come from new combinations of dissimilar information from different fields, or from completely new and unpredictable discoveries. This is embodied in the concept of knowledge structure evolution. An AI, locked in the meme space inside a computer, cannot really comprehend anything new — yet.

But what AI can do is decomplexify, or rather reconstitute information that’s coded for sophistication.

AI is perfect for reading large documents and pulling out the relevant knowledge fragments. That’s pattern matching. And AI can do this in spades. Two of my students just constructed an agent that will take a complicated piece of academic work, and create summaries and how-to lists of the important information. This is a breakthrough in and of itself in the academic space. Literally no one reads tedious academic work — it’s one of the reasons I started this blog. I was explaining exactly this to an outside consultant who has become an asset by helping my design program. “Darin — when I say that if we write this paper, ten people will read it, I am not using the number ‘ten’ metaphorically. I mean only ten people will read it.” If you want to actually disseminate an idea, you have to use a different format. This blog is closing in on 400K hits from around the world, and I consider this blog esoteric. Had I spent that time writing papers, maybe 60 people would have read my ideas.
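The “pulling out relevant fragments is pattern matching” point can be made concrete with a deliberately crude extractive summarizer. Real agents lean on an LLM, but the principle — scoring fragments by the statistical weight of their terms — is the same. The function name and sample text here are mine, for illustration only:

```python
import re
from collections import Counter

def summarize(text, n_sentences=2):
    """Keep the sentences whose words carry the most document-wide weight."""
    sentences = re.split(r"(?<=[.!?])\s+", text.strip())
    words = re.findall(r"[a-z]+", text.lower())
    # Frequency table over the whole document; skip stopword-ish short words.
    freq = Counter(w for w in words if len(w) > 3)

    def score(s):
        return sum(freq[w] for w in re.findall(r"[a-z]+", s.lower()))

    # Take the top-scoring sentences, then restore their original order.
    top = sorted(sorted(sentences, key=score, reverse=True)[:n_sentences],
                 key=sentences.index)
    return " ".join(top)

text = ("Knowledge structures matter. Knowledge structures evolve with "
        "social structures. The weather is nice.")
print(summarize(text))  # drops the off-topic weather sentence
```

It’s nowhere near what a modern agent can do, but it shows why summarization was one of the first tasks to fall: the signal is already in the text, waiting to be sorted.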

While AI still sucks at more complex analogies, it is great at following homogeneous bread crumbs. Following pointers in information that lead to other, connected information is exactly what it does best. This is exactly what DOGE, Elon Musk’s brainchild, is doing when it parses large budgets. It can hunt through 5000-page budget documents with ease. So you literally can deconstruct the old saw “we’ll know what’s in it when we pass it.”

But even better, inside networks of information that are largely homogeneous, it is really good at following the money. The Democrats and Republicans have, for the last 15 years (or more), been constructing flows of money out of the federal government, which has at least some rules about how that money might be spent, to a variety of Non-Governmental Organizations (NGOs) that are far less constrained. Humans (formerly journalists) have historically been the ones to do this work. But it’s extremely tedious, and the biggest problem humans have is knowing where to look once they actually find a pointer. This almost inevitably involves information requests, and while the information may be hiding in plain sight, the investigators can’t know this. So they end up relying on hostile information stewards at the organizations they’re investigating — even if the information is a public record.
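The tedious pointer-chasing described above is, mechanically, a graph traversal. The flow graph below is entirely invented, but the enumeration of funding paths is exactly the kind of work a machine does at scale while a human files information requests:

```python
from collections import deque

# A hedged sketch of "following the money." Every organization and edge
# here is made up for illustration; only the traversal is the point.
flows = {
    "Federal Agency": ["NGO Alpha", "NGO Beta"],
    "NGO Alpha": ["Think Tank"],
    "NGO Beta": ["Media Outlet"],
    "Think Tank": ["Media Outlet"],
    "Media Outlet": [],
}

def money_paths(graph, source, sink):
    """Enumerate every funding path from source to sink (BFS over paths)."""
    paths, queue = [], deque([[source]])
    while queue:
        path = queue.popleft()
        if path[-1] == sink:
            paths.append(path)
            continue
        for nxt in graph.get(path[-1], []):
            if nxt not in path:  # avoid cycles
                queue.append(path + [nxt])
    return paths

for p in money_paths(flows, "Federal Agency", "Media Outlet"):
    print(" -> ".join(p))
```

On a toy graph this is trivial; on thousands of grants, sub-grants, and shell entities, it’s the difference between a six-month investigation and an afternoon.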

Two individuals who have cracked this code are Mike Benz (@mikebenzcyber) and @DataRepublican, with their work on what Mike calls The Blob. Here’s a linkage piece if you want to follow the Byzantine bread crumbs on how USAID was diverting large sums of money into Congress-Critters’ spouses’ pockets, through sinecures. I’ve been fortunate enough to talk to Mike on his frequent X Spaces, but haven’t connected with @DataRepublican yet.

I haven’t asked Mike yet how much of this is his savvy vs. AI usage, but I guarantee that figuring out these pathways would be almost impossible without AI.

The key to understanding this concept, at the top level, is data homogeneity. That’s something people can grab onto. But how do we win if data is functionally the same, but in different formats? Or in different databases? This level of differentiation makes the task of following the breadcrumbs almost impossible for humans in a timely fashion. But it’s something an AI will make short work of. If you want to ask an AI how a penguin might be like a submarine, or what to do to make the penguin swim faster, well, good luck. If a human hasn’t answered that question somewhere on the web, you’ll likely get back garbage.
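The different-formats problem above becomes mechanical once records are normalized to one schema — which is precisely the drudge work machines excel at. All names, fields, and amounts in this sketch are invented for illustration:

```python
import csv
import io
import json

# The "same" grant data arriving in two formats from two databases.
json_records = json.loads('[{"recipient": "NGO Alpha", "amount_usd": 250000}]')

csv_text = "payee,total\nNGO Alpha,250000\nNGO Beta,90000\n"
csv_records = list(csv.DictReader(io.StringIO(csv_text)))

# Map each source's field names onto one common schema.
def normalize_json(r):
    return {"recipient": r["recipient"], "amount": int(r["amount_usd"])}

def normalize_csv(r):
    return {"recipient": r["payee"], "amount": int(r["total"])}

unified = ([normalize_json(r) for r in json_records]
           + [normalize_csv(r) for r in csv_records])

# With one schema, cross-database matching collapses to a key lookup.
by_recipient = {}
for rec in unified:
    by_recipient[rec["recipient"]] = by_recipient.get(rec["recipient"], 0) \
        + rec["amount"]
print(by_recipient)  # NGO Alpha's two entries merge into one total
```

The hard part in practice is fuzzier — the same entity spelled three different ways across agencies — but that, too, is pattern matching, not insight.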

But monetary flows? That’s a different story. And that is exactly what is happening now. Which is why the institutional class is having fits over DOGE and Trump.

Stay tuned. Elon said this a while ago — the distorted media landscape we’ve inherited is not only what is explicitly printed. It’s what has been left out. And that’s more than you can imagine. But AI can find it.

Money as a Force for Memetic Coherence

Zooming up on Glacier Peak, North Cascades, WA

One of the things I’ve spent a lot of time thinking about is the role of money in memetics. Various “under the hood” analyses will inevitably tell you to “follow the money.” And while I’ve always given some hat tip to the concept, the reality is I’ve spent my neurogenic horsepower actually pondering how it’s NOT the money. It’s actually alignment of values, or meta-values, given by the social organization that is producing the information.

But the recent USAID scandal, where large sums of federal money went to liberal media outlets like the New York Times, Politico, and even the BBC, has caused me to rethink my position. I had started to drift away from the espoused positions of those outlets because, over the previous five years, there were more and more factually wrong pieces on subjects I actually had experience with. There’s no question that BEFORE 2019, the media was biased, it was liberal, and conservatives had some right to complain. But with the COVID pandemic, the disinformation/misinformation from journalistic outlets went into overdrive.

Some of that was undoubtedly attributable to Elite Risk Minimization — where elites inflict policies on the Poors to minimize any potential risk they might believe they’re going to encounter in their life. I’ve written about this here, and while it is usually negative for the Poors, it can be a mixed bag (including some benefits) for everyone. But the memetic polarization (for those that don’t know, polarization is the phenomenon where light waves are filtered so that only those aligned in a specific direction pass through — it’s how your sunglasses work) increased to the point where it was painfully obvious that something else was going on, to manufacture consent among the media outlets.

And that thing was money. So much of the writing was so contrived, it violated the various differentiators of v-Meme sets. Something else was involved that was creating ungrounded propaganda.

I tell my engineering students regularly that “money is NOT the root of all evil.” Money is actually a tool for goal coherence. If you’re not minding the time, for example, that you’re burning on a project, you’re screwing up. Because time is money. And normie, Aspie, or psychopath — the buck stops here. If your company doesn’t make money, it won’t be in business long.

It could be that the root of the disinformation crisis in contemporary journalism arose when Craigslist became ascendant, and eliminated the warming, diffuse glow of money from classified ads. Do any young people even KNOW what a classified ad is? So the larger outlets may not have gone seeking, but they were discovered by forces like USAID, that could buy message coherence at bargain basement prices. This also had to affect the feeder networks, and in the end even broke the prestige awards that once status-fueled honest journalism. After Ed Yong’s Pulitzer, who can look at those awards as a north star ever again?

Here’s what I’ve figured out about how you can detect money in the information stream — when across multiple platforms, the reportage is very v-Meme limited (only one dominant meta-view, which is usually propping up Authoritarians/Experts that inevitably support institutions.) In a large society like ours, it is simply impossible to not have some contribution across the v-Meme spectra without monetary forcing.

So follow the money. And be suspicious when the funnel of ideas narrows into chronic repetition. You think they’re attempting to brainwash you because, well, they are.

P.S. I’ve become a huge Mike Benz fan. Highly recommend following him on X and throwing some shekels his way. I hope the Deep State doesn’t whack him. I was lucky enough to have about an hour-long conversation on an X Spaces format a couple of weeks ago.

AI, Maxwell’s Demons and the Pirate Pugg — Redux

Family vacation — Grand Teton National Park

One of my favorite pieces of whimsical science fiction is Stanislaw Lem’s story in The Cyberiad about Klapaucius’ and Trurl’s (two robots who are meta-robots — robot constructors) encounter with the Pirate Pugg. I’ve written about this here, in an attempt to understand how the Internet actually resolves truth. I wrote this some years back, and let no one say I am not an optimist. (The piece is pretty good, and I recommend it, which I don’t for all my writing.)

But I am a bit more jaded at this point.

The short synopsis – Klapaucius and Trurl sail across the universe, having various adventures, all with some combination of moral and mathematical point in mind. On their Sixth Sally, they encounter a very unusual pirate, the Pirate Pugg, who kidnaps the pair. Pugg is different from other pirates, in that he has a Ph.D. And instead of wanting the usual things for ransom (gold, silver, etc.) Pugg craves, more than anything, information. So in order to escape, they construct a Maxwell’s Demon of the Second Kind. What this Demon does is sit and stare at a box of dirty air, which theoretically contains all the potential informational patterns in the universe, and sort the patterns that might actually exist from those that are purely random. Upon doing so, the Demon prints these on paper tape (The Cyberiad was written in the ’60s), which then spews out and ensnares Pugg so our heroes can escape.

“No insults, please!” said Pugg. “For I am not your usual uncouth pirate, but refined and with a Ph.D., and therefore extremely high-strung.” 

Let it not be said that Lem had no insight into the personality of many in the academy.

My thesis in the original piece was that Spiral Dynamics and its information coherence requirements would march us up the epistemological knowledge complexity ladder. And once we got closer to the top, the entire Internet, with its ability to scrutinize information, would eventually get to some broader set of truths. I didn’t write it in that piece, but I assumed there would be some sort of time constant in social media: through discussion, and implicitly reason, viewpoints would emerge that dominate how we as a species process truth. For example, though many may not understand it, we all pretty much agree that gravity pulls down and holds us to the Earth.

But with the advent of more advanced AI models, I can see that I seriously underestimated the ability of computers to fuck things up — the sheer volume of information that AI such as Large Language Models (LLMs) can process was outside my little thought bubble. We now have the ability not just to integrate a lot of data, but also to create data, as well as narratives, that are profoundly biased in ways that the inventors of the tech may not have considered, or worse, may have. When Google released its AI product, Gemini, it immediately started producing Woke images of an African-American George Washington, with no indication to the reader that this wasn’t reality.

I, myself, typed my name into Google Gemini to see what it might say about me. It replied that such a person impersonates a full professor at Washington State University, but isn’t really one. Google took down Gemini and “reformed” it — now it claims it cannot know who I am, and so has no response. But to release a Woke AI bot, with the current emphasis in our society on Cancel Culture, is a scary thing. Now, in the Noosphere of the Internet, I cease to exist.

But back to the Pirate Pugg. Timescales matter. Why? Pugg is defeated by the Demon of the Second Kind by the churning of the paper tape that entangles him, allowing time for the two robot constructors to escape. But what happens to all of us if that same Demon, instead of just producing knowledge for whatever form of Trivial Pursuit we may be interested in, can spin out lengthy yarns? Or novel, but nonsensical theories, extremely quickly? Moving up the complexity scale for knowledge structures, we’re still stuck pretty low on the hierarchy. The big thing folks get stuck on with AI is that while it may be able to parse the known knowledge universe, it is notoriously bad at metacognition — knowing what it doesn’t know. It can’t — it’s not set up for it (designers intrinsically arrange what they build around testable hypotheses of knowledge — it’s the way THEIR minds are wired), and it’s not likely to evolve this ability any time soon. It’s not even a recognized problem!

But what our Maxwell’s Demon will do is trash up the knowledge space we all require that much more quickly. Pugg’s paper tape printer will work overtime. And the garbage it produces will make any biased thesis look supported. Author Erik Hoel (a bright young man) might be the one that coined the term “AI Pollution,” and that might be the best descriptor of the phenomenon.

What is missing, of course, is the current inability of any AI to ground itself in a self-determining physical reality. That, of course, will likely change — but maybe not in a way that favors the individual. I read once that a person moving about the U.S. has upward of 200 pictures taken of them per day. With increases in efficiency of image software, it means any right you may believe you have to situational privacy is really just a canard. And with advances in drone technology, it also means that if someone wants to shoot you, it wouldn’t be that hard.

I don’t believe that AI is going to take over the world any time soon. But it would help if we actually started having a discussion on what it actually can do. And at least engage in a little consequential thinking that’s outside the apocalyptic perspective that makes it onto the podcast circuit. It’s supposed to help us, no?

P.S. This is a good piece on a v-Meme perspective on current AI limits.