Space Aliens or Killer AI Robots? Which ones are gonna get us?

Tamandua, a small anteater, Pantanal, Mato Grosso do Sul, Brazil

I’m listening right now to a super-fun book from my Audible account that’s right up the alley of all kinds of fans of this page. It’s called Human Compatible: Artificial Intelligence and the Problem of Control, by Stuart Russell. I highly recommend it. Russell is a full professor at UC-Berkeley, and constructs measured, rational arguments for the need for AI regulation, instead of defaulting to his authority, of which, as an internationally famous scholar and Oxford grad, he has plenty. Further, he’s implicitly an evolutionary thinker: his solution to the problem of out-of-control AI is adding some self-doubt to the AI, with a reverential response to individual human preference, which is exactly one v-memetic/value set click up, with different scaffolding, from the current objective-driven AIs that we have out there now.

Caveat: I don’t have the hard-copy — just the audio book — but I do still recommend this book. This piece may get a couple of details wrong, but most of what I’ll write is conceptually pretty solid. I also have not finished the whole book — and may update this piece when that happens.

Russell frames the problem of whether we should worry about super-intelligent AIs, no matter how far out in the future they are, by comparing it to the question: what if we knew space aliens were going to show up 50 years from now? His short answer is this: we’d better get prepared.

And while I don’t disagree with his premise (he’s RIGHT, we should get prepared), for those who understand the empathetic implications of this work, and how Conway’s Law would dictate the structure of an AI, this is not the best way to start the book. Any super-intelligent civilization that finally travels to Earth is expanding the temporal and spatial scales of its own consciousness, as well as mastering maintenance of its personal ecosystem over potentially millions of years, as well as hundreds of light-years. More importantly, it is doing this sentient development OUTSIDE our human system.

And since I’ve made the case multiple times that sentience is sentience is sentience, it’s safe to assume that such creatures are far more evolved than we are, and operating from a much more connected, wise, and evolved v-Meme/value set. No one’s going to fly interstellar distances to do something as stupid as mine our planet for rare earth metals. Or stop by Earth for a billion-or-so-person snack. That’s an egocentric projection out of our own deep Survival value set. For those who want to see all that unpacked, and why you shouldn’t believe people like Stephen Hawking, this is a super-fun piece that I wrote, and one I don’t cringe at when I re-read it. (Not true for everything on this blog!) Short version: interstellar space aliens are already likely enlightened.

But evolutionary AI is an entirely different problem. AI is something we create, and as such, it will reflect us, dependent on both the social structure of the group creating the AI and its v-Meme/value set. That’s the implication of Conway’s Law: the design of the system maps to the organization that creates it. And that nasty bit of reasoning, the Intermediate Corollary, which states that social structure maps to knowledge structure before it gets to instantiation in the design, is a bugger when it comes to AI. Like it or not, we are intrinsically going to transfer SOME value set into our AIs. And that value set, whether we understand all the implications or not, without deliberate attempts at growing empathy, is going to be a low-empathy solution.

And this is where Russell’s analysis shines. He frames the development of AI around the fact that we will create AIs to reach goals (a Performance/Goal-Based value set/v-Meme), and while he doesn’t have an explicit value evolution structure like the work on this blog, he calls out the scaffolding that’s going to mess us up: namely, Authoritarian/Egocentric value set concentration through the AI wanting to replicate itself, as well as the fundamental Survival v-Meme/value set desire by the AI to survive. And that needs to worry us. If we build a cognitive engine strong enough to reach certain goal levels, there’s no telling what that AI’s neuroplasticity will come up with as far as a Survival strategy goes.
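
To make the contrast concrete, here’s a minimal toy sketch. This is my own construction, not Russell’s formal model, and every function name here is hypothetical. It contrasts an objective-certain agent with one built with the self-doubt Russell proposes, which treats its objective as an uncertain belief about what the human actually wants, and defers when that belief is ambiguous:

```python
import random

def certain_agent(estimated_value):
    """Acts whenever its fixed objective says the action looks good."""
    return "act" if estimated_value > 0 else "wait"

def uncertain_agent(value_samples, confidence=0.9):
    """Treats the objective as a belief over the human's true preference,
    and defers to the human when that belief is ambiguous."""
    approval = sum(v > 0 for v in value_samples) / len(value_samples)
    if approval >= confidence:
        return "act"
    if approval <= 1 - confidence:
        return "wait"
    return "ask the human"  # the self-doubt step

# The point estimate says the action is (barely) good...
print(certain_agent(0.1))  # -> "act", every single time
# ...but the belief over what the human actually wants is mixed.
belief = [random.gauss(0.1, 1.0) for _ in range(1000)]
print(uncertain_agent(belief))  # -> almost always "ask the human"
```

The point of the sketch: the first agent never has a reason to check in with us, while the second one’s uncertainty is exactly what keeps a channel back to human preference open.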

We’re already seeing what folks in the aerospace industry call “lack of configuration control” in algorithmic search, which is a form of AI in itself. We often cannot tell exactly why a given search routine does what it does; it just “does” it. Implicit in this is that unspecified dynamics get programmed in, whose results are unanticipated. In the world of aerospace, every structure on an aircraft is supposed to be isolated from, or have defined coupling to, every other part. When unknown coupling between parts produces unpredictable behavior, the plane design is said to “lose configuration control.” It is no different for software systems, and this is a pressing problem as we develop more complex algorithms.
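
Here’s a toy illustration of the software version of this, my construction rather than anything from the book: two modules, each sensible on its own, coupled through shared state that neither author accounted for, producing a runaway feedback loop nobody designed:

```python
ranking_boost = {}  # shared state: written by one module, read by another

def trending_module(clicks):
    # Author A's intent: boost items that are getting clicks right now.
    for item, n in clicks.items():
        ranking_boost[item] = ranking_boost.get(item, 0) + n

def search_module(results):
    # Author B's intent: re-rank results, quietly consuming A's boosts.
    return sorted(results, key=lambda item: -ranking_boost.get(item, 0))

# The undefined coupling: boosted items rank first, so they get clicked
# more, so they get boosted more. Neither author specified this loop.
results = ["a", "b", "c"]
for step in range(5):
    top = search_module(results)[0]
    trending_module({top: 1})  # users click whatever ranks first
print(ranking_boost)  # -> {'a': 5}: one item ran away with the ranking
```

Each module passes its own tests. The unpredictable behavior lives entirely in the coupling, which is exactly where configuration control gets lost.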

For those who like talking historically about the progress of philosophy, Russell provides (not surprisingly, as an Oxford scholar) a lot of detail on the major schools, as he very gingerly places them under the bus. None of them are quite sufficient, and of course, he’s right. I just wrapped up listening to a long discourse on Utilitarianism and how it’s not up to snuff. No single philosophy can be, because current philosophy is Authority-driven. And no matter how profound someone can be, if they’re like any normal philosopher, they’re still stuck on Intellectual Flatland, with a limited range of perspectives.

Russell does a fair job of dismissing the various Pollyannas of AI, who say we have nothing to worry about, or that we should only worry about the presence of a super-intelligence once it shows up. He actually criticizes many developers in the AI field from a value-set perspective, accusing them of Tribalism! So cool! He’s off by a v-Meme click (it’s actually a mapping into the low-empathy Authoritarian v-Meme/value set), but he correctly dismisses their arguments as belief-based, In-group/Out-group conflicts, and fundamentally not data-driven.

One of the things he kind of alludes to, but a point that needs to be made explicitly, is that there’s nothing that says the folks who develop AI are the ones best suited to understanding the implications of their work. In fact, we shouldn’t really expect it at all. Experts are going to stack in hierarchies, be it in research labs or on university campuses, and while they might succeed with very complicated/sophisticated thinking, their silos are still real, and their social system will inevitably fail to develop most of them in the ways of both broad and deep consequential thinking. It’s not that familiarization with the technology doesn’t matter; certainly, understanding current AI techniques and capabilities grounds one in the possible/cognitive. But in many ways, it does not open the door to the metacognitive: knowing what you don’t know. If all our researchers were as wise as Sai Wong, the old Chinese man whose horse ran away, we’d be in a better world. Good news, bad news, who knows?

And it’s not like turning to philosophers, or social scientists, is necessarily going to provide answers either. They suffer from the same low-evolution/high-sophistication thinking that afflicts the researchers.

I’m loath to criticize Russell’s analysis, because it’s such a good one. But there are some things I hope he considers. Submerged in his own Legalistic hierarchy (albeit an international one!), he praises rationality, or really perfect logic, as the unachievable but desirable goal, a rather low value set/v-Meme goal, though he does rescue himself with his solution. Evolution has given us fuzziness and heuristics because that turns out to be a deep Survival strategy, especially for the collective. Having freaks isn’t a bug; it’s a feature, and a way of storing low-probability-of-use information.

He also fails to consider that we might generate the ultimate super-intelligence, and still be unlikely to listen to it, especially if it required us to do something to facilitate its success. We might be game to help it solve cancer, but when it comes to global warming, or any other complex problem, the same political forces will resist changing our energy infrastructure.

And then there’s the inevitable inability of humans, dependent on their developmental level, to even understand what a super-intelligence might be saying. Sentience is sentience is sentience, and the same ceilings of understanding are going to be in play with humans relating back to the AI itself. Going back to the Space Alien problem, I’m convinced that if a group of friendly aliens showed up, we likely wouldn’t understand their solutions for us except through the lens of magic. And needless to say, I’m not the first person to have that perspective.

One thing to dump into the debate, which Russell peripherally alludes to, but which is integral to an empathy-oriented analysis, is temporal and spatial range of action. Russell does start talking about the value of altruism, with an example of creating an AI that is so altruistic, it takes off for Somalia to help ostensibly starving people on the other side of the globe. Here’s hoping he takes a look at understanding how an AI might be coupled to a human master, not just in examining preferences, but in optimizing its own behavior, with restricted sidebars (we call them laws!), across a person’s social network.
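
As a hedged sketch of what those restricted sidebars might look like in code (all the names below are my own illustrative inventions, nothing from Russell): the agent optimizes its owner’s preferences, but only over actions inside a legal whitelist, and only for people inside the owner’s social network:

```python
LEGAL_ACTIONS = {"remind", "schedule", "donate"}   # the sidebars: laws
SOCIAL_NETWORK = {"owner", "spouse", "neighbor"}   # spatial range of action

def choose_action(candidates, preference_score):
    """Pick the owner's most-preferred action, but only from the set that
    is both lawful and within the owner's social network."""
    feasible = [(action, target) for action, target in candidates
                if action in LEGAL_ACTIONS and target in SOCIAL_NETWORK]
    if not feasible:
        return None  # nothing lawful and in scope, so do nothing
    return max(feasible, key=lambda pair: preference_score(*pair))

def score(action, target):
    # Stand-in for a learned model of the owner's preferences.
    return {"donate": 3, "schedule": 2, "remind": 1}[action]

print(choose_action(
    [("donate", "somalia_fund"),   # altruistic, but outside the network
     ("coerce", "neighbor"),       # in the network, but not lawful
     ("schedule", "spouse")],      # lawful and in scope
    score))
# -> ('schedule', 'spouse')
```

The design point is that the laws and the network boundary are hard constraints applied before any preference optimization happens, so even a badly learned preference model can’t reach outside them.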

And, of course, Russell doesn’t spend much time on networks of agents, and how they might work together. Here’s hoping he reads some of my stuff on structural memetics. The idea of a networked collective intelligence isn’t broached, though it is actually inevitable. Computers in a stand-alone fashion weren’t much. But once we got the Internet, well, we all kinda know what happened next.

One framework that Russell doesn’t capture very well is the cognitive/metacognitive risk of powerful AIs. While most of the book directs itself toward wondering and warning about unpredictable, emergent behavior (a very real danger, and one that must be taken seriously), there’s also the problem of deliberate construction of AIs that map quite terrible value sets from their creators. And while I absolutely do not want to be dismissive of the threat from the former, I’m more worried about explicitly perfecting the technology that lets small and medium-sized robots go out and hunt people. This is a function of power and money, and we cannot escape our own need to evolve. As the old Bedouin saying goes, “Some people fear the future. But I fear what has already passed.”

So, get his book. I think it’s a great one for a book circle. And weave some of the v-Meme-y goodness in there. Then you can appreciate the foresight of the author, as well as help us all iterate our deeper concerns as we plunge into this unknown space.

One thought on “Space Aliens or Killer AI Robots? Which ones are gonna get us?”

  1. Random comments:

    1. That Bedouin quote is excellent. There is a person who identifies himself as a (Palestinian) Bedouin whom I saw give a talk a year or so ago at a church near my area. He now runs https://palestinenature.org, which looks like a very nice project, partly funded by grants from the UK and possibly Qatar (he spent many years as a student/professor in the USA in biology, specifically medical genetics, at places like Duke, U Tenn, and Yale, but saved his money and went home). He calls himself ‘a bedouin in cyberspace’. As a Palestinian-Israeli, he is routinely banned from traveling to many places in his own country, including ones controlled by different factions of Palestinians where he lives. He still manages to make it to the USA, Qatar, and the UK to raise money and give speeches.

    (The preacher of the church he spoke at, whom I have met once at a ‘Black Lives Matter’ protest, is Graylin Hagler, the main pastor. I view him as in the tradition of MLK. The church has a predominantly African American congregation and is sort of in the ‘liberation theology’ tradition, as opposed to the ‘prosperity bible’, fundamentalist (e.g. Jerry Falwell), or ‘holy roller’ styles. Anyone can go to that church, including LGBTQ+-identified people. But Hagler does have an ‘edge’ to him, and doesn’t have many nice things to say about white or Caucasian people. This is partly understandable, given the (relative) poverty you see in the black community.)

    A lot of places in his area are run down, and owned by absentee ‘slumlords’ who just collect rent from tenants, and don’t waste any money on luxuries for tenants like a waterproof roof, running water, heat, smoke detectors, or electricity. Tenants are told that if they feel chilly, they can walk a block away to the liquor store, get some malt liquor, and warm up. A few of the buildings in this area burned down as a result, a few people died, and other tenants ended up in homeless shelters. (A well-known person got his start as a slumlord before moving on to casinos, golf courses, and reality TV shows; perhaps those are based on the theory of v-memes, i.e. how to be a good apprentice for future success.)
    As far as the ‘Black Lives Matter’ movement goes, while police brutality and use of excessive force are real issues, BLM in this area never discusses things like the fact that there are 3 homicides a week in this jurisdiction (151 so far this year), and few, if any, involved police.

    2. Arthur Clarke’s 3 rules, cited in a linked post, seem correct: if an emeritus prof says something is possible, it likely is, and if they say it’s impossible, they’re probably wrong. The FPU experiment (Fermi-Pasta-Ulam +1) is my favorite example, along with ‘general equilibrium theory’ in economics. (This theory, due partly to K. Arrow, Hahn, and Debreu, shows our market economy optimizes; we live in a perfect world. Cream rises to the top, and we have the best persons as leaders and presidents, based on precedent and prescience.)
    Eminent professors will tell you this all the time; the proof is they got their retirement check. If you go through the math details, you will find they are correct: the system is at an optimum, or will be (if you are patient and can wait forever). Getting to an optimum is no more difficult than finding a needle in a haystack. (I once caught a green snake in North Dakota when visiting my grandma. I had never seen one of those, though there is a subspecies that lives in this area; you rarely see them, and most times you do, they are dead, because someone ran over them with a car or stepped on them. I put it on a haystack to give it a ‘walk’ and it disappeared.)

    3. I view myself as an ‘epistemological reductionist’ and a ‘utilitarian’. The whole world, as a first approximation, can be reduced to parts and mathematics, so it’s a cost-benefit analysis (path of least action). Once you get into quantum mechanics, collective phenomena, and Gödel’s theorem, you realize many discussions of reductionism and utilitarianism are what are called ‘linear approximations’. In other words, they describe costs and benefits, and the structure of the world, with one complete equation, which they expand into a Taylor (infinite) series and then truncate at the 2nd, or sometimes 3rd, term (the truncation is written out below). From an empathy view, this is like solving world hunger and poverty by dropping sandwiches from airplanes into famine-ravaged and poverty-stricken areas, along with a few dollars. Or you could say ‘have a nice day’ (I think there is a well-known song from the 1960s called ‘Try a Little Kindness’).

    People into AI and superintelligence whom I sort of follow are, I think, familiar with that book. I have seen AI programs which appear to be able to score higher on standardized tests or academic course exams than I can. They can also write better than me.
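
    For concreteness, the ‘linear approximation’ described in point 3 above is just a Taylor expansion cut off at low order:

```latex
f(x) \;=\; \sum_{n=0}^{\infty} \frac{f^{(n)}(a)}{n!}\,(x-a)^n
\;\approx\; f(a) + f'(a)\,(x-a) + \tfrac{1}{2} f''(a)\,(x-a)^2
```

    Everything past the second or third term, which is where the messy, coupled behavior of the world lives, gets thrown away.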
