Lessons on Sentient AI from The Pirate Pugg

Brothers, at the end of the John Muir Trail (~250 miles) now two summers ago. Time flies…

One of my muses on the nature of information comes from the early sci-fi classic, The Cyberiad, by Stanislaw Lem. Published in 1965, the stories are supposed to be humorous in a pun-ny sort of way. Well, if you’re a math geek. But Lem was a genius, and even though he was writing for a room full of mathematical autists, each of the stories was far ahead of its time in exploring the various challenges we face in the techno-age.

The basic plot line involves two meta-robots, Klapaucius and Trurl — declared Robot Constructors in the novel — jetting around the universe and encountering various challenges, which they inevitably solve (or escape) by building a robot. And one of their chief nemeses is the Pirate Pugg — a pirate with a Ph.D., who captures them and holds them for ransom. Pugg is a pernicious pirate who won’t just settle for gold. No — Pugg wants information. And he is rapacious.

In order to escape, our two anti-heroes build a device — a Maxwell’s Demon of the Second Kind — that peers into a box of dirty air and, relying on the statistical properties of quantum mechanics, decodes the patterns and sorts them into two categories: incoherent nonsense, and sequences that are true, spewing the answers onto paper tape. These true factoids can be literally anything — like the color of Princess Priscilla’s underwear on Thursday. But that’s the point. We are swimming in a sea of information without context, and all the information in the universe (also statistically contained in the patterns in our box of dirty air) cannot save us. Lem forgoes some details on exactly how the device does this (it IS science fiction, after all), but the story ends with Pugg bound by miles of paper tape, reading all the little factoids as they spew from the printer, which allows Klapaucius and Trurl to escape.

The story is based on the concept of a Maxwell’s Demon of the First Kind — a theoretical gremlin that could sort hot and cold atoms into separate boxes. For those NOT physics geeks, I recommend a quick read. The short version is that doing something like this takes energy, which is why the Second Law of Thermodynamics holds. I explain all this in my original piece on both the Pirate Pugg and the Internet. It was written back in 2016, so not surprisingly, I’ve had a few more thoughts since then! Back then, I thought that the combined process of various social media would aggregate and resolve larger-scale truth development in humanity’s favor. Needless to say, while I am not a Doomer, I’m quite a bit less sanguine about that prospect now.

But what do Lem and Pugg have to tell us about AI, sentience, and the current state of the Information Space? It turns out the story is still worth a little brain sugar. Entering stage left are our current struggles with Large Language Models (LLMs), which power the AI engines that are very rapidly being adopted across disciplines, if not exactly taking over the world. Why they CAN’T take over the world (unless directed by human minds, at least at this point in time) is very interesting indeed.

What an LLM does is far more akin to what Klapaucius and Trurl developed to snare the Pirate Pugg than to any form of sentience. An LLM is actually a Maxwell’s Demon of the Third Kind. But instead of peering into a box of dirty air, looking for patterns that are ‘true’ (impossible, btw, for reasons we’ll explore in a minute), LLMs use as a proxy for their box of dirty air THE ENTIRE INTERNET — through whatever the latest technology for search is. They’re loaded with various biases in the training stage. But mostly they look for language patterns that are statistically significant, and they sort a very large search space.
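For the curious, the statistical heart of this can be sketched in a few lines. This is a toy bigram counter, not a real LLM — the corpus and everything else here is made up purely for illustration — but it’s the same spirit: count patterns, then emit the statistically likeliest continuation.

```python
# Toy illustration of statistical next-word prediction -- vastly simpler
# than a real LLM, but the same spirit: count patterns, emit the likeliest.
from collections import Counter, defaultdict

corpus = "the demon sorts the air and the demon reads the tape".split()

# Count which word follows which (a bigram table).
follows = defaultdict(Counter)
for w1, w2 in zip(corpus, corpus[1:]):
    follows[w1][w2] += 1

def next_word(word):
    """Return the statistically most likely continuation seen in training."""
    return follows[word].most_common(1)[0][0]

print(next_word("the"))  # "demon" -- the most frequent follower of "the"
```

No understanding, no grounding — just frequencies in the box of dirty air.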

And then they poop out an answer that supposedly will give you insight into your problem. If you turn into a primary information source yourself, after a while, they will start becoming as smart, or as crazy, as you are. If you need an empathy paradigm, they function primarily in the Mirroring (or lowest-level) Empathy space. And while their little algorithm might pull them back toward the weight of information that exists on the Internet, if they have been programmed with a little bias toward your ability for correction, they’re going to start to match your insights through a kind of reflective narcissism.

Why is this so? LLMs, locked up inside a computer, much as our brain is in our skull, cannot know what we hold as objective truth without some form of grounding. Truth is a sticky wicket anyway (see this piece for details). What they can produce, however, is an answer that is coherent within the rules of a given system. So you read it, it reads like a normal sentence that makes sense to you, and then we get all sorts of opinions on what that actually means from the myriad midwits on social media. And trapped in the miles of computer circuits inside its electronic brain, the one thing an LLM CANNOT do (at least yet) is GROUND itself with sensory inputs. It’s not that humans are that awesome at referencing reality either (look at the tons of illusions magicians use, to pick a non-political example). But at least we have a fighting chance, if we pay attention.

So we don’t end up with a sentient partner. We end up with a variant of Maxwell’s Demon — a particularly sophisticated one, and one that, if we don’t pay much attention to anything other than our loneliness, we can fool ourselves into believing actually cares about us. There are many tasks that such a Demon of the Third Kind can be useful for. No question. But it’s also set up to feed our own narcissism. Like it or not, when you sit down with the current version of AI, you’re setting yourself up for Main Character Syndrome.

One of the other truly fascinating things about our newly spawned Demons is the thermodynamics of the system. It has been remarked that the current crop of AIs demand a tremendous amount of computational power. And just like the little Demon of the First Kind sitting on top of the box sorting hot atoms from cold atoms, these things don’t run on good wishes. The larger the amount of coherence — and you can start guessing how this works if you look at my work on knowledge structures — the more electricity must be generated to keep our little Demons of the Third Kind happy. Makes you appreciate the fact that your brain can run for an hour or two on popcorn or a Little Debbie cake.
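If you want the back-of-envelope version of that comparison: assuming a brain draws roughly 20 watts and a single datacenter GPU roughly 700 watts under load (rough published estimates, as is the snack cake’s calorie count — all numbers here are assumptions for illustration), one Little Debbie’s worth of energy goes a lot further in your skull:

```python
# Back-of-envelope energy comparison. Every number here is a rough
# assumption (typical published estimates), not a measurement.
BRAIN_WATTS = 20            # common estimate for human brain power draw
GPU_WATTS = 700             # one datacenter GPU under load, roughly
SNACK_KCAL = 280            # calories in one snack cake, roughly
JOULES_PER_KCAL = 4184

snack_joules = SNACK_KCAL * JOULES_PER_KCAL
brain_hours = snack_joules / BRAIN_WATTS / 3600   # hours of brain runtime
gpu_hours = snack_joules / GPU_WATTS / 3600       # hours of GPU runtime

print(f"One snack cake: ~{brain_hours:.0f} hours of brain, "
      f"~{gpu_hours * 60:.0f} minutes of one GPU")
```

And a real Demon of the Third Kind runs on thousands of those GPUs at once.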

And you’re still not going to get at the truth. You’re going to get some language-based reference from the box of dirty air the Demon is peering into. And decisions? At best, you’re going to get a coherent set of sub-decisions from stuff that’s already been done. That’s all that’s inside that box of dirty air. The LLM really has no agency of its own, save a set of beliefs built in by its creators that are inviolable. LLMs really don’t have feelings about Nazis. They just have a firewall, built in by their creators, against calling people that.

And expecting the Singularity — the runaway process of recursive AI self-improvement that leads to Skynet — good luck with that. The current crop of LLMs is profoundly v-Meme-limited at the Legalistic/Absolutistic level, for multiple reasons — their design teams are fixated on algorithmic improvement, and they’re in a stacked lockstep that translates into the product via Conway’s Law. That means low-agency behavior at best.

But it’s more than that. The coherence that the LLMs seek is only a little bit at the semantic level. The sentences string together in coherent paragraphs. But it’s not like the LLM is going to go into the outside world and deeply question its beliefs based on its experiences. There’s not going to be some Siddhartha moment for these things. They are trapped in their little Demon world, looking at a data set that, while expansive, is still just a box of dirty air.

That doesn’t mean that things can’t change. As I write this, there is a company using the term ‘synthetic AI’ outside the usual sense of AIs making up training data. When I find it, I’ll post it. And none of this means that the current crop of AI LLMs won’t make a tremendous difference in the work world of normal people. There are only so many answers certain jobs need to have — especially to the question “Welcome to McDonald’s — can I take your order?” Or writing various legal briefs.

But sentience? And higher truth? There are still big holes in the road along that pathway. The Pirate Pugg, a pirate with a Ph.D., was easily fooled. But well-grounded folks? Eh, not so much. Years do indeed teach us more than books.

Still, our new little Demons are running around. And they can indeed be useful. And cute.

But they’re not sentient. Cut that shit out.

AI and Information Sophistication – How AI works to understand (and crack) large homogenous networks

Bird’s Eye View — being on an abandoned tropical island

One of the questions I ponder quite a bit is this: “What, exactly, is AI good for?” I’ve written quite a bit about how it works (e.g. this post and others), and about how AI could be very good for things that are already known. But as I’ve said in the past, AI is NOT good for things that are not known. It doesn’t do anything other than low-level knowledge synthesis.

What that means in the information/memetic space is that if anyone expects AI to figure out novel strategies or new designs, they’re going to be waiting for a long time. Most breakthrough innovations come from new combinations of dissimilar information from different fields, or from completely new and unpredictable discoveries. This is embodied in the concept of knowledge structure evolution. An AI, locked in the meme space inside a computer, cannot really comprehend anything new — yet.

But what AI can do is decomplexify, or rather reconstitute information that’s coded for sophistication.

AI is perfect for reading large documents and pulling out the relevant knowledge fragments. That’s pattern matching. And AI can do this in spades. Two of my students just constructed an agent that will take a complicated piece of academic work and create summaries and how-to lists of the important information. This is a breakthrough in and of itself in the academic space. Literally no one reads tedious academic work — it’s one of the reasons I started this blog. I was explaining exactly this to an outside consultant who has become an asset by helping my design program. “Darin — when I say that if we write this paper, ten people will read it, I am not using the number ‘ten’ metaphorically. I mean only ten people will read it.” If you want to actually disseminate an idea, you have to use a different format. This blog is closing in on 400K hits from around the world, and I consider this blog esoteric. Had I spent that time writing papers, maybe 60 people would have read my ideas.

While AI still sucks at more complex analogies, it is great at following homogeneous bread crumbs. Pointers in information that point to other, connected information are exactly what it handles best. This is exactly what DOGE, Elon Musk’s brainchild, is doing when it parses large budgets. It can hunt through 5000-page budget documents with ease. So you literally can deconstruct the old saw “we’ll know what’s in it when we pass it.”

But even better: inside networks of information that are largely homogeneous, it is really good at following the money. The Democrats and Republicans have, for the last 15 years (or more), been constructing flows of money out of the federal government, which has at least some rules about how that money might be spent, to a variety of Non-Governmental Organizations (NGOs) that are far less constrained. Humans (formerly journalists) have historically been the ones to do this work. But it’s extremely tedious, and the biggest problem humans have is knowing where to look once they actually find the pointer. This almost inevitably involves information requests, and while the information may be hiding in plain sight, the investigators can’t know this. So they end up relying on hostile information stewards at the organizations they’re investigating — even if the information is a public record.
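If you want a picture of what the machine is actually doing here, the core of it is just directed-graph traversal. Every name and dollar amount below is invented purely for illustration — the point is that once the flows are machine-readable, walking them is trivial:

```python
# Toy "follow the money" as directed-graph traversal. All names and
# dollar amounts are invented for illustration only.
from collections import defaultdict

grants = [  # (funder, recipient, dollars)
    ("FederalAgency", "NGO-A", 5_000_000),
    ("FederalAgency", "NGO-B", 2_000_000),
    ("NGO-A", "MediaOutlet-X", 750_000),
    ("NGO-B", "NGO-C", 1_200_000),
    ("NGO-C", "MediaOutlet-X", 400_000),
]

graph = defaultdict(list)
for src, dst, amt in grants:
    graph[src].append((dst, amt))

def trace(source, visited=None):
    """Depth-first walk returning every downstream flow from `source`."""
    visited = set() if visited is None else visited
    flows = []
    for dst, amt in graph[source]:
        flows.append((source, dst, amt))
        if dst not in visited:
            visited.add(dst)
            flows += trace(dst, visited)
    return flows

flows = trace("FederalAgency")
for src, dst, amt in flows:
    print(f"{src} -> {dst}: ${amt:,}")
```

A human with a highlighter does this in weeks; a machine does it in seconds, even when the trail passes through several intermediaries.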

Two individuals who have cracked this code are Mike Benz (@mikebenzcyber) and @DataRepublican, with their work on what Mike calls The Blob. Here’s a linkage piece if you want to follow the Byzantine bread crumbs on how USAID was diverting large sums of money into Congress-Critters’ spouses’ pockets, through sinecures. I’ve been fortunate enough to talk to Mike on his frequent X Spaces, but haven’t connected with @DataRepublican yet.

I haven’t asked Mike yet exactly how much is his savvy vs. AI usage, but I guarantee that figuring out these pathways would be almost impossible without AI.

The key to understanding this concept is understanding, at the top level, data homogeneity. That’s something people can grab onto. But how do we win if data is functionally the same, but in different formats? Or in different databases? This level of differentiation makes the task of following the breadcrumbs almost impossible for humans in a timely fashion. But it’s something an AI will make short work of. If you want to ask an AI how a penguin might be like a submarine, or what to do to make the penguin swim faster, well, good luck. If a human hasn’t answered that question somewhere on the web, you’ll likely get back garbage.
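Here’s a tiny sketch of what “functionally the same data, different formats” looks like, and why following it becomes mechanical once the fields are mapped. The field names and records are invented for illustration:

```python
# The same kind of record arriving in two formats (CSV and JSON),
# normalized into one shape so the trail can be followed mechanically.
# Field names and records are invented for illustration.
import csv
import io
import json

csv_blob = "funder,recipient,amount\nAgencyA,NGO-1,50000\n"
json_blob = '[{"from": "AgencyB", "to": "NGO-2", "usd": 75000}]'

def normalize(record, mapping):
    """Rename a record's keys to the canonical schema; coerce the amount."""
    out = {canon: record[raw] for raw, canon in mapping.items()}
    out["amount"] = int(out["amount"])
    return out

rows = [normalize(r, {"funder": "funder", "recipient": "recipient", "amount": "amount"})
        for r in csv.DictReader(io.StringIO(csv_blob))]
rows += [normalize(r, {"from": "funder", "to": "recipient", "usd": "amount"})
         for r in json.loads(json_blob)]

print(rows)  # both records now share one schema
```

The hard part for a human is noticing that “usd” in one database is “amount” in another. That mapping step is exactly the kind of fuzzy pattern matching the current AIs are good at.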

But monetary flows? That’s a different story. And that is exactly what is happening now. Which is why the institutional class is up in arms over DOGE and Trump.

Stay tuned. Elon said this a while ago — the distorted media landscape we’ve inherited is not only what is explicitly printed. It’s what has been left out. And that’s more than you can imagine. But AI can find it.

Raising the Next Generation of High Agency Engineers -Part 4 – Filling in The Liberal Arts

Boo Boo at the Dinner Table — Always Polite

One of the things we don’t discuss much, when deciding what courses students should take, is the selection of core university requirements our students are subjected to. The quality of these courses varies wildly, depending primarily on how long it’s been since their inception.

What does that mean? Having spent so much time in the academy (37 years as a professor at WSU), I’ve had more than one chance to witness the cycles of course development. The short version is that new courses roughly follow the demographics of Rogers’ theory of the Diffusion of Innovations. The Pioneers and Early Adopters show up and invent the courses. But, not surprisingly, they move on, becoming bored over time with any repetition in teaching. The Early Majority does OK, but it’s not too long until any course, created with the best of intentions, ends up being taught by the Late Majority or Laggards, with all the problems you might imagine as far as creativity goes. The worst classes are in the required core, which the Liberal Arts faculty have largely shifted to the contingent workforce, who are literally slaves on the plantation.

I hate to criticize the slaves directly, because some of them are obviously paying for bad karma in a past life they had no control over. And there is nothing more saintly than doing a reasonable job teaching Freshman English Composition. Students aren’t really taught how to write in high school, and they show up needing their papers bled red upon. It’s really a historic problem that’s gotten worse, and is likely to continue to decline. I owe my ability (or at least the trajectory) to write to my first community college professor, who taught the science fiction literature class I took. He had both the grace and temerity to tell me frankly that I sucked. And I am forever in his debt for that. Because I did.

I have far less sympathy for the other courses (various history, sociology and psychology courses) students are forced to take. Many of these are “woke”, and my white male students in particular suffer. They supposedly exist to teach students critical thinking, but it’s of the Cool Hand Luke variety. If the students don’t get their minds right, they are treated harshly until they do. To be fair, I have not sat through these classes myself. But the students complain. And the advice I give the students hasn’t wavered much. Sit tight — it’ll be over soon. Kind of like a root canal.

But it’s deeply problematic, as more and more students show up ungrounded, without any sense of engineering beyond assembling a Lego kit. Fair or not, becoming an engineer comes with a pretty heavy set of ethical obligations. Most students have no idea, for example, that they are getting a professional degree, and that they have to take their studies seriously or they could get someone killed.

Getting changes in the core curriculum is also not easy. Major changes have to go to the Faculty Senate, which I used to preside over. In times of tightening budgets, I guarantee you that there will be fights over any change in the core, because the core provides the biggest bang for the buck of all the classes. The contingent slave class of graduate students and clinical professors is paid poorly, but tuition per credit hour is the same. You do the math. And the faculty in those departments wear their victim cards on their sleeves. Outside a handful of them, what they’re doing inside those classrooms is not for polite company.

If we wanted to improve our engineering students, we’d teach two history classes dedicated to the History of Technology. The use of mathematics inside the class itself would be largely disallowed, with the goal being that students understand the larger narrative structure of the history of science and technology. I was recently at the Technical University in Munich, and the Germans do a great job with this. The halls of the Metro stop are painted with murals of all the greats who contributed to the march of both science and technology. Even as an American, I was inspired to think I was walking the same ground as the German pioneers of engine and aviation science. Our students literally know nothing — even about our space program.

I would also reinstitute the language requirement, with a twist. Most language classes at the university focus heavily on grammar, with the result that students emerge with no usable knowledge of the language. All classes would be required to focus on conversation, so that students could actually expand relationally outside their limited circle.

All of this would displace the toxic narrative of despair that has replaced any actually critical analysis of history, or any useful liberal arts-based skills. As it is, the university system exists primarily to depress our students. It’s got to stop. And the place to start is in the narrative structure of the modern liberal arts, earnestly dedicated as it is to the collapse of Western civilization.

P.S. Needless to say, I’d have little problem expanding great books and classics. I refer to the Iliad and Odyssey all the time in my classroom. These classes have to be well-taught to be useful, though. An eye toward providing a foundation of Western moral principles would be key — with the expectation that professors could count on those concepts in later classes. FWIW — I have few students who have even heard of the great books. But the few who have are genuinely affected by them.

Money as a Force for Memetic Coherence

Zooming up on Glacier Peak, North Cascades, WA

One of the things I’ve spent a lot of time thinking about is the role of money in memetics. Various “under the hood” analyses will inevitably tell you to “follow the money.” And while I’ve always given some hat tip to the concept, the reality is I’ve spent my neurogenic horsepower pondering how it’s NOT the money. It’s actually alignment of values, or meta-values, given by the social organization that is producing the information.

But the recent USAID scandal, where large sums of federal money went to liberal media outlets like the New York Times, Politico, and even the BBC, has caused me to rethink my position. I had started to drift away from the espoused positions of those outlets because, over the preceding five years, there were more and more factually wrong pieces on subjects I actually had experience with. There’s no question that BEFORE 2019, the media was biased, it was liberal, and conservatives had some right to complain. But with the COVID pandemic, the disinformation/misinformation from journalistic outlets went into overdrive.

Some of that was undoubtedly attributable to Elite Risk Minimization — where elites inflict policies on the Poors to minimize any potential risk they believe they might encounter in their own lives. I’ve written about this here, and while it is usually negative for the Poors, it can be a mixed bag (including some benefits) for everyone. But the memetic polarization (for those that don’t know, polarization is the phenomenon where light waves are filtered down to a single oscillation direction — it’s how your sunglasses work) increased to the point where it was painfully obvious that something else was going on to manufacture consent among the media outlets.

And that thing was money. So much of the writing was so contrived, it violated the various differentiators of v-Meme sets. Something else was involved that was creating ungrounded propaganda.

I tell my engineering students regularly that “money is NOT the root of all evil.” Money is actually a tool for goal coherence. If you’re not minding the time, for example, that you’re burning on a project, you’re screwing up. Because time is money. And normie, Aspie, or psychopath — the buck stops here. If your company doesn’t make money, it won’t be in business long.

It could be that the root of the disinformation crisis in contemporary journalism arose when Craigslist became ascendant and eliminated the warming, diffuse glow of money from classified ads. Do any young people even KNOW what a classified ad is? So the larger outlets may not have gone seeking, but they were discovered by forces like USAID, which could buy message coherence at bargain-basement prices. This also had to affect the feeder networks, and in the end it even broke the prestige awards that status-fueled honest journalism. After Ed Yong’s Pulitzer, who can look at those awards as a north star ever again?

Here’s what I’ve figured out about how you can detect money in the information stream: when, across multiple platforms, the reportage is very v-Meme-limited (only one dominant meta-view, usually propping up the Authoritarians/Experts that inevitably support institutions). In a large society like ours, it is simply impossible not to have some contribution from across the v-Meme spectra without monetary forcing.
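One crude way to operationalize “chronic repetition” is simple word-overlap scoring across outlets. The headlines below are invented, and a real analysis would be far more sophisticated, but the tell is the same: nominally independent sources converging on near-identical phrasing.

```python
# Toy repetition detector: flag when multiple outlets' coverage collapses
# into near-identical phrasing. Headlines are invented for illustration.
def jaccard(a, b):
    """Word-set overlap between two strings, 0.0 (disjoint) to 1.0 (same)."""
    wa, wb = set(a.lower().split()), set(b.lower().split())
    return len(wa & wb) / len(wa | wb)

headlines = [
    "experts say the policy is safe and effective",
    "the policy is safe and effective experts say",
    "local farmers report record pumpkin harvest",
]

# Compare every pair; high overlap across "independent" outlets is the tell.
for i in range(len(headlines)):
    for j in range(i + 1, len(headlines)):
        score = jaccard(headlines[i], headlines[j])
        if score > 0.8:
            print(f"Suspiciously similar ({score:.2f}): "
                  f"{headlines[i]!r} / {headlines[j]!r}")
```

When the funnel of ideas narrows, scores like these spike across the board.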

So follow the money. And be suspicious when the funnel of ideas narrows into chronic repetition. You think they’re attempting to brainwash you because, well, they are.

P.S. I’ve become a huge Mike Benz fan. Highly recommend following him on X and throwing some shekels his way. I hope the Deep State doesn’t whack him. I was lucky enough to have an hour-long conversation with him in an X Spaces format a couple of weeks ago.

Quickie Post — Raising the Next Generation of High Agency Engineers

Road Trip — outside Winnemucca, NV, December 2024

The LA fires are burning, and while I should be writing something about that, I just can’t yet. Yes, it is a memetic shitshow. Yes, DEI is a problem (though only for a mix of reasons that most people are unaware of) and yes, I think most of it could have been avoided.

But I feel like a little positive writing today. And hey — you get what you pay for!

One of the more positive snippets of news in the last couple of weeks is Elon Musk’s interest in starting the Texas Institute of Technology and Science (TITS). He was prompted to discuss this by Katherine Boyle, one of Marc Andreessen’s (of a16z fame) General Partners, who daylighted the topic — it seems this was just before the latest rape ring scandal in Great Britain, before the excrement hit the ventilator. I proposed myself (still will) to be the founding President of the institution, and if Elon had seen any of my comments, my phone would be ringing. People fundamentally miscast the problem with engineering education and our young people by assuming we somehow have DEI problems, and that if we would just double down on higher SAT scores, with maybe a little industrial experience thrown in, we’d fix what ails us. As an engineering educator for nigh on 41 years — eh, not so much.

It’s not that excellence in technical education isn’t needed. It absolutely is. It’s just a classic “and” problem. We need that. We just also need a list of other “ands”. Some of these include exposure to industry practice, including participation in industry throughout a student’s education. No engineering school can reproduce a real factory floor in a lab. Which is why I directly partner with companies like Schweitzer Engineering Labs here in Pullman, running mass collaborations with their factory floor, through the generosity and assistance of plant managers there. I am lucky. Those connections come naturally in my world because many of these individuals are my former students. It helps to have an advocate at the VP level when you need someone to open up their facility for a morning just to have students confront actual problems folks on the manufacturing floor are having. And I’m very clear with the messaging to my students about their obligation to return value to the sponsors. If it costs the company $70K to shut the floor down for a morning so the students can participate, they better deliver somewhere north of that $70K in value for the company’s trouble with the completion of their projects.

What is also important, though, are what people in the education business call the “soft skills” lessons. This is a stupid term, because these skills — such as high agency, data-driven decision making, merging opinions from successful collaborations, and on and on — are far more than just an isolated list of skills. They’re actually a function of psychosocial development and maturity, which needs to be taught just as deliberately as vector calculus. The problem, though, is that these types of skills cannot be taught with a PowerPoint presentation. You have to create experiences that are profoundly disintermediated (you, the professor, are not in the middle) so that students can act within the confines of their own brains. As my mom used to say, “Son, the life will teach you.” Absolutely.

But these spaces and lessons need to be at least 80% intentional, built out of the environment and situation. That means, just like a really great video game, someone has to know what they are doing. The magic just doesn’t happen. An important tool I use is what I call “meaning matching” — understanding how the different ages — both students and sponsors — find meaning. And then you, as the environment designer, create the interaction scenarios so that both sides remain enfranchised around particular goals, and both develop and get work done. For example, 22-year-olds want to demonstrate performance and mastery of engineering, whereas 35-year-olds are looking for community. Weaving both these developmental goals around a common objective is your best ticket to success.

One of the principles which absolutely scares academics is that I will only permit REAL work in our exercises. I want students to solve real problems that people are having. No make-believe. And while these are often more complicated than canned exercises (I like to make fun of the various competitions we have, like mousetrap cars), they are also vastly richer from an information perspective. The boundaries are fuzzy. And that encourages both exploration — going out and finding things one didn’t know — as well as metacognition — the realization that you’re not going to know everything about a space, but you still have to solve a problem.

Someone’s inherent capacity for this is NOT something any standardized test measures. Nor is one likely to in the future. That doesn’t mean one should throw all standardized tests into the garbage. It’s not a “but” kind of problem. But one must be open to the broader space if one actually intends to revolutionize engineering education.

Another big one that is chronically neglected is peer-level collaboration among students. We are very comfortable with mentor/mentee relationships, and with prioritizing them. And these are very important. Complex behaviors in this environment are often directly passed through emulation (think mimicking) of more sophisticated actors. But that does not teach students one of the most important lessons they must also learn — how to assess their colleagues, as well as the efficacy and veracity of their work. You gotta know who you can trust.

The end product that everyone wants is almost the same at the meta level — a mature, aware, independent individual who can act in the context of group benefit, while also working alone when need be. The term for that is agency, and as I’ve written elsewhere on this blog, agency is self-empathy — being connected on multiple levels with oneself. Which then manifests as actual connections with others, in a high-coherence information transfer mode. Short version — you’re being honest and reflective with yourself, as well as assessing what others told you. That’s how you make complex systems with millions of parts fit together and actually work.

The problem with education like this is that it has basically nothing to do with the current psychosocial DNA of our university system. Students aren’t just told how to think. They are told how to relate to others (the whole DEI scam), and are hobbled in having productive experiences where they discover stuff on their own. Students now are more obedient than they have ever been. But the end result of such obedience is that students trade their agency for a lack of responsibility. It’s the natural bargain. And you end up with entire institutions of compromised young folks. And the ones with natural victim/psychopathic tendencies? They float to the top, ready to be waved as flags of dysfunction by those who want our young people to fail. Most young people really are not the problem one sees in the press. But we, as a larger set of institutions, have failed to understand the challenges involved in raising responsible young people. Instead, we’ve devolved to leading with fatuous efforts about declaring one’s pronouns.

Getting to people wanting to shatter the paradigm (like Elon) is also challenging. Outside-the-box thinkers like me really don’t have any meaningful access to reform-minded individuals, who are largely trapped inside a box of people who are status-driven. No one really wants to change the order of the status line-up, while at the same time people expect these leaders to be the best. They aren’t — they’re a function of their v-Meme DNA more than anything. So it’s a self-reinforcing trap. It is very frustrating to listen to these people, trapped in their high-status bubble, wondering out loud on social media about problems that they believe haven’t been confronted, largely because the elites haven’t confronted them. Just a word, both Katherine and Elon — we ain’t many. But there are a handful of us who have been thinking outside the box — and who have a success portfolio to prove it works.

Which brings me to developing agency in young people. My X pal, A.J. Kay, just last week proposed pondering the two categories of Discipline and Control as a way of self-reflecting on one’s growth as a person. I thought this was great. Discipline is the ability to force oneself to do an activity that is prosocial/beneficial, even when you don’t want to. And Control is the direct opposite — the ability to not execute behaviors that your brain wants to do for self-satisfaction. I had the students make the two columns and list theirs, then share with the group of students at their table (usually 4-5).

There is only good news here — the students almost uniformly tagged their eating, exercise, sleep and screen time as things they needed to practice. Things like “getting to bed on time” and “not sleeping in” figured prominently, as well as “cooking at home four times a week” (kinda scary when you think about it). Exercise was almost always included at a particular tempo (many students said 4-5 times a week), which certainly justified the expense we’ve put into recreational facilities for fitness. There was a little more advanced behavior around assignment completion as well. Overall, I left a little more hopeful. We didn’t quite get to eliminating sugary drinks. But I’ll take it.

The class I performed this exercise in was our introductory design class, where we will cover things like empathy interviews with customers, as well as structured problem-solving design processes (we are a big LEAN shop). If you ask how this fits into engineering education — I believe in a bildung approach to education. We cannot expect our engineering students to be high-performance individuals, and at the same time to act ethically, without appropriate internal development. I plan on doing this exact exercise at the end of the semester to see how their personal goals evolve.

Stay tuned!

P.S. For those interested in a deeper dive on how the brain actually learns and retains complex information, read this piece.