Insights on Knowledge Structures, Malcolm Gladwell, and 10K hours

From the Sept. 20 Climate Strike, Moscow, ID

For those who totally get the idea of Knowledge Structures that I've written about here, this post might seem a bit redundant. But I find that this deep nugget of meta-systematic understanding is one of the most elusive things I write about. Which is saying something. So let's go at this from a couple of different angles, and hopefully we'll all be wise after the event.

To start: we, like ALL species, have a brain and nervous system that controls our activities. Not one creature on this planet that moves does so without some combination of neurons, arranged in myriad ways, that lets it do what it does. This is true for jellyfish, squid, octopi, ground squirrels, crows, and humans.

We all have a brain.

Depending on the species, and on requirements that really split into two camps (evolutionary success as an individual, and evolutionary success as a collective; almost no species survives solely by parthenogenesis!), our brains wire themselves through EVOLUTIONARY ADAPTATION. This evolutionary adaptation, especially in group organization, functions on principles of convergent evolution. No matter what brain you start with, when it comes to group organizational dynamics, the requirements are the same.

AND, depending on the species, that brain is split into two meta-parts: hardware and firmware (which are easily studied) and software (not so easily studied). Animals like fish are pretty much all hardware and firmware: they swim in schools, and if you're something like an angelfish, you have a couple of tricks to avoid getting eaten by bigger fish.

The more complex behaviors you have access to, the more software matters. Dogs have far more software than yellow jackets. Or mice. My dog can recognize when we're going over to a particular friend's house (a friend he likes) by the fact that I'm carrying a bottle of wine. He immediately goes to the door. When he sees me dress in my bike clothes, he gets up on the couch. He knows he's not going anywhere.

That's SOFTWARE. He did not inherit that ability from his parents. Or his genes. He inherited the SUBSTRATE, of course. But his complexity of thought evolved through his relationship with me, and with that other friend, whom he honestly likes.

We have well-grounded ideas about the hardware of the brain. Lots of brains have been taken out of lots of skulls, weighed, dissected, and whatnot. fMRI techniques also tell us quite a bit about how those parts are wired together. Scientists can run experiments over and over, building reliable information about the hardware. We still don't know everything about the hardware, but we've made a ton of progress. Empirical research can tell us much of what we need to know.

But when it comes to software, we’ve started with some extremely bad paradigms. Those paradigms made sense, before we evolved our own software for understanding our own software!

The worst paradigm we have for meta-understanding is CULTURE. CULTURE is, by definition, attributed to everyone in a large, somehow-connected group. There is no 'independent specificity' in the cultural paradigm. If you are in a culture, you do certain things, even if you don't! You alternately eat pork/don't eat pork, wrap things around your head/don't wrap things around your head, and on and on. Many of the things that we do in the context of culture come from 'somewhere'. But the problem with 'somewhere' is that no one's quite sure where it is. I've said previously on this blog that culture is the result of arbitrary mores mixed with Survival v-Meme information, specific to past trauma a group has experienced. Culture can work for or against the long-term survival of a group. There is no way of telling a priori.

But one thing is for sure: most culture (with the exception of epigenetic bias) is in the software. We didn't inherit a predilection to worship cats, for example (toxoplasmosis notwithstanding! 🙂 ), yet we, as humans, have had subgroups that, for a time, worshiped cats.

We also have other ways of characterizing brain software. One of our favorites (which is just impossible to bust!) is professional discipline. I'm an engineer, and sometimes when people read my stuff, the first thing out of their mouths is "Oh — you're an engineer. That's why you thought of all this stuff." If only they could see my colleagues' faces when I start talking about this stuff…

OK — here’s the moment of realization.

If you look out at the vast array of computer software out there, you might see a piece of accounting software. You might see a computer drafting package. You might see a video game like Civilization. You might see a piece of software that enables you to lay out a quilt. There are literally a BAZILLION different types of software out there.

But you’d have to be a fool to assume that how the software is structured in each of these applications is fundamentally, irrevocably different. You’d assume (correctly) that there were some set of reproducible, underlying patterns that the surface-level application would sit on. And if you were a software coder, you would learn these core patterns, and implement them REGARDLESS of what the surface-level application was. You’d work with a domain expert to assemble the code. You’d use things like linked lists, matrices, etc. to get the result you wanted.
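To make the analogy concrete, here's a minimal sketch in Python (all names hypothetical, nothing from a real package): one underlying pattern, in this case a plain linked list, carries two completely different surface-level domains without changing its own structure.

```python
# Hypothetical sketch: a generic linked list that knows nothing about
# accounting, drafting, or quilts. The "surface-level application" lives
# entirely in the payloads; the underlying pattern stays the same.

from dataclasses import dataclass
from typing import Any, Optional


@dataclass
class Node:
    payload: Any                    # domain-specific "surface" knowledge
    next: Optional["Node"] = None   # link to the next node


class LinkedList:
    """Generic container; the reusable core pattern."""

    def __init__(self) -> None:
        self.head: Optional[Node] = None

    def append(self, payload: Any) -> None:
        node = Node(payload)
        if self.head is None:
            self.head = node
            return
        cur = self.head
        while cur.next:
            cur = cur.next
        cur.next = node

    def items(self):
        cur = self.head
        while cur:
            yield cur.payload
            cur = cur.next


# Same structure, two very different applications:
ledger = LinkedList()
ledger.append({"account": "utilities", "debit": 120.00})

quilt = LinkedList()
quilt.append({"patch": "nine-square", "color": "indigo"})
```

Swap the payloads and the surface application changes; the underlying pattern does not.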

OK. Here's the punchline. HUMANS DO EXACTLY THE SAME THING IN THEIR BRAINS. With reproducible patterns, what we call a basis set. This basis set is given by our Knowledge Structures. Depending on how evolved the person is, they use that basis set of knowledge structures as the foundation on which to lay their SPECIFIC knowledge.

And where do these very SPECIFIC structures come from? Because we are a collective animal, they come from the different relational modes we use with each other. The deep patterns of our relationships serve as a master template, which we then reuse as the template for other knowledge.

Canonical Knowledge Structures

Why do we do this? Now we get Malcolm Gladwell to enter, stage left. This is what we practice. For what it's worth, I'm not a believer in the 10K-hours rule he says dictates mastery. But if you understand 10K hours as about five to ten years, it IS interesting how we move up developmentally to the higher stages (after that, all bets are off!) in roughly ten-year increments. SUPER-rough. But still interesting. We practice relating all the time. We use our full stack of neural function to do it. We even have a background processor that sorts everything for coherence (read up on the Default Mode Network here), focusing on social working memory or autobiographical tasks (same cite).

OK, pause. Take a deep breath. We spend TONS of time relating to other people, and reviewing how we relate to other people. We BURN these meta-patterns into our brains. So it should come as no surprise that the things we practice far more than 10K hours serve as the template for how we pick up other knowledge. Those are the fundamental knowledge structures that we plug the specifics into. We may become computer-aided drafting experts, but it should come as no surprise that if we're organized as a rule-based hierarchy that dictates treatment of the different levels, we look for rules that govern HOW we execute our craft. The specific knowledge fragments (like CTRL-F moving the model out) end up as ritualized routines that our brains are used to practicing.

It should also come as no surprise that if we don't practice changing our minds, we lose that ability. And how do we practice that? By being receptive to others' moods and thoughts. EMPATHY.

And how, you might ask, can we make that practice meaningful? Through self-separation — realizing that if your partner is having a bad day, it's not YOUR bad day. That critical objectification and attention to the data stream from your partner is EXACTLY the same practice as being aware of confirmation bias in other areas of thought.


OK, now get ready to take a BIG LEAP!

While individual species may have unique problems (a snake might have to figure out how to swallow an egg, for example), when it comes to species that function as a collective, the problems of inter-agent coordination are THE SAME. The problems scale up as the species reaches higher densities and greater numbers, in more varied environments, and so on. But they are the same meta-problems. Ensuring individual survival, fairness in large groups, who leads the way: all these value sets are shared in coordinated groups. So, by function of convergent evolution, sentience MUST be the same. That doesn't mean there aren't problems (bandwidth, processor speed, efficiency, all the things we see when wiring different computers together, still matter), but the larger patterns remain the same.

And IF those larger patterns of social coordination remain the same, then the same KNOWLEDGE STRUCTURE TOOLKIT also remains the same! Of course, the individual answers will vary, dependent on, and limited by, the individual characteristics of the animal (or human!). A snake may swallow an egg differently than an MBA accountant might swallow an egg. Individual characteristics will matter. But the problems of inter-agent exchange will remain the same, depending on what the collective is attempting to do.
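For the software-minded, here's a minimal sketch of that claim, with every class and answer invented purely for illustration: the coordination questions form one fixed interface, and each kind of agent fills in its own answers, limited by its own hardware.

```python
# Hypothetical sketch: the inter-agent coordination problems are fixed,
# while each species (or person) supplies its own answer.

from abc import ABC, abstractmethod


class CoordinatedAgent(ABC):
    """The shared meta-problems every collective has to answer."""

    @abstractmethod
    def ensure_individual_survival(self) -> str: ...

    @abstractmethod
    def share_fairly(self, resource: str, group_size: int) -> str: ...

    @abstractmethod
    def decide_who_leads(self) -> str: ...


class Starling(CoordinatedAgent):
    def ensure_individual_survival(self) -> str:
        return "stay close to the nearest neighbors in the flock"

    def share_fairly(self, resource: str, group_size: int) -> str:
        return f"scramble over {resource}, no ledger kept"

    def decide_who_leads(self) -> str:
        return "whoever banks first, the flock follows"


class HumanTeam(CoordinatedAgent):
    def ensure_individual_survival(self) -> str:
        return "negotiate roles and safety norms"

    def share_fairly(self, resource: str, group_size: int) -> str:
        return f"split {resource} {group_size} ways, track who got what"

    def decide_who_leads(self) -> str:
        return "status, rules, or trust, depending on the v-Meme"


# Same questions asked of every agent; only the answers differ.
for agent in (Starling(), HumanTeam()):
    print(type(agent).__name__, "->", agent.decide_who_leads())
```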


Now get ready to take another BIG LEAP. We can now see that, while an individual may have a bazillion arbitrary ways to peel an egg themselves (or swallow it!), when it comes to coordinating sharing the egg, there is a profound subset of classifiable actions. And these all map to the canonical knowledge structure set. Survival? Swallow the whole thing instantly! Performance/Goal? What's the right way to get the most done?

One can also see that without more complex, empathy-driven social structures, the level of complex knowledge is PROFOUNDLY limited. In academia, if you're stuck in a status-based social structure and someone tells you that you're wrong, you SIMPLY CAN'T HEAR IT. At least not immediately. Or you have to follow an externally imposed rule set that says how you're supposed to buffer that kind of input. (We call that 'collegiality', which kind of works.)

What you CAN'T do (or can, at best, only reluctantly do) is forfeit status, admit you're wrong, and incorporate a new understanding into your own. UNLESS it's a Survival-level crisis. The world isn't flat, the Earth isn't at the center of the solar system, and if you persist in insisting otherwise, you'll be driven out of the academy at a tribal level.

So empathy rewards complexity, and couples it to both the social structure and the concomitant knowledge structure. You might discover complex thoughts ginned up by others, but it's going to be very difficult for you to generate your own if you don't have a little empathy inside your own head.

These things are intrinsically coupled.

So, the quick takeaway? Knowledge structures are the deep meta-patterns that all our surface-level knowledge comes from. The structure arises from how we relate socially, which is what we practice for thousands of hours before we transfer those burned-in brain patterns to other knowledge. And the complexity of those knowledge structures comes from empathy in the social structure, which, when evolved, grants us agency to think our own thoughts, as well as fluxing our brains with a data stream of input from others.

AND because the problems of inter-agent coordination in a group are the same for birds as for humans, yet dependent on the core processing capacity of our different, respective brains, our ability to execute coordinated behavior depends directly on that hardware/software combo every animal has.

AND since most of what humans do is in the software, it becomes VITALLY important to develop that software. And if we want that software to handle complexity, we have to develop empathy and agency (self-empathy). It is inescapable.

As we relate, so we think (think/thank Malcolm Gladwell for that, if you must!)

We will not be smart enough without the wisdom of an aware crowd. Tip of the hat to Ryan M. for that encapsulation!

Postscript: while Gladwell and others' 10K-hours estimate actually rocks it for the knowledge structures of all the lower v-Memes up to Legalistic/Absolutistic (think of a mastered tennis stroke as an algorithm executed endlessly by someone working on mastery of a movement), a new book, David Epstein's Range: Why Generalists Triumph in a Specialized World, explains mastery of scaffolded heuristics, the next KS up the ladder above algorithmic thought, just above the Trust Boundary.

I did write to Epstein, hoping to help him see the larger pattern. He did not write back. I did have a great exchange with one of the people in his book, though, so I'm always going to bet on empathy as the long-term path, regardless of the frustration associated with connection.
