Back to Basics — Values Transfer into Intelligent Systems through Conway’s Law

[Image: Betel nut tree plantation, Taiwan]

One of the interesting hypotheses (one that DEFINITELY NEEDS MORE WORK, BUT IS LIKELY TRUE) is the idea that, through Conway’s Law, values from a social organization transfer into the products that organization develops; the products will inherently embody the values the organization holds. We can revisit the Intermediate Corollary in these three slides:

[Slides: the Intermediate Corollary]

There are big implications here — by linking the value sets/v-Memes of social organizations to their instantiations in knowledge structures and actual design, we are implying that, at least for certain products, more than just a canonical structure connection carries over: actual values move into the product being created.

Let’s think about a couple of examples that alternately illustrate (or don’t) this principle.

What if your organization was designing tools for the general public, but no one in your organization was left-handed? What would happen if your organization was more focused on refining its current brand, and did not engage in customer research? Odds are your organization would have a hard social boundary, regardless of the relationships inside that boundary. And if that organization had inherited past product lines, refinement of those products would likely be the primary driver of any R&D efforts. AND, through omission, your organization, with its inherent bias against left-handed people, would only test such tools inside the organization. The products would be refined for right-handed people, and that low-empathy, discriminatory perspective would transfer into the product. Sure — it would be plenty safe for right-handed people. But left-handed people? How would you know to care? Check out this publication — for most folks, handedness/neurodiversity is non-triggering, even though the conclusion is stunning, and that’s the reason I’m using it!
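
To make the omission concrete, here’s a minimal, invented-numbers sketch of that testing boundary: a simulation where a tool refined only on in-house, all-right-handed testers looks flawless inside the organization, and fails at a meaningful rate in the general public. The 20% left-handed success rate is a pure assumption for illustration; the roughly 10% left-handed share of the population is the commonly cited figure.

```python
import random

random.seed(42)

def tool_works(user_is_left_handed: bool) -> bool:
    """A hypothetical tool refined only on right-handed testers:
    it always works for right-handers, and fails most of the
    time for left-handers (assumed 20% success rate)."""
    if not user_is_left_handed:
        return True
    return random.random() < 0.2

def pass_rate(population) -> float:
    return sum(tool_works(u) for u in population) / len(population)

# In-house testers: nobody is left-handed (the hard social boundary).
in_house = [False] * 200

# General public: roughly 10% left-handed.
public = [random.random() < 0.10 for _ in range(10_000)]

print(f"In-house pass rate: {pass_rate(in_house):.0%}")  # 100%
print(f"Public pass rate:   {pass_rate(public):.0%}")    # noticeably lower
```

The in-house number says “ship it.” The population number says otherwise, and the organization that only tests inside its own boundary never sees it.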

Clearly, this isn’t necessarily true for all the parts of larger products. Teams of aerodynamicists have refined wing cross-sections according to the laws of fluid dynamics for the past 100+ years. Highly specialized teams studying the Guiding Principles of transonic flow have honed the shapes of airfoils for maximum performance. But note the introduction of Guiding Principles (really a Global Systemic/Holistic v-Meme knowledge structure) into the mix. These principles are well-known, and they alternately pull the designers up into a higher space, or compensate for their lack of empathetic development.

Contrast that with the Boeing 737 MAX MCAS disaster, which caused the loss of two airliners due to a runaway stall-safety control system. The flight control system was optimized using the laws of physics, and though flight control is super complicated, it’s likely that when the system was working and all the sensors were operative, everything would have been fine.

Except that wasn’t the case. Boeing’s rigid hierarchy outsourced the controls and interfaces to an Indian software vendor for completion. The inherent value assumption embodied in the MCAS was that, in a potential flight divergence scenario, humans would be far inferior to the algorithms developed by the team. The combo of two rigid hierarchies (Boeing and the Indian software firm) decided that humans SHOULD NOT HAVE AGENCY to turn off such a system. So, inherently, those values propagated into the design of the emergency system, which made it almost impossible to disengage, and caused the loss of two aircraft before the 737 MAX fleet was grounded. A direct, implicit (and emergent) value transfer!
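
Nobody outside Boeing has the actual MCAS source, of course, so here’s a purely hypothetical sketch of how a “humans should not have agency” value gets written, almost invisibly, into a control loop. The threshold and trim numbers are invented; the only difference between the two functions is whether the pilot’s override is ever consulted.

```python
from dataclasses import dataclass

@dataclass
class SensorReading:
    angle_of_attack: float  # degrees, from a single (possibly faulty) sensor
    pilot_override: bool    # pilot has commanded the system off

STALL_THRESHOLD = 14.0  # hypothetical trigger angle, degrees

def trim_command_no_agency(reading: SensorReading) -> float:
    """Values baked in: the algorithm is trusted over the human.
    The override flag is simply never consulted."""
    if reading.angle_of_attack > STALL_THRESHOLD:
        return -2.5  # keep pushing the nose down, no matter what
    return 0.0

def trim_command_with_agency(reading: SensorReading) -> float:
    """Same control law, but the human can always disengage it."""
    if reading.pilot_override:
        return 0.0
    if reading.angle_of_attack > STALL_THRESHOLD:
        return -2.5
    return 0.0

# A faulty sensor reports a stall that isn't happening; the pilot objects.
faulty = SensorReading(angle_of_attack=40.0, pilot_override=True)
print(trim_command_no_agency(faulty))    # -2.5: system keeps fighting the pilot
print(trim_command_with_agency(faulty))  # 0.0: agency restored
```

Note that the value transfer isn’t a comment or a requirements document. It’s the absence of a single “if” statement.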

One can see, through this example, the perils of developing artificial intelligence and machine learning (ML) systems with non-diverse teams, where value transfer becomes even more problematic. AI developers may build a system that follows a fair amount of axiomatic rigor in how a given ML system seeks an optimum. An attitude of “Just the facts, sir” may sound good on the surface. But the datasets used to train such systems may have bias written into them by the collection process (e.g. “men are better than women at math” — it’s in the data!). And the extension is that such systems, once again, would be built with little respect for human agency in the first place. The ML system would be, of course, smarter than the humans in the same problem space, and SHOULD be given authority for a variety of reasons, including a more unbiased following of rules.
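
Here’s a minimal synthetic sketch of that “it’s in the data” problem, with entirely invented data. True ability is distributed identically across genders, but the historical labels come from biased raters, and an axiomatically rigorous learner faithfully turns the collection bias into policy.

```python
import numpy as np
from sklearn.linear_model import LogisticRegression

rng = np.random.default_rng(0)
n = 5_000

# Synthetic "hiring for a math-heavy role" dataset.
# True ability is identically distributed across genders...
gender = rng.integers(0, 2, n)        # 0 = male, 1 = female
ability = rng.normal(0.0, 1.0, n)

# ...but the historical labels were produced by biased raters who
# penalized women: the bias is written into the collected data.
biased_score = ability - 0.8 * gender + rng.normal(0.0, 0.5, n)
hired = (biased_score > 0.0).astype(int)

# An axiomatically rigorous learner, trained on what it's given.
X = np.column_stack([ability, gender])
model = LogisticRegression().fit(X, hired)

# Two equally able candidates; only gender differs.
same_ability = np.array([[0.5, 0], [0.5, 1]])
p_male, p_female = model.predict_proba(same_ability)[:, 1]
print(f"P(hire | male):   {p_male:.2f}")
print(f"P(hire | female): {p_female:.2f}")  # lower: the bias transferred
```

Nothing in the optimization itself was unfair. The unfairness rode in on the dataset.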

And that’s how you end up with low-empathy AI — immediate value transfer from Authoritarian/Legalistic hierarchies into the products that are developed.

The examples above show how inherent bias is actually deeply memetic. But to say it’s unavoidable is also a cop-out. If we look at 2nd Tier, higher Global Systemic v-Meme processes, where deep reflection and “knowing what we might not know”/metacognition are inherent in organizational and cultural practice, we can escape some of the worst of emergent, implicit value transfer. By having diverse teams, we can also explicitly dodge many a bullet — even if most folks don’t understand the whole idea of values transfer. The simple statement “We just don’t know enough about our users” is also very powerful, if it propels further engagement with larger relational/customer communities.

You don’t need to have folks understand the deep memetics of this blog to be a success. As I wrote in this piece, Stuart Russell offers an intuitive walk up the psycho-social evolutionary pyramid in Human Compatible: Artificial Intelligence and the Problem of Control.

But for those new to the blog — you can also study up a bit on knowledge structures, and then back-check your work explicitly. That’s the point of the road map I’m laying out.
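
And “back-check your work explicitly” can be as mundane as disaggregating your metrics. A tiny sketch, with invented numbers: one aggregate accuracy figure hides a subgroup where the system is doing no better than a coin flip.

```python
import numpy as np

def back_check(y_true: np.ndarray, y_pred: np.ndarray, group: np.ndarray) -> None:
    """Disaggregate accuracy by subgroup: one aggregate number can hide
    a group-level failure that a diverse team would have asked about."""
    overall = (y_true == y_pred).mean()
    print(f"overall accuracy: {overall:.0%}")
    for g in np.unique(group):
        mask = group == g
        acc = (y_true[mask] == y_pred[mask]).mean()
        print(f"  group {g}: accuracy {acc:.0%} (n={mask.sum()})")

# Toy numbers, invented for illustration: great on the majority,
# coin-flip on the minority, and the aggregate still looks "fine".
y_true = np.array([1] * 90 + [1] * 10)
y_pred = np.array([1] * 90 + [0] * 5 + [1] * 5)
group  = np.array(["right"] * 90 + ["left"] * 10)
back_check(y_true, y_pred, group)  # 95% overall, 50% for the "left" group
```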
