I developed “The Five Principles of Influence” as a way of trying to simplify for my clients the five most important things to understand about how people think, behave, and make decisions. In other words, if I could teach you just five things about human nature to give you a clearer picture of what “moves” us (to think, act, or choose differently), what would they be?
Principle 1 is “the human brain is not a computer.” I began with this principle because I found that there was a consistent discrepancy between how most people believe the brain works versus how it actually works.
At the heart of this misunderstanding is an underlying belief that our brain is designed for accuracy. In other words, most people assume that our brains operate as a computer does: impartially searching for and analyzing all available information, and then rendering whatever decision is most “accurate” (i.e., the “truth”) based on the available data.
But here’s the thing: our brains haven’t evolved to maximize our accuracy - they’ve evolved to maximize our chances of survival. And in a social species like ours, it turns out that it’s often better to be wrong and seen as a “good” group member than to be right but seen as a “bad” group member.
The line of research most supportive of this claim is what psychologists call “motivated reasoning.” If we characterize “pure” reasoning as an impartial search for “the truth,” motivated reasoning is a desire to reason in ways that allow us to support and justify “our truth” (i.e., what I and my group already believe).
Take a political issue like the death penalty. What happens if we provide individuals with two equally good arguments: one for the death penalty, and one against it? If our brains were calibrated to “reason like a computer,” a good argument from each side should make us more moderate in our views - after all, there are compelling arguments for why the death penalty works and why it doesn’t. But for humans, the opposite tends to happen: we tend to become more extreme in our views. Why? Motivated reasoning.
Instead of going in with an open mind and trying to figure out what’s “right,” we instead go in with an agenda we’re trying to support - political beliefs we hold dear and want to justify - and thus are trying to figure out how we can interpret this data in the way that’s “right for my side.” This causes us to seek out, more reliably recall, and trust information that allows us to support our own views while ignoring, discrediting, or preferentially forgetting information that challenges our views. This is how two groups can receive the exact same, balanced data and yet end up further apart than they were before they received it.
What’s more, motivated reasoning can even “infect” traditionally non-political domains like mathematics. In a fascinating study from 2017, Dan Kahan and colleagues gave political partisans a problem to solve: take some numerical data and draw conclusions from it. When the data was described as results from a study of a new skin-rash treatment (i.e., a non-political issue), there was no difference between the conclusions drawn by liberals and conservatives - the only thing that had a significant effect on results was mathematical ability (i.e., participants better at math drew more accurate conclusions). But when the same data was described as results from a study on the effects of a gun-control measure, liberals and conservatives suddenly came to vastly different conclusions.
It was the exact same numbers. It was the same calculations. And yet when the data was said to have implications for a political issue they held dear, these groups managed to reach remarkably different “answers” to the math problem set before them. And here’s the really interesting (and shocking) part: partisans with the highest mathematical ability did not become less polarized in their calculations - they became more polarized.
Contrary to what we believe, we’re usually not creating our opinions based on an impartial examination of the facts - we’re selectively seeking out the facts that will allow us to continue to feel good about our opinions.
The central lesson of motivated reasoning is this: when you threaten someone’s identity, their brain stops listening and starts defending. Consequently, perhaps the most important takeaway is to find identity-congruent ways to present information (i.e., present data, ideas, feedback, etc. in a manner that aligns with an individual’s or group’s ideological standpoint).
In other words, affirm, then inform.
For organizational leaders, that could mean reframing change as an affirmation of our identity (as opposed to a threat to it). Instead of saying something like “Our old strategy failed” (which is likely to engender defensiveness), say, “We’ve always been innovators, and this new direction is how we stay ahead.”
For communicators, it’s important to embed facts into the narratives endorsed by the group(s) we’re targeting. A story - especially one that positions our group in a positive light - can slip past the brain’s defensive mechanisms in a way that standalone statistics cannot. A narrative about an individual’s journey (e.g., someone who was skeptical about a product but was won over) is more persuasive than a list of features because it engages empathy and identity, rather than just the analytical mind.
For legal professionals, it’s important to anchor your arguments in the moral frameworks of the jury. For libertarians, something like corporate liability should be framed as a matter of “personal responsibility and being held accountable for one’s choices”; for liberals, the same liability argument might be couched as “justice for the vulnerable.”