tribalism


Tribalism is broadly defined in neutral terms,
as the state or fact of being organized in tribes: social groups sharing common
interests or attributes. The word “tribe” is often used in a positive way
to describe supportive communities. Individuals from abusive environments are
encouraged to “find their tribe”: to find others who, like them,
reject abusive relationships. But tribalism’s also taken on another meaning,
referring, typically in a negative way, to behaviour and attitudes that stem from
strong loyalty to one’s own social group. Group biases have been studied by
social psychologists for decades. In the 1960s, researchers proposed two main
explanations for group discrimination. They suggested it arose from
pre-existing competition between groups, or as an irrational attempt to find emotional
release by scapegoating others. But in the 1970s, Henri Tajfel found that
neither of these causes was necessary. All it took to create discrimination was
the act of dividing people into groups. He took 64 teenagers from the same school,
divided them into two arbitrary groups, and asked them to assign
monetary reward points to anonymous members of
their own group and the other group. Awarding points to others rather than to themselves
took away direct self-interest. Not only did subjects assign more money
to their own group than to the other group, they often maximised
the difference in rewards, even if it meant their
own group received less. For example, rather than give 15 points to an ingroup
member and 10 points to an outgroup member, they might give 11 points to the ingroup member
and 1 to the outgroup member. So, there was no self-interest, no meaningful
group identity, and no pre-existing conflict. Since the subjects all came from the same school,
there was a fair chance some anonymous outgroup members
might even be close friends. But the overriding instinct was
loyalty to a random group. In his book, Moral Tribes, Joshua Greene
explores this tribalistic response in relation to the problem of cooperation. Greene cites ecologist Garrett Hardin’s
‘tragedy of the commons’ problem, in which some self-interested herders
raise livestock on a common pasture. To maximise their own profits, they
continue to increase their livestock. Eventually, the pasture succumbs
to overgrazing. The grass goes, the animals die,
and the herders are left with nothing. Hardin’s thought experiment
illustrates how individuals who act only in their own best
interests can all end up worse off.
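To make that dynamic concrete, here’s a minimal simulation sketch in Python. The numbers and the yield function are invented for illustration (they come from neither Hardin nor Greene); the point is just that each herder keeps the full profit of every animal they add, while the cost of crowding the pasture is shared by all.

```python
# A toy model of Hardin's commons (illustrative numbers only).
CAPACITY = 100.0  # animals the shared pasture can sustain

def profit(own, others):
    """One herder's profit: own herd times per-animal yield,
    where the yield falls as the whole pasture gets more crowded."""
    total = own + others
    per_animal_yield = max(0.0, 1.0 - total / CAPACITY)
    return own * per_animal_yield

herds = [5, 5, 5, 5]  # four herders with modest starting herds
for _ in range(40):  # plenty of rounds to reach a stable outcome
    for i in range(len(herds)):
        others = sum(herds) - herds[i]
        # Add an animal whenever it raises MY profit, ignoring the group.
        if profit(herds[i] + 1, others) > profit(herds[i], others):
            herds[i] += 1

print("herds:", herds,
      "profit each:", round(profit(herds[0], sum(herds) - herds[0]), 2))
# Self-interest drives each herd to about 20 animals, earning about 4 each.
# Had the herders jointly capped the pasture at 50 animals (12.5 each),
# each would have earned 6.25: individually rational, collectively worse off.
```

The exact figures don’t matter; any setup where gains are private and costs are shared produces the same slide. Greene frames morality as a set of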
psychological adaptations that allow otherwise selfish individuals
to reap the benefits of cooperation, aligning individual interests
with group interests. He illustrates how a range of elements in
our common human emotional apparatus encourages cooperative behaviour. We feel empathy, gauging the experience
of fellow group members, not just in some abstract way, but actively simulating their feelings and sensations
in our own brains, from pain to pleasure. We feel loyalty towards fellow group members,
inspiring us to help and defend them, sometimes at significant cost to ourselves. We feel guilt when we assert our own self-interest
against the interests of fellow group members. On the flip side, we feel contempt for members
who behave selfishly at the expense of others, like freeloaders, who exploit the group’s
generosity without contributing. These and other reflex emotional reactions reinforce
cooperative behaviours that lead to group benefits. Further reinforcement comes
from concepts like reputation. As social creatures, we’re highly attuned to information about trustworthiness. A trustworthy reputation can carry great social value
in helping to forge cooperative relationships. And a reputation for consistently punishing
transgressions against the group can discourage uncooperative behaviour. When reputations are damaged, the concept of redemption allows people to repair them, opening a path back to the group, and restoring
the win-win of mutual cooperation. Tribalism arises from our attempts to distinguish
cooperative strangers from non-cooperative strangers. To help make that distinction, we develop social indicators of shared values: indicators like social rituals, language, clothes. We hunt for those indicators in others, and
adjust our behaviour towards them accordingly. So, the same processes that facilitate
cooperation between group members also create sharp divisions between
members and non-members. This leads Greene to an important qualification: morality didn’t evolve
to promote universal cooperation. Our moral brains developed to align
the individual with the group, but left us with the higher-order problem of
‘the group versus everyone else.’ We moved from ‘me versus us’
to ‘us versus them’. Expanding on Hardin’s
‘tragedy of the commons’, Greene presents
‘the tragedy of common-sense morality’. Instead of individual herders, we have individual groups,
each with its own ideas about cooperation. Group 1 thinks land should be
divided up into plots. Members have total freedom
to use their plot how they want. Herders who prosper through hard work
can purchase neighbouring land. Herders who are lazy or foolish, or who greedily overgraze,
suffer the consequences alone, including death. Group 1 forms a council with minimal powers, simply ensuring property is respected
and promises are honoured. Group 2 thinks the whole land
should belong to everyone. Here no individual prospers, but
no individual is left to die. Group 2 forms a council with extensive powers, assigning and supervising all work for
group members who have no autonomy. Hard workers are forced to share
all proceeds with lazy workers. Both groups believe their own system
represents common-sense morality, while the other group’s system is
irrational and immoral. But their convictions about their moral values
might not be as solid as they think. In 2003, social psychologist Geoffrey Cohen
recruited liberal and conservative college students with strong opinions on welfare
for a study on group influence. He presented them with one of
two versions of a welfare policy: one provided generous benefits;
the other offered slender, stringent benefits. Pilot tests confirmed predictions that
liberals would prefer the generous version, and conservatives would
prefer the stringent one. In the actual experiment, when one of
the policies was presented to a participant, half the time it was claimed to be
supported by the Democrats; half the time it was claimed to be
supported by the Republicans. So, half the time the policy and
party support were politically consistent; but half the time they were
politically conflicted. Cohen found that when they conflicted,
participants showed a strong tribal bias, focusing on the party support
rather than the content of the policy. Liberals strongly favoured conservative policies labelled
Democrat over liberal policies labelled Republican. And conservatives strongly favoured
liberal policies labelled Republican over conservative policies labelled Democrat. Subjects later denied being
influenced by the party labels, although many of them believed
other people would be influenced, especially their ideological adversaries. To know that group identity can effectively nullify
group values in this way should give us serious pause. It means when we have reasonable,
useful, positive ideas from other groups — or unreasonable, corrupt ideas from our own group — we might be responding to them purely on the basis
of their group origin, instead of their content. This helps to explain how topics
from evolution to climate change are sometimes denounced or defended by individuals
with no knowledge or understanding of the evidence, instead of being assessed on the data
from the relevant scientific disciplines. Greene notes that when false beliefs become
tribal badges of honour, they’re difficult to change. It’s no longer just about teaching the facts,
but about combating overbearing group influence. So, is there any way out of this
tribalistic ‘us versus them’ thinking? Finding common ground in the
short term is clearly possible. In the face of a common enemy, many groups
have managed to temporarily forget their differences. Alan Moore and Dave Gibbons’s
brilliant graphic novel, Watchmen, plays with the question of whether an ongoing
common enemy could be artificially manufactured to bring about permanent peace
between hostile groups. The novel is set in an alternative timeline populated with superheroes, during a nuclear standoff between
the United States and the Soviet Union. Retired superhero Adrian Veidt stages a hoax alien
invasion of New York that devastates the city, creating a truce between the two earthly superpowers. The movie adaptation has a different twist. As the novel acknowledges,
solutions based on deception hold big risks. Discovery of the subterfuge
could destroy the truce. Uniting against a common enemy can
just create tribalism on a larger scale, perpetuating all the same distorted thinking. Sometimes the common enemy might be
a genuinely malevolent threat, but sometimes we might be demonising
a group that just has different preferences. The group might be falsely labelled
an enemy for political reasons. It’s easy to become the monsters
we think we’re attacking. When targeted groups lash back in self-defence,
we take it as proof of their aggressive nature. It’s a self-fulfilling prophecy; by looking for trouble we create it. Uniting against a common enemy
doesn’t resolve our differences; it just puts them on ice. When the enemy’s defeated,
differences can resurface, and even turn a former ally
into a new common enemy. Appealing to our common
humanity feels more positive. But again, things can backfire
if we try to sweep away our differences. Any unresolved issues we have about our differences
can block our collective progress, and the most trivial-seeming differences
can arouse unexpected issues. When I studied psychological
therapies at university, during a two-day retreat exploring
a range of personal differences, we were scored on our preference for
planning, structure, and resolution, versus improvisation, spontaneity,
and open-endedness. The tutor arranged us in a
giant U-shape according to our scores, with the highest-scoring planner at one end
and the highest-scoring improviser at the other, then invited us to form small groups with our
nearest neighbours and discuss our preferences. It later emerged that in those private discussions,
improvisers were labelled unnatural by planners, and planners were labelled unnatural by an improviser. These therapy students
were trained in empathy. But on finding out other folks
didn’t share their preferences, they tried to pathologise
and stigmatise them. Some valuable lessons were learned
that day about appreciating differences. We’re not all the same, and overcoming
tribalism doesn’t entail being the same. It’s about finding enough common
humanity to be able to hear each other and discuss our differences rationally. And let’s be real — some folks have a vested interest
in not finding common humanity: leaders of high-control groups,
segregationist groups, doomsday groups who welcome Armageddon
as the fulfilment of their faith. But finding common humanity
doesn’t require everyone’s participation. It’s a choice any individual can make at any time,
including disillusioned members of separatist groups. Each person who makes that choice
gets the chance to experience wider communication with people
who might otherwise have been dismissed, deeper understanding from information that
might otherwise have been distorted or censored, and greater personal autonomy from
an enhanced awareness of our options. Establishing a common humanity means
overcoming two distinct problems. One is the tribalistic wiring
of our moral brains, which naturally favours ingroup cooperation
at the expense of intergroup cooperation. The other is the clash between
the moral systems of different groups. Overcoming these problems involves effort,
and possibly some psychological discomfort. But every individual who overcomes them
makes a difference. Regarding our tribalistic wiring, psychologists and neuroscientists have repeatedly
described two different human thinking styles. A. David Redish calls them
Pavlovian and deliberative. Daniel Kahneman calls them
System 1 and System 2, or fast and slow. Joshua Greene refers to them as
automatic and manual. They all capture the same idea. We have a reflexive style of thinking that’s instinctive, effortless, emotional,
and produces automatic responses. And we have a reflective style of thinking that’s conscious, effortful, logical,
and allows us to assess alternative responses. It’s not that one thinking style’s
better than the other; they’re just suited to different jobs. Greene uses the analogy of
automatic and manual camera settings. For everyday purposes, the camera’s
automatic mode will give good results. It’s designed to cater to a range of
common photographic conditions. But it has trouble outside those conditions. Photos come out under- or over-exposed, colour balance suffers,
images get blurred. To correct these errors,
cameras have a manual mode, allowing us to adjust focus,
shutter speed, and white balance. It takes more time, but produces
much more accurate pictures. Likewise, in everyday conditions, our social brains
tend to do a decent job in automatic mode, using a range of fixed emotional reflexes that
encourage beneficial cooperation with fellow members. But outside those conditions,
the limitations of our fixed reflexes become clear. We might falsely denounce some groups as immoral
just because we find them strange, and our simplistic, automatic thinking
equates strange with wrong. As with a camera, we can switch over to
manual mode to correct our mistakes. We can widen our lens, take in and
evaluate more complex information, and produce a clearer moral picture. If we find someone strange, instead of responding
with reflex condemnation, we can ask ourselves: Has this person done wrong?
Has anyone been harmed? And we might reach
a different conclusion. Our capacity for manual thinking
gives us an extraordinary opportunity to move beyond the confines
of our limited reflex instincts. Manual thinking is our ladder
out of tribalism. Unfortunately, that ladder can turn into a
hamster wheel, locking us into tribalism. This is the double-edged nature
of manual, deliberative thinking. We can use it in a critical, detached way to scrutinise
our moral instincts and correct our tribal prejudices. Or, we can use it to try and invent elaborate
justifications that confirm those prejudices. Unfortunately, a psychological phenomenon
called cognitive dissonance often pushes us towards the
hamster wheel instead of the ladder. When we experience conflicting cognitions —
beliefs, emotions, thoughts — we feel a discomfort that drives us to
reduce that conflict and seek consistency. In seeking consistency, it’s easier
to invent fallacious reasons to confirm what we already believe, feel, and think, than to do the tough mental work
of questioning our assumptions. Take the earlier example of asking ourselves if
anyone’s been harmed by a person we find strange. If we can’t find any evidence
of injury or damage, we might twist the meaning of the word
‘harm’ to prejudice the answer. We might say the person’s
strangeness violates nature. Even if the person’s strangeness is
observed throughout the natural world, we can still claim it wasn’t intended for humans —
only other animals. We might claim the person’s strangeness defies the
wishes of invisible supernatural entities called gods. We might suggest the person’s strangeness
offends the tribe’s sensibilities. If we take the easy hamster wheel
of self-justification, using our manual thinking
to confirm our prejudices, we can fool ourselves that
these accusations are valid. If we take the ladder, using our manual
thinking to challenge our prejudices, we might notice none of these accusations
identify any injury or damage. They’re all just ways of confirming
that strange means wrong, and they can all be arbitrarily applied to absolutely
any activity people want to stigmatise. Folks who use these
spurious tribal accusations might not appreciate that it’s in their
own best interest not to use them. In using them against other groups, they have no comeback when other
groups accuse them of violating nature, defying other gods, or
offending other sensibilities. Live by the fallacy,
die by the fallacy. This is the second problem in
establishing common humanity: the clash between the
moral systems of different groups. Philosopher John Rawls’s thought experiment,
‘the veil of ignorance’, invites us to construct a moral system while
imagining we’re just about to be born with no knowledge of what our abilities, preferences,
physical traits, or position in society will be. It’s in our interest not to disadvantage any group,
because we could be born into that group. The idea is that a fair system is
one we’d happily be born into at random. In discussing morality with other groups,
we could imagine a veil of membership. Here the task is to explain
our ideas on their own merits, without reference to the common-sense
notions of any specific group. Developing a common moral language means
acknowledging that it’s pointless constructing moral arguments in ways
we wouldn’t accept from others. If we’re not persuaded by the supernatural
threats and inducements of other groups, why expect them to be persuaded by ours? If we wouldn’t accept authority-based moral declarations
from the authorities of other groups, we can’t expect others to accept authority-based
moral declarations from us. If we wouldn’t accept bold assertions about rights
and responsibilities from other groups, we shouldn’t make bold assertions about
rights and responsibilities to them. When we’re used to assuming the
common-sense truth of our group’s ideas, it can be jarring and illuminating to try
to explain them from first principles, to move from automatic to manual thinking. Joshua Greene cites the case of
18th-century philosopher Jeremy Bentham, who tortured himself for years trying to find grounds
to justify harsh punishments for homosexuality, which he called an abomination. But working through the arguments
against homosexuality in a detached way, he finally concluded it represented
no harm or cause for public alarm, and argued for its decriminalisation, catapulting him
centuries ahead of the tribal common sense of his era. Detached manual thinking
changes the freedoms we defend. Some people are surprised when
an atheist defends religious freedom, when a Jew defends the freedom of
expression of Holocaust deniers, when an ex-Muslim defends
the freedom to wear hijabs, when a Muslim defends the freedom
to draw the author of Islam, Muhammad. Superficially, they might seem to be
working against their own interests. But they’re just thinking on a
human level instead of a tribal one. Detached manual thinking
changes our loyalties. When we expand loyalty from tribal to human concern for truth and justice can be aligned on all levels:
individual, group, and human. When loyalty’s tribal,
the group is misaligned. The region of non-overlap could be labeled hypocrisy, where the group reserves special treatment for itself
that it doesn’t extend to other groups. We see tribal loyalty when groups who condemn
child abuse shield molesters in their own ranks, and when fraudulent politicians are protected
from the jail-time any other citizen would serve. It’s one law for us and another for them. Detached manual thinking changes our empathy. It’s easy to feel empathy
for members of our own groups. But to cross the ‘us versus them’ divide, we need to be
able to empathise with people outside our groups. Sometimes we hold back from that because we confuse
empathising with validating someone’s viewpoint. Empathy is just about understanding
why someone feels a particular way. Without understanding someone’s position,
communication is blocked. Some theists claim
atheists just want to sin. They fail to understand not only that atheists
can derive morality through secular principles, but that we have a very
obvious moral motivation: to enjoy the benefits of
mutual support and cooperation. Some atheists claim
all theists are mentally ill. Rather than arguing that
theists hold flawed beliefs, they leap to a psychiatric explanation. We all hold flawed beliefs
for various reasons. Sometimes we’re mistaken.
Sometimes we’re deliberately deceived. Sometimes we’re indoctrinated,
which is how I became a theist. I required no psychiatric intervention; my indoctrination was broken
by detached reflection. I started out on the hamster wheel of self-justification,
trying to confirm my indoctrination, but I unwittingly ended up
on the ladder to atheism. When we expand empathy beyond
the tribal level to the human level, false judgments start to fall away. We start seeing the individuals
behind the group label, and communication opens up. When we form groups, our social brains are wired to encourage cooperation
and social cohesion within those groups. An unfortunate downside to that
beneficial wiring is tribalism, leading us to elevate
ourselves above other groups. But, tribalism isn’t inevitable. Our capacity for detached reflection
offers a path out of it. It involves more effort than
our everyday reflex thinking, and it can arouse discomfort when
long-held ideas are challenged. But the benefit to our collective
moral progress is clear. It’s often said there’s no ‘i’ in ‘team’. But there is in ‘tribe’. And it’s a great reminder not to be
swallowed up by our groups, but to value and preserve individuality,
integrity, and independent thought; to always feel able to question our
groups and resist the errors of tribalism.