
Sunday, 3 December 2023

"I Don't Care if You Listen or Not"

The statement speaks to a form of artistic autonomy that challenges the conventional performer-audience relationship; it invites reflection on what is deemed essential for performance, and what possibilities emerge when the dynamic is disrupted or reimagined. By focusing on the internal processes of the artist rather than the reception by an audience, we open up a realm of performance that is about the act of creation itself. This aligns with a theatrical philosophy which often prioritises the experience and integrity of the artistic expression over the interaction and response of the audience. A counterpoint to this view is that performance is an event designed for an audience, a form of communication or expression that presupposes a spectator. The presence of an audience, their reactions, and their engagement are typically seen as integral to the event itself, creating a dynamic interplay between the observer and the observed, each influencing the experience of the other. However, the notion that performance is an act of communication that requires both a performer and an audience has been increasingly challenged, particularly in the realms of contemporary theatre, performance art, and digital media. If we consider a performance as an artistic release of self-expression, then it can and does exist without an audience. Artists often create for the sake of the art itself or for personal emotional need, rather than for any anticipated public reception; the act of performing itself transforms the individual artist, irrespective of whether anyone is watching.

But is it a “performance” if nobody is being performed to? A performance typically refers to a live presentation or artistic exhibition delivered by one or more artists. This could be a play in a theatre, a musical recital, a dance showcase, a live painting demonstration, or even a street artist’s display. Here, performance is characterised by its temporality; it is an event that happens over time and is designed for an audience to witness and experience. The presence of an audience is a defining feature because it is the observers who perceive, interpret, and give meaning to the performance. An audience’s reaction—be it applause, laughter, critique, or interpretation—contributes to the complete nature of the performance, imbuing it with a shared social reality. Hence, in this definition, the act of performing carries an intention to convey a certain impression or communicate meaning. Theories such as “reader-response theory” or “reception theory” discuss how a text (or a performance) is not complete without its reception.

One could argue that a performance, like any event, occurs regardless of observation. The actions of the performer, the expression of the art, and the occurrence of the event are factual and exist independently of an audience. The key distinction here is between the existence of the performance and the validation or acknowledgement of it. Without an audience, the validation through applause, criticism, or interpretation is absent, but the performance as a sequence of actions still transpires. Even in an empty theatre, a performer may deliver lines, an orchestra may play a symphony, and a dancer may execute choreography; the physical and aesthetic actions do not cease to exist because they are unobserved. However, while the tangible mechanics of the performance may occur without an audience, the full spectrum of what constitutes a performance—its energetic exchange, its emotional impact, and its collective memory—is often thought to be co-created with those who witness it.

Yet, the creation of performance without an audience is not only possible but is already practised in various forms within the arts: artists like Marina Abramović, for instance, have explored the limits of what constitutes performance and audience participation, sometimes engaging in acts that are witnessed by very few or even by no one, at least at the time of the initial act; and in the online digital space, it is commonplace for performance to occur without an immediate physical audience. Consider a singer recording vocals or an actor self-taping to camera—the eventual audience is remote, separated by time, space, and medium, and yet the act of performance still carries significant meaning and intent. The performances could be experienced by an audience long after the fact, or hidden beneath the multitude of other content and never seen. If nobody were to experience the recording—if the only audience present was in the mind of the performer—is it true that a fully actualised performance did not take place? The essence of these performances was not in their reception, but in the act of expression: they were created, executed, and fully realised without the presence of an external audience. The audience here is not a required component for the validity of a performance but rather a potential participant in a socially shared experience that may or may not take place.

The presence or absence of an external audience does affect the nature of a performance, but an audience is not necessary for the act of performance itself. Indeed, the presence and disposition of an audience can have a profound impact on the dynamics of a performance, affecting both the performers and the collective meaning of the performance itself. This phenomenon has been extensively studied across various disciplines including psychology, theatre studies, and performance theory. Research often explores these effects through the lenses of audience-performer dynamics, the psychology of performance, and the sociology of group interactions. From a psychological perspective, the seminal work of French sociologist Émile Durkheim on collective effervescence describes the energy that emerges when a group of people, such as an audience, comes together to participate in the same action. When performers are in front of an audience, they can experience what psychologist Mihaly Csikszentmihalyi terms “flow”, a heightened state of focus and immersion in activities that can enhance performance quality. Furthermore, the “audience effect”, a concept often discussed in social psychology, specifically refers to the impact of an audience on performance. Robert Zajonc’s work in this field identified the ways in which the mere presence of others can enhance or inhibit performance, depending on the complexity of the task and the skill level of the performer. For well-practised tasks, an audience can enhance performance through increased arousal; for less familiar tasks, however, this arousal can be detrimental. In theatre studies, audience response is often viewed as a critical aspect. Susan Bennett’s Theatre Audiences offers a comprehensive examination of the reciprocal relationship between the audience and the performance. She outlines how the audience’s reactions can influence the pacing, timing, and energy of a performance as performers often adjust their delivery based on verbal and non-verbal feedback. This dynamic interplay can transform the experience, making each performance a unique event influenced by the specific audience in attendance. Moreover, in his influential text The Empty Space, Peter Brook discusses how an audience’s energy contributes to the creation of what he describes as “immediate theatre”. According to Brook, the performer-audience relationship is a crucial component that can turn the “deadly” theatre—where there is no true communication—into a “live” one. The concept of audience engagement and its effect on the performance is further elaborated by Baz Kershaw in his work The Radical in Performance. Kershaw discusses how an engaged audience can have a radicalising effect on performance, pushing the boundaries of traditional performance and creating a more immersive and interactive experience. In musical performance, John Sloboda’s research in The Musical Mind touches upon how musicians might experience heightened levels of anxiety or exhilaration when performing before an audience, which can, in turn, affect their technical proficiency and emotional expression. This interplay is significant in live music, where the audience’s reactions can influence the performer’s interpretation and delivery of the music in real time. From these perspectives, it becomes clear that an audience does not passively consume a performance but actively shapes its unfolding through complex psychological and social mechanisms. Each performance is therefore not merely a presentation of a pre-prepared piece but a dynamic interaction between performer and audience, with the audience’s responses continuously shaping the course and quality of the performance.

However, it is possible for a performer to have an audience, even when nobody is watching. This notion of a performer being the audience of his or her own performance invites a rich philosophical exploration, touching upon the concepts of self-awareness and the phenomenology of experience. Philosophical discourse offers a breadth of perspectives on the relationship between the observer and the observed, as well as the subject-object dichotomy. In the field of aesthetics, the work of philosophers like Arthur Danto, particularly The Transfiguration of the Commonplace, can provide insight into the relationship between performance and perception. Danto’s theories on art as the embodiment of meaning suggest that a performer could very well be an audience to the meanings and interpretations that arise within their own performance. Each gesture, movement, or note in a performance can be reflective, carrying an intention and interpretation that the performer is uniquely positioned to understand and critique. The performer, then, becomes a sort of reflective audience, engaging with the performance both as a creator and an interpreter of meaning.

If a comedian makes a joke in an empty auditorium, does it make a sound? It is often said that in stand-up, timing is everything. As it turns out, when the audience is a row of empty seats, the timing is quite flexible. However, whether it is a performance to one’s own shadow or to a billion eager faces, the essence of the act, rather than the perception of the expression, remains the same. When a performance is enacted without an external audience, it becomes a private act, serving as a method of personal reflection for the artist; but the performer is still engaged in the act of performing, utilising their skills and perhaps even experiencing the same emotional and physical exertion as they would in front of an external audience. If the self can act as its own audience, then the solitude of one’s actions does not strip them of their performative character. For some creators, such as me, the act of performance is an intimate expression which serves as a form of self-exploration, catharsis, or a means of working through ideas and emotions. It’s here, in the sanctum of one’s mind, where the self-reflexive nature of human consciousness creates a sort of inner theatre where our actions are constantly up for review. The internal audience functions continuously, responding to and influencing the performance.

A performer can be the audience of his or her own performance, not in the literal sense of occupying two distinct spatial positions, but rather in the phenomenological sense of experiencing oneself as both the observer and the observed. This duality encapsulates the complex nature of human consciousness and the intricate interplay between action and reflection. In essence, the performer, through introspection and self-awareness, engages in a dialogue with oneself, constantly interpreting and re-interpreting the ongoing performance. From a phenomenological standpoint, particularly within the framework established by Edmund Husserl, the idea of a performer as an audience invokes the concept of “intentionality”, the notion that consciousness is always the consciousness of something. In this context, a performer, even while engaged in the act of performance, can have a dual intentionality where he or she is both the subject directing the performance and simultaneously the object of his or her own reflective consciousness. Husserl’s student, Martin Heidegger, would perhaps interpret this through the lens of “Dasein”, which underscores the idea of being-in-the-world where one’s existence is fundamentally interconnected with the world; thus, a performer, by being an audience to oneself, is actively shaping and being shaped by the very act of performance.

For existentialists, if every action is a conscious choice, we are, in essence, “performing” our lives for the most critical audience: ourselves. Sartre’s notion of “bad faith”—the denial of this freedom and the embrace of a fixed role—highlights the performativity of actions when they are done to conform rather than to reflect one’s genuine choice. Sartre’s views suggest that by becoming an audience to oneself, the performer engages in a kind of self-observation that can either be an act of authenticity, recognising oneself as the source of one’s actions, or an act of self-deception, where one denies one’s own agency. Within this existential frame, the notion of authenticity is pivotal. The performance is not about creating a façade for others but is intrinsically tied to the authentic choices that define our being. Therefore, every action could be a performance if it is part of this continuous existential project of self-definition. The actions themselves become a narrative in the theatre of the self, where the individual not only acts but observes, judges, and often reinterprets their actions in the quest for meaning.

From a Jungian perspective, personal acts can be seen as influenced by and potentially performing archetypal roles within our own psyche. These acts, whether observed by others or not, are part of the fabric of our collective unconscious experience. They connect us to universal human themes and contribute to our personal narrative and the ongoing process of psychological development and individuation. The performative aspect is not necessarily about an audience of others but rather about the dialogue between our conscious self and the archetypal forces within us. Carl Jung suggested that archetypes represent universal, ancient symbols and images emanating from the collective unconscious, serving as the psychological equivalents of instinct. If we consider our personal acts as informed by these archetypes, it’s possible to view our actions as being influenced by these shared human narratives, which could be understood as a form of performance. When no external audience is present, the archetypes within the collective unconscious could act as an internal audience; for instance, if one’s actions align with the hero archetype, one might unconsciously “perform” acts of bravery or sacrifice, not for the sake of an external observer, but to satisfy an innate, archetypal script. In performing actions when we are alone, we might unconsciously be enacting certain archetypal patterns. This performance is not for others but for oneself, or rather, for the archetypal structures embedded within the psyche. Jung’s concept of individuation—the psychological process of integrating the conscious with the unconscious, including the archetypes—could be considered a performance in its own right. The process is an inward journey that involves confronting internal archetypal figures and is often played out through personal acts and choices, even when no one is watching.

In spiritual contexts, the idea of a divine observer alters the understanding of performance and audience. In this context, God is the ever-present audience. For those who hold this belief, the ultimate audience is not earthly but spiritual—God, or a divine presence. This shifts the emphasis from pleasing a human audience to performing in a way that aligns with divine will or cosmic order. For such individuals, every action is a performance in the sight of the divine, and this awareness can shape their choices and actions profoundly. In Christianity, the idea of Coram Deo, which means “in the presence of God”, encapsulates living one’s life as a performance before God in every action. In the mystic traditions of Sufism, every act of love and beauty can be seen as a performance that honours the divine. The dhikr (remembrance of God) and the whirling dance of the dervishes are both performances meant to unify the soul with the divine, transcending the earthly plane.

And so, the audience-performance question depends ultimately on the intent behind the performance. If the aim of the act is to be witnessed, to have a shared experience that communicates a message or evokes a collective emotional response, then, without an audience, the nature of the performance remains unfulfilled; conversely, if the purpose is for personal, psychological, or spiritual growth and self-expression, then the act of performing can be fully actualised without the need for external participants. Indeed, a self-actualising performer might argue that this form of performance is more true and pure because it is unpolluted by egoic desires or commercial and societal expectations; it is a performance for and with the artist’s own creative soul.

The phrase “All the world’s a stage”, famously penned by William Shakespeare in As You Like It, is a potent metaphor that encapsulates the idea that all of life is a performance, and that people are merely actors within it. Even when there seems to be no audience, the phrase implies that the mere act of living and interacting with the world is a performance in itself. According to Shakespeare’s metaphor, life’s performance continues irrespective of an observable audience because the “stage” of the world is ever-present. The metaphor is profound because, as can be derived from psychological and philosophical research, we are all performing our own stories envisaged in our minds. We embody these roles and, through them, engage with the narrative of our lives, seeking our version of a story’s resolution—be it peace, understanding, success, or reconciliation. In considering life as a form of art, the role of the individual can be seen as that of the artist, actively crafting his or her own life narrative, performance, and aesthetic. Life, in this light, becomes a canvas on which the aesthetics, themes, and structures of art are reproduced and reinterpreted, with each person both as the artist and the audience of their own existence.

Saturday, 19 November 2022

A Diagnosis

Major philosophers have long debated whether evil stems from monstrous intent or mundane indifference. Hannah Arendt, in analysing the Nazi perpetrator Adolf Eichmann, coined the phrase “the banality of evil” to denote how immense crimes can be committed not by fanatical demons but by ordinary, even unremarkable people. At Eichmann’s 1961 trial, Arendt was struck by his lack of diabolical passion—he was “neither perverted nor sadistic”, but alarmingly normal in his desire to advance his career. He performed evil deeds “without evil intentions”, out of an inability to think from others’ perspectives. In Arendt’s view, this thoughtlessness—a failure to imagine the real suffering of victims or to question authority—produced a shallow “ordinary” wrongdoing that nonetheless had monstrous results. Simone Weil similarly observed that real evil is often dull and mechanical, not the dramatic villainy of myth: “Imaginary evil is romantic and varied; real evil is gloomy, monotonous, barren, boring”. Both thinkers suggest that much of human evil arises from a void of empathy and reflection—a moral numbness that permits cruelty.

Other philosophers, however, have explored active or radical malice. Immanuel Kant argued that humans possess a “propensity to evil”: an innate tendency to put self-interest above the moral law. This propensity doesn’t mean each person is destined to do horrific deeds, but it tilts us toward moral failure unless actively resisted by principle. Kant distinguished this common radical evil from a purely diabolical evil (doing harm for harm’s sake), which he thought humans rarely if ever embody—since even wrongdoers usually rationalise their actions rather than embracing evil as such. Friedrich Nietzsche famously critiqued morality itself and probed the human impulse towards cruelty. In On the Genealogy of Morality, Nietzsche notes how throughout history people have taken festive joy in cruelty, both in punishment and in spectacle: “Without cruelty there is no festival: thus the longest and most ancient part of human history teaches—and in punishment there is so much that is festive!” He viewed the enjoyment of others’ suffering as deeply rooted in the psyche, however unsettling that may be. Meanwhile, philosophers like Simone Weil emphasised the privation of good at evil’s core—a kind of emptiness or refusal to see the humanity of others. Weil suggested that truly looking at another’s pain is a spiritual act, and evil consists in the failure to make that imaginative leap. Thus, across thinkers, we get a nuanced picture: evil can result from the absence of thought and empathy, or an active delight in causing harm, or simply the ordinary human tendency to favour oneself even at others’ expense.

Modern psychology bolsters these philosophical insights by examining individuals who enjoy cruelty versus those who slide into it mindlessly. Clinical studies have identified a personality trait of everyday sadism—the tendency to derive pleasure from inflicting or witnessing pain. In one experiment, researchers gave volunteers a choice of unpleasant tasks (such as killing insects in a grinder, cleaning toilets, or enduring ice water); a significant minority chose to kill insects, even expending extra effort to do so. The more “sadistic” the person (by personality score), the more likely they were to opt for killing and report enjoyment in the act. Such participants showed “emotional benefit in causing or simply observing others’ suffering”. Follow-up tests found that only those high in sadism would, for example, exert themselves to blast an innocent person with loud noise even when there was no retaliation—suggesting a pure appetite for others’ pain. This research supports the notion that malevolent cruelty—harming for harm’s sake—is very real, even if it’s present in only a subset of people.

Relatedly, the clinical profile of psychopathy illuminates how evil can manifest as an emotional deficit. Psychopathy is characterised by a callous lack of empathy or remorse, shallow affect, and often a charming manipulativeness. Psychopaths can commit cruel or exploitative acts with chilling detachment because they do not feel the pangs of conscience that stop others. As one forensic summary puts it, many psychopaths show a “profound lack of remorse for their aggressive actions… along with a corresponding lack of empathy for their victims”, which enables them to act in a cold-blooded manner, using those around them as pawns to satisfy their own desires. Most psychopaths do not become violent criminals—some channel their manipulative tendencies into business or politics—but the combination of charm, power-seeking, and inability to care about others’ suffering makes psychopathy a classic template of evil in psychological literature. This stands in contrast to Arendt’s banal evildoer who may feel something (fear, career ambition, peer pressure) but fails to think morally; the psychopath can think instrumentally but fails to feel morally, treating people as objects. Moreover, when a psychopath also possesses sadistic inclinations, the result can be a person who not only lacks empathy but thrives on cruelty—arguably an embodiment of active evil.

Philosophy and psychology together suggest that human evil comes in multiple forms. There is the thoughtless compliance that Arendt and Weil warned about—a void where empathy and reflection should be—making decent people into agents of horror through routine and obedience. And there is the intentional malevolence seen in sadists and psychopaths who recognise suffering and pursue it as a goal or amusement. One might call these the two poles of evil: the banal and the demonic. In reality, many evildoers combine banal and malicious elements—for instance, a war criminal might start by numbly “following orders” and later grow to relish the power over life and death.

Understanding these facets prepares us to examine how entire societies can sanction evil under lofty guises, and how individuals rationalise or revel in cruelty. In history, and the present, there are countless examples where twisted interpretations of beliefs lead to the justification, or even glorification, of murderous and sadistic tendencies. Such beliefs give a person an excuse; an identity in opposition to and superiority over other people, who can be condemned and abused from a position of personal righteousness.

Indeed, history shows that great evils are often perpetrated under moral disguises. Cruelty rarely advertises itself as cruelty; instead, it wears the costumes of righteousness, necessity, or justice. Totalitarian and extremist regimes in particular have excelled at cloaking acts of barbarism in high-minded rhetoric. In Nazi Germany, genocide was justified as purification and self-defence; in Stalin’s USSR and Mao’s China, mass murder was explained as a harsh but noble phase of building a utopia; in religious crusades extreme brutality was sanctified as the enforcement of divine law. These regimes did not lack an ethical narrative—on the contrary, they drowned their followers in a torrent of moral and ideological justification for wicked deeds.

A chilling example comes from a secret speech by Nazi SS leader Heinrich Himmler to his officers regarding the Holocaust. Himmler acknowledged the mass killing of Jews explicitly, but then praised his men for doing it while supposedly remaining “decent”. He noted that most of them had seen “100 bodies lying together, 500 or 1,000,” and yet—apart from a few instances of “human weakness”—“to have stuck it out and at the same time… to have remained decent fellows, that is what has made us hard”. In Himmler’s twisted logic, refraining from enjoying the slaughter was a point of pride; the SS were to view themselves as morally upright soldiers performing a gruesome duty. He even called the genocide “a page of glory” in German history that must never be publicly recorded. This is moral inversion at its extreme: murder presented as duty, atrocity as honour, and compassion as a weakness to be overcome. By convincing themselves that they were still “decent” men—just tough enough to do what was necessary—Himmler and his followers blunted any pangs of conscience. It exemplifies how propaganda and group ideology can enable cognitive dissonance resolution: one’s self-image as a good person is preserved by redefining evil impulses as good or at least necessary, and by avoiding direct confrontation with the full horror of one’s crimes.

Psychologically, this wilful self-deception is explained by moral disengagement mechanisms. Albert Bandura identified several mental tactics by which people who violate their own moral standards manage to neutralise guilt. They might invoke moral justification (“we’re doing this for a great cause”), euphemistic labelling (calling torture “enhanced interrogation” or civilian deaths “collateral damage”) and advantageous comparison (“yes we’re harsh, but others have done far worse”). They also displace responsibility to authorities (“I’m just following orders”) or diffuse responsibility across a group (“everyone was doing it, it wasn’t just me”). Crucially, they dehumanise or blame the victims—seeing them as less than human or as deserving their fate. All these tactics appeared in totalitarian regimes. Nazi propaganda depicted Jews as subhuman “rats” or a bacillus infecting society; Stalinist and Maoist rhetoric labelled class or ideological opponents as “enemies of the people”, “vermin,” or obstacles to progress, making their elimination seem virtuous. Religious extremists paint those outside their fold as creatures to be righteously punished, where any personal hesitation to perpetrate brutality can be framed as weakness of faith. Through language and ideology, perpetrators create a contorted moral universe where cruelty becomes virtuous.

Social psychology experiments dramatically illustrate how ordinary people rationalise harm. In a classic study, college students were asked to administer electric shocks to peers as part of a supposed learning experiment; some overheard the peers being described in derogatory, dehumanising terms (“an animalistic rotten bunch”), others heard neutral or humanising descriptions. Those who heard the victims called animals delivered significantly stronger shocks on average than those who heard them praised, showing how seeing someone as less human lowers our moral restraints. Furthermore, after inflicting pain, participants often adjusted their attitudes to justify it—for instance, blaming the victim’s character (a form of post hoc dehumanisation). This aligns with cognitive dissonance theory: harming someone creates dissonance with seeing oneself as good, so people often resolve it by convincing themselves the victim deserved the harm.

Another concept relevant here is ideological possession, when an individual’s identity is so consumed by an ideology that independent moral reasoning shuts down. In such cases, any act can be justified if it serves the sacred ideology. During China’s Cultural Revolution, young Red Guards brutalised teachers and even parents under the sway of Maoist dogma, believing their victims were bourgeois traitors impeding a perfect society. Religious fundamentalists, similarly, could commit murder or enslave captives while convinced they were enacting holy scripture and earning divine reward. Fanatical belief systems can commandeer moral intuitions, directing empathy only to in-group members and suspending compassion for out-groups. What might otherwise be recognised as cruelty is seen instead as purity, justice, or martyrdom. The result is what Albert Camus called murderous purity—when someone will massacre others with a deluded conscience.

In fact, cruelty often wears a moral mask. Great atrocities are rarely committed with a roar of open wickedness; more often they proceed with a self-righteous drumbeat. Understanding this is vital, because it means we cannot rely only on spotting obvious “evil intent” to prevent horrors. Good people can be seduced into serving evil by reinterpretation: by propaganda that plays on their moral emotions (loyalty, piety, patriotism, justice) and redefines cruelty as duty. As numerous historical regimes demonstrate, an appeal to “higher ideals” can sanction virtually any barbarity. Recognising these patterns of rationalisation and disengagement is the first step in resisting them. It also sets the stage for examining cases of evil that do not bother with moral disguise—agents who embrace malevolence more directly, as we explore through the archetype of Iago.

Literature often provides insightful portraits of evil, and few are as emblematic as Iago in Shakespeare’s Othello. Iago is a Venetian ensign who orchestrates the downfall of his general, Othello, by exploiting trust and stoking jealousy—all while appearing loyal and honourable. What makes Iago especially unsettling is his lack of clear motive. Unlike many villains, he offers no grand ideology or righteous grievance to justify his treachery. He gives various reasons in passing—he was passed over for a promotion by Othello, he suspects (probably baselessly) that Othello slept with his wife, he even at one point says he acts out of envy—but none of these fully explain the elaborate cruelty he unfolds. As the play progresses, it becomes evident that Iago enjoys manipulation and destruction for their own sake. Literary critic Samuel Taylor Coleridge famously described Iago’s behaviour as the “motive-hunting of motiveless Malignity”. In other words, Iago is constantly searching for justifications for an evil that fundamentally has no justification. He is, Coleridge suggested, a being of almost pure malevolence—“next to the Devil” in fiendishness—who nonetheless wears the “divine image” of man and interacts in ordinary society. Iago’s agency is malevolent in a cold, self-conscious way: he knows he is deceiving and ruining innocent people (Othello, Desdemona, Cassio) and he revels in it with sly asides to the audience.

The absence of a rational cause for Iago’s hatred makes him a study in evil as enmity for its own sake. When Othello demands Iago explain why he did all this, Iago pointedly refuses to speak. His silence suggests that, ultimately, he has no satisfactory motive to offer—or that giving one would diminish the dark mystique of his villainy. In contrast, consider Shakespeare’s Julius Caesar: Brutus joins a conspiracy to assassinate Caesar, but he does so believing it a tragic necessity to save the Roman Republic from tyranny. Brutus is essentially a morally conflicted villain (if one even calls him a villain)—he justifies his violent act with a principle (“not that I loved Caesar less, but that I loved Rome more”). He remains tortured by guilt and honour. Iago, by stark contrast, feels no guilt and seeks no noble principle; he delights in the power of causing chaos and watches Othello’s psychological torment with a kind of sporting pleasure. This makes Iago more akin to a modern archetype like the Joker of Batman lore than to Brutus: a character who wants to see the world burn just to enjoy the flames, versus one who commits evil under a wilful self-delusion of doing good.

This contrast highlights a spectrum of villainy: on one end, the ideological villain (however twisted the principle) who at least professes to believe in some cause beyond mere destruction—Brutus believing in republican virtue, and even Shakespeare’s Macbeth, who is driven by ambition and later remorsefully reflects on the futility of his crime. On the other end is the nihilistic or malevolent villain exemplified by Iago—one who cannot claim any creed except perhaps will to power, who treats life and people like pieces on a chessboard to be moved and knocked over for his personal pleasure. Iago offers us a portrait of evil stripped of excuses. He is important because he lays bare an uncomfortable idea: that some evil is done with full awareness and little remorse, requiring no grand ideology at all. It is enmity for its own sake, or for very petty motives exaggerated into mania.

Shakespeare crafted Iago as a warning of what intellect unguided by morality can do. Iago is intelligent, articulate, and perceptive—he understands Othello’s principled but credulous heart and how to poison it. Yet all that wit is employed destructively, without empathy. In Iago, we see the thrill of power over others in its pure form: he calls his manipulation of Othello a “sport” at one point, and when his plots lead Othello to murderous rage, Iago coolly observes the chaos he’s made as if admiring a piece of art. This is evil not as a means to an end, but as an end in itself.

Understanding Iago’s kind of evil helps complete our picture. Not all perpetrators are banal functionaries or ideologues deceiving themselves; some truly relish the suffering they cause. By recognising Iago, we acknowledge that motiveless malignity exists—and it must be countered not by appealing to the perpetrator’s conscience (they may have none), but by robustly upholding justice and preventing such individuals from acquiring unchecked power.

Evil is not only a matter of individual psychology or isolated acts; it can be built into social institutions and norms. Looking back, we find eras when forms of cruelty we now recognise as heinous were accepted as routine, even celebrated. Human sacrifice, slavery, torture as public spectacle—these have all, at various times, been normalised. Understanding this history is sobering but also instructive: it shows that our moral circle has expanded over time, and that what once was common can later become unthinkable (and, as a caution, vice versa).

Throughout most of history we have behaved like members of ant colonies: attacking, destroying, and enslaving each other, with the added horrors of sadism and sexual violence, often led by one murderous sociopath after another. History is predominantly one of brutalised, traumatised, confused people living in pain and subjugation. Humanity has now mostly progressed to recognise the depraved evils that were socially accepted in previous times—yet a person of those times would have gone along with the accepted norm, assuming it was right because everyone else said it was right. They were wrong. Only the strength of compassion would have made a person question the chorus of excuses for cruelty in their society. Without true compassion, a person is simply “of their time”, allowing themselves to conform automatically to whatever happens to be contemporary popular thinking and belief-controlled behaviour. In an evolutionary process, that rule of wrongness would hold true for people today, relative to future generations.

One stark example is the Roman Colosseum and gladiatorial games. For centuries, Romans flocked to arenas to watch people (often prisoners of war, slaves or criminals) kill each other or be killed by wild animals for entertainment. The Colosseum stands today as “a glorious but troubling monument to Roman imperial power and cruelty,” as one historian notes. Inside that magnificent amphitheatre, “Romans for centuries cold-bloodedly killed literally thousands of people… as well as professional fighters and animals”. These shows were not fringe events; they were core to Roman culture—used by emperors to win popularity and display the might of the empire. The populace cheered as humans were dismembered and died in agony. To us this is abhorrent, but to many Romans it was normal leisure, justified by saying the victims were condemned criminals, enemies, or merely slaves whose lives didn’t count. A few voices (like the philosopher Seneca) condemned the bloodlust of the arena, but they were minority voices. The Colosseum is a reminder that institutionalised cruelty can persist for ages with communal approval. It took the spread of new values—in this case, Christian ethics valuing each soul, and perhaps simple fatigue and economic burden—for the gladiatorial games to be abolished in the 5th century CE.

Another vast historical evil is slavery. For millennia, societies around the world practiced slavery with little moral qualm. In ancient civilisations, war captives and their descendants were routinely enslaved. Enslaved people were dehumanised as property—whipped, branded, raped, worked to death—yet these practices were defended by appeals to nature, economics, and even religion. Such rationalisations allowed cultured individuals to participate in or tolerate horrific cruelty (like the separation of families, or routine physical torture) while maintaining an image of decency. Slowly, very slowly, the moral circle expanded. This hard-won progress underscores that what is socially accepted is not fixed: moral norms can evolve, and cruelty need not be permanent.

Consider the Belgian Congo under King Leopold II. Colonial agents in the late 1800s forced villagers to harvest rubber under threat of horrific punishment; failure to meet quotas often resulted in hands being cut off. An estimated 10 million Congolese died from violence, famine, and disease during Leopold’s reign. Yet in Europe this genocide was long downplayed; Leopold presented himself as a philanthropist spreading Christianity and ending Arab slave trading. Only later did missionaries and activists expose the truth, shocking the public. Across empires, we see patterns of systemic cruelty (massacres, concentration camps, cultural erasure) normalised by colonial ideologies. These ideologies insisted the colonised were uncivilised or childlike, thus needing firm (if brutal) governance for their own good. Again, we observe moral disengagement at scale: labelling slaughtered rebels as “savages” made their killing palatable to the imperial public.

Despite these dark eras, there has been measurable moral progress. Historian Steven Pinker and others have documented a long-term decline in many forms of violence—from the outlawing of chattel slavery to reductions in judicial torture, capital punishment, and bloody spectacle. Philosopher Peter Singer encapsulates one aspect of this progress with the image of “the expanding circle” of moral concern. In early human history, our sympathy and moral duty likely extended only to our kin or small tribe. Over time, through reason and cultural development, that circle expanded—to include one’s clan, then tribe, then nation, then all races, and even, as Singer argues, all sentient beings. “Beginning with our own family or tribe,” Singer writes, moral concern enlarges to “include larger groups, nations, families of nations, all humans and perhaps even nonhuman animals”. Key intellectual moments aided this: the Enlightenment introduced universalist ideas that all men (eventually all people) are created equal and endowed with rights. The concept of human rights took hold strongly after the world wars, leading to the Universal Declaration of Human Rights in 1948, which declared the inherent dignity and rights of every member of the human family—a stark rebuke to the dehumanisation underpinning regimes like the Nazis.

Moral progress has also been driven by empathy and compassion fostered through culture. The spread of literature—novels that invited readers into the inner worlds of people living very different lives from themselves—is thought to have increased empathy. For instance, Uncle Tom’s Cabin (1852) by Harriet Beecher Stowe vividly humanised slaves to many readers in the U.S. and Europe, fuelling abolitionist sentiment. The graphic horrors of war described in accounts of World War I helped turn public opinion against seeing war as glorious. Over the 20th century, practices once common—child labour, public lynchings, animal cruelty for sport—have been challenged or outlawed in many countries as sensibilities became more compassionate. The “moral circle” certainly hasn’t expanded everywhere evenly, and backlash is real. Yet the broad trend is that the circle has widened; humanity’s baseline for acceptable cruelty has shifted towards greater condemnation of violence and suffering.

It’s important to note this progress, not to rest on laurels but to recognise that moral change is possible. The fragility of compassion, however, is also evident—gains can be reversed in times of fear or turmoil. Civilised norms collapsed in Nazi Germany, a highly educated society, showing that moral progress is not linear or guaranteed. Still, the overall expansion of the moral community gives some hope that compassion can gain ground over cruelty.

If cruelty is one side of humanity’s moral struggle, compassion is the other. Compassion—the capacity to feel sorrow at another’s suffering and the desire to alleviate it—has been extolled as a virtue in nearly every religious and ethical tradition. It is often described as the antidote to cruelty, the emotion that binds us to others’ humanity. However, compassion is also fragile: it can be hard to extend beyond our immediate circle, and in cynical or brutal systems it is readily dismissed as weakness.

Virtually all major religions place compassion at their moral core. In Buddhism, compassion (karuṇā) for all sentient beings is a principal virtue. Buddhist practice includes meditation specifically aimed at cultivating limitless compassion and loving-kindness (Metta meditation), reflecting a belief that compassion can indeed be expanded with effort—from one’s family to friends to strangers and even enemies. In Christianity, compassion is likewise central: Jesus’ teachings urged love not just for neighbours but for perceived enemies, and parables like the Good Samaritan enshrine mercy toward the stranger as true righteousness. Christian charity and the ideal of caritas (selfless love) inspired countless acts of kindness (alongside, admittedly, episodes of religious intolerance—humans are nothing if not contradictory). Stoicism, often caricatured as a cold creed, actually advocated a form of compassionate cosmopolitanism: Marcus Aurelius wrote that we are all citizens of one universe, made for cooperation, like “feet, like hands” working together—therefore to act against one another is against nature. The Stoics valued sympatheia, a mutual interconnection; they counselled understanding others’ misdeeds as products of ignorance rather than pure malice, which is a stance that encourages a kind of compassion (or at least pity and forgiveness).

Modern philosophers have further explored compassion’s role. Martha Nussbaum calls compassion “the basic social emotion” that underpins humane law and justice. She defines compassion as “a painful emotion… occasioned by the awareness of another person’s (or other creature’s) undeserved misfortune”. In that definition, key components are evident: the suffering is not deserved, the person recognises the other’s suffering could befall themselves or someone they love (common humanity), and this engenders an altruistic response. Nussbaum and others argue that without compassion, our commitment to justice and the common good withers—laws become harsh and utilitarian, politics becomes a mere power game. Peter Singer, from a utilitarian perspective, asserts that reason can amplify our innate empathy; by logically considering others’ interests as comparable to our own, we correct the biases of tribalism and expand moral concern. He points to the spread of vegetarianism/veganism and global humanitarian aid as signs that compassion is widening to include non-human animals and faraway strangers—something unprecedented in scope.

Is compassion innate or learned? Research in developmental psychology shows even very young children exhibit rudimentary empathy: babies will cry in response to other babies’ cries, and toddlers will try to help an adult who appears hurt or in need. This suggests a basic empathetic responsiveness is hardwired. There is also evidence from neuroscience that seeing another person in pain can activate some of the same brain regions as experiencing pain oneself (mirror neuron systems, etc.). So, a capacity for compassion is part of our biological heritage—maybe because in social species, attending to others’ well-being had adaptive value (a tribe of caregivers would survive hardships better than a tribe of indifferent egotists). However, while the seed of compassion is natural, its growth and scope are greatly influenced by culture and training. Humans are quite capable of limiting empathy to their in-group and denying it to outsiders. Thus, many traditions stress cultivating compassion. Buddhist monks spend years training in compassion meditation to extend love to all beings. Parents and educators try to instil empathy in children by encouraging perspective-taking (“How would you feel if…?”). Philosophers like Nussbaum suggest that education in the arts and humanities—literature, history, theatre—can expand our imaginative empathy by exposing us to diverse lives and struggles.

Compassion indeed often needs cultivation to survive in “cynical systems”—environments that reward selfishness or pit groups against each other. In extremely competitive corporate cultures or authoritarian regimes, compassion may be scorned as a weakness or distraction. For example, under strict communist regimes, showing too much personal sympathy for those deemed “class enemies” could make one suspect; under extremist ideologies, mercy might be seen as betrayal of the cause. Yet even in such systems, compassion finds ways to persist. There are inspiring accounts of individuals who at great personal risk acted with compassion amidst terror. Compassion can be vulnerable to fear, propaganda, or fatigue, but it is also resilient in that it never disappears entirely. Often it survives in the shadows, ready to blossom when conditions allow.

In the modern global context, compassion faces new tests. The scale of suffering is enormous—wars, refugees, disasters—leading some to feel “compassion fatigue”. We are not psychologically evolved to emotionally process the pain of millions of strangers at once. There is a risk that constant exposure to suffering through news and the internet can either numb people or lead them to tune out rather than engage compassionately. Some thinkers, like psychologist Paul Bloom, have even argued “against empathy”, suggesting that unfocused empathy can be biased or paralysing, and that rational compassion (guided by principle rather than raw emotion) is what we need. This debate underscores that compassion must be paired with wisdom to be effective. Structured compassion—as seen in effective altruism movements (which try to channel compassion through evidence-based action) or restorative justice programs (which channel empathy into reconciliation processes)—might offer ways to systematically combat cruelty and injustice.

It’s also worth examining if compassion has limits. Are there people or situations where compassion fails? For instance, how do we respond compassionately to perpetrators of evil? Some argue that extending compassion to evildoers is necessary to break cycles of violence (e.g. rehabilitation rather than purely punitive justice), while others fear that too much empathy for the wrongdoer can lead to excusing harm. This is a delicate balance. Perhaps the ideal is to have compassion for every person’s basic humanity—recognising even perpetrators were often victims of something—but still hold them firmly accountable out of compassion for their victims and potential future victims. True compassion doesn’t mean the absence of accountability; it means we aim for outcomes that reduce overall suffering and transform conflict, rather than simply revenge.

Compassion stands as the counterforce to humanity’s often-evidenced worst impulses. It expands our moral circle, motivates us to alleviate suffering, and humanises those whom indifference or hatred would render invisible. It has deep roots in our nature but needs nurturing by culture, reason, and practice. Its fragility lies in how easily it can be overridden by fear, hate, or sheer scale, but its strength lies in how profoundly it resonates with our sense of meaning—people generally admire acts of compassion and often find personal fulfilment in caring for others. As technology and social change make us more interconnected, cultivating a robust, wise compassion may be our best hope to counter new forms of dehumanisation.

The 21st-century landscape of digital communication and media has altered the way we form moral judgments and sympathies—not always for the better. In theory, the internet could spread understanding by connecting diverse people. In practice, it has also given rise to echo chambers, misinformation, and tribalism that distort moral clarity and empathy. The term information pathologies can describe how the very channels by which we learn about the world may be infecting our moral discourse.

One issue is the echo chamber effect on social media and online forums. An echo chamber is an environment where one only encounters opinions and “facts” that reinforce one’s existing beliefs, with other views filtered out. The algorithms of online platforms curate content that aligns with users’ preferences and engagement history. Over time, this creates a feedback loop—conservatives see mostly conservative content, liberals see liberal content, and so on—each side growing more convinced of its own righteousness and often more extreme (a phenomenon sometimes called polarisation by opinion amplification). Studies have found that social media does foster clusters of like-minded people who rarely interact with outsiders; these bubbles “limit exposure to diverse perspectives and… reinforce presupposed narratives and ideologies”. When we only hear our own “team’s” moral narratives, our capacity for empathy toward the “other team” erodes. Instead, out-group members are easily caricatured or demonised because their humanity or reasonable concerns are never presented to us in the echo chamber. This digital siloing thus fuels tribalism: people identify strongly with their virtual tribe (be it a political party, a fandom, an ideology) and may heap scorn or abuse on perceived outsiders. Online, it’s easier to engage in hate speech or cruelty because one often operates at a psychological distance—known as the online disinhibition effect—where the other is just a faceless avatar, not a full human before you.

Misinformation and propaganda thrive in such polarised, emotionally charged environments. Unlike in the broadcast era, the internet is an open battleground of information where the outrageous often outcompetes the measured. False or misleading content spreads rapidly, especially if it triggers anger or fear—two emotions that can short-circuit compassion. For example, during recent crises, conspiracy theories and rumours on social media have scapegoated certain groups, leading to real-world violence. The structure of online engagement itself often distorts moral discussion. Platforms reward content that generates strong reactions—and outrage is a potent driver of engagement. As a result, outrage culture has flourished: people perform their moral stances aggressively in order to gain validation from their in-group. This sometimes leads to performative cruelty in the name of righteousness (e.g., online “pile-ons” or cancel culture episodes, where individuals are hounded and dehumanised for missteps, with little room for empathy or forgiveness). It’s a bitter irony that tools which could have deepened our understanding of each other have, in some cases, made us less empathetic and more judgmental. Complex human stories get reduced to tweets; genuine dialogue gives way to flame wars. The anonymity or distance of the internet can unleash a latent sadism in some—a tendency to troll, bully, or take joy in someone’s downfall in ways they likely wouldn’t face-to-face. This is a new kind of banal evil: ordinary users, perhaps otherwise kind in person, can become cruel in online mobs, not fully grasping the real harm to the target.

Identity and tribalism online also mean people’s moral views become entwined with their group identity (national, political, etc.). When facts or empathy for others threaten one’s identity, they are often rejected. For instance, climate change science or pandemic advice might be dismissed by some not purely on intellectual grounds but because accepting them feels like siding with the enemy tribe. Similarly, calls for refugee aid can meet reflexive hostility in those for whom such issues have been framed as partisan battle lines. This identity-driven moral bifurcation erodes the ability to recognise the humanity of “the other side” or to see merit in their arguments.

Another pathology is the sheer speed and overload of information. We are bombarded with news of suffering—humanitarian crises, tragedies—to the point of numbness. Activists coin terms like “compassion fatigue” to describe how people, after a certain saturation point, stop emotionally responding to appeals for help. The constant stimulation also rewards snap judgments over careful deliberation; thus, nuanced moral issues get condensed into viral slogans or memes. Misinformation can manipulate emotions: so-called “fake news” often uses startling, emotionally charged falsehoods that spread faster than fact-checks can counter them. In the confusion, many lose a clear sense of truth, making them susceptible to demagogues who scapegoat and oversimplify. This epistemic chaos undermines empathy because empathy relies on understanding reality accurately—one cannot truly empathise with those one has been misled about. If a person is convinced, falsely, that immigrants are mostly criminals, they will feel justified—even virtuous—in having no compassion for a drowning migrant or a child separated from parents at a border.

All is not lost, however. The same technology that enables echo chambers also allows unprecedented cross-cultural communication and exposure to real stories. Social media has facilitated empathy at times—viral images or videos of suffering have pricked the world’s conscience and spurred aid. The internet hosts countless initiatives for dialogue, charitable giving, and spreading awareness of others’ plights. The challenge is to fight the pathologies: by promoting digital literacy (teaching people how to recognise false information and seek diverse sources), by tweaking algorithms to prioritise reliable information and perhaps even empathy-evoking content rather than just incendiary posts, and by consciously stepping outside our online comfort zones. Individuals can curate their feeds to include different perspectives, practise restraint in online arguments, and remember the human on the other side of the screen.

In this age, maintaining moral clarity and empathy requires deliberate effort. It may mean occasionally unplugging from the rage-inducing news cycle to reflect. What’s clear is that if we allow our information ecosystem to remain poisoned, our capacity for compassion and rational moral agency will decline, and that vacuum can easily be filled by authoritarians and extremists.

Modern media has, in effect, globalised the “banality of evil” problem: passive scrolling and sharing can make us unwitting participants in spreading harmful ideas or normalising cruelty. But it can also globalise compassion: a generous crowdfunding response to a distant disaster shows the upside. The moral struggle continues on new terrain, and we must learn new skills of discernment and digital empathy to carry compassion forwards.

As technology advances, humanity is on the cusp of wielding powers once relegated to gods and fables. Artificial intelligence, genetic engineering, mass surveillance, autonomous weapons—these emerging domains hold immense promise but also grave peril. They raise a stark question: Will our moral wisdom and compassion evolve quickly enough to guide these powerful tools, or will we succumb to new forms of tyranny and catastrophe? Thinkers like Nick Bostrom and Eliezer Yudkowsky have warned that certain technologies, especially a superintelligent AI, could pose existential threats—risks that could wipe out humanity or drastically curtail our future. Moreover, even without apocalyptic scenarios, these technologies could enable unprecedented oppression if abused by authoritarian regimes or unscrupulous actors.

Imagine Orwell’s 1984 but with modern tech: it is now far easier for a government to be near-omniscient about citizens’ daily lives. China’s social credit system—rating citizens on their behaviours and associations—is one facet of this, using algorithms to reward or punish and, ultimately, to shape behaviour. Other countries are eagerly importing Chinese surveillance technology, spreading this model of digital authoritarianism. If such tools had existed in the 20th century, one shudders to think how much more efficiently the Gestapo or KGB could have crushed dissent. The peril is that these technologies give unprecedented leverage to power, and if that power lacks compassion or accountability, tyranny can reach terrifying precision.

Autonomous weapons—often called “killer robots”—are already in development. These are AI-driven drones or machines that can select and attack targets without a human making the decision. They could operate at speeds and scales impossible for humans to control. The danger here is not only accidents (an AI misidentifying civilians as combatants) but also the ease of mass violence: an authoritarian could deploy swarms of tiny armed drones to eliminate dissidents en masse, or a terrorist could release AI-guided explosives that anonymously hunt people of a certain profile. Lacking compassion or conscience, such machines could make warfare even more indiscriminately lethal. International campaigns are urging bans on fully autonomous weapons, akin to the bans on chemical weapons, precisely because of the moral horror they portend.

Given these hazards, what hope is there for mitigation? One path is trying to imbue our emerging tech with ethical safeguards—essentially, to encode compassion or its functional equivalent. AI ethics researchers propose various guidelines: ensuring AI respects human rights, is transparent, and is under meaningful human control. There are efforts to develop AI “principles” that emphasise beneficence, non-maleficence, and justice. Some suggest we might need AI to have empathy: for example, robots in caregiving roles programmed to detect distress and respond kindly. Whether genuine empathy is possible for AI is a deep question, but at minimum, AI can be constrained by rules that mirror compassionate values (e.g., a self-driving car must prioritise not harming pedestrians). Yet, pessimists note that a superintelligence might circumvent any rules we hard-code unless it truly understands and endorses our values—a very hard thing to guarantee.

Writers like Toby Ord speak of humanity being in a critical period—this century may decide whether we fumble our god-like powers and collapse, or harness them for a flourishing future. Nick Bostrom has used the metaphor of humanity being like “children playing with a bomb”—we have powerful science but not the maturity to handle it safely. Julian Savulescu and Ingmar Persson go so far as to argue that we might need to biologically or chemically enhance our moral capacities (e.g., increase empathy or impulse control through drugs or genetic modification) to ensure we don’t abuse our technological might. They note that it is far easier for one malicious person to harm millions today (with a superbug or AI) than it is for one good person to similarly help millions, creating a dangerous imbalance. While “moral bioenhancement” is controversial—it raises spectres of mind control or loss of free will—the fact that it is even floated underscores the desperation of some thinkers about our moral preparedness. They highlight that evolution gave us Stone Age emotions, prone to tribalism and short-termism, but now we have nuclear weapons and synthetic biology. To be “fit for the future”, they argue, we might need deliberate intervention to boost our altruism or global empathy, or else risk catastrophe.

The peril of power in the technological era is both an external and an internal moral test. External, because technology can drastically amplify the reach of both good and evil—a compassionate policy can save millions (say, a cheap vaccine distributed widely), whereas a malicious use can kill millions. Internal, because wielding such power responsibly demands virtues that our species has struggled to practise consistently: humility, foresight, and empathy for the unseen other.

Standing at this precipice of history, it is clear that humanity’s moral struggle—between evil and compassion, unchecked power and responsibility—is reaching a new intensity. The future could unfold into dystopia or utopia, or something in between, depending on the choices we make now. What must change to tilt the balance toward a humane future?

Moral evolution begins with the psyche. If humans have innate tendencies towards bias, selfishness, and fear of the other, then individuals must consciously cultivate counter-tendencies: critical thinking, empathy, and a sense of shared humanity. This means teaching children from a young age how to take another’s perspective (to imagine life in someone else’s shoes), how to resolve conflicts peacefully, and how to spot and correct their own biases. Likewise, promoting media literacy is crucial in the digital age—young people (and adults) should learn how propaganda and misinformation work, so they are less easily manipulated into hatred. Essentially, we need to “inoculate” minds against the virus of dehumanisation, much as we do against biological viruses. On a more experimental front, techniques like compassion meditation (derived from Buddhist practice) have been shown to strengthen brain pathways associated with empathy and altruism. If we consider that prejudice and callousness are, to some degree, habits of mind, then deliberately training the opposite habits can yield more compassionate individuals.

We often treat technology as an autonomous force, but it is shaped by human choices. We should aim to design technologies that by default promote empathy and understanding rather than isolation and division. For example, social media algorithms could be tweaked to promote cross-cutting content that exposes people to constructive dialogue instead of only reinforcing biases. Online platforms could prioritise compassionate communication—perhaps through features that encourage users to pause and consider before posting an angry comment. There are interesting experiments: one project found that prompting users to imagine the perspective of someone from the opposing political party before reading that person’s post led to less toxic replies. Small design changes like this can nudge users toward empathy. In AI development more broadly, implementing the principle of “Ethics by Design” is key. Just as security and reliability are built into systems, so should ethical considerations be—whether it’s an AI medical diagnosis tool being made transparent and bias-checked to treat patients fairly, or an autonomous vehicle programmed to prioritise human life in split-second decisions.

Preventing dystopian outcomes by nurturing a more compassionate civilisation is a vital undertaking. It requires aligning many pieces: the human heart, the structures of society, and the tools we create. We will have to be both idealistic and pragmatic—idealistic in holding fast to visions of a just, empathetic world, and pragmatic in implementing incremental changes and safeguards that move us in that direction. The moral struggle of humanity is ongoing; there is no final victory where evil is vanquished once and for all. But neither is there a final defeat so long as people of conscience remain vigilant. Each generation must contend with the nature of evil, the vulnerability of compassion, and the peril of power in its own context. Our generation’s context is one of hyper-connection and super-powered technology, which raises the stakes extraordinarily high.

If there is a silver lining, it’s that many solutions reinforce each other. A more compassionate society tends to be more resilient and less prone to totalitarianism. A populace educated in critical thinking is less likely to fall for hateful demagogues. In essence, moral progress feeds on itself, just as cruelty feeds on itself. We must actively choose and cultivate the better angels of our nature, or the worst demons of our nature—whether banal or wilfully malevolent—may rise with catastrophic force. It is a choice each person and community faces.

Ultimately, understanding how easily cruelty can be normalised or rationalised steels us to reject complacency. Appreciating how fragile yet vital compassion is inspires us to protect and enlarge it. And recognising the peril of power—that any tool or authority can be turned to evil if not guided by conscience—means we must demand ethics at the core of innovation and leadership. These are the reflections and lessons that emerge from humanity’s long moral struggle, and upon them rest the prospects of our shared future.

Alongside more auspicious outcomes, two disastrous scenarios are possible in our near future: the self-extinction of humanity through war, or a dystopian, psychopath-controlled world. Under the malevolent central control of all-encompassing surveillance and guidance technology, and with no hope of the system’s collapse, the latter outcome is even worse than the former.

Technological advances will make it ever easier for authoritarian governments to zombify and control their populations. When such a government, helped by surveillance AI, knows what you are thinking and feeling, where you are and what you are doing, controls all the information you receive, and understands your personality and impulses precisely—what hope does anyone have of escaping the hell constructed for them by the resident psychopaths?

The pressure on humanity to evolve in order to survive has mounted; given the stakes and the alternatives, we have to get better. The window for resolving these problems and mitigating these risks is now, and we may never get another chance.

Friday, 21 August 2020

Metaphysics

The dictionary definition of “atheist”, as a non-believer in God or Gods, isn’t accurate, because there appear to be many people who regard the doctrinal teachings of religious institutions as culture-based anachronisms—and so would be labelled “atheist” for not adhering to definitive religious beliefs about deities—yet who believe in some higher spiritual power that they cannot define.

There are several leaps of belief in the following sentence: The universe is a purposeless collection of matter that mindlessly configured itself by chance out of nothing, existing in time with causes and effects that had no beginning. A reasonable-minded adherent might be aware of the glaring uncertainties, but state that it is more parsimonious to adopt this materialistic concept of reality than to add a God belief system as an unnecessary additional layer. Yet the certainty with which many proponents preach this position as absolute truth suggests a type of commitment witnessed in doctrinal religious belief.

An agnostic would state that the ultimate “why” questions are unanswerable, so from a practical perspective we should just be concerned with the “how” questions. The ardent atheist’s objections to agnosticism—based on the burden of proof for God being on the proponent—miss the point for an agnostic who has already ruled out religious explanations of God, but not a higher spiritual meaning and purpose to reality. A particularly zealous atheist might overplay the remit of verifiable facts by stating that opinions about ultimate meaning are irrelevant if they are not scientifically falsifiable—ignoring the fact that their own conceptual model of reality contains unfalsifiable conjecture.

I believe that one can value science and also acknowledge, as philosopher Paul Tillich did, that the word “God” might point to something far more profound than a cosmic superintendent. Tillich argued that God is not a being at all, but “being-itself”—the ground or power of being on which all things exist. This means that petty debates about whether God exists (as if God were just another object in the universe) miss the point entirely. Tillich’s insight frees us from the simplistic image of God as an old man in the sky, suggesting that whatever ultimate reality “God” signifies, it transcends any single creed or image.

The nuances of religious thought have often been flattened in modern discourse. Theologian David Bentley Hart observes that the very concept of God has grown “impoverished” in the modern mind, largely because we have forgotten the deeper philosophical insights of the past. New atheist critics often target only the crudest caricatures of faith—a proverbial bearded deity or literal seven-day creation—and declare victory over superstition. In doing so, they sometimes miss the more sophisticated understandings of the divine found in works of thinkers like Tillich, or in the mystical branches of various faiths.

One can be sceptical of traditional theism and still believe reality has dimensions that science and language fundamentally struggle to capture. There is a fertile ground where one can be a spiritual rationalist: deeply curious about transcendent questions, unwilling to close the door on the numinous, but also unwilling to accept any claim without scrutiny.

Modern atheism often aligns itself with metaphysical materialism, the belief that nothing exists except physical matter and energy. In this view, if something cannot be measured or falsified scientifically, it is not real (or at least not worth taking seriously). The materialist outlook carries a bracing simplicity: the universe is a brute fact, life a fortuitous accident, consciousness an emergent trick of brain chemistry, and any search for deeper meaning a nostalgic delusion. However, materialism itself goes beyond what empirical science can say; it makes a sweeping ontological claim that is not empirically verifiable (ironically, a metaphysical claim that “only non-metaphysical claims are valid”). Even a secular philosopher like Thomas Nagel, an avowed atheist, admits that the strictly materialist narrative feels incomplete. Nagel has been frank about his “cosmic authority problem”—a personal wish that there be no God—yet he also argues that reductive materialism fails to account for things like consciousness and reason.

Cutting-edge science has revealed a world far stranger and less material than we assumed. At the subatomic level, matter dissolves into energy and probability; solid objects are mostly empty space knit together by fields and forces. Quantum mechanics famously defies our intuition—particles that are waves, waves that are particles, influences that seem to leap across vast distances. As Nobel Prize-winning physicist Eugene Wigner observed, “while a number of philosophical ideas may be logically consistent with present QM… materialism is not.” When an observer’s act of measurement can affect whether a particle manifests as a wave or a particle, the neat separation between observer (mind) and observed (matter) becomes indistinct. Some interpretations of quantum mechanics even suggest that consciousness has a role in “collapsing” quantum possibilities—a controversial idea, but one that underscores how our epistemology might be entangled with the fabric of reality itself.

Some scientists point to the “fine-tuning” of physical constants—the way the laws of nature seem precisely calibrated to allow galaxies, stars, planets, and life—and argue this is just a lucky roll of the cosmic dice (possibly one of countless rolls if there are infinite universes). That could be, but note: positing an infinite multiverse where everything happens by chance is itself a kind of metaphysical speculation, unfalsifiable and forever beyond empirical reach. It strikes me as ironic that to avoid any whiff of purpose or design, some are willing to embrace an infinity of unobservable universes. At that point, one has arguably left the realm of Ockham’s razor far behind. Even Nagel acknowledges that the “interest of theism, even to an atheist, is that it tries to explain in another way what does not seem capable of explanation by physical science”. In other words, hypotheses of meaning or mind beyond matter arise because strict reductionism struggles with certain questions: Why is there something rather than nothing? How did matter become alive, and life become aware of itself? Are we—conscious agents—merely flukes, or an intended part of the cosmos? Science as a methodology may not answer such questions (they may be inherently metaphysical), but human inquiry doesn’t cease at the laboratory’s door. My rational mind sees the achievements of science and bows to its methodology for understanding the physical world. Yet my intuition and indeed my personal experience tell me there is more to reality than can be measured with an instrument. A truly open-minded scepticism must be sceptical not only of supernatural claims, but also of the claim that the natural (as presently understood) is all there is. The boundary between science and metaphysics is precisely where things get exciting: it’s where our knowledge gives way to wonder. At that boundary, one can remain rational—weighing evidence, avoiding logical fallacies—while also entertaining the possibility that the universe includes dimensions of meaning, value, or consciousness that transcend our current understanding.

Do I believe in God? That question is loaded with assumptions about both belief and God. Do I take as fact the doctrines concerning reality written by people in past civilisations? No. However, there shouldn’t be a one-dimensional graded scale of belief that merely gauges the percentage probability of religious dogma being correct. The metaphysical understanding that most resonates with me is that there is a soul of the universe, of which we are all a part. In this definition, God is hope: a hope that the universe is ultimately love; that all the suffering will be overcome; that life will be saved from despair; and that, despite everything, it will all be okay.

For any existence after death to be desirable, it would have to be outside of time and space, and completely beyond our current comprehension of reality—as even a limitless abundance of joy would become meaningless within the causes and effects of endless time. I believe that to thrive at being a good human is the purpose, and I tend to subscribe to something along these lines: form ends at death, but time is just a perspective from one vantage point—because the past, present, and future are really one; all things are part of each other, connected strands in the great tapestry of life; and maybe there are other dimensions of reality and incalculable vantage points. There is no insistence on certainty here; this is a non-falsifiable interpretation of experience driven by internal feeling, not logical deduction—and in no way does it affect any commitment to a rigorous investigation of the world using the scientific method. So, where do I feature on the belief scale?

For me, God is the name given to the conviction that there is a source of meaning and goodness at the ground of reality. When I speak to the divine in moments of anguish or gratitude, I do not imagine a magic problem-solver; I am communing with that hopeful part of myself that trusts the universe is not fundamentally indifferent. I resonate with Paul Tillich’s description of God as the “ground of being”—the substrate of existence and meaning. In a similar vein, I find truth in the Sufi mystic Rumi’s poetic assertion that the light is one, even if the lamps are many. “The lamps are different, but the Light is the same… one Light-mind, endlessly emanating all things,” he writes. Those lines capture my sense that whatever ultimate reality is—call it God, call it the One, call it cosmic consciousness—it underlies and shines through the various religious images and the myriad forms of life. God, in this vision, is not a dogma but a direction: an orienting ideal of unity, love, and hope.

I embrace the intuition that everything is deeply interconnected. This is closely tied to the idea of a universal mind, but it also extends to matter, energy, and life. Mystical traditions often emphasise oneness: the notion that “All is One”—whether in the Sufi idea of tawhid, the Christian mystic idea of the ground where the soul and God are unified, or the Buddhist metaphor of Indra’s Net in which each being reflects every other. On the scientific side, ecology illustrates how no organism is truly separate from its environment, and quantum physics (again) shows that particles once linked can remain correlated across cosmic distances. My metaphysical view takes this interconnectedness as a given. I like to imagine reality as an immense tapestry of relationships rather than a collection of isolated objects. Each of us is a node where the cosmic web is particularly intense and self-aware. Our actions reverberate through the tapestry in ways we can’t fully chart—hence every ethical or unethical act sends out ripples. This vision, admittedly, has a poetic flavour. It owes a debt to thinkers like Pierre Teilhard de Chardin, who envisioned all of humanity (and indeed the cosmos) converging towards a unified point of consciousness he called the Omega Point. Teilhard, a scientist-mystic, saw evolution as not only a biological process but a spiritual one, with increasing complexity and consciousness leading eventually to union with the divine. I find inspiring his idea that we are co-creators in an ongoing evolutionary story—one that is as much about the growth of spirit as the propagation of genes.

My own instinctive view is that religions share the same spiritual root, although the core message has often been corrupted by the doctrines and institutions that arose around it. This is my personal version of “spiritual but not particularly religious”. As I am most familiar with Christianity, I could be labelled Christian; however, I do adopt a filter and select only what resonates with me, mindful that the scriptures were written and edited by early practitioners of the religion, and that the biblical canon was decided by the politics of powerful men in ecumenical councils, rather than being the unadulterated teachings of Christ. Looking back at history, the cruelties perpetrated by professed followers of the religion represent the antithesis of the message of Christ; for real spirituality—the root of Christianity—is always inspired by love, joy, and peace.

The moment a spontaneous spiritual insight calcifies into an official creed, or a transformative mystical poem is reduced to a rigid scripture, the original life can begin to leach out of it. Religious institutions compile canons—deciding which texts are holy and which are heresy—and in doing so often reflect the politics and prejudices of their era. For example, the formation of the Christian biblical canon in the early centuries involved councils of bishops choosing certain gospels and epistles while rejecting others; this was not divine handwriting in the sky, but a messy human process. To note this is not to dismiss those scriptures, but to contextualise them: they are works filtered through human minds, not infallible transcripts from God. History shows that many profound spiritual voices were marginalised or branded heretical because they threatened the authority of the established clergy. Meister Eckhart, a Christian mystic who taught the soul’s direct union with God (speaking of a God beyond all images), was tried for heresy. The Sufi mystic al-Hallaj, who joyfully proclaimed “I am the Truth” (implying unity with God), was executed as a blasphemer. These examples highlight the perennial tension between mysticism—personal, unmediated experience of the divine—and orthodoxy—the sanctioned belief system of a religious organisation.

One of my core criticisms of religious institutions is how they often prioritise doctrinal conformity over personal spiritual experience. Carl Jung once quipped that one of the main functions of formalised religion is to “protect people against a direct experience of God”. It’s a startling claim, but I see his point. Institutions develop layers of dogma, ritual, and hierarchy that can end up substituting for genuine spiritual encounter. As long as you recite the creed, attend the services, and obey the rules, you’re considered religious—even if you never actually feel a connection to the sacred. In fact, if someone in the pews does have a dramatic spiritual experience outside the approved norms, it may make the clergy nervous. It’s as if religions say, “Don’t try this at home—leave the God-contact to the professionals.” My intention is not to disparage all religious authorities; many are sincere seekers themselves. But the bureaucratisation of spirituality often leads to the domestication of the divine. God or the Absolute—wild, unbounded Reality—gets confined to formulas and ceremonies. The result can be hollow: people go through motions that once had meaning, but over time the symbolism is forgotten and only habit remains. Karen Armstrong’s research reminds us that scripture itself was traditionally interpreted with great flexibility. She notes that for centuries, Jews and Christians “insisted that it was neither possible nor desirable to read the Bible literally”, and that sacred texts “demand constant reinterpretation.” Myths were understood as symbolic stories pointing to truths that reason alone could not convey. This non-literal, dynamic approach to religious truth resonates with me. However, modern fundamentalism—a reaction against secular modernity—has hardened many into treating mythos as logos, insisting on literal truth where none was originally intended. The tragedy is that this invites an equally simplistic backlash from sceptics, who correctly point out the contradictions, and the scientific and historical errors, all the while missing the underlying spiritual insights that a more fluid reading could reveal.

In carving a path of spirituality without dogma, I retain many practices and values that religions have cultivated, but I do so by choice, not by mandate. For instance, I find comfort and insight in meditation (a practice prominent in Eastern traditions) and in contemplative prayer (drawn from Western mysticism). I love the beauty of religious music and art—a Bach cantata, a Rumi poem, a Zen garden—and appreciate their sublimity without attributing them to a sectarian narrative. In essence, I construct a personal canon of that which uplifts and edifies. Ethics, too, remain central: any spirituality worth its salt must show in one’s character and actions. I take inspiration from the core ethical teachings shared across faiths: compassion, kindness, humility, and a concern for justice. What I do not do is accept any moral dictate merely because “it is written” or because an authority claims infallibility. My conscience and intuition must ultimately resonate with a teaching for me to embrace it. This approach aligns with the view that religion is not mainly about believing certain propositions, but about experiencing and doing. As Armstrong highlighted, religion at its best is about praxis—living in a way that makes the transcendent real in daily life. Thus, I prize experience over creed. If a particular ritual or prayer helps open my heart or quiet my mind, I will use it, regardless of its origin—be it Christian, Buddhist, or other. Conversely, if a doctrine instils division, fear, hate, violence, or a sense of futility, I will question or discard it, even if it carries the weight of centuries.

I embrace an openness to insights from multiple traditions without feeling the need to formally belong to any. I have been deeply moved by Sufi literature (the poetry of Rumi and Hafez), by the non-dual teachings of Advaita Vedānta and contemporary teachers like Rupert Spira, by Christian mystics like Julian of Norwich (with her radical optimism that “all shall be well”), and by Daoist and Buddhist perspectives on harmony and impermanence. Each offers a piece of the puzzle, and each also has its cultural limitations or excesses. Rather than seeing the plurality of religions as a problem—“they can’t all be right, so none of it is true”, as a cynic might say—I see it as evidence that the human encounter with the sacred is real, even if coloured by culture and language. The lamps are indeed different, but the light is one. This pluralistic approach does come with challenges. It lacks the tidy certainty and communal reinforcement that belonging to one religion can provide. There is a risk of shallowness—skimming the surface of many traditions and mastering none. But I allow myself to learn from each faith I engage with, letting it challenge me. For example, Buddhism’s emphasis on mindfulness and releasing attachment has been a helpful antidote when my hopefulness turns into craving or clinging. The Christian ideal of grace—unconditional love given freely—humbles me when I become too prideful. Sufi devotion ignites my heart when my abstract philosophising grows arid. In this way, I remain grateful to religions while not confining myself to any single one.

Adopting ritual and reverence without binding belief has given me a sense of connection and meaning that pure scepticism never did. I do not need to believe that a certain scripture is the infallible word of God to find comfort in its verses; I do not need to believe a ritual literally changes the cosmos to feel it change me internally.

We live amid conflict, injustice, and ignorance. Believing that all is one and that love is our destiny can seem naively optimistic in the face of daily news filled with division and hate. However, I see the role of metaphysical hope not as a blindfold but as a guiding star. It informs how I respond to the darkness. If I think humanity is nothing more than a cosmic accident, I might fall into nihilism or selfish hedonism, reasoning that there is no deeper purpose to strive for. I think of the wisdom of someone like Viktor Frankl, who in the horrors of the concentration camps found that those who could find meaning in their suffering were more resilient. While Frankl’s approach was secular, I complement it with a spiritual trust that even in the darkest times, the light of meaning hasn’t been extinguished. There is a sense that every experience, even painful ones, can serve a purpose in the great tapestry.

In a world riven by cynicism and cruelty, some might argue that high-minded spiritual ethics make little difference. But spiritual growth, to me, is largely about enlarging one’s circle of identification: from ego to family to tribe to nation to all humanity to all sentient beings. It is a widening of the heart. There may be no finish line, but every step matters. If enough individuals adopt a spiritually rational outlook—combining clear-eyed reason with a heartfelt sense of sacred interconnectedness—then perhaps societies could shift in remarkable ways.

I consider it wise to approach the transcendent with what Zen Buddhism calls “beginner’s mind”, an attitude of openness and lack of preconceptions. This is not only epistemological but also spiritual: it means bowing before the mystery of existence and admitting that a finite mind cannot grasp it all. Paradoxically, accepting this not-knowing brings a profound peace. I am content to listen to others and continue refining my understanding.

We are meaning-seeking creatures, and even the triumphs of science have not quenched that thirst for the numinous. By approaching metaphysical questions with both an open heart and a critical mind, we can refuse to settle for sterile nihilism or irrational fideism. Instead, we step into a middle space—a space of questions, imagination, and conjecture. This may not fit neatly into any box on a survey, but it is sincerely mine.

As such, I will continue onwards, trusting that in the grand scheme these efforts are themselves meaningful; for ultimately, humanity will survive if we are loving to the world and to each other. And if the spark of consciousness within us endures for billions of years, then we are currently the early originals. Maybe we are at the stage where we are just starting to recognise some shapes.