The Tomorrow War

The Tomorrow War is an intriguing film. [Major spoilers below.] It is something of an amalgamation of World War Z and Edge of Tomorrow, but it is also a fun and dynamic alien invasion movie in its own right. It has also crystallised something for me that I’ve felt for years but never quite known how to articulate…

When I was growing up, my grandpa loved old war movies. Mostly “prisoner of war” stuff, like Colditz and The Great Escape. I liked them too. They were often great family viewing. (At least in Britain, but let’s not go there…)

As I grew older, I remember being quite surprised about his love of such films. He fought in World War II, after all, and served as a navigator for the RAF. But he never talked about it. I don’t think he was under any illusion that war was something to enjoy or remember fondly. And yet, there was something about seeing this kind of romantic vision of wartime that was cathartic or calming for him, I think. It allowed him to relive what must have been one of the most affecting times of his life, but with a certain amount of distance and through a certain kind of soft-focus filter. He was just one man in a nation of men who, after the war, needed to tell themselves a certain kind of story.

In more recent years, it is interesting to see how that kind of film has developed with a new kind of veteran in mind. Ever since American Sniper came out in 2014, after a generation of veterans were starting to settle back into civilian life post-9/11 and the war on terror, there have been periodic film releases that insert a Chris Pratt or a Bradley Cooper into the mix — basically any young white contemporary American everyman — in order to tell a story (whether explicitly or implicitly) about duty and responsibility and, perhaps most importantly, the emotional toll of coming home. These films aren’t dramatising what happened over there — this isn’t Jarhead or Black Hawk Down or a film from that generation of war movie — but what happens when it’s (supposed to be) all over. They are essentially PTSD films, told largely through flashbacks.

I have no problem with that kind of narrative. I find American militarism pretty nauseating, truth be told, but I do have a soft spot for films that explore its complexities. (I’ve seen American Sniper more times than I’d care to admit, actually — the result of a hangover from a childhood obsession with Clint Eastwood and his particular brand of reactionary anti-hero, I think.) As films, they can actually be quite charming, even if they are clearly made with a certain kind of ideological standpoint in mind. But what is telling, in consuming this sort of movie, is watching how that standpoint changes over time.

The Tomorrow War is fascinating in this regard, mainly because, through its time-travel drama, it facilitates a major subplot that explores the impact of intergenerational PTSD quite specifically.

Chris Pratt is a veteran of the Iraq War trying to kickstart a new life and make something of himself after the military. But it’s not going very well for him and he’s getting very sad and angry about it. When we meet him, he’s just walked into a house party he’s supposedly hosting, but he doesn’t interact with anyone there. He’s like a ghost, almost, with no time for anyone but his wife and daughter and, most significantly, some people on the end of a phone line who might be offering him his dream job. But he doesn’t get it. And he takes the rejection surprisingly badly. The world fades out around him, as if this setback in his career is taking him back somewhere much darker. A dark sadness is rising within him.

Alongside Pratt’s clearly undiagnosed PTSD, we learn about how he’s also deeply resentful of his father, played by JK Simmons. When he arrives home, he’s given an unopened Christmas card from the man, which he throws in the bin. (Though the narrative suggests Pratt goes dark over his failed job interview, this minor detail looms ever larger as the story progresses, as if the Christmas card is the real trigger for him.) Simmons, we later learn, came back from ‘Nam a broken man and wasn’t really present when Pratt needed him most. They’re estranged and not really on speaking terms, largely because Pratt refuses to engage with him.

Then the aliens arrive. Pratt goes into the future to fight a war, and whilst he’s there he meets his daughter, fully grown and now a colonel fighting off the invaders — and she resents him. She keeps him at a distance and later tells him some home truths (albeit related to a life he hasn’t lived yet). She tells a story about how, when she got older, he and her mother separated and he was a bit of a mess. In the end, just seven years later, she watched him die following a car crash, after they’d been estranged for years. It is a case of “like father, like son”, as it turns out. Whatever was eating Pratt when we first met him devoured him whole a few years later. This disturbs Pratt greatly.

But something also clicks for him. Suddenly, you see this intergenerational picture being painted. Post-‘Nam dad is followed by post-Iraq son, and tomorrow-war daughter isn’t really having any of it. Later, when Pratt is unceremoniously sent back to the past, having watched his future daughter die, he sets out on a redemption mission to destroy the aliens — frozen in ice on the Russian tundra, as it turns out — in order to make sure the war never happens and his daughter never has to die. But in the process, he ropes in his dad, and together they’re two shaken veterans — one of them maybe an alcoholic — doing what they unfortunately do best and trying to save the world.

The psychological picture painted here is fascinating. None of this really takes precedence. It is all back story; little details that paint a big picture, which is nonetheless a familial backdrop to a big spectacular alien invasion movie. But these little details change the film in quite a profound way, I think.

Despite how it might sound, this isn’t quite the gung-ho American militarism we’ve come to expect. It doesn’t have much ideological pomp about American exceptionalism and America’s role as the world’s police force. Pratt is sent on a suicide mission into a war that America (but also the world) is definitely losing. It’s Vietnam, yeah, but it’s also the Middle East. But then, the aliens are not the Viet Cong or the Taliban. This isn’t a fantasy do-over, winning the war that was previously lost. This is a film about a band of troubled veterans who truly want to redeem themselves, haunted by the things they’ve done or the world they might have created through their actions. This is a band of veterans turning the tables. A Vietnam war vet and an Iraq war vet fighting off an invading species. This isn’t Predator, with a tank-like Schwarzenegger fighting off the single alien guerrilla, getting his own back on the enemy and securing the cathartic victory otherwise denied him (although the aliens in The Tomorrow War do look like the monstrous lovechildren of a xenomorph and a Predator). This is a film about vets redeeming themselves by fighting off a horde of (notably white) invaders, rather than being one of them. It’s a film about war vets getting the sharp and sour taste of their own medicine, and wanting to use the time they have left to fight off an invading force. It’s a film about war vets stopping a war from ever taking place.

The Tomorrow War feels like a film for anti-war war vets, in this regard, dressed up as an overblown alien invasion movie. This feels like burnt-out American militarism creating a narrative where it gets to save the world from itself.

I Paint, Therefore I Am:
On Painting, Patrons, and the Rise of Liberalism


Understood in the most general terms as representations of ourselves, self-portraits are among the oldest art there is. The Cueva de las Manos (Cave of Hands) in Argentina, for instance, contains dozens of stencilled handprints that are almost 10,000 years old, and artists have been using themselves as models or tools for their paintings ever since.

But the owners of the hands in that Argentinian cave would hardly recognise the selves depicted in the portraits of the modern era. Indeed, there is no “self” as such represented on that wall. What we see is a group, a collective, a community. The “self” of a self-portrait, on the contrary, is something quite specific.

Philosophically speaking, the “self” is an abstract and heuristic concept for our experience of ourselves as individuals. Despite what we might now assume, we have not always thought of ourselves in this way. That we have forgotten our old senses of self is telling, however. The “self” is such a powerful and intoxicating concept that it overrides and manipulates all forms of self-understanding that came before it. But it is only by understanding this shift that we can appreciate the revolutionary stature of the self-portrait when it first emerged before us.

The Delphic motto “know thyself”, for example, is one of the oldest and most enduring sentiments expressed within Western thought. Discussed repeatedly by Plato, in no fewer than six of his dialogues, it served as a cultural touchstone long before even he put it on the page. But back then, to “know thyself” typically meant to know one’s place in the general order of things. It was a move away from individuation, as a source of ignorance and as a product of fear and isolation.

We can see this position adopted in other examples of ancient Western culture as well. Consider Sophocles’ most famous Theban play, Oedipus Rex, which was first performed in 429 BC, during the same decade it is estimated Plato was born in. Despite its age, the story of Oedipus has retained a considerable if anachronistic influence over modern conceptions of the self, ever since it was utilised by Sigmund Freud as one of the founding allegories of psychoanalysis. For Freud, the Oedipus Complex was his term for that strange and fraught process of self-definition, when we come to appreciate, often through tantrums of inchoate sexual jealousy, that we are ourselves distinct beings and are in competition with others for our mothers’ attentions.

However, although we interpret it very differently today, Oedipus’s quest is hardly a story of self-discovery and individuation. Initially, there is little question, in Oedipus’s mind at least, of who he is as an individual; at the beginning of the play, he could not be surer of this. His true self is only uncovered when he fully understands his relations to those around him. The secret to be uncovered is, instead, who his mother and father are. It is not Oedipus’s true self but his true place in the social order that has been obscured from him.

Later philosophical conceptions of the self differed from this considerably, even though they often retained an interest in this ancient source material, as Freud’s particular reading of Oedipus Rex already suggests. Even Plato’s discussions of “knowing thyself”, often mentioned in the context of governance and statecraft, were later used to legitimate the authority of liberal governments in Renaissance Europe, inverting the tale of Sophocles’ doomed king to suggest that the self is not so easily reconstructed from our social relations.

Though it is, of course, influenced by those around us, the self is essentially our understanding of those characteristics that are innate to us alone. It is what is left of us when we strip back everything else that is otherwise shared. Intriguingly, this understanding of the self is only slightly younger than the self-portrait as an artform; both came into common parlance and practice towards the end of the Middle Ages.

For philosophers and political theorists at that time, the difference between the individual self and a collective subject was a novel but important distinction to make. It was René Descartes, writing in the 1630s, who first insisted upon such a distinction for philosophy. In his influential Discourse on Method, an autobiographical treatise on the very nature of thought and reason, Descartes hoped to provide a new methodology for separating truth from falsehood.[1] To do this, he stripped back everything that, he believed, could not be trusted. Approaching reality with a radical doubt, he began to pretend “that everything that had ever entered my mind was no more true than the illusions of my dreams.”[2] This included information gathered by the senses and just about everything else that came into the mind from the outside world. When all of this was discounted, Descartes was left with one thing – that is, the “thing” that thinks. “I noticed that, during the time I wanted thus to think that everything was false, it was necessary that I, who thought thus, be something.”[3] I think, therefore I am was his resulting declaration, and with that he established the self “as the first principle of the philosophy I was seeking.”[4]

This foundation was soon extended into other areas of thought as well. The politics of liberalism were also formalised at this time, for example, and were similarly built on a new conception of individual liberty and rights – the self as a first principle for politics also. A few decades after the publication of Discourse on Method, in his Essay Concerning Human Understanding, John Locke echoes Descartes’ philosophical position, writing that the “Self is that conscious thinking thing … which is sensible, or conscious of pleasure and pain, capable of happiness or misery, and so is concerned for itself, as far as that consciousness extends.”[5] This Cartesian foundation nonetheless responds to certain political ideals. It turns out that, for Locke, this consciousness can extend quite far indeed, depending on your social status. In fact, by Locke’s measure, not every living thing was “conscious” of itself in the same way. As a result, though much of his work pays lip service to universal freedoms to be enjoyed by all individuals, this was not always true in practice, especially by today’s standards.

Locke argues that the word person – his supposedly “forensic term” for the self – “belongs only to intelligent agents capable of a law, and happiness and misery.”[6] To be a person, then, echoing Descartes, is to possess a form of consciousness that can reason with itself; that can reflexively ascertain itself as conscious. But, in Locke’s hands, this was not the same sentiment as “I think, therefore I am.” Locke instead positioned the self as a reflexive being that thinks in accordance with reason. Rather than the reflexive self being a foundation upon which reason can take place, the cart is put before the horse. The self doesn’t just reason – it is fundamentally reasonable.

Some of Locke’s resulting conclusions are relatively innocuous. Under his criteria, an animal is not a person, for example, because animals do not have laws or experience emotions in the same way that humans do. (Something we are only more recently starting to challenge.) But neither, in Locke’s view, are supposedly uncivilised persons, whose rights do not warrant the same respect as persons from more “reasonable” societies. This suggestion was very influential, and particularly disastrous given Locke’s political influence over the colonisation of North America – an influence that can be seen explicitly in historical studies of the Transatlantic Slave Trade, during which the emotions of slaves transported to the New World, clearly expressing trauma and grief, were ignored, denounced, or simply not perceived.[7]

The political impact of Locke’s “self” did not stop there, however. With a little help from Thomas Hobbes and his 1651 work Leviathan, “the self” also became a term for a kind of individual sovereignty, analogous to that of “the nation-state”. Self-knowledge was less defined by what we could be most sure of, as in Descartes’ formulation, and more by what we can claim possession of – whether that be the mind or the land underneath our feet. In this context, Descartes’ “I think, therefore I am” was soon extended into the realm of governance and property rights, making “I own, therefore I am” a more accurate founding doctrine for the politics of classical liberalism, settler-colonialism and, a few centuries later, neoliberal capitalism as well.

Whether in theory or in practice, it was already clear to many that “the self” was not the best foundation for a new era of thought and commerce. As such, Cartesianism, liberalism, and their legacies continue to be challenged by philosophers and political theorists to this day. However, given the ever-peculiar experience of being a conscious subject, Cartesian doubt remains an attractive starting point for many. Geopolitically, it continues to inform settler-colonial projects like the Israeli occupation of Palestine, for example.[8] Pop-culturally, Descartes’ questioning of the existence of a mind-independent reality can be found in everything from Nineties Hollywood blockbuster The Matrix to a 2020 hit single by pop star Billie Eilish.

And yet, despite this persistent influence, Descartes’ supposedly novel conception of the self had already been subjected to considerable scrutiny in the arts by the time he wrote his Discourse on Method. In fact, the first self-portraits emerged around a century before Descartes’ birth, and by the time his thesis was published, the self had already been openly investigated, even mocked, as an unstable but nonetheless generative concern in various artistic movements.

What is particularly notable about these early self-portraits is that they were often knowing attempts to depict what Jacques Lacan would (much) later call the “ideal-I”.[9] This ideal is formulated during what Lacan calls “the mirror stage”, a process during which a child first adopts an external self-image, and therefore a mental representation of the “I”. But this “I” is idealised, in that it is a generally stable mental conception. Our bodies are, of course, not stable, and so these ideals shift and adapt as we grow and age, but the ideal conception of ourselves remains forever out of reach.

Though Lacan would not theorise the mirror stage until the mid-twentieth century, it is a process analogous to the developments in self-perception that occurred during the European Renaissance, when artists began to consider themselves in an entirely new way. Their ideal selves, newly depicted on canvas, seldom corresponded to reality either, but this often did not matter. To depict an ideal and improve upon reality was instead seen as a virtue by many, as if to be able to paint something in a form more beautiful than nature was evidence of human exceptionalism and our capacity for self-transformation. This belief influenced a new Renaissance humanism whilst, for others, it demonstrated our direct connection to the divine. This attitude was as present in the self-portraits of the era as it was in Renaissance landscapes and still lifes.

The most famous examples of an artist depicting their “ideal-I” can be found in the works of Albrecht Dürer, who produced some of the most notable self-portraits of the 1500s. Though there were self-portraits before his, no other artist produced so many. The writer John Berger went so far as to declare Dürer “the first painter to be obsessed by his own image.”[10]

Among his plethora of selfies, Dürer most famously painted an immensely handsome portrait of himself that was so popular it eventually went on display in the town hall of Nuremberg, Germany, where the artist was born, lived and eventually died. “I, Albrecht Dürer of Nuremberg, painted myself thus, with undying colour, at the age of twenty-eight”, an inscription on the canvas reads. It is a painting layered with unsubtle symbolism and amusing strokes of self-aggrandisement. Not only does Dürer look like a classic depiction of Jesus Christ, his initials – a stylised “AD” – double up as both his signature and an allusion to the calendric label for those years following the birth of Christ: “Anno Domini”. Latin for “in the year of our lord”, it is unclear who “our lord” is supposed to be – Christ or Dürer himself.

Looking at this portrait today, one might expect Dürer’s self-image to be deemed sacrilegious, and most modern descriptions of the painting do mock it for the artist’s exuberant pride in himself, but in the early 1500s people flocked from far and wide to see the painting after it was put on public display. It may well have reflected Dürer’s hyperinflated sense of self as a master painter, as if he was on a par with God regarding the beauty of his artistic creations, but audiences seemed to agree with him. In fact, as a result of his work’s popularity, people became as obsessed with the man as they were with his paintings. Art historian James Hall notes that Dürer’s cascading curls were so famous that “after his death in 1528 his admirers exhumed his body to take casts of his face and hands, and cut lockets of hair.”[11]

However, beyond these tales of early celebrity, John Berger proposes another, far more interesting reading of the artist’s self-obsessed body of work. He compares Dürer’s Christ-like image to an earlier self-portrait painted in 1498, in which Dürer looks no less regal but perhaps a little more anxious, like a young debutante entering society for the first time. Berger suggests there is “perhaps a slight over-emphasis on his being dressed up,” as if “the portrait half-confesses that Dürer is dressing up for a part, that he aspires to a new role.”[12] Having recently travelled to Italy to see the latest trends within Venetian painting, and having heard the latest ideas shared by Italy’s art critics, Dürer no doubt “came to realise for the first time how independent-minded and socially honoured painters could be.”[13]

Contrary to our modern interpretations of Dürer’s pride, Berger wonders if the young artist wasn’t so much a prima donna but instead the first to depict a new kind of self. He argues that modern viewers give too much credence to their “many complacent assumptions of continuity between his time and ours.” We should instead be humbler and acknowledge that understanding “Dürer historically is not the same thing as recognising his own experience.”[14]

Berger argues that Dürer saw himself as a new kind of European man and was consistently fascinated by the cosmopolitan figure reflected back at him. Indeed, he was one of the very first examples of a “Renaissance Man”, newly aware of the potentials of his own will.[15] “When he looked at himself in the mirror he was always fascinated by the possible selves he saw there”, Berger writes.[16]

Though only beginning to emerge in the portrait from 1498, this is perhaps even more true of his Christ-like appearance in the self-portrait he painted two years later. Berger argues that it could not have been the painter’s intention to be blasphemous; he was a devout Roman Catholic, even after Martin Luther instigated the Reformation in 1517.[17] This makes the painting aspirational rather than self-aggrandising. “The picture cannot be saying: ‘I see myself as Christ’”, Berger argues. “It must be saying: ‘I aspire through the suffering I know to the imitation of Christ.’”[18] If Dürer was so self-obsessed, it was as a true narcissist. He hoped, more than anything, to be transformed.

Though one of the first, Dürer was far from the last artist to see himself in this way. His beautiful self-portrait is exemplary of a growing trend across Europe at that time, when artists were depicting themselves as notable members of society, rather than hired hands serving their rich and famous patrons. As a result, self-portraits took on an aura akin to contemporary headshots of famous actors and celebrities, and their grandiosity served a similar professional purpose as well. They were like all-in-one business cards or curriculum vitae, containing everything a curious patron might want to know about a person. First and foremost, they presented the viewer with both the artist and their skills, but they also occasionally advertised an individual’s social circle, as well as their personal interests and possessions.

In Italy, where self-portraits were especially popular, there developed a trend for artists to paint group portraits of themselves amongst various figures from high society – an early example of a professional portfolio, perhaps, or an antecedent to that Instagram staple, the group selfie, showing off your friends in their hottest outfits before you all hit the town. Less depictions of group identity, these paintings were made specifically to boost the social standing of the individual painter, who usually occupied a central panel whilst surrounded by studies of his famous friends. It was the beginning of a transitory period, where the social subject, as a member of a community, was transformed into a social self, with a person’s popularity and friendships being adopted as an individual virtue.

The uneasy or exaggerated forms through which these representations of self were manifest have never really gone away, but this is not a sign of their stability as aesthetic forms. Though they may have tried to adopt an ideal-I, the artists of the Renaissance did not eventually settle into stable identities. Ideal selves remained out of reach for the individuals concerned – although Dürer’s self-portraits were clearly adopted by others, coming to represent him in the popular imagination. For others, the gulf between self and self-portrait both narrowed and expanded. Allegorical self-portraits soon became popular, with artists inserting themselves into imagined scenes, but the psychological depth of such paintings provided further insight into an artist’s experience of themselves as an individual. Such an experience was not always positive. Soon enough, what began as sincere self-aggrandisement slipped into irony and irreverence, not to mention self-critique and self-deprecation.

Working almost a century after Dürer’s rise to fame, Caravaggio was easily the most infamous provocateur of the Italian Renaissance. His depictions of the self are evidence enough of this fact. Though he produced self-portraits in which he looks very handsome indeed, he was not partial to depicting himself as one of the rich and famous, like so many of his peers. All too familiar with the values and expectations of his patrons, particularly the Catholic Church, Caravaggio instead devised bold new ways to subvert them. These subversions did not go unnoticed. Unlike Dürer, his paintings were deemed sacrilegious acts, and his various controversies are well-documented.

These include hiring sex workers as models when painting commissions of the Virgin Mary and depicting the dirty soles of saints’ feet. But more interesting than his controversies are his self-portraits, in which he depicted himself in several surprising and even unflattering roles. Whereas Dürer hoped to be Christ reborn, Caravaggio saw himself as the devil incarnate. But his self-portraits were not demonstrations of a kind of pantomime showmanship, playing the villain for shock value and infamy; his unconventional selfies were often sincere and complex attempts at self-critique, even when dripping in irreverence.

In the very last years of his life, Caravaggio used his own likeness to paint the severed head of the giant Goliath, mouth agape and eyes bulging, with blood spurting from his ragged neck. In the original Bible tale, Goliath is a giant Philistine threatening the Israelites on the outskirts of Jerusalem. He is confident in his ability to squish all opponents who might try and challenge him, and so he goads the Israelites into sending forth a champion to duel him. A young shepherd, David, approaches the giant with a slingshot and some stones. To everyone’s surprise, David manages to knock Goliath unconscious with his ranged attack, before quickly chopping off his head. It is a tale today synonymous with upset wins, when unlikely underdogs bring down well-established opponents. Though it may not have held the same idiomatic associations then as it does today, could we interpret Caravaggio’s depiction of David’s victory, with the painter casting himself as the dead giant, as an expression of his own careerist insecurities? Caravaggio would die in 1610 and so the painting was one of his last works. Was he afraid some young new talent would knock him from his pedestal? Unfortunately, it seems that Caravaggio’s fears for his own head were far more literal. Indeed, many of his later works take beheadings as their subject matter, and each seems to express either a fear for his life or painterly pleas for mercy.

Caravaggio was in exile for much of his final decade. His reputation for fighting, insolence, and petty crime made him a target for criminals and law enforcement alike. But his reputation was ruined utterly when, in 1606, he killed a man named Ranuccio Tommasoni. One story goes that Caravaggio and Tommasoni had bet on a game of tennis and disagreed on the outcome. Another version of the tale suggests that Caravaggio was jealous over Tommasoni’s relationship with Fillide Melandroni, a local sex worker who had modelled for Caravaggio on several occasions – most famously as Judith in yet another gory painting of an assassination, Judith Beheading Holofernes. Papers released by the Vatican in 2002 seem to confirm the latter tale.[19] Whatever the true source of their disagreement, the pair decided to settle their differences by having a duel. Caravaggio won that duel and attempted to castrate his opponent as punishment. Tommasoni died from his injuries.

Caravaggio had not intended to kill his rival and the fallout from the botched duel was complex. The painter’s life was turned upside down; he went on the run, travelling the length and breadth of Italy, and even spending time in Sicily and Malta. Whilst in exile, he painted himself as Goliath, but this was not the only painting to predict the painter’s imminent demise. He also painted Salome with the Head of John the Baptist, a painting in which the severed head once again looks like Caravaggio himself; Salome also resembles his former mistress, Fillide. The painting was presented to Fra Alof de Wignacourt, the Grand Master of an order of Maltese knights, as a gesture of goodwill. Having fled to Malta, he was later driven off the island, either because news of his crimes had reached the Maltese noblemen, or because Caravaggio still couldn’t behave himself and once again ended up on the wrong side of the law. No doubt already exhausted and tired of life as a fugitive, in repeatedly offering up his head on a painted platter, Caravaggio longed for mercy. He didn’t get it.[20]

Caravaggio’s self-depiction as Goliath is just one example of his work’s psychological and reflexive depth. Less graphic but no less self-destructive, he also painted himself as Bacchus, the Roman god of wine and fertility, as well as madness and festivity; otherwise known in Greek mythology as Dionysus. Rather than a jovial character, Caravaggio’s Young Sick Bacchus wears a queasy grimace and green skin. He is holding a bunch of grapes, as if ready to keep eating, but resembles a drunk at a party who has had one too many and should really start thinking about going home. Against his own better judgement, he is instead trying to keep up appearances. It is, in many respects, a subversion of Bacchus’s character. Whereas Narcissus may be associated with a kind of self-intoxication, the drunken Bacchus is instead a figure of social abandon. Those who follow him are freed from an otherwise suffocating self-consciousness. However, in Caravaggio’s hands, this Dionysian spirit is less clear cut. Inverting the narcissism of a self-portrait, Bacchus’s detachment from the self is nonetheless depicted as its own kind of sickness. Though carefree and hedonistic, a glutton for pleasure, Bacchus is as grotesque as any of Caravaggio’s other portraits of mythological monsters.

Each of these paintings seems to tell us something about Caravaggio’s sense of his own position in Renaissance society. Rather than exercise the braggadocio of many of his more well-to-do peers, Caravaggio painted himself as a vanquished giant and a sickly hedonist, suggesting his lifestyle wasn’t all it was cracked up to be. Though none of these paintings can be labelled as “self-portraits” in a traditional sense, as allegorical paintings of the self they are arguably even more accurate depictions of Caravaggio’s self-understanding, in that they allude to an inner experience that the rest of the world was not privy to. Those elements that are most like or relevant to Caravaggio himself are hidden, obscured, or merely gestured at symbolically. Though they may have been exploratory and reflexive for the man himself, as viewers of his paintings we are only made more aware of our distance from him – the mythologised painter with a bad reputation. It is as if, despite the fact he is wearing a series of masks, Caravaggio’s portrayals of others tell us something more compelling and less vainglorious about the person underneath than Dürer’s self-portraits ever could.

Caravaggio is also said to have painted a now-classic depiction of Narcissus. Some scholars dispute its authorship; it was first attributed to the painter as recently as the twentieth century. The painting was most likely produced in the early 1600s, but even if we were certain it was the work of the Renaissance’s chief connoisseur of causes célèbres, it would remain unclear whether he used himself as a model or someone else. Regardless of its true authorship, as an unusual painting of Narcissus it still tells us a great deal about the time in which it was made.

The painting is eerily minimal; a striking example of the tenebroso style. Narcissus is enveloped in shadow and darkness, and we cannot see the world beyond him, only the kneeling figure and his gloomy reflection. Even the riverbank on which he sits seems dead and barren, as if the solitary hunter has become marooned on some terminal beach. If narcissism is an imbalance in the relationship between self and world, Caravaggio’s Narcissus has lost touch with the world altogether. Compositionally, his is a form totally in orbit of itself.

Though Narcissus may be that quintessentially (if extremely) reflexive subject today, for Caravaggio this reflexivity may have had another purpose. As with his paintings of Goliath and Bacchus, with so little of the world around him on display, it is the act of looking itself that is the focus of the painting, making it less a comment on the relationship between self and world and more an evocation of that divide between artist and audience.

This may have something to do with how Italian artists and writers understood the myth of Narcissus during the Renaissance. For Leon Battista Alberti, for instance – the influential humanist whose treatise On Painting shaped Renaissance art theory – Narcissus was “the inventor of painting”. On the one hand, this may be a reference to the average artist’s self-concern; the thrill of having one’s work admired and loved is, of course, a euphoric and narcissistic high. But there is another interpretation here. It is as if, for Alberti, Narcissus’ fury at his own impotence, his inability to capture and possess the reflection that has so captivated him, a reflection of his own nature no less, is an allegorical retelling of the experience that first drove the human species to paint. “As the painting is in fact the flower of all the arts, thus the whole tale of Narcissus perfectly adapts to the topic itself”, he argues. Narcissus’ metamorphosis is, in this sense, the primal scene of art history. After all, what is it to paint, Alberti asks, “if not to catch with art that surface of the spring?”[21] The emergence of culture is Narcissus’ metamorphosis in reverse. As the flower is transformed into a transcendental object that we cannot know or possess, we attempt to remake it by our own hand.

This reading anticipates the philosophy of Immanuel Kant, perhaps the most famous critic of Cartesianism in the centuries that followed the Renaissance. In his Critique of Pure Reason, first published a century and a half after Discourse on Method, Kant refutes Descartes’ “material idealism”, which he defines as “the theory that declares the existence of objects in space outside us either to be merely doubtful and unprovable, or to be false and impossible.”[22] For Kant, there is a clear relationship between subject and object, and this is true enough of paintings themselves. Objects, Kant observes, affect us. We sense them, and these senses intuitively give rise to understanding. But to understand something intuitively is not the same as being in possession of some rigorous conceptualisation of human behaviour. There is an a posteriori understanding that comes directly from experience and not from reason or theoretical deduction. Nevertheless, it is an a priori understanding that we should be striving for – an understanding that arises from scientific reason and analysis, independent of our personal experiences. It is through the understanding, Kant argues, that “objects are thought, and from it arise concepts.”[23]

In the present context, Kant’s so-called “transcendental aesthetic” suggests that it was our thinking about the self-portrait as an object that eventually gave rise to the concept of “the self” – and the art-historical timeline certainly supports this reading. Contrary to Descartes’ self-mythologising account that the concept of the self was innate to his own mind, and therefore conjured without any influence from the outside world, Kant observes that, whilst we cannot fully know things in themselves – that is, beyond their perception by the human senses – they can nonetheless elicit responses in us that tell us about the world in which we live. Self-portraits, then, as expressions of a posteriori experience, provide the foundation on which to build an a priori account of “the self”.

Intriguingly, this renders Caravaggio’s painting of Narcissus less a depiction of a reflexive subject than a reflexive object in its own right – a painting of the birth of painting. For art historian Susanna Berger, this makes Caravaggio’s Narcissus a “meta-image”. She suggests that, during the Renaissance, “such self-aware paintings could … thematize the potential fictiveness of visual experience” for the viewer, in the way that their content and structure echo the act of painting itself or, additionally, the very act of looking at a painting. “In visualizing acts of observation”, Berger argues that meta-images “turned gallery visitors into representations on display, an effect that would have made the spectators’ identification with Narcissus even closer.”[24] This is to say that paintings like Caravaggio’s Narcissus not only dramatize an artist’s own self-consciousness but raise that same consciousness in the viewer as well. Caravaggio may have been aware of this. Just as he lampooned the habits and values of his patrons on various other occasions, perhaps Narcissus was another knowing nod to our growing obsession with images. Just as the Catholic church betrayed its own narcissism in commissioning grand representations of its own mythology, so too did other patrons of the arts get off on the very act of looking at those objects that they owned.

In his final years, Caravaggio played up to this narcissism explicitly, hoping that, in painting his head on a platter and sending it to someone who could influence his future, he could sate the desires of those who wanted his actual head on a spike. Here, the seeds that would eventually bloom into Locke’s liberalism begin to sprout – to “own” a Caravaggio was an acknowledgement from the artist that his noble patrons also “owned” the man himself. Whereas Dürer painted his own power, Caravaggio hoped to paint and flatter the power of others, including their power over him, forcing the viewer to reckon with their own cultural impact and influence. John Berger recognised this same tendency in Caravaggio’s oeuvre. If this Roman rebel was so arrested by self-hatred, routinely depicting his own precarity, perhaps that is because he had known the effects of living under this kind of power his whole life. As Berger writes, he was “the first painter of life as experienced by the popolaccio, the people of the back-streets, les sans-culottes, the lumpenproletariat, the lower orders, those of the lower depths, the underworld.” Through first-hand experience, he could avoid “presenting scenes” and instead depict “seeing itself”, as through the eyes of the lower classes.[25] “He does not depict the underworld for others: his vision is one that he shares with it.”[26] But this does not result in a new era of artistic sympathy and representation. As King Oedipus found out the hard way, the Delphic motto to “know thyself” does not automatically equate with an ability to like thyself.

The rise of the self-portrait was fraught, in this regard. Though we have repeatedly emphasised the liberal worldview, treating the self-portrait as an artistic depiction of our experience of ourselves as individuals and a form of painterly encouragement to “know thyself”, our place in a wider social order is never far away. As such, though they lived a century apart, with very different styles and concerns, both Caravaggio and Dürer were two subjects newly aware of their power and the power of others, and how that power could be wielded, from within and without.

[1] Discourse on Method is the most common abbreviation of the full and unwieldy title of Descartes’ work, which is Discourse on the Method of Rightly Conducting One’s Reason and of Seeking Truth in the Sciences.

[2] René Descartes, Discourse on Method and Meditations on First Philosophy, trans. Donald A. Cress. Indianapolis and Cambridge: Hackett Publishing Company, 1993, 18.

[3] Ibid., 18-19.

[4] Ibid., 19.

[5] John Locke, An Essay Concerning Human Understanding, ed. Roger Woolhouse. London: Penguin Books, 1997, 307.

[6] Ibid., 312.

[7] […]

[8] At the time of writing, the state of Israel’s continued occupation of Palestine and its ethnic cleansing of Palestinian neighbourhoods in East Jerusalem has recently led to a short but disastrous conflict, in which a dozen Israelis and over 200 Palestinians were killed. The state of Israel is a perfect example of how liberal politics can lead to atrocities in the twenty-first century. For Zionists, the Israeli occupation of land is explicitly tied to their theological and ontological ideals. To be Jewish, they suggest, is to have a home in Israel. As a result, to challenge Israel’s “right to exist” as a nation-state is, for many, to challenge the right of Jews to exist as a people. National sovereignty is equated with individual sovereignty; politics and ontology are fatally entwined; ideology is hidden under a flawed understanding of the very basis of human consciousness and reason. Many critics of Zionism argue that this is a false equivalence – not a truth but a liberal ideal – and it is very possible to be Jewish without violently claiming ownership of contested land and property. It is telling that it took until 2021 for this view to go mainstream.

[9] […]

[10] John Berger, Portraits: John Berger on Artists, ed. Tom Overton. London and New York: Verso Books, 2017, 56.

[11] James Hall, The Self-Portrait: A Cultural History. London: Thames & Hudson, 2014, 85.

[12] Berger, Portraits, 58.

[13] Ibid.

[14] Ibid., 57.

[15] Though best known as a painter, Dürer also wrote several books on mathematics and city planning, as well as an artistic treatise on perspective and bodily proportion.

[16] Berger, Portraits, 59.

[17] Following the publication of his Ninety-Five Theses in 1517, Martin Luther successfully orchestrated a split from the Roman Catholic Church, which he criticised for its political overreach and abuses of power. This included the church’s claims to absolve the sins of wealthy donors, in a kind of “cash for absolution” deal that Luther considered to be fundamentally corrupt and sacrilegious. Luther’s act of “protest” gave its name to the form of Christianity that developed in his wake: Protestantism. It also reasserted the sanctity of the individual in matters of faith. Arguing that contrition before Christ could not be bought and adjudicated by an institution, instead coming from within, Luther asserted that everyone is responsible for their own repentance on an individual basis. (We might expect this move to be attractive to Dürer, and research suggests he was politically sympathetic to Luther’s ideas, even wishing to draw Luther at one point, but it seems that he remained loyal to the Catholic church regardless.) Though useful when dealing with issues of corruption, this central Protestant sentiment was diluted and spread amongst the lower classes as well, providing the foundation for capitalist voluntarism and further allowing institutions of all forms, like employers, to relinquish responsibility for their workers.

[18] Berger, Portraits, 58.

[19] Catherine Milner, “Red-blooded Caravaggio killed love rival in bungled castration attempt”, The Telegraph, 2 June 2002.

[20] Caravaggio’s true cause of death has never been confirmed. Some believe he was assassinated by relatives of Tommasoni, or one of the Knights of Malta. This was certainly the dominant rumour at the time. But the painter did not die right away, and so it is thought that Caravaggio succumbed to sepsis after a wound sustained in a brawl became infected. Others believe his death was due to some other disease, like malaria or brucellosis. Recent archaeological investigations, following the examination of human remains believed to be Caravaggio’s, suggest that both his death and his erratic behaviour in life could be explained by lead poisoning, caused by lead salts commonly used in paint at that time. No matter how Caravaggio met his end, he died on the run, never having been forgiven for his crimes.

[21] Leon Battista Alberti, On Painting, trans. Rocco Sinisgalli. Cambridge: Cambridge University Press, 2011, 46.

[22] Immanuel Kant, Critique of Pure Reason, trans. Werner S. Pluhar. Indianapolis and Cambridge: Hackett Publishing Company, 1996, 288-289.

[23] Ibid., 72-73.

[24] Susanna Berger, “Narcissus to Narcosis”, Art History, 43:3. London: Association for Art History, 2020.

[25] Berger, Portraits, 88.

[26] Ibid., 88-89.

Learning and Trauma:
The Sharp Object of Ideology

In a rare moment of insecurity, I deleted this post from last week in order to revise it. It felt undercooked and needed to simmer a bit more. Here it is again.

I had an email recently from someone asking about Fisher’s various uses of the concept of “ideology” throughout his works.

In Capitalist Realism, we have this understanding of ideology that is ostensibly Lacanian / Žižekian — after the end of history, once capitalism no longer has any real ideological opponents, ideology itself seems to disappear. With nothing to compare capitalism to, our critical faculties go blunt as we fail to properly interrogate our current system’s shortcomings. Soon enough, capitalist realism isn’t an ideological position but rather the absence of ideology itself. Fisher writes: “The role of capitalist ideology is not to make an explicit case for something in the way that propaganda does, but to conceal the fact that the operations of capital do not depend on any sort of subjectively assumed belief.” We’re post-ideological, or so goes the now familiar fallacy that Žižek skewers, as if the fall of the Soviet Union was to ideology as the Obama administration was to race consciousness.

But then, in the Postcapitalist Desire lectures, Fisher talks about Lukács and reification. This isn’t ideology disappearing but ideology making its presence known as a kind of solidified position. It’s similar — ideology becomes “common sense” or a “general rule”, making the case for itself as the natural order of things. It still doesn’t announce itself as ideology, of course. But it will assert itself through concepts like freedom, or through a kind of moral encouragement, sending you towards all your hopes and dreams. It attaches itself to general principles parasitically in order to establish itself as a kind of symbiote. But why is that relevant to us now? Why does Fisher go back to Lukács? Post-ideological capitalism doesn’t have to announce that “there is no alternative”. There literally isn’t any.

The emailed question, then, was: how are we supposed to reconcile the two conceptions of ideology when thinking about Fisher’s overall trajectory? I sat on this question for about a week, but on reflection, I don’t think it’s too hard. To borrow from Fisher’s own terminology, at least in The Weird and the Eerie, perhaps we can say that Lukács sees ideology as a failure of absence whilst Žižek sees ideology as a failure of presence.

This movement from Žižek to Lukács mirrors our own trajectory over the last decade or so. The Fisher of 2009 uses the Žižekian critique of post-ideology because that is what’s required. Obama has supposedly just ushered in a post-race society, and the financial crash has shown that we don’t actually have any political imagination left to generate alternatives through ideological friction. But in 2016, when Fisher is giving his final lectures, we have far greater ideological tension, with an ascendant far-right and an emboldened far-left.

Things are only more explicit five years on. Ideology shows its bare face, and so we see not the absence of ideology but its undeniable presence. We witness, in real time, that eerie transition from one to the other. We have ridden the wave of that phase shift over the last decade — maybe without realising it, in some ways. We’ve gone from a popular abstention from politics to the popularisation of standpoints more broadly. Though often a depressing time to be alive, we have seen increased solidarity with marginal communities. We’ve gone from Jeremy Corbyn being denounced as a terrorist for simply not slotting into the status quo to the normalisation of criticism of Israel and neoliberalism all around the world. We’ve clawed ideology back from the void, in a way, and moved from post-ideology to a new kind of explicitly ideological landscape. Lukács is more appropriate to this world than the Žižek of 2009. But that’s not to say that Lukács and Žižek are incompatible…

Whilst thinking about all this, I came across this somewhat recent essay from Benjamin Noys, which I found really interesting and resonant with this discussion. First of all, he talks about how Capitalist Realism is, at heart, a book for students — and Mark, in general, was a writer for students. Not just post-16 HE teenagers, as an explicit demographic, but all of us as students. Mark was an educator, which, for him, is the same as a consciousness-raiser, which is the same as a sort of ideological diagnostician.

Noys then talks about Fisher’s view of capitalist ideology more explicitly, relating it to his personal writings on depression and our collective mental health crises. Crises precipitate change because crises denaturalise systems in revolt, he argues, and this is true at the level of the individual and society more generally.

Noys writes:

In terms of mental health, the breakdown of capitalist realism is not only a social breakdown, but also a psychic breakdown that condenses the forms and processes of the continual series of breakdowns and crises that compose capitalism. While “Capitalist realism insists on treating mental health as if it were a natural fact, like weather (but, then again, weather is no longer a natural fact so much as a political-economic effect)”, the effect of crisis is to further estrange and de-naturalize capitalism, mental health, and, of course, the weather. Overlapping forms of breakdown strike at the very heart of the usual ideological mechanism, central to the analysis of Roland Barthes in Mythologies, of treating what is cultural as natural. Now, with the widespread recognition and reality of climate catastrophe, even nature is no longer natural.

Consider, for example, how people who suffer from PTSD often experience feelings of “specialness” — not arrogance but an alienating uniqueness, as if they are not like other people: they’re outcasts, no one understands them, and it can get to the point where sufferers feel like they have a nonhuman subjectivity. Depersonalisation is precisely a breakdown where you no longer feel “natural”. Have we not experienced a kind of political PTSD in recent years, where traumas of all kinds — from pandemics and financial breakdowns to cultural crises and bizarre elections — have led to the very denaturalisation of the system at large? That seems to be Noys’ argument.

This kind of experience can make people feel really hopeless and listless, but there are ways that we can learn to channel our experiences into positive change. We can get better. But it can be a really daunting and even re-traumatising process. That’s something I wanted to explore in Egress — how to continue down the path of “education”, both academic and political, when that education is ruptured by a trauma. The response isn’t to divert off into a kind of mandated social therapy. This is what education itself is for — consciousness raising, libidinal engineering. The best kind of education is often the very rupturing of education itself, and the biggest challenge is persisting with this kind of process, even when it gets really tough.

This is the same argument I’ve been trying to make for years, against “acid communism” as this kind of reheated hippie positivity. The breakdown of capitalist realism isn’t going to be a pleasurable experience by default. It can feel good to have the wool pulled from your eyes and watch reality warp as a result, but the fetishisation of psychedelic experience ignores the fact that it is not for everyone. It undermines the real movement, ultimately, by reducing it to a singular sort of aesthetic experience rather than a psychedelic unveiling to be experienced by all.

It’s the problem of Plato’s cave, in many ways. Some people are attracted to the bright colours and flashing lights, but others are more comfortable in the dark. That is the problem of our present moment and of political education more generally. In a polarised society, we see ideology like a colour phasing in and out of space. It’s a spectre, struggling to materialise — beautiful to some, terrifying to others. Convincing people to deconstruct their ideologies, those “naturalised” perspectives on a familiar reality, is a very difficult task to sustain. Indeed, it can be a traumatic experience. If we don’t retain an awareness of that, we might as well say goodbye to long-term goals.

K. Daniel Cho has a really fascinating book on this called Psychopedagogy. One of the central obstacles in learning and educating oneself about history, capitalism and our place within both is that the reality revealed can be really unpleasant. We’ve seen what happens when that process gets to work. We see bizarre attacks made by the misinformed against so-called “critical race theory”, arguing it’s nothing but a guilt machine for white people, and guilt gets us nowhere. But where does that guilt come from? If you feel guilty, you might find your education to be traumatic. That not only puts “students” off learning, it also makes the burden of being an educator a little too much to bear. These are common talking points today, particularly on subjects like race. But Cho’s response to this is entirely in line, I think, with Fisher’s weird and eerie approach to education, which is not afraid to disparage or critique or disturb on its Platonic quest towards truth on the outside of an ideologically-instantiated “common sense”. Cho writes:

If critical pedagogy wants students to become more aware of the problems of racism, sexism, classism, and other forms of oppression, then it must allow for the repetition of those relations within the controlled space of the classroom. As these forms of oppression are rooted in authoritarian relations, repeating them in the classroom will inevitably involve transferring authoritarian positions — the Patriarch, the Bourgeoisie, the Colonizer, the Bigot — onto the person of the teacher insofar as the teacher occupies the position of the “subject supposed to know.” Acting as the recipient of the patient’s various transferences is not a comfortable task, which is why Freud describes the transference as the most arduous of all the analyst’s responsibilities. But it is the best way to learn the traumatic knowledge of the unconscious. In this way, taking on this transference can be seen as an ethical decision, as Lacan says the “status of the unconscious . . . is ethical”. One must make an ethical decision to become the subject of the transference in order to achieve the ends of social justice. Critical pedagogy must follow Freud in saying: “Whatever it is, I must go there”.

Fisher’s intuitive understanding of this is what made him such an excellent teacher. I always found him to be intimidatingly approachable in this regard, because as much as he was, in many of our minds, a guru and a deeply intelligent man, he was open about his own insecurities and the gaps in his knowledge. He was always, in the best way, a kind of student-teacher, shifting from the “subject supposed to know” to someone just as affected by the world as the rest of us, and it is the minor position of the student that allows one to be constantly open to outsides and new perspectives. Cho again:

Confronted with contradictions, the social investigator should make no attempt to rationalize them away and should instead conceive of them as integral features of the system itself. The negative space left by the refusal to assimilate one’s self to the system (i.e., identify with it) opens up the possibility for one to become the subject of traumatic knowledge. We might call this void the psychoanalytic or Lacanian subject or the proletariat subject with equal accuracy. But, perhaps, most apropos would be to call it simply this: the student.

When reactionaries grow concerned about the tendency of university students to radically change their political positions, or simply gain a political consciousness, this isn’t because they’re all overrun with communists looking to corrupt the youth. It’s because education — a proper education — always illuminates ideology. To become active within the world, aware of its structures and its ideological construction, is always to be a student. (This is surely part of the reason why neoliberalism tries to separate and create antagonism between students and workers, despite the economic and philosophical overlap between the two.)

Noys makes a similar point. He interrogates the friction in Fisher’s writing, which is at once the product of an educator’s desire to educate and a student’s desire to educate themselves. (I think this is true of all blogged writings, personally — the best ones, at least. Blogging is both an attempt to teach oneself how to say what one thinks and an attempt to articulate those thoughts in order to inform others around you — it is both personal and social.) For Fisher, the role of his books, blogs and lectures alike is to initiate “a process of the education of desire to both free us from capitalist realism and to develop a non-capitalist life.” Noys writes: “I am reminded of Fredric Jameson’s contention that our problem ‘lies in trying to figure out what we really want in the first place.'” (Fisher’s version of that same question is: “Do we want what we say we want?”) Noys continues:

Utopias are negative lessons, finally, that teach us the limits of our imagination in the face of the addictive culture of capitalism. It is only, Jameson insists, once the utopia has impoverished us, undertaken an act of “world reduction,” that we can undertake a “desiring to desire, a learning to desire, the invention of the desire called Utopia in the first place.”

This is the kind of ideological process we have undergone over the last decade or so. We have transitioned out of a post-ideological society, in which capitalism had won, becoming near-utopian in all the things it provided and the freedoms it facilitated. But towards the end of the 2000s, it felt — for my generation at least — like this utopia we were told about all the time had truly impoverished us and made us lacking. It was then, with perfect timing, that Fisher helped a lot of us see the light, and a new process of education began. But again, for me at least, having followed the k-punk blog, read Fisher’s books, and then applied to Goldsmiths, it was eventually clear that he was not some messiah but a student-teacher who led humbly by example.

Noys writes:

It is also important to consider Marx’s third thesis on Feuerbach, which suggests “it is essential to educate the educator,” and that: “The coincidence of the changing of circumstances and of human activity or self-changing can be conceived and rationally understood only as revolutionary practice.” If Fisher is writing largely outside of this context, as are we all, then we still have to consider this problem of education and self-education. The various attempts made at educational forms “outside” neo-liberal capitalist forms are often equivocal, even reproducing those forms in the dream of the “private”. Perhaps the closest we have to such experiments arise in the “teach-ins” or “outs” that have arisen in various struggles against privatizing education. These, however, remain temporary and are limited in addressing questions of self-reproduction in the context outside the wage. There is no simple solution to the problem and the difficulty of even sketching such forms speaks to our moment.

It is this project of education that remains before us and is left implied as the true substance of which “capitalist realism” is the truncated and mutilated form. To make good on this project we would need to articulate the weird “outside” with the eerie spaces of “absence,” of the fractures and dialectical tensions of capitalism with its empty appearance. This is the difficult bridge to be forged that is marked in the joining and divide of The Weird and the Eerie. Whether the acid or psychedelic would have been the sufficient mediator remains a question, and one which any continuation of Fisher’s project would have to suggest. I would argue, however, that any such project of education needs to abandon the conceptualization of inside and outside for a more dialectical grasping of the “interior” limits of capitalism and the articulation of those “limits” and their possibilities with that “interior.” This is where Fisher’s project requires urgent re-thinking.

I’d like to think his Postcapitalist Desire lectures clarify this last point, at least in part.

Blogging as Infinite Conversation:
Lately I’ve Been Feeling Like Arthur Rimbaud


Mark Fisher opens his 2014 book Ghosts of My Life with a line from Drake’s “Tuscan Leather”, the opening track from his 2013 album Nothing Was the Same.

“Lately I’ve been feelin’ like Guy Pearce in Memento.”

The track itself is an atemporal collage, as Drake heads back to the future. Heavily treated vocals gather together in reverse, as the beat staggers forwards for six minutes. That’s an eternity as far as rap albums go. This is no introductory skit or three-minute tone-setter but a six-minute song that doesn’t bolt out of the gate but slithers, side-winding into earshot.

Much of what is mentioned in the song’s lyrics reappears over the course of the rest of the album. Track titles are spoken as lines of verse. But there are also nods to drama from Drake’s personal life, and references to past album sales and industry records. It even acknowledges itself as an intro. But it doesn’t feel like one. It feels like a closer; like a return. It’s a “previously on” introduction to a brand new season, just in case you missed what happened last time. As a result, that sample-reversing beat starts to feel like a mutant coda, not an opening salvo. All the while, the track builds and builds, with the beat gathering momentum, or at least taking up more space. After each verse, it seems more fleshed out, becoming thicker and more present, but still, the backwards main ingredient swerves around the drum pattern, which is propulsive and undeniably forward-facing. The two temporal directions box, ducking and diving, mirroring each other. There’s this strange sense that, although this is the intro, it is one half of a rhyming palindrome.

“How much time is this nigga spendin’ on the intro?
Lately I’ve been feelin’ like Guy Pearce in Memento.”

Does this sound familiar? Maybe not yet. It is as if Drake knows the importance of an intro, of a first line. Forget the singles and the hooks. It’s those first few seconds of the album that are going to stick in your brain, no matter how amorphous they are. He knows it’s that first reversed sample that will act like a Proustian trigger for now and from now on. This is a future classic, Drake seems to say, and you’re gonna long for that moment when you first heard this, so let’s savour it for a while. Six minutes, to be exact.

“How much time is this nigga spendin’ on the intro?”

How much time is it gonna take for you to never forget this moment? The song takes its name from a perfume by Tom Ford. Smell is the sense that binds itself most firmly to memory, of course, and this is one decadent bottle of future nostalgia. If the coding of longevity into new content is now a music industry staple, Drake pioneered it for the streaming era. Like turning up to a job interview in your best threads when there are so many candidates to choose from, it’s less about a good first impression and more about ensuring you’re remembered. Drake knows that. He’s maybe even a little insecure about it. He seems to mourn the false construction of an event. So many of his songs are hedonistic laments. Yeah, this party might be “unforgettable”, but I’m barely present enough to enjoy it and commit it to memory. “Party hauntology”, Fisher called it. So many of music’s name-checked substances help us to forget. This is an album that wants to remember and be remembered.

But perhaps “Tuscan Leather” is also a comment on the process of writing itself? In what way does Drake feel like Guy Pearce, exactly? Does he suffer from short-term memory loss, or is it more that he recognises how the very process of writing an album / a book / a life is about inscribing fragments, clues, waypoints for yourself, as if there’s a future self trying to be born, leaving breadcrumbs for you, secret messages that you have to put together, just as Drake is doing for the listener over the course of his six-minute preamble? We might argue that’s how so many of the biggest names in music make albums these days. Look at Kanye, seizing every moment, picking up tracks and samples and people, all of whom he brings together like raw materials. Every encounter — sonic or otherwise — becomes a potential piece of the puzzle. Things aren’t planned from the start — this is a process, unfolding in real time, and you’re about to hear the outcome. Maybe that’s why “Tuscan Leather” feels like an outro in reverse. Though introducing the project to the listener, it was probably the last thing recorded, just as the introduction of a book is so often the last thing written. It’s a survey, letting the newcomer know what to expect, as you sign off and let it go.

Jean-François Lyotard once wrote that producing “a book means only one thing: that you’re fed up with this approach, this horizon, this tone, these readings.” (Fisher certainly seemed done with hauntology after the publication of Ghosts.) A book is a culmination of fragmentary thoughts, undertaken in search of some unknown thing. “There was a horizon sketched, uncertain.” Sometimes, those fragments see the light of day — they are inchoate attempts to prefigure something that has not yet fully emerged. “Nevertheless, you collect all of those attempts and you publish them as a book.” You write a book, you make an album, you direct a film — “you do it to get it over with.”

Then what?

The final section of Maurice Blanchot’s The Infinite Conversation is titled “The Absence of the Book”. It is an investigation into “the neutral, the fragmentary”. He begins with an end — Arthur Rimbaud’s “final” work.

Having scandalised much of the literary world as an anarchistic poet who broke all the rules, and much more besides, Rimbaud famously had an affair with Paul Verlaine. Verlaine was a drunk and an abuser, but a poet that Rimbaud admired, and they travelled to London together to immerse themselves in culture, in each other, and in their shared compulsion to write. But the two writers were seemingly only attached to each other because their spirals of destruction exerted a similar gravitational pull. Like two black holes caught in each other’s orbit, the affair ended with Verlaine taking pot shots at Rimbaud with his revolver. Following Verlaine’s arrest, they went their separate ways forever and, to the shock of the literary world, Rimbaud, the archetypical enfant terrible, never wrote another thing.

Daniel Mendelsohn, writing in The New Yorker, argues that

This sordid emotional cataclysm surely goes some way toward explaining Rimbaud’s desire for a new life: it’s hard not to feel that, perhaps for the first time, he realized that deranging his and other people’s senses could have serious and irreversible consequences.

But for Blanchot, this is not a retreat but an owning up to the life one has lived, or perhaps a way to at least forget that it has ended — “for one who wishes to bury his memory and his gifts, it is still literature that offers itself as ground and as forgetting.” Writers write their own stories and can rewrite their own histories. Inscriptions, poems, scars — they’re not memories but signifiers preloaded. How does an injury and a trauma become a battle scar? You renarrate it. Guy Pearce is trapped in the templexity that results. He awakes each day, remembering nothing, covered in inscriptions, which he sets about deciphering. He feels like he is at the beginning, but the end is already a foregone conclusion. He’s already written the book. Now he’s simply rereading what he’s written.

Though we like to think that Rimbaud never wrote another thing, he was more like Guy Pearce in Memento than we care to admit. Verlaine was his Joe Pantoliano. Perhaps he saw, in Verlaine, an archetype — the great Symbolist was a trigger-happy poet, a manipulator, an opportunist. Though he implored his fellow writers to “Keep away from the murderous Sharp Saying, Cruel Wit, and Impure Laugh”, he loaded his gun with worse things than that. Rimbaud, in response, turned Verlaine’s bullet-poetry on himself. (Camus described his abstention from poetry as a kind of “spiritual suicide”.)

But he did not die, he simply stopped becoming. That is not to say his work was forsaken or he set about renouncing his past life. He simply never wrote anything new. As Blanchot notes, even when Rimbaud was not writing, he took an interest in what he had written; “going back over the paths he has traced, he keeps them open as a possibility of communication with his friends.” After the publication of A Season in Hell, his “final” work, Rimbaud sought to publish his Illuminations — a compilation of sorts, written prior to his fallout with Verlaine, but reconsidered and reworked for years afterwards. These were his inscriptions, written in the midst of a trauma and later deciphered to find the essence of a life lived within. Some critics dismiss them as a failure, but in his own work he found the shadows of a mystery he wanted most to solve. He tried to excavate this errant and anarchic self, a mature voice trying to distill the fire of adolescence without snuffing it out. Did he succeed? Even after the Illuminations were posthumously published, they were seen as incomplete. Soon, the mythic temporality of the poems themselves was called into question. How can we say they were written “before” A Season in Hell if the work was not done on them until long afterwards (and, even then, was arguably never finished)? Blanchot argues that,

Even if written afterward, the prose poems belong to a time that is “anterior,” the time particular to art that the one who writes would have done with: “No more words” — a prophetic being, seeking by every means a future and seeking it on the basis of the end already come.

Though he stopped writing, Rimbaud was newly immersed in the art of literature. Not the writing of experience and poetry as a gesture on the cusp of the present itself, but the organisation of the past as a future yet to come. This was the shape of poetry after the end; beyond the fin de siècle.

Blogs are not written with books in mind. If books were in mind, we would not blog. But there comes a time when the material collated, accumulated, stored, suggests to oneself that there are threads to be entwined and rope to be made. The question is, when do you stop blogging? When do you say no more words and set about the thankless and withering task of drawing a line under the past in order to produce a future already written? When does one announce that one is no longer speaking and thinking and is instead becoming the true master of words already said?

For Blanchot, “the affirmation of the end is anticipatory and prematurely announces a new hour”. The book is just such an announcement, but never on time. It is too much in time to ever be on it. It is the “speech of the turning where, in a vertiginous manner, time turns”. By comparison to the finality of the book, poetry is quantum, dead and alive, zombified, reified but not inert, the corpse of speech lying in wait for a mouth that might reanimate it, beckon it forwards. A poetry reading is a séance. “I was creating … the ghosts of future nocturnal luxury”, Rimbaud writes. Was he a genius? Only in stopping. That way the spirit, the genie, was not exorcised.

But books, too, are never over. That is why no one should ever write too many of them. “One book overlays another, one life another — a palimpsest where what is below and what is above change according to the measure taken, each in turn constituting what is still the unique original.” All books revolve around a centre, for Blanchot — “the needle, the point of secret pain that … harries with haste without pause.” Books are interruptions in writings; the recorded minutes of an infinite conversation. Poetry is writing interrupted before it can ever truly begin. Poetry is an intro, always arrested, before the laborious process of literature takes over. It is pure essence, bottled; a fine perfume, condensing on glass.

What are blogs? Nothing so romantic and ethereal, but they still capture that thrust, that life force from which writing emerges and which is hard to stop. Maybe if poetry is perfume, a blog is a sneeze.

What was it Burroughs said about the word-virus?

Memeing Politics

Yesterday’s post was written, somewhat tangentially, with the cover for Mike Watson’s The Memeing of Mark Fisher in mind. I’d already been thinking about Deleuze’s approach to history and its relationship to present appraisals of Mark Fisher and the Ccru the day before the Zer0 tweet went live. The book cover and its literal dramatisation of a weird Oedipus complex, with a kid whose Dad is Adorno looking at Mark Fisher memes, dovetailed with the sentiment I was already exploring. Beyond that, it wasn’t really a direct comment on it. But there was some debate about it on Twitter afterwards…

Tweeted out by the Zer0 Books Twitter account yesterday, the cover seemed to be everywhere by the evening, and for many people it crossed a line. I was particularly surprised that many people affiliated with Repeater Books, who would usually keep their criticisms private (in my experience), suddenly began tweeting about it disparagingly. Always the gobshite, I didn’t really think twice about adding my own two cents on Twitter…

Everyone talking about it negatively apparently had egg on their face, however, because the cover is ironic and didn’t your mum ever tell you not to judge a book by its cover? But the problem is perhaps that the cover is indicative of Zer0’s general output of phoned-in culture war provocations, filtered through their Frankfurt daddies. It unfortunately epitomised everything that a lot of people really hate about the present version of Zer0 Books.

Later that evening, someone shared the book online. I had a quick read-through and, thankfully, it is far from as provocative as the cover itself. It is tempered and thoughtful and engages with different meme trends, wondering how they express certain structures of feeling and relate to different philosophical concepts and movements. Though I still think the previous post is applicable to how it anachronistically treats its historical antecedents, the book hardly seems like the disaster the cover suggests it is.

So why choose that cover? Why pick something that is going to be such an obstacle for many people to get past? Isn’t that Fisher’s problem with aestheticised politics in the first place? Zer0 obviously runs on the belief that all press is good press these days, and so some of their fans saw the cover as doing its job, but that’s hardly applicable to Fisher’s own interest in online culture and parody. Why embody the absolute worst of what you’re intending to talk about in order to entice people into your argument? Have we learned nothing from accelerationism?

The go-to example for memetic politics I always think of is the bootleg Jeremy Corbyn Nike tick t-shirt from the 2017 UK general election. That tongue-in-cheek combination of designer clothing and socialist politics was exactly what Fisher meant by “designer communism”. It hijacked an already existing symbol, synonymous with desire and a certain kind of streetwear luxury, and somehow made an old socialist like Corbyn sexy by association. The lesson learned was a simple one — if you can’t sell a t-shirt, you’re not going to be able to sell the revolution. That’s the counter-intuitive provocation of Fisher’s postcapitalist desire.

Zer0’s various attempts to go viral in a similar way falter. Their intentions are suspect. Instead of grassroots organising and political consciousness, it’s all culture war bullshit and debate bro strategies. And because it doesn’t really have a material basis or a popular culture to attach itself to (beyond the one it attempts to create for itself), it always looks self-serving.

That sums up my problem with Zer0 Books and its various attempts to sell books to a market of memers more generally. Watson distances himself from this (unconvincingly), but that’s alright. For the sake of not judging the book by its cover, perhaps it is better to consider the publisher-wide problem people seem to think the book cover is somehow indicative of.

For all the attention Zer0’s various authors give to internet culture, memes and the political potential of the right aesthetic messaging, imploring the left to learn to meme and engage with contemporary culture… The reality is that most don’t need a lesson. They’re way better at it and smarter about it than Zer0 themselves are. They don’t need meme culture to be translated into Frankfurter talking points. Many are already making their own culture, one that is tapped into the now. Maybe there’s a way of using that to make older works of political philosophy more accessible? But most attempts to turn Frankfurters into memes come across as anachronistic and weird. They’re ugly and didactic, having very little aesthetic merit whatsoever — not even ironically. It feels like meme politics as folk politics.

When I think about Mark Fisher memes — or at least memes he’d appreciate — nothing like a stock image with some fat text on it ever comes to mind, and ironic misunderstandings of his own concepts don’t seem to achieve anything, other than sending new readers down useless labyrinths of poor thinking. If there was a meme he’d like today, I reckon it’d be the one doing the rounds right now during the Euros, combining politics and football, as he liked to do. Every good performance is currently blamed on the England team’s embrace of Marxism. It’s a meme I’ve even seen right-wing pundits make. It’s hyperstitious, recognising the popular interest in football and a general desire the nation has (more or less) for its team to do well, and it ties that to criticism the team has got for taking the knee and infecting politics with “Marxism”. But as a prematch ritual, it looks like the Marxist gesture is working!

As a meme, it’s organic, it plants a humorously fitting seed regarding Marxist determinism for those in the know, but it’s utterly grounded in the present, and helps further normalise the message the team are hoping to send themselves. It might not have a pictorial format with text over image, but it is a joke, part of the fun of which is the way it is being widely shared and popularised. It’s a meme by any measure that uses something like Twitter to respond to an event (both literally and philosophically speaking), spreading a message about material conditions and politics in football.

(If you want a more dynamic and sustained masterclass in memeing yourself into the national conversation, without sacrificing on substance, you can also consider the UK’s Northern Independence Party.)

But whatever this video is above, and whatever that book cover represents, is something else entirely…

(The quote chosen in this video feels deeply ironic too, it must be said: “The less the culture industry has to promise, the less it can offer a meaningful explanation of life, and the emptier is the ideology it disseminates.” Welcome to meme world.)

Not being a fan of terrible meme cultures may make me elitist to some — I’m used to that accusation from members of the deeply cursed Mark Fisher Memes for Hauntological Teens group on Facebook — but the point is surely that aesthetics and cultural production really matter. The memes and the culture war videos and the book cover are misjudged, in much the same way a lot of Extinction Rebellion happenings are misjudged, for example — they irritate their target audience and the people they’re out to convince of their cause. The fact it’s much lower hanging fruit than XR only makes it worse. It stinks of a kind of detached hippiedom, which tunes out to the point it doesn’t realise how out of touch it is.

That was precisely the problem with psychedelic culture that Fisher first denounced. It prided itself on its detachment from the zeitgeist, in a lot of ways. It ignored material conditions and saw tuning out as a virtue. In some respects, it is, but meme tutorials feel like an instance of tuning so far out you can’t convince anyone but the already converted of what you’re talking about. It’s representative of leftist problems rather than a solution. It’s a problem of practice that preaches contemporaneity from within but already feels outdated from without. And that’s a shame, because there’s nothing really wrong with the theories being discussed and applied in themselves. But those theories are being turned into practices that rarely function as intended. So the practices undermine the application of the theory. To have something undermined entirely by its presentation, when presentation is also so much of the wider focus — it’s bewildering.

As @snowdriftmoon argued in a video response: if you make your literal book cover into a joke, don’t be surprised when people assume your work is a joke also.

But there’s also more to it than that. It’s symptomatic of a strange lag that they don’t seem to be aware of. This isn’t cutting-edge cyber-praxis reaching out to zoomers on their own turf; this is meme warfare stuck in the left’s Twitter paroxysm of five years ago. Rhett made this point first and I think it’s a really pertinent one: Zer0 Books “are stuck in 2016’s trenches and they are just refusing to get out. I’m starting to believe that they are the first, real rear-garde of Trump nostalgia.” (Prat made a similar point as well.) It is a cultural approach that feels like it was built in response to an emergent alt right that had just broken into the mainstream by appropriating Pepe. But even if we were still living in that moment, clunky quotes on a stock image backdrop aren’t going to compete with that. It’s aesthetically minded but, ultimately, it’s aesthetically impotent. As such, it’s not memetic in any functional sense. These “memes” don’t spread in any positive sense. They’re always a backdrop to something else — book covers, YouTube videos… They’re captured within the publishing-industrial complex and are rarely seen outside their own context.

The Memeing of Mark Fisher likely doesn’t deserve the disdain and cynicism it has received over the last day or so, but the sheer amount of vitriol its central Mark Fisher meme has received from interested parties surely says something about how those responsible for it are able to navigate the very issues they are concerned about. And what it says isn’t good.

Memeing History

Our pervasive tendency to anachronistically historicize all recent contributions to intellectual discourse, showing how they were prefigured rather than what new observations they bring to the table, is itself a product of capitalist realism.

That there are resonances between ideas, irrespective of the time and place of their emergence, is important for us to consider – not only so that we can appreciate a diversity beyond the Western canon (although that is never a bad thing), but because it prefigures the problems that faced the twentieth century’s Marxist-Hegelian view of history.

Once idealism and materialism were understood not as conflicting theories but as two parts of a wider feedback loop, the idea of the linear development of history came repeatedly under fire. For Gilles Deleuze, history did not unfold neatly one way or another. History – real history – cannot be sorted like the genealogy of a family tree: a repetitious series of pairings unfolding in an evolutionary line. Anyone who has investigated their own genealogy will know this. The more information you add, the more extended family you include, the more your relations spread outwards in an amorphous cloud of names and faces. Our records only go back so far, but there is no final ancestor to which we can ultimately attribute our existence. Our social histories and the history of ideas function in much the same way for Deleuze. To constantly assign predecessors and antecedents, losing track of the particular temperament of the present, is to fall head first into philosophy’s own Oedipus complex. In truth, our canonical sense of intellectual progression is nothing more than a convenient framing device. But this is not to say that history isn’t evolutionary, rather that we require a new way of understanding how history unfolds.

Deleuze argues that history is rhizomatic, with a central point of origin impossible to ascertain. Though we can follow certain lines through history, they do not simply pass “from one point to another”, he writes, but pass “between points, ceaselessly bifurcating and diverging, like one of [Jackson] Pollock’s lines.” To trace the line of development of a certain idea, then, is not to find a linear development but a multiplicity, capable of existing in multiple times and places at once, and referred to by many different names.

“Multiplicities are made up of becomings without history, of individuation without subject (the way in which a river, a climate, an event, a day, an hour of the day, is individualized)”, Deleuze continues. Channelling Heraclitus, for whom one cannot step into the same river twice, Deleuze argues that this very idea — the concept of becoming — is immediately undone once we individualise the river in question. The River Thames, for instance, remains the River Thames whether I paddle along its silted shores on a cold Thursday in January or a hot Monday in June. In naming everything individually, though life assumes a certain order as a result, the flowing multiplicity of the river and its relations is buried under certain signifiers. Its true nature is rendered as an abstraction, and the abstraction is discarded as useless and imprecise. But what is discarded is reality in all of its psychedelic complexity, and we do ourselves a disservice when we reject complexity out of hand.

To note the reductive nature of categorisation – of individualising the River Thames as the River Thames – is not to genericise the river as such, however. For Deleuze, “the abstract does not explain, but must itself be explained”. It forces us to offer up a more comprehensive explanation of the river’s becoming, its changing states, the ways it is impacted by the things around it, without relying on the one-dimensional shorthand of proper nouns and possessive understandings. Drawing on Whitehead, and echoing his often misused comment about footnotes to Plato, Deleuze insists that the aim of philosophy “is not to rediscover the eternal or the universal, but to find the conditions under which something new is produced.” When we historicise and point to this prefigurement of that, we focus entirely on what has been rather than on what has newly been created. And so, to stick with our example, by unpacking the individualised River Thames, which has cut through the heart of London for eternity, we suddenly unlock a perspective of the river underneath and the different things it has meant to different people – not the universal concept of the Thames but the plurality of a river’s history.

To take another example, we might consider the Ship of Theseus – one of the oldest thought experiments in Western philosophy. The ancient historian Plutarch penned the first recorded version of the tale, in which he explains how Theseus’s ship has been preserved over so many years. The people of Athens, he writes, “took away the old planks as they decayed, putting in new and stronger timber in their places, insomuch that this ship became a standing example among the philosophers, for the logical question of things that grow; one side holding that the ship remained the same, and the other contending that it was not the same.” If every part on Theseus’s ship is changed over the course of a long and treacherous voyage, is it still the same ship? That is the question, or so we’re told. But Deleuze reveals the fallacy at the heart of this experiment. The point should be that the ship is, of course, still a ship. To debate whether it is still Theseus’s ship, since all the parts of the ship he originally owned have been replaced, covers over the ingenuity of his crew, who have found so many creative solutions to keep Theseus afloat. To ask whether Theseus recognises it possessively as his ship is short-sighted. If anything, the ship is now even more representative of the crew, of the multiplicity of persons who have sailed on board.

This not only describes Deleuze’s approach to history but philosophy itself. In his infamous “Letter to a Harsh Critic”, he explains that he belongs “to a generation … that was more or less bludgeoned to death with the history of philosophy”, which is nothing more than “philosophy’s own version of the Oedipus complex: ‘You can’t seriously consider what you yourself think until you’ve read this and that, and that on this, and this on that.’” (This remains a familiar sentiment today, of course.) Resentful of the overbearing weight of history, used as a straitjacket rather than productively, Deleuze engages with the history of philosophy through “a kind of buggery”, he explains, “taking an author from behind and giving him a child that would be his own offspring, yet monstrous.”

This, in turn, was the Ccru’s relationship to Deleuze and Guattari. But it is a relationship that we struggle to maintain with many of the Ccru’s former associates today. It is, notably, what killed accelerationism too. Accelerationism became a meme, and in the process, lost its motor — a militant insistence on the production of the new. As Vincent Garton wrote on this very topic: “Unleashing ideas — intercepting signals — demands a different approach.” We should know our history and we can work with it to produce new ideas, just as Deleuze did, but historicism quickly becomes a blunt instrument if used incorrectly. As Vince adds: “In the course of the history of ideas, reshaping and novelty have always trumped antiquarian precision.”

It is telling that most “memeings” of contemporary figures forget this. Memes of concepts encase events. They don’t unleash ideas but reify them. They turn a free-floating concept into a flat signifier. When created to service the desires of a new generation of philosophy-curious young readers, they abuse novelty by putting it in service of antiquarian precision (and even then, precision is often lacking). We and they deserve better…

Can you tell this is a subtweet that got out of control?

Our Zany Ministers:
On Matt Hancock and Boris Johnson,
the Personal and the Political

The UK’s response to the coronavirus pandemic has been mismanaged from day one – not least because “day one” was marked much later than it should have been. It has led many in the media to biopsy Johnson’s character and his suitability for the job, just as they had done when he was foreign secretary and, prior to that, mayor of London. What they found came as no surprise. As with Trump during his presidency, the media loves to boil Johnson’s character flaws down to his narcissism. Columnist John Crace, writing for the Guardian last year, went so far as to describe Johnson as a “narcissists’ narcissist”, because he thinks he can do whatever he pleases both at home and in government. Crace’s colleague Nick Cohen used his own column in the same newspaper to report that even Johnson’s fellow Conservatives talk about him “with a venom few socialists can match”, describing him as “a pathetically insecure narcissist who turns on you if you don’t feed his craving for applause.” More articles followed suit on other news websites and political blogs. Collate them all and a notable pattern emerges – armchair diagnoses of narcissism are an acutely liberal pastime.

Though it is easy to be cynical about the rhetorical habits of liberal pundits, this is not to deny the veracity of their observations – at least to an extent. Johnson certainly has a maladjusted and overinflated ego, but he is hardly the sole narcissist in government or even in the media. As the pandemic has entered its second year, more and more information regarding the government’s misconduct throughout the early stages of the pandemic has come to light, just as more and more journalists have been accused of a dangerous sycophancy in facilitating their political games. It is now the turn of the political and media classes to be subject to accusations of “playing politics” – that is, not simply doing their jobs as politicians and journalists, serving the general public, but making political and/or journalistic decisions based on what best serves their own interests.

This self-interest has frequently made headlines, particularly recently, when Keir Starmer sought to question Tory MPs’ personal conduct and the motives behind certain governmental decisions, highlighting them as evidence of “the return of Tory sleaze” – a catchphrase that was popular for about a week but ultimately failed to “cut through” to the general public.

Starmer’s Labour Party made a great deal of fuss about messaging that could “cut through” the noise and stick in the minds of the public in this way, seemingly oblivious to the media’s overall bias in favour of establishment interests. In truth, contemporary liberals no doubt feel like they are caught between a rock and a hard place. They are reliant on the press whilst being aware that the press has no interest in their success. Rather than challenge this status quo, most politicians attempt to half-heartedly appease the media, mirroring its hostile lack of political imagination. But the Labour Party’s attempts to adapt to a hostile media have been blatant and have only damaged its ratings. As a result, no matter how incompetent Johnson was made to look, Starmer slumped in the polls to levels worse than the supposedly unelectable Jeremy Corbyn. Those much further to the left argued that, yes, whilst Johnson bumbled through life making terribly poor decisions, at least decisions were made. Starmer avoided making any decisions whatsoever. As journalist Moya Lothian-McLean argued, in a now-infamous article entitled “Keir Starmer is a Wet Wipe”, Starmer “does not lead proactively; he reacts, passively.”

Does this not make Starmer a “narcissist” too? Not a reckless and self-aggrandising narcissist like Boris, of course, but a narcissist who lurks at the more depressive end of the spectrum. So concerned is he with his own position, his likeability and, above all, how he is perceived, that Starmer experiences a depleted ego as he walls himself “off against the unrealistic claims of an archaic grandiose self”, as Heinz Kohut writes in his classic text on narcissistic personality disorders, describing how a narcissist often responds to psychoanalysis. The “archaic grandiose self” nicely describes your typical Tory, but Starmer has also walled himself off “against the intense hunger for a powerful external supplier of self-esteem”, which we might argue, in this instance, refers to pollsters and the wider electorate. But for columnists like Crace and Cohen, this makes Starmer’s lack of popularity a good thing, actually – at least for him personally. It means he is devoid of harmful narcissistic personality traits like a desire for success or any political ambition whatsoever.

Facetious jibes aside, we see once again how accusations of narcissism are seldom effective, becoming ever blunter the more frequently they are used. Particularly when thrown around by the media, such armchair diagnoses restrict our understanding of political leaders to their mediated personality traits, distancing us from an opportunity for material – rather than flawed psychological – analysis.

Consider how the American psychologist Mary L. Trump writes about the media’s understanding of her uncle in her best-selling exposé on the then-president and his upbringing. She explains how, throughout Trump’s presidency, she witnessed “countless pundits, armchair psychologists, and journalists [repeatedly] missing the mark, using phrases such as ‘malignant narcissism’ and ‘narcissistic personality disorder’ in an attempt to make sense of Donald’s often bizarre and self-defeating behavior”. The same can be said of the British media’s analyses of Boris Johnson. But the intention here is not to suggest that such labelling is inaccurate. “I have no problem calling Donald a narcissist”, she continues – “he meets all nine criteria as outlined in the Diagnostic and Statistical Manual of Mental Disorders (DSM-5) – but the label gets us only so far.” This is not only because Trump’s observable pathologies are, in her opinion, “so complex and his behaviors so often inexplicable that coming up with an accurate and comprehensive diagnosis would require a full battery of psychological and neuropsychological tests that he’ll never sit for.” More to the point, it is precisely because he is a member of a social elite that such generic pathologizing is useless. It reduces Trump’s – and, by extension, Johnson’s – decision-making and egotism to circumstantial gossip, which utilises psychological nomenclature to sound intelligent but which is ultimately devoid of any actual substance. We need only consider how other psychologists hedge their bets when discussing the psychological make-up of world leaders.

In the Independent, Chantal Gautier writes that, “aS a PsYcHoLoGisT” — emphasis added — “I look at Boris Johnson and worry for Britain.” Gautier explains that she works in the field of “business psychology”, and introduces “trait theory” as a way that business psychologists understand what makes a good leader. It is a theory essentially concerned with subjective characteristics and perceptions of personality. Gautier argues, for instance, that “the key to successful leadership is grounded in integrity.” But what is “integrity” exactly? And how are we supposed to measure it clinically, outside the court of public opinion? It is not long before “trait theory” appears to be focus group fodder rather than a genuine diagnostic tool. In fact, it shares many issues with theories of personality in general. On the one hand, Gautier seems to be steering away from making any wild claims, since diagnosing public figures with psychological disorders in national newspapers — even if those figures are shit — doesn’t look good. On the other hand, I’m unconvinced that this thin veil of professionalism is covering over any real substance.

It is worthy of note that many psychologists today are increasingly unlikely to diagnose patients with personality disorders – be they “narcissistic”, “paranoid”, “schizoid”, “borderline”, “obsessive compulsive”, etc. The mental health charity Mind explains on its website that such disorders are controversial because they are difficult to understand, often generate stigma and, most importantly of all, they don’t take social context into account. “People are complicated. There are many social factors that can affect our capacity to cope, to relate to others and to respond to stress”, the charity explains. These factors include childhood trauma, experiences of poverty and deprivation, as well as experiences of discrimination or abuse. But it is not only socially negative contexts that we need to take into account.

Mary Trump’s appraisal of her uncle explores Trump’s upbringing almost exclusively. Not only does she steer away from personality disorders as a result, she also suggests that Donald (and Boris) can’t really be considered using the diagnostic tools we use to understand other people within everyday society. Because Trump and Johnson have never lived in everyday society. Whilst he was still in office, Mary argued that “we can’t evaluate [Trump’s] day-to-day functioning because he is, in the West Wing, essentially institutionalized.” But this is nothing new. Just as the UK collectively suffers under “the curse of the public schoolboy” (as Douglas Murphy puts it in Nincompoopolis), with our leaders often raised in the privileged enclaves that are private boarding schools, so America suffers under the curse of the dynastic prodigal sons of business magnates. Shielded from real life by extreme wealth, “Donald has been institutionalized for most of his adult life”, Mary argues, and “so there is no way to know how he would thrive, or even survive, on his own in the real world.” A question re-emerges: is this kind of posh zoochosis we call “narcissism” just a way to pointlessly pathologize the otherwise familiar over-confidence of the ruling classes? And in attempting to understand the personality traits of our leaders psychologically, do we not deny ourselves the opportunity to see the personal – and, indeed, the psychological – as political?

When the personal and the political do come into contact in the mainstream media, it is often to highlight their disconnection. Returning to John Crace, for instance: in reference to Johnson’s often poor rhetorical performances in the House of Commons, he quips that, whilst “Boris can dump wives, mistresses, ministers and friends … he just can’t get rid of Keir Starmer.” Though his narcissism might get him what he wants at home, it isn’t necessarily met without resistance in office. Crace argues that, for “the first time in his life, Johnson has come up against an immovable object.” His political life differs significantly from his personal life. But again, the analysis is meaningless, because Starmer hasn’t been able to dump Boris either. Their face-offs at weekly sessions of Prime Minister’s Questions (PMQs) are institutionally orchestrated. They’re not battles to the death where one side can oust the other from political life. Instead, the narcissist’s narcissist has spent the pandemic locked into a protracted stalemate with the liberal’s liberal, and it is telling that most commentators cannot see the reciprocity between the two. Indeed, just as Narcissus himself is captivated by his own image, these two leaders, jousting across the dispatch box in the House of Commons, constitute a narcissistic relation in themselves. They represent two parties tormented by the mirror image of themselves, but rather than transform they embrace their impotence in all its perfect, immovable harmony.

But there is plenty of friction here, as demonstrated by Matt Hancock’s mess of a personal and political life this past week, when it was revealed he was having an affair with a senior aide, Gina Colangelo. Hancock resigned as a result. Although texts from Boris Johnson to Dominic Cummings, calling Hancock “totally fucking useless”, were leaked just a few weeks prior, Johnson feigned disappointment at Hancock’s decision, suggesting that his political conduct and his personal conduct are wholly unrelated and that he should not feel the need to resign over gaffes related to the latter — never mind the thousands of deaths caused by “gaffes” related to the former. Meanwhile, the rest of the country asks questions about corruption: whether Colangelo was given preferential treatment (professionally speaking…), and whether this was another example of a government minister giving contracts to friends and booty calls (just as Johnson had done with Jennifer Arcuri).

If all of this seems like a confusing mess, with no-one entirely sure how to talk about it, perhaps it is because our attempts to connect or disconnect the personal and the political are wholly outdated. That once-ubiquitous phrase, “the personal is political”, started its life as an empowering mantra for raising feminist consciousness in the 1960s and 1970s, connecting personal experience to broader social structures. It allowed a generation of liberated women to drag the hidden politics of domesticity, for example, into the public arena of patriarchy. Since then, however, the phrase has become an albatross around the neck of modern subjectivity. We have realised that, if the personal is political, then the political is also personal. This may seem like an obvious tautology, but for all the attention we give to the impact of personal experience on contemporary politics, we often fail to appreciate how new forms of personal expression open up new strategies for electioneering, which shape how we see our personal lives in turn. This is a dynamic felt increasingly by all, with social media acting as the kernel around which this personal-political feedback loop frantically revolves, but in the media it is typically made visible through the weathervane fortunes of our political class.

Johnson’s bizarre but nonetheless continued success in our contemporary political landscape epitomises this. He is not an outlier whose personal life intrudes upon politics, to be put in his place by sensible liberals who are all politics without personality. The ugly truth is that Johnson’s broad appeal, which inexplicably emerges unscathed from his innumerable gaffes, defines our shift away from the dialectic of the personal-political, which has since been transformed into something altogether one-dimensional: the “social”.

This “social” understanding comes very easily to our political class because, whereas feminists had to fight for their personal experiences to be taken seriously in the political sphere, the political class has never known any different. Your colleagues in government are likely to be part of a wider social circle you have known your entire life. And so, whilst “the personal is political” works as a way for normal people to understand the complex nature of their everyday experiences, it actually works to simplify and obscure the relationships that come naturally to the establishment. This is how Donald Trump, who was a part of establishment social circles for his entire life, could create a false barrier between himself and “career politicians”, by exacerbating an otherwise negligible gap between his personal life and his political life. What’s more, the media and the entertainment industry have been prepping us to respond to this kind of dynamic for decades.

As Jodi Dean writes in her 2010 book Blog Theory:

Radio brought leaders’ voices directly into people’s homes, integrating leaders into their intimate spaces. Broadcast television likewise occupied a domestic space as it addressed its audience as personal members of a nation, perhaps imagined like a family (respected newscaster Walter Cronkite was affectionately referred to as “Uncle Walt”).

Despite the social nature of establishment relations, with any hard division between the personal and the political being an illusion, some of our most beloved television shows have programmed us to see an entertaining gap between the two, embracing the awkward collision between the personal and the political as a loveable and humanising occupational hazard. To demonstrate this, we need only examine Johnson’s trademark “zaniness”, epitomised by his inability to adapt to whatever government role he finds himself in – be that mayor of London, foreign secretary, or prime minister. The purpose here is not simply to better understand Boris Johnson, but to understand the culture of narcissism that keeps electing him to high office.

Rather than prefiguring his inevitable demise, Johnson’s zany mannerisms are arguably his most aesthetically attractive (and quintessentially conservative) qualities. Writing in 2012, long before Trump and Johnson rose to such unfathomable prominence, cultural theorist Sianne Ngai argues that “zaniness” is one of the defining aesthetic categories of our postmodern age. She charts its emergence from the 1950s, following capitalism’s desire for its workers to not only possess certain demonstrable skills but certain demonstrable character traits as well. (Hi again, “trait theory”.) This requirement is intuitively understood today. Our success is not only dependent on how good we are at a job, but also how we present ourselves while doing that job. Beyond our generic constitution as shiny happy people, we should be infinitely adaptable, ready to seize every day and meet every challenge capitalism throws at us head-on. The emergence of this kind of post-war work ethic was perfectly suited to Great Britain’s repressive tendencies – the pop-cultural ubiquity of the Blitz mantra, “Keep calm and carry on”, printed on infinite mounds of tourist tat, reminds Londoners of this daily. However, Ngai notes that, as our cultural understanding of this new capitalist ideal emerged, we began to admire the fool more openly – that is, we began to admire those who, try as they might, cannot conform to this image of capitalism’s ideal subject. Instead, both on film and the emergent medium of television, the work ethic “encouraged by the postwar service economy [made] the very concept of ‘character’ seem comically rigid”, inviting people to laugh “at characters incapable of adjusting to new roles and social situations quickly”.

To demonstrate this, Ngai draws on Lucy Ricardo, Lucille Ball’s character in the classic American sitcom I Love Lucy, which pioneered the genre and dominated US living rooms throughout its original run in the 1950s. In the show, Lucy is a housewife to Ricky Ricardo, a singer and bandleader. Desperate to make it in showbiz like her husband, the show comically dramatizes the impossible demands placed on Lucy as a new woman in a new era. In her attempts to shake off the rigid performativity and expectations of a housewife, she bounces between various service jobs as she chases her dream, climbing to the top of the new working world now open to her, acquiring enough capital to comfortably take her shot at stardom. But as Lucy juggles various versions of herself, taking on various odd jobs, she finds the roles she is required to play are far more demanding and ridiculous than those she was previously used to. As Ngai notes, “each of Lucy’s temporary occupations requires her to put on a costume and act like someone else, as if to suggest a new instability in the postindustrial United States”. In losing the singular performative shackle of the “housewife”, she moves onto spinning various sociocultural and/or occupational plates. But these plates are essentially identical. The skills she needs to make it in showbiz are precisely those she needs to manage her domestic responsibilities, and so she cannot achieve one dream without improving in the role she wishes to leave behind. The result is a comic catch-22 that made I Love Lucy the televisual phenomenon of its generation.

Though seventy years have passed since it first aired, I Love Lucy remains relatable in this regard, and its enduring popularity with American audiences attests to this. But we might also consider how the sitcoms of our present era have further developed this zany archetype, with many examples revolving around the plate-spinning of our political class more specifically. Consider Parks & Recreation, or Armando Iannucci’s trans-Atlantic political sitcoms The Thick of It and Veep. Somewhat predictably, given their American context, both Veep and Parks & Rec follow the I Love Lucy model, revolving around women who are trying to have it all in a “new” world that is reluctant to relinquish all that it promises. But rather than playing with the tension between domesticity and showbiz, these shows more explicitly explore the relationship between the personal and the political.

Veep stars Julia Louis-Dreyfus as Selina Meyer, Vice President of the United States. Well-meaning and passionate, Meyer wants to have a positive impact on the nation more than anything, but her ambitions often come second to the daily bureaucracy of high-level government. Parks & Rec stars Amy Poehler as Leslie Knope, the Deputy Director of a Parks and Recreation Department based in the fictional city of Pawnee, Indiana. She, too, is well-meaning and passionate, and struggles to rise above the myopia of local government bureaucracy. She idolises – both sincerely and to comedic excess – pioneering liberal women like Hillary Clinton, Madeleine Albright, Condoleezza Rice and Nancy Pelosi. As she daydreams of writing nationally significant legislation, she is interrupted by zoning codes and the anti-social behaviour of local adolescents.

Despite these obstacles, both are successful and ambitious women (over)reaching for a dream – the same dream, funnily enough, of being President of the United States. One of them is certainly closer to that goal than the other, but both are quintessentially zany, according to Ngai’s definition, in that they flail their way through the conflicting responsibilities often associated with the personal and the political. Though they have risen high above the small-town diner or the factory line, these zany characters nonetheless remain trapped in this dichotomy’s affective paradox, often failing at one job as they daydream about another, just like Lucy Ricardo. They work tirelessly to maintain themselves as stable and reliable “characters” or “personalities”, as their public-facing or otherwise demeaning jobs demand, but both nonetheless reveal themselves, time and again, through their zaniness, to be all too human.

“Zaniness is the only aesthetic category in our contemporary repertoire explicitly about this politically ambiguous intersection of cultural and occupational performance, acting and service, playing and labouring”, Ngai writes. As Poehler and Louis-Dreyfus demonstrate with aplomb, zaniness also has a “stressed-out, even desperate quality that immediately sets it apart from its more lighthearted comedic cousins, the goofy or silly.” (It’s telling, too, that it remains feminine-coded in almost all instances.) As such, it is an aesthetic category that is perfectly at home in twenty-first century political spaces, as “an aesthetic of action pushed to strenuous and even precarious extremes.” With the stakes of contemporary politics being so high – or, as in Parks & Rec’s local government setting, often hilariously low – the political sitcom is a pressure cooker for the zany archetype. There is surely no job more stressful, strenuous or precarious, and we love to watch as those who “selflessly” answer an apparent call of duty, choosing to serve in public office for the betterment of all, have their unavoidable selves revealed to them as constant companions and trip-hazards. This makes the characters that Louis-Dreyfus and Poehler play both relatable and loveable. They embody the ideal work ethic of late capitalism whilst revealing, with both relief and schadenfreude, that maintaining such a work ethic is humanly impossible. It is precisely their wrestling with the familiar impossibilities of neoliberal expectation that humanises them.

However, in the UK, a very different approach to the zany takes precedence. Contrary to the loveable nature of their American cousins, the cast of The Thick of It are notably denied any humanising dimension. Whereas Knope weathers all manner of public humiliations with a strained smile as she strives to live up to her political ideals, The Thick of It reveals that the personal and the political are much harder to keep apart in contemporary Britain. Indeed, our attempts to do so are partly why our political class appears so grey and dull. But this is not to say that politicians should do more to humanise themselves, revealing more about their personal lives. The pantomime of political discourse in the UK revolves around the fact politicians are damned if they do and damned if they don’t.

The Thick of It embraces this political theatre of cruelty. With a quintessentially dark British humour, there is no relatable respite in Iannucci’s Westminster sitcom. It is, instead, pure schadenfreude. Perhaps this is because Britain is more rigidly divided along class lines. The aspirational trajectory that defines the “American Dream”, which drives the I Love Lucy model of zaniness, is not part of our British sensibility — although so much of our media and social expectations are becoming increasingly Americanised. (When did Love Island become a 90-minute advertisement for cosmetic dentistry and American gob-ceramics?) On the contrary, British citizens seldom rise above their station, or experience class mobility as an alienating trauma if they do. Nevertheless, The Thick of It’s cast are no less zany because of their establishment credentials. As Ngai notes, they still give form to what Herbert Marcuse calls the “euphoria of unhappiness”, even if it is only the viewer who experiences the euphoric part of the equation.

In his highly influential 1964 book One-Dimensional Man, Marcuse argues that a “comfortable, smooth, reasonable, democratic unfreedom prevails in advanced industrial civilization”. This unfreedom represents capitalism’s inability (or, more accurately, reluctance) to provide us with “freedom from want” – what Marcuse calls “the concrete substance of all freedom”. But why is capitalism reluctant to make us truly free from desire? Capitalist society and its globalised trade networks would surely be capable of providing everyone with everything they might possibly need by now. Doesn’t that sound positively utopian? Ours is a world of almost unfathomable abundance. But without want, without lack, we are freed from desire as capitalism’s driving force. As Karl Marx first argued, in allowing us to grasp the dangling carrot of desire, capitalism begins to generate the conditions of its own demise. The carrot must be graspable, therefore, but it must always be immediately replaced with something shinier and more attractive. Caught on this treadmill, our economic system nonetheless faces a productive conundrum of its own making.

For Marcuse, the potentials of grasping the carrot once and for all are hard to ignore. “If the individual were no longer compelled to prove himself on the market, as a free economic subject”, he argues, then freedom of enterprise – the freedom of private businesses to operate for profit – would surely disappear. This is an unambiguously positive turn of events. It would be “one of the greatest achievements of civilisation”, he argues – nothing less than the dawning of a post-work society. The potentials of such a transformation are enormous, releasing “individual energy into a yet uncharted realm of freedom beyond necessity.” What’s more, the “very structure of human existence would be altered; the individual would be liberated from the work world’s imposing upon him alien needs and alien possibilities.” But it is precisely through the imposition of these alien needs and possibilities that capitalism, in advanced industrial societies, retains overall control of its subjects. In generating artificial wants, and at the same time sating those wants itself, the system feigns generosity, all the while implementing an artificial scarcity of choice. It is in this way, Marcuse argues, that capitalism is “totalitarian”. The system may not have a single all-powerful ruler, but it is nonetheless ruled by “a specific system of production and distribution”; a false plurality of newspapers and political parties all parroting the same line: there is no alternative. As a result, a central political figurehead is replaced by an ideological apparatus that “precludes the emergence of an effective opposition against the whole.”

Still, these desires for other worlds and alternatives make themselves known to us, Marcuse argues, through our pervasive discontent. No matter how much we buy or consume, we are never truly satisfied. There is always something more to acquire and achieve. This is not a product of humanity’s innate industriousness. It is instead a sign that we are simply deferring the real problems in the world. We satisfy our individual desires even as the social world around us remains unchanged. Nevertheless, we take pleasure in our stagnant ability to strive.

As Marcuse puts it, the satisfaction of certain alien needs – “to relax, to have fun, to behave and consume in accordance with the advertisements, to love and hate what others love and hate” – only constitutes relief from the capitalist drudgery that we are otherwise required to undertake in order to acquire those means in the first place. This is not to say that relaxation and fun are unworthy goals, but they are false ones, because the means of relaxing and having fun are sold to us by the very system that undermines them. Capitalism, then, is a socioeconomic feedback loop – a system that promises to provide relief from the pain and suffering it causes. Still, there is no denying that capitalist relief is relief nonetheless. But the result, Marcuse writes, “is euphoria in unhappiness.” The satisfaction we experience is not our satisfaction but the satisfaction of the system itself. The freedom we experience is not ours but the guiding hand of a controlling society.

This situation has morphed and twisted itself over the decades, but the overall social structure remains intact. After all, this euphoria is often shared. To “love and hate what others love and hate” is to affirm our social connection to others and our similarities. “The personal is political” turned this sense of camaraderie on its head, as loves and hates were no longer defined by the system but by the people themselves, raising consciousness around their material conditions, contra the ideological projections of the system at large. As a result, the guiding hand was revealed and slapped away, even if only momentarily. But today, our politicians still follow this model and embrace it. They make themselves relatable through their incompetence. As we watch zany characters like Johnson and Trump, we see figures who are struggling through the mire of contemporary society just as we are. This is why Conservatives love a culture war. “Woke” politics is defanged and painted as yet more liberal bureaucracy, yet more pitfalls for the average person to struggle to navigate. Social justice movements call for more freedom — freedom from “toil and fear”, as Marcuse puts it — but neoliberal governments decry the expansion of freedom as the expansion of rules and regulations.

This is the paradox lurking in our understanding of what constitutes “free choice”. Marcuse was already aware of it, of course. He writes:

The criterion for free choice can never be an absolute one, but neither is it entirely relative. Free election of masters does not abolish the masters or the slaves. Free choice among a wide variety of goods and services does not signify freedom if these goods and services sustain social controls over a life of toil and fear — that is, if they sustain alienation. And the spontaneous reproduction of superimposed needs by the individual does not establish autonomy; it only testifies to the efficacy of the controls.

The Conservatives are aware of this too. What is produced is a double-edged sword that works constantly in their favour, precisely because liberals cannot oppose it at its core. On the contrary, they buy into it. Their “biting” satire and criticism only helps to normalise it. Shows like The Thick of It help with this too, further normalising the idea that our politicians are just like us. They get their kicks where they can and eke pleasure out of an unjust world. Their world — their bubble — is obviously shit. Who’d be a politician these days? All the more reason why we shouldn’t slut-shame Matt Hancock for any personal dalliances. (The argument that we shouldn’t project our sexual prudishness onto public figures is a very Twitter-level take, it must be said.) Because who hasn’t had an office romance? Who hasn’t done something they shouldn’t at the Christmas party? Who hasn’t had a bit of a mental breakdown under the weight of a piece-of-shit day job? We’re all navigating this stupid world, and so being a bit more forgiving when our politicians cock up allows us to be more forgiving of ourselves. Better that, of course, than actually change it.

That is what is required. But Starmer’s Labour doesn’t get it. They fail to appreciate the extent to which our present understanding of the personal and the political is not inconvenient for Conservative politicians, despite them feigning ignorance and acting all embarrassed when the two things touch. This state of affairs suits them, precisely because they know, at a societal level, it also suits the electorate.

If I might end on one more example of a lib commentator missing this point completely, consider this essay from Paul Mason, published on the New Statesman website at the start of the 2019 general election. He begins:

The French novelist Édouard Louis once wrote that “for the ruling class, in general, politics is a question of aesthetics: a way of seeing themselves, of seeing the world, of constructing a personality. For us it was life or death.” Nothing better illustrates this than the chaos and self-obsession that has characterised the opening days of the Conservative election campaign.

He’s not wrong, of course. But he fails to consider just how much of a stranglehold aesthetics has on society at large. As a result, his essay boils down to yet another commentator impotently decrying the Conservative government as a tribe of narcissists. Yes, government incompetence does have a very real and horrific impact on our lives — the personal is political — but this framing ignores how the entertainment factor of these fuck-ups only helps keep us in line.

Mason goes on to argue that politics, “for Johnson and the entire clan surrounding him, has become a form of showing off. And like all narcissists, they cannot abide an accurate reflection.” In fact, the strange truth is that they can. Not only do they abide it, they curate it — Boris Johnson especially. His zany incompetence is his primary selling point, against an opposition that is all politics without personality. Though we might despise it in principle, commentators like Mason rarely address the fact that even these damningly accurate reflections are also aesthetically instantiated. Their zany exploits help us feel better about ourselves. Their failures serve a purpose — keeping us drunk on the euphoria of unhappiness, the one thing the working class and the political class apparently share.