They Will Kill Your Library, Too

The mayor and city council of New Orleans have proposed a budget that cuts funding to the public library by 40 percent. Voters will decide whether to approve it on December 5th. If they do, an essential public service and community institution will be gutted. And you can soon expect to see the same thing happening in your own city or town.

The problem with public libraries is that while they are hugely popular, they are also in tension with the prevailing political and economic ideology, which suggests the government should run like a business, cutting costs and measuring the worth of every service by its market price. Libraries offer “useless” knowledge: the joy of intellectual discovery. A McKinsey consultant, looking at a public library, would wonder why it isn’t charging usage fees to its patrons, or why it has many shelves that few people visit, and what its economic value to the city is. To the extent governments think like McKinsey consultants instead of like human beings, libraries will always be under threat, because they seem like a “luxury” rather than an essential good. After all, if government does not provide free housing or free healthcare, why is it providing free knowledge, when one must be housed and alive before all else? (The correct answer to that question is that government should be providing housing and healthcare as well rather than axing library budgets.)

The insidious and frightening thing about the cuts to the New Orleans library, and the reason that people who do not live in New Orleans need to pay close attention, is that those trying to make the cuts are also trying to convince us that they are not doing what they are obviously doing. When austerity comes, it is not branded as austerity, just like mass firings are branded as “restructurings.” It is branded as something good, something efficiency-maximizing, something that no reasonable person could disagree with. LaToya Cantrell, the mayor of New Orleans, is not going around saying that she wants to decimate the public library. She is saying that she wants to expand economic development and early childhood education, and she is denying that what she is proposing would hurt the library in any way. 

I recently received a propaganda email from a PAC called “Action New Orleans” encouraging me to vote “yes” on the mayor’s proposed budget changes. Action New Orleans says that if I vote for the propositions that are on the ballot, I will be saying “yes for local progress” and “address[ing] the problems that we urgently need to solve without raising taxes,” and we will “join other progressive cities around the country in allocating dedicated funding to early childhood care and education.” 

They’re not just using pleasant progressive-sounding language to pitch the austerity budget. They’re also using outright lies. The mayor falsely claimed that her budget was an attempt to “reduce taxes,” when it wasn’t. (Now they pitch it as something that would not raise taxes, but that’s actually a bad thing. We should be raising taxes! When local governments need revenue, the rich need to pay up—and we already give giant unnecessary tax breaks to corporations.) Flyers supporting the cuts “falsely asserted that the Bureau of Governmental Research, a non-partisan think tank, supports the propositions” when the Bureau of Governmental Research actually opposed the plan and urged residents to vote down the proposals. As excellent reporting by Michael Isaac Stein in the nonprofit New Orleans news outlet The Lens has shown, the mayor even illegally used the city’s official Twitter account to push the plan.

It is important to understand the way that nefarious actions by governments are disguised as benign so that people won’t notice what is going on until it is too late. For example, in this case voters are told that the new budget increases funding for “early childhood education,” giving “scholarships to low-income preschoolers.” Who could oppose scholarships for low-income preschoolers? But when we think for a moment about what that means, we realize that preschoolers shouldn’t need “scholarships”: what is actually happening is that a large amount of tax money is being handed out to private schools to take a small number of children. In other words, money that would have gone to the library is being given to private academies so that they will give seats to poor children, which they should simply be required by law to do. Jules Bentley of the Bayou Brief dug into the “early childhood education” proposal and concluded that it seems like a sham: it will help only 100 children (out of thousands in need) and looks much more like “some kind of scheme to hand the public’s money to shady for-profit private companies.”

Propaganda can be incredibly effective in misleading you. In this case, the city had the library itself put out a fact sheet advocating the budget cuts, which flipped reality on its head, suggesting that if the proposal failed the library would have its budget cut, while if the proposal succeeded the library would “continue fulfilling its mission of transforming lives.” Voters, then, will think their vote has the entirely opposite consequence than the one it would actually have, and they will have been told this by the library itself. (The executive director of the library has shamefully teamed up with the mayor to push the cuts, because you can never trust an executive director.)  

You’ll hear endless buzzwords designed to obscure reality. Our mayor and officials have talked of “right-sizing” and the need to “find new efficiencies.” The library budget cuts have been described as “part of a modernization effort, where we’re trying to bring in more technology and use data to really home in on the things that citizens need.” Efficiency and modernization are abstractions, and it’s important to demand specifics about what is actually being proposed in the real world. How does “more technology” change the fact that the library budget is being cut by 40%? Isn’t “homing in” just a euphemism for cutting things? When you drill down and demand real answers, you find that the obvious is true: you can’t cut budgets without making public services worse. The city’s Director of the Office of Youth and Families has promised the library will implement “cost cutting measures… that wouldn’t impact services for the public.” But when she tried to give an example of what this would be, she cited “data on what people are checking out,” showing “about 50 percent of the collection was not actually checked out.” I am sure you know what is being said here: we could eliminate about half the library catalogue, because it isn’t being checked out. Never mind the importance of access to a wide range of knowledge. We could pick the five most popular books and keep those and call it a library, which is exactly what will happen if you let the consultants run your government.

We are in a particularly dangerous time for public services. It should be the case that in the middle of a disaster, government steps up its efforts to keep the public afloat. Instead, as Naomi Klein documents in The Shock Doctrine, in periods of fear and instability, pseudo-“reformers” take advantage of a crisis to jam through measures that, in normal times, the public would be up in arms about. Private for-profit companies are, by their nature, sociopathic in their quest for profits, meaning that they see it as their duty to seize any opportunity to extract money from new sources. The library is a potential cash cow. It is just sitting there. Of course charter school companies and developers should try to mislead the public into diverting its library funding into their own pockets. It is their job to try to lie to you to help themselves, and in a crisis that job becomes much easier. (Likewise, it was the “job” of cigarette companies and fossil fuel companies to mislead us about the social costs of their products. They did this very successfully.) 

I call this “nefarious,” even evil, but it’s important to note that it’s not the product of mere maliciousness. You may wonder why a Democratic mayor would want to hurt the library (especially one who was once far more supportive of it). The answer is that Democratic mayors subscribe to neoliberal ideology, under which hurting libraries is actually good. Bad people do not think they are bad, and that is part of what is so dangerous about them. They probably sincerely believe that they are helping “early childhood education” and “development,” because they live in a world of buzzwords in which these two abstractions are inherent goods, and never bother to interrogate their real-world meanings. After all, “development” just sounds good, doesn’t it? Like “growth.” How could growth be bad? Would you want shrinkage? Of course not.

It is important for all of us to be vigilant about attempts to convince us that bad things are good using euphemisms and buzzwords. The fight to kill libraries will come to your town. It will come because it has to come. The library has money, the library cannot justify itself under neoliberal logic, there are people who see it as a moral duty to root out inefficiency and maximize private profit. You must be prepared to fight to save precious public assets. The death of the library, the privatization of the postal service, cuts to Social Security and Medicare—all of it will happen if the abstract forces of economic logic are allowed to govern us. We can counteract those forces through organizing and fighting, and we must, or else the institutions that others worked so hard to build for us will be taken overnight. It can happen. It has already happened here: New Orleans privatized its school system after Hurricane Katrina (firing its unionized, largely Black teaching staff in the process), a paradigmatic example of the Shock Doctrine at work.

The good news is that this stuff can be stopped. The most important lesson of the infamous Milgram experiment is not just that people will do horrible things to each other if told to do so by an authority. It is also that horrible things can be stopped if just a small fraction of the group points out how horrible they are. In the experiment, people were willing to administer electric shocks to victims if commanded to do so, but if one person refused, their neighbors were also likely to refuse. The people who do bad things have constructed a world of euphemism upon which reality never intrudes, kind of like the criminal bosses who can instruct their subordinates to “take care of problems” and never admit to themselves that what is really happening is murder. It becomes much more difficult to sustain the illusion if someone is there persistently demanding to know what “development” really means or helping people understand how they’re being manipulated. The only way library cuts can pass is by being sold as support for the library, because people don’t want library cuts. If the public understands reality, then, the library can be saved, but unfortunately, with cuts to local journalism, governments and the wealthy are held less and less accountable and can get away with more. When they declare that we are in an “emergency” or there is a “budget hole,” if there is no one to loudly say “Yes, but why can’t you tax the people who have all the money rather than taking away critical services?” then they are more likely to sound persuasive. It is our job to ask that question over and over and refuse to shut up until they admit that they are making a choice to destroy public services rather than do anything that would demand greater sacrifice from the well-off. That’s what the No on 2 campaign has been doing here, as library and other city workers, Friends of the New Orleans Public Library members, and DSA members band together to fight the cuts. Groups like this refuse to accept austerity as inevitable, and should give us confidence that we do not have to passively accept the unacceptable.

“New Orleans – CBD: New Orleans Public Library” by Wally Gobetz via Flickr is licensed under CC BY 2.0.

Min-terview: Brianna Rennix, Leftenant Editor

This min-terview is fresh from the Current Affairs Patreon subscriber newsletter. More on that here!

Brianna Rennix is the leftenant editor of Current Affairs, and also its foremost expert on the topics of immigration, bureaucracies, gender, and… children? Odd, but true! Today, however, they join us to discuss a different subject entirely: knives.

1. So, you’re into knives now. What’s that all about?
I mean, I’ve been “into” knives for a long time, in the sense that I think about murder on a pretty much constant basis, but more recently I’ve become deeply obsessed with the bladesmithing competition show FORGED IN FIRE, where some of the nation’s strangest men compete to turn, like, old car parts into seaxes, so that a panel of judges can then bash the seaxes violently against a giant rock to see if they break.

2. Is it fair to say there is a crisis of knife-awareness in the country today?
I think there is a crisis of awareness of knife-related vocabulary, in that before I started watching this show, I would have thought that sentences like “I need to hog off before I quench my edge” and “I’m struggling to peen this tang” were sexual, but it turns out this is just normal shit you say when making knives.

3. If Joe Biden had a knife policy, what might it look like?
I am not sure, but Joe Biden becoming president is definitely like when a weird blunt knife with a major warp becomes the default competition winner because the other knives all had freak accidents or missed parameters by 1/8 of an inch.

4. Hypothetically, could a knife used to prepare vegetables and whatnot for dinner also be used for crimes, or would a special knife be required, purely hypothetically of course?
Only FORGED IN FIRE edged weapons specialist Doug Marcaida is qualified to tell us whether any given knife will KILL, so if I were hypothetically planning to use a kitchen knife to murder someone I would probably find some excuse to invite Doug Marcaida over for the dinner also and get his opinion then.

5. What is one knife mistake you should never, ever make?
DON’T QUENCH YOUR BLADE WHEN IT’S TOO HOT. Unrelatedly I am furious that in these United States there has been no Forged in Fire product placement in which the judges bellow “TIME TO QUENCH” and then chug a bunch of Gatorade, I strongly feel that this is what the Founding Fathers would have wanted.

Merit, Access, and Swordsmanship

The publishing industry is famously one of the most transparently corrupt and cronyist industries in the United States, regularly giving large advances to authors who lack the ability to plot a novel, to create characters recognizable as persons, or to write more than two consecutive sentences of passable English prose. Entrenched racist and sexist standards consistently demand that women and people of color aim at what publishers call “authenticity” and what people who have given it 10 seconds’ thought call “a set of racist and misogynist constraints on the acceptable range of experiences for women and people of color.” On top of this, the relentless pursuit of profit in a shrinking print market breeds an aesthetic conservatism that makes the entire industry institutionally hostile to formally experimental novels in which typography and the visual relation of words to other words play a role. (One of my favorite children’s novels, Michael Ende’s The Neverending Story, is about a boy reading a book, and it originally printed the boy’s narrative and the book’s narrative in two different colors of type. This edition is now quite difficult to find, and so I am forced to hang on to the copy that I checked out from my middle school library and forgot to return.) The combination of these factors presents a seemingly insurmountable difficulty for women or minority authors who want to publish difficult, formally challenging work.

It’s a small miracle, then, that Helen DeWitt’s 2000 novel The Last Samurai was ever published at all, and its publication by a very small press proved doubly unlucky: in 2003, an unrelated film of the same name starring Tom Cruise buried this remarkable book in search engine results. I think that it’s probably the best Anglophone novel of the 21st century so far (though I have not read all of them), and I am not alone. What I continue to find so beautiful about The Last Samurai is that despite its obvious formal modernism—among other things, the novel does not use quotation marks, forcing attentive reading to distinguish between narrative and dialogue, and in one memorable scene the flow of thought and narrative is disrupted on the page by the increasing volume and text size of a child’s exuberant attempts to count to 100 in ancient Greek—the novel invites its readers to learn how to read it, to acquaint themselves with just enough of what the narrators talk about that the boundaries of what readers could know or might know begin to stretch further out toward the horizon.

The idea of intellectual potential, of what might be but isn’t—or what might have been but never was—lies close to the novel’s heart. The first narrator is Sibylla, an American woman who faked her way into Oxford for her undergraduate studies and stayed on for a doctorate in classical philology. She is the daughter of two brilliant but stunted people: her father had earned a full scholarship to Harvard, but his minister father convinced him to give the seminary a chance, and his total lack of interest in classes at the third-rate seminary that admitted him resulted in mediocre grades and the forfeiture of his Harvard scholarship. Her mother, meanwhile, was a brilliant violinist whose perfectionist father would not allow her to study to become a professional musician, simply because she would not be the best in the world. Sibylla’s successfully conning her way into Oxford is, then, a kind of vengeance on the supposedly meritocratic system in which both her parents were, by any measure, “qualified” to advance but failed, for ludicrous reasons entirely outside their control. Sibylla, too, was failed in the end by the system, having dropped out of her doctoral studies out of boredom and taken a job in London digitizing the text of old print magazine articles.

There is a problem here, which is that Sibylla is plainly brilliant, and not in an “impress people at cocktail parties” superficial way, but possessed of an intellect that cuts straight to the core of problems with almost frightening ease and precision. Reading her narration gives the sense of seeing into the mind of someone who has thought far longer and more deeply than the reader about any number of questions, and the novel could very easily become the story of a singular genius unjustly denied her due by society. But Sibylla herself is far too perceptive to expect justice from an arbitrary world, and she observes this in her first job as a secretary for a small publishing house:

It had an English dictionary that had first come out in 1812 and had been through nine or ten editions and sold well, and a range of technical dictionaries for native speakers of various other languages that sold moderately well, and a superb dictionary of literary Bengali which was full of illustrative material and had no rival and hardly sold at all. It had a two-volume history of sugar, and a three-volume survey of London doorknockers (supplement in preparation), and various other books which gradually built up a following by word of mouth. I did not want to be a secretary & I did not want to get into publishing, but I did not want to go back to the States.

There is no logic to this, just as there is no logic to Sibylla’s quite dire poverty. Nor is there an easy rationale behind her getting pregnant after a one-night stand with “a man who’d learned to write before he’d learned to think, a man who threw out logical fallacies like tacks behind a getaway car, and he always always always got away.” But pregnant she is, and a mother she becomes, and shortly thereafter with the business of raising and educating a child she must contend.

It is primarily through the boy Ludo’s education that the novel’s vision for human creative capacity unfolds itself. Sibylla is not a normal woman and does not take anything like a normal approach to teaching her son. For one thing, she still needs to work, and so her strategy is in part one of desperation: she allows his intellectual curiosity unlimited scope and does her best to satisfy it, reasoning that if he’s curious about the ancient Greek books on her shelves, the task of actually learning Greek will keep him sufficiently occupied that she’ll be able to work and keep their rent paid. She begins by teaching him (and the reader) the alphabet and finding words for him to color in; soon we find Ludo, on the cusp of his sixth birthday, about to finish the last book of the Odyssey in addition to literary selections in Latin, Arabic, and Hebrew, eager to finish them all so that Sibylla will teach him Japanese.

The Japanese is quite important, because Sibylla and Ludo spend much of their free time at home rewatching a video cassette tape of Akira Kurosawa’s The Seven Samurai, one of the masterpieces of modern Japanese cinema. For Sibylla it represents a great artist at the peak of his craft, making full use of the available tools of that craft; it also, in her imagination, more than compensates for her having to raise Ludo by herself, since instead of a single male role model he now has 17 (the seven samurai, the peasant boy, the actors who play each of them, and Kurosawa himself). Throughout both Sibylla’s and Ludo’s narration, the same film scenes replay themselves, narrated in the same words, asking the reader to consider how the same fragments of art can strike us differently in different contexts. They also confront us with an artist pushing up against the limits of narrative, inviting us to see what might be possible beyond those limits.

And what tremendous possibilities they are, for Sibylla imagines a development of literature that would raise it to the same technical levels as painting or music, in a passage that goads the imagination like a horsefly:

…it is closer to the truth that a painter would think of the surface he wanted in a painting and the kind of light and the lines and the relations of colours and be attracted to painting objects that could be represented in a painting with those properties. In the same way a composer does not for the most part think that he would like to imitate this or that sound—he thinks that he wants the texture of a piano with a violin, or a piano with a cello, or four stringed instruments or six, or a symphony orchestra; he thinks of relations of notes.

This was all commonplace and banal to a painter or musician, and yet the languages of the world seemed like little heaps of blue and red and yellow powder which had never been used—but if a book just used them so that the English spoke English & the Italians Italian that would be as stupid as saying use yellow for the sun because the sun is yellow…Perhaps a writer would think of the monosyllables and lack of grammatical inflection in Chinese, and of how this would sound lovely next to lovely long Finnish words all double letters and long vowels in 14 cases or lovely Hungarian all prefixes suffixes, & having first thought of this would then think of some story about Hungarians or Finns or Chinese.

It’s a frankly astonishing vision, and what makes it all the more astonishing is that in DeWitt’s hands it seems genuinely possible. After her novel has taught its readers to read ancient Greek, words that were once opaque become merely another aesthetic choice that she makes, and we begin to see glimpses of the literature that might be, that we might ourselves enjoy if only we had time to learn to read it.

Because that, really, is what we all need: the time to learn and an attentive teacher. When the initial formal difficulty of the novel has taught us to read it, we can see this diamond pillar of conviction holding up DeWitt’s entire picture of the world. Ludo’s breathtaking achievements are the culmination of a long family history of being failed by the system: Sibylla, with her sure knowledge of what thwarted potential is like, is determined not to stymie her son’s. But where does his potential come from? What makes him special? His father is an ass and a charlatan, so despite Sibylla’s fearsome intelligence, there’s no reason to think Ludo will be particularly brilliant on genetic grounds. The great unknown here is why, and the novel refuses to give us an answer. Instead, it shows us a family of people whose curiosity and potential were stymied, until finally one of them is not, culminating in a scene in which Ludo, all of 10 years old, begins to understand the spoken Japanese in The Seven Samurai, correcting the overly formal and conservative subtitles for his mother and thereby adding an entire new dimension to Sibylla’s experience of a film she knows by heart.

As Ludo grows, the reader is treated to snippets of his diary, until, at age 11, he takes over the narrative. His main concern is the identity of his father, and once he learns that the man is worthless, he begins the search for a new one; he does this because he wants someone in his life whom he can openly admire. Sibylla is not such a person, because she refuses to let herself be seen as such. Her work, done to provide for herself and Ludo, is alienating and mind-numbing, and Ludo’s growing independence causes her to treat him more and more as a small and somewhat naïve equal; this in turn exacerbates Sibylla’s already profound depression. She has not suddenly become a sentimental mother, but the task of raising Ludo did certainly keep her from boredom, and his increased ability to amuse and educate himself leaves her with little to do during the day except work and try to amuse herself with her minuscule financial resources. Her discussions of suicide become more frequent.

One of the most persistent and disturbing questions that The Last Samurai asks is what makes life worth living. It offers up the answer early on, in various guises: this is a novel that believes fervently in the power and worth of art. Sibylla is utterly transported by a pianist’s highly experimental concert, and when she is bored of the Circle Line, she regularly takes Ludo to one of the many free museums in London. The city’s free provision of art again raises the question of access: who really has access to all of this? The art is there for viewing, but the education that would let people, if they wished, appreciate it more deeply is expensive and restricted to supposed “elites,” and the paradox of Sibylla and Ludo’s highly educated economic precarity exposes this absurdity. But the kind of art that Sibylla loves most—cerebral, allusive, elevating detail and texture above narrative—remains mostly inaccessible to her in the sense that it is unpopular and therefore a person cannot make a living with it: she cannot enjoy it because the people who would make it are compelled to do something else instead.

This is a very different version of accessibility than we are used to discussing, but Sibylla’s suicidal ideation makes it clear that, for DeWitt, the question of people’s access to art and artists’ access to a living are both intimately tied up with basic economic justice and survival. The problem facing Sibylla is that she is not free to seek out the kind of art and intellectual life that will make her happy; the problem facing the pianist whose concert she saw is that he is not free to give the world the kind of art that he thinks he ought to give. Both have unfulfilled needs, and both are hampered by the need to devote most of their days just to surviving. In dramatizing this, The Last Samurai recasts the question of the “accessibility” of “difficult” art into a question of time and money. DeWitt has written a formally strange and difficult novel that takes the time to teach us how to read it, and that makes us believe that we ourselves, had we the time, could learn to enjoy literature and art that we now think, perhaps wrongly, is beyond our reach. I did not think that it was possible for me to want to read Arnold Schönberg’s Theory of Harmony, but I know for a fact that I am not the only reader whom Helen DeWitt has persuaded that this famously difficult work of music theory, written by the composer who went on to pioneer 12-tone composition, one of the most cerebral and anti-populist movements in the history of Western music, might nonetheless be a satisfying and even joyful book to read, if I take the time to read it properly.

What The Last Samurai believes in most of all is the astonishing untapped potential of every person. In this light, it is among the most thoroughly democratic novels any of us will ever read, because it does not distinguish between its two protagonists’ extraordinary intelligence and their extraordinary circumstances. In its pages readers learn to distinguish enough Greek and Hebrew and Japanese that we come to believe in the possibility that we might know these languages and many more in their entirety; we listen to a description of a nine-hour concert and think how wonderful it would be if we had the time to give our attention to such a thing; we see Sibylla outline a new horizon for literary art and believe, while we read, that with time and teaching we too could come to love such art and long for it the way she does. “When you play a piece of music,” the great pianist of the nine-hour concert says at one point, “there are so many different ways you could play it. You keep asking yourself what if. You try this and you say but what if and you try that. When you buy a CD you get one answer to the question. You never get the what if.” DeWitt poses the question in a radical way: what if our society were organized so that people could both produce and enjoy whatever they wanted? What if we prioritized the idea that people should be able to do and experience the things that bring them joy? What if the resources to satisfy a child’s endless curiosity were available to every child (and indeed to every parent of a child)? We do not live in such a world, but after reading Helen DeWitt’s masterpiece of a novel, I am convinced that it is the world that ought to be, and the one we ought not to rest from building.

The Socialist Ant

At first, you only see one. You kneel down, look closer: an ant. It is scurrying along your driveway in pursuit of some invisible goal. Then you see another. And another. Each one tap-tap-tapping its antennae on the cement, following a chemical trail laid by its sisters. Fascinated and amused, you observe these tiny workers with minute lives that seem so different from your own. Then, shifting your view a little further along toward their apparent destination, right at the edge of the pavement and your well-manicured lawn, you come upon it. A mass of undulating, swarming insect bodies. One on top of the other, legs gripping bodies, bodies gripping legs. Moving together as if controlled by the mind of a single organism–or some mindless Borgian will–the pile of ants is voraciously deconstructing what formerly resembled three double-stuffed Oreo cookies. The writhing insect horde consumes them rapidly, working in sync. Working to sustain the colony. Working, working, working.

The image of the ant primarily as a worker has persisted for millennia and repeatedly appears in disparate cultures across the world. The Hebrew scripture Proverbs 6:6 compels the reader to “go to the ant, O sluggard; consider her ways, and be wise!” Greek storyteller Aesop’s antsiest fable, The Ant and the Grasshopper, derives from the working ant the moral that “it is best to prepare for the days of necessity.” A Mexican proverb cautions that “an ant on the move does more than a dozing ox.” Above all else, ants work.

What can we, as humans, learn from the most proletarian of all animals? I propose that the beauty of a truly socialist society is to be found in the ways of the ant, when properly understood. But first, in light of the apparently mindless insect horde you may find occupying your driveway, we must confront the leading alternative. Are ants in fact a terrifying representation of authoritarianism? Are ants… fascists?

They say that a little knowledge is a dangerous thing, and myrmecology (the study of ants) is no exception. Superficial observations of ant colonies may indeed suggest more fascistic tendencies. As far as myrmecologists can surmise, ant workers do not get to choose the work they do for their colony. Dissent, such as it might occur among ants, is simply not tolerated. The individual spirit, any individual will, appears suppressed, with work unto death the fate of colony members. A soldier ant is born a soldier and must die a soldier. A worker ant is born a worker and must die a worker. A queen ant is born a queen and must die a queen.

Such societal regimentation and perceived lack of any individual spirit–under a monarch no less!–not unreasonably lead some to conclude that ants are our fascistic cousins in the insect world. Intellectual heavyweight and rightwing heartthrob Ronald Reagan, in his infamous 1964 speech “A Time for Choosing,” paints ant societies in the dull colors of authoritarian misery:

“You and I are told increasingly we have to choose between a left or right. Well I’d like to suggest there is no such thing as a left or right. There’s only an up or down—[up] man’s old—old-aged dream, the ultimate in individual freedom consistent with law and order, or down to the ant heap of totalitarianism. And regardless of their sincerity, their humanitarian motives, those who would trade our freedom for security have embarked on this downward course.”

The ant heap of totalitarianism! That certainly does seem ominous. Reagan shares this view with English author T.H. White who, in The Book of Merlyn, describes the ant thusly: Formica est exemplo magni laboris (“The ant is an example of great industry”). Sounds good, right? Wait until you hear what happens next: Merlyn turns Arthur into an ant in order to instruct him through lived experience. All of his fellow ants–along with Arthur himself–are known only by a number, language is largely reduced to “done” or “not done,” and the entrance to the colony is marked with a sign that reads “EVERYTHING NOT FORBIDDEN IS COMPULSORY.” At the head of the mindless working colony is an authoritarian queen, and work commands are dictated to Ant-Arthur directly via a “voice in his head” (presumably communicated via antennae). Just when his ant colony is on the precipice of total war with another colony, Ant-Arthur is, to his relief, returned to the human realm.

Propagandists like Reagan and White would have you believe that the life of the ant is terrible. Monotonous, drab, and, worst of all, lacking in individual freedom. Consigned to a life of endless work, the ant colony possesses a collective will portrayed as being as monstrous as it is relentless and powerful–easy pickings for films like Them!, Phase IV, and Empire of the Ants. Perhaps some of our horror derives from an uncomfortable perceived similarity between an ant’s life and ours. The British writer Gerald Brenan could very well have been on to something when he wrote that “[w]e are closer to the ants than to the butterflies. Very few people can endure much leisure.” Most of us immersed in U.S. culture proclaim a love for individual freedom and exhibit a disgust toward allegedly mindless creatures like ants. Meanwhile, indoctrinated into a Protestant work ethic and the pursuit of the illusory American Dream of hierarchical social advancement—a 2017 Harvard study found that the chances of “moving on up” from the bottom quintile of earners to the top quintile are 50 percent lower than Americans think—we sacrifice most of our waking hours to behemoth corporations. We have replaced fulfilling work that serves the common good with exploited labor that serves only a bleak set of corporate interests.

To be sure, the ways of the ant can be jarring, and it may be unwise to import into human society everything that we find among ant societies. In some species, a worker that contracts a disease will be forcibly removed from the colony to protect the wellbeing of the whole society. In weaver ants, even larvae are put to work in a form of child labor, with the adult workers using larval silk to bind leaves together and form nest structures in trees. Ant societies’ decisions are determined through chemical pseudo-communication, rather than anything at all resembling reasoned debate.

Art by Maxwell Singletary

But let us not put the “ant” in “ignorant.” It would come as a shock to the propagandists surveyed above–but not, of course, to Current Affairs subscribers–that the image of Formica est exemplo magni laboris is rather overstated.

It turns out that according to the Protestant-capitalist work ethic, many members of ant societies are abject sloths. Recent work on Temnothorax rugatulus ants, by Daniel Charbonneau and colleagues, has established that many so-called workers do not actually do much work. Colony activity is instead characterized by cycles of labor, with variation in how often any given worker is committed to any laborious task. Some “workers” may simply be a reserve force whose only purpose is to fill in any gaps should overall worker numbers fall below a certain level. At most other times, they are inactive, doing close to nothing. It is in fact believed to be common across colonies of different social insects that as many as 50 percent of workers are inactive at any one time. Historians of human work life before the Industrial Revolution—when eight hours of labor a day was considered a lot, and even peasants enjoyed several months’ worth of holidays—should find something familiar in these insect societies characterized by frequent breaks and periods of inactivity across large swathes of the workforce.

This unexpected fact about ant colonies echoes the Marxist slogan, “from each according to his ability, to each according to his needs.” The colony does not require constant activity from all workers at all times. The average person growing up in the United States, acculturated to a system of ceaseless alienated labor, might consider such inactivity the most, well, alien feature of ant colonies. Even people in more enlightened societies may be inclined to agree. Spanish essayist and novelist Miguel de Unamuno is certainly in this camp, as shown by his spiteful eruption of interspecies aggression in Mist: A Tragicomic Novel: “The ant, bah! The most hypocritical of all animals. All that he does is to walk about and make us believe that he is working.” Similarly, a century before the publication of research on “lazy” ants, the Canadian-American actor and comedian Marie Dressler presciently, if unintentionally, previewed a better world when she mused, “If ants are such busy workers, how come they find the time to go to all the picnics?” 

Then again, who wouldn’t like to devote less of their life to brain-numbing toil, and more of it to attending picnics in the park? By investigating ants’ lives in their totality, we begin to see the ant as socialist—and ant colonies as positive images of what human societies may become (and in some ways once were, prior to industrialization). Ironically, the Jewish proverb’s command to “go to the ant, O sluggard” commends an enlightened conception of human work as a more limited feature of our fundamental identity.

What about the authoritarian ant queen, oppressively ruling over Her subjects with an iron tarsus? While in some species the queen may exert a degree of chemical control over reproduction by suppressing egg-laying behavior in the worker caste, the label “queen” is largely a monarchic misnomer. Ant “queens” are, for the most part, simply the reproductive unit of the colony, with day-to-day decision-making (where to forage for food, as one important example) instead driven by emergent processes that arise from both the individual choices and collective will of the workers. Each worker that emerges from her colony to forage initially seeks food on her own, laying a chemical trail to alert her sisters to the location of a discovered food source. Over time, through the relative strengthening of trails to food sources that are both more abundant and nearer to the colony than others, the collective “chooses” to forage at better food sources (occasionally making mistakes along the way, of course). Contrary to the images conjured up by Reagan and T.H. White, the ant society functions far more like a grassroots democracy than a totalitarian dictatorship.
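To make that emergent “choosing” concrete, here is a minimal simulation sketch in Python. Every number in it (the evaporation rate, the deposit size, the trip counts) is a hypothetical parameter chosen for illustration, not a model drawn from the myrmecological literature; the point is only to show how trail reinforcement plus evaporation can let a colony converge on the better food source with no leader issuing orders:

    import random

    # A toy colony: two hypothetical food sources. The nearer, richer source
    # allows more round trips (and thus more pheromone deposits) per time step.
    TRIPS_PER_STEP = {"near_rich": 3, "far_poor": 1}
    EVAPORATION = 0.9    # fraction of trail scent that survives each step
    DEPOSIT = 0.01       # scent laid down per ant per round trip
    N_ANTS, N_STEPS = 100, 50

    pheromone = {"near_rich": 1.0, "far_poor": 1.0}  # trails start equally faint

    for step in range(N_STEPS):
        names, weights = zip(*pheromone.items())
        # Each worker independently follows a trail with probability proportional
        # to its current strength: no queen, no central planner.
        followers = random.choices(names, weights=weights, k=N_ANTS)
        for name in pheromone:
            pheromone[name] *= EVAPORATION  # old trails fade
            pheromone[name] += followers.count(name) * TRIPS_PER_STEP[name] * DEPOSIT

    share = pheromone["near_rich"] / sum(pheromone.values())
    print(f"After {N_STEPS} steps, {share:.0%} of trail scent marks the better source.")

Run it a few times and the better trail ends up with nearly all of the scent, occasional wandering included; flip the trip counts and the colony “changes its mind” just as readily.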

Ant sociality, organized through bottom-up decision-making, is as anti-patriarchal as you should now be expecting. To the extent that patriarchy fundamentally relies on the maintenance of hierarchies (sorry, lobsters!), this may not be very surprising. But it is also the case that all workers in ant colonies–that is, all members of the decision-making body of the colony–are female. Gender differentiation, even if ants could theoretically conceive of such a concept, would be highly unlikely to exist in any ant society, as males are generally produced only once a year, live for about a day, mate, and then die. Notably, while ant colonies are not patriarchal, neither are they matriarchal: no female ant, even the “queen,” serves as a true organizational head. Driven by neither a leader nor a commitment to a hierarchy, ant sociality is instead characterized by a dedication to community needs, whatever they may be and by whomever they are needed.

The positive, healthy sociality that binds ant colonies together has received attention from human luminaries over generations. Aristotle, in The History of Animals, links ants to humanity via shared sociality, both as “social creatures” that “have some one common object in view,” a feature that is unique even among “all creatures that are gregarious.” In Plato’s Phaedo, Socrates instructs Cebes that the happiest people are those who model themselves on the ants, bees, and wasps as “social and disciplined creature[s]” and thus become “decent citizens.” Like Aristotle and Plato, the philosopher Kanye West–before he ever donned a MAGA hat–spoke favorably of ant sociality in a lecture at Oxford University, observing that “people say it takes a village to raise a child. People ask me how my daughter is doing. She’s only doing good if your daughter’s doing good. We’re all one family. We have the ability to approach our race like ants, or we have the ability to approach our race like crabs.” (Two related ant-ecdotes: Both Khloé and Kourtney Kardashian once took to Twitter to ask if ants have dicks [they do], and the Daily News reported that fire ants could be used to help Kim Kardashian’s psoriasis. It is best to keep up with the Kardashians if only to learn more about ants!)

Could the spirit of the ant, rather than instilling in us a sense of disgusted horror, instead propel us toward a more beautiful, socialist state of international solidarity and flourishing existence? The opportunity we have to learn from the ant was not lost on Tenzin Gyatso, the 14th Dalai Lama, as expressed in his book on living a meaningful life:

“In fact ants, to cite just one example, work unselfishly for the community; we humans sometimes do not look good by comparison. We are supposed to be higher beings, so we must act according to our higher selves.”

The extreme individualism promoted as a fundamental virtue in the United States warps our perceptions of collective goods and communal goodness, manufacturing a misguided national revulsion at socialism and ant societies alike. Both are unfortunately perceived as destructive forces that limit societal goods, whether that be technological development or the wooden rafters in a suburban home. Yet ants have proven that an integrated, communal existence can be highly successful–there are over 13,000 ant species that are currently known to science, and ants are dominant or conspicuously present across nearly every continent on the planet (Antarctica, despite its name, is the one exception). Ants, which are technically a subset of wasps, likely evolved over 120 million years ago, and have survived and thrived up to and through the modern day. We humans do not look good by comparison, indeed!

Two of the most influential myrmecologists of this generation, E.O. Wilson and Bert Hölldobler, highlighted that the “competitive edge” that has led to the worldwide success of the ants is their “highly developed, self-sacrificial colonial existence.” Following this observation, they opined that “[i]t would appear that socialism really works under some circumstances. Karl Marx just had the wrong species.” In my view, the first sentence is a deep insight, but the second fails to capture the potential of human societies to move beyond a constraining capitalist framework. If ants are seen solely in terms of their identity as “workers,” which we have already discovered is something of an oversimplification, then perhaps Wilson and Hölldobler are correct. But it is a capitalist myth that human worth is reducible to economic productivity, and so too is it a myth that human inspiration from ant life is limited to their productive work ethic. 

South African singer and civil rights activist Miriam Makeba sees the power of the ant in spiritual rather than utilitarian terms, writing that “I look at an ant and I see myself: A native South African, endowed by nature with a strength much greater than my size so I might cope with the weight of a racism that crushes my spirit.” Uyghur civil rights activist Rebiya Kadeer derives similar inspiration from individual ant persistence, motivated by a fable her father used to tell her about a tenacious little ant and an egregiously skeptical bird. Individual strength, not simply communal success, is found among the ants. And what Wilson and Hölldobler term “self-sacrificial colonial existence” could also be termed “mutual care” or simply “love.” Why should we confine such behaviors and social structures to the anthill? The ant colony is an image not of fascism or authoritarianism, but rather of communal, socialist living where individuals are born with the innate belief–one that is highly attuned to reality–that all in a society rely on each other to some degree.

Is it a fool’s errand to try to build more ant-like, socialist human societies? While we can derive both individual and collective inspiration from a holistic understanding of ants, it is of course true that humans are not ants, and do not share an ant’s innate sense of mutual reliance and community as deeply. Ants may not even have any kind of “sense” that we would recognize, but rather pure non-conscious instinct. For us humans, with our freedom of choice, it is always a “time for choosing.” But it is no coincidence that ants keep cropping up, over thousands of years, as a source of knowledge and inspiration in various cultures around the globe. Miniconjou-Lakota holy man John (Fire) Lame Deer speaks about an “ant power” that exists despite the smallness of the ant. Ants foreshadow the great wealth of Midas in Greek mythology (when he was a child, a stream of ants paid tribute to the future king by feeding him grains of wheat). According to Hopi legend, the “Ant People” brought the Hopi into their tunnels to protect them during global cataclysms. Guru Nanak, the founder of Sikhism, favorably compares “an ant filled with the love of God” to “kings and emperors with heaps of wealth and vast dominion.” Aboriginal people in modern-day Australia include honey ants in traditional paintings. A Korean saying translates to “an ant hole could break your precious tower down,” indicating that a tiny mistake could ruin everything. A community in Ecuador named itself “Añangu,” a Kichwa term for “leaf-cutter ant.” The Brazilian footballer Miraildes Maciel Mota, known as “Formiga” (“Ant”), is praised for her athletic prowess. A proverb of the Mossi people in Burkina Faso states that “when the ants unite their mouths, they can carry an elephant.” The near-universal presence of ants cohabiting with humans provides an opportunity for shared metaphors, shared symbols, and shared imagination. Any movement dedicated to international solidarity and human progress would be foolish not to appreciate an animal with such potential for global inspiration.

I am reminded of a Chinese fable that a fellow researcher shared with me while I was conducting research on spiny ants at the Xishuangbanna Tropical Botanical Garden in Yunnan, China. In “The Wisdom of the Ants,” a colony of 1,000 ants is living on a mountaintop when, one day, a forest fire surrounds them and threatens to wipe out the entire colony. Without hesitation, the ants realize what must be done. All 1,000 ants group together into a ball, roll down the mountainside toward the fire, pass through the fire, and arrive on the other side, where the ball of ants disassembles. While many ants died from this approach, it was a communal decision that was necessary to ensure the survival of the colony, and the fable refers to this as “the wisdom of the ants.” Movingly, the story was told in 1995 by Chai Ling, a survivor of the 1989 Tiananmen Square protests.

I believe that this “self-sacrificial” nature of ants–true in reality as it is in this fable–is attainable even in the human species. Contrary to Wilson and Hölldobler, collectivist ideologies do not “have the wrong species,” as demonstrated whenever protestors, as a collective, risk their bodily safety (and sometimes their lives) for any noble aim. Even right-wing figures sometimes issue calls to self-sacrifice, such as pressuring workers, for the sake of “the economy” or to “save America,” to resume labor despite a higher personal risk of serious illness or death from the current coronavirus pandemic. While a vibrant economy indeed carries some positive connection to human wellbeing, self-sacrifice is better justified when in service of goods that more directly promote human flourishing like physical and mental health, family cohesion, spiritual development, or fulfilling leisure.

It is important to remember, too, the remarkable biological diversity that real-life ants protect with their collective efforts. The over 13,000 known species exhibit behaviors that range from tending to aphids as cattle (including defending them from predators, milking them for the sugary “honeydew” they excrete, and moving them to greener pastures) to nomadic mushroom harvesting. Leaf-cutter ants gather leaves to feed to fungi which they maintain via fungal gardening in their underground nests. Several Pseudomyrmex species rely on mutualistic relationships with plants, which produce hollow thorns for housing and fatty Beltian bodies for food in exchange for the ants’ aggressive defense of the plants against herbivores. This is but a small sample of the stunning array of behavioral variation that has evolved within these communal insect societies. 

Ants also enrich the world simply by existing in it. Far from monotonous and drab, the diversity of ant life is marvelously odd and ingenious. Turtle ant soldiers, in the genus Cephalotes, use their flat heads as doors to protect the circular openings of their arboreal twig nests. The bullet ant Paraponera clavata carries a stinger that delivers such painful venom that a person can be in pain for a full 24 hours from just a single sting. Fire ants can form giant living rafts out of their bodies in order to survive floods, while army ants do the same in order to build living bridges and “bivouacs.” Trap-jaw ants can close their mandibles so fast that if they hit a surface, their bodies go flying into the air up to 50 times their body length. Spiny ants in the genus Polyrhachis sport ginormous defensive thorn-like spines that can nearly equal the length of their entire thorax.

As it is, or should be, with human diversity, the variation across ant life is worth preserving for its own sake. To be sure, ants form mutualisms that benefit other species and also provide ecosystem services through soil aeration and significant nutrient cycling. But the intrinsic value of preserving an array of ant species is not rooted in the external services that they provide. Ant societies, much like human ones, are both richer and more remarkable for all of the evolutionary variations on a simple myrmecological theme.

Ants are socialists, a truth as beautiful as ants themselves. Confronting a pulsating mass of uncountably many living creatures piled atop each other while they work to consume every crumb of a cookie may, as when first grappling with a radically new idea, initially elicit revulsion. Yet when properly considered as an instance of a non-hierarchical, democratic society working toward a noble communal goal (feeding their ant babies and providing mutual aid), that which was formerly unsettling is transformed into an inspiration. Incorporating more ant-like perspectives into our political decision-making is sure to come with challenges. But our human capacity to draw inspiration from diverse sources and use it to imagine or re-imagine our future sets us apart as a species. With no shortage of domestic and global challenges to confront—including wars, malnourishment and starvation, inadequate healthcare, racist and classist justice systems, climate change, union busting, corporate exploitation, underemployment, corruption, destruction of natural resources, and so many other pressing issues—we would be unwise to waste any good tool at our disposal. Let us not waste the wisdom of the ants.

The Royal Jelly Problem of Modern Workplaces

“What if I’m not actually good at anything?” For many working people, this thought lurks like an eel in one’s brainsoup. Even if you’ve spent years in a particular field, it’s easy to feel like whatever skills you possess are the equivalent of being proficient with Microsoft Word. Such doubts gnaw at your soul even harder if (or, sorry to say it, when) you get laid off, furloughed, right-sized, or otherwise informed by your employer that your services are no longer needed. This sucks. Nothing you can do will make it not suck. However, you may find some consolation in the story of Draymond Green, power forward for the NBA’s Golden State Warriors.

If you saw Draymond Green on the street, you would correctly assume he was a basketball player. He is a very large man by normal standards, standing 6’6” and weighing 230 pounds. By Green’s professional standards, however, this is nothing special. The average height of an NBA player is 6’6”, and it’s common for Green to guard opponents who are 7 feet tall. Green’s statistics—his quantifiable production metrics—are similarly meh. In his eight-year career, he’s scored 9.0 points per game. The average NBA player scores around 8.6. He’s never been the speediest guy, and his career PER (the stat that claims to measure someone’s overall value) is just 15.1. The league average is 15.0. Judging from both the numbers and a quick eye test, Draymond Green looks extremely replaceable. 

Except he’s not. There’s a reason Green’s coach calls him “a future Hall of Famer” (a claim that, predictably, draws the ire of data-worshippers). His team has won three championships since he joined them, and Green’s contributions have been crucial. He’s a three-time All Star who was named the Defensive Player of the Year in 2017. His skills as a passer enabled his team to “crack the code of modern basketball,” as one sportswriter put it. He’s consistently mentioned as one of the toughest and most versatile players in the entire NBA.

Just in case it isn’t clear: Draymond Green is really fucking good at basketball. But if he hadn’t had the good fortune of playing for Golden State—widely regarded as one of the NBA’s best-run franchises—there’s a solid chance he would’ve already washed out of the league.

The Royal Jelly Theory of Career Paths

For over a decade, the renowned basketball trainer David Thorpe has advocated for a “royal jelly” approach to player development. According to Thorpe, royal jelly is the stuff that’s fed by bees to an exclusive group of larvae who will become queens, and whoever receives the royal jelly is destined to be special. The gist of his theory is that a tiny handful of players, like LeBron James, possess such overwhelming talent that they can thrive no matter their surroundings—but the vast majority of players (like Green) need a nurturing, supportive environment to achieve their full potential. Thorpe’s grasp of the science may be shaky—royal jelly is actually fed to all larvae, just in different amounts—yet his broader point is still valid. Not just for basketball players, but for people in general.

Being stuck in a job where you don’t get a chance to show what you can do is one of the most universally frustrating human experiences. In David Graeber’s excellent book Bullshit Jobs, the late anthropologist describes how “the pleasure of being the cause” is fundamental to both our happiness and our sense of worth. What is meant by “the cause” differs from person to person—a tailor may take pleasure in being the cause of a beautiful dress; an event planner might find similar satisfaction in being the cause of a conference that gets rave reviews from attendees. Regardless of your occupation, it just feels good to use your various talents to cause something you consider cool.

Too often, however, workers are denied the opportunity to be the cause of anything meaningful. This is especially true in the service industry, which is driving most of what passes for “job growth” in the pandemic-wracked United States (and has been for years). Your role is set in stone from the first day you’re hired. You’re expected to bring the customer their food, rub their feet, or provide some other service to them. It’s usually not forbidden to have ideas about better ways to provide that service, but it’s not encouraged much either. There are other people who get paid to do the thinking. They’re the queen bees, and you’re a drone. The company-colony is simply not structured in a way that would allow you to be anything more than that.

Basketball teams and other companies typically have rigid hierarchies that determine who gets the royal jelly. Feeding the stars is Priority A. In basketball terms, this usually means giving opportunities to high draft picks and big-name free agent signings. There’s an understandable if crude logic to this. The organization has invested considerable resources in such players, and it makes management look stupid when those investments don’t pan out. This means stars (or would-be stars) get chance after chance to stretch their metaphorical wings, even if they shit the bed every time. There’s always a team willing to give the benefit of the doubt to “assets” like former No. 1 overall draft pick Andrew Wiggins, despite the disappointment that always ensues.

In the conventional working world, royal jelly is typically reserved for people who fall into one of three categories: 

1.    Fancy degree holders: If they have an MBA from an Ivy League university, how could they not be good at business stuff?

2.    Veterans of big companies: If they were good enough for Apple to hire, they must be something special.

3.    Relatives of the boss: Authority is just in their genes, baby. 

To be fair, some companies have more enlightened ways of determining who gets the royal jelly—just as the Warriors gave Green a chance despite his lowly status as an undersized second-round draft pick. After all, one of the American business media’s favorite stories is “CEO started at the bottom and worked her way to the top.” But it’s worth considering why these stories are considered newsworthy. Why is it so surprising to learn that the guy who runs Wal-Mart once loaded the company’s trucks as a teenager*? 

“Because,” you might be screaming at your screen, “that almost never happens!” From your own experience, you know that the formula of Grit + Persistence + Loyalty = Personal Success and Fulfillment is one that exists only in the heads of billionaires and other wealth-fetishists. The odds of a non-pedigreed entry-level worker in any field getting a taste of the royal jelly are exceedingly slim. If you start at the bottom, you might make it to the upper-bottom or lower-middle with immense luck and hard work. But your odds of getting a chance to show what you can really do remain slim—unless, like Draymond Green, you manage to land in one of the rare situations that actually gives you a chance to succeed.

*Wal-Mart’s current CEO earned the equivalent of $16.49 per hour when he was an inexperienced teenager loading trucks back in 1984. Today, he’d make around $11 per hour if he were lucky.

Royal Jelly and the Supply Problem

Back in 2012, when the Warriors picked Green in the second round of the NBA draft, few observers took much notice. Fast-forward a few years, and the entire league was obsessed with the quest to find the next Draymond Green. The appeal was obvious: instead of scuffling with 29 other teams over a handful of pedigreed studs (i.e., the type of player who always comes with a massive price tag), a savvy franchise that knew how to recognize and develop less-obvious talent could tap into a massive, under-exploited resource pool (while saving a shitload of money in the process).

The conventional work world has had a similar epiphany in recent years. Publications aimed at business leaders now publish an unending stream of how-to guides for “discovering your employees’ hidden talents” and developing those skills to the max. In this case, the benefits are twofold. Companies can save money by promoting talented workers from within instead of paying a premium to lure them away from competitors. They can also avoid the public relations nightmares that occur when word gets out about discriminatory distribution of the royal jelly.   

Everyone agrees that the royal jelly should be distributed more widely, and yet… it is not. Not even close. Take the Warriors, for example. Since drafting Green in 2012, the team has made an additional 11 selections. Only a handful of those players are still on the team, and none of them have achieved even a fraction of Green’s success (the Warriors’ abandonment of their previous royal jelly philosophy is something we’ll address in a second). Outside of the NBA, the story is much the same. Despite the constant assurances and multimillion-dollar “initiatives” from companies like Google that claim to prioritize finding and cultivating talent from underrepresented demographics, there’s been little actual progress on that front for years. Organizations love to make noise about hiring diversity and inclusion executives, but they’re much quieter when talking about any tangible action that might spread the royal jelly more evenly.

Why is there such a disconnect between how we believe royal jelly should be distributed, and how it is in reality? Here, it might be helpful to define “royal jelly” in more concrete terms than we’ve used so far. Luckily this is easy to do. Green himself puts it best—it’s “the opportunity to fuck up.” In the NBA, this might mean bricking some 3-pointers early in the shot clock, blowing the occasional defensive assignment, or chucking an ill-advised outlet pass into the fifth row of seats. In the conventional working world, this could mean botching a presentation, messing up a big delivery, or failing to massage a client’s ego to their satisfaction. In either case, getting the royal jelly means knowing your next mistake won’t be your last. 

We can identify a handful of reasons why the supply of royal jelly is so constricted. First, there’s incompetent leadership. In many organizations the people in charge have little interest in or aptitude for cultivating the growth of the people they manage (Tom Thibodeau, the recently hired coach of the New York Knicks, might be the most notable example of this in the NBA, with one ex-player calling Thibs’ approach to development “a slap in the face”). Second, there’s an often-misplaced sense of urgency. Organizations that are bent on chasing short-term goals tend to devalue any action that doesn’t directly and immediately contribute to achieving them. This is why Green’s own team, the Warriors, has been unable to replicate the success they had with him—who cares about helping the next batch of young players hone their skills when there’s a championship to win this year? In more mundane terms, who’s got time to help a young graphic designer get better at Photoshop when there’s a big project due tomorrow?

A third reason for the royal jelly shortage is unique to the conventional working world, and it can be traced to the odious influence of the so-called “lean business model.” In theory, lean companies are designed to eliminate waste and inefficiencies by streamlining various internal structures and processes. Maybe that sounds like vacuous bullshit to you. That’s because it is! A lean company is really just one that has the minimum viable number of people working for it. There are no Miltons, in other words—no superfluous humans drawing unnecessary salaries, no bipedal drains on company resources who can’t pull their own weight or, more realistically, several times their weight. Fewer fixed expenses mean fewer mouths to feed with the pie of profit.

Even more traditional-style companies have embraced lean business principles, and the end result is that professional development has now been outsourced to the employees themselves. It’s no longer the company’s responsibility to give you the help you need to hone your skills (and make them more money in the process). It’s up to you, the dubiously motivated employee, to do your own developing on your own time. Like if bee larvae were expected to feed themselves their own royal jelly.

Why Everyone Should Get a Taste of the Royal Jelly

When Draymond Green entered the NBA, expectations were low for a number of reasons. As mentioned before, he wasn’t particularly tall, and basketball is a sport where height confers obvious advantages. He was also old, at least in relative terms. Green was a 22-year-old rookie in a league where many top draft picks are teenagers and careers are measured in dog years. Perhaps worst of all, he was a tweener—a player stuck in between positions, without an obvious A+ skill that would allow him to thrive in a specific role.

Green could dribble, but he wasn’t a top-tier ball handler. He was willing to shoot from distance, but his accuracy came and went. He was quick but not exceptionally so, fast but not remarkably so, agile but not exactly a gymnast. Every good thing you could say about him came with an asterisk. He wasn’t terrible in any particular area, but talent evaluators doubted that a dude with a bunch of B-/C+ attributes could succeed at the highest level. Each team in the league (including the Warriors) passed on selecting Green at least once.

A few years later, Green had become an indispensable part of the Warriors’ title runs. To use the lazy sportswriter’s favorite clichés, he was the glue that held his team together, the Swiss army knife who did a little of everything, the key that unlocked the Warriors’ full potential. Meanwhile, players who did have that single A+ skill prized by scouts—like the sweet-shooting Kyle Korver or the defensive dynamo Andre Roberson—were rendered increasingly obsolete in the playoffs. Their lack of versatility made them a liability when the stakes were highest. Green, for his part, was balling out.

The same principle of “better to be pretty good at a bunch of things than excellent at just one” has been applied successfully outside of pro basketball. Books like David Epstein’s Range: Why Generalists Triumph in a Specialized World have made the argument for letting kids dabble in a variety of pursuits, and such views are now quite popular among parents who might have once steered their kids into focusing obsessively on a single sport, musical instrument, or area of study. Kids are happier when they get to explore the full range of their abilities, and they also tend to be better at whatever job they end up doing. This is an important point, and we’ll get back to it in a moment.

For now let us return to the conventional working world, which sadly still has little clue what to do with “tweeners.” Despite the fact that job duties are now broader than ever (Sell the company’s products! Run its social media accounts! Manage your boss’s schedule!), on-the-job training has been declining for decades. In 1996, over 13 percent of employees received such training—by 2008 that figure had dropped to 8.4 percent. While more up-to-date statistics are not yet available, since they’re collected only sporadically by the U.S. Census, economist Timothy Taylor makes a strong point when he says it seems “pretty unlikely that [the numbers] have risen in the aftermath of the Great Recession.”

Thus, at a moment when workers are expected to wear more hats than ever before—while often being paid much less than their predecessors—companies are showing an astounding lack of interest in sharing the royal jelly. Instead of taking the time and energy to develop raw-but-talented personnel into well-rounded All-Stars, organizations in almost every field are gambling they can find a desperate, over-qualified part-timer to patch the leaks in their boat and keep it afloat until some deep-pocketed dumbass bails them out big time. There’s a reason why stock in “freelancer platforms” like Fiverr keeps soaring: you can wring more short-term profit out of a company when you invest the bare minimum in the people who do the actual work. Such a strategy makes deeply depressing sense in theory, if not always in practice.

This is a terrible way to run an organization, and an even worse way to organize a society. There is simply no excuse for failing to share the royal jelly. From the conservative, “business-friendly” point of view, it’s a waste of resources. If your company has a longtime intern who’s pretty good at strategic planning and can whip up solid press releases and even knows a bit about marketing, it’s ridiculous to pay outside consultants inflated retainers for basically the same services you could get for the cost of a decent pay raise and a change in title. From the left point of view, hoarding the royal jelly is immoral and unethical because it pampers a few people while making the majority depressed, hopeless, angry, and unfulfilled. Neither of these justifications is likely to prove persuasive to those of the opposite perspective, but who cares? Either is fine on its own if it leads to the broader distribution of royal jelly.

At this exact moment, there are millions upon millions of Draymond Greens walking among us. Imagine! What an enormous number of interesting, talented humans, each of whom could take their teams to new heights! Who knows what incredible things they could do? All they need is the opportunity to fuck up and try again.

The Escalating War Between The Republican Party and Democracy

“If Republicans don’t challenge and change the U.S. election system, there will never be another Republican president elected again.”  — Sen. Lindsey Graham (R-SC) 

The majority of Americans reject the Republican Party’s basic agenda. Two-thirds of the country believes the federal government needs to do more to deal with global climate change. A majority desires a single government healthcare plan that would cover everybody. More Americans actually think the number of immigrants to this country should be increased than think it should be decreased. 71 percent believe transgender people should be allowed to serve in the military, and less than one-third support keeping marijuana illegal. Some polls have shown that nearly three-quarters of people believe in a new wealth tax on the super-rich.

The Republican Party rejects these dominant viewpoints. Not only does it not want a single government healthcare plan, but Republicans loathe the Affordable Care Act, which was intentionally watered down in the hopes of pleasing conservatives. The right’s position on climate change ranges from “it’s a hoax” (the president) to “it’s real but there’s no need to worry much about it” (the Wall Street Journal editorial page). The Trump administration’s immigration policies are hideously cruel, and would probably shock and disturb people if the news media paid as much attention to the suffering of immigrants as they do to Trump’s personal finances. Trump, though often referred to as a “populist” in the 2016 campaign, during which he promised to raise taxes on rich people like himself, promptly gave a giant tax cut to the rich when he became president.

We live in something faintly resembling a democracy, in the sense that ordinary citizens semi-regularly get to choose which of the two major parties they would like to be ruled over by. The existence of such a system presents a basic problem for the Republican Party. If their ideas are not popular, but the party with the most popular ideas is the one that wins under a democratic system, Republicans are destined to lose. Even if there are many tens of millions of passionate believers in the Republican agenda, if those people are ultimately a minority, they will not prevail in a system where you have to get the most votes in order to be given power.

Democracy has always presented a problem for the right, due to the relative unpopularity of right-wing ideas. This was even true in 1930s Germany. The majority of Germans never voted for the Nazi Party—even in an election where Hitler’s Brownshirts unleashed a giant campaign of violent repression against their political enemies, the Nazis fell well short of 50 percent of the vote, a problem they solved by getting rid of elections entirely as soon as they were able.

Some U.S. conservatives are quite open about their suspicion of universal suffrage. The Founding Fathers, of course, explicitly did not believe in it, and there are still women alive today who were born before women had the right to vote. Citing England as an example, James Madison famously noted that “if elections were open to all classes of people, the property of the landed proprietors would be insecure” and thus government should “be so constituted as to protect the minority of the opulent against the majority.” Today, right-wing publications still invoke the founders to explain why democracy is bad. Here’s a Heritage Foundation article about the risks democracy poses: 

In a democracy, of course, the majority rules. That’s all well and good for the majority, but what about the minority? Don’t they have rights that deserve respect? Of course they do. Which is why a democracy won’t cut it. As the saying goes, a democracy is two wolves and a sheep voting on what’s for dinner.

Now, we should be careful to think about what these abstractions about minority rights actually refer to, because the phrase can be used to mean several very different things. A society in which 10 percent of people are Black, and 90 percent are White, and the latter use their demographic predominance to simply subvert the political preferences of the minority, is one where there is a “tyranny of the majority” problem. But what about a society in which one person has a billion dollars, and the rest of the people have no dollars, and must sell their labor to the billionaire in order to survive? They could, theoretically, vote to tax the billionaire, and because there are more of them, they would win in a system of “majority rule.”

Clearly, then, it is correct that strict “majority rule” in which 50.00001 percent of people could do anything to the remaining 49.99999 percent if they had the votes is a recipe for injustice. People have inherent rights that can’t be violated by some larger group merely because it is larger. But what are those inherent rights? Is it “tyranny” for the “opulent” to be deprived of some of their opulence? The majority of people are not shareholders in ExxonMobil. Is it a violation of the rights of ExxonMobil shareholders for the majority to decide that the company’s contribution to climate destruction necessitates its expropriation and dismantlement? It’s easy to see how an argument that numerical minorities have inalienable rights could be used as a justification for subverting important popular policies that numerical minorities simply do not like.

Such is the situation the contemporary Republican Party finds itself in. Conservative ideology (at least of the kind that dominates the Congressional Republican Party) is that the wealthy are entitled to their wealth and that the government should not engage in the business of making society fairer through generous social welfare policies. But if the majority of the population does not agree, Republicans have a choice between simply accepting that they are beaten and finding some way to thwart that majority. Because conservative ideology says that small government and the protection of the wealthy’s property are issues of fundamental liberty, and that things like universal healthcare programs are tyranny and authoritarian socialism and the Road To Serfdom, they are disinclined to give up the fight.

And so the Republican Party is in an inherent conflict with American democracy. In the “red states,” where they predominate, they have less of a problem. But on the national level, they need institutions that will “protect the rights of the minority.” Fortunately, we have such institutions: the Senate, the Electoral College, and the Supreme Court. The Senate gives representation to Wyoming but not to Washington, D.C. or Puerto Rico, and gives a person in California much less of a voice than a person in New Hampshire or South Dakota. The Electoral College means that a candidate can become president despite getting millions fewer votes than their opponent. The Supreme Court has absolutely colossal powers; it can overturn essentially any law that it likes. Yet its justices are unelected, must be confirmed by the undemocratic Senate, and now tilt conservative by 6-3 in a country that does not tilt conservative 6-3. 

It is worth emphasizing just how illegitimate the structure of American government is from a democratic perspective. The entire basic framework, the Constitution, was never approved by the population on which it was imposed. Denying full voting rights to D.C. and Puerto Rico is impossible to defend (likewise colonial possessions like Guam and the U.S. Virgin Islands); every day that the situation persists is a day that the United States government is making its laws without satisfying the most elementary condition of legitimate governance: that people ought to have a say in what the rules are, either directly or through the election of representatives.

Republicans will fight hard, though, to make sure that these serious deficiencies in our basic system of government are not resolved. Donald Trump has been clear about why he opposes giving D.C. the statehood that should belong to it by right. He says plainly that it shouldn’t happen because it would empower Democrats:

“D.C. will never be a state. You mean District of Columbia, a state? Why? So we can have two more Democratic — Democrat senators and five more congressmen? No, thank you. That’ll never happen.”

There is no good argument for forcing “taxation without representation” on D.C. residents. The rationalization that the right will give is that some things are more important than universal suffrage, namely “liberty” (their word for radical laissez-faire capitalism). If elections would produce undesirable egalitarian economic outcomes, then voters are irrational and must have more enlightened rulers make better decisions on their behalf. (There are entire books by libertarian scholars called things like Against Democracy and The Myth of the Rational Voter making these kinds of arguments.) Senator Mike Lee (R—of course) created some controversy recently when he tweeted that “democracy isn’t the objective; liberty, peace, and prospefity [sic] are” and that “rank democracy” can “thwart” the flourishing of the human condition. Historically, United States government officials have used similar arguments to justify subverting the popular will around the world. In Vietnam, the United States propped up a government it knew was unpopular and would lose a fair election, because it felt justified in preventing the country from going communist. From Chile to Guatemala to Iran, the United States has supported the overthrow of democratically-elected governments on the grounds that an autocrat better serves the people’s interest (and our own). When the people of Iraq made it clear through their representatives that they didn’t want U.S. troops there, Donald Trump immediately threatened to impose sanctions. Grand talk of spreading democracy goes out the window the moment a democracy does something we don’t like.

At the moment, Donald Trump is doing his best to cling to power despite having lost a fair election. I very much doubt he’ll succeed—for one thing, most of the power elite has no incentive to help Trump thwart the process, given that Joe Biden poses little threat to the status quo. A lot of Republicans have been eerily silent even as Trump has spread baseless lies about a rigged election and refused to concede. This is alarming, because it raises the disquieting possibility that if Trump tried to stage a coup, the right would raise no objection. Many Republicans, it seems, believe that the end of remaining in power justifies any means. They are purely Machiavellian and will surrender nothing unless they are forced to surrender it. Hence gerrymandering, poll closures, etc. If the gap between the popular will and the Republican agenda grows wide enough, the measures necessary will become increasingly extreme. Lindsey Graham is certainly right when he suggests that if Republicans want to win the presidency, mail-in ballots are going to be a serious obstacle. It’s not surprising that prominent members of the Republican Party do not have much of a principled opposition to seizing power undemocratically, since, after all, their whole ideology rejects democracy to begin with. As Zack Beauchamp explained in a useful pre-election Vox article:

The idea that majority rule is intrinsically oppressive is necessarily an embrace of anti-democracy: an argument that an enlightened few, meaning Republican supporters, should be able to make decisions for the rest of us. If the election is close, and Trump makes a serious play to steal it, [Mike] Lee’s “we’re not a democracy” argument provides a ready-made justification for tactics that amount to a kind of legal coup.

We can expect, then, to see increasingly anti-democratic measures if popular opinion diverges far enough from the Republican agenda. But there is nothing inevitable about the trend of public opinion. The Republican Party does have another important political tool at its disposal, namely shameless lying. For example, Americans generally do not believe that people should be denied insurance coverage because of pre-existing health conditions. Ideologically, however, the Republican Party is opposed to prohibiting companies from denying coverage because of pre-existing conditions, because it believes that corporations should not be told by the government who they have to sell their products to. This creates a problem: the ideology is in conflict with the popular will. Some Republicans resolve it by simply lying about their position. The same is true of Social Security and Medicare cuts. People do not want the government to reduce their retirement benefits or health coverage, so Donald Trump insisted that “We will not be touching your Social Security and Medicare in Fiscal 2021 Budget,” before proposing a budget that did exactly that. Republicans try to get people to believe false things about climate change, false things about the healthcare systems of other countries, false things about the impacts of taxation on the economy, etc., because if people knew the truth, they would not be able to swallow the ridiculous idea that social democracy is the “road to serfdom.” (Many Democrats also spread nonsense about the healthcare systems of other countries.)

The Republican Party is an extremist organization that poses a serious threat to the future of human civilization. If it is allowed to hold power, it will accelerate climate catastrophe and the international arms race. Fortunately, the suicidal ideology of free-market libertarianism is marginal, because broadly speaking people do not want to live in a world where natural disasters destroy their communities and they work long hours at tedious jobs where they have no rights. In Florida, a state carried by Trump, voters overwhelmingly supported a $15 minimum wage, because most people do not accept the right-wing dogma that high minimum wages are bad for workers. 

But conservative ideology is scary, not just because of its unhinged policy ideas (let neo-feudalism flourish, burn the planet, build more nukes, strip workers’ rights, let people die if they can’t afford to go to a hospital) but because it contains a ready-made justification for imposing those ideas by authoritarian means. Majority rule is tyranny. The Founding Fathers wanted to protect minority rights (not racial minorities, of course—minorities of the opulent). This can easily morph into something like “My opinions are unpopular, thus I have a right to impose them on the masses to prevent their tyrannizing over me.” The more Republicans agree with Mike Lee that “liberty” rather than democracy is the goal, with liberty meaning the private property rights of the rich, the more we will see a form of feudalism justified as being healthy and good. Either Republicans will succeed in changing people’s minds and getting the public on the whole to share their delusions, or they will have to resort to less and less principled methods of pushing their agenda through. When we see Mitch McConnell’s ruthlessness in ditching his respect for procedural norms whenever it is convenient, we have to understand that McConnell is doing what he has to do to secure a viable future for his rotten party. He knows that principles might sign the Republican Party’s death warrant. And Republicans do not intend to die quietly.

Couch Surfing the Waves of American Poverty

When most people think of “couch surfing,” they picture the adventurous European travels of college students during summer vacations. But the term is also used by homeless people to describe their own efforts to avoid the streets by temporarily staying with friends, family members, or (oftentimes) complete strangers. This type of couch surfing is a sort of purgatory that exists midway between sleeping in the abandoned ruins of factories and the relative comfort of one’s own subsidized housing. If the couch surfer is staying in someone else’s subsidized housing unit (as is often the case, because poor people tend to shelter with people from their own social networks), that is likely to draw intense bureaucratic scrutiny. For both couch surfers and those harboring them, there is risk from landlords, housing authority officials, and caseworkers who (often in concert) have the authority to harass tenants, evict them, and even terminate precious subsidies. Couch surfers thus become targets in a high-stress game of cat and mouse. For millions of Americans there is no assurance that the bed, sleeping bag, or undersized couch they slept on last night will be available the next day. But in a country where the “official” social safety net exists more in theory than in practice, poor people have few other options.

Couch surfing is a form of homelessness, but the U.S. government refuses to recognize it as such. To appreciate this conceptual failure, one has merely to scan the Department of Housing and Urban Development’s (HUD) 2019 Homeless Assessment Report to Congress. The 98-page document begins with a statement by HUD secretary Ben Carson, accompanied by a photo of his sleepy face. The thing that most struck me about this document, however, was that the term “couch surfing” never appeared. Not once. The report mentioned, in passing, that many homeless people stay with relatives or friends prior to becoming officially homeless, but “staying with relatives or friends” is a rather euphemistic phrase that does not capture the anxiety and desperation inherent in the struggle to keep a roof over your head when you can’t pay rent.

The 2019 HUD report on homelessness estimated there are fewer than 600,000 homeless people in America on any given night. However, this is equivalent to concluding that the only Americans who eat are those who are within the walls of a grocery store on “any given night.” HUD’s numbers refer only to people who stay in an official shelter, or in no shelter at all. The total would be far higher if HUD included people who fall under the Urban Dictionary’s definition of a couch surfer, which refers to anyone “who is homeless and finds various couches to sleep on and homes to survive in until they are put out.” It is both concerning and darkly amusing that an extensive, supposedly definitive government report provides less context than an anonymous quip posted to illustrate vernacular speech.

I am an eyewitness to the government’s failure to confront the poverty crisis in good faith. Over the last 25 years, prior to my retirement, I worked as an outreach mental health clinician in Greenfield and Turners Falls, Massachusetts—two remnants of former American glory. These rural mill towns feature stunning Mesozoic geology, splendid old factory ruins, and fine 19th-century architecture. Franklin County, where these towns are located, may not seem like “a poor place” at first glance (at least to people who associate poverty with urban settings, Blackness, Hispanic heritage, Southernness, or some combination of these). But consider that Turners Falls, once a booming center of tool manufacturing, is 95 percent white with 19 percent of the population living below the poverty line. The beauty of the area belies its harshness. Most of my clients lived in large projects carved into bucolic landscapes: all three of Greenfield’s housing projects are situated along the bubbling Green River. There, one can find bald eagle nests, beaver dams, and pristine swimming spots that would have pleased Norman Rockwell. Most of the homeless encampments are along the Connecticut River in Turners Falls, where the fishing is better.

The vast majority of people I worked with belonged to one of four categories (which tend to be fluid rather than fixed, unchanging states): 

  1. The “utterly” homeless surviving in tents in the woods, sleeping in abandoned buildings, or secretly crashing in basements and hallways of occupied structures. 
  2. Couch surfers who bounced from one apartment to the next and were likely to suffer some nights of “utter” homelessness. 
  3. People who’d obtained enough support from “the system” to be free of imminent catastrophe. This group included people who qualified for Social Security Disability Insurance (SSDI), Section 8 or other housing subsidies, and/or Supplemental Nutrition Assistance Program benefits (SNAP, commonly known as “food stamps”). 
  4. The working poor who toiled at low-paying jobs with no union backing. A few had tenuous employer-paid health insurance with crushing co-pays. Sometimes they’d also qualify for SNAP, but most made just enough income to be disqualified from government benefits.

Of the four categories above, couch surfing was the most universal. Not everyone has the wherewithal to live in a tent or sleep in a shed, and obtaining SSDI and a housing subsidy is a little like hitting the lottery. But anyone can couch surf, and it is a critical practice among those who are stuck in the clogged pipeline for federal housing subsidies. According to estimates from the Public and Affordable Housing Research Corporation, tens of millions of people are currently waiting for subsidized housing approval. So if there are fewer than 600,000 homeless people on “any given night,” and tens of millions on HUD program waitlists, where are those people waiting?


When I first met my client Rhonda, a Black woman in her 30s, at her apartment in downtown Turners Falls, she pointed to the shirtless man snoring on her living room couch and told me, “That is my gas money.”

As Rhonda explained, “Don’t anybody drive around here but me, and they all got to see the doctor, all got to eat, all got to go here and go there, and that’s on me. Do any of them give me a nickel for gas?” Rhonda received $750 a month from SSDI and another $75 apiece for her two boys, aged nine and 11. She drove her sister and her sister’s four children to appointments, as well as her aged mother, who was living alone but showing signs of early dementia. Couch surfing is a way for people like Rhonda to squeeze a few dollars out of an underground economy that makes a life of poverty nominally viable. People with a housing subsidy can “sublet” a corner of their apartment to those who are even less fortunate, and Rhonda collected about $60 a week from this man. She was participating in an informal system that extends the meager generosity of the state’s so-called safety net to encompass those who are arbitrarily excluded from protection. Poverty hurts many people, and destroys others, but no one should assume that poor people passively succumb to brutal economic systems. People invent, improvise, and find ways to circumvent a bureaucratic structure that looms over life with nihilistic indifference.

However, the couch surfing system is a very unstable one, and Rhonda soon ran into bad luck. The shirtless man on her couch, whom she had met at the laundromat, apparently had a record for domestic violence. The perfect storm came down hard. A neighbor had some “beef” with Rhonda and reported the visitor to the property manager, who called Rhonda’s caseworker at the Department of Social Services (DSS). In such contexts, agencies and officials are like pinball flippers operated by a blindfolded player. People file complaints against one another for child neglect all the time. In most cases, these accusations result from nothing more than private animosities. The consequences for the accused, though, can be dire.

Running afoul of the DSS is an ever-present worry for many poor people, and Rhonda’s experiences showed why. First, the DSS worker argued that Rhonda had a long history of exposing her kids to dangerous boyfriends. Then, the property manager served an eviction notice because Rhonda allegedly had a history of taking in people whose names weren’t on the formal lease. Finally, the housing authority moved to terminate Rhonda’s Section 8 subsidy. The tempest took many years to resolve, with Rhonda’s children being taken from her by the courts and put up for permanent adoption. Both children ran away from their separate adoptive families years later, back to Rhonda, who was couch surfing in another complex after being evicted. At this time, she was assigned a new DSS worker who supported family reconciliation and helped her get a new subsidized apartment.

Rhonda was lucky, relatively speaking—most similar stories don’t end on such an “upbeat” note. I do not want to suggest that DSS workers routinely take children from their families for frivolous reasons. In most cases such workers try hard to keep kids with their families. Nor do I wish to malign frontline housing authority staff. They generally try to do the best they can in a system designed to give too much control to private landlords. There are, however, enough instances of questionable DSS intervention and arbitrary punishments from housing authorities to make poor people terrified of interacting with these agencies. In my caseload everybody knew somebody who had lost their children, and many more had been threatened with that possibility. When you’re poor, critical moments in your life are often shaped by bureaucratic whim.

Before I started working as a therapist in Franklin County, I had little idea what poverty felt like on a visceral level. Everybody knows that poverty is a monstrous problem in America, born of wealth inequality, capitalist greed, public indifference, and so on. But few people who are not poor themselves have the chance to view poverty in intimate detail over time. At close range, one observes that poverty has predictable patterns and wounds people in repetitive and familiar ways. The poor do not suffer from random acts of fate, but rather from frighteningly mundane methods of cruelty and humiliation. To be poor is to be pressed on one side by the challenges of basic survival, and on the other side by the indifference of bureaucratic institutions—the proverbial rock and a hard place.

My client Richard was a prime example. He had a few unpaid parking tickets that doubled and tripled over time, until finally the Registry of Motor Vehicles (RMV) suspended his license. He was working as a house painter and had to get to work, so he drove anyway and got caught—repeatedly. I can’t recall how many times he was stopped before the accumulated charges added up to a penalty of jail time. Richard couldn’t pay his fines, so the state decided to lock him in a cage. The disruptions caused by small fines are, for people with no savings, potentially life-altering. Events that would have little impact on a person with even nominal financial security can obliterate the tenuous stability of poor people in an instant.

For Richard and people like him, the loss of freedom means that time will slow to a crawl. The tedium, boredom, and emptiness of prison life have been described to me, unfailingly, by many clients. Since poor people are given less time on Earth than their more prosperous peers (the medical journal the Lancet reported a 14-year discrepancy in life expectancy between the top 1 percent of earners and the lowest 1 percent), robbing them of a portion of that time is particularly cruel. Life is objectively shorter for the poor, and many distraught people have told me how neighbors and family members passed away in their 40s, 50s, or 60s.

If life is shorter for those who have nothing, this hardly means that it proceeds at a gallop, regardless of whether they’re free or imprisoned. Bureaucratic time has a uniquely slow cadence even if life itself is quickly running out. My client Alex, for example, spent 10 years in prison and another 10 years couch surfing while waiting for the approval of his SSDI application and subsequent Section 8 housing voucher. Since most low-income units are operated by private landlords, there is no mechanism within the housing authority to ensure that former prisoners are not discriminated against. Thus, it took Alex an eternity to find a single room in a building directly managed by the housing authority.

Only a month after Alex moved in, a resident down the hall knocked on Alex’s door to complain about loud music. The neighbor was over a foot taller than Alex. Alex grabbed a bottle, smashed it against the door frame and brandished the jagged glass. The next morning, police arrested Alex and he went back to the Franklin County Jail for three months. When I met him there, Alex explained that when he first went to prison people beat the shit out of him because he was small. He learned that one had to display craziness quickly and often. Sometimes things that work well in prison don’t work on the outside. When Alex was eventually released, he had no government benefits and nowhere to go. Today he is still homeless and couch surfing while hoping for his number to come up on the housing authority waitlist once more. 


Aside from the interminable waiting periods for government benefits, the rituals that poor people must perform to receive them are seemingly designed to reinforce the idea that being poor is a personal failing. Applying for SSDI is an arduous, soul-obliterating endeavor that can take anywhere from a year and a half to three years or more. It requires psychological testing, multiple forms and questionnaires, and piles of documents from doctors and therapists. Applicants must run a gauntlet of hostile bureaucratic functionaries who search each narrative for contradictions, especially for evidence of non-prescription drug use.

This is from a brochure put out by a for-profit business calling itself “The Legal Disability Benefits Center”: 

“The SSA will not grant Social Security Disability benefits based on a Drug Addiction. In fact, a Drug Addiction may prevent you from obtaining the Social Security Disability benefits you may otherwise be entitled to. For example, if you suffer from bipolar disorder and are not undergoing proper treatment for the condition and are trying to self-medicate with illegal drugs, your Social Security Disability application is likely to be denied. You will need to recover from your addiction and undergo necessary treatment for your bipolar disorder in order to qualify for Social Security Disability benefits.”

I always coached my clients who applied for SSDI benefits to hide their drug issues, and cautiously edited my own entries to their applications. However, many (if not most) people dealing with addiction have a substance-related offense, or else their addiction history turns up in a medical record or past psychological evaluation. Some 25 to 30 million Americans suffer from substance use disorders, which means almost 10 percent of the population is intentionally excluded from safety net protections. The SSDI protocol depicts substance use as a personal, moral failing—flagrantly dodging medical research indicating that substance use disorders reflect genetic predispositions.

Antiquated moralizing about substance use is just one of the humiliations inflicted upon people trying to get help from the government. If a person is lucky enough to procure SSDI, an almost Herculean task of persistence, luck, and multi-year patience, there is another mountain to climb: the subsidized housing application process (since applicants need a source of income to even qualify for subsidized housing, applying for SSDI almost always comes first). In Franklin County, where I worked, the wait for Section 8 housing averaged three to five years. The number of available units is so small that people are forced to apply for units that are nowhere close to where they currently live. A recent conversation I had with a staff member at the Franklin County Regional Housing Authority revealed that there are over 244,000 individuals and families on the Section 8 waitlist for Greenfield, a city of under 20,000. This is because many people in the Boston vicinity are so desperate for housing that they have put in applications for subsidies throughout the state. Nationally, it is estimated that there are 35 subsidized units for every hundred families who need one.

People who are lucky enough to be admitted into HUD-subsidized housing projects are still kept at arm’s length from “respectable” society. Concentrating the poor into projects is a form of ghettoization that mirrors the logic of the prison. Poor people are sequestered from the larger community and the intricacies of their lives are carefully hidden, like those of prisoners or people condemned to leper colonies. Where I worked, the project construction involved the strategic use of woods, hills, rivers, and other geographic features to isolate the complex from the surrounding community. For example, the Greenfield Gardens housing project is set in a valley carved by the Green River and abutted by an abandoned field that once served to grow crops for prisoners at the Franklin County Jail. Known as “the Gardens,” this project is a privately run complex with some 200 units constructed in a manner that gives children, enjoying the playground swings, a vertiginous view of the jail’s barbed wire loops, chain link fences, and cinder-block structures that loom from atop the adjacent hill. One wonders if the town planners envisioned the jail and the housing project as a rendering of the entire life cycle that poverty maps out. A person is born in the projects to a family with a housing subsidy, winds up in jail, and is released with no housing subsidy to begin couch surfing down the hill. The whole odyssey takes place within a quarter of a square mile, and involves, almost invariably, a net loss.

Let’s return to the case of Richard. When he got out of prison after serving three months for driving with a suspended license, he couch surfed with his parents in Greenfield Gardens (Richard’s sister’s three children were also staying in the apartment). Richard no longer had a car, nor was he eager to risk driving again without a license, so he lost his painting job. He knew that his parents would be pressured to have him leave, so he moved in with an old high school friend in another project. Then, his friend died of a fentanyl overdose (opiate deaths are epidemic in Franklin County). Richard subsequently moved back in with his parents, provoking the property manager to threaten to evict his family.

In HUD complexes, people like Richard’s parents—who have almost nothing—often risk their all-important government subsidy in order to protect homeless family members. This sort of generosity, the normally laudable reaction of parents whose children are faced with deadly hardship, puts them at risk from property managers, nosy bureaucrats, and vengeful neighbors. Once the authorities detect a couch surfer, the bureaucratic and legal mechanisms leading to eviction and homelessness become a consuming possibility for the holder of the subsidy. From here, the narrative can take many idiosyncratic twists. The lease holder might deny that the couch surfer is staying overnight, and devise ways to hide the person from authorities. The couch surfer might circulate back and forth between several apartments in a complex to avoid detection. Even under the best of circumstances, tensions and conflicts between hosts and couch surfers can quickly escalate and explode. Subsidized apartments are small and couch surfers often have little to offer hosts. Squabbles over food and small amounts of money are inevitable.

In conditions like these, the term “poverty syndrome” can be useful for understanding the challenges faced by Richard and others. Poverty should never be viewed as a mere collection of external obstacles to success or happiness, but rather as an amalgam of economic, social, and psychological components that (considered together) create a powerful drag on individuals and communities. In particular, post-traumatic stress disorder (PTSD) plays a major role in the psychology of poverty. Poor people afflicted with severe anxiety disorders like PTSD are handicapped in their ability to undertake personal challenges. Internalized fear of rejection, failure, and humiliation becomes debilitating. All interactions with authority figures can evoke a reflexive sense of terror.

Poverty syndrome also encompasses a range of more “tangible” factors. Most of my clients had the following obstacles constricting their social mobility: no high school diploma, no driver’s license, and no teeth. All three of these afflictions might be seen as rites of passage because they happen in a distinct order within a predictable time frame. (We generally think of rites of passage as cultural rituals conferring new levels of status. The use of the term here is ironic: the poor are ritualistically stripped of status as they enter adulthood.) Most of the kids in my caseload dropped out of school in 10th grade after failing the state-mandated standardized tests. These tests are the products of a lucrative industry and make tens of millions for their designers and promoters. Dropping out of high school may limit job choices, but it also formalizes a lifelong sense of existential doubt. It is rare that the ritual of leaving school does not manifest in a permanent habit of fierce self-deprecation. The second rite of passage for the poor can occur at any time in adolescence or early adulthood. It may happen the first, second, or third time that a young person fails the $35 test to gain a learner’s permit, or it may happen after gaining a learner’s permit, when the person realizes there is no one in their family able or willing to take them out on the road to practice for the actual driver’s license test. Acquiring a driver’s license is a hugely important symbol of adult competence and independence. The failure to achieve it becomes yet another component of self-doubt. The third rite of passage is an absolutely avoidable consequence of inadequate dental care—most of my clients had lost all or many of their teeth by age 30. The cumulative result of these rites of passage is a terrifying sense of hopelessness and isolation. It is not unusual to see people shaking and crying as they enter the housing authority office for a mundane yearly renewal of their Section 8 subsidy.

The callousness of the American welfare system does not go unopposed, though. I have been guilty of assuming—unfairly—that my clients were too beaten down, too exhausted from the daily exertion needed to survive, to engage in activism. However, in 2017 a group of Greenfield’s homeless people organized and took over the local town commons to erect a “tent city.” The episode sparked a rift between the progressive and conservative members of the town council as they debated the traditional choice between police repression and responsive social welfare. The ultimate resolution involved little more than sending local social workers to get the homeless activists placed on SSDI and housing waitlists, but the ability of homeless people to organize was nevertheless heartening. The activism and confrontational tactics of this small cohort of homeless people in Greenfield recalled the much larger protests and confrontations of the homeless throughout Northern Europe in the early 1980s. So fierce were the ire and organization of the homeless in Amsterdam that the government unleashed thousands of riot police armed with tanks and tear gas against them in April 1980. If progressives are unable to pressure the incoming Biden administration to prioritize affordable housing and stimulus relief, the confrontations that occurred in Europe are likely to be reenacted on a vast scale in American cities.

This dire prospect of suffering and unrest has been forcefully raised by economist Richard Wolff, who has predicted that we are about to experience a “tsunami of evictions” and homelessness with the ending of the stimulus payments and eviction moratoriums that held the worst economic effects of the pandemic at bay. The image of tens of millions of people thrown into the streets summons forth apocalyptic visions of civil strife and police violence of the sort that rocked Amsterdam forty years ago. However, there are probably already millions of homeless people invisibly couch surfing as people in the bottom social strata deploy their own lifeboats. At the moment, couch surfing acts as a sponge that soaks up the spills of an uncaring nation. We really do not know what the absorption capacity of the sponge is. We are about to find out, and at some point the improvised systems of the poor will be overwhelmed by the sheer magnitude of inequity. It’s time for the government to step up.

A Place Like Home

It’s midnight on the 27th Tuesday of the pandemic, and outside the small Portuguese bar beneath my apartment, men are arguing about football again. Their laughter echoes against the dark mountains that hug the valley. I watch them from my balcony, flicking the ash from my joint into a pickle jar so it won’t fall on their heads. A joyous drunkard with a red cap and long black ponytail is flitting from patron to patron. He slaps their backs and hands out cigarettes. Even from a lofty distance you can sense the man has an earnest, almost pathetic need to be there. The past months have been hard on him. He is so happy to be home.

Home has been on my mind a lot this year. Before the springtime lockdowns started, friends asked if I planned on leaving Andorra (the landlocked microstate between France and Spain where I’ve spent the past four years), and “going home” to Minnesota, the place where my family has lived since I was a small child. The answer was an unequivocal “no” for many reasons. There were the untenable financial costs, the challenges of international travel with two cats, and the fact that returning to the United States in the midst of an uncontrolled pandemic felt like running back into a house with flames bursting out the windows. But most of all, I just didn’t consider Minnesota to be “home” anymore. 

Then, in May, protests hit the streets of Minneapolis over the police murder of George Floyd. At that point, the burning building analogy took on a rather more literal sense. Like many leftists, I cheered when protestors torched the Third Precinct, which has long been a stronghold for some of Minneapolis’ worst cops—in just a decade, it has paid out over $2 million in settlements for abusing local residents. But as the fires grew, so did my sense of guilt. The Cup Foods where the cops choked the life out of Floyd was just a few blocks from my old house, where my sister now lives. Helicopter surveillance and gunshots were a nightly occurrence for weeks. When we spoke on the phone, I heard the fear and exhaustion in my sister’s voice—and understood deep in my gut that, even if I no longer felt at home in Minnesota, it would be “home” as long as my people live there.

This got me thinking: what is home, anyway? Well, it’s where the heart is. Or where the cat is, or the coffee, or the wine. Home: there’s no place like it. Home is not just a house, but a house can be a home (though not when she goes away). Home is where we’re headed. Home is going back, sometimes, and other times home is built anew. Home is sweet Chicago, home is on the range. Home is whenever I’m with you. Home is a whole bunch of other things too, depending on who’s talking about it, but the general consensus seems to be that home is good and desirable in any case. 

We need a more tangible definition of home if this story is going to make any sense, though. There must be some binding agent that can congeal all these amorphous concepts together into a digestible mindcookie. To that end, let me suggest that “home” is in essence just a nice warm feeling of being at peace. It’s a mental state we all crave, even if we’ve never known it before. A desire for home seems (gulp) hardwired into our very nature as humans.

I am sure that someone has said this before in much more elegant and eloquent language. There is, probably, a long illustrious lineage of home-centric literature, and I would like to acknowledge both its existence and the fact that I haven’t read any of it. My only excuse is pandemic-induced lethargy, though if you said I needed to mount a better defense or risk cancellation I would cite Barbara Ehrenreich’s observation that our thoughts are “thoroughly colonized by the thoughts of others through language, culture, and mutual expectations.” If you want to say something new and interesting about home, shouldn’t you avoid importing more colonists?

But I’m getting off track. We’re supposed to be talking about home here. Home, that ephemeral sensation of safety and everything-being-all-right-ness. Home, the thing that feels further away than ever right now. The year 2020 can kiss a goat’s asshole for a great number of reasons, but the most unsettling thing about recent events is how they’ve threatened every conception of home we have in one fell, endlessly stupid swoop. Whatever we take refuge in—people, buildings, places, ideas, identities—it all feels like it could be lost in an instant. 

Homes are disappearing in such astonishing numbers, and in such grievously multitudinous ways, that horror seems the only sane response. In the United States alone, cruel men with guns and documents stand ready to remove 40 million human beings from the places where they eat, shit, cry, dream, and wash themselves. Millions of pictures will be taken off walls as millions of garbage bags are stuffed with whatever will fit. The bags will be carried or dragged—who knows where—by people who leave behind stains, dried tears, small objects that slip out from the holes torn by hastily stowed books or forks. The rows of empty homes will loom like tombstones until the market dictates otherwise. 

“Empty houses,” you might say, since it’s the people within those walls and windows that give a given structure its homelike qualities. But those people are being lost as well. As I write this in early October, over 215,000 mothers, fathers, daughters, sons, grandmas, grandpas, cousins, mentors, lovers, and friends have perished from the pandemic in the United States alone (and of course the United States is not alone, much as some might wish otherwise). 

Where do we find refuge to process all this grief and loss? Not in the once-peaceful forests, where millions of acres are ablaze from demonic wildfires that blot out the sun. Not by the seaside, where wave after wave of fierce storms batter the land until it is unrecognizable. Certainly not in the streets of our cities, where the police execute people without warning in a hail of bullets. The museums, theaters, bars, parks, and restaurants where we felt like we fit—our second homes—are either gone, or ghostly. Nowhere feels safe. Media outlets like the National Interest that pride themselves on their “realism” are running ominous screeds on the possibility of a Second Civil War.

Maybe this all sounds a bit melodramatic. Things are bad, to be sure. But life is going on, at least for most of us, to some extent. Unless you live on the West Coast, the sky is not on fire and the air is likely quite breathable. If you’re not near the Gulf of Mexico, you’re in little danger of being drowned by a tidal wave or hit by a flying street sign. There’s a good chance that you don’t even have a personal connection to anyone with COVID—according to an August poll from Axios-Ipsos, only 50 percent of Americans know someone who’s contracted the virus. That same month, the Bureau of Labor Statistics reported that 91.6 percent of the country’s “labor force” was still employed. Around 280 million Americans aren’t at immediate risk of being evicted. Sports are back (kind of), school is in session (to some extent), and it turns out the asteroid that might hit Earth on Election Day is only six and a half feet wide (it’s almost certain to miss, anyway).

So why does it still feel like we’re fucked? Why do we sense—even if we, personally, are “oh fine, considering the circumstances”—that home is about to slip beyond our reach forever? Why is the nice warm feeling so elusive right now? 


The men outside the Portuguese bar are stomping on their cigarette butts. Some head back inside, others to their cars—it’s getting late, and presumably they have work in the morning. I watch the man in the red cap give one of his departing friends a hug. I can’t hear what he says (even if I could, I wouldn’t understand much), but it’s clear that he is expressing some form of love. I’m a little surprised to notice how much sadness, jealousy, and rage this gesture arouses in me.

My reaction is both irrational and mean. I don’t immediately recognize it as such, of course. Instead, my brain starts whirring with excuses. The man is wearing a red cap—the unofficial headwear of monsters and fascists. Never mind that MAGA isn’t a thing in the mountains of Andorra, or that I know for a fact those letters aren’t emblazoned on his cap because on multiple occasions I’ve bummed a light from him and his friends, the friends he’s hugging (no masks, bad social distancing!) and chatting with right now. The friends whose physical presence he’s enjoying while I have only my cats for company. The man in the red cap waves and shouts tchau as his buddy drives off in a small white work truck. Fuck ‘em.

Without anyone to look at or listen to, I’m alone with my thoughts. For me, this has been the worst part of the pandemic. It seems to be a common problem, as a recent report from the Centers for Disease Control and Prevention (CDC) found that 41 percent of American adults are now anxious, depressed, or some foul brew of both. What is there to think about that isn’t bad? Trying to picture any kind of future seems absurd right now—the daydreams that used to provide an escape from the present moment have lost most of their appeal. I’m sure this isn’t the case for everyone: some people can still get excited about the prospect of landing their dream job, buying a new house, having a kid, whatever. I am happy for them, I think.

But my mind is on the past, and the paths that we take without realizing what we’ve done. How we make hundreds and thousands of seemingly insignificant choices that close the doors to different homes. I remember a conversation with a Korean ESL academy recruiter at a job fair many years ago. The feeling of breathing underwater for the first time. Waking up one morning to discover I’d been robbed. Watching my parents get older on a screen from thousands of miles away. Packing the things on my desk into a little cardboard box, just like in the movies, and tearing up the letter from my boss. Drinking a little glass of champagne before getting on an airplane. Reading an email that promised to change everything, forever. Ignoring a phone call from a friend. Getting the “one big break” I’d been waiting for, leaping at the opportunity, failing in quiet and unspectacular fashion.

The more I think about the decisions that have led me further and further away from home, the more disgusted I become. Because no matter how scared or alone I might be, there are many millions of others who have it much, much worse. It’s unsettling to feel, at once, so sad and yet so lucky. 

Worried about making rent in the coming months? At least you’re tossing and turning all night in a real bed, under a real roof. Your level of precarity would be a hard-to-fathom luxury for many. “I’m sick of that rock-bottom feeling, where you don’t know where your next move is going to be,” a homeless man named Dennis Barrow told the Minneapolis Star Tribune upon being driven from the park where he’d pitched his tent, after getting kicked out of the hotel that had been his temporary refuge during the pandemic. “We have nowhere left to go.” Cutting back on “non-essential” groceries to stretch your savings? A bag of beans and pasta would be a godsend for people like Sheila Ritter and her family. As she said to CNBC, “Most of our conversations are, ‘When are we getting something else to eat?’ and ‘Mom, I’m hungry.’” So lonesome for a kind human touch you cry unprovoked at odd times of day? Some types of longing are more bittersweet than others. When the Washington Post asked nursing home resident Cary Johnston how it felt to be locked in the same facility with her husband while being unable to see him because he required special care, this is how she answered: 

“He is in a golden prison, and so am I. I know he is not going to live forever. Neither am I. But I don’t want to lose him this way.”

I mention these examples not as some tedious exercise in “counting your blessings,” but as an illustration of the sheer scope of suffering that we as a people are experiencing. Our rational brains can’t begin to comprehend it. The numbers, even the pictures, have lost what little impact they once had. But we can sense that happiness and comfort are being drained from our world. We are realizing, maybe for the first time in our lives, that we could be next. Against our will, we are being forced to acknowledge the pain of others. As we watch them lose their homes it starts to become clear that we are losing something as well.

Home, in whatever form it takes, is usually understood to be an intensely personal thing. Maybe the most personal thing, in fact—a home has to belong to someone in order for it to be a home. An apartment isn’t a home unless there are people who live in it. A father’s hug doesn’t feel like home unless there’s a son to feel comforted by it. The scents of lilacs and pine trees aren’t a reminder of home unless there’s a person to smell them and remember.

All these things conjure the nice warm feeling of home. But they don’t exist in a vacuum. Like human beings ourselves, they draw their meaning from a web of marvelously intricate connections. Our sense of home depends on others being secure in their sense of home—if you feel at home when sitting in a coffee shop with a friend, that good feeling depends (to varying degrees) on your friend’s ability to make rent, the barista’s relationship with their parents, and all the people at all the tables around you being relaxed enough with the circumstances of their lives to generate the background chatter without which the place would feel empty and lifeless. Home, then, is a collective thing as well; our homes depend on the homes of others. And as those homes vanish, we experience a curious sort of homesickness. The feeling is flavored with empathy and dread, resentment and hope. We want—no, we need—things to be OK for others. We know that the people around us must have homes. But what about us?  What about the things we ourselves have lost?

This is the dark side of our collective homesickness. On an intellectual level, maybe we know that another person’s gain is not necessarily our loss. If the Portuguese man in the red cap gets to be with his friends again, it doesn’t follow that this somehow prevents me from being with my friends. I know I should be happy for his happiness. In some sense I am. Still, I can’t stop worrying that the world’s supply of miracles has run so low that there are none left for me. 


The Portuguese bar is closed the next morning when I go out to buy bread. I peek through the window as I walk past, taking stock of the knickknacks on the walls. There are brightly colored football scarves, trophies from a long-forgotten darts tournament, a notice for this year’s Christmas lottery, some advertisements for an amateur singer’s CD that look like they were made with Publisher 97. A cozy, homey vibe.

The bar is a stark contrast with my own apartment, where the bare walls are marked by solitary nails and scars of ripped plaster. All my knickknacks—wooden figurines of cats that remind me of my own, Greek Orthodox icons I don’t really believe in anymore but are so pretty I keep them around anyway, jars of seashells from a trip to a long ago beach—are stuffed into plastic bags piled in the corner of a cramped room. How things came to be this way is a tedious and embarrassing story. I only mention it because it’s a neat (if a bit on-the-nose) illustration of how jealousy now colors the way we think about home. 

Here, I’m not talking about being envious of Nancy Pelosi and her $24,000 refrigerators stuffed with ice cream that costs $13 a pint, or Chris Cuomo’s cavernous living room into which he tearfully emerged after a period of self-imposed isolation in his equally cavernous basement. Jealousy of the ultra-rich has always had a kind of “no shit” quality that makes it uninteresting to talk about (which might explain why millionaire comedian Ricky Gervais attempted to corner that particular market by ranting about celebrities who live “in a mansion with a swimming pool”). Yes, their opulence is grotesque; yes, their hoardings should be seized and redistributed; yes, they are the monstrous product of a racist imperial cisheteronormative patriarchal capitalist regime in its death spiral etcetera, etcetera. It’s true, all true. But it also kind of feels beside the point. If you’re an adult, you don’t even bother dreaming about that idea of home anymore. Not seriously, anyway.


The kind of home-jealousy I’m talking about is much smaller and more mundane. It hits when you’re on a video call with a friend and you notice they have a nice bookshelf with some lovely plants on it. Or maybe even before that, when they suggest doing the call and you have to remind them you don’t have internet at home, and data is expensive. The jealousy can hit when you see a picture of your sibling with their partner and kids—how nice would it be to have hugs whenever you need them? You can feel this kind of jealousy from the mere knowledge that someone, somewhere is picking apples or going swimming or petting a dog. This isn’t just the jealousy of the poor toward the wealthy, or the precarious toward the stable. It’s the jealousy of you toward anyone and everyone who has some form of comfort that you yourself do not.

It’s odd to think that the people you’re jealous of are, in all likelihood, jealous of you too. Your sibling would probably love a moment of solitude at this point. Your friend with the nice bookshelf might be longing for a day that goes by without violence in the streets outside their window. For literally anything you can imagine doing, having, or experiencing, there is an enormous number of people for whom your little nothing would be an extravagant treat. 

This jealousy is silly, or at least misplaced. It’s the expression not of a real grievance against a particular person, but of a general sense of loss at the hands of monstrous unseen powers. Our ideas of home are being taken from us. As they slip away, we lash out with impotent sadness at anything that reminds us of the happiness that was once ours. Such a reaction might not be logical. But it is understandable.

Being alive in this version of the world feels so unstable. Does home even exist anymore? Is there anywhere we can run (our parents’ house, an idyllic rural town, maybe New Zealand) where we can feel safe? Can we press our faces to the necks of our loved ones and be calm, knowing what lurks out there in the world? Are we ever going to feel the nice warm feeling the way we did before?

If there’s a bright side to all our newfound insecurity about home—dear god, there must be—it’s the tenderness that always accompanies pain. Maybe if we aren’t driven mad by our own longing we can finally understand, in a visceral way, the longing in the hearts of everyone who goes without peace or comfort or acceptance. We might come to see them as ourselves (they are us, we cannot be separated from each other). All homes rely on the survival of other homes. I hope yours will endure; I hope you wish the same for me.  

Abolishing the Economics Nobel Isn’t Enough

Another year, another round of battles over the Economics Nobel Prize. This year the Swedish committee went with Paul Milgrom and Robert Wilson “for improvements to auction theory and inventions of new auction formats.” This seemed a relatively safe choice: auction theory’s domain is behind-the-scenes technical fixes to markets rather than the more politically contentious areas of economics. Yet for critics, the prize was further evidence that the discipline was out of touch and dominated by both a particular type of person and a particular approach. For mainstream economists, the furor over the award only serves as further evidence that econ critics will complain about literally anything the discipline does, even when they don’t fully understand what it is they’re criticizing.

Criticism of the Nobel Prize in economics has a long history. When Milton Friedman won the prize in 1976, a protestor shouted “Friedman go home” at his ceremony and was dragged out of the auditorium by security. Today the criticisms are less theatrical but no less intense, and economists increasingly tire of them. Some of these criticisms appear reasonable, while others appear less so (at least at first glance). For example, many economists could probably sympathize with critiques of giving the 2018 prize to William Nordhaus for models which predicted that 4°C/7.2°F of global warming would only reduce GDP by a few percentage points—which is surely an underestimate and in any case completely neglects the main issues raised by climate change, such as mass death. But most economists would be less sympathetic to critiques of giving the 2019 award to Banerjee, Duflo, and Kremer for their work with Randomized Controlled Trials (RCTs). A mainstay of econ critics has long been the discipline’s obsession with abstract theory, and RCTs are believed to address this shortcoming by providing a neat scientific comparison between a control group and an experimental group. Still, the subsequent surfacing of some of the more obviously evil RCTs out there—such as randomly allocating missionary work or randomly threatening to cut off the water supply of Kenyan households—laid bare the neocolonial themes of modern RCTs even to the discipline’s staunchest defenders.

But this year the backlash against the Economics Nobel seems quite different. Auction theory is an unassuming field, detailing—with fascinating mathematics—how best to design auctions so that buyers do not cheat, the item is allocated to those who value it most, and sellers get the most revenue they can (subject to those constraints). But auction theory is not just theory. It is the origin of many practical applications, including the sale of wireless spectrum and carbon permits, and even anti-corruption efforts. These seem like laudable applications and a clear example of economic theory proving useful in the real world—yet this wasn’t enough for the critics. The criticism took a few shapes, but was probably best summed up by David Blanchflower, a former economist at the Bank of England: “The Nobel prize in economics once again goes to a couple of old white men who published esoteric mathematical squiggles years ago that have little or no bearing on the lives of ordinary people… Economics has lost its way.”

Criticism on grounds of diversity is familiar and extremely fair, especially given that the recent wave of Black Lives Matter protests has prompted the discipline to reexamine its relationship with race. The Nobel Prize in Economics has only ever been awarded to two women and three non-white economists out of 86 recipients and has once again gone to two white dudes from the United States, neglecting not just the work of women and people of color within the mainstream of the discipline but also a vast array of approaches outside it—work disproportionately done by marginalized groups. Catriona Watson of the organization Rethinking Economics called it “disappointing” that the prize had gone to “two white men from the global north working on auction theory.” Devika Dutt, a PhD student at the University of Massachusetts Amherst, called it “predictable” that the prize had been awarded to “two old U.S. white men from the same Ivy League uni” adding that “we are in a moment of reckoning as regards structural discrimination” and that this prize “looks like closing ranks around the existing power structures in econ.”

Branko Milanovic, a well-known economics professor who sits largely outside the mainstream of the discipline, expressed befuddlement at never having heard of Milgrom and Wilson. He also asked why scholars from outside the Global North were not being recognized, and in the process detailed a second common objection: that auction theory, as neat and practical as it may be, is simply not significant enough to warrant a Nobel. Milanovic cited the rise of China, the reduction of global poverty, and of course the global pandemic as areas which would seem more worthy of attention. He wasn’t the only person underwhelmed by auction theory’s impact: as Dutt noted, this year’s Nobel prize for physics pertained to black holes, the medicine prize to hepatitis, and the peace prize to global hunger. It’s fair to say these are all more obviously interesting to most people—whether from an intellectual or ethical perspective—than the economics prize.

Yet there is a solid rebuttal to these critiques. Diversity issues aside, not all important intellectual advances have to be immediately understandable or eye-catching. For instance, in 2016 the physics prize was given “for theoretical discoveries of topological phase transitions and topological phases of matter.” For most non-physicists this is just as obscure and difficult to understand as the auction theory announcement. As with auction theory, though, the average person can get the gist if you explain it clearly: that prize dealt with surfaces so flat that they can be considered two-dimensional, which has important implications for the electronics and superconductors we use every day. But neither phase transitions nor auction theory have the same emotional force as “we found out something wild about black holes” or “we discovered a new virus, paving the way for cures and treatments.” Scientific discovery is sometimes immediately understandable and practical; but sometimes it just isn’t, and that’s okay. Besides, most critics seemed unaware of the litany of applications of auction theory, which surely goes to show that the critics are mistaken about auction theory’s significance.

So Why Is the Econ Nobel So Contentious?

The usual bone picked by critics is that the Economics Nobel isn’t a “real” Nobel because it was created by the Swedish Central Bank—the Riksbank—after Alfred Nobel was long gone, and Actually Its Full Name Is The Sveriges Riksbank Prize In Economic Sciences In Memory Of Alfred Nobel Thank You Very Much. Although I find it amusing that economists invented their own Nobel to try and seem more scientific, for me this point doesn’t really hit home. Ultimately all prizes are somehow “made up.” Plus, Alfred Nobel was an arms dealer, so I don’t see why his made-up prizes should be so highly valued among progressive critics of the discipline. The critic might reply, fairly, that whatever the reason, the name “Nobel” does carry some weight and it is this authority they are concerned about. Philip Mirowski, a historian of economic thought, has written that economics increasingly deals with creating reality according to its own ideas and values—as opposed to the natural sciences, which seem to deal with discovering reality and applying what they’ve learned. Auctions and RCTs both create opportunities for economists to apply their ideas to the allocation of resources, which prompts the question: where does that decision-making authority come from? Mirowski, who is firmly on the left, would here find an unlikely ally in the libertarian economist Friedrich Hayek, who won the prize in 1974 for his work on the inherently diffused nature of economic knowledge. Shortly afterward, Hayek remarked, “if I had been consulted whether to establish a Nobel Prize in economics, I should have decidedly advised against it… the Nobel Prize confers on an individual an authority which in economics no man ought to possess.”

Under this argument, the problem with the economics Nobel is not so much its origin, but its mere existence. The fact that political power is accorded to those who have won the prize—in addition to the political power they must already possess to have put their ideas into practice in the first place—is a matter of democratic contention. Economists have a habit of convincing themselves that their proposals are scientific and sidestep questions of democracy, when actually those proposals just prioritize economists’ own values and approaches over those of others. Whether we’re talking about RCTs, auctions, or monetary policy, much of the application of economic ideas has taken place behind closed doors and in a language inaccessible to most of us. In every case this secrecy has had clear consequences for which policies have been implemented—and subsequently which groups have benefited and which have lost out.

For example, the aforementioned auctions of wireless spectrums are one area many economists claim as a victory. These auctions were first used to sell companies like Sprint and Verizon the rights to use government-owned 3G spectrums, and they undoubtedly raised millions of dollars’ worth of revenue. More progressive-minded economists even view such auctions as a way of taxing the rich without them really noticing, making auctions more politically viable than calls to raise the top tax rate. It sounds good, yet as with so many applications, the neat picture painted by the theory hides a messier reality. Government revenue is only one among many goals when designing a mobile phone network, and in the first spectrum auctions in the United States, the focus on maximizing revenue meant other outcomes such as rural coverage were neglected.

To give another example, one key dilemma of auction theory is the “Winner’s Curse,” in which winning companies overpay, violating the theoretical condition that buyers pay what they value. Auction theorists have come up with some ideas to combat this, including reducing or “shading” your bid when the value of the object isn’t clear. Despite this, in my own country, the United Kingdom, the winner’s curse was not avoided: Vodafone ran into serious trouble, and British Telecom (BT) had to sell off assets to survive after “winning” their auctions. As someone who has dealt with BT directly, I have limited sympathy for them. But the question is whether the auction delivered on its stated aims, which amounts to asking whether the companies were acting rationally by valuing spectrum rights so highly they almost went bankrupt. One summary of the spectrum auctions has a list of failures or partial failures which includes Britain and Germany in 2000, Ghana in 2015 and 2018, India in 2016, and Bangladesh in 2018. I’m left wondering how many policy mistakes a set of ideas can result in and still win the Nobel Prize.
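To make the winner’s curse concrete, here is a minimal Monte Carlo sketch in Python. Every number in it—the item’s value, the noise in the bidders’ estimates, the flat “shading” margin—is an illustrative assumption of mine, not anything drawn from the auction literature or from the laureates’ work. Five bidders each observe a noisy estimate of an item’s common value, and the highest bid wins at its own price. If everyone bids their raw estimate, the winner is simply whoever overestimated the most, and loses money on average; shading the bids restores a profit.

    import random

    def average_winner_profit(n_bidders=5, true_value=100.0, noise=10.0,
                              shade=0.0, rounds=20_000):
        # Each bidder sees the common value plus private estimation error,
        # then bids that estimate minus a fixed "shade" margin.
        # (All parameters here are illustrative assumptions.)
        total = 0.0
        for _ in range(rounds):
            estimates = [true_value + random.gauss(0, noise)
                         for _ in range(n_bidders)]
            # First-price rules: the highest bid wins, and the winner
            # pays their own bid.
            winning_bid = max(e - shade for e in estimates)
            total += true_value - winning_bid
        return total / rounds

    print(f"naive bidding:  {average_winner_profit(shade=0.0):+.2f}")   # a loss
    print(f"shaded bidding: {average_winner_profit(shade=15.0):+.2f}")  # a profit

The sketch also hints at why the problem is hard in practice: the right amount of shading depends on things—the number of rivals, the spread of their errors—that no bidder in a real spectrum auction knows with any precision.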

Auction theorists would of course reply that they are working on, or have even worked out, many of these issues. But this just begs the question (and yes, internet pedants: that is the proper usage of the phrase) of why economists are the ones who get to work out all these problems without including citizens or citizens’ representatives. If the application of economic models to policy throws up the same challenges that democracy was intended to solve—that is, balancing a whole set of competing considerations, and making a lot of mistakes while doing so—why move away from democracy in the first place? And if many economists do not know much of the work and how it’s been used, what hope does your average citizen have?

The classical answer to all this is that economic expertise (as with medical expertise) allows us to pursue some “good” which would be unattainable in its absence. But this does not answer the question of who gets to choose which “good” is being pursued. Nor does it address whether economists truly have this exclusive claim to knowledge, or who will hold them accountable if they mess up. Medicine has its issues, of course, but there is a strong code of ethics, and doctors are not typically put in charge of healthcare policy. By contrast, economists have no professional standards or ethical code, and they’re recruited to help shape almost every area of public policy, as well as the private sector. The lack of critical introspection by the discipline does not help here. While anthropologists, for example, tie themselves in knots over whether their entire discipline is just a vestige of colonialism, economists have a hard time conceiving of themselves as doing anything other than good.

Secrecy, Legitimacy, and the Modern Role of Economists

The economist Suresh Naidu once mused that the increase in firms hiring economists to work for them may be a strategy for preventing those same economists from working to regulate those same firms. The reality, at least in the case of the spectrum auctions, seems to be that economists have flitted between both. In an exchange on the website promarket.org, Glen Weyl and Stefano Feltri both criticized the 2017 spectrum auctions, with Weyl stating that the entire process was “shrouded in secrecy” and amounted to “mass privatisation of public resources.” Feltri charged that the auctions had been a boon for private equity firms, including ones started by the economists themselves. Milgrom, who was involved directly in these auctions, dismissed the allegations as “conspiratorial.” But even David Henderson, in a generally favorable summary of this year’s winners in the Wall Street Journal, admitted that there was a potential “conflict of interest” in economists both designing policy for governments and advising firms on how best to approach that policy. At the very least there are questions to be asked about democratic accountability and the proper role and conduct of economists.

Management consultants are famous for a job that is less about substance and more about providing external legitimacy and authority to a strategy that was already decided on (and by “strategy” I of course mean “firing people”). Economists clearly have more direct input into shaping policies than this, but the overall dynamic bears some resemblance. Robert Lent, in a TED Talk ostensibly defending economists, recounted that when Google first attempted to auction off advertising slots, two of its engineers came up with the famous “second-price auction,” which had partially won William Vickrey the prize in 1996. This awards the item to the highest bidder, but at the second-highest bid (i.e., if you bid $200 and I bid $300, I win the item but pay only $200). Because your bid determines only whether you win, not what you pay, everyone can simply bid what the item is actually worth to them, which dispenses with many unnecessary complications. Eric Schmidt, the CEO of Google at the time, was skeptical until he ran the idea past the well-known economist Hal Varian, who confirmed that the second-price auction could be mathematically proved to be optimal, which convinced Schmidt. The role of economists here seems to be more in refining, legitimizing, and even selling ideas than discovering anything, since two people who were presumably untrained in auction theory were able to come up with the idea themselves.
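For readers curious how simple the celebrated mechanism actually is, here is a toy sealed-bid version in Python (the bidder names and dollar amounts are made up for illustration):

    def second_price_auction(bids):
        # The highest bidder wins, but pays the second-highest bid
        # rather than their own.
        if len(bids) < 2:
            raise ValueError("need at least two bids")
        ranked = sorted(bids.items(), key=lambda kv: kv[1], reverse=True)
        winner = ranked[0][0]
        price = ranked[1][1]
        return winner, price

    # With bids of $300 and $200, the $300 bidder wins but pays only $200.
    print(second_price_auction({"me": 300, "you": 200}))  # ('me', 200)

That the rule fits in a dozen lines rather supports the suspicion that Google’s engineers could arrive at it unaided, and that Varian’s contribution was the proof and the imprimatur rather than the mechanism itself.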

I could detail other examples, including the financial markets constructed by the winners of the 1997 prize, which ended in catastrophic failure. But most examples are less dramatic. Although I obviously place value on my own subfield of behavioral economics, which won the Nobel in 2002 and again in 2017, it’s hard to escape the idea that it has exhibited a similar dynamic: mathematizing and legitimating a set of ideas easily discoverable by people working outside academia, or even those within academia but from other fields. Relative to economists’ existing models of rationality, behavioral economics looks good because it deals with “how people actually behave.” But as recognized by Daniel Kahneman, the recipient of the 2002 Nobel “for having integrated insights from psychological research into economic science,” a solid foundation in behavioral economics won’t give you any more insight into the quirks of the human mind than a mechanic has. Another psychologist playfully remarked that he was in awe of how knowledge from psychology suddenly became cool when it was labeled “behavioral economics.” Following on from the 2008 book Nudge by Cass Sunstein, technocrat-in-chief of the Obama administration [Editor’s note: and sworn enemy of multiple Current Affairs staff members], and Richard Thaler, who went on to win the 2017 Nobel, the application of behavioral economics has spread and behavioral economists are now actively shaping policy across the world.

My own contention is that parts of the discipline of economics have become a reservoir for the public and private sectors to hire well-educated and intelligent people who, for whatever historical reasons, have a degree of prestige and credibility attached to their role. This isn’t to say that economic expertise is useless. It’s just that much of what these people do is a result of someone clever thinking hard about how to solve a problem—while promoting a particular set of principles, interests and values—rather than an achievement of economic theory itself. Conveniently, the prestige and authority of ideas aren’t things most economists would consider relevant for firms; they’d tend to fall back on the refrain that firms will only do things which work, so the fact that auction theorists are hired by firms only goes to show the theory’s efficacy. In fact, the spectrum auctions illustrate that the application of economic theory on paper bears little resemblance to economic theory put into practice.

High Modernism and Economists’ (Un)scientific Methods

Like all models, those used in auction theory are necessarily simplified. There was no obvious theory which could be definitively applied to the U.S.’s spectrum auctions in 2000, which, as mentioned above, sold off parts of the wireless spectrum to telecoms. The result was that many contradictory approaches were proposed by different economists. These economists were often employed directly by the bidding companies, and (surprise, surprise) the rules proposed by each economist tended to align with the interests of the company for which they were lobbying. (Directors of larger companies actually reported that they felt they had gotten a bargain, contra the common contention that auctions would extract the maximum possible amount of value.) The resulting choice of auction was an amalgam of different ideas, and it was widely admitted that auction theory’s role had been more “intuitive” than formal. Furthermore, the final design had a number of ad hoc rules—which had no home in auction theory—added to it in practice. So the design of the auction was not obviously transposed straight from auction theory, which proved impractical for such a complex real-world decision.

One consequence of this is that auction theory has rarely been tested in the scientific sense of the word. This would require a precise mapping from theory to reality, which is all but impossible. Existing tests are often done in the lab (and the Nobel Committee admitted that even results from these were mixed); sometimes they consist of little more than stylized predictions one might make after thinking about a problem for a few minutes. (A particularly egregious genre is when a prediction “provides a theoretical foundation” for what is already “common practice” among auctioneers.) And although the large amount of revenue raised by the spectrum auctions is trumpeted as an empirical demonstration of auction theory’s validity, it’s hard to know whether this was just the result of the auctions involving a much sought-after prize—it was unthinkable for telecoms companies not to have access to the wireless spectrums—rather than a demonstration of the theory’s prowess. These kinds of statements could not be made about the physics prize, where the discovery was dependent on observation at an obscene level of precision.

To be sure, auction theory most certainly contains a number of advances in the realm of pure mathematics which are worthy of intellectual admiration. It has also resulted in applications which most people would agree were successful, such as its role in allocating food to food banks. But the question is not whether ideas in auction theory, or any other set of economic theories, have some utility. Of course they do. As James Scott detailed in his masterpiece Seeing Like A State, big ideas—or what he called High Modernism—which reshape the world in their image often succeed at achieving the goals they prioritize while sidelining other, typically harder-to-measure goals. With their conceit that proper market design will sidestep political complexity, these economists hold a worldview little different from that of the 20th century planner Le Corbusier, whose vision of an urban utopia proposed to demolish much of downtown Paris and replace it with a handful of mega-skyscrapers. Auction theory has just provided a set of principles to design policy, principles which could have been different had electrical engineers or some other group of non-economists been in the driver’s seat.

I’ve always resisted the conclusion that economists are simply free market-obsessed shills for the rich. It seems overly simplistic and it’s at odds with my own experience of the profession’s views. But the discipline certainly retains a preference for markets over alternatives such as state or community ownership, and is happy to assist firms in the construction of these markets under the conceit that this will aid the general welfare. In Lent’s talk, he recounts time and time again how economists’ recommendations have aided huge tech firms like Google, Amazon, and Microsoft in their rise to dominance. According to Mirowski, following the U.S.’s spectrum auctions, “the industry has gone through a spate of mergers, acquisitions, and bankruptcies, ultimately leading to a high degree of license concentration.” The economists who facilitated this have been handsomely rewarded: 2012 Nobel laureate Alvin Roth (of whose work I am generally a fan) said this had all become “a new way for game theorists to earn their livings, as consulting engineers for the market economy.” Some conclusions here are inescapable.

Politics and economics are both shaped in large part by who has the fanciest credentials, and the Nobel Prize in Economics acts as the ultimate “credibility booster” for various policies. But the people who award it rarely consider the perspective of the people impacted by the policies. Although such a tool could surely be wielded for good, the arbitrary direction of political focus by a small group of individuals, often projected onto another small group of individuals, is not a net benefit to the polity. The initial reaction by many critics now has additional force, since if the award of the Prize has some role in directing political priorities and can give cover to powerful interests, the fact that it doesn’t engage with big issues like climate change, poverty, and the global pandemic is a gross sin of omission. This point is, I think, more interesting than pedantry about the origins of the prize, though its establishment could possibly be seen as a power grab by elements within the profession. To call for its abolition is an obvious conclusion; to call for wider changes in the discipline is the full one.

A Priesthood Rather Than A Science

In a 2012 paper with the mocking title How is This Paper Philosophy?, Kristie Dotson argued that philosophy has strong “cultures of justification”: policing of what really counts as philosophy and who really counts as a philosopher. There are intellectual components to this, such as a devaluing of context and experience in favor of general axioms and deductive reasoning. But these are closely twinned with issues of social power, best summed up by a guidance counselor in a historically black U.S. college: “Philosophy is not for black women. That is a white man’s game.” I’m not sure what the statistics are for philosophy, but I do know there are exactly zero black female professors of economics in the United Kingdom. Everything Dotson says about philosophy seems to apply at least as much to economics.

Policing of what counts as economics is so common that it will be recognizable to anyone who’s taken a class in the field. Economics degrees are often sold as ways of learning to “think like an economist” rather than a substantive education in basic empirical features of economies and the development of critical argumentation surrounding said features. Typically, there’s little attempt to wrestle with the methodological, political, and ethical issues raised by the subject matter. Economists have a core approach which emphasizes (or fetishizes) mathematics, in particular optimal control theory, set theory, and regression techniques—as they often put it, they “do it with models,” the type of quip which makes one wish humor were also included in economics curricula.

At the risk of giving too much credence to Twitter, it’s worth taking a look at the reaction of the mainstream of the profession to critics like Milanovic, Dutt, and myself. Like Milanovic, I didn’t recognize the names of the prize-winners and took this as an indication they were relatively small names even within auction theory. This turned out to be an incorrect assumption on my part, as they are behind many key ideas and applications that anyone who has taken a class on auction theory will come across. Still, at the time I added that auction theory as a whole was of little help for any of our current global crises, a point I stand by. The subsequent reaction to my tweet—including questioning of my credentials, as well as put-downs from the editor of a well-known journal—only served to confirm that the discipline harbors a huge amount of intellectual insecurity. In the value it places on awards and status, the economics profession is more akin to a priesthood than a science.

Milanovic also commented that he had received some “nasty” responses, but the reactions to our criticisms were far less severe than the reactions to Dutt’s, which included hate mail and implied death threats, as well as threats that she would never be able to find a job once she finished her PhD (if you want a bit of positivity, she has also received a lot of support and potential opportunities off the back of this debacle). Most tellingly, the responses to her included accusations of sexism and racism at her simply mentioning the race and sex of the recipients. Such a response fails to understand the basic nature of sexism and racism as systematic discrimination, something white men most certainly do not experience in the economics profession. It is telling that the leaders and the members of the discipline saw much more need to police the criticisms of two already-rich guys at the top of the profession who have just won $1,000,000 between them (criticisms they won’t even see), than this obviously gendered and racialized harassment, another set of issues that economists have historically struggled to address. It is gatekeeping, and it sends a very clear message to junior researchers that if we continue, we may find ourselves on the wrong side of the gate. Or maybe we already are.

It is interesting to reflect on how these “cultures of justification” interact with the authority of the economics discipline. Philosophy does not have the same direct influence over our lives as economics, so it’s clear that political power is not a necessary condition for a strong culture of justification. But a strong culture of justification may be necessary for political authority: as all parents know, presenting a unified front often takes priority even if there are disagreements. It also gives economics the appearance of science, since most scientists are too busy actually experimenting to worry about metaphysics—an attitude economists have managed to emulate. This might not be a bad thing. Ultimately if there are no key tenets shared by economists, they will be continually bickering about the basics and unable to propose policies at all. Maybe I’m alone in seeing this as a positive vision of what the discipline could be (provided the bickering is good-natured), but I hope others will at least agree that the current narrowness of the profession has negative consequences both inside and outside the academy. 

How could we introduce a “conscience” to the economics discipline? One idea is a strict code of ethics, which could help prevent the type of conflicts of interest found in the spectrum auctions. Initiatives such as Rethinking Economics, Diversify and Decolonise, and the charity Economy all go further, aiming to broaden the scope of the discipline and force it to engage better with the public affected by its policies. Rethinking Economics aims at pluralism—the inclusion of ideas currently outside economists’ narrow range of methods—while Diversify and Decolonise makes the case for inclusion of people and ideas other than white men from the Global North. Economy calls for democratizing the discipline so that economists are required to communicate and be accountable to the public, an idea starting to be put into practice by Andy Haldane at the Bank of England. Economist Sheila Dow sees the projects of ethics and broadening the discipline as inextricably linked because redefining who can talk about economics, and even who counts as an “economist,” would automatically reduce the authority held by those inside the gates. 

I’ll conclude with a few thoughts on prizes and authority. The influence of the economics profession does not stem entirely from the Nobel; economists had already played key roles in designing the international financial system long before the prize was established in 1969. One of the most influential economists of all time, John Maynard Keynes, had died over 20 years before that. Yet the Nobel has become the crown on the head of the so-called “king of social sciences” and serves to grant immense credibility to a certain class of ideas. But challenges to the economics-king are not met with the aplomb or curiosity of a regent who is confident in his decisions. Instead, the mildest critiques of establishment thinking are subject to routine denouncements, hostility, and even outright aggression. Economists can console themselves with the idea that critics don’t possess their knowledge or their (second-rate) mathematical skills, but the truth is that many people understand the feeling that important decisions are being made without them, not to mention the feeling that they are being ripped off. Public trust in the economics profession remains low. They have only themselves to blame.