I Don’t Care How Good His Paintings Are, He Still Belongs In Prison

George W. Bush committed an international crime that killed hundreds of thousands of people.

Critics from the New Yorker and the New York Times agree: George W. Bush may have been an inept head of state, but he is a more than capable artist. In his review of Bush’s new book Portraits of Courage: A Commander in Chief’s Tribute to America’s Warriors (Crown, $35.00), New Yorker art critic Peter Schjeldahl says Bush’s paintings are of “astonishingly high” quality, and his “honestly observed” portraits of wounded veterans are “surprisingly likable.” Jonathan Alter, in a review titled “Bush Nostalgia Is Overrated, but His Book of Paintings Is Not,” agrees: Bush is “an evocative and surprisingly adept artist.” Alter says that while he used to think the Iraq War was “the right war with the wrong commander in chief,” he now thinks that it was the “wrong war” but with “the right commander in chief, at least for the noble if narrow purpose of creatively honoring veterans through art.”

Alter and Schjeldahl have roughly the same take on Bush: he is a decent person who made some dreadful mistakes. Schjeldahl says that while Bush “made, or haplessly fronted for, some execrable decisions…hating him took conscious effort.” Alter says that while the Iraq War was a “colossal error” and Bush “has little to show for his dream of democratizing the Middle East,” there is a certain appeal to Bush’s “charming family, warm relationship with the Obamas, and welcome defense of the press,” and his paintings of veterans constitute a “message of love” and a “step toward bridging the civilian-military divide.” Alter and Schjeldahl both see the new book as a form of atonement. Schjeldahl says that with his “never-doubted sincerity and humility,” Bush “obliviously made murderous errors [and] now obliviously atones for them.” Alter says that Bush is “doing penance,” and that the book testifies to “our genuine, bipartisan determination to do it better this time—to support healing in all of its forms.”

This view of Bush as a “likable and sincere man who blundered catastrophically” seems to be increasingly popular among some American liberals. They are horrified by Donald Trump, and Bush is beginning to seem vastly preferable by comparison. If we must have Republicans, let them be Bushes, since Bush at least seems good at heart while Trump is a sexual predator. Jonathan Alter insists he is not becoming nostalgic, but his gauzy tributes to Bush’s “love” and “warmth” fully endorse the idea of Bush’s essential goodness. Now that Bush spends his time painting puppies and soldiers, having mishaps with ponchos and joking about it on Ellen, more and more people may be tempted to wonder how anyone could ever have hated the guy.

Nostalgia takes root easily, because history is easy to forget. But in Bush’s case, the history is easily accessible and extremely well-documented. George W. Bush did not make a simple miscalculation or error. He deliberately perpetrated a war crime, intentionally misleading the public in order to do so, and showed callous indifference to the suffering that would obviously result. His government oversaw a regime of brutal torture and indefinite detention, violating every conceivable standard for the humane treatment of prisoners. And far from trying to “atone,” Bush has consistently misrepresented history, reacting angrily and defensively to those who confront him with the truth. In a just world, he would be painting from a prison cell. And in imputing to Bush a repentance and sensitivity that he does not actually possess, Alter and Schjeldahl fabricate history and erase the sufferings of Bush’s victims.

First, it’s important to be clear what Bush actually did. There is a key number missing from both Alter and Schjeldahl’s reviews: 500,000, the number of Iraqi civilians who perished as a result of the U.S. war there. (That’s a conservative estimate, and stops in 2011.) Nearly 200,000 are confirmed to have died violently, blown to pieces by coalition air strikes or suicide bombers, shot by soldiers or insurgents. Others died as a result of the disappearance of medical care, with doctors fleeing the country by the score as their colleagues were killed or abducted. Child and infant mortality shot up, as did malnutrition and starvation, and toxins introduced by American bombardment led to “congenital malformations, sterility, and infertility.” There was mass displacement, by the millions. An entire “generation of orphans” was created, with hundreds of thousands of children losing parents and wandering the streets homeless. The country’s core infrastructure collapsed, and centuries-old cultural institutions were destroyed, with libraries and museums looted, and the university system “decimated” as professors were assassinated. For years and years, suicide bombings became a regular feature of life in Baghdad, and for every violent death, scores more people were left injured or traumatized for life. (Yet in the entire country, there were fewer than 200 social workers and psychiatrists combined to tend to people’s psychological issues.) Parts of the country became a hell on earth; in 2007 the Red Cross said that there were “mothers appealing for someone to pick up the bodies on the street so their children will be spared the horror of looking at them on their way to school.” The amount of death, misery, suffering, and trauma is almost inconceivable.

These were the human consequences of the Iraq War for the country’s population. They generally go unmentioned in the sympathetic reviews of George W. Bush’s artwork. Perhaps that’s because, if we dwell on them, it becomes somewhat harder to appreciate Bush’s impressive use of line, color, and shape. If you begin to think about Iraq as a physical place full of actual people, many of whom have watched their children die in front of them, Bush’s art begins to seem ghoulish and perverse rather than sensitive and accomplished. There is a reason Schjeldahl and Alter do not spend even a moment discussing the war’s consequences for Iraqis. Doing so requires taking stock of an unimaginable series of horrors, one that makes Bush’s colorful brushwork and daytime-TV bantering seem more sickening than endearing.

But perhaps, we might say, it is unfair to linger on the subject of the war’s human toll. All war, after all, is hell. We must base our judgment of Bush’s character not on the ultimate consequences of his decisions, but on the nature of the decisions themselves. After all, Schjeldahl and Alter do not deny that the Iraq War was calamitous, with Alter calling it one of “the greatest disasters in American history,” a “historic folly” with “horrific consequences,” and Schjeldahl using that curious phrase “murderous error.” It’s true that both obscure reality by using vague descriptors like “disaster” rather than acknowledging what the invasion meant for the people on whom it was inflicted. But their point is that Bush meant well, even though he may have accidentally ended up causing the birth of ISIS and plunging the people of Iraq into an unending nightmare.


Viewing Bush as inept rather than malicious means rejecting the view that he “lied us into war.” If we accept Jonathan Alter’s perspective, it was not that Bush told the American people that Iraq had weapons of mass destruction when he knew that it did not. Rather, Bush misjudged the situation, relying too hastily and carelessly on poor intelligence, and planning the war incompetently. The war was a “folly,” a bad idea poorly executed, but not an intentional act of deceit or criminality.

This view is persuasive because it’s partially correct. Bush did not “lie that there were weapons of mass destruction,” and it’s unfortunate that anti-war activists have often suggested that this was the case. Bush claims, quite plausibly, that he believed Iraq possessed WMDs, and there is no evidence to suggest that he didn’t believe this. That supports the “mistake” view, because a lie is an intentional false statement, and Bush may have believed he was making a true statement, in which case he was mistaken rather than lying.

But the debate over whether Bush lied about WMDs misstates what the actual lie was. It was not when Bush said “the Iraq regime continues to possess and conceal some of the most lethal weapons ever devised” that he lied to the American people. Rather, it was when he said Iraq posed a “threat” and that by invading it the United States was “assuring its own national security.” Bush could not have reasonably believed that the creaking, isolated Saddam regime posed the kind of threat to the United States that he said it did. WMDs or not, there was nothing credible to suggest this. He therefore lied to the American people, insisting that they were under a threat that they were not actually under. He did so in order to create a pretext for a war he had long been intent on waging.

This is not to say that Bush’s insistence that Saddam Hussein had WMDs was sincere. It may or may not have been. The point is not that Bush knew there weren’t WMDs in Iraq, but that he didn’t care whether there were or not. This is the difference between a lie and bullshit: a lie is saying something you know to be untrue, bullshit is saying something without caring to find out if it’s true. The former highest-ranking CIA officer in Europe told 60 Minutes that the Bush White House intentionally ignored evidence contradicting the idea that Saddam had WMDs. According to the officer, when intelligence was provided that contradicted the WMD story, the White House told the officer that “this isn’t about intel anymore. This is about regime change,” from which he concluded that “the war in Iraq was coming and they were looking for intelligence to fit into the policy.” It’s not, then, that Bush knew there were no WMDs. It’s that he kept himself from finding out whether there were WMDs, because he was determined to go to war.

The idea that Saddam posed a threat to the United States was laughable from the start. The WMDs that he supposedly possessed were not nuclear weapons, but chemical and biological ones. WMD is a catch-all category, but the distinction is important; mustard gas is horrific, but it is not a “suitcase nuke.” Bashar al-Assad, for example, possesses chemical weapons, but does not pose a threat to the U.S. mainland. (To Syrians, yes. To New Yorkers, no.) In fact, according to former Saddam aide Tariq Aziz, “Saddam did not consider the United States a natural adversary, as he did Iran and Israel, and he hoped that Iraq might again enjoy improved relations with the United States.” Furthermore, by the time of the U.S. invasion, Saddam “had turned over the day-to-day running of the Iraqi government to his aides and was spending most of his time writing a novel.” There was no credible reason to believe, even if Saddam possessed certain categories of weapons prohibited by international treaty, that he was an active threat to the people of the United States. Bush’s pre-war speeches used terrifying rhetoric to leap from the premise that Saddam was a monstrous dictator to the conclusion that Americans needed to be scared. That was simple deceit.

In fact, Bush had long been committed to removing Saddam, and was searching for a plausible justification. Just “hours after the 9/11 attacks,” Donald Rumsfeld and the Vice Chairman of the Joint Chiefs of Staff were pondering whether they could “hit Saddam at the same time” as Osama bin Laden as part of a strategy to “move swiftly, go massive.” In November of 2001, Rumsfeld and Tommy Franks began plotting the “decapitation” of the Iraqi government, pondering various pretexts for “how [to] start” the war. Possibilities included “US discovers Saddam connection to Sept. 11 attack or to anthrax attacks?” and “Dispute over WMD inspections?” Worried that they wouldn’t find any hard evidence against Saddam, Bush even thought of painting a reconnaissance aircraft in U.N. colors and flying it over Iraqi airspace, goading Saddam into shooting it down and thereby justifying a war. Bush “made it clear” to Tony Blair that “the U.S. intended to invade… even if UN inspectors found no evidence of a banned Iraqi weapons program.”

Thus Bush’s lie was not that there were weapons of mass destruction. The lie was that the war was about weapons of mass destruction. The war was about removing Saddam Hussein from power, and asserting American dominance in the Middle East and the world. Yes, that was partially to do with oil (“People say we’re not fighting for oil. Of course we are… We’re not there for figs.” said former Defense Secretary Chuck Hagel, while Bush CENTCOM commander John Abizaid admitted “Of course it’s about oil, we can’t really deny that”). But the key point is that Bush detested Saddam and was determined to show he could get rid of him; according to those who attended National Security Council meetings, the administration wanted to “make an example of Hussein” to teach a lesson to those who would “flout the authority of the United States.” “Regime change” was the goal from the start, with “weapons of mass destruction” and “bringing democracy” just convenient pieces of rhetoric.

Nor was the war about the well-being of the people of Iraq. Jonathan Alter says that Bush had a “dream of democratizing the Middle East” but simply botched it; Bush’s story is almost that of a romantic utopian and tragic hero, undone by his hubris in just wanting to share democracy too much. In reality, the Bush White House showed zero interest in the welfare of Iraqis. Bush had been warned that invading the country would lead to a bloodbath; he ignored the warning, because he didn’t care. The typical line is that the occupation was “mishandled,” but this implies that Bush tried to handle it well. In fact, as Patrick Cockburn’s The Occupation and Rajiv Chandrasekaran’s Imperial Life in The Emerald City show, American officials were proudly ignorant of the Iraqi people’s needs and desires. Decisions were made in accordance with U.S. domestic political considerations rather than concern for the safety and prosperity of Iraq. Bush appointed totally inexperienced Republican Party ideologues to oversee the rebuilding effort, rather than actual experts, because the administration was more committed to maintaining neoconservative orthodoxies than actually trying to figure out how to keep the country from self-destructing. When Bush gave Paul Bremer his criteria for who should be the next Iraqi leader, he was emphatic that he wanted someone who would “stand up and thank the American people for their sacrifice in liberating Iraq.”

As the situation in Iraq deteriorated into exactly the kind of sectarian violence the White House had been warned about, the Bush administration tried to hide the scale of the disaster. Patrick Cockburn reported that while Bush told Congress that fourteen out of eighteen Iraqi provinces “are completely safe,” this was “entirely untrue” and anyone who had gone to these provinces to try and prove it would have immediately been kidnapped or killed. In tallies of body counts, “U.S. officials excluded scores of people killed in car bombings and mortar attacks from tabulations measuring the results of a drive to reduce violence in Baghdad.” Furthermore, according to the Guardian “U.S. authorities failed to investigate hundreds of reports of abuse, torture, rape and even murder by Iraqi police and soldiers” because they had “a formal policy of ignoring such allegations.” And the Bush administration silently presided over atrocities committed by both U.S. troops (who killed almost 700 civilians for coming too close to checkpoints, including pregnant women and the mentally ill) and hired contractors (in 2005 an American military unit observed as Blackwater mercenaries “shot up a civilian vehicle,” killing a father and wounding his wife and daughter).

Then, of course, there was torture and indefinite detention, both of which were authorized at the highest levels. Bush’s CIA disappeared countless people to “black sites” to be tortured, and while the Bush administration duplicitously portrayed the horrific abuses at Abu Ghraib as isolated incidents, the administration was actually deliberately crafting its interrogation practices around torture and attempting to find legal loopholes to justify it. Philippe Sands reported that the White House tried to pin responsibility for torture on “interrogators on the ground,” a “false” explanation that ignored the “actions taken at the very highest levels of the administration” approving 18 new “enhanced interrogation” techniques, “all of which went against long-standing U.S. military practice as presented in the Army Field Manual.” Notes from 20-hour interrogations reveal the unimaginable psychological distress undergone by detainees:

Detainee began to cry. Visibly shaken. Very emotional. Detainee cried. Disturbed. Detainee began to cry. Detainee bit the IV tube completely in two. Started moaning. Uncomfortable. Moaning. Began crying hard spontaneously. Crying and praying. Very agitated. Yelled. Agitated and violent. Detainee spat. Detainee proclaimed his innocence. Whining. Dizzy. Forgetting things. Angry. Upset. Yelled for Allah. Urinated on himself. Began to cry. Asked God for forgiveness. Cried. Cried. Became violent. Began to cry. Broke down and cried. Began to pray and openly cried. Cried out to Allah several times. Trembled uncontrollably.

Indeed, the U.S. Senate Select Intelligence Committee’s report on CIA interrogation tactics concluded that they were “brutal and far worse than the CIA represented to policymakers.” They included “slamming detainees into walls,” “telling detainees they would never leave alive,” “Threats to harm the children of a detainee, threats to sexually abuse the mother of a detainee, threats to cut a detainee’s mother’s throat,” waterboardings that sometimes “evolved into a series of near drownings,” and the terrifyingly clench-inducing “involuntary rectal feedings.” Sometimes they would deprive detainees of all heat (which “likely contributed to the death of a detainee”) or perform what was known as a “rough takedown,” a procedure by which “five CIA officers would scream at a detainee, drag him outside of his cell, cut his clothes off, and secure him with Mylar tape. The detainee would then be hooded and dragged up and down a long corridor while being slapped and punched.” All of that is separate from the outrage of indefinite detention in itself, which kept people in cages for years upon years without ever being able to contest the charges against them. At Guantanamo Bay, detainees became “so depressed, so despondent, that they no longer had an appetite and stopped eating to the point where they had to be force-fed with a tube that is inserted through their nose.” Their mental and emotional conditions would deteriorate until they were reduced to a childlike babbling, and they frequently attempted self-harm and suicide. The Bush administration even arrested the Muslim chaplain at Guantanamo Bay, U.S. Army Captain James Yee, throwing him in leg irons, threatening him with death, and keeping him in solitary confinement for 76 days after he criticized military practices.


Thus President Bush was not a good-hearted dreamer. He was a rabid ideologue who would spew any amount of lies or B.S. in order to achieve his favored goal of deposing Saddam Hussein, and who oversaw serious human rights violations without displaying an ounce of compunction or ambivalence. There was no “mistake.” Bush didn’t “oops-a-daisy” his way into Iraq. He had a goal, and he fulfilled it, without consideration for those who would suffer as a result.

It should be mentioned that most of this was not just immoral. It was illegal. The Bush Doctrine explicitly claimed the right to launch a preemptive war against a party that had not actually attacked the United States, a violation of the core Nuremberg principle that “to initiate a war of aggression…is not only an international crime; it is the supreme international crime, differing only from other war crimes in that it contains within itself the accumulated evil of the whole.” Multiple independent inquiries have criticized the flimsy legal justifications for the war. Former U.N. Secretary General Kofi Annan openly declared the war illegal, and even Tony Blair’s former Deputy Prime Minister concurred. In fact, it’s hard to see how the Iraq War could be anything but criminal, since no country—even if it gathers a “coalition of the willing”—is permitted to simply depose a head of state at will. The Iraq War made the Nuremberg principles even more empty and selective than they have always been, and Bush’s escape from international justice delegitimizes all other war crimes prosecutions. A core aspect of the rule of law is that it applies equally to all, and if the United States is free to do as it pleases regardless of its international legal obligations, it is unclear what respect anybody should hold for the law.

George W. Bush may therefore be a fine painter. But he is a criminal. And when media figures try to redeem him, or portray him as lovable-but-flawed, they ignore the actual record. In fact, Bush has not even made any suggestion that he is trying to “atone” for a great crime, as liberal pundits have suggested he is. On the contrary, he has consistently defended his decision-making, and the illegal doctrine he espoused. He even wrote an entire book of self-justifications. Bush is not a haunted man. And since any good person, if he had Bush’s record, would be haunted, Bush is not a good person. Kanye West had Bush completely right. He simply does not think very much about the lives of people darker than himself. That sounds like an extreme judgment, but it’s true. If he cared about them, he wouldn’t have put them in cages. George Bush may love his grandchildren, he may paint with verve and soul. But he does not care about black or brown people.

It’s therefore exasperating to see liberals like Alter and Schjeldahl offer glowing assessments of Bush’s book of art, and portray him as soulful and caring. Schjeldahl says that Bush is so likable that hating him “takes conscious effort.” But it only takes conscious effort if you don’t think about the lives of Iraqis. If you do think about the lives of Iraqis, hating him takes no conscious effort at all; it is automatic. Anyone who truly appreciates the scale of what Bush inflicted on the world will feel rage course through their body whenever they hear his voice, or see him holding up a paintbrush, with that perpetual simpering grin on his face.

Alter and Schjeldahl are not alone in being captivated by Bush the artiste. The Washington Post’s art critic concluded that “the former president is more humble and curious than the Swaggering President Bush he enacted while in office [and] his curiosity about art is not only genuine but relatively sophisticated.” This may be the beginning of a critical consensus. But it says something disturbing about our media that a man can cause 500,000 deaths and then have his paintings flatteringly profiled, with the deaths unmentioned. George W. Bush intentionally offered false justifications for a war, destroyed an entire country, and committed an international crime. He tortured people, sometimes to death.

But would you look at those brushstrokes? And have you seen the little doggies?

Speaking of Despair

How much can suicide hotlines do?

I started volunteering at a suicide hotline around three years ago. Whenever I happen to mention to someone that this is a thing I do, they usually seem a bit shocked. I think they imagine that I regularly talk callers off ledges, like a Hollywood-film hostage negotiator. “How many people have you saved?” an acquaintance asked me once. I have no idea, but the answer is probably none, or very few, in the immediate sort of sense the questioner was likely envisioning, where somebody calls the hotline intending to kill themselves and I masterfully persuade them not to. In reality, the vast majority of your time at a hotline is spent simply listening to strangers talk about their day, making little noises of affirmation, and asking open-ended questions.

The conversations you end up having on a suicide hotline are inherently somewhat peculiar. They’re more intimate than those you would have in daily life, where an arbitrary set of social niceties constrains us from talking about the things that are close to our hearts. But they are also strangely impersonal. Operators at most call centers are forbidden from revealing personal details about themselves, offering opinions on specific subjects, or giving advice on problems: all of which tend to be central features of ordinary human conversation.

With practice, and a sufficiently lucid and responsive caller, you can sometimes make this bizarre lopsidedness feel a bit less awkward. At the same time, however, you also have to find a way to squeeze in a suicide risk assessment—hopefully, not with a bald non-sequitur like “Sorry to interrupt, but are you feeling suicidal right now?” but in some more fluid and natural manner. The purpose of the risk assessment is to enable the person to talk about their suicidal thoughts, in case they’re unwilling to broach the topic themselves, and also to allow you, the operator, to figure out how close the caller might be to taking some kind of action. From “are you feeling suicidal?” you work your way up to greater levels of specificity: “have you thought about how you might take your life?” “Do you have access to the thing you were planning to use?” “Is it in the room with you right now?” “Have you picked a time?” And so on.

I can’t speak for every operator at every call center, but in my own experience, I would estimate that fewer than 10% of the people I’ve ever spoken to have expressed any immediate desire or intention to end their lives. Well over half of callers, I would estimate, answer “no” to the first risk assessment question. This might, on its face, seem surprising. So who’s calling suicide hotlines, then, if not people who are thinking about killing themselves?

Well, for starters—let’s just get this one out of the way—a fair number of people call suicide hotlines to masturbate.

“Wait, but why?” you, in all your naïve simplicity, may be thinking. “Why would someone call a suicide hotline, a phone service intended for people in the throes of life-ending despair, to masturbate?” Friends, that question is beyond my ken: as theologians are fond of saying, we are living in a Fallen World. If I had to make a guess, I’d say a) suicide hotlines are toll-free, b) a lot of the operators are women, and c) there is a certain kind of person who gets off on the idea of an unwilling and/or unwitting person being tricked into listening in on their autoerotic exploits. The phenomenon would be significantly less annoying if some of the callers didn’t pretend to be kind-of-sort-of suicidal in order to keep you on the line longer: it’s rather frustrating, when one is trying one’s best to enter empathetically into the emotional trials of a succession of faceless voices, to then simultaneously have to conduct a quasi-Turing test to sort out the bona fide callers from the compulsive chicken-chokers.

All right, aside from that, who else is calling?

The other callers are the inmates of our society’s great warehouses of human unhappiness: nursing homes, mental institutions, prisons, homeless shelters, graduate programs. They are people with psychiatric issues that make it difficult for them to form or maintain relationships in their daily lives, or cognitive issues that have rendered them obsessively focused on some singular topic. They are people who are deeply miserable and afraid, who are repelled by the idea of ending their own life, but who still say that they wish they were dead, that they wish they could cease to exist by some other means. Among the most common topics of discussion are heartbreak, chronic illness, unemployment, addiction, and childhood sexual abuse.

Some people are deeply depressed or continually anxious, experiencing recurring crises for which the suicide hotline is one of their chief comforts or coping strategies; while others present as fairly cheerful on the phone, and are annoyed by your attempts to risk-assess them or steer the conversation towards the reason for their call. The great common denominator is loneliness. People call suicide hotlines because they have no one else, because they are friendless in the world, because the people in their lives are unkind to them; or because the people they love have said they need a break, have said don’t call me anymore, don’t call me for a while, I’ll come by later, we’ll talk later, and they are struggling to understand why, why they can’t call their sister or their friend or their doctor or their ex ten, twelve, fifteen times a day, when that’s the only thing that briefly alleviates the terrible upswelling of sadness inside them.

One thing you learn quickly, from taking these kinds of calls, is that misery has no respect for wealth or class. Rich and poor terrorize their children alike. Misery is everywhere: it hides in gaps and secret spaces, but it also walks abroad in daylight, unnoticed. The realm of misery is a bit like the Otherworld of Irish myth, or perhaps the Upside Down on the popular Netflix series Stranger Things. It inhabits the same geographic space as the world that happy people live in. You might pride yourself on your sense of direction, but if you were to wander unaware into the invisible eddy, if you were to catch the wrong thing out of the corner of your eye, you too could find yourself there all of a sudden, someplace where everything familiar wears a cruel and unforgiving face. Somebody you know might be in that place now, perhaps, and you simply can’t see it.

If misery could make a sound like a siren, you would hear it wailing in the apartment next door; you would hear it shrieking at the end of your street; a catastrophic klaxon-blast would shatter the windows of every single hospital and high school in the country, all an endless cacophony of “help me help me it hurts it hurts.” And even if most of the people who call hotlines never come close to taking their own lives, their situation still feels like an emergency.


We might ask, though, what the rationale is behind a hotline whose protocols are set up for assessing suicidality, when the vast majority of people who call the hotline do not, by their own account, have any concrete thoughts of suicide. The prevailing theory is that suicide hotlines are catching people “upstream,” so to speak, before they find themselves in a crisis state where suicide might start to feel like a real option for them. These people, in theory, are people who are at risk of becoming suicidal down the line if they aren’t given the right kind of support now. But is this actually true?

The fact is, we have no idea. If we take “suicide prevention” as the chief purpose of suicide hotlines, we soon find that the effectiveness of hotlines is very tricky to assess empirically. Of the approximately 44,000 people in the United States who complete suicide every year, we have no way of knowing how many may have tried calling a hotline in the past. Of the people who do call a suicide hotline presenting as high-risk, we don’t know how many ultimately go on to attempt or complete suicide. Small-scale studies have tracked caller satisfaction through follow-up calls, or have tried to measure the efficacy of hotline operators by monitoring a sample of their conversations. But these studies are, by their very nature, of dubious evidentiary value. There’s no control group of “distressed/suicidal people who haven’t called hotlines” to compare to, and the pool of callers is an inherently self-selecting population, which may or may not reflect the population of people who are at greatest risk. There are also obvious ethical concerns about confidentiality when it comes to actively monitoring phone calls by “listening in” without permission from the caller, or placing follow-up calls with people who have phoned the service. A substantial number of people who call suicide hotlines express anxiety about the privacy of their calls. Given the social and religious stigma that continues to be associated with thoughts of suicide, we might posit that the higher-risk a caller is, the more anxious they are likely to be. They may perhaps be reluctant to agree to a follow-up call when asked, and nervous to call the hotline again if they suspect they might be part of some study.

All of this is not to say that we need Hard Numbers to justify the existence of a service that provides a listening ear to people in distress. The value of human connection is self-evident, and when it comes to intangibles like happiness, spiritual purpose, and a sense of closeness to others, so-called scientific studies are mostly bunk anyway. Nonetheless, we can still use our imaginations and our common sense to hypothesize about the limitations of the current system and possible alternatives. I think there are two questions worth considering: first, are suicide hotlines generally accessible or useful to people who are actively suicidal? Secondly, for the “low-risk” callers who appear to be the most frequent users of suicide hotlines, is the service giving them what they need, or is there some better way to provide comfort and relief to these people?

As to whether high-risk individuals are actually being reached by suicide hotlines, as outlined above, it’s hard to tell. Anecdotally, the perception of suicide hotlines seems to differ pretty markedly when you peek in on suicide-themed message boards, as opposed to message boards centered on support for depression or other psychological issues. For example, posters on the mental health support forum Seven Cups describe suicide hotline operators as “supportive,” “non-judgmental,” “patient and understanding,” “some of the most loving people you’ll ever talk to,” and “varied from unhelpful-but-kind to helpful.” By contrast, on the Suicide Project, a site specifically devoted to sharing stories about attempting or losing someone to suicide, posters wrote that their calls were “awkward and forced,” “left me thinking I should just get on with killing myself [and] not speak to anyone before hand,” and “totally useless,” and commented negatively on long hold times or call time limits.

We can’t really draw conclusions from this tiny sample, not least because the kinds of people who frequent message boards and comments sections on the internet are not necessarily representative of broader populations who share some of the same self-identified characteristics. But—again anecdotally—I have noted that high-risk or more despairing callers on the hotline I volunteer for, when questioned about the extent of their suicidal intention, often express sentiments like, “If I were really suicidal, I wouldn’t be calling” or “If I wanted to commit suicide, I would just do it.” It’s hard to say exactly what this means, but it seems as if a general perception among borderline-suicidal callers is that an actively suicidal person wouldn’t bother to call a hotline. Given that suicide is sometimes a split-second decision, and that people who complete suicide tend to use highly lethal means, such as firearms, this perhaps isn’t surprising. (Calls where someone claims to be holding a gun are always the most alarming.)

For lower-risk callers, meanwhile, is a fifteen-minute conversation all we can do for them? People who call hotlines sometimes express frustration at the impersonality of the service. They want a give-and-take conversation, more like a normal interaction with a friend, but many suicide hotlines (including the one I volunteer for) forbid volunteers from giving out personal information about themselves. You never share your own opinion on a topic, even if the caller asks you directly: you merely express empathy, and give short reflective summaries of the caller’s responses to your questions, in order to demonstrate engagement and help the caller navigate through their own feelings.

This isn’t necessarily a bad approach, broadly speaking, since it keeps operators out of the thorny territory of giving possibly-useless, possibly-harmful advice to a person whose full life circumstances they know very little about, or of overwhelming or inadvertently shaming the caller with some inapposite emotional response of their own. For some callers, this non-reciprocal outpouring of feeling may be exactly what they need. But for other callers, who often become wise to a call center’s protocols over many repeated calls, this one-sided engagement is not at all what they say they want. What they want is a real human connection, even in its messiness and impracticality, not a disembodied voice that might as well be a pre-programmed conversation bot. Reconciling these conflicting goals is a tricky thing. There are certainly people who use hotlines in what seems to be a compulsive kind of way: they’ll call every half-hour, and if you don’t impose some kind of limit, they’ll tie up the line for less persistent (but perhaps, by some metrics, more vulnerable) callers. But it nevertheless feels cruel to tell desperately lonely people that their insatiable need for the warmth of a human presence is Against The Rules.


I often wonder if a suicide hotline’s unique ability to reach a population of acutely unhappy people could be harnessed for more personal, community-based interventions. Currently, there are both national and local call centers, but even on local lines, the caller is still miles away from you, and operators aren’t allowed to set up meetings with the people they speak to. Many people call because of a serious crisis in their lives, but the most you can do is give them a referral to a mental health organization that might be able to help them. I’ve frequently wished it were possible to send an actual human to check up on the person, ask how they’re doing, and see what they might need help with. It would be nice if neighborhoods or cities had corps of volunteers who were willing to be on-call for that kind of thing.

This, it seems to me, might be especially important for callers who seem more desperate and perhaps at higher risk of suicide. When you’re a hotline operator, there’s no middle ground between giving somebody verbal comfort and perhaps a referral, and dispatching emergency services directly to their location. (Some hotlines will only do this if the caller gives permission, while others, if the situation seems imminently dangerous, will send any information associated with the caller’s phone number to local police.) People who have previously had ambulances called on them often express deep shame and embarrassment about the experience. It attracts the attention of all their neighbors; depending on the circumstances, the caller might even have been taken out of their home on a stretcher and rushed to an emergency room. Callers who have had this happen, or know someone it’s happened to, will often be especially cagey about sharing their suicidal thoughts, or paranoid about the information being gathered about them. This is extremely problematic, because it means that potentially high-risk callers might deliberately understate the extent of their emotional distress if they ever call again in the future. Moreover, if they’ve been to hospitals before under these circumstances and found the experience traumatizing, they may be unwilling to accept medical interventions in the future. Wouldn’t it be better if instead the caller could consent for a nice person to come discreetly check up on them at their house, have a chat, maybe make them a cup of tea? For lower-risk callers, especially people in hospitals or nursing homes who don’t have any company, shouldn’t we be able to find someone living nearby who can pay them a visit during the week?

Of course, suicide hotlines are already understaffed, and so expanding them into an even more labor-intensive grassroots organization wouldn’t be easy. The kinds of callers who phone suicide hotlines repeatedly and obsessively would likely be pleading for visits on a constant basis: you would probably need some kind of rationing system to make sure they weren’t overwhelming the entire volunteer network. In a small number of cases, there might be safety concerns about going in person to a caller’s house. (No house-calls for the masturbators, obviously.) The bigger problem, however, is figuring out how to mobilize communities and get people to feel invested in the emotional wellbeing of their neighbors. Personal entanglement is inherently a hard sell. Part of the reason people volunteer with charitable organizations rather than simply knocking on their neighbors’ doors is that they want to keep their regular lives and their volunteer obligations strictly separate. They want to perform a service for someone without becoming closely enmeshed in the day-to-day reality of that person’s problems. This kind of distance is preferred by most part-time volunteers—I certainly find it more convenient to compartmentalize my life in this way, though I’m not at all sure that’s a good thing—and it may be preferable for some callers, too, especially those who are dealing with issues they intensely desire to keep private, for whom a visit from the wrong neighbor might be mortifying.

But I think we must attempt to surmount these obstacles. When people lament the demise of communities or multi-generation family units in the United States, this is the kind of mutual support they’re thinking of. The extent to which America was once composed of warm, child-raising villages in its real-life past is, of course, greatly exaggerated, and we certainly shouldn’t romanticize local communities per se: they always have the capacity to be meddling, oppressive, and exclusionary. But communities don’t have to be like that, and instead of dismissing community ideals as outdated, we could be working to realize them better in the particular places we live. As American lifestyles become increasingly mobile and rootless, close involvement in a community may not be foremost on people’s minds; to the extent that people these days talk about “settling down” somewhere, they usually seem to be thinking in terms of sending their kids to a local school, patronizing nearby restaurants, and attending summer concerts in the park, not trundling around to people’s homes and asking what they can do for them.

But even if we aren’t planning to live in the same town for the entire rest of our lives, we mustn’t allow ourselves to use this as a convenient excuse to distance ourselves from local problems we may have the power to ameliorate. People who come to the U.S. from other parts of the world often find our way of living perverse, in ways we simply take for granted as facts of human nature rather than peculiar societal failings. I was recently talking to a Haitian-born U.S. citizen who works long hours as a nurse’s aide, and then comes home each night to care for her mentally disabled teenage son. She told me that if it were possible, she would go back to Haiti in a heartbeat. She was desperately poor in Haiti, but there, she said, her neighbors would have helped her: they would have invited her over for dinner, they would have offered to look after the children. “Here,” she said, “nobody helps you.” That’s one of the worst condemnations of American civil society I’ve heard in a while.

As Current Affairs has written in the past, many of the problems that underlie or exacerbate people’s suicidal crises—homelessness, unemployment, lack of access to healthcare—are the result of an economic and political system that is fundamentally profit-driven, and that fails to prioritize the well-being of its most vulnerable citizens. Large-scale political changes are necessary to free up the resources required to truly tackle these problems in a lasting and meaningful sense, and to foster a society that’s better geared towards the health and happiness of all its members. But we must also recognize that government programs—even well-funded ones—will never be enough if they’re administered by an impersonal bureaucracy. What people want, what they need, are real fellow humans who will come talk to them, and look them in the eye, and genuinely care about what happens to them. At the moment, given the system we currently have to work with, to place all that responsibility on a few poorly-paid, exhausted social workers and health sector employees just isn’t fair—nor is it effective. This is a responsibility that should belong to all of society: to anybody who has even an hour to spare.

Giving people a number to call is a start. It would make sense to use existing hotlines as a tool to find and reach people who need help, both those who are at high risk of harming themselves and those who are simply unhappy. As for how local volunteer forces could be coordinated, this is something municipalities should trade ideas about: some communities may already have successfully implemented programs like this. Organizations that work narrowly on certain types of social problems might have ideas about how to structure a multi-purpose community-wide organization that could intervene more generally in a variety of contexts. When it comes down to it, actually caring about—and taking care of—your neighbors, even when it’s difficult, is always the most radical form of political activism.

How Liberals Fell In Love With The West Wing

Aaron Sorkin’s political drama shows everything wrong with the Democratic worldview…

In the history of prestige tv, few dramas have had quite the cultural staying power of Aaron Sorkin’s The West Wing.

Set during the two terms of fictional Democratic President and Nobel Laureate in Economics Josiah “Jed” Bartlet (Martin Sheen), the show depicts the inner workings of a sympathetic liberal administration grappling with the daily exigencies of governing. Every procedure and protocol, every piece of political brokerage—from State of the Union addresses to legislative tugs of war to Supreme Court appointments—is recreated with an aesthetic authenticity enabled by ample production values (a single episode reportedly cost almost $3 million to produce) and rendered with a dramatic flair that stylizes all the bureaucratic banality of modern governance.

Nearly the same, of course, might be said for other glossy political dramas such as Netflix’s House of Cards or Scandal. But The West Wing aspires to more than simply visual verisimilitude. Breaking with the cynicism or amoralism characteristic of many dramas about politics, it offers a vision of political institutions which is ultimately affirmative and approving. What we see throughout its seven seasons are Democrats governing as Democrats imagine they govern, with the Bartlet Administration standing in for liberalism as liberalism understands itself.

More than simply a fictional account of an idealized liberal presidency, then, The West Wing is an elaborate fantasia founded upon the shibboleths that sustain Beltway liberalism and the milieu that produced them.

“Ginger, get the popcorn

The filibuster is in

I’m Toby Ziegler with The Drop In

What Kind of Day Has It Been?

It’s Lin, speaking the truth”

—Lin-Manuel Miranda, “What’s Next?”

During its run from 1999 to 2006, The West Wing garnered immense popularity and attention, capturing three Golden Globe Awards and 26 Emmys and building a devout fanbase among Democratic partisans, Beltway acolytes, and people of the liberal-ish persuasion the world over. Since its finale more than a decade ago, it has become an essential part of the liberal cultural ecosystem, its importance arguably on par with The Daily Show, Last Week Tonight, and the rap musical about the founding fathers people like for some reason.

If anything, its fandom has only continued to grow with age: In the summer of 2016, a weekly podcast hosted by seasons 4-7 star Joshua Malina, launched with the intent of running through all 154 episodes (at a rate of one per week), almost immediately garnered millions of downloads; an elaborate fan wiki with almost 2000 distinct entries is maintained and regularly updated, magisterially documenting every mundane detail of the West Wing cosmos save the characters’ bowel movements; and, in definitive proof of the silence of God, superfan Lin-Manuel Miranda has recently recorded a rap named for one of the show’s most popular catchphrases (“What’s next?”).

While certainly appealing to a general audience thanks to its expensive sheen and distinctive writing, The West Wing’s greatest zealots have proven to be those who professionally inhabit the very milieu it depicts: Washington political staffers, media types, centrist cognoscenti, and various others drawn from the ranks of people who tweet “Big, if true” in earnest and think a lanyard is a talisman that grants wishes and wards off evil.  

The West Wing “took something that was for the most part considered dry and nerdy—especially to people in high school and college—and sexed it up,” former David Axelrod advisor Eric Lesser told Vanity Fair in a longform 2012 feature about the “Sorkinization of politics” (Axelrod himself having at one point advised West Wing writer Eli Attie). It “very much served as inspiration,” said Micah Lasher, a staffer who then worked for Michael Bloomberg.

Thanks to its endless depiction of procedure and policy, the show naturally jibed with the wonkish libidos of future Voxsplainers Matt Yglesias and Ezra Klein. “There’s a cultural meme or cultural suggestion that Washington is boring, that policy is boring, but it’s important stuff,” said Klein, adding that the show dramatized “the immediacy and urgency and concern that people in this town feel about the issues they’re working on.” “I was interested in politics before the show started,” added Yglesias. “But a friend of mine from college moved to D.C. at the same time as me, after graduation, and we definitely plotted our proposed domination of the capital in explicitly West Wing terms: Who was more like Toby? Who was more like Josh?”

Far from the Kafkaesque banality which so often characterizes the real-life equivalent, the mundane business of technocratic governance is made to look exciting, intellectually stimulating, and, above all, honorable. The bureaucratic drudgery of both White House management and governance, from speechwriting to press conference logistics to policy creation, is front and center across all seven seasons. A typical episode script is chock full of dweebish phraseology — “farm subsidies”, “recess appointments”, “census bureau”, “congressional consultation” — usually uttered by swift-tongued, Ivy League-educated staffers darting purposefully through labyrinthine corridors during the infamous “walk-and-talk” sequences. By recreating the look and feel of political processes to a tee, while garnishing them with a romantic veneer, the show gifts the Beltway’s most spiritually-devoted adherents with a vision of themselves as many would probably like to be seen.

In serving up this optimistic simulacrum of modern US politics, Sorkin’s universe has repeatedly intersected with real-life US politics. Following the first season, and in the midst of the 2000 presidential election contest, Salon’s Joyce Millman wrote: “Al Gore could clinch the election right now by staging as many photo-ops with the cast of The West Wing as possible.” A poll published during the same election found that most voters preferred Martin Sheen’s President Bartlet to Bush or Gore. A 2008 New York Times article predicted an Obama victory on the basis of the show’s season 6-7 plot arc. The same election year, the paper published a fictionalized exchange between Bartlet and Barack Obama penned by Sorkin himself. 2016 proved no exception, with the New Statesman’s Helen Lewis reacting to Donald Trump’s victory by saying: “I’m going to hug my West Wing boxset a little closer tonight, that’s for sure.”

Appropriately, many of the show’s cast members, leveraging their on-screen personas, have participated or intervened in real Democratic Party politics. During the 2016 campaign, star Bradley Whitford—who portrays frenetically wily strategist Josh Lyman—was invited to “reveal” who his [fictional] boss would endorse:

“There’s no doubt in my mind that Hillary would be President Bartlet’s choice. She’s—nobody is more prepared to take that position on day one. I know this may be controversial. But yes, on behalf of Jed Bartlet, I want to endorse Hillary Clinton.”

Six leading members of the cast, including Whitford, were even dispatched to Ohio to stump for Clinton (inexplicably failing to swing the crucial state in her favor).


During the Democratic primary season, Rob Lowe (who appeared from 1999-2003 before leaving in protest at the ostensible stinginess of his $75,000/episode salary) even deployed a clip from the show and paraphrased his own character’s lines during an attack on Bernie Sanders’ tax plan: “Watching Bernie Sanders. He’s hectoring and yelling at me WHILE he’s saying he’s going to raise our taxes. Interesting way to communicate.” In the Season 2 episode “The Fall’s Gonna Kill You,” Lowe’s character Sam Seaborn angrily lectures a team of speechwriters:

“Every time your boss got on the stump and said, ‘It’s time for the rich to pay their fair share,’ I hid under a couch and changed my name…The top one percent of wage earners in this country pay for twenty-two percent of this country. Let’s not call them names while they’re doing it, is all I’m saying.”

What is the actual ideology of The West Wing? Just like the real American liberalism it represents, the show proved to be something of a political weather vane throughout its seven seasons on the air.

Debuting during the twilight of the Clinton presidency and spanning much of Bush II’s, it predictably vacillated somewhat in response to events while remaining grounded in a general liberal ethos. Sorkin, who held writing credits on all but one episode of the show’s first four seasons, left in 2003, with Executive Producer John Wells characterizing the subsequent direction as more balanced and bipartisan. The Bartlet administration’s actual politics—just like those of the real Democratic Party and its base—therefore run the gamut from the stuff of Elizabeth Warren-esque populism to the neoliberal bilge you might expect to come from a Beltway think tank having its white papers greased by dollars from Goldman Sachs.

But promoting or endorsing any specific policy orientation is not the show’s true raison d’être. At the conclusion of its seven seasons it remains unclear if the Bartlet administration has succeeded at all in fundamentally altering the contours of American life. In fact, after two terms in the White House, Bartlet’s gang of hyper-educated, hyper-competent politicos do not seem to have any transformational policy achievements whatsoever. Even in their most unconstrained and idealized political fantasies, liberals manage to accomplish nothing.

The lack of any serious attempt to change anything reflects a certain apolitical tendency in this type of politics, one that defines itself by its manner and attitude rather than by a vision of the change it wishes to see in the world. Insofar as there is an identifiable ideology, it isn’t one definitively wedded to a particular program of reform, but instead to a particular aesthetic of political institutions. The business of leveraging democracy for any specific purpose comes second to how its institutional liturgy and processes look and, more importantly, how they make us feel—virtue being attached more to posture and affect than to any particular goal. Echoing Sorkin’s 1995 film The American President (in many ways the progenitor of The West Wing), it delights in invoking “seriousness” and the supposedly hard-headed pragmatism of grownups.


Consider a scene from Season 2’s “The War at Home,” in which Toby Ziegler confronts a rogue Democratic Senator over his objections to prospective Social Security cuts to be made in collaboration with a Republican Congress. The episode’s protagonist certainly isn’t the latter, who tries to draw a line in the sand over the “compromising of basic Democratic values” and threatens to run a third-party presidential campaign, only to be admonished acerbically by Ziegler:

“If you think demonizing people who are trying to govern responsibly is the way to protect our liberal base, then speaking as a liberal…go to bed, would you please?…Come at us from the left, and I’m gonna own your ass.”

The administration and its staff are invariably depicted as tribunes of the serious and the mature, their ideological malleability taken to signify their virtue more than any fealty to specific liberal principles.

Even when the show ventures to criticize the institutions of American democracy, it never retreats from a foundational reverence for their supposed enlightenment and the essential nobility of most of the people who administer them. As such, the presidency’s basic function is to appear presidential and, more than anything, Jed Bartlet’s patrician aura and respectable disposition make him the perfect avatar for the West Wing universe’s often maudlin deference to the liturgy of “the office.” “Seriousness,” then— the superlative quality in the Sorkin taxonomy of virtues—implies presiding over the political consensus, tinkering here and there, and looking stylish in the process by way of soaring oratory and white-collar chic.   

“Make this election about smart, and not. Make it about engaged, and not. Qualified, and not. Make it about a heavyweight. You’re a heavyweight. And you’ve been holding me up for too many rounds.”

—Toby Ziegler, Hartsfield’s Landing (Season 3, Episode 14)

Despite its relatively thin ideological commitments, there is a general tenor to the West Wing universe that cannot be called anything other than smug.

It’s a smugness born of the view that politics is less a terrain of clashing values and interests than a perpetual pitting of the clever against the ignorant and obtuse. The clever wield facts and reason, while the foolish cling to effortlessly-exposed fictions and the braying prejudices of provincial rubes. In emphasizing intelligence over ideology, what follows is a fetishization of “elevated discourse” regardless of its actual outcomes or conclusions. The greatest political victories involve semantically dismantling an opponent’s argument or exposing its hypocrisy, usually by way of some grand rhetorical gesture. Categories like left and right become less significant, provided that the competing interlocutors are deemed respectably smart and practice the designated etiquette. The Discourse becomes a category of its own, to be protected and nourished by Serious People conversing respectfully while shutting down the stupid with heavy-handed moral sanctimony.  

In Toby Ziegler’s “smart and not,” “qualified and not” formulation, we can see a preview of the (disastrous) rhetorical strategy that Hillary Clinton would ultimately adopt against Donald Trump. Don’t make it about vision, make it about qualification. Don’t make it about your plans for how to make people’s lives better, make it about your superior moral character. Fundamentally, make it about how smart and good and serious you are, and how bad and dumb and unserious they are.


In this respect, The West Wing’s foundational serious/unserious binary falls squarely within the tradition that has since evolved into the “epic own/evisceration” genre characteristic of social media and late night TV, in which the aim is to ruthlessly use one’s intellect to expose the idiocy and hypocrisy of the other side. In a famous scene from Season 4’s “Game On”, Bartlet debates his Republican rival Governor Robert Ritchie (James Brolin). Their exchange, prompted by a question about the role of the federal government, is the stuff of a John Oliver wet dream:  

Ritchie: “My view of this is simple. We don’t need a federal Department of Education telling us our children have to learn Esperanto, they have to learn Eskimo poetry. Let the states decide, let the communities decide on health care and education, on lower taxes, not higher taxes. Now he’s going to throw a big word at you — ‘unfunded mandate,’ he’s going to say if Washington lets the states do it, it’s an unfunded mandate. But what he doesn’t like is the federal government losing power. I call it the ingenuity of the American people.”

Bartlet: “Well, first of all, let’s clear up a couple of things: ‘unfunded mandate’ is two words, not one big word. There are times when we are 50 states and there are times when we’re one country and have national needs. And the way I know this is that Florida didn’t fight Germany in World War Two or establish civil rights. You think states should do the governing wall-to-wall, now that’s a perfectly valid opinion. But your state of Florida got 12.6 billion dollars in federal money last year from Nebraskans and Virginians and New Yorkers and Alaskans, with their Eskimo poetry — 12.6 out of the state budget of 50 billion. I’m supposed to be using this time for a question, so here it is: Can we have it back, please?”

In an even more famous scene from the Season 2 episode “The Midterms,” Bartlet humiliates homophobic talk radio host Jenna Jacobs by quoting scripture from memory, destroying her by her very own logic.


If Ritchie and Jacobs are the obtuse yokels to be epically taken down with facts and reason, the show also elevates several conservative characters to reinforce its postpartisan celebration of The Discourse. Republicans come in two types: slack-jawed caricatures, and people whose high-mindedness and mutual enthusiasm for Putting Differences Aside make them the Bartlet Administration’s natural allies or friends regardless of whatever conflicts of values they may ostensibly have. Foremost among the latter is Arnold Vinick (Alan Alda): a moderate, pro-choice Republican who resembles John McCain (at least the imaginary “maverick” John McCain that liberals continue to pretend exists) and is appointed by Bartlet’s Democratic successor Matthew Santos to be Secretary of State. (In reality, there is no such thing as a “moderate” Republican, only a polite one. The upright and genial Paul Ryan, whom President Bartlet would have loved, is on a lifelong quest to dismantle every part of America’s feeble social safety net.)

Thus Bartlet Democrats do not see Republicans as the “enemy,” except to the extent that they are rude or insufficiently respectful of the rules of political decorum. In one Season 5 plot, the administration opts to install a Ruth Bader Ginsburg clone (Glenn Close) as Chief Justice of the Supreme Court. The price it pays—willingly, as it turns out—is giving the other vacancy to an ultra-conservative justice, for the sole reason that Bartlet’s staff find their amiable squabbling stimulating. Anyone with substantively progressive political values would be horrified by a liberal president’s appointment of an Antonin Scalia-style textualist to the Supreme Court. But if your values are procedural, based more on the manner in which people conduct themselves rather than the consequences they actually bring about, it’s easy to chuckle along with a hard-right conservative, so long as they are personally charming (Ziegler: “I hate him, but he’s brilliant. And the two of them together are fighting like cats and dogs … but it works.”)

“What’s next?”

Through its idealized rendering of American politics and its institutions, The West Wing offers a comforting avenue of escape from the grim and often dystopian reality of the present. If the show, despite its age, has continued to find favor and relevance among liberals, Democrats, and assorted Beltway acolytes alike, it is because it reflects and affirms their worldview with greater fidelity and catharsis than any of its contemporaries.

But if anything gives that worldview pause, it should be the events of the past eight years. Liberals got a real-life Josiah Bartlet in the figure of Barack Obama, a charismatic and stylish politician elected on a populist wave. But Obama’s soaring speeches, quintessentially presidential affect, and deference to procedure did little to fundamentally improve the country or prevent his Republican rivals from storming the Congressional barricades at their first opportunity. Confronted by a mercurial TV personality bent on transgressing every norm and truism of Beltway thinking, Democrats responded by exhaustively informing voters of his indecency and hypocrisy, attempting to destroy him countless times with his own logic, but ultimately leaving him completely intact. They smugly taxonomized as “smart” and “dumb” the very electorate they needed to win over, and retreated into an ideological fever dream in which political success doesn’t come from organizing and building power, but from having the most polished arguments and the most detailed policy statements. If you can just crush Trump in the debates, as Bartlet did to Ritchie, then you’ve won. (That’s not an exaggeration of the worldview. Ezra Klein published an article entitled “Hillary Clinton’s 3 debate performances left the Trump campaign in ruins,” which entirely eliminated the distinction between what happens in debates and what happens in campaigns. The belief that politics is about argument rather than power is likely a symptom of a Democratic politics increasingly incubated in the Ivy League rather than the labor movement.)

Now, facing defeat and political crisis, the overwhelming liberal instinct has not been self-reflection but a further retreat into fantasy and orthodoxy. Like viewers at the climax of The West Wing’s original run, they sit waiting for the decisive gestures and gratifying crescendos of a series finale, only to find their favorite plotlines and characters meandering without resolution. Shockingly, life is not a television program, and Aaron Sorkin doesn’t get to write the ending.

The West Wing is many things: a uniquely popular and lavish effort in prestige TV; an often crisply-written drama; a fictionalized paean to Beltway liberalism’s foundational precepts; a wonkish celebration of institutions and processes; an exquisitely-tailored piece of political fanfiction.

But, in 2017, it is foremost a series of glittering illusions to be abandoned.

Illustrations by Meg T. Callahan.

The Dangerous Academic is an Extinct Species

If these ever existed at all, they are now deader than dodos…

It was curiosity, not stupidity, that killed the Dodo. For too long, we have held to the unfair myth that the flightless Mauritian bird became extinct because it was too dumb to understand that it was being killed. But as Stefan Pociask points out in “What Happened to the Last Dodo Bird?”, the dodo was driven into extinction partly because of its desire to learn more about a new, taller, two-legged creature who disembarked onto the shores of its native habitat: “Fearless curiosity, rather than stupidity, is a more fitting description of their behavior.”

Curiosity does have a tendency to get you killed. The truly fearless don’t last long, and the birds who go out in search of new knowledge are inevitably the first ones to get plucked. It’s always safer to stay close to the nest.

Contrary to what capitalism’s mythologizers would have you believe, the contemporary world does not heap its rewards on those with the most creativity and courage. In fact, at every stage of life, those who venture beyond the safe boundaries of expectation are ruthlessly culled. If you’re a black kid who tends to talk back and call bullshit on your teachers, you will be sent to a special school. If you’re a transgender teenager like Leelah Alcorn in Ohio, and you unapologetically defy gender norms, they’ll make you so miserable that you kill yourself. If you’re Eric Garner, and you tell the police where they can stick their B.S. “loose cigarette” tax, they will promptly choke you to death. Conformists, on the other hand, usually do pretty well for themselves. Follow the rules, tell people what they want to hear, and you’ll come out just fine.

Becoming a successful academic requires one hell of a lot of ass-kissing and up-sucking. You have to flatter and impress. The very act of applying to graduate school to begin with is an exercise in servility: please deem me worthy of your favor. In order to rise through the ranks, you have to convince people of your intelligence and acceptability, which means basing everything you do on a concern for what other people think. If ever you find that your conclusions would make your superiors despise you (say, for example, if you realized that much of what they wrote was utter irredeemable manure), you face a choice: conceal your true self or be permanently consigned to the margins.

The idea of a “dangerous” academic is therefore somewhat self-contradictory to begin with. The academy could, potentially, be a place for unfettered intellectual daring. But the most daring and curious people don’t end up in the academy at all. These days, they’ve probably gone off and done something more interesting, something that involves a little bit less deference to convention and detachment from the material world. We can even see this in the cultural archetype of the Professor. The Professor is always a slightly harrumphy—and always white and male—individual, with scuffed shoes and jackets with leather elbows, hidden behind a mass of seemingly disorganized books. He is brilliant but inaccessible, and if not effeminate, certainly effete. But bouncing with ideas, so many ideas. There is nothing particularly menacing about such a figure, certainly nothing that might seriously threaten the existing arrangements of society. Of ideas he has plenty. Of truly dangerous ones, none at all.

If anything, the university has only gotten less dangerous in recent years. Campuses like Berkeley were once centers of political dissent. There was open confrontation between students and the state. In May of 1970, the Ohio National Guard killed four students at Kent State. Ten days later, police at the historically black Jackson State University fired into a crowd of students, killing two. At Cornell in 1969, armed black students took over the student union building in a demand for recognition and reform, part of a pattern of serious upheaval.

But over the years the university became corporatized. It became a job training center rather than an educational institution. Academic research became progressively more specialized, narrow, technical, and obscure. (The most successful scholarship is that which seems to be engaged with serious social questions, but does not actually reach any conclusions that would force the Professor to leave his office.)


The ideas that do get produced have also become more inaccessible, with research inevitably cloaked behind the paywalls of journals that cost astronomical sums of money. At the cheaper end, the journal Cultural Studies charges individuals $201 for just the print edition, and charges institutions $1,078 for just the online edition. The science journal Biochimica et Biophysica Acta costs $20,000, which makes Cultural Studies look like a bargain. (What makes the pricing especially egregious is that these journals are created mostly with free labor, as academics who produce articles are almost never paid for them.) Ideas in the modern university are not free and available to all. They are in fact tethered to a vast academic industrial complex, where giant publishing houses like Elsevier make massive profits off the backs of researchers.

Furthermore, the academics who produce those ideas aren’t exactly at liberty to think and do as they please. The overwhelming “adjunctification” of the university has meant that approximately 76% of professors… aren’t professors at all, but underpaid and overworked adjuncts, lecturers, and assistants. And while conditions for adjuncts are slowly improving, especially through more widespread unionization, their place in the university is permanently unstable. This means that no adjunct can afford to seriously offend. To make matters worse, adjuncts rely heavily on student evaluations to keep their positions, meaning that their classrooms cannot be places to heavily contest or challenge students’ politics. Instructors could literally lose their jobs over even the appearance of impropriety. One false step—a video seen as too salacious, or a political opinion held as oppressive—could be the end of a career. An adjunct must always be docile and polite.

All of this means that university faculty are less and less likely to threaten any aspect of the existing social or political system. Their jobs are constantly on the line, so there’s a professional risk in upsetting the status quo. But even if their jobs were safe, the corporatized university would still produce mostly banal ideas, thanks to the sycophancy-generating structure of the academic meritocracy. But even if truly novel and consequential ideas were being produced, they would be locked away behind extortionate paywalls.

The corporatized university also ends up producing the corporatized student. Students worry about doing anything that may threaten their job prospects. Consequently, acts of dissent have become steadily de-radicalized. On campuses these days, outrage and anger are reserved for questions like, “Is this sushi an act of cultural appropriation?” When student activists do propose ways to “radically” reform the university, it tends to involve adding new administrative offices and bureaucratic procedures, i.e. strengthening the existing structure of the university rather than democratizing it. Instead of demanding an increase in the power of students, campus workers, and the untenured, activists tend to push for symbolic measures that universities happily embrace, since they do not compromise the existing arrangement of administrative and faculty power.

It’s amusing, then, that conservatives have long been so paranoid about the threat posed by U.S. college campuses. The American right has an ongoing fear of supposedly arch-leftist professors brainwashing nubile and impressionable young minds into following sinister leftist dictates. Since massively popular books like Roger Kimball’s 1990 Tenured Radicals and Dinesh D’Souza’s 1992 Illiberal Education: The Politics of Race on Campus, colleges have been seen as hotbeds of Marxist indoctrination that threaten the civilized order. This is a laughable idea, for the simple reason that academics are the very opposite of revolutionaries: they intentionally speak to minuscule audiences rather than the masses (on campus, to speak of a “popular” book is to deploy a term of faint disdain) and they are fundamentally concerned with preserving the security and stability of their own position. This makes them deeply conservative in their day-to-day acts, regardless of what may come out of their mouths. (See the truly pitiful lack of support among Harvard faculty when the university’s dining hall workers went on strike for slightly higher wages. Most of the “tenured radicals” couldn’t even be bothered to sign a petition supporting the workers, let alone march in the streets.)

But left-wing academics are all too happy to embrace the conservatives’ ludicrous idea of professors as subversives. This is because it reassures them that they are, in fact, consequential, that they are effectively opposing right-wing ideas, and that they need not question their own role. The “professor-as-revolutionary” caricature serves both the caricaturist and the professor. Conservatives can remain convinced that students abandon conservative ideas because they are being manipulated, rather than because reading books and learning things makes it more difficult to maintain right-wing prejudices. And liberal professors get to delude themselves into believing they are affecting something.


Today, in what many call “Trump’s America,” the idea of universities as sites of “resistance” has been renewed on both the left and right. At the end of 2016, Turning Point USA, a conservative youth group, created a website called Professor Watchlist, which set about listing academics it considered dangerously leftist. The goal, stated on the Turning Point site, is “to expose and document college professors who discriminate against conservative students and advance leftist propaganda in the classroom.”

Some on the left are delusional enough to think that professors as a class can and should be presenting a united front against conservatism. At a recent University of Chicago event, a document was passed around from Refusefascism.org titled, “A Call to Professors, Students and All in Academia,” calling on people to “Make the University a Zone of Resistance to the Fascist Trump Regime and the Coming Assault on the Academy.”

Many among the professorial class seem to want to do exactly this, seeing themselves as part of the intellectual vanguard that will serve as a bulwark against Trumpism. George Yancy, a professor of philosophy and race studies at Emory University, wrote an op-ed in the New York Times, titled “I Am A Dangerous Professor.” Yancy discussed his own inclusion on the Professor Watchlist, before arguing that he is, in fact, dangerous:

“In my courses, which the watchlist would like to flag as ‘un-American’ and as ‘leftist propaganda,’ I refuse to entertain my students with mummified ideas and abstract forms of philosophical self-stimulation. What leaves their hands is always philosophically alive, vibrant and filled with urgency. I want them to engage in the process of freeing ideas, freeing their philosophical imaginations. I want them to lose sleep over the pain and suffering of so many lives that many of us deem disposable. I want them to become conceptually unhinged, to leave my classes discontented and maladjusted…Bear in mind that it was in 1963 that the Rev. Dr. Martin Luther King, Jr. raised his voice and said: ‘I say very honestly that I never intend to become adjusted to segregation and discrimination.’… I refuse to remain silent in the face of racism, its subtle and systemic structure. I refuse to remain silent in the face of patriarchal and sexist hegemony and the denigration of women’s bodies.”

He ends with the words:

“Well, if it is dangerous to teach my students to love their neighbors, to think and rethink constructively and ethically about who their neighbors are, and how they have been taught to see themselves as disconnected and neoliberal subjects, then, yes, I am dangerous, and what I teach is dangerous.”

Of course, it’s not dangerous at all to teach students to “love their neighbors,” and Yancy knows this. He wants to simultaneously possess and devour his cake: he is doing nothing that anyone could possibly object to, yet he is also attempting to rouse his students to overthrow the patriarchy. He suggests that his work is so uncontroversial that conservatives are silly to fear it (he’s just teaching students to think!), but also places himself in the tradition of Martin Luther King, Jr., who was trying to radically alter the existing social order. His teaching can be revolutionary enough to justify Yancy spending time as a philosophy professor during the age of Trump, but benign enough for the Professor Watchlist to be an act of baseless paranoia.

Much of the revolutionary academic resistance to Trump seems to consist of spending a greater amount of time on Twitter. Consider the case of George Ciccariello-Maher, a political scientist at Drexel University who specializes in Venezuela. In December of 2016, Ciccariello-Maher became a minor cause célèbre on the left after getting embroiled in a flap over a tweet. On Christmas Eve, for who knows what reason, Ciccariello-Maher tweeted “All I Want for Christmas is White Genocide.” Conservatives became enraged, and began calling upon Drexel to fire him. Ciccariello-Maher insisted he had been engaged in satire, although nobody could understand what the joke was intended to be, or what the tweet even meant in the first place. After Drexel disowned Ciccariello-Maher’s words, a petition was launched in his defense. Soon, Ciccariello-Maher had lawyered up, Drexel confirmed that his job was safe, and the whole kerfuffle was over before the nation’s half-eaten leftover Christmas turkeys had been served up into sandwiches and casseroles.

Ciccariello-Maher continues to spend a great deal of time on Twitter, where he frequently issues macho tributes to violent political struggle, and postures as a revolutionary. But despite his temporary status as a martyr for the cause of academic freedom, one who terrifies the reactionaries, there was nothing dangerous about his act. He hadn’t really stirred up a hornet’s nest; after all, people who poke actual bees occasionally get bee stings. A more apt analogy is that he had gone to the zoo to tap on the glass in the reptile house, or to throw twigs at some tired crocodiles in a concrete pool. (When they turned their rheumy eyes upon him, he ran from the fence, screaming that dangerous predators were after him.) U.S. academics who fancy themselves involved in revolutionary political struggles are trivializing the risks faced by actual political dissidents around the world, including the hundreds of environmental activists who have been murdered globally for their efforts to protect indigenous land.

“University faculty are less and less likely to threaten any aspect of the existing social or political system…”

Of course, it’s true that there are still some subversive ideas on university campuses, and some true existing threats to academic and student freedom. Many of them have to do with Israel or labor organizing. In 2014, Steven Salaita was fired from a tenured position at the University of Illinois for tweets he had made about Israel. (After a protracted lawsuit, Salaita eventually reached a settlement with the university.) Fordham University tried to ban a Students for Justice in Palestine group, and the University of California Board of Regents attempted to introduce a speech code that would have punished much criticism of Israel as “hate speech.” The test of whether your ideas are actually dangerous is whether you are rewarded or punished for expressing them.

In fact, in terms of danger posed to the world, the corporatized university may itself be more dangerous than any of the ideas that come out of it.

In Hyde Park, where I live, the University of Chicago seems ancient and venerable at first glance. Its Ye Olde Kinda Sorta Englande architecture, built in 1890 to resemble Oxbridge, could almost pass for medieval if one walked through it at dusk. But the institution is in fact deeply modern, and like Columbia University in New York, it has slowly absorbed the surrounding neighborhood, slicing into older residential areas and displacing residents in landgrab operations. Despite being home to one of the world’s most prestigious medical and research schools, the university refused for many years to open a trauma center to serve the city’s South Side, which had been without access to trauma care. (The school only relented in 2015, after a long history of protests.) The university ferociously guards its myriad assets with armed guards on the street corners, and enacts massive surveillance on local residents (the university-owned cinema insists on examining bags for weapons and food, a practice I have personally experienced being selectively conducted in a racially discriminatory manner). In the university’s rapacious takeover of the surrounding neighborhood, and its treatment of local residents—most of whom are of color—we can see what happens when a university becomes a corporation rather than a community institution. Devouring everything in the pursuit of limitless expansion, it swallows up whole towns.

The corporatized university, like corporations generally, is an uncontrollable behemoth, absorbing greater and greater quantities of capital and human lives, and churning out little of long-term social value. Thus Yale University needlessly decided to open a new campus in Singapore despite the country’s human rights record and restrictions on political speech, and New York University needlessly expanded to Abu Dhabi, its new UAE campus built by low-wage workers under brutally repressive conditions. The corporatized university serves nobody and nothing except its own infinite growth. Students are indebted, professors lose job security, surrounding communities are surveilled and displaced. That is something dangerous.

Left professors almost certainly sense this. They see themselves disappearing, the campus becoming a steadily more stifling environment. Posturing as a macho revolutionary is, like all displays of machismo, driven partially by a desperate fear of one’s impotence. They know they are not dangerous, but they are happy to play into the conservative stereotype. But the “dangerous academic” is like the Dodo in 1659, a decade before its final sighting and extinction: almost nonexistent. And the more universities become like corporations, the fewer and fewer of these unique birds will be left. Curiosity kills, and those who truly threaten the inexorable logic of the neoliberal university are likely to end up extinct.

Illustrations by Chris Matthews.

Andrew Sullivan Is Still Racist After All These Years

Viewing racial groups as undifferentiated blobs defined by stereotypes is a dangerous form of bigotry…

Andrew Sullivan’s latest piece of writing for New York is a bizarre thing indeed. Entitled “Why Do Democrats Feel Sorry For Hillary Clinton?”, it spends most of its length making the (correct) argument that the person most responsible for the poor management of the Hillary Clinton presidential campaign was Hillary Clinton. But after laying out the thoroughly convincing case for this bleedingly obvious proposition, Sullivan takes a rather unexpected detour into the politics of race. Suddenly pondering on the causes of achievement gaps among racial groups, Sullivan muses thusly:

Asian-Americans, like Jews, are indeed a problem for the “social-justice” brigade. I mean, how on earth have both ethnic groups done so well in such a profoundly racist society? How have bigoted white people allowed these minorities to do so well — even to the point of earning more, on average, than whites? Asian-Americans, for example, have been subject to some of the most brutal oppression, racial hatred, and open discrimination over the years. In the late 19th century, as most worked in hard labor, they were subject to lynchings and violence across the American West and laws that prohibited their employment. They were banned from immigrating to the U.S. in 1924. Japanese-American citizens were forced into internment camps during the Second World War, and subjected to hideous, racist propaganda after Pearl Harbor. Yet, today, Asian-Americans are among the most prosperous, well-educated, and successful ethnic groups in America. What gives? It couldn’t possibly be that they maintained solid two-parent family structures, had social networks that looked after one another, placed enormous emphasis on education and hard work, and thereby turned false, negative stereotypes into true, positive ones, could it?

As I say, for anybody who had been pleasantly savoring Sullivan’s Clinton critique, the abrupt transition is somewhat jarring. But apparently this is the format of Sullivan’s new New York column; he meanders from subject to subject, riffing on whatever he finds important or what comes into his mind.

And so it’s curious that this, of all things, should be occupying Sullivan’s thoughts. He is, after all, restating a version of an argument that has been made for about forty years, one that has been the subject of countless responses from social scientists. The argument has a name (the “Model Minority” argument) and an extensive Wikipedia article. In its core form, it goes roughly as follows: “I don’t see why black people are always whining about racism in this country. After all, Asian people seem to do just fine. If there’s so much ‘racism,’ why are Asian test scores so high, hm?”

There are more sophisticated versions of this argument, but Sullivan is stating it in its absolute crudest form, suggesting quite openly that instead of America being a “profoundly racist society,” a better explanation for why some races are “earning more” and are more “well-educated” on average is that members of those racial groups have made better choices, e.g. the choice to marry and tell their kids to get an education.

Now, I think the above paragraph by Sullivan is deeply and obviously racist. I also think it is willfully empirically ignorant. But since the argument he is making is very common, and since charges of racism and ignorance are very serious and require substantiation, let me explain why Sullivan’s perspective is both bigoted and mistaken.

The first objectionable aspect of Sullivan’s argument is his suggestion that Asian-Americans have “turned false, negative stereotypes into true, positive ones.” This is, in and of itself, a racist notion: racial stereotypes are crass and prejudiced generalizations, and are therefore inherently racist. The idea that a stereotype about Asians could be “true and positive” doesn’t rescue it from that charge; it simply embraces the stereotype rather than rejecting it.

There are several problems with Sullivan’s embrace of racial stereotypes about Asians. As Matthew Bruenig documented at Jacobin, because racial stereotypes treat race as a helpful analytic category (even though “Asian American” lumps together people of totally different backgrounds), they lead to poor social science. Bruenig points out why it’s ignorant to discuss “Asian Americans” as being “better educated” or “more prosperous.” To begin with, Asian Americans as a group actually have a higher poverty rate than non-Hispanic whites. But more importantly, using “Asian American” as a category obscures the massive differences among different Asian Americans, with Filipino Americans having a substantially lower poverty rate than whites and Hmong Americans having a far, far higher poverty rate than whites. Because some subgroups of Asian Americans have far higher incomes than white Americans, statistics for Asian Americans overall look pretty good. But one can only posit a theory of how “Asian” emphasis on education and family ties has led to their success if one ignores the fact that many groups of Asian Americans have not achieved this incredible success, even though they share whatever distinctively Asian cultural characteristics Sullivan thinks are important.


But stereotypes don’t just create empirical failures by obliviously viewing distinctive groups as amorphous racially-defined blobs. They are also deeply harmful, and there is no such thing as a “positive” racial stereotype. By saying there are such things as “positive” racial stereotypes to begin with, we are allowing for the possibility of ordering racial groups hierarchically (the “diligent” races, the “lazy” races, etc.), and if some groups are associated with “positive” racial traits it is inevitable that others will be associated with negative ones. Members of the British Colonial Office during the 1950s, for example, praised “the skilled character and proven industry of the West Indians,” contrasting them with “the unskilled and largely lazy Asians.” It may seem as if calling West Indians “industrious” is paying them a compliment, but in doing so one is adopting a framework by which character traits are assigned to ethnicities, a framework which views people not as individuals but as the prisoners of their racial identity.

Regardless of what judgments are being made, positive or negative, the inclination to judge people by their race is poisonous. Certain white people see nothing wrong with classifying Asians as “smart” or “hard-working.” After all, what could possibly be objectionable about stereotyping someone as intelligent? But all racial stereotypes have deleterious impacts, particularly on children. For many young Asian Americans, the “Model Minority” stereotype causes serious psychological anxiety. Because, thanks to racial stereotypes, they are expected to be scientifically-minded, humble, and diligent, Asian American students often feel a sense of inadequacy when they cannot live up to unreasonable expectations. That burden of racial expectation discourages them from seeking help when they are struggling, and has been linked to suicide. (Some schools even offer counseling for Asian students trying to deal with the mental health consequences inflicted by Sullivan’s worldview.) Every racial stereotype is ugly, and every single one hurts the people to whom it is applied, and the very idea of a “true, positive” racial stereotype is both unscientific and insidious.

(It’s worth mentioning that Sullivan’s perspective also conforms to a common line of thinking among those who emphasize the importance of racial categories: that if one sees Asians as superior, one cannot be racist. I have seen this repeatedly from those who attempt to defend Bell Curve-type thinking; they believe that if they claim Asians are equal or superior to whites, they cannot be white supremacists. Here we should note the implications of this worldview: that someone who used the n-word and advocated the return of Jim Crow would not be racist so long as he carved out an exception for Asians. And that’s not a theoretical argument: white South Africans exempted Japanese people from Apartheid restrictions by making them “honorary whites.” The fact is that it doesn’t matter what your racial hierarchy is; if you have a racial hierarchy at all, you’re a racist. If you think black people are lazy, but Asian people are superhumans, you are being racist against both groups by treating them as cartoons instead of people.)

There are other serious deficiencies with Sullivan’s argument. For one thing, in his attempt to blame racial cultural traits for differing economic outcomes, Sullivan does not give a moment’s consideration to the differences in history between groups. It’s been pointed out over and over that since black people disproportionately consist of the descendants of slaves, while large numbers of Asian American immigrants are among the most prosperous and well-educated in their home countries, it’s absurd to attribute the resulting economic disparities to freely-made cultural and behavioral choices. An honest person would at least mention and discuss the importance of differences in background, including the education levels of Asian immigrants and the fact that black people spent two centuries being whipped, raped, and killed. Sullivan does not mention and discuss these things. Therefore Sullivan is not an honest person.

That dishonesty is the central problem with Sullivan’s passage. The causes of people’s economic and education outcomes are of central concern to the social sciences; an extraordinary amount of research is done on these topics. Sullivan pretends that this research does not exist, acting as if the long conversation on the errors and dangers of the Model Minority myth simply has not been happening, even though it has been going on for multiple decades. He wishes to beat up on the “social justice” types for their comical view that America is racist, without considering any of the actual evidence they put forth to support the view that America is racist. This means that Andrew Sullivan is not interested in finding out the truth, but in advancing a particular prejudiced worldview.

One has to conclude, then, that Sullivan hasn’t learned much since the days when he helped midwife The Bell Curve and grant flimsy race science a veneer of intellectual respectability. He still believes race is a reasonable prism through which to view the world, and that if only our racial stereotypes are “true,” they are acceptable. He is therefore an unreliable and ideologically-biased guide to political and social science. He is also a racist.

Fines and Fees Are Inherently Unjust

Fining people equally hurts some people far more than others, undermining the justifications of punishment…

Being poor in the United States generally involves having a portion of your limited funds slowly siphoned away through a multitude of surcharges and processing fees. It’s expensive to be without money; it means you’ve got to pay for every medical visit, pay to cash your checks, and frankly, pay to pay your overwhelming debts. It means that a good chunk of your wages will end up in the hands of the payday lender and the landlord. (It’s a perverse fact of economic life that for the same property, it often costs less to pay a mortgage and get a house at the end than to pay rent and end up with nothing. If I am wealthy, I get to pay $750 a month to own my home while my poorer neighbor pays $1,500 a month to own nothing.) It’s almost a law of being poor: the moment you get a bit of money, some kind of unexpected charge or expense will come up to take it away from you. Being poor often feels like being covered in tiny leeches, each draining a dollar here and a dollar there until you are left weak, exhausted, and broke.

One of the most insidious fine regimes comes from the government itself in the form of fines in criminal court, where monetary penalties are frequently used as punishment for common misdemeanors and ordinance violations. Courts have been criticized for increasingly imposing fines indiscriminately, in ways that turn judges into debt collectors and jails into debtors’ prisons. The Department of Justice found that fines and fees in certain courts were exacted in such a way as to force “individuals to confront escalating debt; face repeated, unnecessary incarceration for nonpayment despite posing no danger to the community; lose their jobs; and become trapped in cycles of poverty that can be nearly impossible to escape.” A new report from PolicyLink confirms that “Wide swaths of low-income communities’ resources are being stripped away due to their inability to overcome the daunting financial burdens placed on them by state and local governments.” There are countless stories of people being threatened with jail time for failing to pay fines for “offenses” like un-mowed lawns or cracked driveways.

Critics have targeted these fines because of the consequences they are having on poor communities. But it’s also important to note something further. The imposition of flat-rate fines and fees does not just have deleterious social consequences, but also fundamentally undermines the legitimacy of the criminal legal system. It cannot be justified – even in theory.

I work as a criminal defense attorney, and I have defended both rich and poor clients (mostly poor ones). Many of my clients have been given sentences involving the imposition of fines. For everyone, regardless of wealth, if a fine means less (or no) jail time, it’s almost always a better penalty. But, and this should be obvious, fines don’t mean the same thing to different people. For my poor clients, a fine means actual hardship. In extreme cases, it can mean a kind of indenture, as the reports have pointed out. If you make $1,000 a month, and are trying to pay rent and support yourself, a $500 fine means a lot. It means many months of indebtedness as you slowly work off your debt to the court. It might mean not buying clothes for your child, or forgoing necessary medical treatment.

Of course, the situation changes if you’re wealthy, or even middle-class. You write the check, you leave the court, the case is over. For my wealthy clients, a fine isn’t just the best outcome, it’s a fantastic outcome, because it means the crime you are alleged to have committed has led to no consequences that affect you in any substantive way. You haven’t had to make any sacrifices – your life will look precisely the same in the months after the fine was imposed as it did in the months before. Wealthy defendants want to know: “What can I pay to make this go away?” And sometimes paying to make it go away is exactly what they can do, as courts will often accept pre-trial fines in exchange for dismissal.

As I said, it’s not news that it’s harder to pay a fine if you’re poor. But the implications of this are rarely worked all the way through. For if it’s true that the punishment prescribed by law hurts one class of defendants far more than it hurts another class of defendants, then the underlying justification for having the punishment in the first place is not actually being served, and the basic principle of equality under the law is being undermined.


If fines are imposed at flat rates, poor people are being punished while rich people are not. If it’s true that wealthy defendants couldn’t care less about fines (and a millionaire with a $500 fine really couldn’t care less), then they’re not actually being deprived of anything in consequence of their violation of law. Punishment is supposed to serve the goals of retribution, deterrence, or rehabilitation. Leaving aside for the moment whether these are actually worthy goals, or whether criminal courts actually care about these goals, flat-rate fines don’t serve any of them when it comes to wealthy defendants. There’s no deterrence or rehabilitation, because if you can pay an insignificant fee to commit a crime, there’s no reason not to do it again. It’s wildly unclear how a negligibly consequential fine would deter a wealthy frat boy from continuing to urinate in public, whereas a person trying to escape homelessness might become very careful not to rack up any more fines.

Nor does the retribution imposed have a rational relationship to the significance of the crime. If the point of retribution is to make someone suffer a harm in proportion to the suffering they themselves have imposed (a dubious idea to begin with), flat-rate fines make no sense, because some people are being sentenced to far greater suffering than others. This means that it is unclear what we believe the actual correct retributive amount is supposed to be. It’s as if we punish in accordance with the philosophy of “an eye for an eye,” but we live in a society where some people start with one eye and some people start with twenty. Taking “an eye for an eye” means something quite different when imposed on a one-eyed man than it does on a twenty-eyed man. The one-eyed man has been punished with blindness while the twenty-eyed man can shrug and simply have one of the lenses removed from his spectacles.

This is important for how we view the law. If courts aren’t calibrating fees based on people’s actual wealth, then massively differential punishments are being imposed. Some people receive indenture while others receive no punishment at all, even given the same offense at the same level of culpability. If fines are supposed to have anything to do with making a person experience consequences for their crime, whether retributive consequences or rehabilitative consequences, then punishments are failing their stated purpose and being applied grossly unequally.

It may be objected that fines do not constitute an unequal application of the law, because they are applied equally to all. But the point here is that application of a law equally in each case does not mean “equal application of law to all” in any meaningful sense. In other contexts, this is perfectly clear. A law forbidding anyone from wearing a yarmulke and reading the Torah does not constitute the “equal application of law to all.” It clearly discriminates against Jews, even though Christians, Muslims, Hindus, and the non-religious are equally prohibited from wearing yarmulkes. (The absurdity of “equal application” meaning “legal equality” was well captured by Anatole France, who wrote that “The law, in its majestic equality, forbids the rich as well as the poor to sleep under bridges.”)

It is inevitable that laws will affect people differently, because people will always be different. But if some people are given something that constitutes far more of a burdensome punishment for them than it is for others, the actual purposes of the law aren’t being served. Separate from the equality arguments, for a large class of people punishment simply isn’t even serving its intended function.

Of course, you could easily take a step toward remedying this by fining people a percentage of their income rather than a flat amount (or by redistributing all wealth). If a fine is, say, 2% of one’s annual income, then a person with a $20,000 income would face a $400 fine whereas a person with a $200,000 income would face a $4,000 fine. That’s still grossly unfair, of course, because $400 means far more to the poorer person than $4,000 does to the richer person. You wouldn’t have a fair system of fines until you figured out how to make the rich experience the same kinds of effects that fines impose on the poor. The fact that even massively increasing fines on the rich wouldn’t bring anything close to equal consequences should show how totally irrational our present system is.
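To make the comparison concrete, here is a small sketch of the two schemes in code. The 2% rate and the incomes are the illustrative figures from the paragraph above, not a policy proposal:

```python
# Illustrative only: comparing a flat fine with a percentage-of-income fine,
# using the figures from the text ($500 flat; 2% of annual income).

def flat_fine(annual_income):
    # Everyone pays the same dollar amount, regardless of means.
    return 500

def proportional_fine(annual_income, rate=0.02):
    # The fine scales with income (a simple "day fine"-style scheme).
    return annual_income * rate

for income in (20_000, 200_000):
    flat = flat_fine(income)
    prop = proportional_fine(income)
    print(f"income ${income:,}: flat ${flat} "
          f"({flat / income:.1%} of income), proportional ${prop:,.0f}")
```

Even the proportional scheme, as noted above, understates the gap: $400 out of $20,000 still hurts far more than $4,000 out of $200,000.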

But rather than having courts appropriate larger quantities of rich people’s wealth (though their wealth obviously does need appropriating), we could also simply reduce the harm being inflicted on the poor, through reforming local fines-and-fees regimes. It’s clear that in many cases, fines don’t have anything to do with actual punishment; they’re revenue-raising mechanisms, a legalized shakedown operation, as the Justice Department’s report on Ferguson made clear. Courts aren’t interested in actually calculating the deterrence effects of certain financial penalties. They want to fund their operations, and poor people’s paychecks are a convenient piggy bank.

We know that fines and fees have, in many jurisdictions, created pernicious debt traps for the poor, arising from trivial offenses. But it’s when we examine the comparative impact on wealthy defendants that this system is exposed as being irrational as well as cruel. It doesn’t just ensnare the less fortunate in a never-ending Kafkaesque bureaucratic nightmare. It also fundamentally delegitimizes the entire legal system, by severing the relationship between punishments and their purpose. It makes a joke out of the ideas of both the punishment fitting the crime and equality under the law, two bedrock principles necessary for “law” to command any respect at all. So long as flat-rate fines are disproportionately impacting the poor, there is no reason to believe that criminal courts can ever be places of justice.

It’s Basically Just Immoral To Be Rich

A reminder that people who possess great wealth in a time of poverty are directly causing that poverty…

Here is a simple statement of principle that doesn’t get repeated enough: if you possess billions of dollars, in a world where many people struggle because they do not have much money, you are an immoral person. The same is true if you possess hundreds of millions of dollars, or even millions of dollars. Being extremely wealthy is impossible to justify in a world containing deprivation.

Even though there is a lot of public discussion about inequality, there seems to be far less talk about just how patently shameful it is to be rich. After all, there are plenty of people on this earth who die—or who watch their loved ones die—because they cannot afford to pay for medical care. There are elderly people who become homeless because they cannot afford rent. There are children living on streets and in cars, there are mothers who can’t afford diapers for their babies. All of this is beyond dispute. And all of it could be ameliorated if people who had lots of money simply gave those other people their money. It’s therefore deeply shameful to be rich. It’s not a morally defensible thing to be. 

To take a U.S. example: white families in America have 16 times as much wealth on average as black families. This is indisputably because of slavery, which was very recent (there are people alive today who met people who were once slaves). Larry Ellison of Oracle could put his $55 billion in a fund that could be used to just give houses to black families, not quite as direct “reparations” but simply as a means of addressing the fact that the average white family has a house while the average black family does not. But instead of doing this, Larry Ellison bought the island of Lanai. (It’s kind of extraordinary that a single human being can just own the sixth-largest Hawaiian island, but that’s what concentrated wealth leads to.) Because every dollar you have is a dollar you’re not giving to somebody else, the decision to retain wealth is a decision to deprive others.

Note that this is a slightly different point than the usual ones made about rich people. For example, it is sometimes claimed that CEOs get paid too much, or that the super-wealthy do not pay enough in taxes. My claim has nothing to do with either of these debates. You can hold my position and simultaneously believe that CEOs should get paid however much a company decides to pay them, and that taxes are a tyrannical form of legalized theft. What I am arguing about is not the question of how much people should be given, but the morality of their retaining it after it is given to them.

Many times, defenses of the accumulation of great wealth depend on justifications for the initial acquisition of that wealth. The libertarian-ish philosopher Robert Nozick gave a well-known hypothetical that is used to challenge claims that wealthy people did not deserve their wealth: suppose millions of people enjoy watching Wilt Chamberlain play basketball. And suppose, Nozick wrote, that each of these people would happily give Wilt Chamberlain 25 cents for the privilege of watching him play basketball. And suppose that through the process of people paying Wilt Chamberlain, he ended up with millions of dollars, while each of his audience members had (willingly) sacrificed a quarter. Even though Wilt Chamberlain is now far richer than anyone else in the society, would anyone say that his acquisition of wealth was unjust?

Libertarians use this example to rebut attempts to say that the rich do not deserve their wealth. After all, they say, the process by which those rich people attained their wealth is totally consensual. We’d have to be crazy Stalinists to believe that I shouldn’t have the right to pay you a quarter to watch you play basketball. Why, look at Mark Zuckerberg. Nobody has to use Facebook. He is rich because people like the product he came up with. Clearly, his wealth is the product of his own labor, and nobody should deprive him of it. People on the right often defend wealth along these lines. I earned it, therefore it’s not unfair for me to have it.

But there is a separate question that this defense ignores: regardless of whether you have earned it, to what degree are you morally permitted to retain it? The question of getting and the question of keeping are distinct. As a parallel: if I come into possession of an EpiPen, and I encounter a child experiencing a severe allergic reaction, the question of whether I am obligated to inject the child is distinguishable from the question of whether I obtained the pen legitimately. It’s important to be clear about these distinctions, because we might answer questions about systems differently than we answer questions about individual behavior. (“I don’t hate capitalism, I just hate rich people” is a perfectly legitimate and consistent perspective.) 

I therefore think there is a sort of deflection that goes on with defenses of wealth. If we find it appalling that there are so many rich people in a time of need, we are asked to consider questions of acquisition rather than questions of retention. The retention question, after all, is much harder for a wealthy person to answer. It’s one thing to argue that you got rich legitimately. It’s another to explain why you feel justified in spending your wealth upon houses and sculptures rather than helping some struggling people pay their rent or paying off a bunch of student loans or saving thousands of people from dying of malaria. There may be nothing unseemly about the process by which a basketball player earns his millions (we can debate this). But there’s certainly something unseemly about having those millions. 

One of the reasons wealthy people rarely have to defend their choices is that “shaming the rich” is not really compatible with any of the predominating political perspectives. People on the right obviously believe that having piles of wealth is fine. Centrist Democrats can’t attack rich people for being rich because they’re increasingly a party for rich people. And socialists (this is the interesting case) tend to believe that questions about the morality of having wealth are relatively unimportant, because they are far more interested in how the state divides up wealth than in what individuals choose to do with it. As G.A. Cohen points out in If You’re an Egalitarian, How Come You’re So Rich?, Marxists have been concerned with eliminating capitalism generally, which has kept them from thinking about questions of the justice of people’s personal choices. After all, if the problem of inequality is systemic, and rich people do not really make choices but pursue their class interests, then asking whether it is moral for wealthy people to retain their wealth is both irrelevant (because individual decisions don’t affect the systemic problem) and incoherent (because the idea of a moral or immoral capitalist makes no sense in the Marxist framework). In fact, there is a certain leftist argument that giving away wealth in the form of charity is actually bad, because it allows capitalism to look superficially generous without actually altering the balance of power in the society. “The worst slave owners were those who were kind to their slaves, because they prevented the core of the system from being realized by those who suffered from it,” as Oscar Wilde ludicrously put it. (In their book Blueprints for a Sparkling Tomorrow, Nimni and Robinson parody this perspective by portraying two leftist academics who insist on being rude to servers in restaurants, on the grounds that being polite to them obscures the true brutality of class relations.)

But I think it is a mistake to avoid inquiring into the moral justifications for wealth. This is because I think individual decisions do matter, because if I am an extremely wealthy man I could be helping a lot of people who I am choosing not to help. And for those people, at least, it makes a difference when a billionaire decides to retain their wealth rather than rid themselves of it.

Of course, when you start talking about whether it is moral to be rich, you end up heading down some difficult logical paths. If I am obligated to use my wealth to help people, am I not obligated to keep doing so until I am myself a pauper? Surely this obligation attaches to anyone who consumes luxuries they do not need, or who has some savings that they are not spending on malaria treatment for children. But the central point I want to make here is that the moral duty becomes greater the more wealth you have. If you end up with a $50,000-a-year or $100,000-a-year salary, we can debate what amount you should spend on helping other people. But if you earn $250,000 or $1 million, it’s quite clear that the bulk of your income should be given away. You can live very comfortably on $100,000 or so and have luxury and indulgence, so anything beyond that is almost indisputably indefensible. And the super-rich, the infamous “millionaires and billionaires,” are constantly squandering resources that could be used to create wonderful and humane things. If you’re a billionaire, you could literally open a hospital and make it free. You could buy up a bunch of abandoned Baltimore rowhouses, do them up, and give them to families. You could help make sure no child ever had to go without lunch.

We can define something like a “maximum moral income” beyond which it’s obviously inexcusable not to give away all of your money. It might be $50,000. Call it $100,000, though. Per person. With an additional $50,000 allowed per child. This means two parents with a child can still earn $250,000! That’s so much money. And you can keep it. But everyone who earns anything beyond it is obligated to give the excess away in its entirety. The refusal to do so means intentionally allowing others to suffer, a statement which is true regardless of whether you “earned” or “deserved” the income you were originally given. (Personally, I think the maximum moral income is probably much lower, but let’s just set it here so that everyone can agree on it. I do tend to think that moral requirements should be attainable in practice, and a $30,000 threshold would actually require people to experience some deprivation, whereas a $100,000 threshold indisputably still leaves you with an incredibly comfortable lifestyle, better than almost any other had by anyone in history.)
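Stated as a formula, with the deliberately generous thresholds above ($100,000 per adult, $50,000 per child, both hypothetical numbers chosen for the sake of argument), the rule looks like this:

```python
# Hypothetical thresholds from the text: $100,000 per adult, $50,000 per child.
def maximum_moral_income(adults, children):
    return 100_000 * adults + 50_000 * children

def excess_to_give_away(household_income, adults, children):
    # Everything above the threshold is, on this argument, owed to others.
    return max(0, household_income - maximum_moral_income(adults, children))

# Two parents with one child may keep up to $250,000.
print(maximum_moral_income(2, 1))
```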

Of course, wealthy people do give away money, but so often in piecemeal and self-interested and foolish ways. They’ll donate to colleges with huge endowments to get needless buildings built and named after them. David Geffen will pay to open a school for the children of wealthy university faculty, and somehow be praised for it. Mark Zuckerberg will squander millions of dollars trying to fix Newark’s schools by hiring $1,000-a-day consultants. Brad Pitt will try to build homes for Katrina victims in New Orleans, but will insist that they’re architecturally cutting-edge and funky looking, instead of just trying to make as many simple houses as possible. Just as the rich can’t be trusted to spend their money well generally, they’re colossally terrible at giving it away. This is because so much of it is about self-aggrandizement, and “philanthropy” is far more about the donor than the donee. Furthermore, if you’re a multi-billionaire, giving away $1 billion is morally meaningless. If you’ve got $3 billion, and you give away $1 billion, you’re still incredibly wealthy, and thus still harming many people through your retention of wealth. You have to get rid of all of it, beyond the maximum moral income.

The central point, however, is this: it is not justifiable to retain vast wealth. This is because that wealth has the potential to help people who are suffering, and by not helping them you are letting them suffer. It does not make a difference whether you earned the vast wealth. The point is that you have it. And whether or not we should raise the tax rates, or cap CEO pay, or rearrange the economic system, we should all be able to acknowledge, before we discuss anything else, that it is immoral to be rich. That much is clear.

The Racism v. Economics Debate Again

Anyone who says the election was “about race” (or “about” anything) has little regard for truth…

I would have thought we could have moved on by now. Both before and after the 2016 election, there were months of acrimonious debate over the question of whether Trump voters were motivated by racial hatred or anxiety over their economic prospects. And I thought the general conclusion would have been that the premise was wrong to begin with, that you couldn’t talk about “Trump voters” as a single unit, because the category includes a broad spectrum of people with a varying set of motivations. Some of them liked Trump’s rhetoric on jobs and globalization, some liked his rhetoric on immigration and Islam, and some liked all of it. Both of the appeals obviously contributed to his victory. (Those of us on the left, however, frequently suggested that Democrats should focus on winning over the economically-motivated Trump voters, rather than the wealthy racists, because the ones anxious about jobs are the ones whose support Democrats have a greater chance of peeling off.)

The “racism or economics” debate is a pretty easy one to resolve, then. Trump’s campaign was based on bigotry, but also fueled by a backlash to the unfairness of the contemporary globalized economy. And many workers fell for his promises to bring jobs back, just as racists got excited over his stigmatization of Mexican immigrants. A question that appears contentious and intractable actually has a fairly obvious answer.

But British journalist Mehdi Hasan has decided to reignite the debate once more, with a new column in The Intercept arguing that racism was the primary cause of Trump’s victory and that Democrats who say Trump voters were hurting economically are “trafficking in alternative facts.” Hasan is blunt and his conclusions unqualified: “The race was about race,” he says. “It’s not the economy. It’s the racism, stupid.” Hasan singles out Bernie Sanders and Elizabeth Warren for criticism, saying that by claiming Trump voters were economically motivated, Sanders and Warren are ignoring the “stubborn facts” and “coddling…those who happily embraced an openly xenophobic candidate.”

Hasan’s column repeats arguments that have been made over and over for two years, from Salon to Vox to The Atlantic. Many liberal pundits have consistently dismissed the idea that Trump voters acted out of defensible economic motives, instead suggesting that they were just as deplorable as Hillary Clinton made them out to be. (In fact, they go beyond Clinton, who was trying to draw a distinction between those who were deplorable and those who should be respected and listened to.) The position is somewhat surprising coming from Hasan, though, who has often seemed sympathetic to the Sanders left, and it’s doubly surprising for appearing in Glenn Greenwald’s Intercept, which has been consistently critical of Vox-ian liberalism.

If Hasan thinks this is true, then, it is worth dealing with his evidence. His argument for the proposition that the election was “about race” is as follows: There are a series of statistical correlations between racism and Trump support. Donald Trump did better than Romney or McCain among voters with high racial resentment. The best way to predict whether any given person is a Trump supporter is to ask them whether they think Barack Obama is a Muslim. If they say yes, they’re almost certainly a Trump supporter. (“This is economic anxiety? Really?” comments Hasan incredulously.) Those who hold negative racial stereotypes about African Americans are far more likely to be Trump supporters. (“Sorry, but how can any of these prejudices be blamed on free trade or low wages?”) On the other hand, having a low income did not predict support for Trump, and Trump supporters actually tend to have higher incomes than Clinton supporters. And while there may be “economic anxiety” among Trump voters, it tends to be the product of racial resentment rather than its cause; in 2016, people who were racist tended to be economically anxious, while people who were economically anxious did not thereby become racist.


This is the entirety of the facts that Hasan presents to support his conclusion that the election was “about” race and that Bernie Sanders is factually wrong to say things like “millions of Americans registered a protest vote on Tuesday, expressing their fierce opposition to an economic and political system that puts wealthy and corporate interests over their own.”

I have long been critical of those in the political press who loudly insist on their superior allegiance to Fact and Truth. By contrast with Hasan, who quotes John Adams that facts are “stubborn things,” I tend to believe facts are fundamentally slippery things. Statements that are literally factually true can often be highly misleading, and sometimes you do actually need the addition (not substitution) of some “alternative facts” in order to understand what is really going on. For example: I can cite GDP growth as proof that Americans are doing well economically. But it’s not until I understand the distribution of the economic benefits across society that I will know how the majority of Americans are actually doing. Or I can cite the fact that lifespans are increasing as evidence that American healthcare is “making us live longer.” But it might be that richer people are living longer while poorer people are actually living less long, making the word “us” erroneous. If a fact is true, but is incomplete, then it might actually leave us more ignorant than we were before.

This is precisely the situation with Hasan’s statistics. They are carefully selected to support his argument, with the statistics that don’t support it simply ignored. He, like many others who have written “it’s about racism” pieces, depends heavily on evidence that racism “predicts” support for Trump while income doesn’t, meaning that racists are more likely to be Trump supporters while poor people aren’t more likely to be Trump supporters.

But if we think about this statistic for a moment, we can see why it’s a dubious way of proving that Trump support was “about” race. First, Hasan is confusing the statement “Most racists are Trump supporters” with the statement “Most Trump supporters are racists.” Of course most racists are Trump supporters; racists tend to be on the political right, because the political left defines itself heavily by its commitment to advancing the social position of racial minorities. It would be shocking if racism didn’t predict support for Trump, because it would mean that racists had decided to ignore David Duke’s endorsement of Trump and vote for a candidate who embraced the language of “intersectional” social justice feminism. Nor is it surprising that Trump did better with racists than his more centrist predecessors. The more racist your campaign rhetoric is, the more the racists like you.

The income statistic is similarly unsurprising. Of course Trump’s supporters tend to be higher income. Republicans are the party of low taxes on the rich, and Trump wants to lower taxes on the rich. Democrats are the party of social programs for the poor. So poor people were always going to disproportionately be for Clinton, and rich people were going to disproportionately be for Trump. Furthermore, since Democrats are disproportionately the party of racial minorities, and racial minorities tend to be less wealthy than white people (due in part to several hundred years of black enslavement), the racially diverse Democratic base will ensure that poverty doesn’t predict Trump support.

Note how neither of these facts address the actual question. If we want to understand the relative role of race and economics in creating votes for Donald Trump, it doesn’t really help us to know that racists tend to be Trump voters. Imagine we have 100 voters, 10 of whom are high-income racists and 90 of whom are low-income non-racists concerned about the economy. Well, we know our 10 rich racists will probably vote for Donald Trump. And we know that being a low-income non-racist doesn’t really predict support for Donald Trump, so let’s say those votes split equally, or even break slightly in favor of Clinton. We count the votes, and the result is: 54 Trump, 46 Clinton. Trump gets 10 rich racists, plus 44 poor non-racists. Clinton gets 46 poor non-racists.
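The hypothetical above can be worked through in a few lines of arithmetic (the numbers are this article’s illustration, not real polling data):

```python
# 100 voters: 10 high-income racists (all for Trump) and 90 low-income,
# economically anxious non-racists who break slightly for Clinton (46 to 44).
racists_for_trump = 10
non_racists_for_trump = 44
non_racists_for_clinton = 46

trump_total = racists_for_trump + non_racists_for_trump  # 54
clinton_total = non_racists_for_clinton                  # 46

# Racism "predicts" Trump support perfectly here, yet racists make up under
# a fifth of Trump's coalition: most of his votes came from economic anxiety.
racist_share_of_trump_vote = racists_for_trump / trump_total
print(trump_total, clinton_total, round(racist_share_of_trump_vote, 2))
```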

We can see, then, what can be concealed by statistics showing that “wealthy racists tend to support Trump” and “poor and economically anxious people tend to support Clinton.” Those two statistics are consistent with a situation in which the vast majority of Trump’s support occurs for economic reasons rather than racial ones. Yes, it’s true, the presence of racists in Trump’s coalition put Trump “over the top.” But it’s also true to say that the Democrats losing half of all economically anxious people put Trump over the top, and if you focused on the racism, you’d be focusing on the minor part of Trump’s overall support.

In laying out this hypothetical, I am not attempting to show that this is actually what happened. The two statistics (“racists support Trump” and “poor people support Clinton”) are also consistent with a situation in which 100% of Trump’s supporters are racist. Instead, I am demonstrating that the two premises in and of themselves can’t lead us to the conclusion Hasan wants to draw (and that other pundits have drawn over and over from them), which is that Trump’s support was about racism.

Hasan calls the idea that Trump “appealed to the economic anxieties of Americans” a fiction and concludes that “instead, attitudes about race, religion, and immigration trump (pun intended) economics.” But what he’s proved is that racial attitudes trump economics as predictors of a particular individual person’s support for Donald Trump, not that racial attitudes trump economics as the main issue Trump voters cared about or the main reason for his success. If we take the question “Was the election about race or about economics?” to mean “What was the relative role of race issues and economic issues in determining the outcome of the election?” then Hasan’s evidence does not actually address his question.

To get closer to a real answer, we might do better to look at what the most important issues were to Trump voters. What attracted them to Trump? Do they care more about economics or about race? We can begin to get an answer from a Pew poll conducted in July of 2016, which ranked issues by their importance to voters, broken down by the candidate they were supporting. Among voters generally, the economy was considered a “very important” issue to 84%, with immigration only the sixth-most important issue. Among Trump supporters, though, economic issues were considered very important to 90%, compared to 80% of Clinton supporters. For Trump supporters, immigration was the third-most important issue, with 79% considering it very important. Thus nearly every Trump supporter was “very” concerned about economic issues, and economic issues won out over immigration by more than 10 percentage points.

We still don’t know very much from this. But we do know that a good chunk of Trump supporters cared about economics without caring as much about immigration (and we must assume that all Trump voters who cared about immigration were racists in order to accept Hasan’s conclusion). Of course, “being worried about the economy” can mean a lot of things; a rich man can be worried about his tax rate increasing, and we don’t know anything about racial attitudes from this survey. But it should caution us against coming to simple conclusions like “the election was about race.”

Even if we stick to demonstrations of the factors that predict Trump support, we find Hasan burying crucial evidence. Hasan quotes a Gallup report that, in his words, “found that Trump supporters, far from being the ‘left behind’ or the losers of globalization, ‘earn relatively high household incomes and are no less likely to be unemployed or exposed to competition through trade or immigration.’” But let’s look at the original context of that quote:

[Trump’s] supporters are less educated and more likely to work in blue collar occupations, but they earn relatively high household incomes and are no less likely to be unemployed or exposed to competition through trade or immigration. On the other hand, living in racially isolated communities with worse health outcomes, lower social mobility, less social capital, greater reliance on social security income and less reliance on capital income, predicts higher levels of Trump support.

Hasan’s presentation of the Gallup analysis therefore borders on intellectual dishonesty. If you quote the bit about high average incomes and no lower likelihood of unemployment (facts which, as I explained before, we would expect given the general composition of the Republican base compared to the Democratic one), but you don’t quote the part about bad health outcomes, blue collar jobs, and low social mobility, then you’re selecting only those facts that confirm your worldview and refusing to deal with the ones that contradict it.

This is the trouble with Hasan’s overall argument, and with these types of pieces generally. They accuse others of ignoring “the facts,” but they don’t really care about facts themselves. Otherwise, why wouldn’t Hasan mention the fact that the economy was “very important” to 90% of Trump supporters? Why wouldn’t he at least deal with that statistic, even if he had a good argument for why it should be disregarded? It’s the duty of a responsible political analyst to address the evidence that undermines their position.

Hasan is likewise unfair in his characterization of the Sanders/Warren position on Trump voters. He says that “for Sanders, Warren and others on the left, the economy is what matters most and class is everything.” But Sanders repeatedly accused Trump of running a “campaign of bigotry” and whipping up nativist sentiments. In the op-ed Hasan quotes, Sanders says that “millions” of Trump voters voted out of economic concerns. But he does not deny that large numbers of Trump’s voters may be racist. (He has explicitly acknowledged that “some are.”)

In fact, I don’t know a single leftist who denies that Trump ran a racist campaign that energized racist voters. The leftist position is, rather, that there are many (“millions of”) Trump voters who were drawn to his anti-Establishment stance because of their economic hardships, that Democrats should have had a better message to target those particular Trump voters, and that suggesting Trump voters as a unit are racist is both politically unwise and unsupported by evidence. Hasan is extremely derisive toward this position, with his repeated suggestion that it’s factually ignorant, even stupid. But he doesn’t offer any actual proof for why it’s wrong. Instead, he willfully mischaracterizes it.

Actually, the left-wing stance here should be extremely uncontroversial. It doesn’t even have to presume that the majority, or even a very large percentage, of Trump voters were “economically anxious” rather than racist. Consider the 100-voter scenario from earlier. Say we have 48 rich racists and 52 poor anxious people. Trump snags all the racists by default, but then manages to lure 4 anxious poor people through his message on trade. Trump wins. In that situation, it’s still worth pointing out that Democrats needed a better economic message, and that economics were an important determinant of the outcome. A lot of the misguided attempts to decide what the election was “about” result from failures to think about marginal differences. If most Trump voters were racist, and a minority were economically anxious, and the election was decided by a small number of votes in Rust Belt states (which it was), then politically you might reasonably decide that it’s not worth focusing on the racists (who will never vote for you) and instead you should craft a rhetorical appeal to the economically anxious Rust Belt voters who can mean the difference between winning and losing. (As I said, though, so much depends on how you want to define the phrase “what the election was about.” If it’s about majorities, you might get one answer. If it’s about margins, you might get another. In Trump: Anatomy of a Monstrosity I go into more detail about how anyone can construct any story they like about the election and have it be true in a certain sense.)

I should add here that the necessity of fairness applies no matter which side of this you think is correct. If I say “90% of Trump voters thought the economy was the most important issue, therefore the race was about economics,” and I do not mention or deal with the disproportionate amount of racial prejudice among Trump voters, I am also cherry-picking the facts that support my preferred conclusion. Anyone who tells you the one issue that the election was “about,” and cites factors that “predict” support, without telling you the full range of relevant information, is arguing either ignorantly or dishonestly. They are not putting all of the facts on the table; rather, they are just giving the evidence that supports their own position. This is partisanship and bias, which nobody should engage in. Having a well-defined set of political commitments does not justify misrepresentations of the truth.

Frankly, Hasan’s column saddens me. I have really respected some of the excellent work he has done on his interview programs (even though he has an irritating tendency to constantly interrupt his guests). And I’m disappointed in The Intercept for publishing it, given the publication’s promise to follow Glenn Greenwald’s idea that you can be opinionated and honest at the same time. That’s not because the column offers a conclusion I disagree with; I’m happy to have a discussion about the role of racism in the 2016 election, as weary as I am of that particular debate. Rather, it’s because Hasan uses the characteristic argumentation technique of the glib pundit: instead of helping the reader think through an issue and showing his work, he just throws out a few random statistics that back up his position.

The truth about race and economics in the election is easy to grasp. They both mattered, and we can focus on whichever we choose. (Personally, I think that means focusing on whichever is most useful or instructive, and that the question “Do Trump supporters tend to be racist?” is less consequential than “Were there enough non-racist, economically anxious Trump voters that economic anxiety played a significant role in his margin of victory, meaning Democrats need to address the issue more?”) And if Mehdi Hasan were as committed to Facts and Truth as he professes himself to be, he would be happy to concede this rather than perpetuating a pernicious misrepresentation.

Rahm Emanuel’s College Proposal Is Everything Wrong With Democratic Education Policy

Emanuel’s idea is the reductio ad absurdum of the “college solves poverty” idea…

On Wednesday, Chicago Mayor Rahm Emanuel announced a new educational proposal: starting with this year’s freshman class, every student in the Chicago public school system will be required to show an acceptance letter from a college, a trade school or apprenticeship, or a branch of the military in order to graduate. “We live in a period of time when you earn what you learn,” Mayor Emanuel said. (Democratic politicians’ attempts at folksiness are always pretty grim.) “We want to make 14th grade universal,” he also said. The proposed measure is almost certainly a publicity stunt which will have little effect in practice. But Emanuel has made it clear how he thinks educational problems should be solved.

The Emanuel plan is perhaps the stupidest idea a nationally prominent politician has publicly endorsed in the past decade. I hesitate to even explain why it’s stupid lest I insult my readers’ intelligence by belaboring the obvious. But it’s worth spelling out what’s wrong with this, because the fact that a major Obama-aligned Democratic politician is attempting to do this says a great deal about the worldview of the establishment Democratic Party. So here goes.

In Mayor Emanuel’s opinion, working-class kids are too stupid to recognize their own interests. They’re simply unaware that people who go to college earn more than people who don’t, which is why (silly them) they don’t go to college. If you just force them to go, by withholding their high school diplomas until they promise to attend college, they’ll all become highly compensated white-collar workers and America will be a wealthier place.

Allow me to propose an alternative model: working-class kids are not stupid. They’re aware that college grads earn more money on average than they ever will. They’re also aware that not all college degrees are created equal, and that a degree from a community college or some fly-by-night for-profit—the kind of school most working-class kids from Chicago might actually get into—is dramatically less valuable than one from Sarah Lawrence, where Rahm got his BA. They’re aware that college degrees aren’t what they once were, partly because so many degrees are from mediocre institutions; perhaps they’ve seen family members work hard to get that University of Phoenix diploma only to wind up little better off than they’d have been otherwise.

They’re also aware that college costs money, not only money for tuition but all the money you won’t be able to earn while you’re in school, and that people whose parents can’t support them, people who may in fact need to help support their families themselves, can’t afford to just not work for two to four years. Finally, they’re aware that college is hard, particularly for working-class kids with less academic preparation than their middle-class peers who also have less social support and need to work while their peers are studying, and that working-class kids are at a high risk of dropping out. They know that going into debt to attend a college and then dropping out with no degree can be financially catastrophic.

In other words, they know, unlike their mayor, that what happens to the average kid who goes to college—a middle-class kid from the suburbs with white-collar parents who can afford to subsidize his textbooks and partying for four years—is a very poor indicator of what will happen to them, personally, if they decide to go to college. Knowing all this, they make their choice; 62% of Chicago’s high school students decide to have a crack at college after they graduate, 38% don’t.

Now, it may well be that there are a few kids in that 38% who are making the wrong choice, just as there are a few in that 62% (very possibly more than a few) who are making the wrong choice and will just end up dropping out with debt or graduating with a worthless degree and more debt. It might be that a better school guidance program would push some kids into college for whom it’s the right decision. But Rahm isn’t proposing to nudge a few more kids into college; he’s proposing to hold the high school degree of every student in the system hostage until they all go to college, or sign up for the army, or enter an apprenticeship.

What’s likely to happen if his proposal passes? Well, trade schools and apprenticeship programs are bright enough to know that the world only needs so many plumbers, so not a lot of students are going to manage to go that route. Some will join the army, at which stage Mr. Emanuel can congratulate himself for having forced some working-class kids to die for their country on pain of facing the stigma of the high school dropout for the rest of their lives. Some will simply decide to leave high school without graduating. But many will be forced into a choice they know is the wrong one, and will enroll in whatever community college or awful open-admissions for-profit college they can get an acceptance letter from. Expect to see the already overburdened and underfunded community college system pushed to the wall. Expect to see a small boom in the for-profit college industry and the exploitative student loan industry that feeds it. Expect to see many, many students drop out of school with nothing to show for it but un-bankruptable education debt that will haunt them for years.


And finally, perhaps most importantly, expect to see those students who do manage to graduate from whatever bottom-tier school is willing to accept them quickly discover that the degree Rahm Emanuel forced them to earn at great personal expense isn’t worth the paper it’s printed on. First, because college-educated workers, like any other commodity, are subject to the law of supply and demand, and Rahm’s plot to dump hundreds of thousands more of them onto the Chicago labor market will cause supply to greatly outpace demand and prices to crater. Second, because employers will recognize that people who got a college degree from a bottom-tier school that slashed admissions standards to take advantage of the Rahm-and-debt-fueled bonanza don’t have the same skill set or qualifications as the college graduates they currently pay higher wages to. In other words, producing a genuinely more educated workforce is a lot harder than printing a whole bunch more college diplomas, as Rahm plans to do; but even if you could produce a genuinely more educated workforce, it wouldn’t raise wages. You’d just have more people competing for the same number of white-collar jobs, and wages would go down.

(Of course, middle-class kids who went to Sarah Lawrence would still do just fine.)

Emanuel’s plan, in other words, will be a disaster if implemented. But if the plan were just his own idiosyncratic idiocy, it would be beneath refutation. Unfortunately, it’s not. The mayor of Chicago is an utterly characteristic representative of the dominant wing of the Democratic Party, and his “you earn what you learn” claptrap reflects what has been a core element of its messaging and policy for decades: the notion that we can solve poverty through education. For most of my lifetime, the Democratic Party’s answer to the apparently permanent stagnation of working-class wages has been to advise the electorate that it’s a knowledge economy and only a better-educated workforce can hope to earn more.

This is terrible policy based on obviously shoddy reasoning: while it’s true that highly educated computer programmers make a lot of money, the notion that if everyone were a highly educated computer programmer everyone would make more money is absurd, first because not everyone can become a highly educated computer programmer and second because if everyone could then computer programmers would no longer make a lot of money.

It should be emphasized, though, that on top of being terrible policy this is also terrible messaging. When voters hear that your analysis of the economy is that it simply has no place anymore for uneducated workers, and that your plan to increase working-class wages is “educate people better for the knowledge economy,” they get three messages. First, that if you’re a low-income thirty-year-old high school graduate with a family who can’t go to school, the Democrats’ plan for you is that you’ll die poor, because hey, it’s a knowledge economy, what can they do? Second, that Democrats think your poverty is pretty much your fault for not doing better in school. And third, that Democrats are so completely out of touch that they genuinely believe that becoming a high-tech worker is a serious option for your working-class kids. In other words, what you hear is that Democrats don’t know you, don’t care about you, look down on you, and have no plan to help you. Is it any wonder that you don’t bother to vote, or that if you do you vote for someone who promises to bring the jobs back?

Every time Democrats say or imply that there’s no way for people to succeed in the 21st-century economy without a college degree, they announce loud and clear that they’ve largely given up on helping the existing working class.

But if the Democratic line on education fails as both policy and politics, why are they so attached to it? I’d suggest two reasons.

First, claiming that class differences result from educational achievement flatters the American elite’s sense of its own meritocracy. If differences in income are mostly explained by differences in education, elites don’t have to worry about why their own incomes have skyrocketed over the past three decades while the rest of the country has done so poorly; it’s the natural result of market forces rewarding talent and hard work. You can see this perhaps most clearly in Silicon Valley entrepreneurs’ excitement about charter schools, an excitement most of the Democratic establishment shares: charters are the noblesse oblige of an utterly self-confident meritocratic elite, an elite which believes that they earned what they have and that the way to make everyone else better off is not to take from the deserving rich and give to the undeserving poor but to make the poor more deserving. (The fact that many of these charters’ educational model is to replace those stupid, lazy public school teachers with brilliant and disruptive Yale graduates says everything here.) The education-solves-poverty line sells well with affluent white-collar professionals, and the average Democratic politician spends vastly more time addressing herself to the needs of those professionals than talking to working-class voters.

But second, and far more importantly, building an economy that once again provides decent, well-paying and dignified jobs for the working class is very difficult. It’s far easier to pretend that the jobs are waiting in the wings if only the working class were educated enough to deserve them than to take on the employers who refuse to offer those jobs. Rebuilding the American working class would require a higher minimum wage, a serious effort to encourage unionization in the service sector, and, at least in areas with sky-high unemployment (places like Chicago), a major federal jobs program to put people to work and force private-sector employers to raise wages. Every one of those initiatives would require direct confrontation with businesses big and small. Creating more innovative charter schools, or forcing more students into college, requires no such confrontation. Placing the burden of fixing the economy on working-class students and their teachers rather than on big business and the wealthy makes plenty of political sense, in its way.

But it won’t work. And liberal pundits who scoff at Trump voters by reminding them that those manufacturing jobs he promised won’t come back would do well to remember that Democrats’ agenda on working-class jobs is just as empty a promise.

The Regrettable Decline of Space Utopias

Why is it only the libertarians who fantasize about space these days?

Star Trek is one of those TV shows whose basic premise would be horrifying if the show weren’t so utterly committed to its own optimism. Viewed in the abstract, it’s hard to imagine how anybody stays sane on a starship. Star Trek characters are constantly flying blind into some fresh hell. In literally every corner of the universe they visit, Starfleet encounters some fucked-up shit that defies all extant scientific knowledge. Crew members are routinely bodyswapped, brainwashed, possessed by alien lifeforms, or implanted with false memories. Oh, and most crew members bring their entire families on board, so during the ship’s weekly brushes with death, they all get to grapple with the knowledge that their spouse and children will almost certainly be burned alive or suffocated in the vacuum of space. Everyone on that show should be on the verge of complete psychosis, but somehow, they all seem pretty contented with their lives. The characters’ preternatural level of peace with the unknown is probably one of the main reasons why Star Trek is extraordinarily comforting to watch.

Another reason why Star Trek is comforting is that there are no goddamn lawyers in space.

This is not completely true. There are a couple of lawyers in space. But there are no lawyers affiliated with the United Federation of Planets, the big, happy humanitarian alliance of planetary civilizations that are committed to universal peace, cultural interchange, and the accumulation of scientific knowledge. There are a few itinerant JAGs, but there’s no shipboard counsel. There are no legal teams dispatched to scenes of interstellar conflict. When characters find themselves in compromising situations, they never ask if they can speak to an attorney.

This, on the one hand, is completely bonkers. After all, non-Federation planets have all kinds of nutty legal standards, ranging from “guilty until proven innocent” to “automatic death penalty for anybody who accidentally steps on a flowerbed inside the invisible Punishment Zone.” Given the many entirely foreseeable dangers of this approach, you’d think that every starship would have some highly-trained legal wonk on board, ready to deal with these horrifying situations. But nope. It’s implied that the Federation does have lawyers somewhere, and there is even a loose notion that they are important to the effective functioning of the judicial system. In one episode, we learn that during a period of Earth history known as the Post-Atomic Horror (which is scheduled to occur—get ready, guys—in the mid-21st century), all the world’s lawyers were systematically murdered. This is characterized as having been an undesirable development for humanity, so we can infer that the legal profession was subsequently reinstated. But whenever there’s a legal hearing of any kind, Starfleet personnel either A) represent themselves, or B) are represented by a random bridge officer who is deputed to act as counsel.

Now you might say, on the other hand, that we shouldn’t read too much into this. Maybe writing a random lawyer into a storyline would just have meant one more actor cluttering up the set, frittering away the weekly episode budget with dispensable lines. But the complete absence of lawyers across multiple Star Trek series, each under different creative direction, each with their own standalone law-centric episodes, is at least a little weird. So is there some other reason why the Federation has no need for lawyers?


One of the central premises of the Star Trek universe, which is set a couple centuries into the future, is that humanity has evolved—not dramatically beyond all recognition, but nonetheless significantly. After a period of mass calamity on Earth, characterized by nuclear war, genocide, and famine, the remainder of Earth’s global population finally comes to the negotiating table, as it were. A world government is established. Societies are rebuilt. Money is abolished. All basic human needs are provided for. People enter professions, learn trades, and provide services because they find these activities fulfilling, not out of economic necessity. Crime is almost nonexistent; with the elimination of material want, the impetus for most kinds of crime is also eliminated, and it’s implied that psychological dispositions towards violence are somehow detected and rehabilitated in their early stages. The establishment of an egalitarian regime of resource distribution, and the discovery of alien civilizations on other planets, seems to have drawn the human species together and eroded social distinctions. While there are still pockets of institutional corruption, and although humans still sometimes give in to their lesser impulses, people are largely motivated by goodwill. Federation officers in particular have a widespread reputation for honesty, which other civilizations, weirdly, mostly seem to accept at face value.

These characteristics seem to percolate through the Federation legal system. In the courtroom episodes, there are never “gotcha” moments where somebody wins on a technicality or gets tripped up by an arcane legal formulation. Making a common-sense argument, or a soliloquy to general principles of justice, is usually enough to win over an adjudicator. The implication seems to be that in a world where fact-finders are honest, and where parties can make more or less sensible claims in their own defense, the system can afford to be equitable and ad hoc. It’s the ultimate access-to-justice dream where—even better than a lawyer for every client—the law is so reasonable and the judges so fair that every person can represent themselves in court with total confidence, or, at most, bring along a moderately clever friend to help them make their case. In addition, when interacting with other legal systems, the strong presumption of integrity on the part of Federation actors often helps the legal process along.

This all may seem fairly pie-in-the-sky—but could it actually be possible? Could humanity, someday, theoretically, if basic material insecurities were resolved, reach a general state of compassion and reasonability towards one another? Could lawyers, at present a hideous but necessary evil, eventually be rendered obsolete by more humane social attitudes? God, that would be amazing, wouldn’t it?

Of course, the opposing theory of human nature says that our impulse towards selfishness and cruelty is so deeply-rooted, spiritually or biologically, that we can never hope to eliminate it; that at most, we might mitigate it, but that this will never be a durable achievement across cultures or across generations. This theory is quite popular, but we have no idea if it’s true. It certainly seems to be humanity’s default mode, if we make no attempts at self-improvement. But our species hasn’t been around terribly long, in the grand scheme of things, and if we’re honest with ourselves, most of us haven’t exactly been doing our utmost to better the world we live in. As G.K. Chesterton once wrote of Christianity: “The Christian ideal has not been tried and found wanting. It has been found difficult; and left untried.” The same could easily be said for most schemes of social organization that require some form of moral effort or voluntary material renunciation.

Sadly, utopias are presently out of vogue, as the tedious proliferation of dystopian fiction and disaster films seems to indicate. No genre is safe. Game of Thrones is the dystopian reboot of Lord of the Rings; House of Cards is the dystopian reboot of The West Wing; Black Mirror is the dystopian reboot of The Twilight Zone. The slate of previews at every movie theatre has become an indistinguishably sepia-toned effluence of zombies, terrorists, and burnt-out post-apocalyptic hellscapes. Even supposedly light-hearted superhero movies now devote at least 3.5 hours of their running time to the lavishly-rendered destruction of major metropolises.

There is clearly some deep-seated appeal to these kinds of films; and indeed, it would take a heart of inhuman moral fiber to truly regret the sudden vanishing of New York City, whose existence serves no beneficial purpose for humanity that I’m aware of. But my general feeling is that our fondness for dystopian narratives is a pretty nasty indulgence, especially for those of us who live mostly comfortable lives, far-removed from the visceral realities of human suffering. Watching scenes of destruction from the plush chair of a movie theater, or perhaps on our small laptop screen while curled up in bed, heightens our own immediate sense of safety. It numbs us to the grinding, intermittent, inescapable reality of violence in neglected parts of our world, which unmakes whole generations of human beings with terror and dread.


Immersing ourselves in narratives where 99% of the characters are totally selfish also ingrains a kind of fashionable faux-cynicism that feels worldly, but is in fact simply lazy. I say faux-cynicism because I don’t believe that most people who profess to be pessimists truly believe that humanity is doomed, at least not in their lifetimes, or in their particular geographic purviews: if they did, then watching a film that features the drawn-out annihilation of a familiar American landscape would probably make them crap their pants. But telling yourself that everything is awful, and nothing can be fixed, is a marvelously expedient way to absolve yourself of personal responsibility. There is, happily, nothing about an apocalyptic worldview that obligates you to give up any of the comforts and conveniences that have accrued to you as a consequence of global injustice; and you get to feel superior to all those tender fools who still believe that a kinder world is possible! It’s a very satisfying form of moral escapism. No wonder our corporate tastemakers have been churning this stuff out.

And there’s no doubt that it’s often hard to make utopias seem dramatically sophisticated. Star Trek is renowned, even by those who love it, for being campy as hell. Moral tales in general are too often sugary and insubstantial. They’re suitable for kids, or maybe emotionally-stunted adults, but they’re not something to be taken seriously. We have come to view utopian narratives as inherently hokey and preachy. But dystopias are, of course, their own form of preaching; they are preaching another hypothesis about humanity, which, due to moody lighting and oblique dialogue, has an entirely undeserved appearance of profundity, and the illusory farsightedness of a self-fulfilling prophecy.

TWO PLEAS FOR THE FUTURE OF HUMANITY

But don’t we all want a world without lawyers? Isn’t that, at least, something that our whole species can agree on? Star Trek tells us that there are two hurdles between us and this great goal: global economic justice, and warp-speed technology. These may take several more centuries to achieve. But here are two things we can all start working on now.

1. Make utopias popular again.

Fictional narratives are a huge factor in shaping our expectations of what is possible. However, as discussed earlier, utopias are hard to write. You have to forfeit a lot of the cheap tricks that writers use to generate dramatic momentum. After all, it’s always easy to create tension when all your characters are self-serving, back-stabbing bastards; less so when your characters mostly get along. (The writers of Star Trek: TNG famously tore their hair out over creator Gene Roddenberry’s insistence that all the main cast had to be friends.) Constructing plots that are based primarily around problem-solving takes a lot of intricate planning. But we’ve seen a thousand narrative iterations of societal collapse: why not write some narratives about societal construction? What would a better world look like, at different stages of its realization—at its inception? Weathering early internal crises? When facing an existential threat? We should put more imagination into thinking about what this could look like, and how to generate emotional investment in the outcome.

Aspirational fiction seems especially important at this moment in our national history, when a significant number of Americans cast a ballot for a candidate they disliked, or were even disturbed by, simply because they wanted something different. There’s always been a gambling madness in the human spirit, a kind of perverse, instinctive itchiness that suddenly makes us willing to court disaster, simply on the off-chance of altering the mundane or miserable parameters of our daily lives. If we could transform some of that madness into a madness of optimism and creativity, rather than boredom, rage, and despair, that could only be a good thing.

2. Don’t let assholes win the space race.

Do you know who’s really excited about interplanetary exploration these days? Silicon Valley tycoons and white supremacists. Elon Musk wants to set up a creepy private colony on Mars for ultra-rich survivalists who can shell out $200,000 for their spot, and has stated his own intention of dying on Mars. Meanwhile, a fresh-faced crop of racists is convinced that if the U.S. would only give up trying to provide social services and education to its citizens, lily-white geniuses could easily be conquering the galaxy at this very moment. As Richard Spencer (of “Heil Trump” fame) has it:

“[O]ur Faustian destiny to explore the outer universe. That is what we were put on this earth to do. We weren’t put on this earth to be nice to minorities, or to be a multiculti fun nation. Why are we not exploring Jupiter at this moment? Why are we trying to equalize black and white test scores? I think our destiny is in the stars. Why aren’t we trying for the stars?”

These dickheads are trying for the stars! The rest of us therefore need to make sure they don’t get there first. If the likes of Elon Musk and Richard Spencer are humanity’s ambassadors, our entrée into outer space will simply be a high-tech recapitulation of all the moral horrors of our last Age of Exploration. Thankfully, I’m pretty sure Richard Spencer is no astrophysicist, and Elon Musk’s would-be spacecraft keep exploding on the launchpad. Now is our chance to thwart them!

Space exploration doesn’t have to be a last-ditch effort to save the species after we screw everything up on Earth; nor should it be an alternative project to building an egalitarian global society. We still have time to make a better world here, on the planet we do have, before we inflict ourselves on other parts of the universe. Space travel may well have an improving effect on humanity, but we should also make a point of improving ourselves before we head out into the interstellar beyond. Only then will we have earned the privilege to Boldly Go.

Starfleet or bust!

Illustrations by Mike Freiheit