From The Atlantic
Bring Back the Nervous Breakdown
It used to be okay to admit that the world had simply become too much.
by Jerry Useem
Contributing writer at The Atlantic
This article was published online on February 8, 2021.
April 1935 was a nervous month. Unemployment in America stood at 20 percent. A potential polio vaccine was failing trials. The term Dust Bowl made its first appearance in newsprint. A thousand-mile storm carried away much of Oklahoma. And Fortune magazine introduced its readers to “The ‘Nervous Breakdown.’ ”
Soon reprinted as an 85-page book, the article cited experts “whose names loom largest in the fields of mental hygiene.” The takeaway? The nervous breakdown was deemed to be “as widespread as the common cold and the chiefest source of misery in the modern world.” Anyone could be susceptible; it could be precipitated by nearly anything, and it prevented one “from carrying on the business of normal living.” Resolution of the breakdown entailed a time-out, ideally at one of the deluxe sanitariums profiled a few pages in.
Right now—I think we can all agree—Americans are once again living in a nervous time. Pandemic. Wildfires. Indefinite homeschooling. Postelection political chaos. TikTok. Feelings of impending collapse have arguably never rested on firmer empirical ground. But today we no longer have recourse to the culturally sanctioned respite that the nervous breakdown once afforded. No longer can we take six weeks at the Hartford Retreat, one of the healing getaways described in Fortune; those retreats have all long since closed or transmuted into psychiatric facilities that require a formal mental-health diagnosis for admission. No restorative caesura is forthcoming for us. The nervous breakdown is gone.
For 80 years or so, proclaiming that you were having a nervous breakdown was a legitimized way of declaring a sort of temporary emotional bankruptcy in the face of modern life’s stresses. John D. Rockefeller Jr., Jane Addams, and Max Weber all had acknowledged “breakdowns,” and reemerged to do their best work. Provided you had the means—a rather big proviso—announcing a nervous breakdown gave you license to withdraw, claiming an excess of industry or sensitivity or some other virtue. And crucially, it focused the cause of distress on the outside world and its unmeetable demands. You weren’t crazy; the world was. As a 1947 headline in the New York Herald Tribune put it: “Modern World Viewed as Too Much for Man.”
The term nervous breakdown first appeared in a 1901 medical treatise for physicians. “It is a disease of the whole civilized world,” its author wrote. This disquisition built on the work of a Gilded Age doctor, George Miller Beard, who posited that we all had a set amount of nerve force, which could be depleted, like a battery, by the stress of modern life. Beard had argued that an epidemic of nervous disease had been unleashed by technology and the press, which accelerated everything. “The chief and primary cause of this … very rapid increase of nervousness is modern civilization,” he wrote in American Nervousness in 1881.
This idea of the nervous breakdown as a natural response to modern life gained currency through the go-go 1920s, and then achieved cultural ubiquity with the economic collapse of the 1930s. “Is a nervous breakdown a sign of weakness?” asked a 1934 book titled Nervous Breakdown.
Not at all. You have put up a good fight, but the odds were too heavy against you … Nature has warned you and given you respite. The breakdown is a definite indication that you are still functioning, and have within you the material for recovery.
Famous cases illustrated this. Rockefeller’s best-remembered achievements—the national parks, the art museums, Rockefeller Center—came after his breakdown in 1904, which sent him to the south of France for six months’ relief from strain. Weber wrote The Protestant Ethic and the Spirit of Capitalism while prostrated by an excess of the very work ethos he described. (He recovered and resumed teaching just in time to die of the 1918 pandemic flu.)
But by the mid-1960s, when the Rolling Stones recorded “19th Nervous Breakdown,” the concept was getting pushed to the margins by the rise of mass-market, prescription-driven psychiatry (presaged by another Stones single, “Mother’s Little Helper”). The developing field had little use for an affliction that could be treated without the assistance of physicians. Diseases like major depressive disorder, generalized anxiety disorder, and obsessive-compulsive disorder—diagnosed and treated by specialized doctors dispensing specialized drugs—replaced the nervous-breakdown catchall.
This did quite a bit of good: Many people with psychological ailments gained access to medical treatments that could be effective. But something important was lost.
“The very general and ill-defined characteristics of the nervous breakdown were its benefits,” Peter Stearns, a social and cultural historian of the nervous breakdown at George Mason University, told me. “It played a function we’ve at least partially lost. You didn’t have to visit a psychiatrist or a psychologist to qualify for a nervous breakdown. You didn’t need a specific cause. You were allowed to step away from normalcy. The breakdown also signaled a temporary loss of functioning, like a car breaking down. It may be in the shop, sometimes recurrently, but it didn’t signal an inherited or permanent state such as terms like bipolar or ADHD might signal today.”
The nervous breakdown was not a medical condition, but a sociological one. It implicated a physical problem—your “nerves”—not a mental one. And it was a onetime event, not a permanent condition. It provided sanction for a pause and reset that could put you back on track. But as psychology eclipsed sociology in the late 20th century, it turned us inward to our personal moods and thoughts—and away from the shared economic and social circumstances that produced them.
“The psychiatric approach tends to say that you have a specific problem that other people don’t have, and we’re here to fix your problem independent of what’s happening to everyone else,” Stearns noted. The effect is atomizing even in normal times. Today, “everyone is so isolated that you have even less sense than usual as to what the collective mood is. So we may need something like the nervous breakdown—something that is less medically precise but encapsulates the way people are encountering the moment.”
But in a society reflexively suspicious of rest, getting a restorative break tends to require a formal mental-health diagnosis. Otherwise, you risk getting called a slacker. That’s what happened to Alexandria Ocasio-Cortez a couple of years ago when she announced she was taking a few days off for “self-care” after a grueling election. “Congresswoman-elect Alexandria Ocasio-Cortez hasn’t yet started her new job,” Fox News blared, “but she’s already taking a break.”
This got me thinking that maybe we need to bring back the nervous breakdown, to protect the nation’s collective reserve of nerve force at a time when it’s stretched so thin. What would the modern version of a culturally accepted, nervous-breakdown-precipitated time-out look like?
A century ago, the famous Battle Creek Sanitarium in Michigan marketed itself as a “Temple of Health.” Under a canopy of glass and hanging ferns, bathed in sunlight and the super-fresh air provided by elaborate ducts, patients—the diseased and the nervous; distinctions blurred—engaged in quiet conversation or opulent repose. In one building was the gymnasium, flanked on either side by the hydrotherapy wings. Other rooms housed vibrating chairs and therapeutic light baths. Outside, naturalist talks. And in the dining room, staff serenading diners with the Chewing Song. (The sanitarium’s superintendent, John Harvey Kellogg, believed that each bite of food should be chewed no fewer than 40 times; to aid digestion, he had invented a special breakfast cereal, cornflakes.)
Sanitariums like Battle Creek became places to restore whatever ailed the body or spirit. To be sure, quackery abounded: Kellogg believed in spinal douches and eugenics. But with the right combination of relaxation, engagement, and yogurt enemas, you could leave feeling like Rockefeller, who came for a stay in 1922.
Sometimes, the treatments even worked. “People responded to the fact that something was being done for them,” Edward Shorter, a medical historian and the author of How Everyone Became Depressed: The Rise and Fall of the Nervous Breakdown, told me. “The placebo is a very powerful treatment.” And the experience of communal recuperation prevented the social isolation of private seclusion.
But the cost of a sanitarium stay in the 1930s could run as high as $3,000 a week in today’s dollars, putting it outside the reach of the comfortably middle class. That would still be true today. And even if we could somehow eliminate the financial hurdles, we’d be faced with the cultural ones that Weber traced to Ben Franklin—time is money, idleness is sloth, and all that. Anything that smacked of, say, government-subsidized spa days, no matter how healthful those might be, would be considered un-American.
So rather than the nervous breakdown writ large, we could introduce a more modestly scaled version of it: a series of buffers, firebreaks, or (to use a bankruptcy metaphor) bridge loans to stave off the full Chapter 11 scenario. The French lent us reculer pour mieux sauter—literally “to withdraw in order to make a better jump.” We could slip something more muscularly American, like power break or power-up, into our national lexicon. “Boss, I need a power-up” isn’t an admission of weakness; it’s a simple statement of fact. Achieving widespread cultural acceptance of the practice may take less time than you’d expect—consider how swiftly paternity leave traversed the gap from unheard-of to expected.
The mini-break could insinuate itself into American life in bite-size increments. When I asked an intensive-care nurse what a power break might look like for her, she said it could be small. A two-minute “debrief” after a death in the unit—a moment to stop, reflect, and connect with the constant and familiar—would go a long way in helping someone regroup before they have to lurch to the next crisis. Though the psychic needs of an ICU nurse are particular, the basic concept is generalizable.
Adam Waytz, a management professor at Northwestern, says that to be effective, breaks should entail true disconnection from work—which is to say we need to be able to slip off our electronic leashes. Both France and Spain have made “the right to disconnect” from after-hours work communication an actual legal right. Daimler, the German auto manufacturer, may have gone the furthest of any company toward establishing full mental-bankruptcy protection for its workers: When Daimler employees take time off, they can opt to have their incoming emails deleted on arrival, with senders getting politely notified that their message has been destroyed and that if they need something urgently, they can contact an alternate person. “The idea,” Daimler has said, “is to give people a break and let them rest. Then they can come back to work with a fresh spirit.”
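Daimler's "delete on arrival, notify the sender" policy is simple enough to sketch in a few lines. The following is my own toy illustration of the behavior described above, not Daimler's actual system; the notice text and callback names are invented for the example:

```python
# Toy sketch (an illustration, not Daimler's real implementation) of the
# vacation policy described above: while an employee is away, incoming
# mail is deleted on arrival and the sender gets a polite notice.

VACATION_NOTICE = (
    "I am out of the office and your message has been deleted. "
    "If it is urgent, please contact my designated colleague instead."
)

def handle_incoming(message, on_vacation, send_reply, delete):
    """Route one incoming message according to the vacation policy."""
    if on_vacation:
        send_reply(message["from"], VACATION_NOTICE)  # notify the sender
        delete(message)  # the mail never lands in the inbox
        return "deleted"
    return "delivered"

# Minimal usage with stand-in callbacks:
sent, trash = [], []
result = handle_incoming(
    {"from": "colleague@example.com", "subject": "Q3 numbers"},
    on_vacation=True,
    send_reply=lambda to, text: sent.append((to, text)),
    delete=lambda msg: trash.append(msg),
)
print(result)
```

The design point is that the burden shifts from the vacationer (triaging a backlog on return) to the system (handling each message at arrival), which is what lets people "come back to work with a fresh spirit."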
Existing bits of U.S. legislation could augment efforts like this by private companies. For instance, an expiring provision of the Families First Coronavirus Response Act that reimburses employers for up to 12 weeks’ paid leave if an employee’s kid’s school closes could be a starting point for building broader availability of paid time off for family crises or restorative breaks.
Nervous breakdowns, as F. Scott Fitzgerald wrote of his own in his 1936 essay “The Crack-Up,” are “not a matter of levity.” He’d found himself “like a man overdrawing at his bank”—“I began to realize that for two years my life had been a drawing on resources that I did not possess, that I had been mortgaging myself physically and spiritually up to the hilt.” At that point, Fitzgerald was well into terminal alcoholic decline, and on his way to an early death.
The past year has made clear the tremendous emotional and social damage that accumulates when whole populations get pushed beyond easily endurable limits. Alcohol consumption is up; drug overdoses are up; reports of anxiety and depression are up. Even once this pandemic wanes, its psychic effects will linger. The previous century’s flu pandemic lasted until 1920, and a spike in suicides followed in 1921. Which is why, individually and collectively, we would be wise to do better than remain bundles of never-ending nervousness, too frayed to provide much solace or support for anyone, waiting for the psychiatric-industrial complex to handle America’s growing mental-health crisis, and doing little or nothing to head it off.
Better a more economically feasible and culturally acceptable nervous breakdown now than something worse later on.
Has the pandemic really caused a 'tsunami' of mental health problems?
By Richard Bentall
Our research shows coronavirus ‘winners’, ‘losers’, and a lot of resilience. Understanding that can help us target support better
How is the population of the UK coping with the continuing coronavirus crisis? According to some media reports and commentators in the mental health community, we are now facing “the greatest threat to mental health since the second world war” and a potential “tsunami” of psychological problems.
With a team of experts from the Universities of Sheffield, Ulster, Liverpool, UCL and Royal Holloway and Bedford College, I have been monitoring the mental health of the UK population since the beginning of the crisis. Looking at our findings, we think that this tsunami narrative is misleading. If accepted uncritically, it could undermine efforts to protect the health of the population and also our ability as a nation to recover once the crisis is over. Here is why.
Like many other mental health researchers, we quickly recognised the importance of understanding how the pandemic affected the wellbeing of ordinary people. Working with the survey company Qualtrics, beginning on 23 March 2020 (the first week of lockdown) we recruited 2,025 adults who were representative of the UK population in age, sex, household income, political attitudes and many other factors. We measured mental health, but also aimed to make our survey as broad as possible, asking about family relationships, adherence to social distancing, attitudes towards vaccines, belief in Covid-19 conspiracy theories and many other things. We have been following these people ever since (the last survey was before Christmas) and have used other methods such as telephone interviews and internet-based psychological tests and diaries to enrich our understanding. We have also helped friends in other countries to launch parallel surveys. What have we found?
It will take an enormous effort to link all this data and create a rounded picture of how the UK population has fared in these extraordinary times, but we can already see some important patterns. In the first week of lockdown, we saw higher rates of depression, anxiety and stress than had been reported in previous UK population surveys, and similar findings have been reported by other researchers in the UK and elsewhere. Across all these studies, it seemed that people who had previously suffered from mental health difficulties, who were poor, young or who had small children at home were suffering the worst.
But only a few studies have examined changes that have occurred since that first lockdown period, and when these changes are examined a different picture emerges. We have seen an overall reduction in the number of people who report “above threshold” levels of psychiatric symptoms and similar findings have been reported by other research groups. This picture of adaptation and resilience should not be surprising because we know from previous research that individual, interpersonal traumas (for example, sexual assaults) are far more mentally damaging than collective traumas such as natural disasters. This is at least in part because strong social bonds protect people against stress and, during a crisis, people often come together to help each other, creating a sense of belonging and a shared identity with neighbours.
At the same time, it is important to recognise that average levels of psychological symptoms in the population could never be particularly informative. Even if there really were a tidal wave of mental illness washing over the population, what would anyone be able to do about it (it would not be possible to install a clinical psychologist in every neighbourhood)? Instead, when we use advanced statistical methods to discover different patterns of change, we see that the majority of the population (56.5% in the case of anxiety and depression) have been resilient, showing no evidence of mental illness at any time. These are contrasted with a small group who have been unwell throughout (6.5%), some who have deteriorated after starting with low (17%) or moderate symptoms (11.5%) and some who have shown considerable improvement in their mental health (8.5%). So, in total, about a quarter of the population is doing badly. This picture of what we might call “different slopes for different folks” does not look like a tsunami.
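As a quick check on the arithmetic, the five trajectory groups reported above do partition the whole sample (the group labels are paraphrased from the text):

```python
# Trajectory groups reported in the study, as percentages of the sample.
groups = {
    "resilient (no mental illness at any time)": 56.5,
    "unwell throughout": 6.5,
    "deteriorated from low symptoms": 17.0,
    "deteriorated from moderate symptoms": 11.5,
    "considerably improved": 8.5,
}

total = sum(groups.values())
print(total)  # the five trajectories together cover the full sample
```

The persistently unwell plus those deteriorating from low symptoms come to 23.5 percent, which matches the author's "about a quarter of the population is doing badly."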
What could be driving these differences? People in these different groups are starting from different positions. Broadly, we found that individuals with a history of mental illness, who were lonely, who were intolerant of uncertainty, who were prone to death anxiety and who felt they had little control over their lives tended to do poorly. In a separate study of the Spanish population, conducted with colleagues in Madrid, we also found that people who started out with positive beliefs about the world (they thought that the world was fundamentally a good place) often experienced “post-traumatic growth”; they used the pandemic as an opportunity to re-evaluate their lives and change for the better.
But pandemics are dynamic and multifaceted, so that how people react over time depends not only on where they start out from but also on how events unfold. It is important to recognise that some of the consequences of the pandemic have been beneficial – people who have kept their jobs have often saved money, the daily commute has been eliminated for some, and we found that most parents of older children have enjoyed having their kids at home (although, as already noted, having young children at home is stressful).
We found that the economic threats associated with the pandemic were most linked with symptoms, whereas exposure to the virus seemed to have little effect (although very few of our sample have required hospital treatment and we know from other studies that those who do are very likely to suffer from persisting post-traumatic stress disorder).
Because of its bleak implications, the “tsunami” narrative carries the risk of becoming a self-fulfilling prophecy. Our more nuanced understanding of the psychological effects of the pandemic, by contrast, has practical implications. The government can most preserve the population’s mental health by protecting people from the economic consequences of the pandemic and by providing practical support to parents of young children. When additional resources are available for mental health services, they should be directed to those who are most vulnerable, for example those with pre-existing mental health difficulties, or those who have been hospitalised because of the virus, or frontline workers.
Looking to the long-term, studies such as ours can help provide the framework for a UK-wide resilience strategy. When the next crisis of a similar scale befalls our nation, we will hopefully be better prepared to withstand the shock.
The Science Behind the Smile
From the Harvard Business Review Magazine (January–February 2012)
Harvard psychology professor Daniel Gilbert is widely known for his 2006 best seller, Stumbling on Happiness. His work reveals, among other things, the systematic mistakes we all make in imagining how happy (or miserable) we’ll be. In this edited interview with HBR’s Gardiner Morse, Gilbert surveys the field of happiness research and explores its frontiers.
HBR: Happiness research has become a hot topic in the past 20 years. Why?
Gilbert: It’s only recently that we realized we could marry one of our oldest questions—“What is the nature of human happiness?”—to our newest way of getting answers: science. Until just a few decades ago, the problem of happiness was mainly in the hands of philosophers and poets.
Psychologists have always been interested in emotion, but in the past two decades the study of emotion has exploded, and one of the emotions that psychologists have studied most intensively is happiness. Recently economists and neuroscientists joined the party. All these disciplines have distinct but intersecting interests: Psychologists want to understand what people feel, economists want to know what people value, and neuroscientists want to know how people’s brains respond to rewards. Having three separate disciplines all interested in a single topic has put that topic on the scientific map. Papers on happiness are published in Science, people who study happiness win Nobel prizes, and governments all over the world are rushing to figure out how to measure and increase the happiness of their citizens.
HBR: How is it possible to measure something as subjective as happiness?
Gilbert: Measuring subjective experiences is a lot easier than you think. It’s what your eye doctor does when she fits you for glasses. She puts a lens in front of your eye and asks you to report your experience, and then she puts another lens up, and then another. She uses your reports as data, submits the data to scientific analysis, and designs a lens that will give you perfect vision—all on the basis of your reports of your subjective experience. People’s real-time reports are very good approximations of their experiences, and they make it possible for us to see the world through their eyes. People may not be able to tell us how happy they were yesterday or how happy they will be tomorrow, but they can tell us how they’re feeling at the moment we ask them. “How are you?” may be the world’s most frequently asked question, and nobody’s stumped by it.
There are many ways to measure happiness. We can ask people “How happy are you right now?” and have them rate it on a scale. We can use magnetic resonance imaging to measure cerebral blood flow, or electromyography to measure the activity of the “smile muscles” in the face. But in most circumstances those measures are highly correlated, and you’d have to be the federal government to prefer the complicated, expensive measures over the simple, inexpensive one.
HBR: But isn’t the scale itself subjective? Your five might be my six.
Gilbert: Imagine that a drugstore sold a bunch of cheap thermometers that weren’t very well calibrated. People with normal temperatures might get readings other than 98.6, and two people with the same temperature might get different readings. These inaccuracies could cause people to seek medical treatment they didn’t need or to miss getting treatment they did need. So buggy thermometers are sometimes a problem—but not always. For example, if I brought 100 people to my lab, exposed half of them to a flu virus, and then used those buggy thermometers to take their temperatures a week later, the average temperature of the people who’d been exposed would almost surely be higher than the average temperature of the others. Some thermometers would underestimate, some would overestimate, but as long as I measured enough people, the inaccuracies would cancel themselves out. Even with poorly calibrated instruments, we can compare large groups of people.
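Gilbert's point, that random calibration errors wash out in group averages, can be sketched in a few lines of Python. The sample sizes, temperatures, and error range below are illustrative assumptions, not figures from the interview:

```python
import random

random.seed(0)  # fixed seed so the simulation is reproducible

def buggy_reading(true_temp):
    # Each thermometer is poorly calibrated: its reading is the true
    # temperature plus a random offset of up to +/- 1.5 degrees F.
    return true_temp + random.uniform(-1.5, 1.5)

# 50 people exposed to a flu virus (mildly feverish a week later)
# versus 50 unexposed people with normal temperatures.
exposed = [buggy_reading(100.4) for _ in range(50)]
unexposed = [buggy_reading(98.6) for _ in range(50)]

mean_exposed = sum(exposed) / len(exposed)
mean_unexposed = sum(unexposed) / len(unexposed)

# Any single reading is unreliable, but the group averages still
# separate cleanly: the random errors cancel across many people.
print(round(mean_exposed, 1), round(mean_unexposed, 1))
```

Even though individual readings can be off by a degree and a half, the averages land close to the true group temperatures, which is exactly why a "buggy" rating scale still works for comparing large groups.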
A rating scale is like a buggy thermometer. Its inaccuracies make it inappropriate for some kinds of measurement (for example, saying exactly how happy John was at 10:42 AM on July 3, 2010), but it’s perfectly appropriate for the kinds of measurements most psychological scientists make.
HBR: What did all these happiness researchers discover?
Gilbert: Much of the research confirms things we’ve always suspected. For example, in general people who are in good romantic relationships are happier than those who aren’t. Healthy people are happier than sick people. People who participate in their churches are happier than those who don’t. Rich people are happier than poor people. And so on.
That said, there have been some surprises. For example, while all these things do make people happier, it’s astonishing how little any one of them matters. Yes, a new house or a new spouse will make you happier, but not much and not for long. As it turns out, people are not very good at predicting what will make them happy and how long that happiness will last. They expect positive events to make them much happier than those events actually do, and they expect negative events to make them unhappier than they actually do. In both field and lab studies, we’ve found that winning or losing an election, gaining or losing a romantic partner, getting or not getting a promotion, passing or failing an exam—all have less impact on happiness than people think they will. A recent study showed that very few experiences affect us for more than three months. When good things happen, we celebrate for a while and then sober up. When bad things happen, we weep and whine for a while and then pick ourselves up and get on with it.
HBR: Why do events have such a fleeting effect on happiness?
Gilbert: One reason is that people are good at synthesizing happiness—at finding silver linings. As a result, they usually end up happier than they expect after almost any kind of trauma or tragedy. Pick up any newspaper, and you’ll find plenty of examples. Remember Jim Wright, who resigned in disgrace as Speaker of the House of Representatives because of a shady book deal? A few years later he told the New York Times that he was “so much better off, physically, financially, emotionally, mentally and in almost every other way.” Then there’s Moreese Bickham, who spent 37 years in the Louisiana State Penitentiary; after his release he said, “I don’t have one minute’s regret. It was a glorious experience.” These guys appear to be living in the best of all possible worlds. Speaking of which, Pete Best, the original drummer for the Beatles, was replaced by Ringo Starr in 1962, just before the Beatles got big. Now he’s a session drummer. What did he have to say about missing out on the chance to belong to the most famous band of the 20th century? “I’m happier than I would have been with the Beatles.”
One of the most reliable findings of the happiness studies is that we do not have to go running to a therapist every time our shoelaces break. We have a remarkable ability to make the best of things. Most people are more resilient than they realize.
HBR: Aren’t they deluding themselves? Isn’t real happiness better than synthetic happiness?
Gilbert: Let’s be careful with terms. Nylon is real; it’s just not natural. Synthetic happiness is perfectly real; it’s just man-made. Synthetic happiness is what we produce when we don’t get what we want, and natural happiness is what we experience when we do. They have different origins, but they are not necessarily different in terms of how they feel. One is not obviously better than the other.
Of course, most folks don’t see it that way. Most folks think that synthetic happiness isn’t as “good” as the other kind—that people who produce it are just fooling themselves and aren’t really happy. I know of no evidence demonstrating that that’s the case. If you go blind or lose a fortune, you’ll find that there’s a whole new life on the other side of those events. And you’ll find many things about that new life that are quite good. In fact, you’ll undoubtedly find a few things that are even better than what you had before. You’re not lying to yourself; you’re not delusional. You’re discovering things you didn’t know, and couldn’t know, until you were in that new life. You are looking for things that make your new life better, you are finding them, and they are making you happy. What is most striking to me as a scientist is that most of us don’t realize how good we’re going to be at finding these things. We’d never say, “Oh, of course, if I lost my money or my wife left me, I’d find a way to be just as happy as I am now.” We’d never say it—but it’s true.
Employees are happiest when they’re trying to achieve goals that are difficult but not out of reach.
HBR: Is being happy always desirable? Look at all the unhappy creative geniuses—Beethoven, van Gogh, Hemingway. Doesn’t a certain amount of unhappiness spur good performance?
Gilbert: Nonsense! Everyone can think of a historical example of someone who was both miserable and creative, but that doesn’t mean misery generally promotes creativity. There’s certainly someone out there who smoked two packs of cigarettes a day and lived to be 90, but that doesn’t mean cigarettes are good for you. The difference between using anecdotes to prove a point and using science to prove a point is that in science you can’t just cherry-pick the story that suits you best. You have to examine all the stories, or at least take a fair sample of them, and see if there are more miserable creatives or happy creatives, more miserable noncreatives or happy noncreatives. If misery promoted creativity, you’d see a higher percentage of creatives among the miserable than among the delighted. And you don’t. By and large, happy people are more creative and more productive. Has there ever been a human being whose misery was the source of his creativity? Of course. But that person is the exception, not the rule.
HBR: Many managers would say that contented people aren’t the most productive employees, so you want to keep people a little uncomfortable, maybe a little anxious, about their jobs.
Gilbert: Managers who collect data instead of relying on intuition don’t say that. I know of no data showing that anxious, fearful employees are more creative or productive. Remember, contentment doesn’t mean sitting and staring at the wall. That’s what people do when they’re bored, and people hate being bored. We know that people are happiest when they’re appropriately challenged—when they’re trying to achieve goals that are difficult but not out of reach. Challenge and threat are not the same thing. People blossom when challenged and wither when threatened. Sure, you can get results from threats: Tell someone, “If you don’t get this to me by Friday, you’re fired,” and you’ll probably have it by Friday. But you’ll also have an employee who will thereafter do his best to undermine you, who will feel no loyalty to the organization, and who will never do more than he must. It would be much more effective to tell your employee, “I don’t think most people could get this done by Friday. But I have full faith and confidence that you can. And it’s hugely important to the entire team.” Psychologists have studied reward and punishment for a century, and the bottom line is perfectly clear: Reward works better.
HBR: So challenge makes people happy. What else do we know now about the sources of happiness?
Gilbert: If I had to summarize all the scientific literature on the causes of human happiness in one word, that word would be “social.” We are by far the most social species on Earth. Even ants have nothing on us. If I wanted to predict your happiness, and I could know only one thing about you, I wouldn’t want to know your gender, religion, health, or income. I’d want to know about your social network—about your friends and family and the strength of your bonds with them.
HBR: Beyond having rich networks, what makes us happy day to day?
Gilbert: The psychologist Ed Diener has a finding I really like. He essentially shows that the frequency of your positive experiences is a much better predictor of your happiness than is the intensity of your positive experiences. When we think about what would make us happy, we tend to think of intense events—going on a date with a movie star, winning a Pulitzer, buying a yacht. But Diener and his colleagues have shown that how good your experiences are doesn’t matter nearly as much as how many good experiences you have. Somebody who has a dozen mildly nice things happen each day is likely to be happier than somebody who has a single truly amazing thing happen. So wear comfortable shoes, give your wife a big kiss, sneak a french fry. It sounds like small stuff, and it is. But the small stuff matters.
I think this helps explain why it’s so hard for us to forecast our affective states. We imagine that one or two big things will have a profound effect. But it looks like happiness is the sum of hundreds of small things. Achieving happiness requires the same approach as losing weight. People trying to lose weight want a magic pill that will give them instant results. Ain’t no such thing. We know exactly how people lose weight: They eat less and exercise more. They don’t have to eat much less or exercise much more—they just have to do those things consistently. Over time it adds up. Happiness is like that. The things you can do to increase your happiness are obvious and small and take just a little time. But you have to do them every day and wait for the results.
HBR: What are those little things we can do to increase our happiness?
Gilbert: They won’t surprise you any more than “eat less and exercise more” does. The main things are to commit to some simple behaviors—meditating, exercising, getting enough sleep—and to practice altruism. One of the most selfish things you can do is help others. Volunteer at a homeless shelter. You may or may not help the homeless, but you will almost surely help yourself. And nurture your social connections. Twice a week, write down three things you’re grateful for, and tell someone why. I know these sound like homilies from your grandmother. Well, your grandmother was smart. The secret of happiness is like the secret of weight loss: It’s not a secret!
HBR: If there’s no secret, what’s left to study?
Gilbert: There’s no shortage of questions. For decades psychologists and economists have been asking, “Who’s happy? The rich? The poor? The young? The old?” The best we could do was divide people into groups, survey them once or maybe twice, and try to determine if the people in one group were, on average, happier than those in the others. The tools we used were pretty blunt instruments. But now millions of people are carrying little computers in their pockets—smartphones—and this allows us to collect data in real time from huge numbers of people about what they are doing and feeling from moment to moment. That’s never been possible before.
One of my collaborators, Matt Killingsworth, has built an experience-sampling application called Track Your Happiness. He follows more than 15,000 people by iPhone, querying them several times a day about their activities and emotional states. Are they at home? On a bus? Watching television? Praying? How are they feeling? What are they thinking about? With this technology, Matt’s beginning to answer a much better question than the one we’ve been asking for decades. Instead of asking who is happy, he can ask when they are happy. He doesn’t get the answer by asking, “When are you happy?”—because frankly, people don’t know. He gets it by tracking people over days, months, and years and measuring what they are doing and how happy they are while they are doing it. I think this kind of technology is about to revolutionize our understanding of daily emotions and human well-being. (See the sidebar “The Future of Happiness Research.”)
(Sidebar: “The Future of Happiness Research,” by Matthew Killingsworth.)
HBR: What are the new frontiers of happiness research?
Gilbert: We need to get more specific about what we are measuring. Many scientists say they are studying happiness, but when you look at what they’re measuring, you find they are actually studying depression or life satisfaction. These things are related to happiness, of course, but they are not the same as happiness. Research shows that people with children are typically less happy on a moment-to-moment basis than people without children. But people who have kids may feel fulfilled in a way that people without kids do not. It doesn’t make sense to say that people with kids are happier, or that people without kids are happier; each group is happier in some ways and less happy in others. We need to stop painting our portrait of happiness with such a fat brush.
HBR: Will all this research ultimately make us happier?
Gilbert: We are learning and will continue to learn how to maximize our happiness. So yes, there is no doubt that the research has helped and will continue to help us increase our happiness. But that still leaves the big question: What kind of happiness should we want? For example, do we want the average happiness of our moments to be as large as possible, or do we want the sum of our happy moments to be as large as possible? Those are different things. Do we want lives free of pain and heartache, or is there value in those experiences? Science will soon be able to tell us how to live the lives we want, but it will never tell us what kinds of lives we should want to live. That will be for us to decide.
It's Time for Startup Culture to Talk About Mental Health
Entrepreneurs don't talk enough about mental health. This is the year we should change that.
By: Aytekin Tank
Entrepreneur Leadership Network VIP
Entrepreneur; Founder and CEO, JotForm
It was 2001, and Ben Huh was deeply depressed. His first startup had failed, taking hundreds of thousands of dollars in investor money with it. His loneliness was all-encompassing; his will to live sapped.
“I spent a week in my room with the lights off and cut off from the world, thinking of the best way to exit this failure,” he wrote years later. “Death was a good option—and it got better by the day.”
Huh doesn’t know what exactly made him leave that room in the end, but he did. He eventually became the CEO of the wildly successful Cheezburger Network, but it wasn’t until 2011 that he opened up about his depression, spurred by the suicide of Diaspora founder Ilya Zhitomirskiy.
“My post is about everyone suffering (through depression) quietly,” Huh told Mashable. Zhitomirskiy, like Huh, likely felt a deep sense of isolation and aloneness—and he didn’t have to. “From a long line of entrepreneurs who suffered alone and quietly under our own self-doubt, I wish I could talk to you and tell you to bash the shit out of your own self-doubt,” he wrote in his blog post.
Too often, entrepreneurs don’t pay enough attention to their mental health, and startup culture is notoriously reticent on the topic. The combination can be deadly, and it needs to change.
Why entrepreneurs are susceptible
An entrepreneur’s mindset can be the perfect habitat for depression and anxiety to take hold.
For one thing, the nature of our work is deeply stressful. The popular conception of an entrepreneur is someone who is constantly sleep-deprived and frazzled, hunched over a keyboard in an empty office surrounded by coffee cups and fast food wrappers. Mental health often goes by the wayside—and that’s assuming we have the resources, like insurance, to seek help, which many of us do not.
Then there’s solitude. Building a startup can be lonely work, especially when every moment not spent on the goal feels wasted, leaving little time to maintain healthy connections with family and friends. So much isolation also lends itself to what Megan Bruneau, host of the podcast The Failure Factor, calls “impression management”—the idea that we have to appear as though we have it all together.
“Many entrepreneurs believe that, in order to be considered competent by stakeholders, we need to be perceived as infallible—a stark contrast to the stigmatized stereotypes of a person with compromised mental health,” she writes. This perpetuates shame and disconnection—which cause depression—and discourages help-seeking.
Lastly, there’s the matter of our identities. When you dedicate yourself so completely to something, it can become impossible to tell where you end and the business begins, and we begin to detach from our own needs.
“The looming existential void (and self-worth tied to our company's success) is a manifestation of perfectionism that causes both anxiety and an emotional roller coaster—dependent on our ever-changing company forecast,” Bruneau writes. And when your personal worth is tied to something as unpredictable as a startup, what happens when it fails?
The stigma of talking about mental health
Despite alarming statistics about the pervasiveness of mental health struggles in the startup world, talking about it still carries a stigma. Complicating matters is that many disorders, like anxiety and depression, aren’t always apparent to others, so it’s hard to know when someone is struggling.
But poor mental health can happen to anyone, executive, staff or intern. As leaders, it’s up to us to create a culture where discussion of mental wellbeing is treated with openness, not shame. We also need to walk the walk by making sure we’re respecting employees’ work-life balance. For example, not expecting an immediate reply to an email sent at 2 a.m., and actively encouraging employees to disconnect during vacations and weekends.
For founders, the enormous stress and pressure to hit targets is all but unavoidable, Jess Ratcliffe, a personal development coach to several startups and brands, tells Forbes. Historically, there’s been an overemphasis on short-term success over longevity, but luckily, that’s starting to change.
“Over the last few years, I’ve noticed founders becoming increasingly aware of the importance of their mental wellbeing. I see more founders working with coaches and even supporting their team with access to coaching,” she says. “This is incredibly exciting and will positively impact the founder, team and mission they are on.”
Knowing when to get help
In her book Employee to Entrepreneur, Suzanne Mulvehill writes that “preparing the mind, body and spirit for entrepreneurship is like preparing the mind, body and spirit for the Olympics.”
Getting enough sleep, eating well and exercising regularly are all great practices for taking care of your wellbeing. It’s also a good idea to check in with yourself about how you’re feeling. Jon Dishotsky, CEO and cofounder of Starcity, writes for Fast Company that “hitting bottom isn’t always a dramatic crash. Sometimes it’s a slow sink to the bottom.”
Maybe you’ve felt a general sense of malaise, or just haven’t felt like yourself. Maybe you’re more tired than usual, but aren’t sure why. Maybe there are more acute physical symptoms, like a tightness in your chest or a weight in the pit of your stomach. Your body can signal that something is wrong even before your conscious mind is clued in.
Ratcliffe agrees. Founders should look out for warning signs, like feeling slowed down by self-doubt or pulled off course by your own mental narratives. “One of the most powerful things is to proactively work on your mental wellbeing, rather than waiting for the time when it feels like you need help,” she says.
The good news is, help is all around you. Reach out to a trusted friend or colleague, or find an online support network like 7 Cups of Tea, a peer-to-peer online counseling platform.
It’s also incumbent upon entrepreneurs to be honest about their journeys, warts and all. If you struggled for years with your business before finally achieving a breakthrough, don’t create the false impression that your success came quickly or easily. I often talk about how my company, JotForm, is bootstrapped, and how I grew it slowly over the years to the more than 8 million users we have today. Going without VC funding and refusing the mainstream "startup hustle" advice hasn’t been an easy journey, and I don’t want anyone to get the idea that our success came without lots of hard work, ups and downs, and plenty of trial and error.
Building a company is hard—mentally, physically and emotionally. There are steps we can take every day to help ourselves survive in such an extreme environment, but we also need to look out for each other. By acknowledging our own difficulties and supporting one another, we can make the startup world a less hostile place for everyone.
The Coddling of the American Mind
In the name of emotional well-being, college students are increasingly demanding protection from words and ideas they don’t like. Here’s why that’s disastrous for education—and mental health
Story by Greg Lukianoff and Jonathan Haidt
Something strange is happening at America’s colleges and universities. A movement is arising, undirected and driven largely by students, to scrub campuses clean of words, ideas, and subjects that might cause discomfort or give offense. Last December, Jeannie Suk wrote in an online article for The New Yorker about law students asking her fellow professors at Harvard not to teach rape law—or, in one case, even use the word violate (as in “that violates the law”) lest it cause students distress. In February, Laura Kipnis, a professor at Northwestern University, wrote an essay in The Chronicle of Higher Education describing a new campus politics of sexual paranoia—and was then subjected to a long investigation after students who were offended by the article and by a tweet she’d sent filed Title IX complaints against her. In June, a professor protecting himself with a pseudonym wrote an essay for Vox describing how gingerly he now has to teach. “I’m a Liberal Professor, and My Liberal Students Terrify Me,” the headline said. A number of popular comedians, including Chris Rock, have stopped performing on college campuses (see Caitlin Flanagan’s article in this month’s issue). Jerry Seinfeld and Bill Maher have publicly condemned the oversensitivity of college students, saying too many of them can’t take a joke.
Two terms have risen quickly from obscurity into common campus parlance. Microaggressions are small actions or word choices that seem on their face to have no malicious intent but that are thought of as a kind of violence nonetheless. For example, by some campus guidelines, it is a microaggression to ask an Asian American or Latino American “Where were you born?,” because this implies that he or she is not a real American. Trigger warnings are alerts that professors are expected to issue if something in a course might cause a strong emotional response. For example, some students have called for warnings that Chinua Achebe’s Things Fall Apart describes racial violence and that F. Scott Fitzgerald’s The Great Gatsby portrays misogyny and physical abuse, so that students who have been previously victimized by racism or domestic violence can choose to avoid these works, which they believe might “trigger” a recurrence of past trauma.
Some recent campus actions border on the surreal. In April, at Brandeis University, the Asian American student association sought to raise awareness of microaggressions against Asians through an installation on the steps of an academic hall. The installation gave examples of microaggressions such as “Aren’t you supposed to be good at math?” and “I’m colorblind! I don’t see race.” But a backlash arose among other Asian American students, who felt that the display itself was a microaggression. The association removed the installation, and its president wrote an e-mail to the entire student body apologizing to anyone who was “triggered or hurt by the content of the microaggressions.”
This new climate is slowly being institutionalized, and is affecting what can be said in the classroom, even as a basis for discussion. During the 2014–15 school year, for instance, the deans and department chairs at the 10 University of California system schools were presented by administrators at faculty leader-training sessions with examples of microaggressions. The list of offensive statements included: “America is the land of opportunity” and “I believe the most qualified person should get the job.”
The press has typically described these developments as a resurgence of political correctness. That’s partly right, although there are important differences between what’s happening now and what happened in the 1980s and ’90s. That movement sought to restrict speech (specifically hate speech aimed at marginalized groups), but it also challenged the literary, philosophical, and historical canon, seeking to widen it by including more-diverse perspectives. The current movement is largely about emotional well-being. More than the last, it presumes an extraordinary fragility of the collegiate psyche, and therefore elevates the goal of protecting students from psychological harm. The ultimate aim, it seems, is to turn campuses into “safe spaces” where young adults are shielded from words and ideas that make some uncomfortable. And more than the last, this movement seeks to punish anyone who interferes with that aim, even accidentally. You might call this impulse vindictive protectiveness. It is creating a culture in which everyone must think twice before speaking up, lest they face charges of insensitivity, aggression, or worse.
We have been studying this development for a while now, with rising alarm. (Greg Lukianoff is a constitutional lawyer and the president and CEO of the Foundation for Individual Rights in Education, which defends free speech and academic freedom on campus, and has advocated for students and faculty involved in many of the incidents this article describes; Jonathan Haidt is a social psychologist who studies the American culture wars. The stories of how we each came to this subject can be read here.) The dangers that these trends pose to scholarship and to the quality of American universities are significant; we could write a whole essay detailing them. But in this essay we focus on a different question: What are the effects of this new protectiveness on the students themselves? Does it benefit the people it is supposed to help? What exactly are students learning when they spend four years or more in a community that polices unintentional slights, places warning labels on works of classic literature, and in many other ways conveys the sense that words can be forms of violence that require strict control by campus authorities, who are expected to act as both protectors and prosecutors?
There’s a saying common in education circles: Don’t teach students what to think; teach them how to think. The idea goes back at least as far as Socrates. Today, what we call the Socratic method is a way of teaching that fosters critical thinking, in part by encouraging students to question their own unexamined beliefs, as well as the received wisdom of those around them. Such questioning sometimes leads to discomfort, and even to anger, on the way to understanding.
But vindictive protectiveness teaches students to think in a very different way. It prepares them poorly for professional life, which often demands intellectual engagement with people and ideas one might find uncongenial or wrong. The harm may be more immediate, too. A campus culture devoted to policing speech and punishing speakers is likely to engender patterns of thought that are surprisingly similar to those long identified by cognitive behavioral therapists as causes of depression and anxiety. The new protectiveness may be teaching students to think pathologically.
How Did We Get Here?
It’s difficult to know exactly why vindictive protectiveness has burst forth so powerfully in the past few years. The phenomenon may be related to recent changes in the interpretation of federal antidiscrimination statutes (about which more later). But the answer probably involves generational shifts as well. Childhood itself has changed greatly during the past generation. Many Baby Boomers and Gen Xers can remember riding their bicycles around their hometowns, unchaperoned by adults, by the time they were 8 or 9 years old. In the hours after school, kids were expected to occupy themselves, getting into minor scrapes and learning from their experiences. But “free range” childhood became less common in the 1980s. The surge in crime from the ’60s through the early ’90s made Baby Boomer parents more protective than their own parents had been. Stories of abducted children appeared more frequently in the news, and in 1984, images of them began showing up on milk cartons. In response, many parents pulled in the reins and worked harder to keep their children safe.
The flight to safety also happened at school. Dangerous play structures were removed from playgrounds; peanut butter was banned from student lunches. After the 1999 Columbine massacre in Colorado, many schools cracked down on bullying, implementing “zero tolerance” policies. In a variety of ways, children born after 1980—the Millennials—got a consistent message from adults: life is dangerous, but adults will do everything in their power to protect you from harm, not just from strangers but from one another as well.
These same children grew up in a culture that was (and still is) becoming more politically polarized. Republicans and Democrats have never particularly liked each other, but survey data going back to the 1970s show that on average, their mutual dislike used to be surprisingly mild. Negative feelings have grown steadily stronger, however, particularly since the early 2000s. Political scientists call this process “affective partisan polarization,” and it is a very serious problem for any democracy. As each side increasingly demonizes the other, compromise becomes more difficult. A recent study shows that implicit or unconscious biases are now at least as strong across political parties as they are across races.
So it’s not hard to imagine why students arriving on campus today might be more desirous of protection and more hostile toward ideological opponents than in generations past. This hostility, and the self-righteousness fueled by strong partisan emotions, can be expected to add force to any moral crusade. A principle of moral psychology is that “morality binds and blinds.” Part of what we do when we make moral judgments is express allegiance to a team. But that can interfere with our ability to think critically. Acknowledging that the other side’s viewpoint has any merit is risky—your teammates may see you as a traitor.
Social media makes it extraordinarily easy to join crusades, express solidarity and outrage, and shun traitors. Facebook was founded in 2004, and since 2006 it has allowed children as young as 13 to join. This means that the first wave of students who spent all their teen years using Facebook reached college in 2011, and graduated from college only this year.
These first true “social-media natives” may be different from members of previous generations in how they go about sharing their moral judgments and supporting one another in moral campaigns and conflicts. We find much to like about these trends; young people today are engaged with one another, with news stories, and with prosocial endeavors to a greater degree than when the dominant technology was television. But social media has also fundamentally shifted the balance of power in relationships between students and faculty; the latter increasingly fear what students might do to their reputations and careers by stirring up online mobs against them.
We do not mean to imply simple causation, but rates of mental illness in young adults have been rising, both on campus and off, in recent decades. Some portion of the increase is surely due to better diagnosis and greater willingness to seek help, but most experts seem to agree that some portion of the trend is real. Nearly all of the campus mental-health directors surveyed in 2013 by the American College Counseling Association reported that the number of students with severe psychological problems was rising at their schools. The rate of emotional distress reported by students themselves is also high, and rising. In a 2014 survey by the American College Health Association, 54 percent of college students surveyed said that they had “felt overwhelming anxiety” in the past 12 months, up from 49 percent in the same survey just five years earlier. Students seem to be reporting more emotional crises; many seem fragile, and this has surely changed the way university faculty and administrators interact with them. The question is whether some of those changes might be doing more harm than good.
The Thinking Cure
For millennia, philosophers have understood that we don’t see life as it is; we see a version distorted by our hopes, fears, and other attachments. The Buddha said, “Our life is the creation of our mind.” Marcus Aurelius said, “Life itself is but what you deem it.” The quest for wisdom in many traditions begins with this insight. Early Buddhists and the Stoics, for example, developed practices for reducing attachments, thinking more clearly, and finding release from the emotional torments of normal mental life.
Cognitive behavioral therapy is a modern embodiment of this ancient wisdom. It is the most extensively studied nonpharmaceutical treatment of mental illness, and is used widely to treat depression, anxiety disorders, eating disorders, and addiction. It can even be of help to schizophrenics. No other form of psychotherapy has been shown to work for a broader range of problems. Studies have generally found that it is as effective as antidepressant drugs (such as Prozac) in the treatment of anxiety and depression. The therapy is relatively quick and easy to learn; after a few months of training, many patients can do it on their own. Unlike drugs, cognitive behavioral therapy keeps working long after treatment is stopped, because it teaches thinking skills that people can continue to use.
The goal is to minimize distorted thinking and see the world more accurately. You start by learning the names of the dozen or so most common cognitive distortions (such as overgeneralizing, discounting positives, and emotional reasoning; see the list at the bottom of this article). Each time you notice yourself falling prey to one of them, you name it, describe the facts of the situation, consider alternative interpretations, and then choose an interpretation of events more in line with those facts. Your emotions follow your new interpretation. In time, this process becomes automatic. When people improve their mental hygiene in this way—when they free themselves from the repetitive irrational thoughts that had previously filled so much of their consciousness—they become less depressed, anxious, and angry.
The parallel to formal education is clear: cognitive behavioral therapy teaches good critical-thinking skills, the sort that educators have striven for so long to impart. By almost any definition, critical thinking requires grounding one’s beliefs in evidence rather than in emotion or desire, and learning how to search for and evaluate evidence that might contradict one’s initial hypothesis. But does campus life today foster critical thinking? Or does it coax students to think in more-distorted ways?
Let’s look at recent trends in higher education in light of the distortions that cognitive behavioral therapy identifies. We will draw the names and descriptions of these distortions from David D. Burns’s popular book Feeling Good, as well as from the second edition of Treatment Plans and Interventions for Depression and Anxiety Disorders, by Robert L. Leahy, Stephen J. F. Holland, and Lata K. McGinn.
Higher Education’s Embrace of “Emotional Reasoning”
Burns defines emotional reasoning as assuming “that your negative emotions necessarily reflect the way things really are: ‘I feel it, therefore it must be true.’ ” Leahy, Holland, and McGinn define it as letting “your feelings guide your interpretation of reality.” But, of course, subjective feelings are not always trustworthy guides; unrestrained, they can cause people to lash out at others who have done nothing wrong. Therapy often involves talking yourself down from the idea that each of your emotional responses represents something true or important.
Emotional reasoning dominates many campus debates and discussions. A claim that someone’s words are “offensive” is not just an expression of one’s own subjective feeling of offendedness. It is, rather, a public charge that the speaker has done something objectively wrong. It is a demand that the speaker apologize or be punished by some authority for committing an offense.
There have always been some people who believe they have a right not to be offended. Yet throughout American history—from the Victorian era to the free-speech activism of the 1960s and ’70s—radicals have pushed boundaries and mocked prevailing sensibilities. Sometime in the 1980s, however, college campuses began to focus on preventing offensive speech, especially speech that might be hurtful to women or minority groups. The sentiment underpinning this goal was laudable, but it quickly produced some absurd results.
Among the most famous early examples was the so-called water-buffalo incident at the University of Pennsylvania. In 1993, the university charged an Israeli-born student with racial harassment after he yelled “Shut up, you water buffalo!” to a crowd of black sorority women that was making noise at night outside his dorm-room window. Many scholars and pundits at the time could not see how the term water buffalo (a rough translation of a Hebrew insult for a thoughtless or rowdy person) was a racial slur against African Americans, and as a result, the case became international news.
Claims of a right not to be offended have continued to arise since then, and universities have continued to privilege them. In a particularly egregious 2008 case, for instance, Indiana University–Purdue University at Indianapolis found a white student guilty of racial harassment for reading a book titled Notre Dame vs. the Klan. The book honored student opposition to the Ku Klux Klan when it marched on Notre Dame in 1924. Nonetheless, the picture of a Klan rally on the book’s cover offended at least one of the student’s co-workers (he was a janitor as well as a student), and that was enough for a guilty finding by the university’s Affirmative Action Office.
These examples may seem extreme, but the reasoning behind them has become more commonplace on campus in recent years. Last year, at the University of St. Thomas, in Minnesota, an event called Hump Day, which would have allowed people to pet a camel, was abruptly canceled. Students had created a Facebook group where they protested the event for animal cruelty, for being a waste of money, and for being insensitive to people from the Middle East. The inspiration for the camel had almost certainly come from a popular TV commercial in which a camel saunters around an office on a Wednesday, celebrating “hump day”; it was devoid of any reference to Middle Eastern peoples. Nevertheless, the group organizing the event announced on its Facebook page that the event would be canceled because the “program [was] dividing people and would make for an uncomfortable and possibly unsafe environment.”
Because there is a broad ban in academic circles on “blaming the victim,” it is generally considered unacceptable to question the reasonableness (let alone the sincerity) of someone’s emotional state, particularly if those emotions are linked to one’s group identity. The thin argument “I’m offended” becomes an unbeatable trump card. This leads to what Jonathan Rauch, a contributing editor at this magazine, calls the “offendedness sweepstakes,” in which opposing parties use claims of offense as cudgels. In the process, the bar for what we consider unacceptable speech is lowered further and further.
Since 2013, new pressure from the federal government has reinforced this trend. Federal antidiscrimination statutes regulate on-campus harassment and unequal treatment based on sex, race, religion, and national origin. Until recently, the Department of Education’s Office for Civil Rights acknowledged that speech must be “objectively offensive” before it could be deemed actionable as sexual harassment—it would have to pass the “reasonable person” test. To be prohibited, the office wrote in 2003, allegedly harassing speech would have to go “beyond the mere expression of views, words, symbols or thoughts that some person finds offensive.”
But in 2013, the Departments of Justice and Education greatly broadened the definition of sexual harassment to include verbal conduct that is simply “unwelcome.” Out of fear of federal investigations, universities are now applying that standard—defining unwelcome speech as harassment—not just to sex, but to race, religion, and veteran status as well. Everyone is supposed to rely upon his or her own subjective feelings to decide whether a comment by a professor or a fellow student is unwelcome, and therefore grounds for a harassment claim. Emotional reasoning is now accepted as evidence.
If our universities are teaching students that their emotions can be used effectively as weapons—or at least as evidence in administrative proceedings—then they are teaching students to nurture a kind of hypersensitivity that will lead them into countless drawn-out conflicts in college and beyond. Schools may be training students in thinking styles that will damage their careers and friendships, along with their mental health.
Fortune-Telling and Trigger Warnings
Burns defines fortune-telling as “anticipat[ing] that things will turn out badly” and feeling “convinced that your prediction is an already-established fact.” Leahy, Holland, and McGinn define it as “predict[ing] the future negatively” or seeing potential danger in an everyday situation. The recent spread of demands for trigger warnings on reading assignments with provocative content is an example of fortune-telling.
The idea that words (or smells or any sensory input) can trigger searing memories of past trauma—and intense fear that it may be repeated—has been around at least since World War I, when psychiatrists began treating soldiers for what is now called post-traumatic stress disorder. But explicit trigger warnings are believed to have originated much more recently, on message boards in the early days of the Internet. Trigger warnings became particularly prevalent in self-help and feminist forums, where they allowed readers who had suffered from traumatic events like sexual assault to avoid graphic content that might trigger flashbacks or panic attacks. Search-engine trends indicate that the phrase broke into mainstream use online around 2011, spiked in 2014, and reached an all-time high in 2015. The use of trigger warnings on campus appears to have followed a similar trajectory; seemingly overnight, students at universities across the country have begun demanding that their professors issue warnings before covering material that might evoke a negative emotional response.
In 2013, a task force composed of administrators, students, recent alumni, and one faculty member at Oberlin College, in Ohio, released an online resource guide for faculty (subsequently retracted in the face of faculty pushback) that included a list of topics warranting trigger warnings. These topics included classism and privilege, among many others. The task force recommended that materials that might trigger negative reactions among students be avoided altogether unless they “contribute directly” to course goals, and suggested that works that were “too important to avoid” be made optional.
It’s hard to imagine how novels illustrating classism and privilege could provoke or reactivate the kind of terror that is typically implicated in PTSD. Rather, trigger warnings are sometimes demanded for a long list of ideas and attitudes that some students find politically offensive, in the name of preventing other students from being harmed. This is an example of what psychologists call “motivated reasoning”—we spontaneously generate arguments for conclusions we want to support. Once you find something hateful, it is easy to argue that exposure to the hateful thing could traumatize some other people. You believe that you know how others will react, and that their reaction could be devastating. Preventing that devastation becomes a moral obligation for the whole community. Books for which students have called publicly for trigger warnings within the past couple of years include Virginia Woolf’s Mrs. Dalloway (at Rutgers, for “suicidal inclinations”) and Ovid’s Metamorphoses (at Columbia, for sexual assault).
Jeannie Suk’s New Yorker essay described the difficulties of teaching rape law in the age of trigger warnings. Some students, she wrote, have pressured their professors to avoid teaching the subject in order to protect themselves and their classmates from potential distress. Suk compares this to trying to teach “a medical student who is training to be a surgeon but who fears that he’ll become distressed if he sees or handles blood.”
However, there is a deeper problem with trigger warnings. According to the most-basic tenets of psychology, the very idea of helping people with anxiety disorders avoid the things they fear is misguided. A person who is trapped in an elevator during a power outage may panic and think she is going to die. That frightening experience can change neural connections in her amygdala, leading to an elevator phobia. If you want this woman to retain her fear for life, you should help her avoid elevators.
But if you want to help her return to normalcy, you should take your cues from Ivan Pavlov and guide her through a process known as exposure therapy. You might start by asking the woman to merely look at an elevator from a distance—standing in a building lobby, perhaps—until her apprehension begins to subside. If nothing bad happens while she’s standing in the lobby—if the fear is not “reinforced”—then she will begin to learn a new association: elevators are not dangerous. (This reduction in fear during exposure is called habituation.) Then, on subsequent days, you might ask her to get closer, and on later days to push the call button, and eventually to step in and go up one floor. This is how the amygdala can get rewired again to associate a previously feared situation with safety or normalcy.
Students who call for trigger warnings may be correct that some of their peers are harboring memories of trauma that could be reactivated by course readings. But they are wrong to try to prevent such reactivations. Students with PTSD should of course get treatment, but they should not try to avoid normal life, with its many opportunities for habituation. Classroom discussions are safe places to be exposed to incidental reminders of trauma (such as the word violate). A discussion of violence is unlikely to be followed by actual violence, so it is a good way to help students change the associations that are causing them discomfort. And they’d better get their habituation done in college, because the world beyond college will be far less willing to accommodate requests for trigger warnings and opt-outs.
The expansive use of trigger warnings may also foster unhealthy mental habits in the vastly larger group of students who do not suffer from PTSD or other anxiety disorders. People acquire their fears not just from their own past experiences, but from social learning as well. If everyone around you acts as though something is dangerous—elevators, certain neighborhoods, novels depicting racism—then you are at risk of acquiring that fear too. The psychiatrist Sarah Roff pointed this out last year in an online article for The Chronicle of Higher Education. “One of my biggest concerns about trigger warnings,” Roff wrote, “is that they will apply not just to those who have experienced trauma, but to all students, creating an atmosphere in which they are encouraged to believe that there is something dangerous or damaging about discussing difficult aspects of our history.”
The new climate is slowly being institutionalized, and is affecting what can be said in the classroom, even as a basis for discussion or debate.
In an article published last year by Inside Higher Ed, seven humanities professors wrote that the trigger-warning movement was “already having a chilling effect on [their] teaching and pedagogy.” They reported their colleagues’ receiving “phone calls from deans and other administrators investigating student complaints that they have included ‘triggering’ material in their courses, with or without warnings.” A trigger warning, they wrote, “serves as a guarantee that students will not experience unexpected discomfort and implies that if they do, a contract has been broken.” When students come to expect trigger warnings for any material that makes them uncomfortable, the easiest way for faculty to stay out of trouble is to avoid material that might upset the most sensitive student in the class.
Magnification, Labeling, and Microaggressions
Burns defines magnification as “exaggerat[ing] the importance of things,” and Leahy, Holland, and McGinn define labeling as “assign[ing] global negative traits to yourself and others.” The recent collegiate trend of uncovering allegedly racist, sexist, classist, or otherwise discriminatory microaggressions doesn’t incidentally teach students to focus on small or accidental slights. Its purpose is to get students to focus on them and then relabel the people who have made such remarks as aggressors.
The term microaggression originated in the 1970s and referred to subtle, often unconscious racist affronts. The definition has expanded in recent years to include anything that can be perceived as discriminatory on virtually any basis. For example, in 2013, a student group at UCLA staged a sit-in during a class taught by Val Rust, an education professor. The group read a letter aloud expressing their concerns about the campus’s hostility toward students of color. Although Rust was not explicitly named, the group quite clearly criticized his teaching as microaggressive. In the course of correcting his students’ grammar and spelling, Rust had noted that a student had wrongly capitalized the first letter of the word indigenous. Lowercasing the capital I was an insult to the student and her ideology, the group claimed.
Even joking about microaggressions can be seen as an aggression, warranting punishment. Last fall, Omar Mahmood, a student at the University of Michigan, wrote a satirical column for a conservative student publication, The Michigan Review, poking fun at what he saw as a campus tendency to perceive microaggressions in just about anything. Mahmood was also employed at the campus newspaper, The Michigan Daily. The Daily’s editors said that the way Mahmood had “satirically mocked the experiences of fellow Daily contributors and minority communities on campus … created a conflict of interest.” The Daily terminated Mahmood after he described the incident to two Web sites, The College Fix and The Daily Caller. A group of women later vandalized Mahmood’s doorway with eggs, hot dogs, gum, and notes with messages such as “Everyone hates you, you violent prick.” When speech comes to be seen as a form of violence, vindictive protectiveness can justify a hostile, and perhaps even violent, response.
In March, the student government at Ithaca College, in upstate New York, went so far as to propose the creation of an anonymous microaggression-reporting system. Student sponsors envisioned some form of disciplinary action against “oppressors” engaged in belittling speech. One of the sponsors of the program said that while “not … every instance will require trial or some kind of harsh punishment,” she wanted the program to be “record-keeping but with impact.”
Surely people make subtle or thinly veiled racist or sexist remarks on college campuses, and it is right for students to raise questions and initiate discussions about such cases. But the increased focus on microaggressions coupled with the endorsement of emotional reasoning is a formula for a constant state of outrage, even toward well-meaning speakers trying to engage in genuine discussion.
What are we doing to our students if we encourage them to develop extra-thin skin in the years just before they leave the cocoon of adult protection and enter the workforce? Would they not be better prepared to flourish if we taught them to question their own emotional reactions, and to give people the benefit of the doubt?
Teaching Students to Catastrophize and Have Zero Tolerance
Burns defines catastrophizing as a kind of magnification that turns “commonplace negative events into nightmarish monsters.” Leahy, Holland, and McGinn define it as believing “that what has happened or will happen” is “so awful and unbearable that you won’t be able to stand it.” Requests for trigger warnings involve catastrophizing, but this way of thinking colors other areas of campus thought as well.
Catastrophizing rhetoric about physical danger is employed by campus administrators more commonly than you might think—sometimes, it seems, with cynical ends in mind. For instance, last year administrators at Bergen Community College, in New Jersey, suspended Francis Schmidt, a professor, after he posted a picture of his daughter on his Google+ account. The photo showed her in a yoga pose, wearing a T-shirt that read I will take what is mine with fire & blood, a quote from the HBO show Game of Thrones. Schmidt had filed a grievance against the school about two months earlier after being passed over for a sabbatical. The quote was interpreted as a threat by a campus administrator, who received a notification after Schmidt posted the picture; it had been sent, automatically, to a whole group of contacts. According to Schmidt, a Bergen security official present at a subsequent meeting between administrators and Schmidt thought the word fire could refer to AK-47s.
Then there is the eight-year legal saga at Valdosta State University, in Georgia, where a student was expelled for protesting the construction of a parking garage by posting an allegedly “threatening” collage on Facebook. The collage described the proposed structure as a “memorial” parking garage—a joke referring to a claim by the university president that the garage would be part of his legacy. The president interpreted the collage as a threat against his life.
It should be no surprise that students are exhibiting similar sensitivity. At the University of Central Florida in 2013, for example, Hyung-il Jung, an accounting instructor, was suspended after a student reported that Jung had made a threatening comment during a review session. Jung explained to the Orlando Sentinel that the material he was reviewing was difficult, and he’d noticed the pained look on students’ faces, so he made a joke. “It looks like you guys are being slowly suffocated by these questions,” he recalled saying. “Am I on a killing spree or what?”
After the student reported Jung’s comment, a group of nearly 20 others e-mailed the UCF administration explaining that the comment had clearly been made in jest. Nevertheless, UCF suspended Jung from all university duties and demanded that he obtain written certification from a mental-health professional that he was “not a threat to [himself] or to the university community” before he would be allowed to return to campus.
All of these actions teach a common lesson: smart people do, in fact, overreact to innocuous speech, make mountains out of molehills, and seek punishment for anyone whose words make anyone else feel uncomfortable.
Mental Filtering and Disinvitation Season
As Burns defines it, mental filtering is “pick[ing] out a negative detail in any situation and dwell[ing] on it exclusively, thus perceiving that the whole situation is negative.” Leahy, Holland, and McGinn refer to this as “negative filtering,” which they define as “focus[ing] almost exclusively on the negatives and seldom notic[ing] the positives.” When applied to campus life, mental filtering allows for simpleminded demonization.
Students and faculty members in large numbers modeled this cognitive distortion during 2014’s “disinvitation season.” That’s the time of year—usually early spring—when commencement speakers are announced and when students and professors demand that some of those speakers be disinvited because of things they have said or done. According to data compiled by the Foundation for Individual Rights in Education, since 2000, at least 240 campaigns have been launched at U.S. universities to prevent public figures from appearing at campus events; most of them have occurred since 2009.
Consider two of the most prominent disinvitation targets of 2014: former U.S. Secretary of State Condoleezza Rice and the International Monetary Fund’s managing director, Christine Lagarde. Rice was the first black female secretary of state; Lagarde was the first woman to become finance minister of a G8 country and the first female head of the IMF. Both speakers could have been seen as highly successful role models for female students, and Rice for minority students as well. But the critics, in effect, discounted any possibility of something positive coming from those speeches.
Members of an academic community should of course be free to raise questions about Rice’s role in the Iraq War or to look skeptically at the IMF’s policies. But should dislike of part of a person’s record disqualify her altogether from sharing her perspectives?
If campus culture conveys the idea that visitors must be pure, with résumés that never offend generally left-leaning campus sensibilities, then higher education will have taken a further step toward intellectual homogeneity and the creation of an environment in which students rarely encounter diverse viewpoints. And universities will have reinforced the belief that it’s okay to filter out the positive. If students graduate believing that they can learn nothing from people they dislike or from those with whom they disagree, we will have done them a great intellectual disservice.
What Can We Do Now?
Attempts to shield students from words, ideas, and people that might cause them emotional discomfort are bad for the students. They are bad for the workplace, which will be mired in unending litigation if student expectations of safety are carried forward. And they are bad for American democracy, which is already paralyzed by worsening partisanship. When the ideas, values, and speech of the other side are seen not just as wrong but as willfully aggressive toward innocent victims, it is hard to imagine the kind of mutual respect, negotiation, and compromise that are needed to make politics a positive-sum game.
Rather than trying to protect students from words and ideas that they will inevitably encounter, colleges should do all they can to equip students to thrive in a world full of words and ideas that they cannot control. One of the great truths taught by Buddhism (and Stoicism, Hinduism, and many other traditions) is that you can never achieve happiness by making the world conform to your desires. But you can master your desires and habits of thought. This, of course, is the goal of cognitive behavioral therapy. With this in mind, here are some steps that might help reverse the tide of bad thinking on campus.
The biggest single step in the right direction does not involve faculty or university administrators, but rather the federal government, which should release universities from their fear of unreasonable investigation and sanctions by the Department of Education. Congress should define peer-on-peer harassment according to the Supreme Court’s definition in the 1999 case Davis v. Monroe County Board of Education. The Davis standard holds that a single comment or thoughtless remark by a student does not equal harassment; harassment requires a pattern of objectively offensive behavior by one student that interferes with another student’s access to education. Establishing the Davis standard would help eliminate universities’ impulse to police their students’ speech so carefully.
Universities themselves should try to raise consciousness about the need to balance freedom of speech with the need to make all students feel welcome. Talking openly about such conflicting but important values is just the sort of challenging exercise that any diverse but tolerant community must learn to do. Restrictive speech codes should be abandoned.
Universities should also officially and strongly discourage trigger warnings. They should endorse the American Association of University Professors’ report on these warnings, which notes, “The presumption that students need to be protected rather than challenged in a classroom is at once infantilizing and anti-intellectual.” Professors should be free to use trigger warnings if they choose to do so, but by explicitly discouraging the practice, universities would help fortify the faculty against student requests for such warnings.
Finally, universities should rethink the skills and values they most want to impart to their incoming students. At present, many freshman-orientation programs try to raise student sensitivity to a nearly impossible level. Teaching students to avoid giving unintentional offense is a worthy goal, especially when the students come from many different cultural backgrounds. But students should also be taught how to live in a world full of potential offenses. Why not teach incoming students how to practice cognitive behavioral therapy? Given high and rising rates of mental illness, this simple step would be among the most humane and supportive things a university could do. The cost and time commitment could be kept low: a few group training sessions could be supplemented by Web sites or apps. But the outcome could pay dividends in many ways. For example, a shared vocabulary about reasoning, common distortions, and the appropriate use of evidence to draw conclusions would facilitate critical thinking and real debate. It would also tone down the perpetual state of outrage that seems to engulf some colleges these days, allowing students’ minds to open more widely to new ideas and new people. A greater commitment to formal, public debate on campus—and to the assembly of a more politically diverse faculty—would further serve that goal.
Thomas Jefferson, upon founding the University of Virginia, said:
This institution will be based on the illimitable freedom of the human mind. For here we are not afraid to follow truth wherever it may lead, nor to tolerate any error so long as reason is left free to combat it.
We believe that this is still—and will always be—the best attitude for American universities. Faculty, administrators, students, and the federal government all have a role to play in restoring universities to their historic mission.
Common Cognitive Distortions
A partial list from Robert L. Leahy, Stephen J. F. Holland, and Lata K. McGinn’s Treatment Plans and Interventions for Depression and Anxiety Disorders (2012).
1. Mind reading. You assume that you know what people think without having sufficient evidence of their thoughts. “He thinks I’m a loser.”
2. Fortune-telling. You predict the future negatively: things will get worse, or there is danger ahead. “I’ll fail that exam,” or “I won’t get the job.”
3. Catastrophizing. You believe that what has happened or will happen will be so awful and unbearable that you won’t be able to stand it. “It would be terrible if I failed.”
4. Labeling. You assign global negative traits to yourself and others. “I’m undesirable,” or “He’s a rotten person.”
5. Discounting positives. You claim that the positive things you or others do are trivial. “That’s what wives are supposed to do—so it doesn’t count when she’s nice to me,” or “Those successes were easy, so they don’t matter.”
6. Negative filtering. You focus almost exclusively on the negatives and seldom notice the positives. “Look at all of the people who don’t like me.”
7. Overgeneralizing. You perceive a global pattern of negatives on the basis of a single incident. “This generally happens to me. I seem to fail at a lot of things.”
8. Dichotomous thinking. You view events or people in all-or-nothing terms. “I get rejected by everyone,” or “It was a complete waste of time.”
9. Blaming. You focus on the other person as the source of your negative feelings, and you refuse to take responsibility for changing yourself. “She’s to blame for the way I feel now,” or “My parents caused all my problems.”
10. What if? You keep asking a series of questions about “what if” something happens, and you fail to be satisfied with any of the answers. “Yeah, but what if I get anxious?,” or “What if I can’t catch my breath?”
11. Emotional reasoning. You let your feelings guide your interpretation of reality. “I feel depressed; therefore, my marriage is not working out.”
12. Inability to disconfirm. You reject any evidence or arguments that might contradict your negative thoughts. For example, when you have the thought I’m unlovable, you reject as irrelevant any evidence that people like you. Consequently, your thought cannot be refuted. “That’s not the real issue. There are deeper problems. There are other factors.”
Greg Lukianoff is the president and CEO of the Foundation for Individual Rights in Education and the author of Unlearning Liberty. He is the co-author of The Coddling of the American Mind, which originated as a September 2015 Atlantic story.
Jonathan Haidt is a social psychologist at the New York University Stern School of Business. He is the author of The Righteous Mind and the co-author of The Coddling of the American Mind, which originated as a September 2015 Atlantic story.
Is positive psychology all it’s cracked up to be?
Just over 20 years old, this field has captivated the world with its hopeful promises — and drawn critics for its moralizing, mysticism, and serious commercialization.
By Joseph Smith
The story of positive psychology starts, its founder often says, in 1997 in his rose garden.
Martin Seligman had just been elected head of the American Psychological Association and was in search of a transformational theme for his presidency. While weeding in his garden one day with his young daughter, Seligman found himself distracted and frustrated as Nikki, then 5, threw flowers into the air and giggled. Seligman yelled at her to stop, at which point Nikki took the professor aside. She reminded him how, from ages 3 to 5, she had been a whiner, but on her fifth birthday, had made a conscious decision to stop. If she could change herself with an act of will, couldn’t Daddy stop being such a grouch?
Seligman had an epiphany. What if every person was encouraged to nurture his or her character strengths, as Nikki so precociously had, rather than scolded into fixing his or her shortcomings?
He convened teams of the nation’s best psychologists to formulate a plan to reorient the entire discipline of psychology away from mostly treating mental illness and toward human flourishing. Then, he used his bully pulpit as the psychology association’s president to promote it. With Seligman’s 1998 inaugural APA presidential address, positive psychology was born.
Seligman told the crowd that psychology had lost its way. It had “moved too far away from its original roots, which were to make the lives of all people more fulfilling and productive,” he said, “and too much toward the important, but not all-important, area of curing mental illness.”
Seligman’s own experience made this deficit very clear. He had become famous, as he would later write in his autobiography, for his work on what he called “the really bad stuff — helplessness, depression, panic,” and that this had made him perfectly placed to “see and name the missing piece — the positive.”
The APA leader called on his colleagues to join him to effect a sea change in psychology and to create a science that investigates and nurtures the best human qualities: a science of strengths, virtues, and happiness. What Seligman named “positive psychology,” using a term coined in 1954 by humanistic psychologist Abraham Maslow, promises personal transformation through the redemptive power of devotional practices: counting blessings, gratitude, forgiveness, and meditation. And it is expressly designed to build moral character by cultivating the six cardinal virtues of wisdom, courage, justice, humanity, temperance, and transcendence.
Today, Seligman is the foremost advocate of the science of well-being. He had made his name in academia in the 1970s and ’80s for discovering the phenomenon of “learned helplessness,” in which individuals become conditioned to believe that negative events are inescapable, even when those events are within their control. In 1991, he came to the public’s attention with his book about combating these kinds of processes, Learned Optimism, which he claimed was the world’s first “evidence-based” self-help book.
But it was when Seligman shifted toward the psychology of happiness, with the 2002 publication of Authentic Happiness and the 2011 follow-up Flourish, that he started to become a household name. The theory and practice of positive psychology caught fire in the public’s imagination, thanks in part to Seligman’s informal prose and optimistic message. Now, Seligman’s TED talk has been viewed more than 5 million times online; he has met heads of government and religious leaders, including the UK’s former prime minister David Cameron and the Dalai Lama, and has appeared on shows such as Larry King Now.
Despite his association with the science of happiness, Seligman is by his own admission brusque, dismissive, and a grouch. He casts himself as a maverick, butting heads with the academic establishment, and yet he’s the ultimate insider — probably the best-known, best-funded, and most influential psychologist alive. As a scientist, he insists on the value-neutral purity of the research he directs, yet presides over a movement that even its fans say seems to have some of the characteristics of a religion.
To many of its followers, the movement is a godsend, answering a need to belong to something larger than themselves and holding out the chance of better, fuller lives through truly effective techniques backed by science. To its critics, that science is undercut by positive psychology’s moralizing, its mysticism, and its money-spinning commercialization. But how valid are these concerns, and do they matter if positive psychology makes people happy?
Positive psychology has grown at an explosive rate since Seligman ushered it into the public consciousness, surprising even its founder. The field has attracted hundreds of millions of dollars in research grants. Its 2019 World Congress was attended by 1,600 delegates from 70 countries. It inspires tens of thousands of research papers, endless reams of popular books, and supports armies of therapists, coaches, and mentors.
Its institutional uptake has been no less impressive. Within two years of the launch of the “Battlemind” program in 2007, more than a million US soldiers had been trained in positive psychology’s techniques of resilience. Scores of K-12 schools have adopted its principles. In 2018, Yale University announced that an astonishing one-quarter of its undergraduates had enrolled in its course on happiness.
Since that inaugural presidential address in 1998, Seligman has distanced positive psychology from its original focus. At its inception, the field sought to map the paths that end in authentic fulfillment. But with Flourish, Seligman changed course. Happiness, he declared, is not the only goal of human existence, as he’d previously thought.
The purpose of life, he said, is well-being, or flourishing, which includes objective, external components such as relationships and achievements. The road to flourishing, moreover, is through moral action: It is achieved by practicing six virtues that Seligman’s research says are enshrined in all the world’s great intellectual traditions.
This shift toward moral action hasn’t softened the critical response to positive psychology’s lofty aims and pragmatic methods. Philosophers such as Chapman University’s Mike W. Martin say it has left the field of science and entered the realm of ethics — that it is no longer a purely factual enterprise, but is now concerned with promoting particular values.
But that’s not the only critique. Others decry positive psychology’s commodification and commercial cheapening by the thousands of coaches, consultants, and therapists who have jumped on the bandwagon with wild claims for their lucrative products.
In several high-profile cases, serious flaws have been found in positive psychology’s science, not just at the hysterical fringe, but in the work of big stars including Seligman himself. There are worries about its replicability, its dependence on unreliable self-reports, and the sense that it can be used to prescribe one thing and also its opposite — for example, that well-being consists in living in the moment, but also in being future-oriented.
And for a science, positive psychology can often sound a lot like religion. Consider its trappings: It has a charismatic leader and legions of rapturous followers. It has a year zero and a creation myth that begins with an epiphany.
“I have no less mystical way to put it,” Seligman wrote in Flourish. “Positive psychology called to me just as the burning bush called to Moses.”
Seligman’s inclusion of material achievement in the components of happiness has also raised eyebrows. He has theorized that people who have not achieved some degree of mastery and success in the world can’t be said to be flourishing. He once described a “thirty-two-year-old Harvard University summa in mathematics who is fluent in Russian and Japanese and runs her own hedge fund” as a “poster child for positive psychology.” But this can make well-being seem exclusive and out of reach, since accomplishment of this kind is not possible for all, or even most.
Professors Edgar Cabanas and Eva Illouz, authors of the 2019 book Manufacturing Happy Citizens, have accused positive psychology of advancing a Western, ethnocentric creed of individualism. At its core is the idea that we can achieve well-being by our own efforts, by showing determination and grit. But what about social and systemic factors that, for example, keep people in poverty? What about physical illness and undeserved tragedy — are people who are miserable in these circumstances just not trying hard enough?
“Positive psychology gives the impression you can be well and happy just by thinking the right thoughts. It encourages a culture of blaming the victim,” said professor Jim Coyne, a former colleague and fierce critic of Seligman.
Then there are positive psychology’s financial ties to religion. The Templeton Foundation, originally established to promote evangelical Christianity and still pursuing goals related to religious understanding, is Seligman’s biggest private sponsor and has granted him tens of millions of dollars. It partly funded his research into universal values, helped establish the Positive Psychology Center at Seligman’s University of Pennsylvania, and endows psychology’s richest prize, the $100,000 Templeton Prize for Positive Psychology. The foundation has, cultural critic Ruth Whippman wrote in her book America the Anxious, “played a huge role in shaping the philosophical role positive psychology has taken.”
We should find this scandalous, Coyne says. “It’s outrageous that a religious organization — or any vested interest — can determine the course of scientific ‘progress,’ that it can dictate what science gets done.”
Despite the criticism, positive psychology remains incredibly popular. Books with “happiness” in the title fly off the shelves, and people sign up for seminars and courses and lectures in droves. We all seem to want what positive psychology is selling. What is it that makes this movement so compelling?
Sonja Lyubomirsky, professor of psychology at the University of California Riverside and an early star of the movement, told me that positive psychology was born at a time of peace and plenty. Many today “have the luxury to reflect and work on their own well-being,” she says. “When people are struggling to get their basic needs met they don’t have the time or resources or energy or motivation to consider whether they are happy.”
The 2008 financial crisis, though, seems to challenge this hypothesis. Suddenly, the luxury to reflect evaporated for vast numbers of people. But analysis by social scientists shows that the number of academic papers published on positive psychology and happiness continued to rise.
That’s led skeptics such as Coyne, Cabanas, and Illouz to suggest that positive psychology’s popularity today is less a question of demand than supply. There’s so much money in the movement now that it is propelled by the energy and entrepreneurial vim of the coaches, consultants, writers, and academics who make livings from it.
It’s also possible, however, that positive psychology’s entanglement with religion may contribute to its popularity. As Vox recently reported, secularism is on the rise in the US. But the propensity to believe in the divine runs very deep in the human psyche. We are, psychologists such as Bruce Hood say, hard-wired for religion. Positive psychology’s spiritual orientation makes it the perfect receptacle for our displaced religious impulses. Critics such as Coyne claim this is by design. The missionary tone, being called like Moses — these are all part of Seligman’s vision for positive psychology.
“Seligman frequently makes claims of mystical intervention that many of us dismiss as marketing,” Coyne told me.
But does the marketing matter if positive psychology helps people lead better lives? Skeptics, once again, question whether the benefits of positive psychology are really as great as claimed. Cabanas said that there “is no major conclusion in positive psychology that has not been challenged, modified or even rejected.” Yet the fact of positive psychology’s meteoric rise cannot be ignored; Seligman and his colleagues are very clearly doing something right, something that gives hope, optimism, and perhaps even happiness to millions of its consumers.
When I asked Seligman about the field’s connection to religion, he said most practitioners “would dissent from my strange beliefs,” and that those beliefs were his own. He referred me to the final chapter of his autobiography, in which he describes the death of his friend and mentor Jack Templeton, whose father’s foundation has funded Seligman’s research.
Seligman was bedridden at the time, but after reading a tract on positive Christianity, he had a “command hallucination” to rise and attend the evangelical memorial service.
The tract read: “Religion and science are opposed, but only in the same sense in which my thumb and forefinger are opposed — and between the two, one can grasp everything.”
Americans are the unhappiest they've been in 50 years, poll finds
Just 14% of U.S. adults say they're very happy.
June 16, 2020, 8:34 AM CDT / Source: Associated Press
By Associated Press
ST. PETERSBURG, Fla. — Spoiler alert: 2020 has been rough on the American psyche. Folks in the U.S. are more unhappy today than they've been in nearly 50 years.
This bold — yet unsurprising — conclusion comes from the COVID Response Tracking Study, conducted by NORC at the University of Chicago. It finds that just 14% of American adults say they're very happy, down from 31% who said the same in 2018. That year, 23% said they'd often or sometimes felt isolated in recent weeks. Now, 50% say that.
The survey, conducted in late May, draws on nearly a half-century of research from the General Social Survey, which has collected data on American attitudes and behaviors at least every other year since 1972. Never before in that survey had fewer than 29% of Americans called themselves very happy.
Most of the new survey’s interviews were completed before the death of George Floyd touched off nationwide protests and a global conversation about race and police brutality, adding to the feelings of stress and loneliness Americans were already facing from the coronavirus outbreak — especially for black Americans.
Lexi Walker, a 47-year-old professional fiduciary who lives near Greenville, South Carolina, has felt anxious and depressed for long stretches of this year. She moved back to South Carolina late in 2019, then her cat died. Her father passed away in February. Just when she thought she’d get out and socialize in an attempt to heal from her grief, the pandemic hit.
“It’s been one thing after another,” Walker said. “This is very hard. The worst thing about this for me, after so much, I don’t know what’s going to happen.”
Among other findings from the new poll about life in the pandemic:
“It isn’t as high as it could be," she said. “People have figured out a way to connect with others. It’s not satisfactory, but people are managing to some extent.”
The new poll found that there haven't been significant changes in Americans’ assessment of their families' finances since 2018 and that Americans' satisfaction with their families’ ability to get along financially was as high as it's been over nearly five decades.
Jonathan Berney, of Austin, Texas, said that the pandemic — and his resulting layoff as a digital marketing manager for a law firm — caused him to reevaluate everything in his life. While he admits that he’s not exactly happy now, that’s led to another uncomfortable question: Was he truly happy before the pandemic?
“2020 just fast forwarded a spiritual decay. When things are good, you don’t tend to look inwards,” he said, adding that he was living and working in the Miami area before the pandemic hit. As Florida dealt with the virus, his girlfriend left him and he decided to leave for Austin. “I probably just wasn’t a nice guy to be around from all the stress and anxiety. But this forced an existential crisis.”
Berney, who is looking for work, said things have improved from those early, dark days of the pandemic. He’s still job hunting but has a little savings to live on. He said he's trying to kayak more and center himself so he’s better prepared to deal with any future downturn in events.
Reimagining happiness is almost hard-wired into Americans’ DNA, said Sonja Lyubomirsky, a psychology professor at the University of California, Riverside.
“Human beings are remarkably resilient. There’s lots and lots of evidence that we adapt to everything. We move forward,” she said, adding that she’s done happiness studies since the pandemic started and found that some people are slightly happier than last year.
Melinda Hartline, of Tampa, who was laid off from her job in public relations in March, said she was in a depressed daze those first few weeks of unemployment. Then she started to bike and play tennis and enrolled in a college course on post-crisis leadership.
Today, she’s worried about the state of the world and the economy, and she wonders when she can see her kids and grandkids who live on the West Coast — but she also realizes that things could be a lot worse.
“Anything can happen. And you have to be prepared,” she said. “Whether it’s your health, your finances, whether it’s the world. You have to be prepared. And always maintain that positive mental attitude. It’s going to get you through it.”
The survey of 2,279 adults was conducted May 21-29 with funding from the National Science Foundation. It uses a sample drawn from NORC’s probability-based AmeriSpeak Panel, which is designed to be representative of the U.S. population. The margin of sampling error for all respondents is plus or minus 2.9 percentage points.
Published June 7th, 2020
What we can learn from 'untranslatable' illnesses
By Zaria Gorvett
From an enigmatic rage disorder to a sickness of overthinking, there are some mental illnesses you only get in certain cultures. Why? And what can they teach us?
“DO NOT FEAR KORO,” screamed the headline in the Straits Times newspaper on November 7, 1967. In the preceding days, a peculiar phenomenon had swept across Singapore. Thousands of men had spontaneously become convinced that their penises were shrinking away – and that the loss would eventually kill them.
Mass hysteria had quickly taken hold. Men desperately tried to hold onto their genitals, using whatever they had to hand – rubber bands, clothes pegs, string. Unscrupulous local doctors cashed in, recommending various injections and traditional remedies.
The word on the street was that the sudden penis withering was caused by something the men had eaten. Specifically, the locals were suspicious of meat from pigs that had been vaccinated in a programme the government had imposed on Singaporean farms. Pork sales quickly plummeted.
Though public health officials scrambled to contain the hysteria outbreak, explaining that it was caused by “psychological fear” alone, it didn’t work. In the end, over 500 people sought treatment at public hospitals.
As it happens, the fear of losing one’s penis is more mainstream than you might think. It pops up fairly regularly in certain cultures across the globe. In South-east Asia and China, it’s common enough that it even has a name: “koro”, possibly – and rather graphically – after the Javanese word for tortoise, referring to the way a tortoise retracts its head into its shell.
Koro has a history stretching back thousands of years, but the most recent outbreak occurred in 2015, in eastern India. It affected 57 people, including eight women, for whom it tends to manifest as a fear that their nipples are retreating into the body.
Koro is considered a culture-bound syndrome – a mental illness that only exists in certain societies. For decades, “untranslatable” disorders like these were studied as mere scientific curiosities, which existed in parts of the world where people apparently didn’t know any better.
Western mental illnesses, on the other hand, were viewed as universal – and you could guarantee that every “bona fide” problem would be found in the hallowed pages of the American psychiatric bible, the Diagnostic and Statistical Manual of Mental Disorders (more commonly known as the DSM). But today scientists are increasingly realising that this is not the case.
In the Islamic world, mental illnesses are often attributed to evil spirits, or “jinns”
In the central plateau region of Haiti, people regularly fall sick with “reflechi twòp”, or “thinking too much”, which involves ruminating on your troubles until you can barely leave the house. In South Korea, meanwhile, there’s “Hwa-byung” – loosely translated as “rage virus” – which is caused by bottling up your feelings about things you see as unfair, until you succumb to some alarming physical symptoms, like a burning sensation in the body. Dealing with exasperating family members is a major risk factor – it’s common during divorces and conflicts with in-laws.
Though to the uninitiated, these mental illnesses might sound eccentric or even made-up, in fact they are serious and legitimate mental health concerns, affecting vast numbers of people.
It’s estimated that Hwa-byung affects around 10,000 people in South Korea every year – mostly older married women – and research has shown that it has a measurable footprint in the brain. In 2009, brain scans revealed that sufferers had lower activity in an area known to be involved in tasks such as emotion and impulse control. This makes sense, given that Hwa-byung is an anger disorder.
The consequences of culture-bound syndromes can be devastating. Koro attacks can be so convincing that men cause serious damage to their genitals, as they try to stop them receding. Those who suffer from reflechi twòp are eight times more likely to have suicidal thoughts, while Hwa-byung has been linked to emotional distress, social isolation, demoralisation and depression, physical pain, low self-esteem, and unhappiness.
Intriguingly, some untranslatable illnesses have recently been disappearing, while others are spreading to new parts of the globe. Where do these sicknesses come from, and what determines where they’re found? The quest for answers has been gripping anthropologists and psychiatrists for decades – and now their findings are shaping our understanding of the very origins of mental illness itself.
The award for the culture-bound illness with the most surprising history surely has to go to “neurasthenia” (also known as “shenjing shuairuo”). Though it mostly occurs in China and South-east Asia today, it’s actually a colonial malady from the 19th Century.
Neurasthenia was popularised by the American neurologist George Miller Beard, who described it as an “exhaustion of the nervous system”. At the time, the Industrial Revolution was leading a massive upheaval of everyday life, and he believed that neurasthenia – a syndrome of headaches, fatigue and anxiety, among other things – was the result.
Sometimes culture-bound illnesses occur only in a certain social class or era (Credit: Science Museum/ Wellcome Collection)
“Once famous novelists like Marcel Proust were diagnosed, it became this super popular condition,” says Kevin Aho, a philosopher from Florida Gulf Coast University, who has studied the history of the illness. “It was almost fashionable and indicated sensitivity, intellectual creativity – it was kind of an indicator of one's own cultivated refinement.”
Eventually neurasthenia spread to European colonies around the world, where it was enthusiastically picked up by moustachioed officers and their wives, as a way to add a label to their general feelings of homesickness. According to a survey taken in 1913, neurasthenia was the most prevalent diagnosis among white colonisers in India, Sri Lanka (then Ceylon), China and Japan.
As the years passed, neurasthenia gradually lost its appeal in the West, as it became associated with more serious psychiatric problems. Now it’s been forgotten about altogether. But elsewhere, the opposite happened: it was adopted as a diagnosis that didn’t come with the stigma of mental illness, and it remains in use to this day.
In some parts of Asia, people are more likely to say they have neurasthenia than depression. A 2018 study of a random sample of adults from Guangzhou, China, found that 15.4% identified as having the former versus 5.3% who said they had the latter.
The final twist is that now neurasthenia is vanishing from Asia too. “When I first interviewed patients at a psychiatric hospital in Ho Chi Minh, Vietnam, in 2008, almost all of them said that they had neurasthenia,” says Allen Tran, a psychological anthropologist from Bucknell University, Pennsylvania. “Then when I did some follow-up research 10 years later, I think only one person in my sample said they had it.”
So what’s going on?
There are two possible scenarios playing out here. First, there’s the idea that the entirety of humankind is susceptible to the same limited range of mental sicknesses – we all feel anxious and depressed, for example, but the way we talk about these things varies depending on when and where you live.
The fact that culture-bound illnesses can be gained and lost within a single community, and with such rapidity, is an important clue. This suggests that they’re not driven by, say, genetic factors, as this kind of change usually takes hundreds or thousands, rather than tens of years. Instead, the swift extinction of neurasthenia in Vietnam could be down to the growing popularity of the concept of anxiety, which has been imported from overseas. It’s possible that the actual incidence of mental illness has been the same all this time – but conceptually, one has been replaced with the other, says Tran.
In some cultures, distress and grief can manifest with physical symptoms (Credit: Alamy)
Along these lines, the author and medical historian Edward Shorter has suggested that each society has its own “symptom repertoire”, which is the array of symptoms from which we unconsciously draw when we start to feel mentally unwell.
For example, a grieving Victorian woman might say she felt faint, where her modern counterpart in the UK might suggest she felt anxious or depressed, and someone in the same position in China might explain they had a stomach-ache. In this scenario, they would all have had identical experiences – perhaps they all felt faint, on edge, or suffered physical pains – but the symptoms they paid the most attention to were different, depending on what was considered normal in their society.
In Britain, the out-dated illness “hysteria” – which was thought to mostly affect women, and caused fainting, emotional outbursts and nervousness – disappeared from the public consciousness in the early 20th Century. But Shorter suggests that it didn’t actually die out. Instead, the array of symptoms we look out for evolved. Today the same mental phenomenon hides behind other diagnoses, such as depression.
This fits with another concept that has been gaining in popularity, “idioms of distress”, which suggests that each culture has certain acceptable, established ways of expressing emotional anguish at any given time. In one society, you might drink excessively, while in others you might say you’re a victim of witchcraft, or diagnose yourself with illnesses like koro or depression.
For example, in the Islamic world, it’s widely believed that it’s possible to become possessed by “jinns”, or evil spirits. They can be good, bad, or neutral, but they’re generally blamed for erratic behaviour. The concept is so mainstream, it’s even in the Muslim holy book, the Koran. “A lot of my patients do hold these beliefs quite strongly,” says Shahzada Nawaz, a consultant psychiatrist at North Manchester General Hospital in the UK.
Nawaz explains that the ability to invoke jinns is particularly useful in Islamic cultures, because of the stigma that tends to accompany Western mental illnesses. One study of 30 Bangladeshi patients attending a mental health service in an east London borough found that, though they had been diagnosed with a variety of problems between them, such as schizophrenia and bipolar disorder, their family members often felt that jinn possession was responsible.
But are culture-bound illnesses really just the result of differences in labelling? Another tantalising possibility is that the society we live in can actually shape the way we get sick.
Physical vs psychological pain
It turns out there is an invisible global divide in the way people experience distress. In the US, the UK, and Europe – at least in the 21st Century – it tends to occur in the mind, with symptoms like sadness, anger or anxiety prevailing. But this is actually pretty weird. In many parts of the world, in countries as diverse as China, Ethiopia and Chile it manifests physically instead.
For example, the most up-to-date edition of the DSM describes a panic attack as “an abrupt surge of intense fear or intense discomfort”. However, in Cambodian refugees, the symptoms tend to centre around their necks instead. Many non-Western mental illnesses, such as koro and Hwa-byung, fit this pattern of perceiving physical symptoms. The divide even extends to the way people in certain societies respond to exercise or surgery; where it’s more usual to experience physical pain, it’s more likely.
In contrast, mental illnesses that involve the perception of pain are rare in the Western world, and hotly debated. Some scientists think chronic fatigue syndrome and fibromyalgia fit into this category, though this is controversial.
In fact, it’s been known for years that our beliefs can have a powerful effect on the way we feel – even on our biology. One example is “Voodoo death”, in which a sudden demise is brought on by fear. In a famous case documented by an early explorer in New Zealand, a Maori woman accidentally ate some fruit from a place that was considered taboo. After announcing that the chief’s spirit would kill her for the sacrilegious act, she died the very next day.
Whether someone could bring about their own death through fear alone is not clear. (Read more about the contagious thought that could kill you.) But there is strong evidence that our thoughts and feelings can have a tangible physical impact, such as when a patient expects a medication to have side-effects, and therefore it does – known as the nocebo effect.
“I would say that there are definitely instances where the meaning that is attributed to experiences actually changes biologically what that experience is,” says Bonnie Kaiser, an expert in psychological anthropology at the University of California, San Diego. She gives the example of the illness kyol goeu, literally “wind overload”, an enigmatic fainting sickness which is prevalent among Khmer refugees in the US.
In their native Cambodia, it’s commonly believed that the body is riddled with channels that contain a wind-like substance – and if these become blocked, the resulting wind overdose will cause the sufferer to permanently lose the use of a limb or die. Out of 100 Khmer patients at one psychiatric clinic in the US, one study found that 36% had experienced an episode of the illness at some point.
Bouts usually proceed slowly, starting with a general feeling of malaise. Then, one day, the victim will stand up and notice that they feel dizzy – and this is how they know that the attack is starting. Eventually they’ll fall to the ground, unable to move or speak until their relatives have administered the appropriate first aid, which usually consists of massaging their limbs or biting their ankles.
Kaiser points out that when most people experience light-headedness, they just shake it off. But if someone interprets that feeling as signalling the start of a kyol goeu attack, they think: “Oh my gosh, something terrible is happening.”
“They really attend to it and they panic,” she says.
The meaning that’s attributed to the feeling of dizziness changes everything. “Fundamentally the actual experience in the body becomes very different,” says Kaiser. “So, to me, this isn't something that has a different name in different places – this illness just doesn't exist in some places. The very biology of that experience is affected by the culture.”
According to Kaiser, in reality, it’s likely that for many mental illnesses, there is both a difference in the way people interpret the same physical experiences, and a positive feedback loop which allows their cultural ideas to shape how they manifest.
Revising Western illnesses
As our understanding of culture-bound illnesses has improved, some psychologists have begun to question whether certain Western mental health conditions fit into this category too. Though certain disorders appear to be universal – schizophrenia occurs in every country on the planet, at a relatively constant rate – this is not true for others. Bulimia is half as common in Eastern cultures, while pre-menstrual syndrome (PMS) is virtually non-existent in China, Hong Kong and India. It’s even been argued, somewhat controversially, that depression is an invention of the English-speaking world, stemming from the misguided notion that it’s normal to be happy all the time.
In the modern era, it would be naive to think that the mental illnesses we suffer from are independent of our way of life. “I think there's a tremendous arrogance in the way that we universalise these mental illnesses and don't see them as socially and historically specific,” says Aho, pointing out that attention deficit disorder (ADD) was only added to the DSM in 1980. “It's clear that children have a more difficult time paying attention now, because they're bombarded with sensory stimulations and their existence is largely mediated by screens. And so it's not as if we’ve only just discovered some discrete medical entity – you can see the way in which technology is shaping the mental and emotional and behavioural lives of children.”
Regardless of their cause, in an increasingly mobile world, some experts are concerned that culturally specific illnesses aren’t being recognised by mental health professionals. “In East Asian cultures, the vocabulary and language that people use to express their distress and symptoms is quite different,” says Sumin Na, a psychologist at McGill University. This means that when East Asian people migrate to places like North America, it’s often not clear when they need help.
“For instance in a lot of Western society, I think we see depression and anxiety as a chemical imbalance. And that leads us to seek help through our doctor and getting medication,” she says. “But in East Asia it’s seen as more of a social or spiritual or a family concern – so people might seek spiritual help or, you know, find ways to resolve family conflict.”
In order to get people the help they need, Na says it’s important to understand a patient’s backstory – the cultural norms where they come from and the loss of power and privilege they might have experienced when they moved, which can often lead to mental health problems down the line. “I think we also have to try to let go of what we think is 'correct' knowledge of mental health and mental illness and not to get really stuck on [them] or the DSM-5 as the only way of understanding and labelling mental illness,” she says.
Equally, it’s unreasonable to expect the same treatments to work for everyone. Na suggests that, while medications are helpful for a lot of people, those with certain cultural beliefs might be more comfortable with things like psychotherapy.
In an era that’s seeing drastic losses in diversity of virtually every other kind – from species to languages – it’s been suggested that we’re standing on a precipice, potentially about to lose our range of mental illnesses too. In the book “Crazy Like Us”, the author Ethan Watters describes how we’ve spent the last few decades slowly, insidiously Americanising mental illness – shoehorning the colourful array of emotional and psychological experiences that exist into a few approved boxes, such as anxiety and depression – and “homogenising the way the world goes mad”.
In the process, we risk not only missing diagnoses and forgoing the most appropriate treatments, but also losing the opportunity to understand how mental illnesses develop in the first place.
Is Everyone Depressed?
Suddenly, many people meet the criteria for clinical depression. Doctors are scrambling to determine who needs urgent intervention, and who is simply the new normal.
by James Hamblin
May 22, 2020
The word I keep hearing is numbness. Not necessarily a sickness, but feeling ill at ease. A sort of detachment or removal from reality. Deb Hawkins, a tech analyst in Michigan, describes the feeling of being stuck at home during the coronavirus pandemic as “sleep-walking through my life” or “wading through a physical and mental quicksand.” Even though she has been living in what she calls an “introvert heaven” for the past two months—at home with her family, grateful they are in good health—her brain has dissented. “I feel like I have two modes,” Hawkins says: “barely functioning and boiling angry.”
Many people are even more deeply unmoored. Michael Falcone has run an acupuncture clinic for the past decade in Memphis, Tennessee. When he temporarily shut it down, the toll on his mental health was immediate. “I went into a pretty instant depression when I realized that my actual purpose was disintegrating,” he says. He began spending his days staring at his bookshelves. Falcone and I have exchanged emails for weeks now, and while his notes have been full of whimsical musings about adjusting to home life, one included a jarring line: “I’ve lost faith in myself. I don’t know if I can actually justify taking up space and resources.”
After I confirmed with Falcone that he had no intent to harm himself, I recommended that he seek medical help. But given the unprecedented circumstances we’re all in, I’m not sure whether I under- or overreacted—or even what “help” should look like, exactly. The pandemic is a moment of historic loss: unemployment, isolation, stasis, financial devastation, medical suffering, and hundreds of thousands of deaths globally. Suddenly droves of people are being thrown into a state like Falcone’s, feeling lost, hopeless—in his words, “depressed.”
Over the past month, Jennifer Leiferman, a researcher at the Colorado School of Public Health, has documented a tidal wave of depressive symptoms in the U.S. “The rates we’re seeing are just so much higher than normal,” she says. Leiferman’s team recently found that people in Colorado have, during the pandemic, been nine times more likely to report poor mental health than usual. About 23 percent of Coloradans have symptoms of clinical depression.
As a rough average, during pre-pandemic life, 5 to 7 percent of people met the criteria for a diagnosis of depression. Now, depending on how you define the condition, orders of magnitude more people do. Robert Klitzman, a professor of psychiatry at Columbia University, extrapolates from a recent Lancet study in China to estimate that about 50 percent of the U.S. population is experiencing depressive symptoms. “We are witnessing the mental-health implications of massive disease and death,” he says. This has the effect of altering the social norm by which depression and other conditions are defined. Essentially, it throws off the whole definitional rubric.
Feelings of numbness, powerlessness, and hopelessness are now so common as to verge on being considered normal. But what we are seeing is far less likely an actual increase in a disease of the brain than a series of circumstances that is drawing out a similar neurochemical mix. This poses a diagnostic conundrum. Millions of people exhibiting signs of depression now have to discern ennui from temporary grieving from a medical condition. Those at home Googling symptoms need to know when to seek medical care, and when it’s safe to simply try baking more bread. Clinicians, meanwhile, need to decide how best to treat people with new or worsening symptoms: to diagnose millions of people with depression, or to more aggressively treat the social circumstances at the core of so much suffering.
Clearly articulating the meaning of medical depression is an existential challenge for the mental-health profession, and for a country that does not ensure its people health care. If we fail, the second wave of death from this pandemic will not be directly caused by the virus. It will take the people who suffered mentally from its reverberations.
Like COVID-19, depression takes erratic courses. Some predictable patterns exist, but no two cases are exactly alike. Depression can percolate for long periods then quickly become severe. Some people will barely notice it, and others will be tested in the extreme.
Andrew Solomon, the author of The Noonday Demon: An Atlas of Depression, groups people based on four basic ways they’re responding to the current crisis. Two are straightforward. In the first are people who are drawing on huge stockpiles of resilience and truly doing okay. When you ask how they feel and they say “eh, fine,” they actually mean it. In the second, at the opposite end of things, are people who already have a clinical diagnosis of major depressive disorder or a persistent version known as dysthymia. Right now, their symptoms are at high risk of escalating. “They develop what some clinicians call ‘double depression,’ in which the underlying disorder coexists with a new layer of fear and sorrow,” Solomon says. Such people may need higher levels of medical care than usual, and may even need to be hospitalized.
The remaining two groups constitute more of a gray area. One group consists of the millions of people now experiencing depressive symptoms in a real way, but who nonetheless will return to their baseline eventually, as long as their symptoms are addressed. People in this group are in urgent need of basic interventions that help create routine and structure. Those might involve regularizing sleep and food, minimizing alcohol and other substances, exercising, avoiding obsessions with the news, and cutting back on other aimless habits that might be easier to moderate in normal times.
The fourth group encompasses people who are starting to develop clinical depression. More than simply a wellness regimen or a Zoom with friends, they need some type of formal medical intervention. They may have seemed fine and had adequate resilience in normal times, to deal with normal difficulties, but they’ve always had a propensity to develop overt depression. Solomon describes this group as “hanging on the precipice of what could be considered pathologic.” It can be especially precarious because people in this state—what some researchers refer to as “subclinical depression”—have not dealt with depression before, and may not have the capacity or resources to proactively seek treatment.
The earlier specific types of depression can be identified, the better people can be directed toward proper treatment. The mental-health system has always had barriers to identifying and helping people early—issues like access to care and stigma around seeking it out. In the midst of this pandemic, not only is the current population of psychiatrists insufficient to suddenly treat several times as many people as usual, but their basic capacities of diagnosis are also hindered by distance, volume, and confounding variables. “It takes considerable wisdom to delineate who has a clinical condition and needs medication and therapy, and who is just stressed out within the bounds of good mental health,” Solomon says. Clinicians train for years to understand that line, and placing people on one side or the other typically requires long interviews in which every element of a person’s affect is noted.
Even for people who manage to connect with clinicians, subtleties are difficult to read over video calls, says Meghan Jarvis, a trauma therapist who has been seeing a spectrum of reactions to the pandemic, including depression. Normally, Jarvis sends maybe one patient a year to the hospital for a pathologic response to trauma. Since March, she has already had to hospitalize four people. Typically, she explains, symptoms of depression are considered problematic if they last six weeks after a traumatic event. The precise length is arbitrary, but is meant to generally help distinguish depression from periods of grieving, such as after the death of a loved one. That distinction is largely useless in the pandemic. “I mean, we’re all going to have that,” Jarvis says, “because we’ve been in this mode for more than six weeks.”
Now Jarvis and others have to develop new thresholds. Just as, in the time of COVID-19, not everyone with a cough can go to the hospital, clinicians are working to identify and prioritize those who truly need in-person mental-health attention. Jennifer Rapke, the head of inpatient consultation at Upstate Golisano Children’s Hospital in New York, has seen a surge in teenagers reporting suicidal ideation and instances of self-harm, so she has been carefully turning away the less severe cases to make sure that inpatient facilities aren’t overwhelmed. “We’re only seeing people who absolutely need to be here,” she says. Meanwhile, those with milder, emerging cases are sometimes left in limbo. “The places we would normally send people, the things we would put in place to address the depression or the anxiety in early phases—they don’t exist or they’re unavailable,” Rapke says.
With less preventive and maintenance care accessible, people are more likely to come to hospitals in more severe states. During crises, extreme events like self-harm and suicide lag in time. At first, being anxious about the proximity of death, or sad about the loss of loved ones, is logical; any other reaction would be bizarre. Our minds and bodies can’t endure that state for too long, though. The United States was slow to test for the coronavirus, and COVID-19 cases accumulated before we knew just how widespread it was. Rapke and others are now bracing for a similarly delayed wave of severe depression—and the difficult decisions they will have to make about treatments.
The elusive definition of depression has always been a source of academic tension with serious consequences. Among the many challenges the pandemic is posing, it is exposing the borders of medicine’s ability to distill human suffering into a billable diagnostic code. Some people with symptoms of depression will be told, “Everyone feels that way,” or advised to try breathing exercises when they need urgent medical attention. Others will be diagnosed with clinical depression, changing their life and self-conception indefinitely, when the problems were truly circumstantial. The system has never been flawless, but its limitations are now brought into stark relief.
For most of human history, depression was not treated in the same medical model as were diseases of the body. People with mental illnesses were written off as morally bankrupt or simply “insane.” Only in the latter half of the 20th century did the profession of psychiatry become a medical specialty and create systematic approaches to treatment. The process for diagnosing a condition in psychiatry and clinical psychology will never be as straightforward and objective as saying whether a bone is broken or not, or whether a person has had a heart attack. But it provides a common, basic language for what a clinician means when he or she diagnoses a patient with something like depression. It also helps patients get the insurance coverage and health-care service they need.
Today, depression—the clinical condition, otherwise known as major depressive disorder—is defined by the American Psychiatric Association in its Diagnostic and Statistical Manual as a mood disorder. To receive the diagnosis, a person must have five or more symptoms such as the following, nearly every day during a two-week period: fatigue or loss of energy, feelings of worthlessness or inappropriate guilt, reduced physical movement, indecisiveness or impaired concentration, a decreased or increased appetite, and a greatly diminished interest or pleasure in regular activities.
Experts are trained to identify exactly how much “impaired concentration” or “loss of energy” is enough to qualify for a diagnosis, and the criteria are intentionally flexible enough to factor in patients’ individual circumstances. But as the pandemic has made clear, the DSM-5 and medical model as a whole don’t provide the richness of language to account for all the nuanced ways people might look or feel depressed, even when they don’t need medical intervention. Well-meaning attempts to standardize the diagnostic process have created a false binary wherein you are a person with depression, or you are not.
Outside of medicine, depression has been most cogently defined through metaphor. As Sylvia Plath wrote: “The silence depressed me. It wasn’t the silence of silence. It was my own silence.” David Foster Wallace described depression as feeling that “every single atom in every single cell in your body is sick.” Even some clinical models reach for alternative ways of articulating despair beyond the conventional medical model. James Hollis, a psychodynamic analyst and the author of Living Between Worlds: Finding Resilience in Changing Times, says that depression is sometimes the result of “intrapsychic tension,” a conflict between two areas of our psyche, or identity. The tension is created, Hollis observes, “when we’re forced to try to make acquaintances with ourselves in new ways.”
Many Americans do seem to be experiencing something like this tension during the pandemic. People who define themselves by their work can lose a basic sense of self if that work disappears. In such moments, Hollis says, many people regress. Many also try to escape—whether by organizing an already well-organized sock drawer, baking bread they don’t even want, or endlessly scrolling through Instagram. Jarvis, the trauma therapist, is seeing similar escapist tendencies: “For someone’s response to a huge global pandemic to be like, I’m going to work out really hard, is just as pathological and sort of dissociative as if you went to bed and didn’t get up for five days.”
For people whose response to the pandemic turns from acute anxiety into general malaise, Jarvis recommends facing the numbness head-on. It’s treatable, and not necessarily with medication. First, she says, create regimens of simple tasks that give structure to the day. The approach is working for Falcone, the acupuncturist. He starts every day with 30 minutes of stretching, no matter what. Then he walks his dog, makes coffee, and sits down to teach massage via Zoom. Deb Hawkins, the tech analyst, sent me a list of things she’s doing to help others and stay busy: She donated money to a couple of worthy causes, and made an appointment to give blood. She has created a small social bubble and signed up for an online ballet class. She says her sense of self is returning.
Small steps like these will not work for everyone, but they may help many in the subclinical realm to mitigate a dangerous slide. With the medical system already stretched thin, these could buy some time to build its capacity to care for the people who will emerge from the pandemic with severe and lasting symptoms. As important as preventive behaviors can be, human resilience has limits. Those will be tested for months to come.
The individual model of depression was never meant to address a significant percentage of a population. When the diagnosis seems to apply so widely, it’s not the people or the entire medical system that’s broken, but the social context. While many people will find ways to recalibrate their expectations and individual thresholds for joy in the pandemic, ultimately basic needs still have to be met. This means eliminating sources of anxiety, such as by ensuring financial, housing, and food security. In Colorado, Leiferman’s group is among those scrambling to help stem the tide of depressive symptoms. “Our nation is under stress. It may be that more people need [medical] treatment,” she says. “It may be that we need to, as a population, do more to relieve the stress.”
Patterns of pain: what Covid-19 can teach us about how to be human
We can expect psychological difficulties to follow as we come out of lockdown. But we have an opportunity to remake our relationship with our bodies, and the social body we belong to.
By Susie Orbach
When lockdown started, I was confused by bodies on television. Why weren’t they socially distancing? Didn’t they know not to be so close? The injunction to be separate was unfamiliar and irregular, and for me, self-isolating alone, following this government directive was peculiar. It made watching dramas and programmes produced under normal filming conditions feel jarring.
Seven weeks in, the disjuncture has passed. I, like all of us, am accommodating to multiple corporeal realities: bodies alone, bodies distant, bodies in the park to be avoided, bodies of disobedient youths hanging out in groups, bodies in lines outside shops, bodies and voices flattened on screens and, above all, bodies of dead health workers and carers. Black bodies, brown bodies. Working-class bodies. Bodies not normally praised, now being celebrated.
We are learning a whole new etiquette of bodies. We swerve around each other, hop into the near-empty street, calculate distances at entrances to parks, avoid body contact, even eye contact, and keep a lookout for those obliviously glued to their phones, whose lack of attention threatens to breach the two-metre rule. It’s odd and disconcerting and isn’t quite second nature.
Until the pandemic arrived, many of us were finding texting, email and WhatsApp more suitable to our speeded-up lives. But now we are coming to reuse the telephone, and to enjoy the sounds in our ears and the rhythm of conversation, instead of feeling rushed and interrupted. A few of my sessions as a psychoanalyst are now conducted on the phone but, for the most part, I am spending my time looking into a screen, and seeing faces rather than whole bodies. Until I learned to turn off the view of myself, I, like others, was disconcerted by the oddness of catching sight of myself – a view I don’t think we are meant to see.
Conversations in therapy defy many of the customs of social intercourse. There are silences, repetitions, reframings, links across time, reminiscences of fragments, rushes of emotion, shards of dreams, things told and then disavowed. There can be fidgeting or absolute stillness. These form the idiosyncratic and personal ambience between each therapeutic couple. As a therapist, I am also alert to how the dilemmas that beset the person or the couple I am seeing are brought to our relationship.
The conundrums that brought the person to seek therapy in the first place can be replayed right here. For example, a person fearful of intimacy can experience the therapy relationship or the therapist as too close. Someone else who worries they are too needy may be reluctant to show their longings directly to the therapist, although well able to talk about how things go wrong for them in other relationships. The therapy relationship and the sessions are our petri dish. The field of study is the human subject (and her, his or their ways of being able to develop and change).
The therapist works to understand an individual’s personal psychological grammar – to help the person take the risk of unlearning and then learning anew, finding ways to not be in so much hurt. So too with the body. Those with troubled bodies bring them to the session. They may sit too close, for example, or seem to be concave, or dress incongruously, as though presenting a different persona in each session. In the course of therapy, such an abject body experience can be addressed, and, in unlearning and then learning anew, the person finds a more comfortable way to sit in their body.
How is the dematerialisation of bodies affecting us and going to affect us? Me, my patients, you – all of us? For some of my patients, their screen or home is a prison. Their experience is full of woe and worry. Therapy keeps them just about on the border of sane, but it’s a sanity that hurts: isolation can maraud all of us as we miss the interactions, intimate or casual, that confirm our sense of our value, our place in our community, our work and the world.
Some of my clinical preoccupations centre on how we acquire a physical, corporeal sense of self. Although psychoanalysis is a theory of mind and body, its main emphasis has drifted to the development of the mind and its structures: what we call defences, and the relationship patterns we have absorbed. Bodies have been very much the bit player to the main drama of the mind, even when mental processes or disturbance have resulted in bodily symptoms such as eczema or a non-biologically induced paralysis. As therapists, we traditionally read back into the mind the troubles visited on the body, seeing them as the result of mental conflicts. And of course, they often are, but I have long been keen to understand body troubles and body difficulties in their own terms, and to build a theory about the development of the body.
Bodies have always been bound and marked by social rules. Different societies make different sense out of similar bodily actions or gestures. The variety of body adornment and transformations around the world, from rings around the neck to the recent upsurge in labial reductions and penis enlargements, has made it ever more apparent that the body is not simply the product of DNA. The body we inhabit develops within relationships to other bodies. Usually it is within the maternal orbit where, to take an obvious example, we first apprehend gender-based forms of comportment. When I grew up, being told to sit like a girl and not to climb trees were some of the ways we were treated differently to boys. Research across many cultures shows that baby girls are weaned and potty-trained earlier, fed less at each feed, and held less, than boys. There may be no biological basis to this, but rather a social, unconscious basis that then informs how we personally experience our particular embodiment.
We have very few verified reports of humans growing up outside of human culture but the feral child Victor of Aveyron, who was discovered living wild in the woods of southern France in 1800, did not have body movements that were recognisably human. The body-to-body relationship that was foundational for him was with the bodies of the wolves he apparently grew up among. He seemingly mimicked their gait and moves, their posture and their vocalisations. Of course, we know this more familiarly, and less dramatically, from when youngsters develop their group identities by adopting the mannerisms of film actors or musicians.
Through screens, billboards and photoshopped images, we reduce the wide variety of bodily expression. It’s as though we are losing body diversity just as we are losing languages. The digitised, westernised body image predominates, and in the last two decades has spawned a cosmetic surgery industry worldwide – from leg-lengthening surgery using steel rods in China (now banned), to rhinoplasty in Iran (which has the highest rate of nose surgery per capita in the world) to double-eyelid surgery and jawbone reduction in South Korea. In the west, surgeons resculpt cheekbones, breasts and calves, and offer day procedures for facial ‘thread lifts’. Cosmetic surgery tourism hubs in Hungary, South Korea and Singapore were thriving until the lockdown.
One Chinese smartphone app allows the selfie-taker to adjust their portrait to bring it closer to a very specific standard of beauty known as wang hon lian, or “internet celebrity face”. It’s very popular: billions of wang hon lian images are uploaded every month.
The richest Europeans are not in tech, but in the business of beautifying bodies – the owners of fashion, luxury and cosmetics brands such as LVMH, L’Oreal and Zara. Increasing automation has led us to move from using our bodies to make things to making our bodies the site and the product of our labour, through diet and exercise regimes, clothing and cosmetics. The surface body is meant to be on display.
Paradoxically, the sweating, smelling, holding, stroking body of the other becomes, for those socially distancing, too distant – while for others, such as those sharing a house with teenage boys, it’s all too present. All is on show for families and housemates, while all is hidden for those living alone during lockdown.
The experience of the body on FaceTime or Zoom contrasts with the pulsing, breathing, weeping, sighing, tired, achy or indeed springy and enthusiastic bodies we inhabit. We no longer have social communion in the flesh, the handshake or the hug, the pleasure of eating in a restaurant with a friend or lover while seated near strangers. Afraid of infection, for our protection, we collapse our social space.
During the second world war, the psychiatrist René Spitz studied orphan babies in care. He discovered that those closest to the nurses’ station thrived, while those at the end of the ward did not do so well. The difference was touch: the nurses would casually touch and interact with those closest to them, and this gave those infants the essential food for physical and psychological development. They absorbed the will to live. A decade later – in research now considered controversial for the way in which he removed baby monkeys from their mothers – the American psychologist Harry Harlow discovered that baby monkeys given ersatz mothers in the form of basic cloth puppets would find some crucial security and comfort even in this simulation of maternal touch; those baby monkeys deprived of any kind of maternal touch at all became highly disturbed, and many died.
Touch, feel and proximity are central to survival. Consider the genius of premature infants’ capacity to regulate their own and, extraordinarily, their parent’s body temperature, if they are held skin-to-skin in a pouch. The gaze – the search to be seen, to recognise and to influence the other – is also crucial to human subjectivity. In a fascinating video made by the developmental psychologist Edward Tronick, he instructs a mother playing with her baby to keep a still face and refrain from interacting with her infant for a minute or two. We observe as the infant girl seeks to engage the mother. When she is unable to, the baby collapses psychologically and physically until contact is restored. What is so shocking is how fast the collapse is.
I’ve been thinking of how impossibly difficult and challenging our quasi-dematerialised life through the Zoom screen is, whether chatting with friends or being in a meeting. Conflict and harmony become cartoonish as subtle gestures collapse and the conversations we have with our eyes are shut down.
Reading each other well enough is a new skill in the therapy room, too, for both people. By now we are used to the screens and the telephone, and the occasional technical blips. We are seeing a physical interior – a study, bedroom, shed or kitchen, and being surprised by an occasional child that floats in. We hear the suddenly hushed voice of someone not wanting their partner to get a drift of the conversation we are having. It illuminates aspects we didn’t see before. Is it better? No. Is it worse? Marginally. I miss noticing how people enter the therapy room – the subtle difference from the session before, or the way they may hold their face and body; above all, the animate body in the room. I suspect that I am more animated to make up for the loss of that precious physicality.
Former hostages Terry Waite, John McCarthy and Brian Keenan have all written and spoken eloquently about solitary confinement and their struggles to find a way through and back – or should I say forward – to familial and social life. It was tough. And although many of us are not self-isolating alone, unless one is able to do interesting or valued work during this period, or has enough people to hang out with, we can expect considerable psychological difficulties to follow as we come out of lockdown. How will we re-establish social interaction with other bodies? What kind of rhythms will we want and be able to have going forward?
Many have been ultra-busy with home schooling, working from home, managing three generations and so on. Time has bent and contracted in perplexing ways. Busyness has increased for some, while others, for whom slowing down is a foreign concept, have had idleness forced on them. Empty time feels alien – or at least did at the beginning. For many it has been an unexpected pleasure. No need to rush to social occasions. No need to dress. No need to get everything done and more. Being wanted, being needed, being in demand have been psychological supports that have melted away. Finding new ways to nourish one’s needs in this new reality – especially in the absence of touch and gaze, which we unknowingly rely upon to recognise ourselves – can be tricky.
Today, there is a frightened, wary, social body. A body that is tense, in which avoidance is the watchword. The covered face, whether by a hoodie or a veil, which formerly some found challenging, now offers reassurance. Indeed, many public places – from Eurostar trains to the streets of New York, Prague, Dubai, Havana and many more – now demand it. Meanwhile, much of society is now paying attention to bodies that had been scandalously overlooked. The bodies of working women, the carers who go in and out of the houses and homes of the people they look after. The faces of vast numbers of black, Asian and minority-ethnic bodies, particularly in the health service, who are finally being recognised for their value, and the shockingly disproportionate number of their losses.
Before Covid-19, the ruling party were happy to slash social and health funding, to put money into management in the NHS, and not into professional carers, doctors and nurses. Now society is waking up to the value of care and medical expertise that comes from the hospital floor – that is to say, from the doctors and nurses who are reorganising what occurs there. The people keeping society going in every sector – transport workers, small shopkeepers, workers in food production and delivery – are often first-generation immigrants. More people are seeing a more nuanced social landscape. The opportunity is here for reframing how we represent the social body. It is of necessity differently hued, and that needs acknowledging, as does the shame of our previous marginalising. Covid-19 is cleaning the lens, so we can see more clearly.
From the individual to the social body, and how it is being challenged by the pandemic, we turn to the corporate body – the body of state – and what we have been learning about how it has functioned. On 17 April, Prof Anthony Costello, a former director of the Institute for Global Health at UCL, told the select committee on health and social care that he feared Britain might have the highest number of deaths in Europe, which has now been confirmed. Costello had estimated 40,000 deaths; on 5 May the official UK death toll was just over 32,000, but the Financial Times reported the same day that the true figure had likely already surpassed Costello’s estimate. London and the north-west of England are showing higher rates of death than other regions, while according to the ONS, people in the most deprived areas of England and Wales are dying at twice the rate of the most affluent areas.
Costello argued for this figure because we were slow off the mark to take precautionary moves early on. He spoke to the chair of the committee, Jeremy Hunt, who has spent this period appearing to stress about the lack of testing, ventilators and PPE. This is the same Hunt who, as the longest-serving health secretary in British history, also had social care in his portfolio, and the pay of doctors, nurses and social care workers. Even more damningly, he was the minister in charge during Exercise Cygnus, the UK government’s drill to test our preparedness for a pandemic, carried out in 2016.
The full review of Exercise Cygnus has never been officially published, but leaks have revealed that it showed the UK’s health system and local authorities were woefully unprepared for such an eventuality. The exercise showed hospitals and mortuaries being quickly overwhelmed, and shortages of critical care beds, ventilators and personal protective equipment for hospital staff.
Cygnus, and other such exercises, are meant to show the government what it needs to do to be prepared – which was not, as Hunt was doing, to cut beds. On 28 March of this year, when the Cygnus debacle came to light, we were told that its projections had not been acted on because of worries that stockpiled beds, ventilators and PPE would become outmoded or obsolete, and that the government had instead worked on securing reliable supply chains. (As we have seen, in a pandemic, reliable supply chains are very quickly overwhelmed.) A 2018 Red Cross conference report on Cygnus and infectious diseases stated: “The financial and human cost of an outbreak can be staggering and early response reduces the cost.” Our government chose not to act.
Fund for Peace, the Washington-based NGO that publishes the annual Fragile States Index, lists criteria for a failed state. I think we have come dangerously close to fulfilling two of its criteria: the inability to provide public services for the poor, and the inability to interact with other states as a full member of the international community.
As these last months’ farcical developments show – the question about the independence of the Scientific Advisory Group for Emergencies (Sage), the alleged missing communications with the EU on PPE, the political decision not to cooperate with the EU, the posting out of tests without return envelopes, and the expired dates on PPE – the government is in Fawlty Towers territory.
Plans for British companies to design new ventilator machines, detailed by the Financial Times, went belly up. Our government chose to source new ideas rather than build to existing designs under licence. Why, one must ask? Could it be Brexit hubris?
I don’t want to contrast the UK’s response with that of the EU, because the latter has not always covered itself in glory during the pandemic. The ethics of cooperation in Europe and the ethics of transparency and honesty have been mightily tested in the past months. Perhaps now, though, we can be encouraged by the joint project of the European Investment Bank and the WHO to bolster global healthcare systems. Will the UK state be contributing? I think not. So much depends on the actions of citizens now to move things forward. In this light, it is encouraging to see the formation of a new independent panel of experts – a “rival” to Sage – led by the former UK government chief scientific adviser David King, whose deliberations are on YouTube for us to watch.
I am not sure how we characterise the following failure of the state, because it is in part the expression of public good: of the 750,000 people who signed up to volunteer to help the NHS, invited by the government, fewer than 100,000 have been deployed. As citizens, we want to contribute. This squandering of people’s generosity is disturbing. Fortunately, people such as Capt Tom Moore or the many making masks and contributing 3D printers keep on going. And the programme Feed NHS, in which the restaurant chain Leon and other chefs are prepping to feed patients, doctors, nurses, hospital porters and ambulance workers, is now in train. This voluntary work, in which groups of people self-organise, is outstanding, and yet it is in contrast to the inability of our state to mobilise those who wanted to help.
The Gates Foundation’s contributions to seven different vaccine programmes, and Twitter CEO Jack Dorsey’s donation of $1bn, are impressive. Will hedge funds in the UK such as Ruffer, which pocketed £2.4bn in March, or Somerset Capital (the fund Jacob Rees-Mogg used to run), which sees Covid-19 as a “once or twice in a generation” opportunity for investment, make a contribution, too?
There are several dozen UK-based hedge funds managing assets worth £1bn or more. Could the mood of the country be such that hedge fund investors and managers might be persuaded to donate some of their obscene profits to the coronavirus response, or to sponsor migrants from beyond Europe (who work here as cleaners, carers, drivers) who do not earn the £30,000 currently demanded for a work permit?
Covid is a sad story. It is also a story of resilience. The body of state has failed us. We need to grow up and recognise that. Covid-19 has exposed unforgivable systemic failure. In the years leading up to this, we’ve seen a reduction in the status of civil servants and a downgrading of health workers. We have seen teachers, doctors and academics hidebound in a managerial economy. At least it seems that micromanagement has been temporarily overturned in hospitals, thank goodness, because right now doctors and nurses need to be running the show.
And to return to our bodies – the live ones, so many devoid of touch and gaze, facing a long period of isolation, and frightened. How can I conclude?
In a way, I can’t. We are far from the other side of this crisis. Psychological therapies are going to have a huge part to play in the remaking of body and soul. I don’t much like the word trauma, because it has become so overused, but we are a society that is in trauma. A societal trauma gives opportunities for people to go through things together, rather than suffer alone, as long as we don’t bury or make light of what we have experienced and continue to experience. We will have to find new ways to live with our fears and discomforts, to overcome Covid-minted social phobias and what we project on to other people’s bodies, and to face the fears we have about our own vulnerabilities. We will need all the help we can get in reshaping our relationship to our own and each other’s bodies, to find a way to build bonds of attachment and respect.
What started with the dematerialisation of the individual body has now morphed into the dematerialisation of the body of state. The economist Joseph Stiglitz reminds us that, with the stripping back of the state under Ronald Reagan and Margaret Thatcher, we lost capacity. This needs to be addressed.
There is a lively debate among a range of economists on how to get to a more equitable economy. MoneyWeek editor-in-chief Merryn Somerset Webb’s call for a sovereign wealth fund, with the government owning shares in bailed-out companies, is interesting, as is political economist Will Hutton’s idea of expanding the British Business Bank and the Future Fund. UCL economics professor Mariana Mazzucato insists that the state must invest in innovation.
We began trying to make a different kind of society after the second world war. We will have to do that again. Above all, we will need to recognise the contributions and the losses of the UK’s minority and working-class people. Our governments have shamed themselves by creating divisions in society, particularly since austerity was imposed under David Cameron’s government. Now we have an unexpected chance to redress the divisive fallout of Brexit.
The impact of remote working and the need to balance domestic and work life, allied with dire warnings on mass unemployment, gives us an opportunity to write a social contract in which we divide work more fairly. At both ends of the pay scale, people overwork. The evidence for a more balanced relationship between work and home is compelling.
Since the crisis began, the outpourings of artists, musicians, programmers, cultural and scientific workers at all levels have been outstanding. The talent, the will, the desire is there to remake our world. The urgency is not in question. Globalism can’t simply be a celebration of “just-in-time” deliveries. It will need to be recast as mutuality – local and global mutuality – so that we learn from each other, including those who’ve been in lockdown in war zones.
Our institutions will need to be rebuilt with transparency, with heart and by learning from the people who have been staffing them, not just the managers and owners. Doctors, nurses, carers and delivery people have things to say about how their institutions could be better run. The body politic and the politics of the bodies that make up our world must be reconfigured, and we need to start thinking about that now.
I conclude with Freud: “The aim of psychoanalysis is to turn hysteria into ordinary human unhappiness.” That is an accomplishment for an individual and for a society. We cannot escape unhappiness. It is constitutive of being human, just as are creativity, courage, ambition, attachment and love. Let’s embrace the complexity of what it means to be human in this time of sorrow as we think and feel our way to come out of this, wiser, humbler and more connected.