How to be Angry on the Internet (Mindfully), Part 1

To respond to anger skillfully, we must first consider how network affordances contribute to lizard-brain reactivity.
Published on Aug 10, 2020

This article is actually two articles. Part 1 is written by Whitney Phillips; Part 2 is written by Diane Grimes. Phillips is a media studies scholar who explores how and why polluted information spreads online. Grimes is an organizational communication scholar who explores contemplative communication practices. Our argument across both pieces is that mindfulness—which has been shown to improve anxiety and depression symptoms, is used to treat PTSD symptoms, and can enhance interpersonal communication offline—is directly applicable to online contexts as well. Not without some additional context, however, given the unique contours (and challenges) of socially networked spaces. So, we’re tackling network considerations first, actual mindfulness practices second.     

Throughout both parts, we’ll be focusing on the experience and expression of anger. We’ve chosen to spotlight anger for two basic reasons. First, there’s just so much of it online—a reflection of the fact that lots of different emotions, from anxiety to shame to hurt, transform into anger because anger is easier to feel. It’s older, simpler, and provides at least a sense of satisfaction when we project our frustrations outward (although that satisfaction rarely lasts for long). The second reason we’re focusing on anger is that mindfulness is often framed as incompatible with anger. If you’re mindful, you stay calm and unemotional; anger is a sign that you’re doing it wrong. That’s not true. Anger is critical to cultivating a more just society, and is part of a healthy emotional landscape. What we need isn’t to eradicate anger, but rather, to use anger mindfully—a balance that’s tricky to achieve in embodied spaces, and even more so online, given the complexities of network contours and affordances. This balance is, however, possible, and increasingly critical as we barrel towards the 2020 election and all the uncertainty after. So let’s get mad, but mindfully.       

Anger is a very old, very powerful emotion. We experience anger for all kinds of reasons, in all kinds of moments, including those that are actually about something else. We get angry when we’re scared. We get angry when we’re sad. We get angry when we’re in pain. Determining where our fear or despair or hurt ends and our anger begins can be difficult; all of it triggers a limbic fight-flight-freeze response commonly associated with the “lizard brain,” the part of our brain responsible for assessing threats and keeping us safe (there’s a nice overview of the neuroscience behind the lizard brain here). Limbic reactivity fueled by anger—at least, by what feels like anger, even if there’s something more subtle underneath—lives in the body regardless of where the object of anger is encountered; getting pissed off offline triggers the same physiological response as getting pissed off online.   

At the same time, anger works differently online, and has different consequences online, because online spaces are structured differently. Understanding what these differences are, and indeed, how online spaces are designed both to maximize and monetize anger, can be the difference between responding skillfully and reacting counterproductively. 

Ironically, one of the most critical differences between anger online and anger offline emerges from one of the most critical similarities between mediated and embodied anger—the fact that some anger is justified while some is the result of an overactive lizard brain, in which the threat we think is there, and are ready to punch in the face, isn’t actually there. When our anger is justified, pushback can be a moral imperative; not pushing back can be much worse. When our anger doesn’t line up with a true threat, pushing back can be a disaster for all involved. The problem is that telling the difference between necessary pushback and an overactive lizard brain can be extremely difficult online; our lives would be much easier if it were simple, but unfortunately, observation is not confirmation on the internet. Beyond that, the consequences of even the most justified pushback online can be extremely ambivalent—some audiences might benefit, but other audiences might be simultaneously endangered or harmed.

These challenges can be mitigated when we zoom out, survey the wider media ecology, and locate our “you are here” stickers on the network map. Triangulating ourselves within the digital environment—which Ryan Milner and I advocate for in our new book on navigating polluted information—is an act of ecological mindfulness; it helps us better understand exactly what we’re dealing with, and therefore, exactly what needs a response. It also creates space for more traditional mindfulness practices, which we describe in Part 2. Ecological mindfulness is valuable even when the issue in question is something interpersonal, with fairly low ethical stakes; smaller-scale zoom outs are excellent practice for responding to systemic problems, from the informational dysfunction surrounding the 2020 election to the public health crisis caused by Covid-19 to the related public health crisis caused by systemic racism. 

As the media environment has grown more cacophonous—and simultaneously, as the political environment has grown more treacherous—an increasing number of practitioners have advocated for applying mindfulness to reading the news and navigating social media. Some radio stations have even begun integrating mindfulness into their programming blocks. Erin Carrol’s lovely piece in this collection on the importance of slowing down, paying attention, and (strategically) missing out makes a similar intervention. Our contribution to this body of literature is to zero in on anger specifically and highlight the network forces feeding limbic reactivity: the attention economy, affordances, algorithms, and absenteeism. Much of the anger that emerges from (or is outright catalyzed by) these four As is, without question, justified. But the more reactive we are, the less strategic our actions tend to be. That’s the problem we need to combat, not the experience of anger itself. In fact, analyzing these four As may even broaden and intensify our anger. Ecological mindfulness doesn’t need us to stop feeling those feelings. It needs us to take a breath and to decide, with reflection, what to do next.

Attention economy

What it is: The attention economy subsumes several overlapping imperatives: tether ads to engagement (clicks, likes, shares), hoover up and commoditize user data, and transform communities into markets. The winners within this neoliberal, economically Darwinian system—which legal scholar Kyle Langvardt likens to other habit-forming technologies like slot machines—tend not to be quiet, nuanced, long-form deep dives. Instead, it’s a tyranny of the loudest, as the most emotionally resonant content (and very often, the most distressing content) is the most shared content, and the content that floats most quickly to the top of trending-topic feeds (see the entry on algorithms below).

How the attention economy helps cultivate reactivity: The attention economy doesn’t just tolerate angry content; it often outright incentivizes it. Hate speech, for example, has always been good for social platforms’ business, recently prompting dozens of civil rights organizations and businesses to boycott Facebook for profiting so directly from white supremacy and disinformation. News organizations also profit from reactionary anger. Profiting, of course, doesn’t mean approving; the issue is that the attention economy is fueled in large part by strong negative emotions—both the posting of angry content and angry responses to that content. As long as it keeps generating a profit, anger-mongering and the doomscrolling it inspires will remain a viable business strategy.

What the attention economy obscures: That not everyone is screaming at each other. The debate about mask-wearing, for example, often spotlights the small MAGA subset throwing temper tantrums in Costcos, not the vast majority of people (including Republicans) who recognize mask-wearing and social distancing as critical to curbing the spread of Covid-19. Stories and posts about “maskholes,” however, are more clickable. There are so many productive conversations happening across social media. There are so many quiet, reflective moments. There are so many leaders and heroes working to make their communities better. But we’re less likely to see the things that would temper our anger; the attention economy finds them boring. Limbic reactivity, on the other hand, now that’s very fun from the perspective of capitalism.

Affordances

What they are: Affordances are what technologies allow people to do with them. They don’t determine behavior, but they do put logistical parameters around behavior; you can write lots of different kinds of letters with a pencil, for example, but you can’t sail across the ocean on one. As Milner and I have illustrated, the affordances of digital media include the ability to modify content; to take bits and pieces from a larger text without destroying the original; to store content; and to access that content later through search indexing. Each lends itself to decontextualization, the severing of a text from the wider context in which it was created. Platform-specific affordances like retweet, repost, and commenting functions also complicate context. By allowing streamlined (and habit-forming) content sharing across and between groups, these affordances engender unpredictably commingled audiences (known as context collapse) and often-unanswerable questions about what a particular text “really” means (known as Poe’s Law). These affordances are also made of people, and those people encode biases and assumptions into the things they create. Digital affordances might shape our behavior invisibly, but they are not neutral, and they do not impact people equally.

How affordances help cultivate reactivity: For all the information social media serves up, digital affordances ensure that much of it is decontextualized. In the absence of full context—including where something was created and why, what it has meant to all the people who have engaged with and remixed it, how many people are participating, how sincere those people are being, what other more measured takes on an issue might look like—it’s very easy to jump to conclusions about what something is and how best to respond to it, particularly when something is deemed bad because it looks bad. Maybe that thing really is bad, at least in terms of the effects it’s having. Still, when people jump to conclusions—because something sure feels like a personal attack, because a particular poster sure looks like a real person making a real argument (as opposed to, say, a bot spreading coordinated disinformation), because that 280-character tweet addressing an issue that requires at least 10 pages of analysis to explain properly sure makes their blood boil—lizard-brain thinking often follows.

What affordances obscure: What’s actually happening in the world, including all the information we don’t have access to—notably, who’s doing what for what reasons. Affordances also obscure the ethical consequences of our own behavior. Most people wouldn’t do something if they knew it was harmful to others. But most people wouldn’t think twice about doing something if they didn’t realize it could be harmful to others. Affordances preclude, and are designed to preclude, that sort of ethical reflection; if everybody were encouraged by digital platforms and tools to worry about the consequences of their behavior online, they would probably post less. That would mean less content to monetize. So instead of designing systems that help slow communication down, developers design systems that help speed communication up. The result is that people a) keep pissing other people off and/or b) keep getting pissed off when they see something, think they know what it is, and react accordingly.

Algorithms

What they are: In the broadest sense, algorithms are sets of instructions for completing tasks. When people use the word to describe user experiences on social media sites like Facebook or search engines like Google, they’re referring to the systems that determine what people see. Algorithms overlap significantly with the attention economy (they’re what keep people clicking) and affordances (they set the parameters of what can be seen, by whom, and when). They’re also fundamentally opaque; social platforms keep their algorithms under lock and key. We can see the effects of Facebook’s and Twitter’s algorithms, just like we can see the effects of their moderation decisions (some of which are automated via machine learning algorithms). But just by looking, we can’t know exactly how they were designed, exactly what they were meant to do, or exactly who they were meant to benefit.

How algorithms help cultivate reactivity: Social media algorithms do a range of things, but in the context of anger, their most critical effect is their ability—indeed their design—to keep people right where they are. Search and recommendation algorithms, for example, serve up content based on the logic, “here’s what you have engaged with in the past, so here are more things like that,” or even more diffusely, “here are the kinds of things other people who share similar traits have engaged with in the past, so we’ll assume that you want to engage with them too.” To give an example of how this works, when Facebook set out to better understand how their algorithms contributed to polarization (the answer, which Facebook subsequently buried, was very much), they found that 64% of extremist group joins were facilitated by their recommendation tools. The algorithms didn’t and can’t create extremists from scratch; they identified who would be most likely to want to join such groups, and brought the groups to them. The targeted nature of these recommendations is critical to understanding the role algorithms play in what we see and when we see it. Algorithms are powerful perspective-shaping tools, but they aren’t random in what they docent people towards. Nor are they purely deterministic. They’re part of an informational feedback loop in which human behavior—including the biases and assumptions of users and designers—generates data, which help train algorithms, which nudge certain kinds of responses, which generate further user data, looping back to the start of the cycle. When angry data go in, angry recommendations tend to come out.
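
For readers who find it easier to think in code, here is a minimal, purely hypothetical sketch of that feedback loop in Python. The function names, data structures, and scoring logic are invented for illustration; no real platform’s ranking system is anywhere near this simple, and nothing here describes Facebook’s or Twitter’s actual code.

```python
# Hypothetical sketch of an engagement-driven recommendation loop.
# All names and weights are invented for illustration; no real
# platform's ranking system is this simple.

from collections import defaultdict

# Running tally of how often each user has engaged with each topic.
engagement_history = defaultdict(lambda: defaultdict(int))

def record_engagement(user, topic):
    """Step 1: behavior (clicks, shares, angry replies) becomes data."""
    engagement_history[user][topic] += 1

def rank_feed(user, candidate_posts):
    """Step 2: that data shapes the ranking. Posts on topics the user
    has already reacted to float to the top, nudging further reactions."""
    history = engagement_history[user]
    return sorted(
        candidate_posts,
        key=lambda post: history[post["topic"]],
        reverse=True,
    )

# Step 3: the loop closes. Angry data in, angry recommendations out.
record_engagement("some_user", "mask_confrontation_video")
feed = rank_feed("some_user", [
    {"topic": "mask_confrontation_video", "title": "ANOTHER store meltdown, caught on camera"},
    {"topic": "quiet_community_work", "title": "Neighbors sew masks for the local clinic"},
])
print([post["title"] for post in feed])  # the meltdown ranks first
```

The only point of the sketch is the shape of the loop: engagement feeds the ranking, and the ranking solicits more of the same engagement.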

What algorithms obscure: Issues of consent, for one thing. You may not want to keep seeing the same things. Algorithms don’t let you choose, but they do give you the illusion of choice. As a result, we might think we’re seeing all there is to see, and that we’ve dutifully done our homework on a particular issue. But when we can’t know what we’re not being shown, our ability to make fully-supported arguments, to vet the arguments made by others, or more basically, to know when we’re missing key information, can be significantly undermined. That doesn’t mean we’re not able to draw any valid conclusions about particular issues online; certain assertions are empirically false, period, and certain actions are morally repugnant, period; no amount of counterarguing or additional evidence will change those truths. However, when our anger is being docented by algorithms, it’s difficult to find the things that can help us catch our breath, better contextualize a controversy or person, and generally maintain perspective. Instead we get trapped, not just in doomscrolling, but in ragescrolling.      


Absenteeism

What it is: Social media platforms like Facebook and Twitter operate under a few basic assumptions: Sharing information is good. The more information there is, the better. The faster that information spreads, the better. (All couched under the suffix: for profit.) These assumptions align with—indeed, grow out of—these platforms’ longstanding defenses of freedom of expression. Free speech might be the stated cause célèbre for the likes of Mark Zuckerberg (however naive or bad-faith those arguments might be). But it’s also an economic imperative; you can’t monetize content you’ve taken down. Maximalist approaches to free speech have even deeper ideological roots, which one former Twitter employee describes as an “original sin”: the blinding homogeneity of early leadership. The mostly male, overwhelmingly white people with decision-making power at these companies were the least likely to have been subject to identity-based harassment campaigns online, or relentless disinformation, or dehumanizing lies. There was little reason from a safety perspective to build in stronger guardrails against these kinds of attacks. As a result, hate speech, disinformation, and broad-spectrum pollution were, from the outset, allowed to flourish or outright incentivized. Not as a bug within the system, but as a function of a system that looks around, sees the damage being done, and says, not our problem.

How absenteeism helps cultivate reactivity: When harmful, hateful content is allowed to flourish, and certainly when it’s economically incentivized, the result isn’t just to embolden those who are fueled by reactionary anger. It’s to create a perpetual-motion machine of injustice and dehumanization for those forced to wade through the steady stream of informational refuse. To say that this is angering would be an understatement; it’s an assault against civil rights and human dignity. Those who are not targeted by violent, reactionary voices might be inclined to shrug these problems off: to say it’s just the internet, that it’s up to individuals not to feed the trolls, or to lament, instead, so-called “cancel culture” on the rare occasions when those who harm and dehumanize are held to account. But that’s the myopia of safety talking—which is in itself infuriating for those who have never had that luxury.

What absenteeism obscures: That our platforms, and our approach to harmful speech, could be different; that another network is possible. We have the internet we have not because it’s natural or necessary, but because neoliberal interests won out. Harm became profitable. Polluted information became profitable. Polarization became profitable. For some, anyway; the rest of us bear those costs daily, including the fact that social platforms have every incentive to keep us trapped in cycles of lizard-brain reactivity, making us less likely to see how we are being manipulated and what collective power we might have to say: no more. Fix this.  

Conclusion

When we find ourselves angry about something online, running through these four As allows us to reflect not just on the thing in question, but on why that thing exists; what structural conditions have allowed it to end up in our feeds, our DMs, our comments sections. Sometimes we’ll realize that we simply don’t have enough information to know what we should be angry about. Sometimes it will widen the aperture of our anger. Sometimes both, in opposing directions. In any case, identifying network structures—and just as critically, gaps in knowledge—through ecological mindfulness helps create more space between ourselves and the angering thing. That space is where more traditional mindfulness practices can begin. If we skip ahead to those practices without establishing an ecological staging-area, we’re very likely to end up responding skillfully to the wrong things.    

In all of this, the point is not to dull our anger or shame ourselves because we’re experiencing it (“if I were a good person, I’d be able to extend compassion to all”). As Buddhist meditation teacher Tara Brach explains in her reflections on anger, we’re unable to move from anger to action if we’re stuck in fight-flight-freeze. If we’re too consumed with fighting, we can’t craft a strategic response. If we’re too busy fleeing or freezing, we can’t contribute at all. To break the cycles of limbic reactivity, we must be able to contribute, and contribute wisely. We don’t get there through some singular, dramatic action. We get there through everyday practice.   

Whitney Phillips, ‘Lizard Girl’
