Facebook and the Normalization of Deviance

When the sociologist Diane Vaughan coined the term “the normalization of deviance,” she was referring to NASA administrators’ disregard of the flaw that caused the Challenger space shuttle to explode, in 1986. The idea was that people in an organization can become so accepting of a problem that they no longer consider it to be problematic. (In the case of the Challenger, NASA had been warned that the shuttle’s O-rings were likely to fail in cold temperatures.) Consider Facebook: for years, its leadership has known that the social network has abetted political polarization, social unrest, and even ethnic cleansing. More recently, it has been aware that its algorithms have promoted misinformation and disinformation campaigns about COVID-19 and vaccines. Over the past year, the company made piecemeal attempts to remove false information about the pandemic, issuing its most comprehensive ban in February. An analysis last month by the nonprofit group First Draft, however, found that at least thirty-two hundred posts making unfounded claims about COVID-19 vaccines had appeared after the February ban. Two weeks ago, the top post on Facebook about the vaccines was a video of Tucker Carlson, on Fox News, “explaining” that they don’t work.

Over the years, Mark Zuckerberg, Facebook’s C.E.O., has issued a cascade of apologies for the company’s privacy breaches, algorithmic biases, and promotion of hate speech, among other issues. Too often, the company seems to change course only after such issues become public; in many cases, it had been made aware of those failures long before, by Facebook employees, injured parties, or objective evidence. It took months for the firm to acknowledge that political ads on its platform were being used to manipulate voters, and to then create a way for users to find out who was paying for them. Last December, the company finally reconfigured its hate-speech algorithm, after years of criticism from Black groups that the algorithm disproportionately removed posts by Black users discussing racial discrimination. “I think it’s more useful to make things happen and then, like, apologize later,” Zuckerberg said early in his career. We’ve witnessed the consequences ever since.

Here’s what Facebook’s normalization of deviance has looked like in the first few months of 2021: In February, internal company e-mails obtained by ProPublica revealed that, in 2018, the Turkish government demanded that Facebook block posts, in Turkey, from a mainly Kurdish militia group that was using them to alert Syrian Kurdish civilians to impending Turkish attacks against them, and made clear, according to Facebook, “that failing to do so would have led to its services in the country being completely shut down.” Sheryl Sandberg, Facebook’s C.O.O., told her team, “I’m fine with this.” (Reuters reported that the Turkish government had detained almost six hundred people in Turkey “for social media posts and protests criticizing its military offensive in Syria.”)

On April 3rd, Alon Gal, the chief technology officer of the cybercrime-intelligence firm Hudson Rock, reported that, sometime prior to September, 2019, the personal information of more than half a billion Facebook users had been “scraped” and posted to a public Web site frequented by hackers, where it is still available. The stolen data included names, addresses, phone numbers, e-mail addresses, and other identifying information. But, according to Mike Clark, Facebook’s product-management director, scraping data is not the same as hacking data—a technicality that will be lost on most people—so, apparently, the company was not obligated to let users know that their personal information had been stolen. “I have yet to see Facebook acknowledging this absolute negligence,” Gal wrote. An internal memo about the breach was inadvertently shared with a Dutch journalist, who posted it online. It stated that “assuming press volume continues to decline, we’re not planning additional statements on this issue. Longer term, though, we expect more scraping incidents and think it’s important to... normalize the fact that this activity happens regularly.” On April 16th, the group Digital Rights Ireland announced that it plans to sue Facebook over the breach, in what it calls “a mass action”; and Ireland’s privacy regulator, the Data Protection Commission, has opened an investigation to determine whether the company violated E.U. data rules. (Facebook’s European headquarters are in Dublin.)

On April 12th, the Guardian revealed new details about the experience of Sophie Zhang, a data scientist who posted an angry, cautionary farewell memo to her co-workers before she left the company, last August. According to the newspaper, Zhang was fired for “spending too much time focused on uprooting civic fake engagement and not enough time on the priorities outlined by management.” “In the three years I’ve spent at Facebook, I’ve found multiple blatant attempts by foreign national governments to abuse our platform on vast scales to mislead their own citizenry,” Zhang wrote in the memo, which, the Guardian reports, Facebook tried to suppress. “We simply didn’t care enough to stop them.” A known loophole in one of Facebook’s products enabled corrupt governments to create fake followers and fake “likes,” which then triggered Facebook’s algorithms to boost their propaganda and lend it legitimacy. According to the Guardian, when Zhang alerted higher-ups about how this was being used by the government of Honduras, an executive told her, “I don’t think Honduras is big on people’s minds here.” (A Facebook spokesperson told the newspaper, “We fundamentally disagree with Ms Zhang’s characterization of our priorities and efforts to root out abuse on our platform.”)
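The mechanics of that loophole are easier to see in miniature. Below is a minimal, hypothetical sketch, in Python, of how an engagement-weighted ranking can be gamed by manufactured “likes.” The scoring weights, field names, and accounts are invented for illustration; this is not Facebook’s actual ranking system.

```python
# A toy model of engagement-weighted ranking; the weights and field names
# are illustrative assumptions, not Facebook's actual system.
from dataclasses import dataclass

@dataclass
class Post:
    author: str
    likes: int
    shares: int
    comments: int

def engagement_score(post: Post) -> float:
    # A ranking that counts raw engagement signals, without checking
    # whether the accounts behind them are real, can be inflated by
    # manufactured activity.
    return post.likes + 3 * post.shares + 2 * post.comments

organic = Post(author="local_news", likes=120, shares=15, comments=40)
state_page = Post(author="state_propaganda", likes=60, shares=8, comments=12)

# The loophole Zhang described, in miniature: bulk-created fake accounts
# add "likes," and the model treats them exactly like real ones.
state_page.likes += 5_000

ranked = sorted([organic, state_page], key=engagement_score, reverse=True)
print([p.author for p in ranked])  # ['state_propaganda', 'local_news']
```

To a model like this, fake engagement and real engagement are indistinguishable, which is why manufactured “likes” translate directly into distribution.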

On April 13th, The Markup, a nonprofit, public-interest investigative Web site, reported that Facebook’s ad business was monetizing and reinforcing political polarization in the United States, by allowing companies to target users based on their political beliefs. ExxonMobil, for example, was serving liberals with ads about its clean-energy initiatives, while conservatives were told that “the oil and gas industry is THE engine that powers America’s economy. Help us make sure unnecessary regulations don’t slow energy growth.” How did ExxonMobil know whom, specifically, to target? According to the report, from Facebook’s persistent monitoring of users’ activities and behaviors on and off Facebook, and its delivering of these “custom audiences” to those willing to pay for ads on its platform.
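To make that mechanism concrete, here is a hypothetical sketch of interest-based targeting, in which a single advertiser serves opposite messages to “custom audiences” segmented by inferred political leaning. The attribute names and categories are invented for illustration; this is not Facebook’s ads API.

```python
# A hypothetical sketch of audience segmentation and tailored ad delivery.
# "inferred_politics" stands in for whatever a platform derives from
# tracking users' on- and off-site behavior; it is not a real API field.

users = [
    {"id": 1, "inferred_politics": "liberal"},
    {"id": 2, "inferred_politics": "conservative"},
    {"id": 3, "inferred_politics": "liberal"},
]

# One advertiser, two opposite pitches, as in the ExxonMobil example above.
creatives = {
    "liberal": "Read about our investments in clean-energy initiatives.",
    "conservative": "Unnecessary regulations slow energy growth. Help us stop them.",
}

def serve_ads(users, creatives):
    # Each user sees only the version tailored to their inferred leaning,
    # so two neighbors can receive contradictory messages from the same company.
    return {u["id"]: creatives[u["inferred_politics"]] for u in users}

for user_id, ad in serve_ads(users, creatives).items():
    print(user_id, "->", ad)
```

Because each audience sees only its own version, neither side ever encounters the message aimed at the other, which is part of how such targeting entrenches polarization.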

On April 19th, Monika Bickert, Facebook’s vice-president of content policy, announced that, in anticipation of a verdict in the trial of Derek Chauvin, the company would remove hate speech, calls to violence, and misinformation relating to that trial. That accommodation was a tacit acknowledgment of the power that users of the platform have to incite violence and spread dangerous information, and it was reminiscent of the company’s decision, after the November election, to tweak its News Feed algorithm in order to suppress partisan outlets, such as Breitbart. By mid-December, the original algorithm was restored, prompting several employees to tell the Times’ Kevin Roose that Facebook executives had reduced or vetoed past efforts to combat misinformation and hate speech on the platform, “either because they hurt Facebook’s usage numbers or because executives feared they would disproportionately harm right-wing publishers.” According to the Tech Transparency Project, right-wing extremists spent months on Facebook organizing their storming of the Capitol, on January 6th. Last week, an internal Facebook report obtained by BuzzFeed News confirmed the company’s failure to stop coördinated “Stop the Steal” efforts on the platform. Soon afterward, Facebook removed the report from its employee message board.
