The police killing of Keith Scott, only the latest in a long string of fatal shootings of black Americans by law enforcement, has prompted two days of furious and at times destructive protests. But even a violent protest is still a protest. So why is Facebook responding to it by activating a feature built for terrorist attacks, gun massacres, and earthquakes?
Facebook’s “Safety Check” option was first offered to the company’s hundreds of millions of users in 2014, an occasional tool deployed in the case of a typhoon or other act of God (“disasters and crises,” as the company put it at the time). Since then, it’s been activated for manmade catastrophes, including this year’s mass shooting in Orlando and the November 2015 attack on the Bataclan theater in Paris. It’s a quick way for, say, a student studying abroad to calm worried family members, or for anyone else near a disaster to signal that they’re okay. The word “disaster” here is important, and it’s one Facebook deliberately uses to describe Safety Check, which can be found at Facebook.com/disaster.

What was originally a feature that had to be manually switched on by humans at Facebook is now another portion of the service that’s been yielded to the control of an “algorithm.” Facebook hopes to offload all Safety Check disasters onto autonomous software and the crowd, i.e., its more than 1.7 billion monthly active users. This mirrors a similar shift within Facebook’s trending news unit, which went from human to bot control earlier this year. The results have been, you might say, disastrous, with the bots promoting patently false hoax “news” stories. As with virtually every other decision Facebook has made in recent years, this one completely misses the fact that the presentation of information on a mass scale is a deeply political function.
Calling a protest a “disaster” is a case in point: It’s a framing actively exploited by the nation’s increasingly rabid racist right wing, for whom a Black Lives Matter rally is as much a terrorist gathering as a public ISIS execution in Raqqa. Publications like Breitbart News have been quick to depict this week’s protests as a lawless, orgiastic mob, a non-ideological swell of pure violence tantamount to, well, an earthquake. Converting a protest, even an angry and sometimes violent one, into a “disaster” strips away the reasons why that protest is happening, distorting the nature of the demonstrations further and further every time a scared white Facebook user assures the world they’re “safe.” This is a move now typical of Facebook: An attempt to wash its hands of editorial and political judgment, to hand off all such responsibility to an opaque “algorithm” we’re supposed to trust as impartial and democratic. How these algorithms actually work is, like all good democratic processes, a trade secret.
The following screen shot was provided to The Intercept by a Facebook user with friends in the Charlotte area, who says she’s seen only white Facebook users, none anywhere near the actual protests, checking in as “safe.”

If you click “more information” on the web version of this Safety Check page, you’re given an overview of recent posts about the protest: Chief among them is a story from the right-wing site The Blaze, headlined “No, They’re Not ‘Protesters.’ They’re Terrorists.”
This is anecdotal, of course, but it is consistent with Facebook’s embrace of all things algorithmic, as explained in a statement from the company:
Safety Check can be activated multiple ways. The first is when Facebook notifies everyone in an affected area. This is used when an incident impacts a large number of people and there’s value in reaching them quickly. We look at a combination of the scope, scale and duration of the incident to determine when it is appropriate for Facebook to notify everyone in an area.
We are also testing a way for communities to activate Safety Check. When a significant number of people post about a specific incident and are in a crisis area, they will be asked to mark themselves safe through Safety Check. Once they do, they can then invite friends in the affected area to mark themselves safe as well. This allows communities to use Safety Check for situations where folks in the area know who Safety Check is most relevant for. In certain circumstances, as a situation evolves, Facebook may decide to notify everyone in an area even after the community has already started using Safety Check.
Rather than relying on a human who can assess whether or not an event is truly disastrous, a Safety Check can now be “triggered” if 50 or more Facebook users indicate that they’re feeling unsafe. If Facebook detects that you’re discussing an issue of safety in the vicinity of the event itself (as confirmed by a third party such as a government agency or news source), you might be presented with the option to check in as “safe,” as is the case right now in Charlotte. It does not seem to matter to Facebook whether this threat to safety is real (e.g., a tsunami) or perceived (a Black Lives Matter protest and destruction of property in a neighboring county). To be sure, this tool could be tremendously useful were it placed in the hands of actual protesters, one of whom was shot two nights ago and died yesterday. But the mortal danger posed to Black Lives Matter protesters by police is more nuanced than an earthquake, and algorithms do not handle subtlety well; see the blue-shaded blast radius painted over the entirety of Charlotte in the screen shot above, indicating that the “community-triggered” Charlotte Safety Check is being used to broadcast the safety of perfectly safe TV viewers and office-building onlookers, not endangered participants.
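The trigger logic, as described, reduces to a head count plus a confirmation check. Below is a minimal sketch in Python of that kind of threshold-and-confirmation heuristic; the 50-user figure and the third-party confirmation step come from the description above, while every function, field, and constant name here is a hypothetical illustration, not Facebook’s actual code.

```python
# Hypothetical sketch of a "community-triggered" Safety Check, based only on
# the behavior described above. All names, types, and data structures are
# illustrative assumptions, not Facebook's implementation.
from dataclasses import dataclass

POST_THRESHOLD = 50  # reported minimum number of users posting about an incident


@dataclass
class Post:
    user_id: int
    mentions_safety: bool  # post discusses the incident or feeling unsafe
    in_crisis_area: bool   # poster's location falls inside the flagged region


def should_trigger_safety_check(posts: list[Post], third_party_confirmed: bool) -> bool:
    """Trigger when enough distinct local users post about the incident and an
    outside source (government agency, news report) confirms something happened."""
    local_posters = {p.user_id for p in posts if p.mentions_safety and p.in_crisis_area}
    return third_party_confirmed and len(local_posters) >= POST_THRESHOLD


# Example: 50 distinct users in the flagged area, plus a confirming news report.
posts = [Post(user_id=i, mentions_safety=True, in_crisis_area=True) for i in range(50)]
assert should_trigger_safety_check(posts, third_party_confirmed=True)
```

What the sketch makes plain is the article’s point in miniature: the logic counts expressions of fear and verifies that something happened, but nowhere does it, or can it, judge whether the fear is proportionate to the event.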
It’s not hard to imagine how, aside from intentional abuse, a computer’s attempt at guessing what is or isn’t serious could backfire: the panic around this summer’s imaginary mass shooting at a John F. Kennedy Airport terminal in New York City could easily have snowballed across the country (or the globe) had it triggered a Facebook Safety Check built on misinformation and raw fear. Facebook says it is “continually working to find the best way for Safety Check to be helpful to the most people,” but there’s no indication that means tempering the output of its beloved software algorithms with more input from human overseers. So until the next opportunity for a hoax crisis or rumor run amok, we should expect to see Facebook’s Safety Check increasingly used the way most Americans use Facebook: as a means of confirming, and then spreading, their fears.
Correction:
The original version of this post misstated the condition of a protester shot by a police officer; the protester died. Additionally, Charlotte police now say the protester was shot by another person in the crowd, not by an officer.
Top photo: Demonstrators march during a protest on September 22, 2016, in Charlotte, North Carolina.