The police killing of Keith Scott, only the latest in a string of fatal shootings of black Americans by law enforcement, has prompted two days of furious and at times destructive protests. But even a violent protest is still a protest — so why is Facebook responding to it by activating a feature built for terrorist attacks, gun massacres, and earthquakes?
Facebook’s “Safety Check” option was first offered to the company’s hundreds of millions of users in 2014, an occasional tool deployed in the case of a typhoon or other act of God — “disasters and crises,” as the company put it at the time. Since then, it’s been activated for manmade catastrophes, including this year’s mass shooting in Orlando and the November 2015 attack on the Bataclan theater in Paris. It’s a quick way for, say, a student studying abroad to calm worried family members, or anyone else who may be affected by a nearby disaster. The word “disaster” here is important, and it’s one Facebook deliberately uses to describe Safety Check, which can be found at Facebook.com/disaster.
What was originally a feature that had to be manually flipped on by humans at Facebook is now another portion of the service that’s been yielded to the control of an “algorithm.” Facebook hopes to offload all Safety Check disasters onto autonomous software and the crowd, i.e., its more than 1.7 billion monthly active users. This mirrors a similar shift within Facebook’s trending news unit, which went from human to bot control earlier this year. The results have been, you might say, disastrous, with the bots promoting patently false hoax “news” stories. As with virtually every other decision Facebook has made in recent years, the company completely misses that the presentation of information on a mass scale is a deeply political function.
Calling a protest a “disaster” is a case in point: It’s a framing actively exploited by the nation’s increasingly rabidly racist right wing, for whom a Black Lives Matter rally is as much a terrorist gathering as a public ISIS execution in Raqqa. Publications like Breitbart News have been quick to depict this week’s protests as a lawless, orgiastic mob, a non-ideological swell of pure violence tantamount to, well, an earthquake. Converting a protest — even an angry and sometimes violent protest — into a “disaster” strips away the reasons why that protest is happening, distorting the nature of the demonstrations further and further every time a scared white Facebook user assures the world they’re “safe.” This is a move now typical of Facebook: An attempt to wash its hands of editorial and political judgment, to hand off all such responsibility to an opaque “algorithm” we’re supposed to trust as impartial and democratic. How these algorithms actually work is, like all good democratic processes, a trade secret.
The following screen shot was provided to The Intercept by a Facebook user with friends in the Charlotte area, who says the only people she’s seen check in as “safe” are white Facebook users nowhere near the actual protests.
If you click “more information” on the web version of this Safety Check page, you’re given an overview of recent posts about the protest. Chief among them is a story from the right-wing site The Blaze, headlined “No, They’re Not ‘Protesters.’ They’re Terrorists.”
This is anecdotal, of course, but is consistent with Facebook’s embrace of all things algorithmic, as explained in a statement from the company:
Safety Check can be activated multiple ways. The first is when Facebook notifies everyone in an affected area. This is used when an incident impacts a large number of people and there’s value in reaching them quickly. We look at a combination of the scope, scale and duration of the incident to determine when it is appropriate for Facebook to notify everyone in an area.
We are also testing a way for communities to activate Safety Check. When a significant number of people post about a specific incident and are in a crisis area, they will be asked to mark themselves safe through Safety Check. Once they do, they can then invite friends in the affected area to mark themselves safe as well. This allows communities to use Safety Check for situations where folks in the area know who Safety Check is most relevant for. In certain circumstances, as a situation evolves, Facebook may decide to notify everyone in an area even after the community has already started using Safety Check.
Rather than a human who can assess whether or not an event is truly disastrous, a Safety Check can now be “triggered” if 50 or more Facebook users indicate that they’re feeling unsafe. If Facebook detects that you’re discussing an issue of safety in the vicinity of the event itself (as confirmed by a third party such as a government agency or news source), you might be presented with the option to check in as “safe,” as is the case right now in Charlotte. It does not seem to matter to Facebook whether this threatened safety is real (e.g., a tsunami) or perceived (a Black Lives Matter protest and destruction of property in a neighboring county). To be sure, this tool could be tremendously useful were it placed in the hands of actual protesters, one of whom was shot two nights ago and died yesterday. But the mortal danger posed to Black Lives Matter protesters by police is more nuanced than an earthquake, and algorithms do not handle subtlety well. Consider the blue-shaded blast radius painted over the entirety of Charlotte in the screen shot above: it indicates that the “community-triggered” Charlotte Safety Check is being used to broadcast the safety of perfectly safe TV viewers and office building onlookers, not endangered participants.
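Stripped of nuance, what Facebook describes is a simple threshold rule. A minimal sketch of that logic follows — the function names, data shapes, and structure are assumptions for illustration, not Facebook’s actual code; only the 50-user threshold and the third-party confirmation step come from the reporting above:

```python
# Illustrative sketch of the community-triggered Safety Check logic as
# described in this article. All names here are hypothetical.

TRIGGER_THRESHOLD = 50  # users posting about feeling unsafe, per the article


def should_activate_safety_check(unsafe_posts, third_party_confirmed):
    """Activate when enough users in an area report feeling unsafe and a
    third party (a government agency or news source) confirms an incident."""
    return len(unsafe_posts) >= TRIGGER_THRESHOLD and third_party_confirmed
```

Note what the rule cannot see: nothing in it distinguishes a tsunami from a protest in a neighboring county, or an endangered participant from a frightened onlooker watching on TV. The threshold counts fear, not danger.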
It’s not hard to imagine how, aside from intentional abuse, a computer’s attempt at guessing what is or isn’t serious could backfire — this summer’s imaginary mass shooting at a John F. Kennedy Airport terminal in New York City could’ve easily snowballed across the country (or globe) had it triggered a Facebook Safety Check based on misinformation and raw fear. Facebook says it is “continually working to find the best way for Safety Check to be helpful to the most people,” but there’s no indication that means tempering the output of its beloved software algorithms with more input from human overseers. So until the next opportunity for a hoax crisis or rumor run amok, we should expect to see Facebook’s Safety Check increasingly used the way most Americans use Facebook: as a means of confirming, and then spreading, their fears.
Correction:
The original version of this post misstated the condition of a protester shot by a police officer; the protester died. Additionally, Charlotte police now say the protester was shot by another person in the crowd, not an officer.
Top photo: Demonstrators march during a protest on September 22, 2016, in Charlotte, North Carolina.
Just checking in to let you know I feel safe…no, wait! A spider! OMG!!! I’m gonna die!…Oh, the dog just ate it.
.. I just posted a cute photo on Facebook of Fido throwing up. LOL!!
Americans gunned down by law enforcement is both a disaster and a crisis, I don’t see that on their list.
FB is just sophisticated classical conditioning for profit, Pavlov’s dogs made to salivate at the mere sound of a bell, the act of which not only rewards FB to the tune of billions but allows them to manipulate all manner of things on a massive scale.
And faceBook is a shopfront for …….?
lol.
Reliance on facebook for anything other than propaganda seems foolish.
When the time comes that your facebook life is more important than your IRL, just realise that you have betrayed yourself.
“the nation’s increasingly rabidly racist right wing”
Came here for the Facebook story got the Clinton meme instead.
“The police killing of Keith Scott, only the latest in a chain of black Americans gunned down by law enforcement, has prompted two days of furious and at times destructive protests.”
“The original version of this post misstated the condition of a protestor shot by a police officer; the protester died. Additionally, Charlotte police now say the protester was shot by another person in the crowd, not an officer.”
Conceptually, this is a great topic to address. Unfortunately, you killed it with both your opening and closing statements which are presented as factual rather than claims. For accuracy, especially while discussing topics/events heavily fueled by social media, it’s better to state facts rather than opinions on important things such as people dying. There are no facts to support the statement Scott was “gunned down by law enforcement” but you asserted it, nonetheless. There are no facts to support the statement a “protester was shot by a police officer” or “Charlotte police now say the protester was shot by another person in the crowd, not an officer.” But you asserted it as fact. You did update he has died but didn’t bother to mention his name (Justin Carr). There is no evidence Carr was shot by police and Charlotte’s police department aren’t “now saying” he was shot by a person in the crowd. They said it from the beginning – while they were carrying Carr (shot in the back of the head) away from the crowd. It, too, was recorded and posted live on facebook (no funny algorithm business).
In the middle of the article you suggest; “To be sure, this tool could be tremendously useful were it placed in the hands of actual protestors …”
The tool being FB’s Safety Check. Who is responsible for making sure “actual protestors” have this useful tool? I don’t think I’d be overinflating if I observed that most of the “actual protestors” had iPhones in hand and FB was flooded with live shots. Did someone disable the Safety Check feature within the protest’s core of activity, leaving only unaffected, white people in surrounding areas able to hysterically check in?
Breitbart & “rabid racist” Co. aside; who gets to decide when a protest is a destructive riot and who gets to validate or invalidate people’s fear? You? Who determines whether or not widespread looting, burning, assaults, threats, ripping up arenas, and physical acts of violence along interstates are merely moments of violence displayed “at times”, rather than the main activity all night long? Me?
Lastly; I agree with your underlying critique of FB’s Safety Check and the algorithm hand washing – but that doesn’t excuse your misstatements or deliberate “framing” and “exploitation” of events. It does a disservice to the cause.
Correction to your correction – while the Charlotte police may be saying that the protester who was killed was shot by “another person in the crowd”, eyewitnesses say he was shot by cops using rubber bullets at close range.
A man was shot in Charlotte. Multiple videos were released on youtube showing groups of black people beating solitary whites in Charlotte. Five police officers were killed at a BLM protest in Dallas. Another BLM protester killed two police officers. Social networks were and are rife with threats against white people, and threats were and are carried out. . .but you object to a warning? From your article, it seems you object to even a mention of their existence.
This article is a good example of the Intercept’s pursuits of the social policies of Neo-Liberal racism (the de facto substitution of race or sex for class as a primary motivator in human society , which serves to obscure class division), which is intended to, and does, atomize the population.
The Intercept does some very good investigative journalism, and the reason why is because as long as the population is divided, they can know about anything. The Intercept publishes good investigative journalism because the billionaire that owns it wants valid news, and the Intercept divides the populace because the billionaire that owns it doesn’t want his warehouse workers to stop focusing on race and start focusing on class.
The primary social policy of Neo-Liberalism is to ensure a divided populace in order to make democratic consensus an impossibility. This is the line the Intercept has chosen to adhere to with articles like this.
As the Intercept has added more and more Neo-Liberal propagandists to the staff, many are starting to sound exactly like the New Atheist cult when confronted by evidence of religious bigotry from Harris or Dawkins. They’ve got their deeply seated dogma and any evidence suggesting it could be amiss should obviously be disregarded without consideration.
Like virtually all academics, they are themselves a product of neoliberalism in academia and, for a great part unwittingly, working to prop up a system that they like to tell themselves that they are tearing down. “White” people in higher education are taught to hate themselves and feel worthy of inferior treatment (as long as the “they” is poor white racist trash, not those white college grads).
The banks and politicians can’t rob America blind if the journalists don’t keep Americans fighting in the streets, and that’s just what articles like this are intended to do. It’s like the first thief distracting a shopkeep while the second thief steals him blind. The Intercept, and accepted American journalism in general, is the first thief.
This Neo-Liberal anti-“white” racism can be traced back to the social sciences btw, which can themselves be traced back to John D Rockefeller Jr. when he established them in American universities from circa 1900-1945 to create think tanks to shape government and society (i.e, create our current Neo-Liberal system). The American system of higher education was built by wealth to serve wealth, not the downtrodden and oppressed.
One thing we need to keep front and center whenever we hear about AIs is that they are inherently racist. We’ve seen stories like https://www.theguardian.com/technology/2016/mar/30/microsoft-racist-sexist-chatbot-twitter-drugs but it’s important to get at the reason: Racism works. It’s a go-along-to-get-along solution. If you ran a diner in Montgomery Alabama in 1960, you could have given your customers boxes to upvote and downvote who came in, and the effect would have been just as racist as anyone the heroes of that day protested. The modern “social credit” schemes being patented by Facebook and rolled out in Britain and China can be expected to work the same way. And a good AI adapts and learns how to get those upvotes and not downvotes! Only a real person who knows what racism is and has a moral belief that it is wrong and has a personal code that keeps him or her from joining in with it, even when there is personal risk and sacrifice involved, can truly be non-racist.
But for companies like Facebook, the racism of AIs is not merely a bug, but a feature. Humans are not allowed to be racist – but they want to be racist, because there is a profit motive. “Social credit” that gives blacks lower scores is more pragmatically accurate when you take into account the risk of false prosecution, getting shot for reading a book, or (most realistically) having a family member sent to jail who spends the next ten years begging you for commissary money so he won’t get beat up again. So a company looking at two credit applications wants to pass the white, shitcan the black, but they can’t just do it — which is why they need AI. And Facebook is developing those resources, telling who is networked to whom, so that the companies can do it ‘legitimately’.
man you really need to read Machiavelli. You just aren’t seeing the forest through the trees.
This isn’t for Facebook users, and racial fairness is an utterly trivial concern in this instance. This is a tool for government. This is Facebook’s answer to Hitler youth, to create an oblivious army of snitches blanketing the country. This is a tool Facebook developed for dictators to subjugate populations.
I won’t say that isn’t true in a sense – but selling a tool to legitimize discrimination in lending is a valuable business opportunity right now, and “social credit” is a golden path toward it. There are many ways for Facebook to coordinate snitching, of which the most important at the present is still simply that people post stuff and AIs can crawl through it looking for incriminating data. The higher-order effects of AI voting algorithms don’t seem as important for just plain snitching, though they may have greater use in social (rather than racial) discrimination.
I do see uses like you suggest though. For example, the site can estimate users as “liberal” or “conservative” based on their associations. The next step is to treat the populations differently – e.g. serve a week of depressing news about people with no hope to the liberals before the election, while serving the conservatives articles about activists and protests and rallies and getting exercise and how to find their polling place. Or maybe vice versa this time, since Facebook cares a lot about the cheap foreign labor issue. However, in order to really realize the capability of the technology they have to use stuff like possibly the Google terahertz transmitters to beam monochromatic terahertz waves and disrupt specific DNA-protein associations on the target individuals on a nationwide level. I think they are developing this technology but I doubt they have it already.
Perhaps Safety Check is not a legitimate concept. Perhaps Fecesbook (diapers for the internet) is not a good way to share information. Perhaps we should prevent diapers from leaking their contents into the adult world.
What the hell? Have you read anything about climate change lately? Are 50 of us feeling unsafe about that? Where do I click?