On Tuesday, Facebook users began seeing a new “hate speech” report feature roll out by default on the primary news feed. Curiously, however, the new feature didn’t emerge under one of the site’s myriad “hamburger” and ellipsis drop-down menus, nor did it pop up as a one-time warning at the top of the site after a fresh login.
Instead, Facebook began asking, using a yellow “warning” exclamation box and offset text, whether every post on users’ news feeds contained hate speech.
The “feature” was apparently live for less than half an hour on Tuesday. Some (but not all) Ars staffers were able to grab screenshots of news feeds constantly asking the same hate-speech question on every post. While testing the change, thanks to the site’s standard refresh-as-you-scroll system, we found that the incessant prompt had been removed from every Facebook post by noon ET.
Choosing “no” makes the prompt go away with no further information, while choosing “yes” loads a pop-up box with follow-up questions and an apparently incomplete UI.
In neither case did the new system offer a clickable or expandable explanation of what “hate speech” might entail. That is also the case in Facebook’s existing “give feedback on this post” option, which is tucked under an ellipsis menu and includes other options like “harassment,” “suicide or self-injury,” “spam,” and “false news.” (Unfortunately, that “feedback” list of options still doesn’t include “possible inauthentic actor.”)
Perhaps worst of all for Facebook, while the prompt was active, the question “does this post contain hate speech?” appeared twice on every advertisement on the site.
It’s possible that Facebook had planned to roll out some form of hate-speech reporting feature on Tuesday, the kickoff day of the company’s annual F8 developer conference. That event begins with a 1pm ET keynote, which will likely feature Facebook CEO Mark Zuckerberg and company announcements. In a response to Ars Technica, a Facebook spokesperson described the prompt as “an internal test we were working on to understand different types of speech, including speech we thought would not be hate.” Its launch was described as a “bug.”
Source: Ars Technica