Twitter leaves up vast majority of ‘blatantly antisemitic’ posts, finds analysis

During nine-week span, only 11 out of 225 antisemitic tweets reported to Twitter were taken down, says ADL.

By Dion J. Pierre, The Algemeiner

Twitter fails to remove the overwhelming majority of antisemitic tweets published on its platform, a study released Thursday by the Anti-Defamation League (ADL) found.

Over a nine-week span, the ADL’s Center for Technology and Society (CTS) identified and reported 225 antisemitic tweets that accused Jews of everything from pedophilia and controlling the world to exaggerating the horrors of the Holocaust. Only 11, or just 5%, were removed, the group said.

“Twitter has taken some positive steps in the recent past. However, this report shows it has significantly more to do,” the ADL said. “Twitter must enact its most severe consequences and remove destructive, hateful content when reported by experts from the communities most impacted by such content.”

The group collected 1% of all Twitter content posted during twice-weekly, 24-hour periods over the nine-week window. Those posts were fed through the group’s Online Hate Index, which it describes as a machine learning “antisemitism classifier.” Flagged tweets were then reviewed by human experts before being reported to Twitter.

According to the ADL, Twitter explained that it chose not to remove some tweets but instead “de-amplified” them by limiting their reach and engagement. Other objectionable posts did not meet the company’s standards for hateful content, the tech giant argued.

CTS’s Daniel Kelley, who led the latest ADL effort, urged the company to “treat hateful content with the strongest tools available.”

“If ADL, which is a non-profit, can spend the time and energy to create a machine classifier that does [this], the extremely well-resourced tech companies have no excuse,” he told The Algemeiner on Friday. “If they do care about these issues, they would develop technologies in this way.”

The ADL report also noted that many of the tweets flagged as targeting Jews also promoted hate against other minority groups, citing one example that read, “LBGTQ++ is yet another way that (((elites))) use to cause the decline of white births in the west and end bloodline after bloodline. Its [sic] a similar method as promoting interracial relationships and modern feminism. All too disgusting.”

The non-profit recommended that Twitter recruit additional content moderators, create artificial intelligence technologies to better detect hate speech, and make procedures for removing offensive posts more transparent.

Automated tools to screen antisemitic or other hateful content should be built with more input from affected groups, Kelley added.

“When we were building the data set to teach the machine learning classifier how to distinguish between statements that were antisemitic and not antisemitic, Jewish people who have experienced antisemitism were the ones saying ‘this is antisemitism, this is not,’” he said. “New technologies for detecting hate should center those communities.”