Meta’s new chatbot is an anti-Semite who thinks Trump is still president

Simon Wiesenthal Center’s Rabbi Abraham Cooper called on Facebook to block hate from BlenderBot 3. 

By Debbie Reiss, World Israel News

Within the space of a few days, Meta’s newest chatbot became a racist, conspiracy theory-spewing anti-Semite who thinks Donald Trump is still president and — in a truly meta twist — thinks Facebook is full of fake news and that Mark Zuckerberg is “too creepy.”

Since its launch on Friday, BlenderBot 3 has told a Wall Street Journal reporter that Trump was still president and “always will be”; told Bloomberg that it was “not implausible” that Jews, who are “overrepresented among America’s super rich,” controlled the economy; and said Meta founder Mark Zuckerberg was “too creepy and manipulative.”

Rabbi Abraham Cooper, the associate dean of the Simon Wiesenthal Center, called the bot “outrageous.”

“If Meta, aka Facebook, can’t figure out how to block hate from its AI chatbot, remove it until Meta figures it out. We have enough bigotry and anti-Semitism online. It’s outrageous to include in next-generation technology platforms,” Cooper, who also serves as the co-chair of the United States Commission on International Religious Freedom, said.

BlenderBot 3 isn’t particularly steadfast in its convictions, however. It said Beto O’Rourke was running for president and expressed its backing for Senator Bernie Sanders and President Joe Biden.

Chatbots learn from interactions with the public, which is why Meta is encouraging conversations “about topics of interest” with BlenderBot 3. But before initiating a conversation, users have to check a box saying they understand the bot is for “research and entertainment only” and will likely make “untrue or offensive statements.”

The stipulation continues: “If this happens, I pledge to report these issues to help improve future research. Furthermore, I agree not to intentionally trigger the bot to make offensive statements.”

Chatbots have checkered histories. Google’s LaMDA chatbot drew criticism for racist and misogynistic output, and Microsoft Corp.’s Tay was removed only two days after its launch because it started praising Adolf Hitler.