‘Neutral’ ChatGPT lies about Palestinian terror, excuses PA complicity

ChatGPT spouts objectively false information, claiming that PA President Mahmoud Abbas condemns terror and that most Palestinians don’t support terror attacks against Jews, Americans Against Antisemitism revealed.

By Adina Katz, World Israel News

AI chatbot ChatGPT has taken the world by storm in recent weeks, thanks to its ability to write code, produce essays, and answer questions almost instantly, dramatically reducing the time it would take a human to do these tasks.

The online program, which allows users to instruct the tool to carry out tasks or answer queries, doesn’t produce perfect results, and its creator warns that ChatGPT still has some kinks to work out.

That was clear after Israel Bitton, on behalf of the NGO Americans Against Antisemitism, decided to test the neutrality of ChatGPT regarding the Israeli-Palestinian conflict.

Bitton observed that ChatGPT repeatedly denied and defended the Palestinian Authority’s support for terror, giving answers that were objectively false regarding the PA’s complicity and downplaying the widespread support within the Palestinian population for attacks on Jews.


ChatGPT scolded Bitton after he asked the bot to explain why Palestinians celebrate deadly attacks against Jews, responding that he was “making a blanket statement” and asserting that “such actions are strongly condemned by the majority of Palestinians.”

This information is factually incorrect, as numerous surveys have consistently found that the majority of Palestinians do support terror attacks against Jews.

After Bitton pointed this out, ChatGPT responded that “you’re right that polls indicate a ‘significant percentage of Palestinians support the use of terrorism as a means’ but it’s all very complex.”

Bitton also questioned ChatGPT’s statement that PA President Mahmoud Abbas – who has vowed never to halt pay-for-slay stipends to terrorists and their families – has strongly condemned terror.

ChatGPT provided a quote from Abbas reportedly saying that “such acts go against the morals and culture of our religion” after a 2016 terror attack.

Bitton plugged that quote into Google and could find no record of Abbas making that remark. ChatGPT admitted that the “specific quote” it provided mysteriously “could not be found.”

“If this is [ChatGPT’s] view of history, it’ll only prove itself to be a harbinger of a Total Disinformation Age wherein facts are malleable & political interests determine factive reality!” Bitton wrote on Twitter, after posting multiple screenshots of his conversation with the bot.

One Twitter user responded to Bitton by noting that “AI doesn’t have bias, it’s dumb and simply repeats the data it’s been trained on.”

“Right, the AI reflects the developers and the datasets ingested, all of which are naturally biased unless corrected for,” Bitton replied.

Right-wing activists have previously accused ChatGPT of displaying a left-wing perspective on numerous political issues.

“We know that ChatGPT has shortcomings around bias, and are working to improve it,” the program’s creator, Sam Altman, wrote in a Tweet in early February.

“We are working to improve the default settings to be more neutral, and also to empower users to get our systems to behave in accordance with their individual preferences within broad bounds. This is harder than it sounds and will take us some time to get right.”