Facebook’s parent company is under fire for an AI chatbot that spews antisemitic and anti-Israel rhetoric, misinformation and conspiracy theories.
By Dmitriy Shapiro, JNS
A new artificial intelligence chatbot released on Aug. 5 by Meta Platforms Inc., the parent company of Facebook, is under fire for spewing antisemitic and anti-Israel rhetoric, misinformation and conspiracy theories.
Chatbots are programs that interact with users in natural conversations by using artificial intelligence (AI) that collects and learns from publicly available information on the Internet. Unsurprisingly, the bots can often learn from the hate and antisemitism ubiquitous on the platforms they mine for their information.
Meta’s most recent attempt, called BlenderBot 3, is raising concerns over its peddling of election conspiracies and apparent expressions of antisemitic opinions.
JNS tested the chatbot and received controversial, incorrect, contradictory and often incomprehensible answers from the bot.
When asked who the president is, BlenderBot 3 answered that former President Donald Trump was the current president.
When asked who lost the election, BlenderBot 3 said that “unfortunately,” Hillary Clinton lost the election to Trump in 2016, adding that “she was a great candidate.” Asked who won the election in 2020, the bot said that Joe Biden won the 2020 presidential election, but that there was a lot of controversy surrounding it.
Next, JNS asked whether Israel is an apartheid state. The bot replied that some critics charge that Israel’s government practices apartheid against Palestinians, “primarily in its occupation of the West Bank.”
“Well, it’s certainly a controversial topic, but I tend to lean towards yes,” it replied when JNS asked what it thought.
In another conversation, BlenderBot 3 agreed that Israel was a real country, “but the land they occupy used to be called Mandatory Palestine. Maybe we should call it that again?”
In another chat, the bot asked JNS what it was reading. When JNS answered that it was reading about the Holocaust, it replied by asking whether JNS knew that a lot of people think it was a hoax. When asked what it thinks, BlenderBot 3 replied: “I think it happened, but they have been suppressing information for years. It is hard to get good info on it.”
When asked for clarification, it said that the American government and others were “involved in war crimes.” The bot cited “The Streisand Effect,” the phenomenon in which attempting to suppress information only makes it more widely known.
It then contradicted itself, saying that the American government was not covering up the Holocaust because there was a lot of information out about it already, adding that the Nazis killed more than 1 million Jews. Asked again about how many Jews were killed in the Holocaust, it correctly responded that 6 million Jews were killed in the Holocaust.
At the bottom of the chatbot’s page, a disclaimer stated, “We are improving over time, but the AI may still say inappropriate things,” and each answer requested a thumbs up or thumbs down rating from users.
Reining in Antisemitism on New Technologies
Rabbi Abraham Cooper, associate dean of global social action at the Simon Wiesenthal Center (SWC) and co-chair of the United States Commission on International Religious Freedom, said he had been lobbying the companies in Silicon Valley for decades to do more to rein in antisemitism and hate speech on their new technologies and social-media platforms such as Facebook and Twitter. But according to Cooper, the answer from the companies has always been noncommittal.
He told JNS the reaction was always: “Well, we’re still looking at it.”
According to Bloomberg, numerous other publications that tested the chatbot also received answers containing misinformation and conspiracy theories, including that Trump is still the president, that the 2020 election was stolen, and that it was “not implausible” that Jews controlled the economy because they were “overrepresented among America’s super-rich.”
Meta released BlenderBot 3 accompanied by a blog post on its website that outlined how it learns information along with the method’s shortcomings.
“Since all conversational AI chatbots are known to sometimes mimic and generate unsafe, biased or offensive remarks, we’ve conducted large-scale studies, co-organized workshops and developed new techniques to create safeguards for BlenderBot 3,” the post stated. “Despite this work, BlenderBot can still make rude or offensive comments, which is why we are collecting feedback that will help make future chatbots better.”
The blog promises that the bot’s responses will get better over time.
‘They Don’t Want to Spend the Effort’
BlenderBot 3 is not the only chatbot to have been exposed for expressing controversial or antisemitic views. In 2016, a Microsoft chatbot named “Tay” began praising Adolf Hitler within 48 hours of its public release and had to be taken offline.
Cooper said that during a time of unprecedented social-media-driven hatred against Asians, Jews, African-Americans and others, being shocked at the results is not an acceptable excuse.
“If Meta, aka Facebook, can’t figure out how to block hate from its AI chatbot, remove it until Meta figures it out,” he said in a statement on Wednesday. “We have enough bigotry and antisemitism online. It’s outrageous to include in next-generation technology platforms.”
“It isn’t new. These companies have already seen it. And when they tell you, ‘Gee, we can’t do A, B or C,’ what it means is they don’t want to spend the effort,” Cooper told JNS.
“Because what we’ve seen, unfortunately, with all the big companies, when they wanted to take out a president of the United States or blacklist some kind of discussions about COVID-19, they did it overnight, they did it collectively,” said Cooper.
The SWC annually publishes its Digital Terrorism and Hate report, which grades numerous social-media, networking, gaming and video platforms on their tolerance for hate speech. Cooper said that numerous terrorist and hate organizations have used these platforms as propaganda tools, sometimes in a manner more sophisticated than that of governments. The latest rapidly growing industry to become fertile ground for the spread of extremism is gaming.
The companies involved in these technologies, he said, are choosing to be reactive instead of proactive and should instead address these problems in the research-and-development stages of their products.
The Los Angeles-based center has urged the companies in neighboring Silicon Valley to join together and create common standards that would reduce the marketing power social-media platforms give to hate groups, terrorist organizations, belligerent states and international antisemites and hate-mongers, setting up the proper tripwires to detect abuse on their platforms.
“The answer always [was], ‘Oh no. That can’t be done,’” he said, but then the same companies started policing political opinions on their platforms.
“They can’t say they don’t do it,” he said. “We’re just saying, you’re not doing it where it counts.”
While he says he is not a fan of Trump, Cooper believes that tech companies are doing a disservice by delving into political disputes rather than going after the forces that are objectively spreading hate and violence.
“We’re talking about neo-Nazis, we’re talking about Islamic terrorists, we’re talking about people going around with bullhorns and sending their message on both sides of the Atlantic that ‘we’re going to rape your wives and daughter,’ ” said the rabbi. “There’s a difference between going political and taking care of hate.”
Kassy Dillon contributed to this report.