Facebook defends Muslim sharia law censorship, champions blasphemy laws

Aside from the well-publicized incidents affecting a number of high-profile Facebook users, lesser-known social media users are censored and banned all the time.

By Judith Bergman, The Gatestone Institute

Recent events illustrate how Facebook, which has previously championed blasphemy laws, continues its “sharia censorship” of content it apparently deems contrary to its “Community Standards.”

A report published in the Wall Street Journal on January 8 noted that Facebook and Twitter executives removed activist Laura Loomer from their platforms after Zahra Billoo, executive director of the San Francisco Bay Area chapter of the Council on American-Islamic Relations (CAIR), complained to them. What Facebook fails to disclose is that CAIR was an unindicted co-conspirator in the largest terror-financing case in US history. CAIR has also been designated a terror organization by the United Arab Emirates.

It should, however, come as little surprise that CAIR executives are able to wield such power over social media. According to Islamist Watch’s Sam Westrop:

“Since 2008, the Silicon Valley Community Foundation (SVCF) has granted $330,524 to two Islamist organizations, the Council on American-Islamic Relations (CAIR) and Islamic Relief…. SVCF is America’s largest community foundation, with assets of over $8 billion. Its corporate partners include some of the country’s biggest tech companies — its largest donation was $1.5 billion from Facebook founder Mark Zuckerberg.”

Silicon Valley, in other words, appears to be in the habit of financially supporting Islamists.

Billoo herself, according to Jihad Watch, “[i]n tweets that remain publicly available… has expressed her support for an Islamic caliphate and Sharia law. She also claims, in multiple tweets, that ISIS is on the same moral plane as American and Israeli soldiers, adding that ‘our troops are engaged in terrorism.'”

Also in January, Facebook removed ads promoting a “Britain First” petition against the redevelopment and expansion of a mosque in the UK; the ads had run on a Facebook page called Political Gamers UK. “Britain First” announced it would sue the social media giant for “political discrimination.”

The latest two instances of Facebook censorship were far from unique. In 2018, some of the publicized incidents of Facebook censorship included:

The news website Voice of Europe reported that it had been repeatedly censored and suspended for posting articles that contained content reflecting the critical stance of Central and Eastern European politicians against migration. An example is a book review of former Czech President Vaclav Klaus’s Europe All Inclusive, in which he said: “The migrant influx is comparable to the barbarian invasions of Europe.”

According to Voice of Europe, “We’ve now decided we will not post all our news on Facebook anymore, because we don’t want to lose our page.”

German Catholic historian and author Dr. Michael Hesemann had his comments on the historic role of Islam in Europe deleted because they supposedly did not correspond to Facebook “community standards.” Hesemann had written, “Islam always plays only one role in the 1700-year-old history of the Christian Occident: the role of the sword of Damocles which hung above us, the threat of barbarism against which one needed to unite and fight. In this sense, Islam is not part of German history…”

Frontpage Magazine editor Jamie Glazov was banned from Facebook for 30 days for posting screenshots of a Muslim's threats against him. Facebook also banned him for 30 days on another occasion, for publishing, on the 17th anniversary of 9/11, an article on how best to prevent future 9/11s, "9 Steps to Successfully Counter Jihad."

(Most recently, another social media giant, Twitter, warned Glazov that his new book, Jihadist Psychopath: How He Is Charming, Seducing, and Devouring Us, is in violation of Pakistan's penal code, according to which Glazov is apparently "defiling the Holy Quran." For the moment, Twitter has not taken any action, but the warning shows the extent to which social media companies are willing to take sharia law into consideration.)

Facebook closed down Australian imam Mohammad Tawhidi’s Facebook page “after he made a post mocking the terrorist group Hamas, and speaking in sarcastic terms about ‘peaceful Palestinian protests.'”

Facebook permanently banned the entire European branch of the anti-migration youth movement Generation Identity, deleting the movement's pages for containing "extremist content."

Facebook censored a post critical of Islam’s treatment of gays as “hate speech” and banned the editor of the website behind the post, Politicalite, for 30 days.

Facebook regularly blocks posts from historian and author Robert Spencer’s Jihad Watch website. It happened, for example, both in September and in December.

Following the European Commission's code

These represent just a small selection of publicized incidents affecting high-profile Facebook users; lesser-known social media users are censored and banned all the time. In Germany, for instance, the lawyer, journalist and anti-censorship activist Joachim Nikolaus Steinhöfel runs a website documenting Facebook censorship in Germany alone. There appears to be an enormous amount to document: as of June 2017, Facebook was removing an average of 288,000 posts a month globally, according to its own statistics.

This should not come as a surprise: Facebook has, for example, signed the European Commission's Code of Conduct on Countering Illegal Hate Speech Online, which commits the social media giant to review and remove "illegal hate speech" within 24 hours. Facebook's Vice President of Public Policy Richard Allan wrote in 2017:

“Our current definition of hate speech is anything that directly attacks people based on what are known as their ‘protected characteristics’ — race, ethnicity, national origin, religious affiliation, sexual orientation, sex, gender, gender identity, or serious disability or disease.

“There is no universally accepted answer for when something crosses the line…

“Sometimes, it’s obvious that something is hate speech and should be removed – because it includes the direct incitement of violence against protected characteristics, or degrades or dehumanizes people. If we identify credible threats of imminent violence against anyone, including threats based on a protected characteristic, we also escalate that to local law enforcement.”

Facebook, however, appears to be "creatively" selective in how it chooses to follow its own rules. It removes, for example, "content that glorifies violence or celebrates the suffering or humiliation of others." Yet in Sweden, Ahmad Qadan posted status updates to his public Facebook profile asking for donations to ISIS, and the posts stayed online for two years. Facebook deleted them only after the Swedish Security Service (Säpo) approached it.

In November 2017, Qadan was sentenced to six months in prison after being found guilty of using Facebook to collect money to fund weapons purchases for the ISIS and Jabhat al-Nusra terror groups, and of posting messages calling for "serious acts of violence primarily or disproportionately aimed at civilians with the intention of creating terror amongst the public."

‘On occasion we make mistakes’

Facebook responded: "On occasion we make mistakes. When that happens, we correct them as soon as we are made aware."

In September, Canadian media revealed that a Toronto terrorist leader, Zakaria Amara, currently serving a life sentence for plotting Al Qaeda-inspired truck bombings in downtown Toronto, nevertheless had a Facebook page on which he posted prison photos and notes about what made him a terrorist. Only after Canadian media outlets contacted Facebook to ask about the account did Facebook delete it "for violating our community standards."

In France, a prisoner identified as Amir was accused in November of publishing ISIS propaganda from his prison cell using a smuggled phone. Facebook, apparently, took no notice.

Most recently, in Germany, a parliamentarian from the anti-immigration Alternative for Germany party (AfD), Frank Magnitz, was severely wounded in a violent attack, which his party called “an assassination attempt.” A German “Antifa” group, Antifa Kampfsausbildung, posted “thank you” in response to the assault. Facebook found the group’s support of violence against a member of parliament perfectly in accord with its “standards”.

Perhaps Facebook’s selectivity is due to the loyalties it has already openly displayed. In July 2017, Joel Kaplan, Facebook’s vice president of Public Policy, reportedly promised Pakistan’s Interior Minister Chaudhry Nisar Ali Khan that Facebook would “remove fake accounts and explicit, hateful and provocative material that incites violence and terrorism.”

“The spokesperson said that while talking to the Facebook vice president, Nisar said that the entire Muslim Ummah was greatly disturbed and has serious concerns over the misuse of social media platforms to propagate blasphemous content… Nisar said that Pakistan appreciates the understanding shown by the Facebook administration and the cooperation being extended on these issues.”

Intent on censorship now more than ever

Facebook CEO Mark Zuckerberg now appears to be more intent on censorship than ever. In a recent memo, written in mind-numbing bureaucratic obfuscatese, he described his plan to discourage "borderline content," a concept so elastic that it could encompass anything Zuckerberg and Facebook might ever want to censor. This is how Zuckerberg defines it:

“One of the biggest issues social networks face is that, when left unchecked, people will engage disproportionately with more sensationalist and provocative content… At scale it can undermine the quality of public discourse and lead to polarization. In our case, it can also degrade the quality of our services.

“Our research suggests that no matter where we draw the lines for what is allowed, as a piece of content gets close to that line, people will engage with it more on average…

“This is a basic incentive problem that we can address by penalizing borderline content so it gets less distribution and engagement. By making the distribution curve look like the graph below where distribution declines as content gets more sensational, people are disincentivized from creating provocative content that is as close to the line as possible.

“Interestingly, our research has found that this natural pattern of borderline content getting more engagement applies not only to news but to almost every category of content. For example, photos close to the line of nudity, like with revealing clothing or sexually suggestive positions, got more engagement on average before we changed the distribution curve to discourage this. The same goes for posts that don’t come within our definition of hate speech but are still offensive.

“This pattern may apply to the groups people join and pages they follow as well. This is especially important to address because while social networks in general expose people to more diverse views, and while groups in general encourage inclusion and acceptance, divisive groups and pages can still fuel polarization. To manage this, we need to apply these distribution changes not only to feed ranking but to all of our recommendation systems for things you should join.”
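
The mechanism Zuckerberg sketches here is, in engineering terms, a ranking penalty that grows as content approaches the policy line, so that predicted distribution falls off rather than rises near the boundary. Below is a minimal, purely illustrative sketch of that idea in Python; the function name, the 0-to-1 "borderline score" and the cubic penalty are assumptions made for illustration, not Facebook's actual ranking code.

```python
# A minimal, hypothetical sketch of the "distribution curve" idea described
# in the memo: the closer a post's score is to the policy line, the more its
# reach is penalized. All names and numbers here are illustrative assumptions,
# not Facebook's actual ranking system.

def penalized_reach(base_engagement: float, borderline_score: float) -> float:
    """Down-rank content as it approaches the policy line.

    base_engagement  -- predicted engagement with no penalty applied
    borderline_score -- 0.0 (clearly benign) to 1.0 (right at the line)
    """
    # The penalty grows sharply near the line, so near-violating posts
    # lose far more distribution than mid-range content does.
    clamped = min(max(borderline_score, 0.0), 1.0)
    penalty = clamped ** 3
    return base_engagement * (1.0 - penalty)


if __name__ == "__main__":
    print(penalized_reach(1000.0, 0.2))   # ~992 -- mild penalty
    print(penalized_reach(1000.0, 0.95))  # ~143 -- heavy penalty near the line
```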

It is curious that Zuckerberg would present his idea of disincentivizing "borderline content" as something new when, in fact, it has been standard practice at Facebook for at least several years. In November 2017, for example, "traffic to Jihad Watch from Facebook dropped suddenly by 90% and has never recovered," according to the website's creator, Robert Spencer.

Facebook is evidently still championing blasphemy laws.
