On Monday, Reuters reported that web giants Facebook, YouTube, Twitter, Google and Microsoft are set to remove what they consider “extremist” content by establishing a shared database of “hashes,” which Reuters describes as “unique digital fingerprints they automatically assign to videos or photos,” of extremist content they have removed from their websites, enabling their peers to identify the same content on their own platforms.
According to Reuters:
Tech companies have long resisted outside intervention in how their sites should be policed, but have come under increasing pressure from Western governments to do more to remove extremist content following a wave of militant attacks.
YouTube and Facebook have begun to use hashes to automatically remove extremist content.
But many providers have relied until now mainly on users to flag content that violates terms of service. Flagged material is then individually reviewed by human editors who delete postings found to be in violation.
Twitter suspended 235,000 accounts between February and August this year and has expanded the teams reviewing reports of extremist content.
“We hope this collaboration will lead to greater efficiency as we continue to enforce our policies to help curb the pressing global issue of terrorist content online,” the companies said.
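The hash-matching approach Reuters describes can be sketched in a few lines. Note the caveat: real content-matching systems reportedly use perceptual fingerprints (such as Microsoft's PhotoDNA) that survive re-encoding and resizing; the cryptographic SHA-256 digest below only matches byte-identical files, and every name in this sketch is illustrative rather than anything the companies have published.

```python
import hashlib

def fingerprint(content: bytes) -> str:
    """Return a hex digest serving as the content's 'unique digital fingerprint'."""
    return hashlib.sha256(content).hexdigest()

# The shared database: fingerprints of content one platform has already removed.
shared_database = set()

def report_removed(content: bytes) -> None:
    """A platform adds the fingerprint of removed content to the shared database."""
    shared_database.add(fingerprint(content))

def is_flagged(content: bytes) -> bool:
    """Another platform checks an upload against the shared database."""
    return fingerprint(content) in shared_database

# Example: platform A removes a video; platform B can then match the same file.
video = b"...raw video bytes..."
report_removed(video)
print(is_flagged(video))         # True: a byte-identical copy is matched
print(is_flagged(video + b"x"))  # False: any byte change defeats a cryptographic hash
```

The last line illustrates why perceptual hashing matters in practice: a cryptographic digest changes completely if even one byte differs, so a simple re-encode would evade it.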
But what, exactly, constitutes “extremist” content?
Consider: As we recently reported, Facebook said a picture of an eagle superimposed on a U.S. flag violated the site’s community standards and issued a 30-day ban. Will the U.S. flag be labeled “extremist”?
On Monday, Tami Jackson reported that YouTube removed a video of a Muslim who told his story of abandoning his hatred of Jews. According to YouTube, the video constituted hate speech.
“And what made this even more ludicrous?” Jackson asked. “Concurrently on Fox News’ Happening Now with Jon Scott and Jenna Lee, Judge Andrew Napolitano was discussing why the hate-filled videos of radical Muslim cleric Anwar al-Awlaki were still on YouTube.”
Fortunately, she added in an update, the issue was resolved later in the day.
“Kudos to YouTube for doing the right thing,” she added.
But the “right thing” would have been for YouTube to not censor the video at all.
As I have said time and again, this censorship takes place because of Section 230 of the 20-year-old Communications Decency Act, which a federal judge said in 2014 lets social media sites censor even constitutionally protected speech. Sadly, far too many have been conditioned to believe that sites do this simply because they’re private companies and can do whatever they want.
Yes, they’re private companies, but that’s not why they do what they do. The fault lies with Congress and only Congress can fix this.
Incidents like this, by the way, are the reason Adina Kutnicki, an investigative journalist based in Israel, and I wrote “Banned: How Facebook enables militant Islamic jihad.” The book, endorsed by Pamela Geller, is available at WND and Amazon.com.
Meanwhile, Reuters said this database will be online in 2017 and that more companies will be involved.
The real question at this time is: Will the contents of this database be made known to the public?
I’m not holding my breath…
- Facebook: U.S. flag violates community standards, earns user 30-day ban
- YouTube under fire for ‘Heroes’ program that lets trolls mass-delete videos they disagree with
- Lawsuit: Federal government lets Facebook, Twitter, YouTube censor ‘anti-Islam’ speech
- Jamie Glazov Productions Features: ‘BANNED: How Facebook Enables Militant Islamic Jihad’
- Twitter suspends “alt-right” accounts because they helped elect Trump
And if you’re as concerned about Facebook censorship as we are, go here and order this new book: