According to an exclusive report from Reuters, Facebook and Google are looking at deploying special software designed to automatically block the videos of “extremists” from their platforms. Some worry that the blatant censorship of such videos could create problems; others are hopeful that the big companies will adopt their software.
According to Reuters,
Some of the web’s biggest destinations for watching videos have quietly started using automation to remove extremist content from their sites, according to two people familiar with the process.
The move is a major step forward for internet companies that are eager to eradicate violent propaganda from their sites and are under pressure to do so from governments around the world as attacks by extremists proliferate, from Syria to Belgium and the United States.
YouTube and Facebook are among the sites deploying systems to block or rapidly take down Islamic State videos and other similar material, the sources said…
And these aren’t even the only ones: other blocking programs have been proposed.
Blocking Content – open censorship
A group called the Counter Extremism Project (founded by Frances Townsend and Mark Wallace, both veterans of the George W. Bush administration) has proposed software that would block extremist content for the big companies.
Recruiting by extremist groups like ISIS has led to the creation of this automated method of control. Finding ways to stop that problem is a difficult balancing act.
But there’s another issue:
Who’s an “extremist”? It depends on who you ask. Some in our own government think people like you and me are “extremists” because we talk about “tyranny” and “guns.”
It sounds well and good for targeting groups like ISIS, but what about patriots? That question opens a tinderbox of issues. Will a video of target shooting be tagged as extremist? How would the software distinguish training videos for militias from those for the regular military?
Nearly everyone on social media platforms has had a video yanked for copyright issues because of “hashes,” digital fingerprints computed from the videos that allow matching copies to be identified and removed.
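The hash-matching idea above can be illustrated with a minimal sketch. This is a hypothetical simplification, not the actual system any platform uses: real deployments rely on perceptual hashes that survive re-encoding and cropping, whereas this example uses a plain SHA-256 digest of the file bytes, which only catches byte-identical copies.

```python
import hashlib

def fingerprint(data: bytes) -> str:
    """Compute a digital fingerprint (hash) of a video's raw bytes."""
    return hashlib.sha256(data).hexdigest()

# Hypothetical blocklist of fingerprints of previously flagged videos.
blocklist = {fingerprint(b"known flagged clip")}

def should_remove(uploaded: bytes) -> bool:
    """Flag an upload whose fingerprint matches the blocklist."""
    return fingerprint(uploaded) in blocklist

print(should_remove(b"known flagged clip"))      # exact copy: matched
print(should_remove(b"unrelated home video"))    # no match
```

Note the weakness this sketch exposes: change a single byte (re-encode, trim a frame) and an exact hash no longer matches, which is why production systems use fuzzier perceptual fingerprints.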
But how companies can balance removing videos tagged as “extremist” with honoring the First Amendment is going to be tricky. Until now they’ve relied on user reports to help them police content. Now it may be up to an automated Big Brother.
“It’s a little bit different than copyright or child pornography, where things are very clearly illegal,” said Seamus Hughes, deputy director of George Washington University’s Program on Extremism.
“There’s no upside in these companies talking about it. Why would they brag about censorship?” said Matthew Prince, chief executive of content distribution company CloudFlare.
(Editor’s note: This is all enabled, thanks to Section 230 of the Communications Decency Act, which Adina Kutnicki and I discuss in our new book, “Banned: How Facebook enables militant Islamic jihad,” available here and here.)