
Halting social media propaganda has one major roadblock

by Thomas Russell

Facebook, Google, Microsoft, and Twitter have formed the Global Internet Forum to Counter Terrorism, using their combined resources in an effort to combat the spread of extremist content online. A joint announcement outlined their purpose:
“The spread of terrorism and violent extremism is a pressing global problem and a critical challenge for us all. We take these issues very seriously, and each of our companies have developed policies and removal practices that enable us to take a hard line against terrorist or violent extremist content on our hosted consumer services. We believe that by working together, sharing the best technological and operational elements of our individual efforts, we can have a greater impact on the threat of terrorist content online.”

Sounds like a noble effort from these tech industry giants, but it may not be an exercise in altruism as much as a good PR move. Extremist groups and individuals have been using these companies' platforms for years to distribute their propaganda. As The Guardian reported earlier this year, Google alone has lost millions in advertising dollars because of extremist content on YouTube.

I spoke of this in a past article: filtering extremist content from the web is no easy task. And even when you can filter it, what about our freedom of speech?

The technology these companies have created, and are now collaborating on, is impressive, all with the purpose of detecting terrorist material. Google, for instance, brings Jigsaw to the table, using words alone to identify everything from what type of soap people prefer to potential ISIS recruits.
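To make the idea of "using words alone" concrete, here is a toy sketch of keyword-based flagging in Python. It is purely illustrative and says nothing about how Jigsaw actually works; real systems rely on machine-learned models, and the watch-list, threshold, and function names below are invented for the example.

```python
# Toy illustration of text-based flagging. This is NOT Jigsaw's method:
# production systems use learned models, not hand-written keyword lists.

FLAGGED_TERMS = {"recruit", "martyr", "join us"}  # hypothetical watch-list


def suspicion_score(text: str) -> float:
    """Fraction of watch-list terms that appear in the text (0.0 to 1.0)."""
    lowered = text.lower()
    hits = sum(1 for term in FLAGGED_TERMS if term in lowered)
    return hits / len(FLAGGED_TERMS)


def needs_human_review(text: str, threshold: float = 0.3) -> bool:
    # Route borderline content to a human reviewer rather than auto-removing it.
    return suspicion_score(text) >= threshold


print(needs_human_review("Join us and recruit your friends"))      # True
print(needs_human_review("My favorite soap smells like lavender"))  # False
```

Even a toy like this shows why a review threshold matters: set it too low and ordinary conversation gets flagged, too high and genuine recruitment slips through.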

The announcement notes a database of hashes several times. Hashes are unique digital fingerprints that can be used to index and retrieve items in a database very quickly, for instance, to identify terrorist content and remove it from social media platforms. If one of the companies removes violent or extremist content, a unique hash is assigned and logged into the shared database; the more hashes there are, the easier it will be to flag and remove the same content from the other platforms.
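As a rough illustration of how such a shared database might work, here is a minimal sketch in Python. It uses an exact cryptographic hash (SHA-256) as the fingerprint purely for simplicity; the companies' actual database relies on more robust, proprietary hashing so that re-encoded copies still match, and the class and method names here are hypothetical.

```python
import hashlib


class SharedHashDatabase:
    """Illustrative store of fingerprints for content one platform has removed."""

    def __init__(self):
        self._hashes = set()

    @staticmethod
    def fingerprint(content: bytes) -> str:
        # Simplifying assumption: a plain SHA-256 digest as the "unique fingerprint".
        return hashlib.sha256(content).hexdigest()

    def log_removed(self, content: bytes) -> str:
        """Called when a platform removes extremist content; logs its hash."""
        digest = self.fingerprint(content)
        self._hashes.add(digest)
        return digest

    def is_flagged(self, content: bytes) -> bool:
        """Lets another platform check an upload against previously removed items."""
        return self.fingerprint(content) in self._hashes


# Example: platform A removes a video, platform B later sees the same file.
db = SharedHashDatabase()
db.log_removed(b"...bytes of a removed propaganda video...")
print(db.is_flagged(b"...bytes of a removed propaganda video..."))  # True
print(db.is_flagged(b"...bytes of an unrelated video..."))          # False
```

The appeal of this design is that the companies only have to exchange fingerprints, not the offending material itself, and a lookup against the shared set is fast no matter how large the database grows.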

Even with all of this great technology and good intentions, it's not far-fetched to ask: what if humans or algorithms filter out content that is perfectly legitimate, such as a documentary on American gang violence or on ISIS recruitment? Censorship is a bad word in our society. With billions of minutes of video streaming in, it is possible that we'll end up, as they say, throwing the baby out with the bathwater.

Attempts by tech industry giants to curb extremist propaganda look great on paper, but some content will always get through. On top of that, freedom of speech is something that we as a nation hold very dear. We'd rather protect that freedom, for all speech, than live in a society without it.

Tech companies waging war on extremist propaganda by flagging their own users' content will soon find themselves walking a very thin line, if there's one at all. Germany faces the same challenge, trying to balance the fight against hate speech with the protection of free speech: a government task force evaluates whether flagged content is inappropriate or acceptable expression. It's an approach these tech giants may want to keep an eye on.

Thomas Russell is a high school information technology teacher and retired Army Signal Corps soldier. He is the founder of SEMtech (Student Engagement and Mentoring in Technology) and an Advisory Board Member of Educating Children of Color. His hobbies include writing, photography and hiking. Contact Thomas via Russell's Room on Facebook or by email at thruss09@gmail.com, and view his photography at thomasholtrussell.zenfolio.com.

