Target Identified: Combating Extremism with Automatic Deletion

7th September 2016 · Crista Renouard · News

In a recent article in tech magazine The Verge, reporter Amar Toor writes that social media and search engine platforms are under pressure from governments to stop the spread of extremist propaganda. The idea is to use image-identification technology to automatically remove images tied to extremist propaganda. The new algorithm, eGlyph, “uses so-called ‘hashing’ technology to assign a unique fingerprint to images, videos, and audio that have already been flagged as extremist, and automatically removes any versions that have been uploaded to a social network,” says Toor.

Though the application is new, the concept is not; eGlyph is modeled on PhotoDNA, an algorithm used to track and identify child pornography as it circulates the web. The idea behind using eGlyph to combat terrorism is to stop the transmission of material at the source. Ideally, eGlyph would remove the possibility of viral terrorist propaganda, thereby discouraging posts of extremist content in the first place.
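The fingerprint-and-blocklist idea Toor describes can be sketched in a few lines. This is a minimal illustration, not eGlyph's actual implementation: the `UploadFilter` class and its methods are hypothetical names, and a cryptographic hash is used as a stand-in fingerprint (it matches only byte-identical copies, whereas eGlyph and PhotoDNA use robust perceptual hashes designed to survive re-encoding and minor edits).

```python
import hashlib


def fingerprint(content: bytes) -> str:
    # Stand-in fingerprint: a cryptographic hash only matches exact copies.
    # Real systems like PhotoDNA compute perceptual hashes that also match
    # resized, re-encoded, or lightly altered versions of the same media.
    return hashlib.sha256(content).hexdigest()


class UploadFilter:
    """Hypothetical upload gate backed by a blocklist of flagged fingerprints."""

    def __init__(self) -> None:
        self.flagged: set[str] = set()

    def flag(self, content: bytes) -> None:
        # A human reviewer has already tagged this item as extremist;
        # store its fingerprint so future uploads of it are rejected.
        self.flagged.add(fingerprint(content))

    def allow_upload(self, content: bytes) -> bool:
        # Reject any upload whose fingerprint matches flagged material.
        return fingerprint(content) not in self.flagged


# Example: once an item is flagged, identical re-uploads are blocked
gate = UploadFilter()
gate.flag(b"previously flagged propaganda video bytes")
print(gate.allow_upload(b"previously flagged propaganda video bytes"))  # False
print(gate.allow_upload(b"unrelated home video bytes"))                 # True
```

Note how the sketch also exposes the limitation discussed below: nothing enters the blocklist until a human has reviewed and flagged it first.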

Deciding What Should Be Deleted

Determining what counts as terrorist propaganda, however, is where it gets tricky. It’s easy to garner support for blocking viral videos of beheadings, but automatically deleting more subtle material may be harder to justify. Additionally, laws vary widely from place to place: the US, with its First Amendment protections, is likely to have a broader swath of protected speech than many European nations. There is also the risk of blocking legitimate news content from outlets that deem segments of recruiting videos newsworthy.

While removing humans from the equation may be desirable in terms of expediency, a measure of human judgment remains inescapably necessary for identifying terrorist propaganda. Before any image can be matched on re-upload, there has to be a point at which it is first collected and tagged. Additionally, even the most stringently imposed system of propaganda elimination would still be subject to the limits of its own definitions; it would never be able to address threats in their earliest stages. That is where Smoothwall changes the game.

Getting Ahead of the Problem

To identify that kind of early threat, a softer touch is required. These problems are grounded in patterns of behavior, and recognizing those patterns requires human insight. This softer approach is called safeguarding, and it strikes a balance between vigilance and discernment without unnecessary interference.

Safeguarding has a couple of distinct advantages over a purely automated approach. First, it allows a problem to be viewed from its earliest stages, before any kind of intervention can be justified. This early awareness provides a deeper look into the context of the behavior. With that context, those in charge of oversight can start to understand how these problems unfold, supplying a key piece of information that is distinctly missing from our present understanding of terrorist recruitment and radicalization.

Using a sophisticated system of alerts, the Safeguarding reporting suite by Smoothwall provides that kind of soft-touch oversight. When used in tandem with the stronger measures imposed by social media sites and search engines, and with the content-aware capabilities of our web filtering, it’s hard to think of a more well-rounded solution for keeping people secure.
