It was announced this week that Internet giants Microsoft, Twitter, YouTube and Facebook have teamed up to create an information sharing project that aims to tackle and remove extreme terrorist content from their platforms.
We hope this collaboration will lead to greater efficiency as we continue to enforce our policies to help curb the pressing global issue of terrorist content online.
In its Newsroom release, Facebook outlined how this shared database will work: unique digital fingerprints called “hashes” will help to identify and remove any content the companies consider terrorist or extremist. The database is similar to those Facebook already uses to deal with child abuse imagery or copyright-protected files. The difference is that terrorist content will not be removed automatically; it will first be identified and reviewed by each organisation.
Once material is spotted on one platform, the others will be notified using its hash. The content will then be reviewed by Microsoft, Twitter, YouTube and Facebook, and if it violates a company’s own policies, that company will remove it.
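The companies have not published their implementation, but the workflow described above can be sketched in a few lines. This is a simplified illustration using a cryptographic hash as the “digital fingerprint”; production systems typically rely on robust perceptual hashes (such as Microsoft’s PhotoDNA) that still match content after re-encoding or cropping, which plain SHA-256 cannot do. The function and variable names here are illustrative, not taken from any real system.

```python
import hashlib

# Shared set of hashes flagged by any participating platform.
shared_database = set()

def fingerprint(content: bytes) -> str:
    # Stand-in for a "digital fingerprint". SHA-256 only matches
    # byte-identical files; real systems use perceptual hashing.
    return hashlib.sha256(content).hexdigest()

def flag_content(content: bytes) -> str:
    # A platform flags extremist material: its hash is added to
    # the shared database so the other platforms are notified.
    h = fingerprint(content)
    shared_database.add(h)
    return h

def matches_database(content: bytes) -> bool:
    # A match does NOT trigger automatic removal: each company
    # first reviews the content against its own policies.
    return fingerprint(content) in shared_database

# One platform flags a file; another can now detect the same file.
flag_content(b"flagged-recruitment-video-bytes")
print(matches_database(b"flagged-recruitment-video-bytes"))  # True
print(matches_database(b"unrelated-upload-bytes"))           # False
```

The key design point the sketch preserves is that only hashes, not the content itself, are shared between companies, and a hash match merely queues the material for human review under each platform’s own rules.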
There is no place for content that promotes terrorism
The platforms each have different policies on terrorist content, so the initiative will apply hashes to any extreme images, recruitment videos and other material that violates those policies.
The database will be updated constantly to keep pace with newly released content, in the hope of reducing its spread. In the future, the Internet giants involved in the collaboration would like to make the database available to other large companies.
Some may argue that these organisations have no place deciding which content and news is ‘right’ or ‘wrong’. However, given the influence and user base that social media sites like Facebook and Twitter have over the global sharing of content, it is reasonable for these organisations to take responsibility for what is shared and seen on their platforms.