THREE jihadist attacks in Britain in as many months have led to a flood of suggestions about how to fight terrorism, from more police and harsher jail sentences to new legal powers.
One idea has gained momentum in both Europe and America, however—that internet companies are doing the jihadists’ work for them. Technology giants such as Google and Facebook are accused of turning a blind eye to violent online propaganda, and other platforms are charged with allowing terrorists to communicate with each other out of reach of the intelligence services.
It is only the latest such charge. The technology companies have also been condemned for allowing the spread of fake news and for harboring bullies, bigots and trolls in the pursuit of profit. In the past they were accused of enabling people to evade copyright and of hosting child pornography.
In all these areas politicians are demanding that the technology giants take more responsibility for what appears on their networks. Within limits, they are right.
The days when the technology companies needed nurturing are long gone, however. In the past decade they have become the world’s most valuable companies. As their services have reached deeper into every aspect of everyday life, online activity has gained more potential to cause offline harm. For every Spotify there is a WannaCry.
Technology companies complain that this combination of novelty and commercial success makes them a convenient target for politicians, some of whom seem to regard regulating the internet as a shortcut to solving complex social problems such as hate speech. Eager to protect their special status, technology companies have emphasized that online recruitment is only part of the terrorist threat. Besides, they say, they are platforms, not publishers, and cannot possibly monitor everything.
The companies can act when they want to, though. Before Edward Snowden exposed them in a huge leak in 2013, they quietly helped American and British intelligence monitor jihadists. Whenever advertisers withdraw business after their brands end up alongside pornographic, violent or extremist material, they respond remarkably quickly.
As with car accidents or cyber-attacks, perfect security is unattainable. Nonetheless, an approach based on “defense in depth,” combining technology, policy, education and human oversight, can minimize risk and harm.
When self-interest is not enough, governments can prod the companies to tighten up—as German lawmakers have, threatening huge fines. Under a voluntary agreement with European regulators, the big companies have set a target of reviewing at least 50% of content flagged by users as hateful or xenophobic within a day, and removing it when appropriate. The latest figures show that Facebook reviewed 58% of flagged items within a day, up from 50% in December. For Twitter the figure was 39%, up from 24%. YouTube’s score fell from 61% to 43%, however.
The strongest measure is new laws. In 2002, for example, Britain made internet service providers liable for child pornography if they did not take it down “expeditiously.” The ISPs used a charity to compile a list of blocked URLs that it updated twice daily. The charity works closely with law-enforcement agencies in Britain and abroad. Similarly, American lawmakers have clamped down on copyright infringement.
As in the offline world, legislators must strike a balance between security and liberty. Especially after attacks, when governments want to be seen to act, they may be tempted to impose blanket bans on speech. Instead they should set out to be clear and narrow about what is illegal—which will also help platforms deal with posts quickly and consistently. Even then, the threshold between free speech and incitement will be hard to define. The aim should be to translate offline legal norms into the cyber domain.
Before legislators rush in, they also need to think about unintended consequences. If internet companies are threatened with fines, they may simply remove all flagged content, to be safe. Regulation that requires lots of staffers to take down offensive posts will most hurt small startups, which can least afford it. Laws mandating cryptographic “back doors” in popular messaging apps would weaken security for innocent users. Bad actors would switch to unregulated alternatives in countries that are unlikely to help Western governments. They would thus become harder for the intelligence services to watch.
© 2017 Economist Newspaper Ltd., London (June 10). All rights reserved. Reprinted with permission.