Extremists “are using fewer and fewer telephone lines and more and more Internet connections,” French Interior Minister Gérald Darmanin said at a recent press conference. This makes sense on two counts: everyone is using fewer phone lines and more internet connections, and internet conversations can be harder to trace than phone calls.
After the 2015 terrorist attacks, France launched a trial using algorithms to detect terrorist activity online. In 2017, the country passed a law allowing it to monitor instant messaging apps (although enforcement is sometimes a challenge). But these efforts have so far been largely experimental.
The new bill would give France more power to detect potential terrorists, with Darmanin noting that intelligence services will, for example, be able to spot someone who has accessed extremist websites multiple times.
This is generally in line with global trends. Social networks like Facebook and Instagram already use AI-driven algorithms to monitor hate speech and even drug trafficking.
“Our technology is able to detect content that includes images of drugs and describes intention to sell with information such as price, phone numbers or usernames for other social media accounts,” explained Kevin Martin, head of US public policy at Facebook.
It makes sense that counterterrorism follows suit. But the use of AI and big data, in general, has sparked intense debates between those who support the expansion of these technologies and those who see them as a threat to personal freedom.
“A new revolution has started in counterterrorism – the artificial intelligence revolution,” a recent study noted. “The traditional delicate balance between the effectiveness of the fight against terrorism and the liberal democratic values of society becomes even more crucial when the fight against terrorism involves AI and big data technology.”
Concerns are further heightened by fears that the algorithms just aren’t that good. Of course, the field of AI has made progress in recent years, but there is still concern that for every terrorist detected there could be 100,000 false positives.
Proponents of mass surveillance and this type of detection algorithm draw a distinction between data and metadata collection, arguing that the latter is a lesser privacy breach. But algorithms aren’t very good at separating the two, and their results tend to be worse when they work only with metadata.
“Given the paucity of data sets used for machine learning in the fight against terrorism and the privacy risks associated with mass data collection, policymakers and other relevant stakeholders should critically reassess the likelihood of success of algorithms and the data collection they depend on,” another recent study warned.
Yet even imperfect information can be useful, say the French authorities. The terrorist who killed a police employee in Rambouillet, south of Paris, a few days earlier had watched extremist videos just before carrying out his attack, a prosecutor said – and flagging this kind of activity can save lives.
For France, the stakes are also political. President Emmanuel Macron is soon up for re-election, and his rivals are particularly keen to attack him on security. Far-right candidate Marine Le Pen, Macron’s opponent in the last election, said the French were “surrounded by delinquency and crime”.
Since 2017, terrorist attacks in France have killed 25 people. Three quarters of these attacks were carried out by French nationals, which prompted Prime Minister Jean Castex to stress that immigration policy and counterterrorism policy must be kept separate because they are two different things.
It’s hard to say how good the French algorithms are. If they resemble those described in the research literature, the authorities will need to be careful that they don’t end up causing more problems than they solve – though the country’s desire to maximize security is not hard to understand. Ultimately, France, like every other country, must decide how much freedom it is willing to concede in the name of security – a decision in which computers and algorithms will play an increasingly central role.