British spies will have to use artificial intelligence (AI) to counter a series of threats, according to an intelligence report.
Adversaries are likely to use the technology for attacks in cyberspace and on the political system, and AI will be needed to detect and stop them.
But AI is unlikely to predict who might be on the brink of serious crimes, such as terrorism – and will not replace human judgment, the report said.
The report is based on unprecedented access to British intelligence.
The Royal United Services Institute (Rusi) think tank also argues that the use of AI could give rise to new privacy and human rights considerations, which will require further guidance.
Adversaries of the United Kingdom “will no doubt seek to use AI to attack the United Kingdom,” Rusi said in the report – and this may include not only states, but also criminals.
Fighting fire with fire
Future threats could include using AI to create deep fakes – where a computer learns to generate convincing fake video of a real person – to manipulate public opinion and elections.
It could also be used to mutate malware for cyber attacks, making it harder for normal systems to detect – or even to repurpose and control drones to carry out attacks.
In these cases, AI will be needed to fight AI, the report said.
“Adopting AI is not only important in helping intelligence agencies deal with the technical challenge of information overload. It is very likely that malicious actors will use AI to attack the UK in many ways, and the intelligence community will need to develop new AI-based defense measures,” said Alexander Babuta, one of the authors.
The independent report was commissioned by the British intelligence agency GCHQ, and its authors were given access to much of the country’s intelligence community.
Britain’s three intelligence agencies have made the use of technology and data a priority for the future – and new MI5 chief Ken McCallum, who takes over this week, has said one of his priorities would be to make better use of technology, including machine learning.
However, the authors believe that AI will have only “limited value” in “predictive intelligence” in areas such as the fight against terrorism.
The oft-cited fictional reference is the film Minority Report, in which technology is used to identify people who are about to commit a crime before they have committed it.
But the report argues that this is less likely to be viable in real national security situations.
Acts such as terrorism are too rare to provide historical data sets large enough to search for patterns – they occur much less often than other criminal acts, such as burglary.
Even within that small dataset, the backgrounds and ideologies of the perpetrators vary so widely that it is difficult to construct a model of a terrorist profile. There are too many variables to make prediction straightforward, the report argues, especially as new events can differ drastically from previous ones.
Any type of profiling could also be discriminatory and cause new human rights concerns.
In practice, in areas like the fight against terrorism, the report argues that “augmented” intelligence – rather than artificial intelligence replacing people – will be the norm: technology helps human analysts sift through and prioritize increasingly large amounts of data, leaving humans to make their own judgments.
It will be essential to ensure that human operators remain accountable for decisions and that AI does not act as a “black box” whose basis for decisions people cannot understand, the report says.
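The “augmented intelligence” idea described above – software ranking material so that humans review the most important items and make the final call – can be illustrated with a minimal sketch. Everything here is hypothetical: the scoring heuristic, field names and `triage` helper are invented for illustration, and a real system would use a trained model rather than hand-written rules.

```python
# Hypothetical sketch of machine-assisted triage: software scores and
# ranks incoming items; human analysts review only the top of the queue
# and retain the final judgment.

def risk_score(item: dict) -> float:
    # Placeholder heuristic -- in practice this would be a trained
    # machine-learning model, not a hand-written rule.
    score = 0.0
    if item["source_flagged"]:
        score += 0.5
    score += min(item["anomaly_signals"], 5) * 0.1
    return score

def triage(items: list[dict], capacity: int) -> list[dict]:
    # Sort by descending risk and surface only what analysts have
    # capacity to review: the machine prioritizes, humans decide.
    ranked = sorted(items, key=risk_score, reverse=True)
    return ranked[:capacity]

items = [
    {"id": "a", "source_flagged": False, "anomaly_signals": 1},
    {"id": "b", "source_flagged": True,  "anomaly_signals": 4},
    {"id": "c", "source_flagged": False, "anomaly_signals": 0},
]
for item in triage(items, capacity=2):
    print(item["id"], round(risk_score(item), 2))
```

The design point the report makes maps onto the `capacity` parameter: the system never acts on its scores itself, it only shortens the queue a human must work through.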
Step by step
The authors are also wary of some of the hype around AI, and of claims that it will soon be transformative.
Instead, they expect a gradual augmentation of existing processes rather than the arrival of new futuristic capabilities.
They believe the UK is in a strong global position to take the lead, with a concentration of capability within GCHQ – and, more broadly, in the private sector and in organizations like the Alan Turing Institute and the Centre for Data Ethics and Innovation.
This could allow the UK to position itself at the forefront of AI use, but within a clear ethical framework, they say.
The deployment of AI by intelligence agencies may require new guidance to ensure safeguards are in place and that any intrusion into privacy is necessary and proportionate, the report said.
Analysis by Gordon:
One of the thorny legal and ethical questions for spy agencies, especially since the revelations of Edward Snowden, is how justified it is to collect large amounts of data from ordinary people in order to filter and analyze it to find those who may be involved in terrorism or other criminal activity.
And there is the related question of how far privacy is breached when data is collected and analyzed by a machine versus when a human sees it.
Privacy advocates fear that artificial intelligence will require collecting and analyzing much larger amounts of data from ordinary people in order to understand and search for patterns, creating a new level of intrusion. The report’s authors believe new rules will be necessary.
But overall, they say, it will be important not to become preoccupied with the potential drawbacks of the technology.
“There is a risk of stifling innovation if we focus too much on hypothetical worst-case results and speculation about a future AI-driven dystopian surveillance network,” said Babuta.
“Legitimate ethical concerns will be overshadowed unless we focus on the likely and realistic uses of AI in the short and medium term.”