Used responsibly, artificial intelligence (AI) technology will in future help cyber analysts fact-check and detect deepfake media to tackle disinformation, map the international networks that enable human, drugs and weapons trafficking, and crack down on child sexual abuse, according to a paper produced by GCHQ, the UK’s national signals intelligence and information assurance agency.
In Ethics of AI: Pioneering a new national security, GCHQ sets out why AI technology will inevitably find itself at the heart of its core mission to protect the UK’s national security, and how it can be used appropriately. It has been released ahead of an upcoming government review of security, defence, development and foreign policy.
GCHQ said that AI would be a critical issue for the UK’s security in the 21st century, and that while many are excited by the opportunities it presents, left unchecked it too readily reflects the inherent beliefs and assumptions – whether good, bad or neutral – of those who design it.
It said the UK needs increased dialogue and debate around the use and protection of AI so that it can be used in a way that maximises the positives, while minimising the risk to individual privacy.
The paper outlines how GCHQ will ensure it uses AI fairly and transparently, applying existing tests of necessity and proportionality. Its commitments include establishing an AI ethical code of practice, ensuring diversity of thought and experience in the technology’s development and governance, and protecting privacy while ensuring systematic fairness.
“AI, like so many technologies, offers great promise for society, prosperity and security. Its impact on GCHQ is equally profound. AI is already invaluable in many of our missions as we protect the country, its people and way of life,” said GCHQ director Jeremy Fleming.
“It allows our brilliant analysts to manage vast volumes of complex data and improves decision-making in the face of increasingly complex threats – from protecting children to improving cyber security.
“While this unprecedented technological evolution comes with great opportunity, it also poses significant ethical challenges for all of society, including GCHQ.
“Today, we are setting out our plan and commitment to the ethical use of AI in our mission. I hope it will inspire further thinking at home and abroad about how we can ensure fairness, transparency and accountability to underpin the use of AI.”
The paper also details how GCHQ is supporting the wider AI industry in the UK, including setting up an industry AI Lab, mentoring and supporting startup clusters in Cheltenham, London and Manchester, and backing the creation of the Alan Turing Institute.
Panintelligence chief technology officer Ken Miller said the paper’s publication was a crucial step in the development of AI technology, and one that all stakeholders could benefit from following – not just in matters of cyber crime and national security, but in other applications besides.
“As a society, we are still somewhat undecided whether AI is a friend or foe, but ultimately it is just a tool that can be implemented however we wish,” he said.
“Make no mistake, AI is here and it touches many aspects of your life already, and most likely has made decisions about you today.
“It is essential to build trust in the technology, and its implementation needs to be transparent so that everyone understands how it works, when it is used and how it makes decisions. This will empower people to challenge AI decisions if they feel it necessary and go some way to demystifying any stigma.”
Miller conceded that while it would likely take time for the general public to be completely comfortable with AI-based decision-making, accountability and strict regulation would help speed that process along.
“We live in a world that is unfortunately full of human bias, but there is a real opportunity to remove these biases now. However, this is only possible if we train the models effectively, striving to use data without limitations,” he said.
“We should shine a light on human behaviour when it displays prejudice, and seek to change opinions through discussion and education – we must do the same as we teach machines to ‘think’ for us.”