NCSC Warns AI Will Fuel Surge in Cyber Threats – by Kevin Z and Henry L

23 May 2024

A report from the UK’s National Cyber Security Centre (NCSC) cautions that artificial intelligence (AI) will significantly escalate cyber threats, particularly ransomware attacks, over the next two years. The report highlights that AI will bolster threat actors’ abilities, leading to more convincing phishing attacks that deceive individuals into divulging sensitive information or clicking on malicious links.

Ransomware attacks, which the report singles out, involve hackers using malicious software to encrypt victims’ files or entire systems and then demanding a ransom payment in exchange for the decryption key.

On phishing specifically, the report states: “Generative AI can already create convincing interactions like documents that fool people, free of the translation and grammatical errors common in phishing emails.”

The NCSC identifies generative AI, which can produce convincing interactions and documents free of the usual phishing red flags, as a key driver of the escalating threat landscape. It also highlights challenges to cyber resilience: generative AI and large language models make it harder to verify whether emails and password reset requests are genuine, and the shrinking window between security updates being released and vulnerabilities being exploited leaves network managers less time to patch.
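To illustrate why this matters for defenders, the toy filter below scores emails on the kind of spelling and phrasing red flags the report mentions. The phrase list and scoring are invented for illustration and are not drawn from the NCSC report or any real product; the point is that fluent, AI-generated text sails past this sort of check.

```python
# Illustrative sketch only: a toy rule-based phishing filter of the kind that
# relies on spelling/grammar red flags. The phrase list is hypothetical.
SUSPICIOUS_PHRASES = [
    "verify you account",      # typo-style red flag
    "dear costumer",           # misspelling
    "click here immediatly",   # misspelling
]

def red_flag_score(email_text: str) -> int:
    """Count crude spelling/grammar red flags in an email body."""
    text = email_text.lower()
    return sum(1 for phrase in SUSPICIOUS_PHRASES if phrase in text)

clumsy_phish = "Dear costumer, please verify you account or it will be closed."
fluent_phish = ("Dear customer, we noticed an unusual sign-in attempt. "
                "Please confirm your details using the secure link below.")

print(red_flag_score(clumsy_phish))  # > 0: caught by the typo heuristic
print(red_flag_score(fluent_phish))  # 0: fluent, AI-style text slips through
```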

James Babbage, director general for threats at the National Crime Agency, commented, “AI services lower barriers to entry, increasing the number of cyber criminals, and will boost their capability by improving the scale, speed, and effectiveness of existing attack methods.”

However, the NCSC report also points out that AI could enhance cybersecurity through improved attack detection and system design. The report calls for further research on how advancements in defensive AI solutions can mitigate evolving threats.
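As a rough sketch of what such defensive tooling can look like, the example below trains a small text classifier to flag likely phishing emails. The sample messages, labels, and model choice (scikit-learn’s TfidfVectorizer with logistic regression) are illustrative assumptions, not anything prescribed by the NCSC; a real system would need large labelled corpora and careful evaluation.

```python
# Minimal sketch of AI-assisted attack detection: a text classifier that
# flags likely phishing emails. Toy data and labels are invented for
# illustration only.
from sklearn.feature_extraction.text import TfidfVectorizer
from sklearn.linear_model import LogisticRegression
from sklearn.pipeline import make_pipeline

emails = [
    "Your invoice for May is attached, let me know if anything looks off.",
    "Reminder: team stand-up moved to 10am tomorrow.",
    "Your account has been suspended, confirm your password at the link below.",
    "Unusual sign-in detected, verify your identity immediately to avoid lockout.",
]
labels = [0, 0, 1, 1]  # 0 = legitimate, 1 = phishing (hypothetical labels)

# TF-IDF features feeding a logistic regression classifier
model = make_pipeline(TfidfVectorizer(), LogisticRegression())
model.fit(emails, labels)

new_email = ["Security alert: validate your credentials now or lose access."]
print(model.predict(new_email))  # predicted label: 1 = phishing, 0 = legitimate
```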

Currently, advanced AI-powered cyber operations are primarily feasible for highly capable state actors due to the need for quality data, skills, tools, and time. The NCSC warns that these barriers to entry will progressively diminish as capable groups monetize and sell AI-enabled hacking tools. Lindy Cameron, CEO of the NCSC, emphasized, “We must ensure that we both harness AI technology for its vast potential and manage its risks – including its implications on the cyber threat.”

To address these emerging high-tech threats, the UK government has allocated £2.6 billion under its Cyber Security Strategy 2022 to strengthen the country’s resilience. AI is poised to significantly alter the cyber risk landscape, making continuous investment in defensive capabilities and research crucial to counteract its potential to empower attackers.

The UK government has also introduced broader measures aimed at curbing malicious online activity, such as the National Cyber Security Strategy, which runs from 2022 to 2030. While not specific to AI-enabled phishing, the strategy emphasizes the importance of defending against evolving cyber threats, including those powered by AI technologies, and promotes the development of advanced cybersecurity measures and research into defensive AI capabilities.

Another measure that may limit AI-enabled scam websites is the Online Safety Bill, now the Online Safety Act 2023. It addresses harmful online content and includes provisions that may affect the use of AI to generate and spread malicious content, such as phishing emails.