NCSC Warns AI Already Being Used By Ransomware Hackers – Silicon UK


The UK’s National Cyber Security Centre (NCSC), part of GCHQ, has issued a warning about global ransomware threat levels, in a world where artificial intelligence (AI) systems are becoming increasingly pervasive.

NCSC on Wednesday announced that “artificial intelligence will almost certainly increase the volume and impact of cyber attacks in the next two years.”

The British cyber guardian had last November warned that the threat to the UK’s most critical infrastructure was ‘enduring and significant’, amid a rise of state-aligned groups, an increase in aggressive cyber activity, and ongoing geopolitical challenges.

The NCSC’s headquarters in Victoria. NCSC

AI impact

But now it is issuing a warning after focusing on how AI will impact the efficacy of cyber operations and the implications for the cyber threat level.

The NCSC, in a chilling assessment of the near-term impact of AI, concluded that AI is already being used in malicious cyber activity and will almost certainly increase the volume and impact of cyber attacks – including ransomware – in the near term.

NCSC’s ‘near-term impact of AI on the cyber threat’ report also found that “by lowering the barrier of entry to novice cyber criminals, hackers-for-hire and hacktivists, AI enables relatively unskilled threat actors to carry out more effective access and information-gathering operations.”

It said that this enhanced access (thanks to AI), combined with the improved targeting of victims afforded by AI, will contribute to the global ransomware threat in the next two years.

The agency stressed that ransomware continues to be the most acute cyber threat facing UK organisations and businesses, with cyber criminals adapting their business models to gain efficiencies and maximise profits.

Government action

It highlighted the £2.6 billion investment from the Government in its Cyber Security Strategy to improve the UK’s resilience, with the NCSC and private industry already using AI to enhance cyber security resilience through improved threat detection and security-by-design.

It also noted the Bletchley Declaration, agreed at the UK-hosted AI Safety Summit at Bletchley Park in November, which announced a first-of-its-kind global effort to manage the risks of frontier AI and ensure its safe and responsible development.

In the UK, the AI sector already employs 50,000 people and contributes £3.7 billion to the economy, with the government dedicated to ensuring the national economy and jobs market evolve with technology as set out under the Prime Minister’s five priorities.

“We must ensure that we both harness AI technology for its vast potential and manage its risks – including its implications on the cyber threat,” said NCSC CEO Lindy Cameron.

“The emergent use of AI in cyber attacks is evolutionary not revolutionary, meaning that it enhances existing threats like ransomware but does not transform the risk landscape in the near term,” said Cameron.

“As the NCSC does all it can to ensure AI systems are secure-by-design, we urge organisations and individuals to follow our ransomware and cyber security hygiene advice to strengthen their defences and boost their resilience to cyber attacks,” Cameron concluded.

Specific AI cyber threats

Analysis from the NCA (National Crime Agency) suggests that cyber criminals have already started to develop criminal Generative AI (GenAI) and to offer ‘GenAI-as-a-service’, making improved capability available to anyone willing to pay, NCSC noted.

Yet the NCSC’s new report found that the effectiveness of GenAI models will be constrained by both the quantity and quality of data on which they are trained.

The growing commoditisation of AI-enabled capability mirrors warnings from a report jointly published by the two agencies in September 2023 which described the professionalising of the ransomware ecosystem and a shift towards the “ransomware-as-a-service” model.

Last week a Cybernews report found that ransomware attacks rose to record numbers in 2023, with a 128.2 percent rise in victims.

According to the NCA however, it is unlikely that in 2024 another method of cyber crime will replace ransomware due to the financial rewards and its established business model.

“Ransomware continues to be a national security threat,” said James Babbage, Director General for Threats at the National Crime Agency. “As this report shows, the threat is likely to increase in the coming years due to advancements in AI and the exploitation of this technology by cyber criminals.”

“AI services lower barriers to entry, increasing the number of cyber criminals, and will boost their capability by improving the scale, speed and effectiveness of existing attack methods,” said Babbage. “Fraud and child sexual abuse are also particularly likely to be affected.”

Cybersecurity nightmare

Mike Newman, CEO of My1Login, noted the NCSC report’s finding that AI will make it difficult to spot whether emails are genuine or sent by scammers and malicious actors, including messages that ask computer users to reset their passwords.

“This report highlights that the NCSC is very concerned over the impact of AI on cybercrime in the future,” said Newman. “The report predicts there will be an increase in attacks from all threat actors, while ransomware will increase, and phishing emails will become harder to detect.”

Mike Newman, CEO of My1Login.
Image credit My1Login

“From an enterprise perspective, these forecasts spell a cybersecurity nightmare,” said Newman. “Phishing is the number one cybercrime tactic today and it is the most common method criminals utilise to steal corporate passwords from employees.”

“The threat provides big returns for criminals, but it isn’t always successful because phishing emails often contain spelling errors, or strange imagery that raise red flags for recipients and make them think the emails could be fake,” said Newman. “With AI, all these tell-tale signs are completely removed.”

“Font, tone, imagery and branding are all perfect in the emails generated via AI, which will make them much harder to detect as malicious,” said Newman. “The only way to counter this threat is to remove valuable information, like passwords, from employee hands so they don’t have the ability to disclose them to phishing actors.”

“Using a modern workforce identity management solution that provides Single Sign-On and enterprise password management enables passwords to be used where applications rely on them, while keeping them hidden from the workforce, which significantly improves the user experience and enhances security,” said Newman.

“This means even when sophisticated AI generated phishing scams do reach the user’s inbox, they don’t have the ability to disclose their passwords because they simply don’t know them,” Newman concluded.
