Chatham House Cyber 2024 – how AI creates new cybersecurity dimensions

The AI Spring that bloomed in 2022 has left us all in an exciting but uncertain world. And that is partly because the speed at which many organizations – or individuals within them – have adopted Large Language Models (LLMs) and generative tools has taken policymakers by surprise.

Almost overnight, workers and students have taken to chatbots like ChatGPT, assistants like Copilot, and generative tools like Stable Diffusion as a means to access instant insights, expertise, and/or work that would previously have been produced by seasoned talents. And all for free, or for the price of a subscription to the wares of a trillion-dollar company.

A leveller and democratizer of sorts, then. But also, a security risk: a means to divulge privileged data to vendors in the cloud, find copyrighted work being given away, or encounter hallucinations that mislead unwary users. Meanwhile, fraud is rising, and disinformation can spread just as fast as factual insights – deliberately or otherwise.

So, one question is: will humans still be able to tell fact from fiction? Another is: who benefits if they can’t? Whatever the answers may be, an uncomfortable fact remains: vendors profit from fraudsters just as much as they do from enterprise users. But what can we do about it?

AI Safety Summits have been and gone – with another to follow this autumn. But while doubtless well intentioned, their frontier-model focus has, so far, side-lined many ethical issues, while broadly accepting AI vendors’ right to behave in ways that challenge fair business practice.

But while AI can pose security risks, it can also help in the fight against hostile states, malign actors, cybercriminals, and fraudsters by catching unusual traffic. It can even help identify threats that pass themselves off as normal communications, or hackers who are ‘living off the land’ (abusing legitimate tools and apparently normal network traffic to avoid detection).

That was according to a panel at Chatham House Cyber 2024 – a high-level conference hosted by the international affairs think tank last week. So, what are the key issues?

Jen Ellis is founder of cybersecurity consultancy NextJenSecurity (geddit?) and a former Cabinet Office advisor. She said:

The first strand is the use of AI in defence and cybersecurity programs. The second is the use of AI by attackers to launch cyberattacks, or more successful attacks. And the third is attacks against AI systems that may have nothing to do with security. They may be in cars, or running hospital systems, for example. But I would like to quote The Hitchhiker’s Guide to the Galaxy and say, ‘Don’t panic’. The reality is people are talking about this a lot. They make it sound dire, but we’re not really seeing it [these AI threats] at the moment.

In a world of hype – both for and against AI – realism and critical thinking are desirable. Even so, users should remain vigilant, given that many uses of AI by cybercriminals may currently be invisible. She continued:

Let’s take the idea of AI being used for defensive purposes. The good news is there is loads of activity happening in this area. We have use cases and proven results. Plus, there is a lot of focus around what development should look like and how we do ‘secure by design’. For security professionals, this is an opportunity. And what that will do, hopefully, is lighten the load on some of the more repeatable things that take up so much time for them, so they can focus on more complex, strategic stuff.
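As a purely illustrative aside – the panel discussed no implementations, and the feature names, numbers, and thresholds below are hypothetical rather than anyone’s production setup – the ‘catching unusual traffic’ kind of defence often boils down to unsupervised anomaly detection over flow telemetry. A minimal Python sketch, assuming scikit-learn is available:

```python
# Purely illustrative: flag "unusual traffic" with an off-the-shelf anomaly detector.
# The feature names and numbers below are hypothetical, not from the panel.
import numpy as np
from sklearn.ensemble import IsolationForest

rng = np.random.default_rng(42)

# Pretend flow records: [bytes_out, duration_secs, distinct_ports, requests_per_min]
normal_flows = rng.normal(loc=[5_000, 30, 3, 20],
                          scale=[1_000, 10, 1, 5],
                          size=(500, 4))
odd_flow = np.array([[250_000, 2, 40, 300]])  # e.g. a burst of exfiltration-like traffic
flows = np.vstack([normal_flows, odd_flow])

# Train an unsupervised model on the traffic and ask it which rows look anomalous.
model = IsolationForest(contamination=0.01, random_state=0).fit(flows)
labels = model.predict(flows)   # -1 = anomalous, 1 = normal

print("Flagged flow indices:", np.where(labels == -1)[0])
```

Whether anything this simple would survive contact with a real security operations centre is another matter; the point is only that the ‘repeatable things’ Ellis mentions – triaging oceans of telemetry – are exactly where machine learning is already being applied.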

Lightening the load is the eternal promise made by Industry 4.0 vendors, of course: that their technology will automate boring tasks, thus freeing humans up to be creative. But there is mounting evidence that their technologies are doing the opposite: automating creativity, and freeing up professionals to concentrate on boring support tasks instead. Just look around you!

Attackers

So, what about attackers using AI themselves? Ellis seemed dismissive of the idea – oddly for a former advisor to the same government that launched the AI Safety Summits. She explained:

We’ve heard so much about this, except from the attackers themselves! And most security researchers are like, ‘Not yet they’re not!’ It doesn’t mean it won’t happen, though. It probably will at some point. But here’s the reality: attackers don’t make their lives more complicated and expensive than they need to. And I hate to tell you this, but we ain’t winning already. Right now, they don’t need to invest heavily in building an AI infrastructure, or in building the next generation of attack, because the current generation is pretty successful for them. That’s not great news, but also not a desperate reason to panic. But when we do start seeing attacks that use AI, a lot of it will be stuff that we have already seen – such as much better phishing emails.

Not a new threat, therefore, but an enhancement of an old one.

So, what about the third strand: attacks against the rising number of critical AI systems, in a world in which a previously hyped technology, autonomy, is poised for release into the wild at scale? Ellis said:

This is concerning because, one, we’re building AI into everything. So, attacks against AI systems will be attacks against all systems, potentially. And two, we seem to have returned to the idea of ‘move fast and break things’ in innovation. I would say that AI is one area where we don’t want to move fast and break things, but instead move cautiously and build great things. And this is where I think we will see a lot of government intervention being focused.

An excellent point. However, the challenge is the accelerated development faction – aka the ‘effective accelerationism’ (e/acc) movement – among those AI vendors that are keen to seize billion- or trillion-dollar market opportunities. In other words, nearly all AI companies in this decade’s lucrative tech bubble. There is little appetite among them for pausing development while civil society debates the ethics.

All of this will make the tech supply chain more challenging to manage and secure, said Ellis:

AI is going to increase that complexity, but it’s not going to massively change the dynamics of it. It will change the scale and scope. It’s a speed factor, but we’re not there yet. So, what I would caution people about is: don’t get burnt out on this topic. Just because we’re not seeing the reality of it today, that doesn’t mean it’s not going to emerge. But it’s good that we are getting ahead of it.

Lawyers

Fair point. But it is not just developers, users, and cybersecurity professionals who are in the spotlight on these issues, observed Christian Toon, Head of Cyber Professional Services at international law firm, Pinsent Masons. He explained:

What we’re seeing is that lawyers seem to be pushed front and centre to provide the answers on how to deal with this problem [AI’s potential threat to cybersecurity]. So, we’ve got the macro perspective on how we’re going to deal with this from a societal point of view. Plus, some of the points that Jen mentioned, such as how does that translate to my organization? And what should I focus on?

A cynic might observe that Pinsent Masons has done a good job of pushing itself front and centre at technology conferences in recent years, rather more than it has been pushed by others. That is not a criticism, of course: it has seen an opportunity and grabbed it. But this should tell us something: if lawyers are profiting, it means the market is risky and uncertain.

At this point, Toon touched on recent controversies over the scraping of copyrighted content by AI companies to train their LLMs and generative systems. He said:

Suddenly, the model is in violation of intellectual property law, or there are cases where data required consent or certain permissions in the training of these models – plus the issue of data residency, of where it’s going to be held. And, in turn, the safeguarding controls for that. But the great saving grace here – again, to reiterate something Jen said – is let’s not panic about this. This is just another technology type, which means we already have the frameworks and standards to deal with this.

But do we? After hearing from expert witnesses on all sides of this debate, the UK House of Lords’ Communications and Digital Committee inquiry into LLMs last year (see diginomica, passim) certainly agreed with the view that copyright has been violated, and that AI vendors have acted unfairly in profiting from it.

But as I explained in my most recent report on that subject, governments have, collectively, sat on their hands on that policy question, while the question of which jurisdiction applies in such cases is also far from settled.

So, the reason lawyers have been ‘pushed front and centre’ on AI is that nobody else will help copyright holders, and governments are happy to leave it to the courts to decide. Protecting the interests of trillion-dollar innovators seems more important to them than defending what might, loosely, be called old-economy companies.

But the flipside of this is that the law has evolved in the old economy, not the new one. It is running a decade or two behind technology innovation, which itself is moving faster and faster.

So, the problem with policymakers’ avoidance of action – ‘sitting on their hands on the bus of survivors’, to misquote David Bowie – is that only copyright holders who are wealthy enough to take the richest companies on Earth to court are likely to be compensated for vendors’ theft. Everybody else will be the silent victims of a rapacious business model.

Engagement 

But are all policymakers equally disengaged from these issues? Fortunately, no. Alexi Drew is Technology Policy Advisor for the International Committee of the Red Cross. She said:

I guess I’m the token civil rights activist or humanitarian in the room, so I will try and fill that role. [AI] is not something we haven’t seen before. It’s a continuation of a trend that we’ve seen with cyber, robotics, and even simple processing changes. It’s a well-trodden path of an emerging technology, which shifts capabilities, shifts our dynamics, and creates both opportunities and risks.

But now that we are hopefully aware of what shape this paradigm shift will take – even if there will be nuances – the question is: Are we up to the challenge of learning the lessons [that already exist]? For example, cyber, for us in the humanitarian space, is understood to create new potential types of harm and risk for those caught up in armed conflict, and to amplify old wounds. It is creating a new attack surface, and data is now something which can be targeted, intentionally or unintentionally, as are information operations.

AI is the same. It creates the same potential for new types of harms or the amplification of old ones. However, the difference, or one of the differences, is the added component of unpredictability. And that’s due to the nature of how these systems are developed, designed, and employed. In reality, many of the outputs of the use of artificial intelligence in the civilian sector, as in the military, cannot be clearly predicted in advance, or even explained after the fact. And if you put those two aspects together, what you create is an amplification of the potential for new and emergent harms, alongside the increased difficulty of understanding or predicting what they might be. So, you’re lowering the bar to entry, because, as we all know, anyone can use AI.

She added:

I’m very glad that I no longer lecture, because I don’t have to deal with the issue of how to decide whether essays have been written by one of the LLMs on the market. But that challenge is going to be the same for even low-complexity uses of AI. You don’t need AI to do harm. But the fact that you can [use it for harm] is going to increase the chances that those who previously didn’t have the skills or resources can use it – and therefore will. That will increase the number of actors who decide to go down the route of using offensive, AI-driven cyber capabilities in the course of armed conflicts. And again, because of the unpredictability, this will increase the potential harm to civilians caught up in it.

Drew then made a really interesting and challenging point alongside the stark warnings she had already issued:

Being unpredictable means that [perpetrators] aren’t necessarily beholden to international humanitarian law. That’s because you need to be able to make an effective legal ruling on whether something is going to happen. But if it’s unpredictable, you cannot do that.

Given that policymakers are reluctant to even engage with more obvious violations of existing laws, such as copyright, this would be a troubling state of affairs. She added: 

So, we’ve got an increase in the likelihood of unpredictable cyber operations, which would not only be incompatible with international humanitarian law, but also increase the likelihood of unintended harm to civilians, and not just within the area of intended effect. As Jen mentioned, AI is in everything. So, it is not just the vector of attack, it is also a new attack surface, which is taking a huge space in civilian and critical national infrastructure.

The ability to access these tools and use these capabilities will be supplemented further by AI to give people skills or resources that they didn’t previously have. But will they have the maturity to understand the operational environment in which they’re engaged – and the secondary and tertiary harms beyond the operational area of impact? Probably not. So, while we are not seeing the doomsday scenario of AI, with cyber conflicts running wild, that lowering of the bar for even low-complexity, simple capabilities, does increase the potential harm to civilians in armed conflicts.

But not everything is bleak, said Terry Wilson, Global Partnership Director with safer internet non-profit, the Global Cyber Alliance. AI may speed up the exposure of bad actors in our midst. He said:

What it does allow is to expose, more quickly and efficiently, and at a broader scale, what we would call traditional crimes. We’re looking at the impact of AI through a human lens, and by providing the cybersecurity tools and solutions that we make available free of charge to understand end-user communities. And based on our early observations, there’s more good news. We’re seeing that the tools that are already available to mitigate basic cybersecurity risks appear to still be fit for purpose.

My take

A challenging and thought-provoking session. So, let’s hope that the adoption of AI will always be accompanied by similar levels of human contemplation and engagement, and not by a collective abandonment of critical thought.
