Assessing and quantifying AI risk: A challenge for enterprises


It’s a challenge to stay on top of this since vendors can add new AI services at any time, Notch says. That requires being obsessive about tracking all the contracts, changes in functionality, and terms of service, but having a good third-party risk management team in place can help mitigate these risks. If an existing provider decides to add AI components to its platform by using services from OpenAI, for example, that adds another level of risk to an organization. “That’s no different from the fourth-party risk I had before, where they were using some marketing company or some analytics company. So, I need to extend my third-party risk management program to adapt to it — or opt out of that until I understand the risk,” says Notch.

One of the positive aspects of Europe’s General Data Protection Regulation (GDPR) is that vendors are required to disclose when they use subprocessors. If a vendor develops new AI functionality in-house, one indication can be a change in their privacy policy. “You have to be on top of it. I’m fortunate to be working at a place that’s very security-forward and we have an excellent governance, risk and compliance team that does this kind of work,” Notch says.

Assessing external AI threats

Generative AI is already used to create phishing emails and business email compromise (BEC) attacks, and the level of sophistication of BEC has gone up significantly, according to Expel’s Notch. “If you’re defending against BEC — and everybody is — the cues that this is not a kosher email are becoming much harder to detect, both for humans and machines. You can have AI generate a pitch-perfect email forgery and website forgery.”

Putting a specific number to this risk is a challenge. “That’s the canonical question of cybersecurity — the risk quantification in dollars,” Notch says. “It’s about the size of the loss, how likely it is to happen and how often it’s going to happen.” But there’s another approach. “If I think about it in terms of prioritization and risk mitigation, I can give you answers with higher fidelity,” he says.
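For readers who want to put rough numbers on that framing, here is a minimal sketch that combines the three factors Notch names: size of the loss, likelihood, and frequency. The function and the BEC figures are illustrative assumptions for this example, not Expel’s methodology or real loss data.

```python
# A minimal sketch of the "size of loss x likelihood x frequency" framing.
# All figures below are illustrative assumptions, not real data.

def annualized_loss_exposure(loss_per_event: float,
                             probability_of_success: float,
                             attempts_per_year: float) -> float:
    """Expected annual loss: loss magnitude x likelihood x frequency."""
    return loss_per_event * probability_of_success * attempts_per_year

# Hypothetical BEC scenario: a successful compromise costs ~$120,000,
# roughly 2% of attempts succeed, and ~50 attempts are expected per year.
exposure = annualized_loss_exposure(120_000, 0.02, 50)
print(f"Estimated annual exposure: ${exposure:,.0f}")  # -> $120,000
```

Even a rough estimate like this makes it easier to compare AI-related scenarios against other risks when prioritizing mitigation work.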

Pery says that ABBYY is working with cybersecurity providers who are focusing on GenAI-based threats. “There are brand-new vectors of attack with genAI technology that we have to be cognizant about.”

These risks are also difficult to quantify, but there are new frameworks emerging that can help. For example, in 2023, cybersecurity expert Daniel Miessler released The AI Attack Surface Map. “Some great work is being done by a handful of thought-leaders and luminaries in AI,” says Sasa Zdjelar, chief trust officer at ReversingLabs, who adds that he expects organizations like CISA, NIST, the Cloud Security Alliance, ENISA, and others to form special task forces and groups to specifically tackle these new threats.

Meanwhile, what companies can do now, if they aren’t doing so already, is assess how well they’re doing on the basics: whether all endpoints are protected, whether users have multi-factor authentication enabled, how well employees can spot phishing emails, how large the patch backlog is, and how much of the environment is covered by zero trust. This kind of basic hygiene is easy to overlook when new threats are popping up, but many companies still fall short on the fundamentals. Closing these gaps will be more important than ever as attackers step up their activities.
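One way to keep those fundamentals from slipping is to track each control as a coverage metric. The sketch below is a minimal illustration of that idea; the control names, counts, and the 95% threshold are assumptions for this example, not a prescribed standard.

```python
# A minimal sketch of tracking "the basics" as coverage metrics.
# Control names, counts, and the 95% threshold are illustrative assumptions.

hygiene = {
    "endpoints_with_protection": {"covered": 4_820, "total": 5_000},
    "users_with_mfa":            {"covered": 3_900, "total": 4_100},
    "patches_within_sla":        {"covered": 870,   "total": 1_025},
    "apps_behind_zero_trust":    {"covered": 140,   "total": 210},
}

for control, counts in hygiene.items():
    pct = 100 * counts["covered"] / counts["total"]
    flag = "OK " if pct >= 95 else "GAP"
    print(f"{flag} {control:<26} {pct:5.1f}%")
```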

There are a few things that companies can do to assess new and emerging threats, as well. According to Sean Loveland, COO of Resecurity, there are threat models that can be used to evaluate the new risks associated with AI, including offensive cyber threat intelligence and AI-specific threat monitoring. “This will provide you with information on their new attack methods, detections, vulnerabilities, and how they are monetizing their activities,” Loveland says. For example, he says, there is a product called FraudGPT that is constantly updated and is being sold on the dark web and Telegram. To prepare for attackers using AI, Loveland suggests that enterprises review and adapt their security protocols and update their incident response plans.

Hackers use AI to predict defense mechanisms

Hackers have figured out how to use AI to observe and predict what defenders are doing, says Gregor Stewart, vice president of artificial intelligence at SentinelOne, and how to adjust on the fly. “And we’re seeing a proliferation of adaptive malware, polymorphic malware and autonomous malware propagation,” he adds.

Generative AI can also increase the volume of attacks. According to a report released by threat intelligence firm SlashNext, there was a 1,265% increase in malicious phishing emails between the end of 2022 and the third quarter of 2023. “Some of the most common users of large language model chatbots are cybercriminals leveraging the tool to help write business email compromise attacks and systematically launch highly targeted phishing attacks,” the report said.

According to a PwC survey of over 4,700 CEOs released this January, 64% say that generative AI is likely to increase cybersecurity risk for their companies over the next 12 months. Generative AI can also be used to create fake news. In January, the World Economic Forum released its Global Risks Report 2024, and the top risk for the next two years? AI-powered misinformation and disinformation. It’s not just politicians and governments that are vulnerable. A fake news report can easily affect stock prices, and generative AI can produce extremely convincing news reports at scale. In the PwC survey, 52% of CEOs said that genAI misinformation will affect their companies in the next 12 months.

AI risk management has a long way to go

According to a survey of 300 risk and compliance professionals by Riskonnect, 93% of companies anticipate significant threats associated with generative AI, but only 17% of companies have trained or briefed the entire company on generative AI risks, and only 9% say that they’re prepared to manage these risks. A similar ISACA survey of more than 2,300 professionals working in audit, risk, security, data privacy, and IT governance showed that only 10% of companies had a comprehensive generative AI policy in place, and more than a quarter of respondents had no plans to develop one.

That’s a mistake. Companies need to focus on putting together a holistic plan to evaluate the state of generative AI in their organizations, says Paul Silverglate, Deloitte’s US technology sector leader. They need to show that doing it right matters to the company, and they need to be prepared to react quickly and remediate if something happens. “The court of public opinion — the court of your customers — is very important,” he says. “And trust is the holy grail. When one loses trust, it’s very difficult to regain. You might wind up losing market share and customers that are very difficult to bring back.” Every element of every organization he’s worked with is being affected by generative AI, he adds. “And not just in some way, but in a significant way. It is pervasive. It is ubiquitous. And then some.”
