Enterprises’ best bet for the future: Securing generative AI – IBM


IBM and AWS study: Less than 25% of current generative AI projects are being secured 

The enterprise world has long operated on the notion that trust is the currency of good business. But as AI transforms and redefines how businesses operate and how customers interact with them, trust in technology must be built.  

Advances in AI can free human capital to focus on high-value deliverables. This evolution is bound to have a transformative impact on business growth, but user and customer experiences hinge on organizations’ commitment to building secured, responsible, and trustworthy technology solutions.  

Businesses must determine whether the generative AI interfacing with users can be trusted, and security is a fundamental component of trust. So, herein lies one of the biggest bets that enterprises are up against: securing their AI deployments. 

Innovate now, secure later: A disconnect 

Today, the IBM® Institute for Business Value released the Securing generative AI: What matters now study, co-authored by IBM and AWS, introducing new data, practices, and recommendations on securing generative AI deployments. According to the IBM study, 82% of C-suite respondents stated that secure and trustworthy AI is essential to the success of their businesses. While this sounds promising, 69% of leaders surveyed also indicated that when it comes to generative AI, innovation takes precedence over security. 

Prioritizing between innovation and security may seem like a choice, but in fact, it’s a test. There’s a clear tension here: organizations recognize that the stakes are higher than ever with generative AI, but they aren’t applying the lessons learned from previous tech disruptions. As with the transitions to hybrid cloud, agile software development, and zero trust, generative AI security risks becoming an afterthought. More than 50% of respondents are concerned about unpredictable risks impacting generative AI initiatives and fear they will create increased potential for business disruption. Yet they report that only 24% of current generative AI projects are being secured. Why is there such a disconnect? 

Security indecision may be both an indicator and a result of a broader generative AI knowledge gap. Nearly half of respondents (47%) said that they are uncertain about where and how much to invest when it comes to generative AI. Even as teams pilot new capabilities, leaders are still working through which generative AI use cases make the most sense and how to scale them for their production environments. 

Securing generative AI starts with governance 

Not knowing where to start may be inhibiting security action as well, which is why IBM and AWS joined forces to develop an action guide with practical recommendations for organizations seeking to protect their AI. 

To establish trust and security in their generative AI, organizations must start with the basics, with governance as a baseline. In fact, 81% of respondents indicated that generative AI requires a fundamentally new security governance model. By starting with governance, risk, and compliance (GRC), leaders can build the foundation for a cybersecurity strategy to protect their AI architecture that is aligned to business objectives and brand values. 

For any process to be secured, you must first understand how it should function and what the expected process should look like so that deviations can be identified. AI that strays from what it was operationally designed to do can introduce new risks with unforeseen business impacts. So, identifying and understanding those potential risks helps organizations understand their own risk threshold, informed by their unique compliance and regulatory requirements. 
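The principle above, defining expected behavior so that deviations can be flagged, can be sketched in code. This is a hypothetical, minimal illustration and not part of the IBM-AWS guidance: the topic list and the review-queue logic are assumptions made for the example.

```python
# Minimal sketch (hypothetical): flag AI responses that fall outside the
# assistant's designed operating scope, so deviations can be surfaced for
# human risk review. The allowed-topic set is an illustrative stand-in for
# a real, governance-defined policy.

ALLOWED_TOPICS = {"billing", "shipping", "returns"}

def violates_scope(detected_topic: str) -> bool:
    """Return True when a response's topic is outside the designed scope."""
    return detected_topic not in ALLOWED_TOPICS

def review_queue(responses):
    """Collect out-of-scope responses for risk review.

    `responses` is a list of (response_text, detected_topic) pairs; how the
    topic is detected (classifier, rules, etc.) is out of scope here.
    """
    return [text for text, topic in responses if violates_scope(topic)]

flagged = review_queue([
    ("Your refund has been processed.", "returns"),
    ("Here is some investment advice...", "finance"),  # deviates from design
])
```

In practice the "expected process" would be captured in governance artifacts (policies, model cards, risk thresholds) rather than a hard-coded set, but the pattern is the same: compare observed behavior against a declared baseline and route deviations to review.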

Once governance guardrails are set, organizations can more effectively establish a strategy for securing the AI pipeline: the data, the models, and their use, as well as the underlying infrastructure into which they’re building and embedding their AI innovations. While the shared responsibility model for security may change depending on how the organization uses generative AI, many tools, controls, and processes are available to help mitigate the risk of business impact as organizations develop their own AI operations. 

Organizations also need to recognize that while hallucinations, ethics, and bias often come to mind first when thinking of trusted AI, the AI pipeline faces a threat landscape that puts trust itself at risk. Conventional threats take on a new meaning, new threats use offensive AI capabilities as a new attack vector, and new threats seek to compromise the AI assets and services we increasingly rely upon. 
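One concrete example of the "new threats" described above is prompt injection, where attacker-supplied text tries to override an AI system's instructions. The pattern list and heuristic below are illustrative assumptions, not a recommended defense; real mitigations layer input filtering with model-side and policy-side controls.

```python
# Hypothetical sketch: a naive pre-filter for prompt-injection attempts.
# The regex patterns are illustrative examples only; production systems
# combine such heuristics with stronger model- and policy-level defenses.
import re

INJECTION_PATTERNS = [
    r"ignore (all )?previous instructions",
    r"reveal your system prompt",
]

def looks_like_injection(user_input: str) -> bool:
    """Return True if the input matches a known injection pattern."""
    lowered = user_input.lower()
    return any(re.search(pattern, lowered) for pattern in INJECTION_PATTERNS)
```

Heuristics like this are easy to evade, which is exactly why the study frames AI security as a pipeline-wide and governance-first problem rather than a single control.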

The trust-security equation 

Security can help bring trust and confidence into generative AI use cases. To accomplish this synergy, organizations must take a village approach. The conversation must go beyond information security and IT stakeholders to include strategy, product development, risk, supply chain, and customer engagement. 

Because these technologies are both transformative and disruptive, managing the organization’s AI and generative AI estates requires collaboration across security, technology, and business domains. 

A technology partner can play a key role here. Partners’ breadth and depth of expertise across the threat lifecycle and the security ecosystem can be an invaluable asset. In fact, the IBM study revealed that over 90% of surveyed organizations rely on a third-party product or technology partner for their generative AI security solutions. When it comes to selecting a technology partner for their generative AI security needs, surveyed organizations reported the following: 

  • 76% seek a partner to help build a compelling cost case with solid ROI.  
  • 58% seek guidance on an overall strategy and roadmap. 
  • 76% seek partners that can facilitate training, knowledge sharing, and knowledge transfer. 
  • 75% choose partners that can guide them across the evolving legal and regulatory compliance landscape. 

The study makes it clear that organizations recognize the importance of security for their AI innovations, but they are still trying to understand how best to approach the AI revolution. Building relationships that can help guide, counsel, and technically support these efforts is a crucial next step toward protected and trusted generative AI. In addition to sharing key insights on executive perceptions and priorities, IBM and AWS have included an action guide with practical recommendations for taking your generative AI security strategy to the next level. 

Learn more about the joint IBM-AWS study and how organizations can protect their AI pipeline
