Unlocking the potential of Generative AI starts with a secure foundation

Generative AI’s impact cannot be overstated, as more than 55% of organizations are already piloting or actively using the technology. For all its potential benefits, generative AI raises valid security concerns. Any system that touches proprietary data and personally identifiable information must be protected to mitigate risk while enabling business agility.

CISOs tasked with bringing generative AI tools online quickly have the opportunity to ensure that best practices are followed at every step. Some of these steps will be familiar, while others are unique to generative AI’s capabilities. Securing the digital estate starts with understanding the issues and establishing ground rules that help ensure everyone uses AI safely.

Quantifying the risks

A recent Information Security Media Group (ISMG) survey found that the top AI implementation concerns fall into a handful of categories, led by:

  • Data security/leakage of sensitive data
  • Privacy
  • Hallucinations
  • Misuse and fraud
  • Model and output bias

Data is the lifeblood of AI systems, meaning that the protection and validation of data are a central focus for CISOs.

Not only do CISOs need to protect against data security concerns such as the leakage of sensitive data, over-permissioned data, and inappropriate data exchanges between internal users, but they also need assurances that their chosen AI tools will produce accurate results grounded in real-world, real-time insights.
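To make the input-side leakage risk concrete, here is a minimal sketch in Python of a pre-prompt redaction filter. The patterns and the redact_pii helper are hypothetical stand-ins for illustration; a production deployment would rely on a dedicated data-classification or DLP service rather than hand-rolled regexes.

```python
import re

# Illustrative patterns only (an assumption for this sketch); real systems
# would use a dedicated data-classification/DLP service, not regexes.
PII_PATTERNS = {
    "email": re.compile(r"[\w.+-]+@[\w-]+\.[\w.]+"),
    "ssn": re.compile(r"\b\d{3}-\d{2}-\d{4}\b"),
    "card": re.compile(r"\b(?:\d[ -]?){13,16}\b"),
}

def redact_pii(prompt: str) -> str:
    """Replace recognizable PII with typed placeholders before the
    prompt leaves the organization's boundary."""
    for label, pattern in PII_PATTERNS.items():
        prompt = pattern.sub(f"[REDACTED-{label.upper()}]", prompt)
    return prompt

print(redact_pii("Email jane.doe@example.com, SSN 123-45-6789, re: invoice."))
# -> Email [REDACTED-EMAIL], SSN [REDACTED-SSN], re: invoice.
```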

To help protect against these risks, CISOs must ensure they’re applying the same security and governance protocols to generative AI as they would any other technology tool.

Prep your environment for generative AI success

Moving forward with responsible, trustworthy generative AI practices starts with familiar models and common frameworks, including basic security hygiene standards that can protect against 99% of attacks.

For example, implementing a Zero Trust model can help ensure that only users with both the need and the authorization can access systems and data—working to alleviate common data security and privacy concerns around generative AI. NIST also introduced an AI risk management framework in January 2023 to give organizations a common methodology for mitigating concerns while supporting confidence in generative AI systems.
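As a concrete illustration of the Zero Trust idea, the sketch below gates a generative AI endpoint behind a deny-by-default check. Every name here (Request, REQUIRED_SCOPE, authorize) is a hypothetical stand-in rather than a real framework; the point is that device posture and an explicit, least-privilege scope are both verified before a query is allowed.

```python
from dataclasses import dataclass

@dataclass
class Request:
    user_id: str
    device_compliant: bool   # device posture, e.g. managed and patched
    scopes: frozenset        # permissions actually granted to this user
    dataset: str             # data the AI query would touch

# Hypothetical mapping of datasets to the scope required to query them.
REQUIRED_SCOPE = {"hr_docs": "ai.query.hr", "finance": "ai.query.finance"}

def authorize(req: Request) -> bool:
    """Deny by default; allow only when every check passes."""
    if not req.device_compliant:        # verify device health first
        return False
    needed = REQUIRED_SCOPE.get(req.dataset)
    if needed is None:                  # unknown dataset: fail closed
        return False
    return needed in req.scopes         # explicit, least-privilege scope

req = Request("alice", True, frozenset({"ai.query.hr"}), "hr_docs")
print(authorize(req))  # True: compliant device plus the exact scope needed
```

The design choice worth noting is fail-closed behavior: a non-compliant device or an unmapped dataset denies access rather than falling through.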

Another strategy for building a secure foundation for AI adoption is to establish a strong data security and protection plan grounded in defense-in-depth principles. This helps to ensure employees across the enterprise can maintain data privacy best practices. Similarly, organizations looking to invest in AI should define an AI governance structure complete with processes, controls, and accountability frameworks that govern data privacy, security, and development of their AI systems, including the implementation of Responsible AI Standards.
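Defense-in-depth can be pictured as independent, layered controls on every AI request, so no single control is a single point of failure. The pipeline below is a hypothetical composition for illustration; each layer is a trivial placeholder for a real control (identity, DLP, content filtering, audit), and none of the names come from an actual product.

```python
import logging

logging.basicConfig(level=logging.INFO)
log = logging.getLogger("ai-gateway")

# Trivial placeholder layers (assumptions for this sketch).
def is_authorized(user: str) -> bool:
    return user == "alice"

def redact_input(prompt: str) -> str:
    return prompt.replace("123-45-6789", "[REDACTED]")

def call_model(prompt: str) -> str:
    return f"(model answer for: {prompt})"

def screen_output(answer: str) -> str:
    return answer  # e.g. block policy-violating content here

def handle_request(user: str, prompt: str) -> str:
    """Each layer can independently stop or sanitize a request."""
    if not is_authorized(user):              # layer 1: access gate
        log.warning("denied request from %s", user)
        return "Access denied."
    prompt = redact_input(prompt)            # layer 2: input sanitization
    answer = call_model(prompt)              # layer 3: the model call
    answer = screen_output(answer)           # layer 4: output screening
    log.info("served request for %s", user)  # layer 5: audit trail
    return answer

print(handle_request("alice", "Summarize the case for SSN 123-45-6789"))
```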

Mapping a secure path to AI transformation

Organizations need to strike a balance between rushing into AI-enabled systems before they are truly ready and moving too slowly to adopt this transformative technology.

Achieving that balance requires planning, governance, and vision, along with selecting a provider that is equally committed to enabling AI responsibly. Effective security and privacy not only protect data and systems but drive confidence in the results, empowering users to accomplish more.

Learn how Microsoft amplifies generative AI security to protect enterprises and empower users to achieve more: https://blogs.microsoft.com/on-the-issues/2023/07/21/commitment-safe-secure-ai/
