Addressing CISOs’ Concerns About Generative AI Security


Artificial Intelligence & Machine Learning, Big Data Security Analytics, DevSecOps

Microsoft’s Oberoi on Executive Awareness, Governance Challenges in AI Security


Herain Oberoi, general manager, Microsoft Security

As conversations around the intersection of artificial intelligence and cybersecurity intensify, CISOs are increasingly voicing concerns about the use of generative AI, data protection and regulatory governance, said Herain Oberoi, general manager of Microsoft Security.


To address these concerns, Oberoi said, Microsoft has introduced initiatives such as the AI Hub in Microsoft Purview, which provides “end-to-end visibility” into AI applications and associated risks, as well as AI posture management in Microsoft Defender for Cloud.

“AI risk isn’t just cybersecurity risk. AI risks bleed into privacy risk. You have to think about copyright risk. You have to think about content provenance. Where does this generated content come from? And so it becomes the responsibility not just for the CISO, it’s a responsibility for chief data officers. It’s oftentimes the responsibility for the general counsel and the legal departments and organizations as well,” he said.

In this video interview with Information Security Media Group at RSA Conference 2024, Oberoi also discussed:

  • Strategies to ensure data security and privacy, such as classification, labeling and data loss prevention (a minimal code sketch of these controls follows this list);
  • The importance of executive and board-level awareness of the risks associated with generative AI;
  • How organizations can effectively govern the use of AI within their operations.
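
The first item in the list above mentions classification, labeling and data loss prevention. As a rough illustration of what those controls can look like, the Python sketch below flags sensitive patterns in a prompt, assigns a sensitivity label and redacts matches before the text reaches a generative AI application. The pattern set, label names and function names are assumptions made for this example only; they do not represent Microsoft Purview's API or Microsoft's implementation.

    import re

    # Illustrative toy patterns for sensitive data; real deployments use
    # policy-driven classifiers, not hand-written regexes.
    SENSITIVE_PATTERNS = {
        "credit_card": re.compile(r"\b(?:\d[ -]?){13,16}\b"),
        "email": re.compile(r"\b[\w.+-]+@[\w-]+\.[\w.]+\b"),
        "ssn": re.compile(r"\b\d{3}-\d{2}-\d{4}\b"),
    }

    def classify(text: str) -> list[str]:
        """Return the names of sensitive-data patterns found in the text."""
        return [name for name, pattern in SENSITIVE_PATTERNS.items() if pattern.search(text)]

    def apply_label_and_dlp(text: str) -> tuple[str, str]:
        """Label the text and redact matches before it reaches an AI application."""
        findings = classify(text)
        label = "Confidential" if findings else "General"
        redacted = text
        for name in findings:
            redacted = SENSITIVE_PATTERNS[name].sub(f"[REDACTED:{name}]", redacted)
        return label, redacted

    if __name__ == "__main__":
        prompt = "Summarize the account for jane.doe@example.com, card 4111 1111 1111 1111."
        label, safe_prompt = apply_label_and_dlp(prompt)
        print(label)        # Confidential
        print(safe_prompt)  # Summarize the account for [REDACTED:email], card [REDACTED:credit_card].

In practice, classification and labeling are typically policy-driven and handled by dedicated tooling; the sketch only shows the general shape of the control, not a production approach.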

Oberoi has experience working at startups as well as midsized and large organizations across the data, AI/ML and cybersecurity domains.

