AI Will Give You an Answer, But It May Be Completely Wrong – GovInfoSecurity.com

Artificial Intelligence & Machine Learning, Next-Generation Technologies & Secure Development, Video

Grammarly CISO Suha Can Discusses Managing AI Risks in Corporate Environments


Suha Can, CISO, Grammarly

While there’s palpable hype surrounding the adoption of generative AI in corporate environments, concerns about its short-term risks need to be addressed, said Suha Can, CISO at Grammarly. He identified two primary areas of concern: data handling and the potential for AI tools to produce erroneous outputs, known as hallucinations.

The type of bias that can creep into AI models in corporate environments could have repercussions on the business, he said, so it’s necessary to have human oversight and implement strong governance frameworks.

“There are safety and fairness risks associated with these tools. These tools are super confident, super agile. They will give you an answer right away, but that answer may be completely wrong,” he said.

In this video interview with Information Security Media Group at Cybersecurity Implications of AI Summit: North America West, Can also discussed:

  • Data security concerns about AI tool providers and their data-handling practices;
  • The operational risks associated with AI-generated errors and the importance of human oversight;
  • The role of frameworks in assessing and communicating AI risks to stakeholders.

At Grammarly, Can leads security, privacy, responsible AI and corporate engineering. He has spent more than 15 years creating and directing security programs for Microsoft and Amazon and has expertise in scalable and provable security, privacy and data protection, AI security, and vulnerability research.
