OpenAI sets up new safety body in wake of staff departures – CIO


Just last week, 16 major users and creators of AI — including OpenAI, its top competitors Google, Amazon, Meta, and xAI, and its frenemy Microsoft — signed up to the Frontier AI Safety Commitments, a new set of safety guidelines and development outcomes for the technology.

Demonstrating that AI is secure is essential to companies like OpenAI, whose businesses depend on its widespread adoption. That’s because one of the largest challenges in enterprise and consumer perception of AGI relates to security, according to Jain. This perception “is often influenced by scenarios depicted in science fiction movies,” he said. “Therefore, it is essential to integrate security measures, risk management, and ethical considerations from the design stage, rather than as an afterthought.”

He’s not alone in that belief. Nicole Carignan, vice president of strategic cyber AI at cybersecurity firm Darktrace, said, “The risk AI poses is often in the way it is adopted,” and it’s important to encourage AI leaders to promote its responsible, safe, and secure use. “Broader commitments to AI safety will allow us to move even faster to realize the many opportunities and benefits of AI,” she said.
