OpenAI executive Jan Leike resigns – says safety is no longer a priority


In a recent development, Jan Leike, co-lead of OpenAI’s Superalignment team, has announced his resignation from the company. The move is a significant blow to OpenAI, which has been at the forefront of AI research and development, and it is particularly noteworthy given Leike’s role in ensuring the safety of AI systems and their alignment with human intentions. In his parting statement, Leike expressed deep concerns about OpenAI’s shift in priorities, claiming that the company has neglected its internal safety culture and processes and is now more focused on launching “eye-catching” products at high speed.

OpenAI executive Jan Leike

OpenAI established the Superalignment team in July 2023 with the primary objective of ensuring that AI systems with “superintelligence”, that is, systems smarter than humans, follow human intentions. At the time of its inception, OpenAI committed to dedicating 20% of its computing power over the next four years to this effort to ensure the safety of its AI models. The initiative was seen as a crucial step towards developing responsible and safe AI technologies.

Recent Turmoil at OpenAI

Leike’s resignation comes on the heels of a period of turmoil at OpenAI. In November 2023, nearly all of OpenAI’s employees threatened to quit and follow ousted leader Sam Altman to Microsoft after the board removed Altman as CEO, citing a lack of candour in his communications with it. The situation was eventually resolved with Altman’s return as CEO, accompanied by a shakeup of the board of directors of the company’s nonprofit arm.

Response from OpenAI Leadership

In response to Leike’s concerns, OpenAI’s Greg Brockman and Sam Altman have jointly stated that they have raised awareness of AI risks and will continue to improve safety work to match the stakes of each new model. The response suggests that OpenAI recognizes the importance of AI safety and intends to address these concerns. However, Leike’s resignation and the disbanding of the Superalignment team raise questions about the company’s ability to prioritize safety and security in its pursuit of AI advancements.


An excerpt from their joint response reads:

“We are very grateful for all Jan has done for OpenAI, and we know he will continue to contribute to our mission externally. In light of some of the questions raised by his departure, we’d like to explain our thinking on our overall strategy.

First, we have raised awareness of the risks and opportunities of AGI so that the world is better prepared for them. We have repeatedly demonstrated the vast possibilities offered by scaling up deep learning and analyzed their implications; called internationally for the governance of AGI before such calls became popular; and done groundbreaking work in the scientific field of assessing AI systems for catastrophic risks.

Second, we are laying the foundations for the safe deployment of increasingly capable systems. Making a new technology safe for the first time is not easy. For example, our teams did a great deal of work to bring GPT-4 to the world safely, and have since continually improved model behavior and abuse monitoring in response to lessons learned from deployment.

Third, the future will be harder than the past. We need to continually improve our safety efforts to match the stakes of each new model. Last year we adopted our Preparedness Framework to systematize our approach to this work…”

Conclusion

Jan Leike’s resignation is a significant development that highlights the ongoing challenges and tensions within the AI research community. As AI technologies continue to advance at a rapid pace, companies like OpenAI must prioritize safety and security to ensure that these technologies are developed responsibly. Leike’s concerns about OpenAI’s shift in priorities serve as a reminder of the need for transparency, accountability, and a demonstrated commitment to ethical practices in AI research and development.

