Microsoft Engineer Raises Concerns Over AI Image Generator’s Security

A Microsoft AI engineer, Shane Jones, has brought attention to potential security flaws in OpenAI’s DALL-E 3 model, used in Microsoft’s Designer AI image creator.

Jones claims to have discovered a vulnerability in early December that allowed users to bypass safety guardrails, leading to the creation of explicit and violent deepfake images, including those of singer Taylor Swift.

Jones expressed his worries by sending a letter to Washington State’s Attorney General and US senators, alleging that Microsoft downplayed the severity of the flaws in DALL-E 3.

Microsoft, in response, has stated that the reported techniques did not breach their safety filters and that they are addressing any remaining concerns directly with the employee.

Allegations of Downplaying and Microsoft’s Response

Jones, in his letter, contends that Microsoft was aware of the vulnerabilities and the potential for misuse but did not adequately address the issues. He further claims that, after reporting the matter to Microsoft, he was instructed to send the details to OpenAI, the technology’s developer.

Despite his attempts to bring attention to the flaws, Jones asserts that he received no response from either Microsoft or OpenAI.

In response to these allegations, Microsoft stated that they encouraged the employee to report through OpenAI’s channels and that they investigated the concerns raised.

An OpenAI spokesperson affirmed that the reported technique did not bypass their safety systems, and they have implemented additional safeguards for their products, including declining requests that ask for a public figure by name.

Taylor Swift Deepfake Incident

The explicit deepfake images of Taylor Swift, allegedly generated using Microsoft’s Designer AI and OpenAI’s DALL-E 3, have ignited concerns about the potential misuse of AI in creating harmful content.

Jones points to the vulnerabilities in DALL-E 3 and similar products as posing a risk to public safety, particularly given their capacity to generate disturbing images.

Microsoft CEO Satya Nadella, when asked about the Taylor Swift deepfakes, expressed concern, stating, “we have to act.” The company, responding to the emergence of these deepfakes, reinforced its commitment to providing a safe and respectful experience for users.

Silencing Concerns and Calls for Government Intervention

Shane Jones claims that Microsoft’s legal department demanded the removal of his public letter urging OpenAI to address the DALL-E 3 vulnerabilities. Despite his willingness to assist in fixing the specific vulnerability, Jones alleges that Microsoft’s legal team did not respond or communicate directly with him.

In his letter to Washington State’s Attorney General and US representatives, Jones advocates for the creation of a government system to report and track AI-related issues.

He emphasises the importance of ensuring that employees can raise concerns without fear of retaliation and suggests that companies developing AI products should be held accountable for disclosing known risks.
