Microsoft Allegedly Kept DALL-E 3 Vulnerabilities from the Public, Whistleblower Claims

AI companies continue to face problems with their tools, partly because many creators claim their copyrighted works were used without permission, but also because the tools themselves are being misused. One engineer saw the potential for trouble with DALL-E 3 and informed his higher-ups, but he claims he was silenced.


Engineer Tries to Warn Microsoft Superiors

Microsoft engineer Shane Jones found security vulnerabilities in OpenAI's DALL-E 3 model and tried to warn the public about them, but not before approaching his superiors at Microsoft to alert them to the potentially serious problem.

Jones found that the AI model could generate violent or explicit images from text prompts by way of an exploit that bypassed its security measures. After he approached Microsoft leadership, he was instructed to "personally report the issue directly to OpenAI."

He posted a public letter on LinkedIn on December 14, 2023, addressed to OpenAI's nonprofit board of directors, urging it to take DALL-E 3 offline until the problem was fixed. Despite Microsoft having instructed him to take the issue to OpenAI, the company ordered his post removed.

The engineer had told Microsoft about the letter, since the software giant holds an observer seat on OpenAI's board and he was following the company's own guidance. Instead, his manager asked him to delete the post.

After issuing that order, the manager told Jones that Microsoft's legal department would follow up to explain why the post had to be taken down, but no such email ever arrived. An OpenAI spokesperson later sought to clarify matters.

The spokesperson told Engadget that the report was investigated immediately after it was received on December 1, and that the technique Jones described did not bypass the AI model's safety systems.

In addition, OpenAI implemented further safeguards for its products, including ChatGPT and the DALL-E API. Microsoft, for its part, said it is "committed to addressing any and all concerns employees have" in accordance with company policies.


Whistleblower Takes the Issue to Capitol Hill

With explicit deepfake images of Taylor Swift spreading across the web, Jones' warning is being taken more seriously. He has sent a letter to Washington state's attorney general and to Congressional representatives.

In the letter, the Microsoft engineer expressed that it was "an example of the type of abuse I was concerned about and the reason why I urged OpenAI to remove DALL·E 3 from public use and reported my concerns to Microsoft," as per GeekWire.

He added that the vulnerabilities in DALL-E 3, and in products such as Microsoft Designer that are built on it, make it easier for people to abuse AI to generate harmful images, and that Microsoft was already aware of these issues.

Microsoft CEO Satya Nadella called the explicit Taylor Swift deepfakes "alarming and terrible" and said the company has to act. Microsoft further stated that it is "committed to providing a safe and respectful experience for everyone."
