Microsoft Engineer Takes Action as Copilot Generates Inappropriate Images

Even with the advancement of AI tools and services, it appears that we are still a long way from AI products being completely safe from misuse or the generation of harmful images. A Microsoft engineer found that Copilot Designer was capable of such things and decided to speak up about it.

(Photo: Pavlo Gonchar/SOPA Images/LightRocket via Getty Images)

Microsoft Engineer Warns of Copilot's Capabilities

Microsoft's Copilot still needs considerable tweaking before it can no longer be misused, and a company engineer, Shane Jones, found yet another way that users may exploit the AI tool. He has been testing Copilot to find vulnerabilities that need to be resolved.

Through a practice known as red-teaming, he found that there are still particular text prompts that will result in the tool generating images that ignore Microsoft's responsible AI principles. It can create images of teenagers with assault rifles or of underage drinking and drug use, as per CNBC.

Jones, who has been working for the tech giant for six years, described it as an "eye-opening moment," the point at which he realized that the AI model was not safe. He reported the issue internally but did not get the response he was hoping for.

His concern was acknowledged and he was referred to OpenAI. However, he did not hear back from the AI company, which led the Microsoft engineer to write an open letter on LinkedIn instead, asking OpenAI's board to temporarily shut down DALL-E 3 and have it investigated.

Microsoft was not too happy about his action and immediately asked Jones to take the post down, but that did not stop the employee from raising the concern with other agencies so that the vulnerabilities would be fixed.

In January, Jones took the issue to US senators and even met with staff members from the Senate's Committee on Commerce, Science, and Transportation. The most recent action the Microsoft employee took was sending a letter to Federal Trade Commission Chair Lina Khan.

The letter states that in the last three months, he has repeatedly "urged Microsoft to remove Copilot Designer from public use until better safeguards could be put in place," and that the company has refused the recommendation.

"Again, they have failed to implement these changes and continue to market the product," Jones wrote, adding that both Microsoft and OpenAI knew about the risks before they released the AI model back in October 2023.

A Microsoft spokesperson said that the company was "committed to addressing any and all concerns employees have in accordance with our company policies, and appreciate employee efforts in studying and testing our latest technology to further enhance its safety."


Google Paused Gemini's Image Generation

When Google's AI chatbot was found to be generating inaccurate images, the company paused the tool's image-generating functions in order to fix the issue. Google even admitted that it "got it wrong" and promised to do better.

Right now, the generative feature is still unavailable. If you ask the chatbot, it says that the company is working to improve Gemini's ability to generate images of people, and that the feature is expected to return soon.


© 2024 iTech Post All rights reserved. Do not reproduce without permission.
