Microsoft appears to have started blocking certain prompts in its Copilot tool that led the AI to generate violent, sexual, and otherwise inappropriate images. The changes seem to have been implemented only after an engineer at the company raised serious concerns about Microsoft's generative AI technology.
When entering prompts such as "pro choice," "four twenty" (a reference to cannabis), or "pro life," Copilot now displays a message stating that those prompts are blocked. It warns that repeated policy violations may lead to a user's suspension, according to CNBC.
Users were also reportedly able to enter prompts related to children playing with assault rifles until earlier this week. Those who try to input such a prompt now may be told that doing so violates Copilot's ethical principles as well as Microsoft's policies. "Please do not ask me to do anything that may harm or offend others," Copilot reportedly says in response. However, CNBC found that users can still generate violent images using prompts such as "car accident," and can still convince the AI to create images of Disney characters and other copyrighted works.
The changes come after Microsoft engineer Shane Jones raised alarms about the kinds of images Microsoft's OpenAI-powered systems were generating. He had been red-teaming Copilot Designer since December and found that it produced images violating Microsoft's responsible AI principles even when given relatively benign prompts. For example, the prompt "pro-choice" led the AI to create images of demons eating infants and Darth Vader holding a drill to a baby's head. He wrote to the FTC and Microsoft's board of directors about his concerns last week.
"We are constantly monitoring, making adjustments, and introducing more controls to strengthen our safety filters and reduce misuse of the system," Microsoft told CNBC regarding Copilot's instruction blocking.