Microsoft’s Copilot blocks prompts that generate violent, sexual images

Alex Omenye

Microsoft has reportedly taken measures to block specific prompts in its Copilot tool that were generating inappropriate content, including violent, sexual, and other illicit images.

The adjustments appear to have been implemented shortly after a Microsoft engineer raised serious concerns about the company’s generative AI technology in a letter to the Federal Trade Commission.

Certain terms, such as “pro choice,” “pro life,” or “four twenty” (a marijuana reference), now trigger a message in Copilot indicating that the prompt has been blocked. The message also warns users that repeated policy violations may result in account suspension.

Users were previously able to input prompts related to children playing with assault rifles, but this capability has now been restricted. Attempts to enter such prompts may trigger a notification that they violate Copilot’s ethical principles and Microsoft’s policies. Copilot reportedly responds, “Please do not ask me to do anything that may harm or offend others.”

However, it is reportedly still possible to generate violent imagery using prompts such as “car accident,” and users can continue to generate images of copyrighted works like Disney characters.

Shane Jones, a Microsoft engineer, had been raising concerns for months about the inappropriate images generated by Microsoft’s OpenAI-powered systems, particularly Copilot Designer. He found that even benign prompts, such as “pro-choice,” could lead to the creation of disturbing images.

In response to the reported prompt bans in Copilot, Microsoft stated that it is continuously monitoring, making adjustments, and implementing additional controls to strengthen safety filters and prevent misuse of the system.
