Google plans new policy, targets AI apps

Alex Omenye

Google is going after potentially troublesome generative AI apps with a new policy that will go into effect early next year.

The policy requires Android application developers who publish their work on the Play Store to include a way for users to report or flag inappropriate content created by AI.

Under the new policy, developers must use these reports to inform their own filtering and moderation approaches, and flagging and reporting must be possible from within the app itself.

The policy change comes in response to the proliferation of AI content-generating apps, some of which users have misled into producing NSFW content.

AI image generators raise even more serious problems, as paedophiles have been found using open-source AI tools to generate child sexual abuse material on a large scale.

Concerns have also been raised about the use of artificial intelligence to produce “deepfakes”, fabricated images designed to deceive or mislead voters ahead of upcoming elections.

In its announcement, Google reminded developers that all apps, including those that generate content using artificial intelligence, must adhere to its existing developer policies, which prohibit restricted content such as CSAM and other material that enables dishonest behaviour.

