Australia will require search engines such as Google and Bing to take steps to stop the sharing of child sexual abuse material generated by artificial intelligence.
Under a new code drawn up by the industry’s biggest players at the government’s request, search engines will be required to ensure such content does not appear in search results, according to a statement from eSafety Commissioner Julie Inman Grant.
She added that the AI features built into search engines must not be capable of producing deepfake versions of such material.
“The use of generative AI has grown so quickly that I think it’s caught the whole world off guard to a certain degree,” Inman Grant said.
The code illustrates how the proliferation of tools that automatically generate lifelike content is reshaping the legal and regulatory landscape for internet platforms.
A representative for the Digital Industry Group Inc, an Australian industry body whose members include Google and Microsoft, said the group was pleased the regulator had accepted the revised version of the code.
Earlier this year, the regulator registered safety codes for a number of other internet services, including social media, mobile applications, and equipment providers. Those codes take effect in late 2023.
The regulator is still developing safety codes for private messaging services and online storage, an effort that has met resistance from privacy advocates around the world.