The Nigerian government on Sunday joined the US, the UK and other nations, eighteen in all, in signing a 20-page document that is expected to become the gold standard for AI safety.
The countries agreed that companies creating and deploying AI must ensure their systems prioritise the protection of their customers and the general public by guarding against misuse.
The non-binding agreement addresses concerns about preventing hackers from exploiting AI technology and makes recommendations such as releasing models only after sufficient security testing. The document also offers guidance on protecting AI systems from abuse, maintaining data integrity, and putting strict screening procedures in place for software vendors.
Nigeria is the only African country to participate in the pact, and the move is notable given that the nation has not yet enacted AI legislation.
Several governments have weighed in on the worldwide debate over AI safety and regulation. The recent turmoil at OpenAI also stirred controversy around AI safety, with some fearing that the technology was developing too quickly without its effects being taken into account.
The introduction of language models such as GPT-3 raised questions about AI's potential to produce inaccurate or harmful information, and the company's opaque research practices prompted further concerns about oversight and accountability.
Initiatives like this one from governments worldwide underscore the significance of AI and the need to steer its advancement while guaranteeing everyone's safety.