Nigeria is poised to enact legislation that would make it one of the first African countries to formally regulate artificial intelligence, strengthening oversight of a rapidly expanding digital market that has seen years of minimal control as global technology firms grew their presence.
The proposed National Digital Economy and E-Governance Bill would grant regulators broader authority over data, algorithms and digital platforms, addressing a regulatory gap that has persisted since Nigeria released its draft AI strategy in 2024.
The bill, expected to be approved by lawmakers before the end of March, would impose tougher oversight on higher-risk AI systems, such as those deployed in finance, public administration, surveillance and automated decision-making. It would also require developers to submit annual impact reports covering risks, safeguards and performance.
The proposal would also empower regulators to impose fines of up to 10 million naira ($7,000) or 2 per cent of an AI provider’s annual gross revenue in Nigeria, though it does not specify how such penalties would be calculated.
The legislation aims to put rules in place early, rather than after the fact, as AI use expands rapidly across finance, public services and the private sector, Kashifu Abdullahi, director general of the National Information Technology Development Agency, told Bloomberg in an interview.
If approved, the bill would place Nigeria among the first African countries to implement an economy-wide framework for regulating artificial intelligence, Abdullahi added.
Although nations such as Mauritius, Egypt and Benin have outlined AI strategies, they lack comprehensive laws governing the technology.
The legislation would also introduce ethical standards on transparency, fairness and accountability, while applying a risk-based regulatory model similar to frameworks taking shape in Europe and parts of Asia — a shift that could significantly influence how companies from Google to Chinese cloud providers operate in Africa’s most populous nation.
“In the area of governance, we need to put the safeguards and guardrails in place to make sure the AI we are building is within that guardrail,” Abdullahi said. “That way, if there are bad actors, you can easily detect and contain them.”
The proposed legislation would empower regulators to demand disclosures, issue compliance directives, and suspend or restrict AI systems found to be unsafe or in breach of the rules.
It also introduces supervised testing frameworks that would let startups and institutions experiment with new technologies under regulatory oversight, a provision intended to support innovation alongside enforcement.