Tuesday, April 7, 2026

Projected 70 Million Nigerian Women and Girls at Risk of AI-Driven Online Abuse by 2030 – Study

A recent report from Gatefield warns that by 2030, as many as 70 million women and girls in Nigeria could be subjected to online abuse facilitated by artificial intelligence if proper legal measures are not implemented. The report highlights the urgent need for comprehensive regulations.


A report recently published by Gatefield issues a stark warning that up to 70 million women and girls in Nigeria could face artificial intelligence-enabled online abuse by 2030 unless decisive legal action is taken to establish protective measures.

The analysis, titled “Industrialised Harm: The Scale of AI-Facilitated Violence in Nigeria,” estimates that 30 to 35 million women and girls could be targeted annually through methods such as deepfakes, impersonation, and organised harassment campaigns unless urgent regulatory measures are enacted.

Gatefield has based its projections on the rapid expansion of internet access in Nigeria, observed trends in online abuse, and the expansive impact of generative AI technologies.

The findings draw on data presented in the State of Online Harms 2025 report, along with publicly available statistics and conservative predictive modelling.

Examining the data, PREMIUM TIMES notes that Nigeria's internet user base is expected to approach 200 million by 2030, nearly half of whom will be women. Current statistics from Gatefield reveal that around half of Nigerian internet users experience online harm, with women constituting 58% of those affected.

The report distinguishes between general exposure to harmful online content and specific instances of targeting, such as non-consensual sexual imagery, impersonation, and organized misinformation. Further analysis suggests that while 50-60% of women and girls online could be exposed to harmful content each year, 25-30% may face direct targeting through AI-driven methods, even with conservative modeling approaches.

[Image: an online stalker, used to illustrate the story]

Aside from the reputational and financial damage, the report underscores the profound psychosocial effects on women who are victims of online abuse. It cites studies of Nigerian women who have experienced non-consensual image dissemination, in which nearly 90% reported depression or suicidal ideation, with some contemplating or attempting suicide.

The study indicates that generative AI amplifies these harms by increasing the speed, realism, and reach of abusive content, often outpacing victims' ability to respond effectively.

To illustrate how AI-facilitated abuse manifests, the report presents recent Nigerian cases, pointing out enforcement gaps in current laws. It highlights the instance of Ayra Starr, whose image was manipulated to create a digitally altered nude deepfake via X’s AI chatbot, Grok. Although the content was reported, it proliferated across various platforms before the responsible account was suspended, without any criminal probe initiated.

The report also mentions Senator Natasha Akpoti-Uduaghan, who was targeted with fake deepfake audio and video amid her public accusations of sexual harassment against Senate President Godswill Akpabio, underscoring the strategic use of AI to damage women's credibility.

Another notable incident involves Kehinde Bankole, a Nollywood actress, who faced AI-generated nude deepfakes in 2025. Gatefield asserts that her experiences reflect a wider trend of abuse facilitated by inadequate content moderation and legal shortcomings, with generative AI tools exploiting publicly available images.

The report reveals that Nigeria lacks a cohesive legal and institutional framework to tackle problems related to AI-powered abuse. It also notes the absence of specific regulations regarding AI, undefined legal criteria for deepfakes or synthetic media, and the lack of acknowledgment for AI-enabled gender-based violence as a unique issue.

Existing cybercrime and criminal statutes are not equipped to address the impact of automated systems that amplify abuse across multiple platforms and jurisdictions. Institutional oversight is also disjointed, with agencies such as the National Information Technology Development Agency (NITDA), the Nigerian Communications Commission (NCC), law enforcement, and the Ministry of Justice operating without unified objectives concerning AI-related offences, which limits victims' ability to seek effective recourse.

The report compares Nigeria’s regulatory backdrop with regions that have implemented more robust protections. In the European Union, regulatory measures such as the AI Act and General Data Protection Regulation (GDPR) enforce transparency and impose financial repercussions. France has taken steps to penalize non-consensual sexual deepfakes through fines and incarceration.

The United States' TAKE IT DOWN Act mandates efficient removal of harmful content, while the UK's Online Safety Act establishes responsibilities for platforms regarding user welfare. Gatefield argues these precedents indicate a growing global acknowledgment of AI-facilitated harm, contrasting sharply with Nigeria's lack of specialized legislation and regulatory resources.

In terms of policy implications, Adewunmi Emoruwa, chief executive officer of Gatefield, remarked that artificial intelligence has fundamentally transformed the landscape of online harm in Nigeria. He noted that such technologies lower the cost of producing abusive content, accelerate its spread, and increase its psychological toll, especially on women in the public eye.

Emoruwa pointed out that tools like deepfake generators enable even ordinary users to create damaging content on a large scale, frequently targeting women in prominent positions. He emphasized that Nigeria's legal structure is inadequately prepared to handle the complexities of automated systems facilitating large-scale abuse, leaving victims struggling with sluggish and unclear reporting processes.

Both Emoruwa and Shirley Ewang, advocacy head at Gatefield, stressed the necessity for stringent platform responsibilities, clear legal definitions, and swift response mechanisms to shift accountability from victims to those systems that contribute to the harm. They remarked that deepfakes, misinformation, and coordinated harassment efforts are increasingly weaponized against women in political, media, and activist spheres during a period of ongoing growth in Nigeria’s digital demographic.

They concluded that addressing AI-driven abuse mandates coordinated efforts among the government, technology platforms, and civil society through enforceable regulations, victim-centered approaches, and broad digital literacy campaigns. Together, these stakeholders can transition Nigeria from a reactive response to a proactive, rights-based AI governance framework.

In light of these findings, Gatefield urges Nigerian governmental bodies to formulate mandatory guidelines targeting AI-facilitated violence, proposing clear provisions to cover non-consensual sexual content, impersonation, and AI-generated disinformation. The report also advocates for the establishment of precise legal definitions for deepfakes and synthetic media, obligating platforms operating in Nigeria to categorize AI-generated content, conduct routine assessments of algorithmic risks, and issue regular reports detailing moderation and takedown measures.

Furthermore, it calls for accessible mechanisms for reporting and redress, enforceable timelines of 24 to 48 hours for removing harmful AI-driven content, and specific protections for women, children, and vulnerable populations. The study underscores the importance of recognizing the distinction between general exposure to harmful content and targeted, deliberate abuse for formulating effective responses ranging from content moderation to enforceable legal frameworks and victim aids.
