
Rethinking online safety in the age of deepfakes and nudifiers

March 15th, 2026 | Letters and op-eds, News, TFSV

This op-ed was originally published in The Straits Times on 7 March 2026.

Written by Sugidha Nithiananthan and Racher Du.

In 2023, Ms Mathilda Huang was horrified to discover deepfake nude images of herself on “seedy websites” and had to spend time pursuing their removal. In 2024, schoolboys from the Singapore Sports School created and circulated deepfake nude images of female students and teachers. And now Grok – a chatbot developed by xAI and available on X and on mobile apps – has been used to “nudify” real people without consent, with images quickly shared across online networks.

As Singapore deepens its commitment to the development and use of artificial intelligence in Budget 2026 – including plans for a National AI Council, an AI park and national AI initiatives in various sectors – we must confront a parallel reality: AI can also be weaponised. The question is not whether harm will occur with any particular tool, but whether we will act before it does.

Technology can harm

Technology may be neutral, but it has the capacity to enable, accelerate and amplify harm. In 2025, the Institute for Strategic Dialogue documented dozens of nudification applications (also known as “nudifiers”) and websites that collectively drew more than 21 million visitors. There were at least 290,000 mentions of such tools on X in just two months that year. The abuse of Grok is not an isolated incident; it is part of a normalised, networked ecosystem where abuse occurs.

The harm is real, even if the images are synthetic. In a chilling insight into the minds of such abusers, the “owner” and “web developer” of the website “Mr Deepfakes” gave an interview in a 2022 BBC documentary, claiming that consent from the women wasn’t required as “it’s a fantasy, it’s not real”. But real women are being harmed by these non-consensual deepfakes. Responsibility lies not only with the creators of these deepfakes, but also with the consumers who drive the demand and exacerbate the harm.

Damage control is not enough

Following public backlash, including bans or access blocks by countries like Malaysia and Indonesia, xAI restricted access to Grok’s image-generation features in jurisdictions where such content is illegal. Yet many countries lack comprehensive laws against AI-enabled sexual abuse. When technology outpaces regulation – and lacks built-in safeguards – victims are left exposed, and those who cause or enable harm avoid accountability.

Worse, xAI’s measures to restrict access to its “nudifier” functions on Grok do not appear to be successful. A recent investigation by nine Reuters journalists, conducted after xAI announced these measures, found that Grok can still be used to generate sexualised images – even when explicitly told that the subject has not given consent.

Singapore has strengthened online protections, most recently through the Online Safety (Relief and Accountability) Act and the App Distribution Services code, which comes into effect in March 2026. We applaud these much-needed measures. However, they largely respond after harm has occurred.

New tools with the potential for abuse are being created every day. Increasingly, there are also networks of malicious actors who band together to escalate the harm and bypass safeguards. We should look beyond reactive measures to see what can be done to actively maintain safe online spaces.

These harms are not unforeseen consequences, but predictable outcomes of deploying powerful tools without sufficient safety in design and within online spaces that are not actively monitored for safety. In an era of rapidly evolving AI tools, reaction is no longer enough.

AWARE therefore recommends several measures to enhance safety online.

First, safety must be embedded in design. When the Reuters journalists tested similar prompts to those they used with Grok on systems developed by Alphabet (Gemini), OpenAI (ChatGPT) and Meta (Llama), those platforms refused to generate non-consensual sexual content. This shows safeguards are possible. Safety can be mandated by legislating that AI tools deployed locally must meet minimum design safety standards.

Safety should also be championed by the tech industry itself, which should lead the push by setting industry standards of best practices in safety by design. Tech companies should commit to these standards and require all their developers to abide by them. Governments can play an important part by incentivising such initiatives.

Second, online service providers and platform administrators should have a proactive duty of care to ensure safe online environments. Rather than acting only after complaints are filed, they should be obliged to regularly and meaningfully monitor their online spaces for harmful content and networks, and to take necessary action when these are detected. Jurisdictions with similar measures include the United Kingdom, whose Online Safety Act 2023 imposes a duty to assess and mitigate risks, and the European Union, whose Digital Services Act requires very large platforms to conduct annual systemic risk assessments, implement mitigation measures and undergo independent audits.

Third, Singapore should consider establishing an independent safety watchdog to complement the Online Safety Commission (OSC). Such a body could audit the effectiveness of safeguards within tools and platforms, assess the efficacy of regulatory measures, track trends and publish transparent data – strengthening safety and identifying gaps in the ecosystem. Such a safety watchdog could have vetted the efficacy of xAI’s measures to restrict access to Grok’s “nudifier” functions to ensure users in Singapore are safe from harm. Publication of platform-specific compliance data has the potential to change corporate behaviour faster than monetary penalties alone, because reputational risk is immediate and wide-reaching.

Address the root cause

Regulation and industry action alone will not solve the problem. When barriers were introduced to restrict Grok’s “nudifier” functions, online discussions quickly emerged advising users how to bypass them or switch to other tools. This reveals a deeper societal issue: a persistent sense of entitlement to women’s bodies and a disregard for consent.

We must confront these attitudes through education and public awareness. The Infocomm Media Development Authority and the OSC should undertake public education campaigns highlighting that the non-consensual creation and sharing of intimate images – deepfake or otherwise – is wrong, illegal and harmful to the women portrayed. We should also teach these lessons from an early age through age-appropriate consent education in schools, so that young people are equipped to understand boundaries and the need for consent in all situations, including those online.

AI will only grow more powerful. If we continue to play catch-up, harm will scale with it. Responsible innovation means anticipating misuse and embedding safeguards from the outset – not waiting for backlash after damage is done.

Singapore’s AI ambitions are bold. But technological progress must be matched with equal ambition for safety, accountability and education. Society deserves both innovation and protection.

Sugidha Nithiananthan is Director of Advocacy and Research and Racher Du is a Research Executive at AWARE.

Photograph by Andrew Ling on Unsplash