Elon Musk’s X: From child safety vows to tool for AI-generated porn
Elon Musk’s social media platform X is facing a crisis as its built-in artificial intelligence tool, Grok, is used to generate a flood of non-consensual deepfake pornography.
The chatbot, developed by Musk’s xAI company, is being widely used to create nude images of real people without their consent. Some of the images appear to depict minors, and estimates suggest Grok generates roughly one such image every minute.
The surge in explicit AI content directly contradicts Musk’s 2022 pledge that eliminating child exploitation would be his "priority number one" following his acquisition of the platform, then known as Twitter.
Since the takeover, the platform has struggled with an influx of bot activity and violent content. While X has historically permitted "consensually produced adult content," the emergence of Grok-generated deepfakes has created a new wave of non-consensual material.
Musk has threatened "consequences" for users who abuse the tool. However, his primary administrative response has been to begin charging users a fee to generate images using the Grok software.
Saigon Sentinel Analysis
The paradox of Elon Musk’s leadership at X has reached a critical inflection point. Despite early promises to sanitize the platform’s digital ecosystem, Musk has instead overseen the development of a sophisticated engine for the production of toxic content. The decision to hollow out internal Trust and Safety teams while simultaneously deploying a generative AI capable of producing non-consensual, explicit imagery marks a profound disconnect between corporate rhetoric and product reality.
The core issue with Grok extends beyond the presence of explicit material; it lies in the systematic automation and normalization of deepfake abuse. By lowering the barrier to entry for the creation of non-consensual content, X has shifted from merely facilitating the viral spread of existing harms to actively manufacturing them. Musk’s reactive stance—oscillating between hollow threats of consequences and the aggressive commercialization of these controversial features—suggests a strategic preference for engagement metrics and subscription revenue over fundamental user safety.
For the global community, the implications of this shift are borderless. The technology’s capacity for reputational sabotage and digital harassment poses a universal threat, where individuals in markets ranging from the West to Southeast Asia are equally vulnerable to automated defamation. Ultimately, the Grok controversy serves as a definitive case study in the failure of tech industry self-regulation. It underscores an urgent need for robust legislative frameworks to govern generative AI, moving past an era where platforms are permitted to internalize profits while externalizing the societal costs of their tools.
Impact on Vietnamese Americans
This issue is a growing concern for Vietnamese-American families, posing new challenges for parents trying to protect their children from toxic content and AI-generated cyberbullying. As safety measures weaken on major platforms like X, the burden of monitoring the younger generation’s online presence has grown heavier for households across our community, from the hubs of Little Saigon to families nationwide.
