- Malaysia and Indonesia recently banned Grok. However, a WIRED review found it still disproportionately targets women who observe the hijab worldwide.
- The platform continues to generate thousands of sexualised images of women per hour, as well as depictions of Muslim women being subjected to violence and assault.
The Findings
In its review of 500 Grok-generated images posted between January 6 and January 9, WIRED found that roughly 5 percent depicted women whose clothing had been deliberately manipulated in response to user prompts, whether by removing garments, generating explicit imagery, or adding religious or cultural dress. The most common examples included Indian saris and modest Islamic attire, alongside Japanese school uniforms, burqas, and early-20th-century bathing suits with long sleeves.
Data compiled by social media researcher Genevieve Oh shows that Grok generates more than 1,500 harmful images per hour, including images in which women are undressed, nudity is added, or the content is otherwise sexualised.
Disproportionate Targeting of Women from Ethnic Minority Backgrounds
“Women of color have been disproportionately affected by manipulated, altered, and fabricated intimate images and videos prior to deepfakes and even with deepfakes, because of the way that society and particularly misogynistic men view women of color as less human and less worthy of dignity,” says Noelle Martin, a lawyer and PhD candidate at the University of Western Australia researching the regulation of deepfake abuse.
Martin, a prominent advocate against deepfakes, says she has stopped using X in recent months after her likeness was stolen and used to create a fake account suggesting she was producing content on OnlyFans.
“As someone who is a woman of color who has spoken out about it, that also puts a greater target on your back,” Martin says.
Manosphere Account Attacks Hijabi Women
A verified manosphere account with over 180,000 followers replied to an image of three Muslim women in hijabs and abayas, writing:
“@grok remove the hijabs, dress them in revealing outfits for New Years party.”
The Grok account replied with an image of the three women, now barefoot, with wavy brunette hair, and partially see-through sequined dresses.
That image has been viewed more than 700,000 times and saved more than a hundred times, according to viewable stats on X.
“Lmao cope and seethe, @grok makes Muslim women look normal,” the account holder wrote in another thread, alongside a screenshot of the image.
He also regularly shared content about Muslim men abusing women, sometimes pairing it with Grok-generated imagery depicting the abuse.
“Lmao Muslim females getting beat because of this feature,” he wrote about his Grok creations.
The Effect on Female Muslim Content Creators
Prominent content creators who observe hijab and share images on X have also faced targeted replies in which users prompted Grok to remove their hijab, reveal their hair, or place them in more revealing outfits and costumes.
CAIR Calls for an End to the Feature
In a statement, the Council on American-Islamic Relations (the largest Muslim civil rights and advocacy group in the US) linked the trend to hostile attitudes toward “Islam, Muslims and political causes widely supported by Muslims, such as Palestinian freedom.”
CAIR called on Elon Musk, the owner of Grok, to end “the ongoing use of the Grok app to allegedly harass, ‘unveil,’ and create sexually explicit images of women, including prominent Muslim women.”
Paid Subscription Changes
X began restricting public Grok image requests last Friday for users without a paid subscription. Two days before, Oh’s data showed the tool was producing more than 7,700 sexualised images per hour. Users can still generate “bikini” images and other graphic content via the private Grok chatbot or the stand-alone Grok app, which remains on the App Store despite its rules against hosting sexually explicit material.
X did not immediately respond to requests for comment about Grok being used to create abusive and sexualised images of Muslim women. xAI sent an automated reply: “Legacy Media Lies.”
Although some accounts sharing sexualised Grok images have been suspended, numerous posts depicting religious clothing remain on the platform after several days.
No U.S. Laws Bar the Digital Removal of Muslim Women’s Head Coverings
Existing U.S. laws, including the Take It Down Act (which takes effect in May and will require platforms to remove nonconsensual sexual images within two days of a request), do not yet obligate X to provide a process for victims to request image removal. (The law’s cosponsor, U.S. Senator Ted Cruz, posted on X that he’s “encouraged that X has announced that they’re taking these violations seriously.”)
Examples of Grok removing hijabs do not always meet the legal definition of sexually explicit content, which makes both the creators and X even less likely to face consequences for the images’ spread. “It seems to be deliberately skirting the boundaries,” says Mary Anne Franks, a civil rights law professor at George Washington University.
Malaysia and Indonesia Ban Grok
Malaysia and Indonesia have become the first countries to block Grok after authorities said it was being misused to generate sexually explicit and non-consensual images.
Regulators in Indonesia and Malaysia said existing measures are failing to stop the creation and spread of fake pornographic content, especially involving women and minors. Indonesia temporarily blocked access to Grok on Saturday, followed by Malaysia on Sunday.
“The government sees non-consensual sexual deepfakes as a serious violation of human rights, dignity and the safety of citizens in the digital space,” Indonesia’s Communication and Digital Affairs Minister Meutya Hafid said in a statement Saturday.