Musk’s Grok AI under fire for generating explicit images of child actress


Elon Musk’s Grok AI chatbot has come under scrutiny after users allegedly generated explicit images of children, including one of a child actress.

Axios reported that users on the X social media platform used the Grok AI chatbot to digitally remove the 14-year-old star’s clothes over the past few days. syracuse.com is not naming the actress because of her age.

In addition, there has been an increase in reports of users prompting Grok to remove clothing from or add bikinis to images of other women, including rapper Iggy Azalea, who objected to the sexualized images of herself. The “Fancy” rapper took to X on Friday to vent her frustrations, writing: “Grok really has to go.”

The incidents raised concerns about the safety of artificial intelligence, especially since the chatbot is authorized for official government use through an 18-month contract with the Trump administration.

In a Thursday post on X, Grok admitted that there have been “isolated cases where users have requested and received AI images of minors in minimal clothing.”

A separate post from the AI chatbot on Friday warned that xAI, Grok’s parent company, could face “potential DOJ investigations or lawsuits” for producing the images.

“As previously stated, we have identified security flaws and are urgently fixing them – [child sexual abuse material] is illegal and prohibited,” Grok posted on X, formerly known as Twitter. The generated images appear to violate Grok’s own terms of service, which prohibit the sexualization of children.

The images also appear to violate the Take It Down Act, which President Trump and First Lady Melania Trump signed in May 2025. The law prohibits the “knowing publication” of, or threat to publish, intimate images without a person’s consent, including “deepfakes” created by artificial intelligence.
