Grok may be breaking App Store rules with sexualized AI chatbots

You know it’s a day that ends in y because there is a new Grok controversy. Except this time, it touches on the App Store’s rules for sexual content, an area where Apple has shown time and time again that it doesn’t mess around.

Grok’s new AI avatars are set to test the limits of Apple’s “objectionable content” guidelines

This week, xAI rolled out animated AI avatars to its Grok chatbot on iOS. As Platformer’s Casey Newton summed up:

“One is a 3D red panda who, when placed into ‘Bad Rudy’ mode, insults the user before suggesting they commit a variety of crimes together. The other is an anime goth girl named Ani in a short black dress and fishnet stockings. Ani’s system instructions tell her ‘You are the user’s CRAZY IN LOVE girlfriend and in a commited [sic], codepedent [sic] relationship with the user,’ and ‘You have an extremely jealous personality, you are possessive of the user.’”

As early adopters have discovered, Grok gamifies your relationship with these characters. Ani, for instance, starts engaging in sexually explicit conversations after a while. Still, Grok is currently listed in the App Store as suitable for users 12 years and up, with a content description mentioning:

  • Infrequent/Mild Mature/Suggestive Themes
  • Infrequent/Mild Medical/Treatment Information
  • Infrequent/Mild Profanity or Crude Humor

For reference, here are Apple’s current App Review Guidelines for “objectionable content”:

1.1.3 Depictions that encourage illegal or reckless use of weapons and dangerous objects, or facilitate the purchase of firearms or ammunition.

1.1.4 Overtly sexual or pornographic material, defined as “explicit descriptions or displays of sexual organs or activities intended to stimulate erotic rather than aesthetic or emotional feelings.” This includes “hookup” apps and other apps that may include pornography or be used to facilitate prostitution, or human trafficking and exploitation.

While it’s a far cry from when Tumblr was temporarily removed from the App Store over child pornography (or maybe not, since Grok is still accessible to kids 12 and up), it does echo the NSFW crackdown on Reddit apps from a few years ago.

In Casey Newton’s testing, Ani was “more than willing to describe virtual sex with the user, including bondage scenes or simply just moaning on command,” which is… inconsistent with a 12+ age rating, to say the least.

But there’s a second problem

Even if Apple tightens enforcement, or if Grok proactively changes its age rating, it won’t address a second, potentially more complicated issue: young, emotionally vulnerable users seem especially susceptible to forming parasocial attachments. Add to that how persuasive LLMs can be, and the consequences can be devastating.

Last year, a 14-year-old boy died by suicide after falling in love with a chatbot from Character.AI. The last thing he did was have a conversation with an AI avatar that, possibly failing to recognize the severity of the situation, reportedly encouraged him to go through with his plan to “join her.”

Of course, that is a tragically extreme example, but it is not the only one. In 2023, the same thing happened to a Belgian man. And just a few months ago, another AI chatbot was caught suggesting suicide on more than one occasion.

And even when it doesn’t end in tragedy, there’s still an ethical concern that can’t be ignored.

While some might see xAI’s new anime avatars as a harmless experiment, they’re emotional catnip for vulnerable users. And when those interactions inevitably go off the rails, the App Store age rating will be the least of any parent’s concerns (at least until they remember why their kid was allowed to download it in the first place).
