Just saw that OpenAI shelved their adult chat feature for ChatGPT after some pretty sketchy test results came out. Apparently their age-checking system was misidentifying teenagers as adults like 12% of the time, which is... not great when you're trying to keep minors away from adult content. Their ethics council flagged it back in January saying it could be risky for vulnerable users, and honestly that makes sense. The whole thing got me thinking about how hard it actually is to build safe AI features, especially ones dealing with age verification and adult chat content. They're basically saying "we need to get this right before we launch" which I guess is the responsible move, but it also shows how many edge cases exist in AI safety. Anyone else following this stuff? Seems like every major AI move gets scrutinized now.
