Thousands of AI-generated child abuse reports flood into the US.

TLDR:

  • The US National Center for Missing and Exploited Children (NCMEC) received 4,700 reports in 2023 of AI-generated content depicting child sexual exploitation.
  • NCMEC expects the problem to grow as AI technology advances.

The US National Center for Missing and Exploited Children (NCMEC) reported receiving 4,700 reports last year of content generated by artificial intelligence (AI) that depicted child sexual exploitation. The figure reflects growing concern among child safety experts and researchers that generative AI could exacerbate online exploitation. NCMEC, which serves as the national clearinghouse for reporting child abuse content to law enforcement, has not yet published its total for 2023 across all sources; in 2022, it received reports covering about 88.3 million files.

John Shehan, senior vice president at NCMEC, confirmed that the organization is receiving reports from generative AI companies, online platforms, and members of the public, indicating that the problem spans many sources. The chief executives of Meta Platforms, X, TikTok, Snap, and Discord also recently testified at a Senate hearing on online child safety, where they were questioned about their efforts to protect children from online predators.

A report by researchers at the Stanford Internet Observatory highlighted that abusers could use generative AI to repeatedly harm real children by creating new images in a child's likeness. Content flagged as AI-generated is also becoming "more and more photo-realistic," making it harder to determine whether the victim is a real person. To address the issue, OpenAI, the creator of ChatGPT, has established a process for sending reports to NCMEC, and conversations with other generative AI companies are underway.

In conclusion, the rise of AI-generated child abuse content poses a significant risk in online spaces and demands the attention of both technology companies and law enforcement agencies. Effective strategies for identifying and mitigating this form of exploitation will be needed to protect children from harm.