Britain to Make AI-Generated Child Abuse Images Illegal, Introducing New Offenses

LONDON: Britain announced on Saturday that it will make it illegal to use artificial intelligence tools to create child sexual abuse images, becoming the first country in the world to introduce specific AI-related sexual abuse offenses.

Under current law in England and Wales, possessing, creating, distributing, or viewing explicit images of children is already a criminal offense. The new legislation, however, will target the use of AI to manipulate or “nudeify” real-life images of children.

The move comes amid growing concern about online criminals using AI to create abusive material: the Internet Watch Foundation reports a near five-fold increase in AI-generated child sexual abuse images in 2024.

Britain’s interior minister, Yvette Cooper, emphasized the urgency of tackling both online and offline child sexual abuse, stating, “We know that sick predators’ activities online often lead to them carrying out the most horrific abuse in person. It is vital that we tackle child sexual abuse online as well as offline so we can better protect the public from new and emerging crimes.”

AI tools are also being used by predators to conceal their identities and blackmail children with fake images, forcing them into further abuse, including live-streamed exploitation, the government added.

The new offenses will criminalize the possession, creation, or distribution of AI tools designed to produce child sexual abuse content, as well as the possession of AI “paedophile manuals” providing instructions for creating such material.


Additionally, those operating websites that distribute child sexual abuse content will face specific penalties. Authorities will also be empowered to unlock digital devices for inspection in connection with these crimes.

These measures will be incorporated into the upcoming Crime and Policing Bill in Parliament. Earlier this month, Britain also announced plans to criminalize the creation and sharing of sexually explicit AI-generated “deepfake” videos, images, and audio clips.
