YouTube deploys AI to detect child users


YouTube has begun using artificial intelligence (AI) to detect children posing as adults on its platform, in a bid to protect minors from inappropriate content.

The measure, now being introduced in the United States, comes as Google-owned YouTube and other platforms, such as Instagram and TikTok, face mounting pressure to shield young users from adult-oriented material.

James Beser, director of product management for YouTube Youth, said the AI, a form of machine learning, will estimate a user’s age based on factors including video preferences and account history.

“This technology will allow us to infer a user’s age and, regardless of the birthday listed on the account, deliver age-appropriate experiences and protections. We’ve already used this approach successfully in other markets,” Beser explained.

The AI system will complement existing age-verification tools. Users flagged as likely minors will be prompted to confirm their age using a credit card, selfie, or government-issued ID.

The move comes amid wider global debate about children’s safety online. In Australia, Communications Minister Anika Wells recently announced that children under 16 would be banned from YouTube from 10 December under new social media laws. She cited a report showing four in ten Australian children had encountered harmful content on the platform.

YouTube maintains that it is a video-sharing service rather than a social media network, but the Australian legislation, among the world’s strictest, has sparked interest from other countries considering similar restrictions.