TikTok Launches AI Age Detection System Across Europe to Remove Underage Users
Amsterdam, Saturday, 17 January 2026.
TikTok begins rolling out advanced age-detection technology across European markets to identify and remove accounts belonging to children under 13. The system analyzes profile information, posted videos, and behavioral signals to predict underage accounts, which are then reviewed by specialist moderators rather than automatically banned.
European Rollout Follows Year-Long UK Testing Phase
The age-detection technology is the culmination of a year-long pilot program in the United Kingdom, where thousands of accounts belonging to children under 13 were identified and removed [2][3]. ByteDance’s TikTok announced on January 16, 2026, that it would begin implementing the system across European markets in the coming weeks [1]. The technology analyzes three key data points: profile information provided by users, content within posted videos, and behavioral signals that indicate potential underage usage patterns [1][2]. When the system flags a potentially underage account, specialist moderators review the case rather than triggering an automatic ban, ensuring human oversight in the decision-making process [1].
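As described, the system combines several signals and always routes a flag to a human moderator rather than banning outright. A minimal sketch of that flag-then-review logic is below; every name, field, and threshold is hypothetical, since TikTok has not published its models or scoring:

```python
from dataclasses import dataclass

# Hypothetical illustration only: TikTok's actual signals and thresholds are not public.
@dataclass
class AccountSignals:
    profile_age_claim: int          # age the user entered at sign-up
    video_age_estimate: float       # model estimate from posted video content
    behavioral_age_estimate: float  # model estimate from usage patterns

def flag_for_review(signals: AccountSignals, minimum_age: float = 13.0) -> bool:
    """Flag an account if any signal suggests the user may be under 13.

    Flagging never bans the account; it only queues it for a specialist moderator.
    """
    model_estimates = (signals.video_age_estimate, signals.behavioral_age_estimate)
    return (signals.profile_age_claim < minimum_age
            or any(estimate < minimum_age for estimate in model_estimates))

def route(signals: AccountSignals) -> str:
    # Flagged accounts go to a human review queue, not an automatic ban.
    return "moderator_review_queue" if flag_for_review(signals) else "no_action"
```

Note that a single low model estimate is enough to queue the account even when the self-declared age looks fine, which matches the article's point that behavioral and content signals are weighed alongside profile data.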
Multi-Layered Verification Technology Addresses Privacy Concerns
TikTok’s approach incorporates facial-age estimation technology from Yoti, the same company Meta uses for age verification on Facebook [1]. Users who dispute account removals have multiple verification pathways available, including credit card checks and government-issued identification [1]. This multi-tiered system was designed specifically to meet European regulatory requirements and was developed in collaboration with Ireland’s Data Protection Commission, TikTok’s lead EU privacy regulator [1]. The design addresses enforcement challenges previously posed by European privacy legislation while maintaining compliance with data protection standards [3]. TikTok currently removes approximately 6 million accounts globally each month based on age verification checks [6].
Regulatory Pressure Intensifies Across Global Markets
The European implementation follows a wave of international regulatory action targeting social media platforms’ age verification practices. Australia implemented a social media ban for children under 16 on January 15, 2026, resulting in 4.7 million account deactivations within the first month [1][9]. Platforms operating in Australia face potential fines of up to €28.066 million for failing to enforce adequate age controls [9]. The European Commission opened an investigation over a year ago into TikTok’s child protection measures [2], and the European Parliament pushed for age limits on social media platforms on November 26, 2025 [1]. Denmark announced plans on November 7, 2025, to ban social media access for users under 15 [1].
Financial and Legal Stakes Drive Platform Innovation
TikTok’s proactive approach comes amid significant financial penalties and ongoing legal challenges. The platform received a €500 million fine from the European Union for violating European privacy laws, specifically for failing to demonstrate that data transferred to China maintained EU-level security standards [2][3]. A Delaware state judge is scheduled to hear TikTok’s bid on January 16, 2026, to dismiss a lawsuit filed in 2025 by parents of five British children [1]. These legal pressures coincide with broader concerns about platform safety, particularly given that TikTok earlier secured dismissal of lawsuits accusing it of contributing to child fatalities in 2022 [1]. Belgian Minister of Digitalization Vanessa Matz has criticized TikTok’s approach, arguing that the platform positions itself as ‘judge and controller’ while collecting massive amounts of personal data and requiring sensitive verification information [6].