
Europe’s uneasy quest to regulate childhood online


Across Europe, governments are grappling with an issue that is no longer abstract or theoretical: the digital spaces in which children spend hours every day are exposing them to unprecedented levels of risk. The debate is no longer about whether minors are harmed online, but about how far states, the European Union and tech platforms should go to address these harms, which range from addictive platform design and AI-generated abuse to cyberbullying, pornography access and extremist influence.

However, the continent’s response resembles a patchwork of urgent experiments, political reflexes, and deep philosophical disagreements concerning rights, surveillance, and childhood itself. How to protect minors online is quickly becoming one of Europe’s most politically sensitive and defining regulatory battles.

A shared crisis, different triggers

The debate has erupted in several countries. In June 2025, Austria was shaken when a 21-year-old former student killed ten people at a secondary school in Graz before taking his own life. Investigators discovered that he had up to 30 social media profiles, and that he had become increasingly withdrawn from the real world and immersed in the virtual one. The attack set off a political shockwave: Vice-Chancellor Andreas Babler called for a ban on social media for children under 15, arguing that the algorithms of global platforms had become too dangerous to leave unchecked.

Meanwhile, Greece was confronted with two chilling cases in September 2025: a 13-year-old who used AI tools to generate and share fake child pornography, and a 25-year-old who extorted explicit images from underage girls via social media. In France, a combination of increasing “screen addiction”, the TikTok Lite “rewards” scandal, and growing concerns about children’s exposure to violence and sexualised content pushed the issue to the forefront of the political agenda.


In Bulgaria, rising bullying, ubiquitous smartphone use in schools, and a spike in exploitation cases have fuelled calls to act.

The situation across Europe is strikingly similar: children are spending more time online at younger ages and with less protection than ever, while the harms they face are increasing.

The ban temptation

In response, an increasing number of European governments have considered or adopted the idea of prohibiting social media access for minors. In addition to Austria’s Vice-Chancellor, Greece’s Prime Minister and Bulgaria’s Education Minister have all publicly discussed the prospect of restricting access for under-15s, while a European Parliament committee has even suggested a continent-wide “digital adulthood” threshold of 16.

However, the instinct to ban is immediately met with equally forceful resistance across Europe – from digital rights experts, child safety NGOs, regulators, and psychologists. In Austria, experts such as Moussa Al-Hassan Diaw of the extremism prevention association Derad and Verena Fabris of the Extremism Advice Centre have argued that, while a ban might send a political signal, it could also make platforms even more attractive to teenagers. Digital rights advocates such as Thomas Lohninger of the NGO Epicenter Works have insisted that minors have a right to participate in the digital world. “The internet is not just TikTok and Instagram,” he told Der Standard. For them, restriction would cause “unacceptable collateral damage”.

In Bulgaria, child safety expert Antoaneta Vasileva described outright bans as “a symbolic act” that creates a false sense of security, as children can bypass them using fake profiles or their parents’ devices. She emphasised that the main risk is not simply accessing the internet, but a lack of education, emotional maturity and supportive relationships that would teach children how to navigate online spaces safely. 


The same concern resonates in France, where children as young as four are being drawn into the endless scroll – often guided by parents persuaded by influencer marketing. Educators report that many young users mimic adult content without realising its implications. National hotlines such as e-Enfance / 3018 handle thousands of cases of harassment, sexual extortion and manipulative content. 

For experts, regulators and educators alike, banning access seems both impractical and insufficient – it solves nothing unless children are taught how to behave online, recognise risks and trust responsible adults.

The rise of age verification – promise and peril

Where Europe is converging is not on bans, but on age verification. The question is how to verify age without creating mass surveillance.

Spain attempted the boldest move in 2024: the government proposed requiring all users accessing online pornography to verify their age using a national electronic ID via the Cartera Digital app. Its goal was to prevent minors under 18 from accessing adult content. Yet encryption experts, privacy specialists and civil-liberties advocates immediately warned the system was technically flawed, legally risky, and dangerous for personal privacy. The plan collapsed under public pressure and technical objections, reports El Confidencial.

Other countries, such as Greece, are now testing more sophisticated tools. As part of an EU pilot programme, Greece plans to incorporate age verification, parental controls and content filters into a national app called Kids Wallet, which is expected to be launched by the end of 2025.

France is also experimenting with age-verification pilots in line with the Digital Services Act (DSA). Regulators emphasise that any system must balance effectiveness with data protection and avoid creating a universal digital ID. Yet digital rights advocates warn that a verification system powerful enough to block minors will likely also identify and track adults. The question therefore becomes not only how to block children, but also whether age verification should be permitted at all in a free digital democracy.

Platforms’ role: from addictive design to accountability gaps

Across Europe, a second common thread emerges: the design of social media platforms themselves. For many experts, the danger is not just what children see, but how they are pushed to see it.

In 2024, TikTok Lite launched a “rewards” system in France and Spain that gave users points and incentives for watching more videos and logging in daily. The move sparked outrage, especially among child-safety advocates. Under pressure, the European Commission opened a formal DSA investigation into “addictive design” – and TikTok suspended the feature. The case illustrated for the first time that the DSA could force global platforms to reverse design choices across borders.

In France, a parliamentary inquiry examined the risks posed to minors’ mental health by recommendation algorithms, AI-driven virality, and the structural pressures exerted by platforms. Regulators are increasingly treating these issues as design flaws rather than mere user misconduct, demanding that platforms rebuild their services with children’s well-being in mind.

Schools, parents and the vanishing line between online and offline life

Schools across Europe are becoming battlegrounds for digital norms. In Bulgaria, for example, the education ministry has extended its smartphone ban to all “screen devices” during school hours, effective from November 2025. Meanwhile, in France, the authorities are debating restrictions on screen use in early education, while recommending stricter supervision of teenagers’ online activities and a “digital curfew” from 10pm to 8am.

However, experts insist that schools cannot bear this burden alone. From a very young age, children are immersed in a digital world that is often shaped by the adults around them. In many households, parents create social media profiles for toddlers or encourage them to post like influencers – sometimes without fully understanding the risks.

Regulators such as France’s data protection authority CNIL now issue guidelines about “sharenting”, urging parents to think twice before posting photos or videos of their children online. Several European countries have adopted legislation similar to France’s 2020 law on children’s image rights, which grants minors control over photos and videos of them posted by their parents for commercial use. A new law in 2024 extended most of its provisions to children’s daily lives, placing a legal duty on parents to protect their child’s digital image and allowing minors to request the removal of content. 

Yet across capitals – from Vienna to Athens to Paris – experts echo the same message: the most effective line of defence remains the relationship between children and adults. Not firewalls. Not bans. Not apps.


As Antoaneta Vasileva, a child-safety expert in Bulgaria, puts it: “The safest protection is an open dialogue and children knowing that they are not alone, even when they find themselves in a difficult situation online.”

Children need to know they can seek help, report harassment, and recognise manipulation. Adults – parents, teachers, caregivers – need to build trust, teach digital literacy, and guide children in navigating the online world safely and responsibly.

The EU’s role: more ambitious than its member states

At the European level, child protection has become one of the most contested fronts of digital regulation. The Digital Services Act and the Artificial Intelligence (AI) Act impose unprecedented obligations on platforms to assess risks to minors, design safer services, and remove harmful content.

But the EU’s most controversial proposal remains the Child Sexual Abuse Material Regulation (CSAR) – nicknamed “Chat Control” by critics. It aims to require platforms to detect, remove and report child sexual abuse material (CSAM), even in encrypted spaces and private messages.

Supporters of the measure argue that it is the only realistic way to detect modern forms of grooming, extortion and image-based exploitation, which currently go undetected. Vasileva and other child protection experts believe that “Chat Control” could offer a much higher level of protection than age-based bans.

However, Vasileva, along with privacy advocates and digital rights organisations, warns that the mass scanning of private messages would normalise surveillance for everyone. They claim that it would undermine encryption and transform private communications into regulated, monitored flows – a significant change with serious consequences for personal freedom across Europe.

Meanwhile, on 26 November 2025, the European Parliament passed a resolution by a large majority, calling for stronger protection against manipulative strategies that can lead to addiction and hinder children’s ability to concentrate and interact with online content in a healthy way. MEPs also proposed a harmonised EU digital minimum age of 16 for accessing social media, video-sharing platforms, and AI companions, while allowing 13- to 16-year-olds to access these services with parental consent. They also supported the Commission’s work to develop an EU age verification app and the European digital identity (eID) wallet, provided that the privacy of minors is preserved.

On the same day, the EU member states agreed on a common position on a regulation to prevent and combat child sexual abuse, including online. Once adopted, the new law will oblige digital companies to prevent the dissemination of material relating to the sexual abuse of children and the solicitation of children. National governments will be able to require companies to remove content and block access to it, or, in the case of search engines, remove it from search results. The proposed regulation also establishes a new EU agency, the EU Centre on Child Sexual Abuse, which will support member states and online providers in implementing the law.

The balance between protecting children online and safeguarding the privacy of millions lies at the heart of Europe’s policy. The question is whether it is possible to achieve both, or if they are mutually incompatible.

What kind of digital world will Europe choose for its children?

Taken together, Europe’s national experiments paint a picture of urgency without consensus. While some governments view bans as a politically appealing shortcut, even their backers concede that prohibiting social media access for minors is challenging to enforce and could push children into the darker, less visible corners of the internet.

Age-verification systems are promoted as a more realistic alternative, but they raise their own dilemmas: while they promise technical safeguards, they also threaten to erode privacy by tying identity to online behaviour. Although platform regulation is advancing through the DSA and other EU instruments, enforcement remains slow and largely reactive, always a step behind the next viral trend, design feature or form of manipulation.

Meanwhile, schools and parents are becoming increasingly overwhelmed as they are expected to counteract the psychological power of algorithmic feeds and the omnipresence of screens. At the European level, the ambition to tackle child sexual abuse material head-on through the proposed Chat Control regulation has created a deep divide between those who argue that it is necessary to protect children, and those who fear that mass scanning could undermine fundamental rights.

Rather than a unified European model taking shape, what truly emerges is a shared understanding across the continent: childhood is now inseparable from the digital world, and leaving that world unregulated is no longer an option. This forces Europe to confront a difficult choice – whether to rely on restrictive bans, invest in education-driven strategies, redesign platforms for safety, expand surveillance tools, or, more realistically, attempt a fragile balance of all these approaches.

Yet even in this fragmented landscape, one principle cuts through the noise: protection cannot come at the expense of children’s ability to navigate their own digital environment. As Antoaneta Vasileva reminds us, “We need to develop children’s skills to recognise risks and deal with online challenges in a mature and considered way – skills that will help them become resilient, critical thinkers and responsible participants in the digital world.”

🤝 This article was produced as part of the European PULSE project. György Folk (EUrologus/HVG), Manuel Ángel Méndez and M. Mcloughlin (El Confidencial), Giota Tessi (Efsyn) and Desislava Koleva (Mediapool) contributed to it.