Children now spend an unprecedented amount of time on digital platforms, from social media and gaming apps to AI chatbots. These technologies offer opportunities for learning and connection; they also expose young users to serious risks. Research increasingly links excessive use of social platforms to depression, anxiety, sleep disruption and disordered eating. Children may also encounter manipulative design features, profiling of their personal data, targeted misinformation and unsafe interactions with AI systems. Governments worldwide have therefore begun taking stronger action.
The EU has developed a comprehensive regulatory package, combining the GDPR’s privacy protections, the AI Act’s ban on manipulative systems and the Digital Services Act’s obligations for platforms to assess risks to minors. The United Kingdom has its own GDPR and Online Safety Act, while its age-appropriate design code mandates on-by-default safety settings, limits on data collection and restrictions on manipulative nudging techniques.
The proposed United States Kids Online Safety Act, a federal bill, would impose a statutory duty of care requiring platforms to prevent harm to minors. From December 2025, social media platforms in Australia will be required to take proactive steps to prevent under-16s from creating or maintaining accounts. China’s strict model includes gaming time limits for minors, real-name verification and youth modes on apps that cap usage and restrict content. The country even has a minor mode that gives parents more control over their children’s online exposure through phone settings.
These different approaches show that regulators share a common goal: proactive intervention to keep children safe online. India’s Digital Personal Data Protection Act (DPDP), not yet in force, does include safeguards for children. Its most notable provision requires parental consent, established through identity verification, before processing the data of those under 18. The DPDP, however, is a privacy law. It does not tackle addictive design, algorithmic harm or platform responsibility for children’s online environments.
This gap has real-world consequences because a growing body of research suggests a link between problematic social media use and poor mental health outcomes. The US Surgeon General’s 2023 advisory found that adolescents who spend more than three hours a day on social media face double the risk of experiencing depression and anxiety. Although such reports acknowledge empirical gaps in establishing causation, they also recognise that action must be taken before gold-standard evidence is available.
Recent investigations and disclosures raise further concerns. Reuters reported that a well-known AI chatbot was permitted by design to engage in inappropriate conversations with minors. The Washington Post and BBC reported on a platform’s deliberate suppression of research on child safety in virtual environments, including sexual predation and grooming. In the US, lawsuits have been brought against AI companies alleging that children’s self-harm and suicides followed prolonged interaction with AI interfaces. The US Federal Trade Commission has begun an investigation into how major tech companies providing AI-powered chatbots measure, test and monitor negative effects on young users.
Because its population includes a large proportion of young people, India needs dedicated online safety measures that can mandate algorithmic transparency, safety-by-design and rapid takedowns. The DPDP is an important starting point, but was never designed to address the range of harms children face online. An online safety and accountability law is long overdue.
For such a framework to have real impact, India must address a key structural weakness: its near-total reliance on foreign platforms, from Google’s search algorithms and Meta’s family of communication and business apps to overseas AI foundation models. India has no credible domestic alternatives, and the resulting jurisdictional hurdles make even strengthened regulations harder to enforce.
To prevent child harm and abuse, India must strengthen its online safety laws and policies and give regulators the authority to act in consultation with policymakers. At the same time, it must develop credible and attractive domestic platforms and AI models.
Abhishek Mitra is a counsel and Samir Malik is a partner at DSK Legal
DSK Legal
1701, One World Centre
Floor 17, Tower 2B
841, Senapati Bapat Marg
Mumbai – 400 013, India
Contact details:
T: +91 22 6658 8000
E: contactus@dsklegal.com