A quiet war is underway in Europe – one that will decide whether journalism remains a pillar of democracy, or dissolves into the hum of the data economy.
For years, news organisations have had an uneasy pact with Silicon Valley. Google and Meta took the lion’s share of advertising money, but sent readers our way. It was a lopsided but functional deal: visibility in exchange for traffic.
That bargain has now collapsed. General-purpose AI (GPAI) systems – generative tools such as OpenAI’s ChatGPT or Google’s Gemini and “AI Overviews” – are trained on millions of news articles. Many of these articles were behind paywalls and were taken without permission or payment. GPAI systems summarise and repackage journalists’ work but seldom direct readers to the source. What’s left is a veneer of reality: journalism reduced to raw material, stripped of authorship and context.
This is not just a debate about innovation. It is a fight for ownership, fairness, and truth.
Publishers are reacting in two ways. Some, including The Financial Times, News Corp, Axel Springer and The Atlantic, have struck licensing deals with OpenAI – an acknowledgement, at least, that journalism has value. These agreements involve payments, attribution and links to original reporting.
Others are taking a harder line. The New York Times is suing OpenAI and Microsoft for “massive copyright infringement”. The question before the courts is fundamental: Can AI companies freely train on journalistic content, or should they pay for what they use?
Governments are stirring. Australia and Canada already compel tech platforms to pay for news. In Europe, the Directive on Copyright in the Digital Single Market (DSM) gives publishers the right to opt out of text and data mining. The EU’s new AI Act demands a certain degree of transparency: providers of general-purpose AI systems must publish a summary of the data used to train their models. However, the level of detail required may not be sufficient to enable copyright holders to enforce their rights.
The European Parliament’s forthcoming report by German Christian-democratic MEP Axel Voss – Copyright and Generative AI: Opportunities and Challenges – has injected a new sense of urgency into the debate. In his draft report, Voss maintains that existing EU exceptions for text and data mining were never designed for industrial-scale AI training and have created “enormous legal uncertainty”.
He proposes an opt-in system requiring publishers’ explicit consent, backed by a central EU database to record licences and refusals, and full transparency over which copyrighted works are exploited.
If AI companies refuse to reveal their training data, Voss suggests, that should amount to a presumption of copyright infringement. He further argues that AI-generated works should not enjoy copyright protection and that news outlets deserve payment and credit when their journalism is used to train models. The principle is simple: Progress cannot come at the expense of journalism’s survival.
Although the final report will be non-binding, it will constitute the European Parliament’s view on the subject and could call on the European Commission to come forward with a legislative proposal.
The UK is still on the fence. While Brussels sharpens its legal armour to defend journalism, London toys with loopholes. The government’s plan to let AI firms mine copyrighted work unless publishers actively “opt out” is really an invitation to plunder. It dresses exploitation as innovation and leaves Britain’s media naked in a fight for its very survival. Consultation papers and polite working groups will not save journalism if the law itself tilts toward those who take without asking.
(Photo caption: A newspaper kiosk in Paris, 1975. Francois LOCHON/Gamma-Rapho via Getty Images)
The threat to the free press is not only legal or financial, but structural. Advertising revenue has already been siphoned off by Silicon Valley. Now, even the dwindling traffic from search and social media is at risk, as AI tools answer users’ requests for information directly. Every query satisfied by a chatbot is one less visit to a news site.
Fewer clicks mean fewer subscriptions, fewer reporters, less investigation – and, ultimately, weaker democracies.
A society that allows its news to be mined and regurgitated without remuneration doesn’t merely impoverish journalists; it starves itself of truth. Freedom of expression is meaningless if there are no reporters left to verify the facts.
The lawsuits in the United States and the policy battles in Brussels are about more than money. They will determine whether journalism can still exist as an economic activity in the age of AI.
If European courts conclude that “free use” of journalistic content is lawful, independent reporting will wither. Press freedom will not be abolished by censors but starved to death – by the gradual disappearance of the means to survive.
Does it have to end that way? Requiring AI companies to pay for the news content that they use could restore some balance. The value of quality journalism would once again be recognised; data could be shared fairly; and Europe could set a global precedent – proving that innovation and democracy need not be enemies.
The battle lines are drawn. What’s at stake is not just the future of the media, but the future of truth itself.
Peter Vandermeersch is a Belgian journalist. He is a former editor of the newspapers De Standaard in Brussels and NRC Handelsblad in Amsterdam, and a former CEO of The Irish Independent and the Belfast Telegraph.


