Artificial Intelligence (AI) as we know it today did not exist when more than 5,000 private capital firms signed on to the Principles for Responsible Investment (PRI), set up with the United Nations 20 years ago.
Investors have yet to fully extend their responsibility mandate to the way their investee companies adopt and deploy AI.
If they want to keep differentiating themselves as responsible investors, rather than just investors, they must do so – and sooner rather than later.
Virtually every business will eventually incorporate AI. To not do so would mean missing out on the efficiency and productivity gains.
But those investment gains will come with societal loss.
We are already witnessing how adopting AI forces trade-offs against the productivity benefits businesses receive.
For investors who call themselves responsible, the test will be how to ensure the adoption of AI while mitigating harm from job losses, increases in income inequality, and the drain on carbon and water resources.
Those that fail to consider those harms, and forego steps to mitigate them, will be no different from any other investor that pumps capital into investments with no thought given to societal impacts – which could eventually catch up with them financially.
The productivity promise
From automation to large language models and agentic systems, AI is redefining what productivity can be. The Federal Reserve Bank of St Louis notes that AI adoption is outpacing every major technology before it. But that growth comes at a cost.
The International Energy Agency (IEA) projects that global data-centre electricity use will double by 2030, driven by AI’s growing demand for computing power. Data centres already consume over 1.9 trillion litres of water each year, a figure expected to rise sharply by 2030 as cooling and chip production intensify.
Each new AI model adds to the load – more energy use, more emissions, and greater strain on power grids already stretched thin.
The social cost could be even worse.
Automation wipes out routine and entry-level jobs faster than new ones appear. The rungs of the career ladder are disappearing, and reskilling cannot keep pace.
The environmental impact
The IEA warns that global electricity demand from data centres will more than double over the next five years, consuming as much electricity by 2030 as the whole of Japan does today.
That surge cannot be powered by renewables alone. Over 40 per cent of the new demand will be met from fossil fuels – a stark reminder that the AI boom adds demand faster than clean supply can meet it.
Blackouts today are not necessarily triggered by AI, but they illustrate a critical point: even today's grids are struggling to manage variable renewable loads. As AI-driven demand accelerates, the risk of instability will only grow.
The reality investors must confront is that AI's growth cannot be separated from its rising emissions.
With AI accelerating demand faster than renewables can scale, the gap widens between what we build and what we can power cleanly.
Responsible investors will need to account for how AI's increased energy footprint will exacerbate the climate crisis.
Job losses
The pressure AI places on our energy systems mirrors its impact on labour markets.
Both are being stretched by speed: faster demand and faster disruption.
According to the World Economic Forum, automation could displace 83 million jobs by 2027 while creating only 69 million new ones, a net loss of 14 million globally.
The conditional is becoming categorical. The saying, "people will not lose their jobs to AI, they will lose their jobs to someone who knows how to use AI", is proving to be hogwash for millions.
Microsoft laid off 9,000 employees this year, about 4 per cent of its workforce, citing "heavy investments in AI infrastructure and cost pressures".
In Singapore, DBS Group plans to shave around 4,000 roles over three years as it pivots to AI-driven operations.
In India, Tata Consultancy Services has laid off over 12,000 employees as AI automates core IT and back-office functions.
Yes, jobs will be created. But the gains remain concentrated in high-skill, tech-driven roles, while routine and entry-level opportunities disappear – including in the sustainability profession.
In Southeast Asia, about 57 per cent of the workforce could see their jobs reshaped or disrupted by AI, with women particularly at risk: roughly 70 per cent of female workers in Indonesia and Singapore hold roles most exposed to automation.
Income inequality
AI’s disruption doesn’t stop at the factory floor or the office – it runs deeper. Beyond jobs and energy, it redraws the map of global inequality, where wealthier nations design the systems, and poorer ones provide the labour to keep them running.
The US and China dominate AI research, talent, and capital. Lower-income economies, meanwhile, provide the invisible labour behind the system – millions of underpaid workers labelling data, moderating content, and annotating images so AI can “learn”.
This imbalance extends beyond economics. Governance gaps deepen: biased algorithms, disinformation, and opaque decision-making are eroding trust in institutions.
In healthcare alone, AI diagnostic systems risk excluding up to 5 billion people because training data draws mostly from high-income countries.
Without inclusion and oversight, AI will reinforce the inequalities it was meant to solve, not just between nations, but within them.
AI tests the credibility of the responsible investor
Being a responsible investor means pairing intent with capital, on the premise that doing so will increase financial returns.
The race for efficiency from AI can pull investors away from the principles that define responsibility.
For example, failing to rethink how we train and develop entry-level talent may save costs now but will leave a generation of dangerously naïve leaders unprepared for what’s next.
Without real investment in reskilling, the fallout will extend far beyond lost jobs – deepening inequality, eroding social cohesion, and threatening national stability.
Yes, investors will and should focus on achieving the gains to their returns from efficiency.
To not do so would go against their fiduciary responsibility.
But what does it mean to be a “responsible” investor in the AI era?
It must matter to those who call themselves responsible investors when they finance companies that prioritise productivity without considering the impact on the environment or on livelihoods.
As Aron Cramer, CEO of consultancy BSR, notes, the key questions that will define how well – and how responsibly – companies are deploying AI are:
- Is AI being deployed in a way that respects human rights and privacy?
- Will companies ensure that the energy needed to support growth in AI does not produce a U-turn on emissions?
- How can companies deploy AI in a way that does not lead to massive job losses?
Pre-AI, there was general agreement on what was required to credibly call oneself a responsible investor.
Investors who claim to be responsible should be asking themselves how they can justify that descriptor when pumping capital into AI. If they can’t, they need to rethink that label.
Steven Okun is CEO of APAC Advisors, a Singapore-headquartered consultancy focused on geopolitics and responsible investing. Megan Willis is APAC Advisors’ senior advisor and Noemie Viterale is an associate.