
Q&A: Call for ‘moral courage’ as AI expert charts road ahead


Vilas Dhar is the president of the US-based Patrick J. McGovern Foundation, a global philanthropy that is working to expand the responsible use of artificial intelligence (AI) aimed at helping under-represented groups and promoting equity.

Dhar, through the foundation, has helped steer more than US$500 million in commitments to groups advancing public health, education, climate action and democratic governance as it seeks to promote the responsible use of AI.

He is scheduled to appear at Trust Conference, the Thomson Reuters Foundation’s flagship gathering of leaders and experts.

Dhar spoke with Context ahead of the conference about AI, where the public and private sectors fit into its development, and why he remains optimistic about the benefits of AI – despite its many dangers.

Here’s what he had to tell us:

Looking at recent developments in AI, would you say AI has been more of a force for good or a source of division?

I’m a very hopeful person about AI, but I have to be careful. I’m optimistic about what technology promises, but only when it’s directed by public, human, community-based interests.

Over the last five years we’ve seen a lot of evidence of the first part – of all the things that AI could do.

But for us to build what AI should do requires us to lean in and bring a human-centric lens.

And I think if we do that, then we could really build a digital future that works for everyone.


When we talk about governance of AI and government policy, what might that look like? Are there good role models out there?

I’ll give a very clear example, which is that the Chilean government, in collaboration with a number of regional partners, has recently deployed the first Spanish-language open-source large language model.

And it’s made publicly available for anybody who wants to build AI infrastructure in Spanish-speaking regions.

It’s a very good example of how governments can step in and bring public funding and financing to build what is … a digital public good.

It’s a part of public infrastructure, it’s free to use, it’s open source, but it lets people build the things they need.

You say India, not necessarily the obvious choice, is positioning itself as a global leader on public-interest AI. Can you tell us more?

Rather than having private-sector models that are commercially accessible, India is building open-source models itself, along with an ecosystem where private-sector players can build on top of those to create products and tools.

And because India has such good experience here, from all the work it has done in the past on the identity system (and) the payment system it has built, there’s a model here that’s very different from the (U.S.) or Chinese models.

The American model, of course, is very centered on private-sector action (and) commercialisation.

The Chinese model (is) government first.

India is more saying ‘how do we build the ecosystem and then let a variety of actors come in and build on top of it?’

Can you talk about recent AI developments at the UN, and any takeaways for the wider world from its deliberations?

The big thing is we need a mechanism by which society leads on questions of AI, and that’s not a question of how we regulate code but how we set norms of conscience.

How do we ask fundamental questions like: if we live in a world where AI will produce these massive economic benefits, is there a mechanism by which we think about equitably distributing those benefits? How do we ensure that we build the platform and the application layer of AI that actually serves needs that aren’t necessarily market-driven?

In order to create systemic change we need alignment at the macro level.

The UN is a really promising platform where we bring together this multi-sectoral approach: governments that have the capacity to invest in public AI at scale, industry that kind of knows … the emerging frontier technology, and civil society that can speak to the needs of communities and actually organise and deliver targeted action.

Lastly, any big, closing thoughts we didn’t touch on?

I’m going to say one last thing, which is, we’re in a moment (in) time where all of the structures and infrastructure of our AI future are being set and decided right now, which means we have an opportunity to bring in the frameworks of what you’ve heard me say so many times – rights and norms and values and principles – and embed them in that firmament.

But there’s also an urgency – because if we don’t get this right in the next five years, then we set a very different path for humanity for the next decades.

So it requires of us a moral courage to bring rights, values and norms into AI decision-making and an urgency to ensure that they’re embedded quickly before we (build) too much on top of it.

This interview has been edited for length and clarity.

This story was published with permission from the Thomson Reuters Foundation, the charitable arm of Thomson Reuters, which covers humanitarian news, climate change, resilience, women’s rights, trafficking and property rights. Visit https://www.context.news/.
