Vilas Dhar is the president of the US-based Patrick J. McGovern Foundation, a global philanthropy working to expand the responsible use of artificial intelligence (AI) to help under-represented groups and promote equity.
Through the foundation, Dhar has helped steer more than US$500 million in commitments to groups advancing public health, education, climate action and democratic governance as part of its push for responsible AI.
He is scheduled to appear at Trust Conference, the Thomson Reuters Foundation's flagship gathering of leaders and experts.
Dhar spoke with Context ahead of the conference about AI, where the public and private sectors fit into its development, and why he remains optimistic about the benefits of AI – despite its many dangers.
Here's what he had to tell us:
Looking at recent developments in AI, would you say AI has been more of a force for good or a source of division?
I'm a very hopeful person about AI, but I have to be careful. I'm optimistic about what technology promises, but only when it's directed by public, human, community-based interests.
Over the last five years we've seen a lot of evidence of the first part – of all the things that AI could do.
But building what AI should do requires us to lean in and bring a human-centric lens.
And I think if we do that, then we could really build a digital future that works for everyone.
When we talk about governance of AI and government policy, what might that look like? Are there good role models out there?
I'll give a very clear example: the Chilean government, in collaboration with a number of regional partners, has recently deployed the first Spanish-language open-source large language model.
And it's made publicly available for anybody who wants to build AI infrastructure in Spanish-speaking regions.
It's a very good example of how governments can step in and bring public funding and financing to build what is … a digital public good.
It's a part of public infrastructure, it's free to use, it's open source, but it lets people build the things they need.
You say India, not necessarily the obvious choice, is positioning itself as a global leader on public-interest AI. Can you tell us more?
Rather than having private-sector models that are commercially accessible, India is working on building open-source models themselves and an ecosystem where private-sector players can build on top of those to create products and tools.
And because they have such good experience around this with all the work theyāve done in the past around the identity system (and) the payment system that theyāve built, thereās a model here thatās very different from the (U.S.) or Chinese models.
The American model, of course, is very centered on private-sector action (and) commercialisation.
The Chinese model (is) government first.
India is more saying "how do we build the ecosystem and then let a variety of actors come in and build on top of it?"
Can you talk about recent AI developments at the UN, and any takeaways for the wider world from its deliberations?
The big thing is we need a mechanism by which society leads on questions of AI, and that's not a question of how we regulate code but how we set norms of conscience.
How do we ask fundamental questions like, if we live in a world where AI will produce these massive economic benefits, (that) there's a mechanism by which we think about equitably distributing those benefits? About how we ensure that we build the platform and the application layer of AI that actually serves needs that aren't necessarily market-driven.
In order to create systemic change we need alignment at the macro level.Ā
The UN is a really promising platform where we bring together this multi-sectoral approach: governments that have the capacity to invest in public AI at scale, industry that kind of knows … the emerging frontier technology, and civil society that can speak to the needs of communities and actually organise and deliver a targeted action.
Lastly, any big, closing thoughts we didnāt touch on?
I'm going to say one last thing, which is, we're in a moment (in) time where all of the structures and infrastructure of our AI future are being set and decided right now, which means we have an opportunity to bring in the frameworks of what you've heard me say so many times – rights and norms and values and principles – and embed them in that firmament.
But there's also an urgency, because if we don't get this right in the next five years, then we set a very different path for humanity for the next decades.
So it requires of us a moral courage to bring rights, values and norms into AI decision-making and an urgency to ensure that they're embedded quickly before we (build) too much on top of it.
This interview has been edited for length and clarity.
This story was published with permission from Thomson Reuters Foundation, the charitable arm of Thomson Reuters, which covers humanitarian news, climate change, resilience, women's rights, trafficking and property rights. Visit https://www.context.news/.