🎙️ Q&A with Vilas Dhar

In today's tech landscape, fostering public trust in AI requires accountability for both technical and societal decisions. Education is key; civil society, alongside public and private sectors, must empower individuals to shape technology's impact on society.

Vilas S. Dhar (Photo credit: Patrick J. McGovern Foundation)

Vilas Dhar is a leading global voice on equity in a tech-enabled world and serves as the President and Trustee of the Patrick J. McGovern Foundation. Dhar is an entrepreneur, technologist, and human rights advocate with a lifelong commitment to creating robust, human-centered social institutions. Dhar champions a new social compact for the digital age that prioritizes individuals and communities in the development of new products, inspires economic and social opportunity, and empowers the most vulnerable. You can follow Dhar on X (formerly Twitter) @vilasdhar and/or on LinkedIn.

Sudeshna Mukherjee (SM): Who or what has shaped your passion for ensuring equitable and socially responsible progress in Artificial Intelligence (AI) and technology?

Vilas Dhar (VD): Just within my lifetime, we have experienced transformations of power, capacity, and possibility driven by new technologies, from the advent of the internet to the global flourishing of social media. But at each turn, we have failed to define and prioritize shared values of equity, justice, and societal wellbeing, leading to a greater divide between the so-called “haves” and “have-nots”. With AI, we have a new opportunity to apply the best of human ingenuity to shared global problems - and ensure we do not repeat the same pattern. AI could become a tool to empower everyone to decide the contours of our shared future.

SM: What are the most promising approaches for building public trust and oversight in AI and data technologies? Is education a key piece of this?

VD: Fostering public trust requires establishing accountability for technical decisions, as well as for the social and political decisions that determine how technology will shape our society. In the case of AI, civil society, including NGOs, nonprofits, advocacy groups, and community organizations, provides a bulwark against the institutions and powers that allow technological progress to benefit the few while further marginalizing communities around the world.

With education and skill building, civil society can collaborate with the public and private sectors to empower every human to participate in and shape a new social compact that defines how institutions should protect and preserve our rights and shared aspirations as humans. 

Education can help create and disseminate a foundational language for this very purpose. But to ensure that communities can continue to self-advocate and hold institutions accountable, we need pathways towards shared ownership in the outcomes of technology creation and governance, from mechanisms of co-design and co-creation to public consultation processes that restore community agency in local policymaking.

SM: How have international perspectives on AI ethics and governance evolved in recent years? Where do you see international perspectives shifting as we look ahead to the next 1-5 years?

VD: The international community is coming to a new shared realization that our overall approach to technology governance has often been reactive, exclusionary, and limited in scope. With participation and narratives from communities at the frontlines of major global challenges, we are now witnessing a collective shift from surface-level conversations around how we can build better, more profitable products, to a deeper exploration of the type of world we all want to live in as AI becomes ubiquitous. 

Institutions should make more concrete commitments to work with communities to define what constitutes responsible AI practices, how we can embed societal values into our design and regulation of AI, and where participatory representation can help us achieve more equitable outcomes. Our biggest challenge in the next 1-5 years will be whether our leaders can intentionally de-center traditional private sector interests, and instead amplify and prioritize the many voices of the Global Majority. I am filled with great hope when I think about our partners who are already redefining tech ethics based on traditional Indigenous values or creating new criteria of success that involve translating community wisdom into broadly accessible digital platforms for improving national health policy. Imagine what we could all do together if we created more space for these experts and changemakers across the governance and global AI landscape.

SM: Where do you see the greatest opportunity to use AI to amplify humanity's positive qualities like creativity, democracy, and human rights? Where are the biggest threats to these qualities and how can these threats be mitigated?

VD: The true potential of AI rests not within the technology itself, but within the indomitable spirit of human curiosity and aspiration that defines how these new innovations are used for our shared benefit.

We have an immense capacity to adapt to changes in our environment, to evolve, and to reshape our circumstances. AI is already playing a critical role in this process, allowing humans to unlock new ways of building relationships with others, strengthen our political identity and purpose, express our creativity through art and literature, and step into our power as advocates and stewards not only of our own communities but of others as well, whether we are using a generative AI model to write a poem or working with earth defenders to build AI tools that have a better chance of detecting and preventing environmental abuses in the Amazon.

The biggest threat to this, however, is the failure to build AI that supports and enhances human capacity. Tools that are trained exclusively on Global North languages or specific cultural values tend to restrict the agency and creativity of more marginalized communities across the globe. Design paradigms that intentionally erase diversity across the spectrum of races, genders, cultures, and geographies reduce humanity to a predictable machine that perpetuates inequitable power structures. To overcome this, we need commitments and aligned action to prioritize diversity and representation in our AI future, center humanity in our technology, and ensure that every tool we build reaffirms our humanity instead of suppressing it.

SM: As an instructor of a LinkedIn course on Generative AI and ethics, you are passionate about fostering responsible tech experts. What advice do you have for early-career professionals seeking to work in the tech ethics space?

VD: Success in an AI future depends more on your courage and willingness to actively engage with these topics from your own experience and life perspective than on becoming a technical expert.

My advice for early-career professionals looking to engage with tech ethics in any capacity, professional or otherwise, has two parts. First, whether you're a sociologist, writer, artist, lawyer, doctor, or student, you already have a valuable perspective and stake in how our shared AI future looks. What we need is for everyone to share those perspectives and become curious, creative participants in the broader dialogue around the creation, use, and governance of AI in our world.

The second is that we all have lived experience, often separate from our career or education, in which we have had to navigate a unique context, overcome a particular challenge, or help others get ahead. Tech ethics is a practical journey that requires us to apply those same experiences to ensure that we are all making decisions that are responsible, community-informed, and targeted toward societal well-being, while holding those in power accountable for a more equitable and just world.

SM: What is your favorite publication on AI ethics?

VD: Here are several publications from great scholars and practitioners that describe particularly well why we need AI ethics and how we can build a new vocabulary and social frame for ethical action in technology:

Edited by: Vance Ricks

Subscribe to The Ethical Tech Digest