AI holds up a mirror to us: Cambridge University's Verity Harding

The director of the AI and geopolitics project at Cambridge University talks about how artificial intelligence can mimic the best and worst of society, and the guardrails we need

Divya J Shekhar
Published: Sep 30, 2024 12:34:15 PM IST
Updated: Sep 30, 2024 12:44:17 PM IST

Verity Harding, director, AI and geopolitics project at Cambridge University

Artificial intelligence (AI) is already part of our day-to-day life. Think about language-generating chatbots like ChatGPT. AI is in our homes, offices, and increasingly in public spaces too. Technology, including AI, reflects the best and worst of society, and it “will only serve society in the best way if we have checks and balances”, says Verity Harding, director of the AI and geopolitics project at Cambridge University. It is important, she adds, that regulators and the people building AI listen to people already vulnerable to this technology, and take into account diverse viewpoints.

Harding was one of Time magazine’s 100 Most Influential People in AI in 2023. She was a special advisor to former British Deputy Prime Minister Nick Clegg, then worked at Google and later at its AI lab DeepMind, where she co-founded its research and ethics unit and led global policy. Her new book, AI Needs You, takes a historical and intersectional perspective to help us navigate and understand AI. Edited excerpts from a conversation for the From the Bookshelves podcast:

Q. You say that AI will mimic the humans that create it. So, what are the guardrails that we need?

In some ways, AI holds a mirror up to us and shows us what we are like, particularly these generative AI technologies that are built on existing human data and language online, from books, scripts and blog posts. Although the companies involved have tweaked the algorithms to try their best to ensure they don’t bring up the worst sides… of course, anything that can show the best side of us can also show the worst. So, when I talk about checks and balances, firstly, what I mean is just being aware of the fact that technologies aren’t created in a vacuum. They are created in societies, and reflect those societies’ best and worst parts. Once we understand that, how do we emphasise the best parts and de-emphasise the worst parts, and build technology that helps us build a happier, healthier, more prosperous and productive society? Only by being aware of the worst aspects can we really focus on the potential for this technology to help society.

Q. AI has played a role in spreading disinformation. Have we lost control of something we ourselves are developing?

I think we will lose control if we are not proactive in making sure there are rules, regulations and guardrails around the worst uses of AI. Generative AI brings great excitement and creativity to people, but it is definitely something that can be misused in terms of disinformation. And it’s important that people looking to regulate these technologies try and consult as widely as possible, because the people who face the most problems when it comes to this technology are people who are already vulnerable. For example, deepfake technology is used most often against women. Women suffer disproportionately when it comes to online abuse, and now that’s translating into AI. That’s why it’s important that we listen to people already being affected by this type of technology, and make sure that the people governing and regulating it, as well as the people building it, represent the most diverse viewpoints possible.

Q. What do people, leaders or organisations building AI need to keep in mind?

The technology industry is male-dominated, and there is a certain dominant worldview within that community. While it is important that this worldview be represented too, diversity will ultimately give us a better outcome, because you might catch something earlier that could be a problem later. For example, facial recognition technology did not work as well, or maybe did not work at all, on darker skin tones. It took a lot of campaigning, predominantly by women of colour, to say these technologies did not work, and then they [the industry] reacted. Stakeholder consultation and bringing in people who have not been represented around the table are very important. That is why, when I was at DeepMind in the very early days, I helped to start ethics and society research to ensure that we were working with different perspectives. It should not be that just the technologists and technology companies are represented. We need to make sure that we have civil society, academia, other businesses and governments, all talking to each other.

Q. How can we use AI to augment human intelligence rather than replace it?

The more recent descriptions of AI as a separate being, a new species, something completely different from humanity, are unhelpful when it comes to thinking about how we can harness and use this technology for good. If we think about it as something that is inevitable and just going to happen to us no matter what, and that we just have to put up with it and make the best of it, it is probably going to lead to more of the worst sides of AI coming to the forefront. Instead, we should ask: What sort of future do we want, and how can AI help us achieve it? For example, how do we get better health care to more people who do not have access? Can we use AI for better or earlier diagnostics to help people fight diseases? There are many other examples where AI might be able to help us, but we need to be purposeful and intentional about that.

(This story appears in the 04 October, 2024 issue of Forbes India.)
