I was recently invited to speak to an expert meeting on the challenge of Artificial Intelligence at the United Nations in Geneva – specifically, on issues raised for Human Rights. Here’s a summary of what I had to say.
I read some astonishing news the other day.
You may remember that last year a woman was killed by a “self-driving” Uber car. It was in March of 2018, and her name was Elaine Herzberg. She was pushing a bicycle and “jaywalking” – crossing the road where she shouldn’t have been.
Well, the astonishing news from an official inquiry is that the oh-so-clever AI system driving the car had never been programmed to recognize jaywalkers. In other words, it assumed that pedestrians would only cross the road at pedestrian crossings.
You really can’t make this up. They just assumed that no-one does what we all do – take a risk and cross the road in between crossings when we’re in a hurry.
It’s the perfect example of the need for what’s being called “algorithmic transparency” – letting everyone see the computer programmes that run AI systems. It’s hard to believe no-one would have noticed this glaring, awful, ridiculous omission if the assumptions built into the system had been on the Internet.
This is actually really important for businesses, even though they may not see it – as well as for citizens/consumers. If companies like Uber want to avoid civil (and perhaps criminal) liability, they need to open up to scrutiny.
As this sad example shows, we are moving into new territory. Companies want to keep their secrets secret. They’re good at lobbying governments to protect their “intellectual property.” Yet as their power grows, we have to hold them to account.
That’s why it’s profoundly significant that we are now beginning to discuss AI within the framework of human rights. So far, issues of human rights have generally been assessed within the context of the relation of the individual to governments. We suddenly confront a situation in which it’s private companies whose power needs to be reined in – including some (like Facebook and Uber and Google) with more influence and resources than many smaller nations.
How are we to manage these huge organizations and the growing power they are exercising through AI systems?
These seem like completely new issues, but there are key facts we already know that should help us frame the 21st century discussion as we look ahead.
First, it’s nothing new for tech companies to ignore the ethical implications of their products. If you want a stunning example, see Edwin Black’s book IBM and the Holocaust.
Second, the need for “transparency” was pre-figured more than a generation ago when the emergence of the U.S. credit-rating companies came under scrutiny. Many years later, the principles (algorithms) by which they determine if we are credit-worthy remain secret. (They were also recently hit by a huge hacking breach that released millions of people’s secrets . . .)
Third, while there are all sorts of issues raised by AI, the key to the ethics agenda is Human Rights. There’s much to be learned from the way that Human Rights has anchored the global discussion of the parallel agenda in bioethics. The two major international bioethics agreements are the (Council of Europe) European Convention on Human Rights and Biomedicine, and the (UNESCO) Universal Declaration on Bioethics and Human Rights.
Fourth, there’s a terrific opportunity to encourage civil society groups to act on behalf of consumers/citizens. One model is the kitemark/trustmark approach – charities and NGOs could set their own standards and choose to award a seal of approval to companies like Uber and Facebook – or not. Again, there are examples out there where this has been done before. Look at The Good Pharma Scorecard, pioneered by my friend Jennifer Miller as a way of helping keep drug companies honest. Why not let loose groups like Friends of the Earth and Liberty and the Church of England to evaluate the algorithms and hold the companies to account? Let the companies compete for their approval.
Meanwhile, governments should be working through the United Nations for a “universal instrument” setting out the core principles of accountability that will ensure Human Rights are taken seriously as AI becomes more and more important.
Let’s not forget Elaine Herzberg.
If you’d like to read up on the UNESCO bioethics declaration, including some of its problems, see my article on The Making of the Universal Declaration on Bioethics and Human Rights.