Rough notes from the AI for Global Good Summit
The AI for Global Good summit brought together UN agencies, researchers and innovators to explore the potential of AI to address the Sustainable Development Goals (SDGs). This was the second such summit, and it came with an expectation, built into the scheduling, that it was time to get some new collaborative projects underway.
It’s hard to tell whether it met that aim — several groups of projects (mostly proposed, or indeed underway, before the summit) were showcased, with participation (and resourcing) encouraged from those present. The idea of a community, in the sense it is often meant, was less clearly supported — how will participants (both in the room, and following online) stay in touch and build up these new ideas? The summit did succeed in showcasing projects and ideas across sector and subject boundaries, and it seemed like the right people were in the room to start planning some concrete projects. At the same time, it felt like some important things were missing. One of those things was a shared appreciation of a few key concepts.
The people present came from very different places and backgrounds, and so the discussion was confused at times, especially around the tradeoffs between doing good with AI and avoiding bad outcomes. Is it OK to scan crowds with high-resolution imaging to spot potential criminals? Reducing crime is good; surveillance and face recognition in public spaces, maybe not.
Trust was a big topic throughout the three days. Everyone agreed that trust and trustworthiness were important in AI, especially for powerful machine computation systems applied to vulnerable populations and to critical applications such as health and water. A lot of the discussion was framed as “how can we get people to trust these systems that we’re building for their own good” — a line of thinking I keep coming across in tech and policy conversations, and a frustrating one given the nuance that thinking about trust and powerful technologies demands.
What we meant by AI was also somewhat confused throughout the summit. Roger Penrose’s keynote at the start of the event was excellent, and useful in that it sought to ground the summit with clarity on the difference between computation (which was basically what the summit was about) and intelligence. (A pity that the glossy demo of Sophia later detracted from this!) Many later questions kept going back to human-like intelligence, though.
Penrose’s keynote was great — on why algorithmic systems possess no understanding (and why AI is an unhelpful misnomer). He used a specific biological argument: AI may plausibly get to cerebellum level, but not to full human intelligence. Human understanding is about transcending rules — it is not the rules themselves, which might easily be codified, but how we understand them that creates the value of human intelligence. A computer cannot understand infinity — we can. A computer can process the Schrödinger equation, but cannot comprehend Schrödinger’s cat. None of today’s computers, or the AIs we are developing, even starts to progress towards this sort of intelligence. (He noted that there may be stuff going on in the brain that we don’t understand — subtle quantum effects, beyond computation — that connects to consciousness. But we are a very long way from quantum computing that would even hint at replicating this, if such a thing were possible.)
There was a sense that things had moved a lot since the 2017 summit, which Wendell Wallach described as “naive”, with its sense that if only we could get all the data, we could solve all the problems. This year issues of control and protection were at the fore, and the summit was not so subservient to technology idealists, instead considering what could go wrong and how we can protect each other. This led to a useful separation between AI for good (addressing the Sustainable Development Goals and thinking of the billion most vulnerable people on the planet, for instance) and mitigating harms (attending to what could go wrong). It was good to see the different reasons for harm articulated — system failures through bad design, incompetence, underestimating risks, appropriation by elites, and rogue actors — which may need different approaches to tackle. How can we bring appropriate oversight to the tech we develop, spot gaps in that oversight, and address those gaps? (It is refreshing to hear this framed as oversight, rather than ‘regulation’, a term increasingly thrown around without thought for what it means.)
We didn’t talk much at all about power, or the potential side effects of AI use on human rights (beyond bias), except for a few civil society folks muttering in the breaks, and Joe Westby from Amnesty, who mentioned it in his talk. Several people expressed frustration that, as ever when a long-standing topic becomes ‘hot’, many people were ignoring or simply unaware of relevant past work and so were starting again from scratch.
Despite a lot of talk about technologists — I overheard someone saying that “the world’s top AI technologists are here” — there weren’t many around, and most were from small ventures, not the R&D departments of big tech. The power of “the AI leaders”, and perhaps the wake of their egos, was visible though — with even Michael Møller, Under-Secretary-General of the United Nations, apologising for his lack of technical skills. Would any AI leader have started their keynote with an apology for lack of knowledge of the UN and its work?
Some of the more interesting ideas and stories, in no particular order:
- the importance of equitable access to technology, as well as mitigating potential downsides (given how compelling dystopian visions can be). And the example of which countries are buying robots (US, S Korea…).
- digital compliance, and ideas like privacy laws, as a global development challenge
- on infrastructure investment, how will countries still lacking 3G catch up, when others are rushing ahead to 5G?
- Aimee van Wynsberghe on how ethics is often seen as a hindrance to technology, but that we should see it instead as a tool to push engineers further, designing in values such as wellbeing, dignity, humanitarianism, and the environment.
- Technology can fit within our picture of the good life, and help us achieve it; or it can change what the good life means.
- When we think about cooperative robots helping with human tasks, we forget that people are often non-cooperative themselves.
- The tech economy is shaped such that gains in productivity and other successes tend to go to the best off. There was mention of how progress can be dehumanising, interfering with basic human flourishing, for the majority.
- The challenge of translating terminology between sectors, and even between regions, when people use language so differently. Is there a role here for ISO? Perhaps in coming up with clarity around new terms — cobots, digital twins, cybercrews etc. But standards for some words don’t work, because people have deep complex understandings, built on culture and hundreds of years of thinking about ‘fairness’ or ‘privacy’ or ‘trust’, for instance. Transparency is also a contested term. We may not be able to have specific definitions, but need an ongoing dialogue to create appreciation of different perspectives and values.
- Michael Møller’s point about unknown knowns — the things we know, but forget we know, because they are in the background of our values and beliefs
- Møller gave a good overview in general, pointing out the difficulty of knowing what to think about the future when our top AI leaders give such different utopian (Sundar Pichai) and dystopian (Elon Musk) visions. We cannot ban AI, as there are too many people and locations where it can be built, and we cannot envision all scenarios that may come to pass. Tech is always repurposable or multipurpose. Ultimately we must see it as about dealing with human action, and we must tackle it as we do other questions of change: through inclusive and interactive discussion with everyone at the table, a focus on people and our values, and a holistic viewpoint where we can ask unexpected questions. (I couldn’t have put this better myself!)
- If machine learning can solve the “Netflix challenge”, why could it not count residences in the Zaatari refugee camp from satellite imagery? (The agency concerned ended up doing this by hand.) Are we missing something in the incentive structures for how we do tech for good? What models for machine learning for good are financially viable or sustainable? What are the barriers to implementation?
- It was refreshing to hear more subtle points about trust between people, between people and tech, between nation states, between companies — just the sort of thing our work at Cambridge on Trust and Technology is exploring. Potential users not trusting a new technology, so not using it and missing out on genuine benefits. Countries not trusting each other, competing on AI, and ending up in an arms race, with ethics and beneficial uses of AI forgotten in a race to be top. Loss of trust in, and therefore reduced use of, a health diagnosis AI after it makes a mistake a doctor would never make. Technologists and policy-makers not trusting each other and blocking important discussion and development. Trust in tech depending on context — a self-driving car developed in Mountain View may be very trustworthy there, but very unreliable in Mumbai. The loss of trust in charities when they apply technology to their problems in a poorly thought-out way. Overtrust leading to too much reliance on systems. Distrust as a key element of holding power to account — you need some distrust to interrogate it. People jaywalk more in front of autonomous cars — because they have seen other people do this and the cars have (so far) always stopped. And the temporal dimension of trust.
- Even more subtle — the language barriers! French, for instance, has different nuances and different concepts around trust and confidence, so straightforward translation does not work for these deeper discussions around trust, trustworthiness, confidence and reliability. We may need other ways to set expectations with each other — ideas such as rational trust vs emotional trust.
- Some protocols introduce deliberate false signals, precisely in order to build in mistrust, and therefore reliability. An example is the astronomical data used in black hole merger detection — researchers suspected that the first apparent detection of a merger was a false signal, and so interrogated the data carefully (but it was, in fact, real).
- In the fraught space of humanoid robots, and bots which present as human, there was a useful distinction between artificial care and artificial friendship on the one hand, and deception in the form of genuinely counterfeited humanity and companionship on the other.
- Recognition that different cultures have different values and concerns around AI — which play out in all sorts of ways, such as the different marketing of Big Hero 6 in the US and Japan
- Stories and culture set public expectations of AI, and also influence the people who grow up to become AI researchers. Science fiction can be a big influence. So whose culture, values and goals are articulated in visions and implementations of AI? Culture is rarely neutral about intelligent machines — it tends towards utopia or dystopia. We need more stories of the everyday, middle ground.
- Equally — are we exceptionalising AI too much? How does human-machine trust work in other systems — automatic braking in cars, smart traffic control via traffic lights — and how different are these cases from the often basic machine learning tools we hear concerns about?
- We need to be able to discuss tradeoffs. Ethical design is not black and white. A privacy-preserving system may be slower, or consume more energy. Increasing privacy and decreasing bias might be in tension, as might privacy and other data rights.
- There was also something about ‘regulation’. What do people actually mean when they call for this? Is it really a regulator, an organisation enforcing laws, or ‘governance’ more broadly? What other tools do we have to hold tech to account and maintain good standards? These could include anything from imbuing tech with different values to corporate oversight to soft law or soft governance (which includes industry practices and procedures, standards, lab practices, insurance policies, etc.). Such things are agile and can be created or dissolved quickly if needed, so it’s easier to try them out than to actually implement a law. The weakness is, of course, that they aren’t enforceable, so there needs to be something more which government or some other serious entity can enforce — but that might be a last resort, not a first one.
- Human rights is the only ethical framework that is universal, based on binding laws, and signed up to by all countries — so we should be striving for human-rights-compliant AI, rather than trustworthy AI. Plus, Amnesty International’s experience is that only hard laws will hold power to account.
- Ultimately most sessions called for projects like a digital commons or agile governance — great ideas, but lots of technical, practical and political work needed to make them real. It’s all about frameworks for cooperation and working together, which I guess the UN has expertise in :)
- a fascinating brief example of stakeholder engagement around the introduction of a machine learning system, from the Comprehensive Nuclear-Test-Ban Treaty Organization. They sought to replace human experts, who analyse data (e.g. seismology) from around the world via a rule-based system to create bulletins of suspected nuclear events for distribution to nation states, with an automated system. For these high-stakes situations, trust is a firm belief in the reliability, honesty and ability of someone — how does this translate to a machine? Is the new system able to do the task it was intended for? Does it do this better than the old system (and what does ‘better’ mean here? What is the ground truth?) How can you establish reliability when training data may be dominated by natural events, not the cases the system is trying to detect? (There is a toy sketch of this evaluation pitfall just after this list.) How can a model be designed in these circumstances, and tests run to establish whether it works? Stakeholder culture here influences perception — there were fears that the model was “not physical” or was “a black box”. These perceptions can be changed through transparency, explanations, and involvement in the training and use phases of the new system. It was really interesting to see a case study from such an unusual context.
- Shenzhen’s David Li presented his perspective “from the street.” We have universal access to knowledge about machine learning — anyone can learn about this stuff because it’s online; foundational AI code is all open source. You don’t have to be in the Valley. 99.95% of entrepreneurs around the world aren’t venture-backed; they aren’t startups like the tech scene might imagine. The next ideas and businesses could come from anywhere.
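On the test-ban-treaty point about training data dominated by natural events: that is a familiar evaluation pitfall in machine learning, and worth a tiny illustration. The sketch below is mine, not anything presented at the summit, and the numbers are invented; it just shows how, when the events of interest are rare, a system that never flags anything can score higher on raw accuracy than one that actually catches most events, which is part of why “what does better mean?” and “what is the ground truth?” are such hard questions.

```python
# Illustrative sketch with made-up numbers (nothing from the CTBTO talk):
# 10,000 bulletins, of which only 20 correspond to suspected events;
# the rest are natural (e.g. earthquakes).

true_labels = [1] * 20 + [0] * 9980              # 1 = suspected event, 0 = natural

# A "detector" that simply labels everything as natural...
always_natural = [0] * len(true_labels)

# ...versus one that finds 15 of the 20 real events but raises 30 false alarms.
detector = [1] * 15 + [0] * 5 + [1] * 30 + [0] * 9950

def accuracy(truth, pred):
    # Fraction of bulletins labelled correctly, regardless of class.
    return sum(t == p for t, p in zip(truth, pred)) / len(truth)

def precision_recall(truth, pred):
    # Precision: of the events flagged, how many were real?
    # Recall: of the real events, how many were flagged?
    tp = sum(t == 1 and p == 1 for t, p in zip(truth, pred))
    fp = sum(t == 0 and p == 1 for t, p in zip(truth, pred))
    fn = sum(t == 1 and p == 0 for t, p in zip(truth, pred))
    precision = tp / (tp + fp) if (tp + fp) else 0.0
    recall = tp / (tp + fn) if (tp + fn) else 0.0
    return precision, recall

print("always-natural:", accuracy(true_labels, always_natural),
      precision_recall(true_labels, always_natural))   # 0.998, (0.0, 0.0)
print("real detector: ", accuracy(true_labels, detector),
      precision_recall(true_labels, detector))          # 0.9965, (~0.33, 0.75)
```

The do-nothing system wins on accuracy while being useless, which is why rare-event detection needs measures like precision and recall. And even those need labelled ground truth, which in the CTBTO case is exactly what is in short supply.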
The Leverhulme Centre for the Future of Intelligence announced the Trustfactory as a way of defragmenting the AI and trust space, bringing together projects from different places. Will be interesting to see where this goes. One of the project ideas within this is helping policymakers and technologists to work together and understand each other, which has great links to Doteveryone’s digital leadership work.
Finally, there were also two fairly good jokes:
Vicki Hanson from the ACM: It’s fifty years since 2001: A Space Odyssey, where we see the utopian dream, and then the fear as HAL turns out to have an agenda of its own — just as my computer turns out to have its own agenda today.
Roger Penrose: Thank goodness we can swear at today’s Turing machine-style computers. We won’t be able to do that when they feel like we do.