Notes: trust and cultural angles on AI; internet stuff; climate response

Thanks to Luis Villa for this write-up of the current Vizio case (GPL enforcement).

The head of the National Audit Office, Gareth Davies, gave a thoughtful speech to Parliament on why good governance matters for the UK's public sector, especially now.

Trust and AI was the topic of Bruce Schneier's September talk at the Harvard Kennedy School:

Trust is essential to society. Humans as a species are trusting. We are all sitting here, mostly strangers, confident that nobody will attack us. If we were a roomful of chimpanzees, this would be impossible. We trust many thousands of times a day. Society can’t function without it. And that we don’t even think about it is a measure of how well it all works.
In this talk, I am going to make several arguments. One, that there are two different kinds of trust—interpersonal trust and social trust—and that we regularly confuse them. Two, that the confusion will increase with artificial intelligence. We will make a fundamental category error. We will think of AIs as friends when they’re really just services. Three, that the corporations controlling AI systems will take advantage of our confusion to take advantage of us. They will not be trustworthy. And four, that it is the role of government to create trust in society. And therefore, it is their role to create an environment for trustworthy AI. And that means regulation. Not regulating AI, but regulating the organizations that control and use AI.
...And we need one final thing: public AI models. These are systems built by academia, or non-profit groups, or government itself, that can be owned and run by individuals.
The term “public model” has been thrown around a lot in the AI world, so it’s worth detailing what this means. It’s not a corporate AI model that the public is free to use. It’s not a corporate AI model that the government has licensed. It’s not even an open-source model that the public is free to examine and modify.
A public model is a model built by the public for the public. It requires political accountability, not just market accountability. This means openness and transparency paired with a responsiveness to public demands. It should also be available for anyone to build on top of. This means universal access. And a foundation for a free market in AI innovations. This would be a counter-balance to corporate-owned AI.
We can never make AI into our friends. But we can make them into trustworthy services—agents and not double agents. But only if government mandates it. We can put limits on surveillance capitalism. But only if government mandates it.
Because the point of government is to create social trust.

Simon Willison writes about trust and AI too, in an exploration of some of the practical realities of today's trust crisis in AI.

Ryan Broderick on Garbage Day (from November) with a useful summary of two of the key terms in the AI futures noise:

The altruists, which includes folks like Elon Musk and Sam Bankman-Fried, believe that maximum human happiness is a math equation you can solve with money, which should be what steers technological innovation. While the accelerationists believe almost the inverse, that innovation matters more than human happiness and the internet can, and should, rewire how our brains work. Either way, both groups are obsessed with race science, want to replace democratic institutions with privately-owned automations — that they control — and are utterly convinced that technology and, specifically, the emergence of AI is a cataclysmic doomsday moment for humanity. The accelerationists just think it should happen immediately. Of course, as is the case with everything in Silicon Valley, all of this is predicated on the unwavering belief in its own importance.
Contrast with Ben Thompson:
In this I do, with reluctance, adopt an accelerationist view of progress; call it r/acc: regretful accelerationism. I suspect we humans do better with constraints; the Internet stripped away the constraint of physical distribution, and now AI is removing the constraint of needing to actually produce content. That this is spoiling the Internet is perhaps the best hope for finding our way back to what is real.
Henry Farrell on how LLMs, ChatGPT and the like could be more interesting through a cultural lens:
“Gopnikism,” as Cosma has dubbed this understanding of LLMs, implies that these models are incapable of hallucinating falsehoods: in Yiu, Kosoy and Gopnik’s description, they are incapable of distinguishing between “veridical and nonveridical representations in the first place.” Or as Cosma puts it more bluntly, “an LLM isn't doing anything differently when it ‘hallucinates’ as opposed to when it gets things right.” Our capacity as humans to get things right or wrong depends on our relationship to base reality, and our ability to try to solve the “inverse problem” of mapping how this reality works. LLMs don’t have that opportunity to explore and try to figure out what causes what.
...Gopnikism argues that LLMs are incapable of innovating, but they are good at imitating, and for some purposes at least, they are much better at it than human beings.
That is why we can think of LLMs as a cultural technology. A lot of human culture involves imitation rather than innovation...
Now, we have a new technology for cultural transmission - LLMs. The vast corpus of text and data that they ingest is a series of imperfect snapshots of human culture. Gopnikism emphasizes that we ought to pay attention to how LLMs are likely to transmit, recombine and re-organize this cultural information, and what consequences this will have for human society. ...
You can envisage a future in which the consequences of LLMs (and perhaps other kinds of ML too) are largely negative, creating conformity, or (though this is not a major focus right now) sterile dissension. Or you could see a future in which they’re used to engender human creativity and problem solving, even if they cannot replace it. Understanding LLMs as cultural technologies presses us to think about which future is more likely, and, perhaps, how best to reach the better futures and avoid the worse ones.
Which reminded me of Danny O'Brien's article about polarization from March 2023:
When the polarisation truly began to hit in the United States, back in 2015, I read a lot about the Reformation in Europe. It’s hard to extract much solace from the 100 years war, but I did. The West crafted a ceasefire from the religious wars that spilled out from those 95 new axes of freedom. The United States, in particular, was an unexpected commitment between religious maniacs, so intolerant that they were physically as well as conceptually displaced thousands of miles away, maniacs who thought that their neighbors — only a little more distant than those crammed into Southern England or Holland — were literally irredeemable. Somebody wants you dead in 2023? These people thought you deserved to die, then burn in hell for all eternity. ....
Anyway, one of the things I’m messing around with is to use GPT as a bridge across that gulf. I get it to take some post that I don’t like, that I can’t read because it irritates me so much, the thing that shuts me off from new or distant ideas, and I automatically ask my pet GPT to rewrite it so I won’t bounce off it. Not buy into it: but not be alienated by its apparent proximity or distance from the worlds I do believe I understand. Texts in Chinese, in Hindu; local beliefs expressed in sneers and in dismissals. Love I don’t understand, fears I can’t sympathise with.
That article has some other interesting ideas in it too:
Everyone worries about polarization, and online radicalization. But we don’t often seem to worry about our own process of radicalization. Like many of my friends, I’d characterise my politics as having grown sharper over time, in contrast to the softening that I’d been told to expect comes with age.
... I sometimes think of online polarisation as being how the inflationary universe was described to me once (and oh boy, if I’m wrong about some things, I really bet I’m wrong about the structure of the early universe). The universe is expanding, I was told, but from any one spot, you won’t see it expanding. You just see everything moving, on average, further apart. Like ink marks on the surface of a balloon that’s being inflated, the universe is always unbounded, but somehow the distances grow in every direction.
That’s what the world’s opinions feel like to me. Some of it is that the Internet provided us with better space telescopes to see across this universe: Europeans knew something of America, but now they hear directly from Americans, and vice versa. Who knew what evil lurked in the hearts of men, until NextDoor came along?
I'm learning so much about the state of online culture from Garbage Day recently! All the TikTok and Instagram content I don't see except very, very indirectly. Here's a highlight from last month:
The most important story in both tech and politics going into 2024 (an election year, mind you) will be the lack of cultural consensus in America. After almost 15 years of platforms semi-reliably tracking our national conversation — or at least giving the illusion that they were — we’re now beginning to realize they don’t. We already can’t figure out if something’s really trending or not or what caused it to trend if it is. But we now live in a world where our media and political apparatuses require that kind of information to function. The loudest and oftentimes most influential parts of American society have been looking at what’s online and repackaging it back to us for a decade and now that trick doesn’t work anymore.
Here's another, older excerpt from Garbage Day (highlight mine):
Sweeney went on to write, “Guys who got rich off of garbage reality TV always told themselves that they were populists who were in touch with what consumers REALLY want and it turns out their success was actually a product of linear TV's addictive ‘watch whatever's on’ flow rather than a real preference.” Which has so many corollaries to digital media that it makes my head overheat like an old computer when I consider them. It is very possible that the entire pop-cultural landscape right now is imploding because people are no longer interested in just mindlessly staring at trash on their screens and everyone at the top of the companies making that trash are doing everything they can to fight against that.
...I want to zoom out and talk about GPT-4 diagnoses because this isn’t the first story like this I’ve read. Back in March, GPT-4 analyzed a dog’s blood charts and accurately diagnosed what was wrong, as well.
As these stories pop up people act like they’re an incredible marvel of AI superintelligence. They’re no doubt interesting, but I’m old enough to remember when people were sharing the same stories… about Google. WebMD and its ilk are so bad nowadays that it’s become a universal punchline, but the real takeaway here, if you ask me, is that it turns out platforms are useful when they aren’t beaten down by ads and spam and all the other capitalistic trappings of the current internet landscape. You don’t need a chat bot to use the internet this way. You just need an internet that isn’t full of junk.
And another:
Which means our near-future will most likely be streaming shows referencing memes you’ve never heard of featuring talent famous on an app you only half-recognize while the news on your TV — which you watch clips of on YouTube — tells you about Jason Bateman’s new podcast, which is apparently very popular, and which you can watch, not listen to, on a streaming app you might not have.


Giles Turnbull notes we need really, properly simple tools to make the web weird again. I might add that this was the original form of the web's promise to empower people to share - not a dense stack of rapidly-obsoleting frameworks. (HT Adrian McEwen)

Excavating further in my dusty notes pile, I came across this from David Bent last June, in response to the question of what is needed to respond to the current state of the climate and the climate community ecosystem. I liked the way David brings together many ideas into quite a short set of recommendations. (David's writing, as he explores "what's next" for him, has had a lot of gems in it lately.)

The following recommendations are wrong, but hopefully useful.
People will be fearful and looking for certainty. Therefore, we need to develop an overarching direction of ‘security for all through renewal’. The alternative is ‘security through protection’, which leads to authoritarian responses or fascism.
We need PILLARS of activity:
  • Out with the old (aka ‘degrowth’). The managed decline of destructive industries and disrupting rent-seeking investors and incumbents.
  • In with the new (aka ‘growth’). Industrial strategies for mitigation and adaptation, using missions, nurturing niches, and driving diffusion of ready innovations.
  • Multi-layered resilience. Making our societies anti-fragile at supra-national, national and sub-national levels.
The FOUNDATIONS, or principles, for those pillars include:
  • Effectiveness over efficiency: contribution to multi-dimensional long-term goals more important than narrow measures of volume of activity.
  • Exploring over navigating: learning from experimentation, not just improving the status quo.
  • Creating inclusive ‘imagined communities’, so there is collective meaning-making and sense of belonging.
  • Shifting governance toward: procedural justice, democratic pluralism, and mechanisms against institutional capture (seeing as the role of the state is going to grow).
  • Accepting a pluriverse, a patchwork quilt of different kinds of political economies which can exist together, rather than one hegemon.
David Finnigan writes insightfully about, I suppose, the polycrisis, and how we gather strength as we go along:
You've already, in the last years and decades, absorbed so much climate grief and loss - and you're still okay. You've already downgraded your expectations of the future, how safe and successful you expect to be in your life - and you're still okay.
Compared to what we imagined for our futures back in the 1980s, 90s, or 2000s, our visions of the future in the 2020s are grim. Our political systems are unravelling, the background murmur of ecological breakdown has risen to a roar, and our hopes for our retirement have dwindled to an expectation that we will grow old in a poor and conflict-ridden society on a scorched planet.
If your younger self from 10 or 20 years ago were suddenly jumped forward in time to this moment, they'd be shocked. The ecosystems that have already collapsed! The political extremism and corruption that has rotted our society at the root! If you read the last twenty years of headlines in one hit, you'd be devastated.
But you're okay. You're carrying on. You're devastated, but not all the time. You still feel happy, maybe more often than you'd expect. There is beauty in the people around you, in the bright moments of the day. Seeing a full moon is just as lovely now as it was back then.
What I mean is this: the coming climate shocks and ecological destruction will bring great grief to your life, not to mention inconvenience and difficulty. A life lived above 40 degrees Celsius is a tough life for a mammal like you. But it won't change who you are, and it won't stop you feeling joy and delight.
...Covid is a good example of a medium-sized global shock. By the middle of this century, we can expect to be getting shocks around that size a couple of times a decade - along with occasional bigger shocks, and many more frequent small ones.
Did Covid make us weaker or stronger?
The answer is, obviously, weaker. Covid smashed global supply chains, hammered our healthcare systems, killed millions and continues to cause massive waves of sickness and death. It left us sicker and poorer and more vulnerable. At the physiological level, repeated bouts of Covid don't make you strong as if you're going to the gym. Socially and politically, we learned nothing.
But the paradox is: we learned so much. Each of us learned what it is to navigate a pandemic crisis. We learned how to live through global supply chain shocks. We learned how to survive being trapped at home for weeks and months. We learned how to cope when our plans dissolve in our hands. We learned how fringe social movements gather strength in times of dislocation, how they divide families and friends from each other. We all had a taste of something very new and very hard.
Some painful but thoughtful reflections about convening and facilitating, from Rich Bartlett:
It raises questions about what kind of experiences I want to design for. Until recently, I designed for “inclusion at all costs”. ... I know I can design events that minimise discomfort. But my new learning edge is how to design for the “right kind of suffering”. There’s a particular type of discomfort which is the unavoidable symptom of growth. So if I’m vigilant about avoiding discomfort, I’m also avoiding opportunities for growth.

Thanks for sharing all this, Rich. 

And finally: on being listed in the court document of artists whose work was used to train Midjourney with 4,000 of my closest friends...
Cat and Girl (click for full size)