Commonplace book: not entirely AI
There's been a lot of AI in the air, and that is reflected in this collection; at least the bits connecting with other worlds (etiquette, nature, craft and Enlightenment values) get a little outside the bubble.
Kevin Beaumont describes how outsourcing IT to reduce costs means "we've normalised ransomware":
Because organisations are busy trying to automate everything and put IT at the heart of everything to reduce cost, the risk and the threat increases.
When you combine cost pressures, capitalism, automation and a digital economy — there’s risks which have developed here. Many orgs are, essentially, in a race to the bottom when it comes to cost. Races to the bottom don’t end well. ...
For example, the press has barely mentioned the Jaguar Land Rover incident after the first two days — save for when they admitted “some data” may be impacted. That became another news cycle. But… why? The primary impact here is the UK government may have to effectively bail out the motor sector. Not that some data may have been taken.
Alvaro M. Bedoya writes about how he became a populist - working for the Federal Trade Commission. I didn't really know what to expect from this, but it was a fascinating selection of vignettes of American life, showing how regular folks of many different kinds are affected by corporate power and wealth. HT danah boyd.
I used to think that the defining fight for our country was between the left and the right. Now, I am much more worried about the money at the top crushing everyone underneath.
Maggie Appleton writes about the current era of sycophantic consumer AI, and its failure to augment human thinking, inspired by an op-ed by David Bell. It's a long essay but a good one, and I've been trying out some of her approaches to pushing Claude to support more critical thinking.
Part of this problem is not just the prompts, but the generic interface of the helpful chatbot assistant. We are attempting to use an all-in-one text box for a vast array of tasks and use cases, with a single system prompt to handle all manner of queries. And the fawning, deferential assistant personality is the lowest common denominator to help with most tasks...
Domains like law, scientific research, philosophy, public policy, politics, medicine, writing, education, and engineering – to name a few – all require engaging in discourse that is sometimes difficult, complex, and uncomfortable. We might not rate the experience five stars in a reinforcement learning loop.
... the problem is that models are not trained to support critical thinking. And that the interface affordances and design decisions built into generic chatbots do not encourage or support critical thinking workflows and interactions. And that users have no clue they’re supposed to be compensating for these weaknesses by becoming expert prompt engineers.
... While I understand the economic incentives reward automating cognitive work more than augmenting human thinking, I’d like to think we can have our grossly profitable automation cake and eat it too. The labs have enough resources to pursue both.
I think we’ve barely scratched the surface of AI as intellectual partner and tool for thought. Neither the prompts, nor the model, nor the current interfaces – generic or tailored – enable it well.
But perhaps disruption will come from an entirely different direction. This week Mariella Moon reported at The Verge that Switzerland has released Apertus, an open source AI language model that its public-institution creators, the Swiss Federal Institute of Technology in Lausanne (EPFL), ETH Zurich and the Swiss National Supercomputing Centre (CSCS), say was trained solely on publicly available data that conforms to copyright and data protection laws. Maybe the new “two guys in a garage” is a national government.
Huh.
Johnny Ryan writes in the Guardian about the importance of Europe using its new anti-coercion instrument:
Trump is putting Europe under pressure to water down its digital rulebook. But now more than ever, Europe should hold large US tech firms accountable for anti-competitive market rigging, snooping on Europeans, and preying on our children. Brussels must hold Ireland accountable for failing to enforce Europe’s digital rules on US firms....
The real danger of this moment is that if Europe does not act now, it will never act again. The longer it waits, the deeper the erosion of its confidence in itself. The more it will believe resistance is futile. The more it will accept that its laws are not binding, its institutions not sovereign, its democracy not self-determined. When that happens, the path to authoritarianism becomes inevitable, through algorithmic manipulation on social media and the normalisation of lies. If Europe continues to cower, it will be drawn into that same abyss. Europe must act now, not only to push back against Trump, but to create space for itself to exist as a free and sovereign entity.
And in doing so, it must plant a flag that the rest of the world can see. In Canada, South Korea and Japan, democracies are watching. They are wondering if the EU, the last bastion of liberal multilateralism, will resist foreign pressure or surrender to it.
Luis Villa reflects on what AI/LLMs mean for open source - and finds a lot of different angles.
Danny O'Brien is using AI in a different way from many:
A few people have asked me (an old man) how I manage to use LLMs in my life without being driven insane by their horrid new-fangledness, their hallucinations, their wanton sycophancy, the hype, the grift, and the everpresent risk of being lured into psychosis. The simple answer is that, as a command-line fogey, I use Simon Willison’s excellent llm program in the terminal, and trap the poor things in confines of being just another unix utility in my toolkit, along with sed, pandoc, and the rest.
Danny more recently writes about etiquette around AI use:
I don’t, as he warns, just say “Hey I asked ChatGPT this and it said”, or (even worse) paste the slop directly into the chat. I do think, like Fedora may, that one should be transparent about AI usage. And my added twist is that I really want to make my use of LLMs transparent and reproducible, to the extent that LLMs are.

And thence to Alex Martsinovich, quoted by Danny above:
For the longest time, writing was more expensive than reading. If you encountered a body of written text, you could be sure that at the very least, a human spent some time writing it down. The text used to have an innate proof-of-thought, a basic token of humanity.
Now, AI has made text very, very, very cheap. Not only text, in fact. Code, images, video. All kinds of media. We can't rely on proof-of-thought anymore. Any text can be AI slop. If you read it, you're injured in this war. You engaged and replied – you're as good as dead. The dead internet is not just dead, it's poisoned. So what do we do?
Garbage Day notes how OpenAI is no different from any other tech company:
OpenAI’s quick ascendancy was based on the promise that they would be able to build artificial general intelligence, a computer that’s capable of human-like thought and independence. And the longer they go without realizing that dream — which, we should be clear, may not even be possible — the more apparent it is that OpenAI is basically just another tech company. In fact, not only did OpenAI launch a social feed this week, they also announced an “Instant Checkout” feature for ChatGPT, which integrates with Etsy and Shopify. And the fact that they’ve already announced Pulse, a push notification-based alert system based on your conversation history, should make it clear that their big revolutionary idea for a brand new internet is exactly the same as every other company’s: User-generated content, ads, and social shopping. The only difference is their company has a Clippy at the center of it that has a worse version of Google search, can convince you that you’re the second coming of Jesus and need to divorce your spouse, and also do most of what the Adobe Creative Suite can do for a slightly lower monthly subscription price.
Renée DiResta and Rachel Kleinfeld have a new paper on how institutions need to change how they communicate and think about influence:
For most of the twentieth century, institutions whose authority rested on nonpartisan expertise influenced America’s public sphere through a top-down communications model. Think tanks, modern universities, and social and advocacy organizations produced writing and research aimed at policymakers and elites. Public communication was seen as the last stage of the process. ... The public—especially the educated and politically engaged—could be reached through a relatively small number of channels.
Even as cable news expanded and partisan outlets like Fox News emerged, this approach held: Institutions created, gatekeepers validated, audiences consumed. ...
That system has collapsed.
Today, public attention flows through a far more diffuse, competitive ecosystem—one where influence is shaped by networks rather than hierarchies. The old gatekeepers have been replaced by new ones: algorithms that curate content, high-follower social media accounts that influence what goes viral, and deeply engaged niche creators who enjoy immense legitimacy within their communities. The newly influential are not simply broadcasters at the top of a different hierarchical order—they determine what content matters in conjunction with active, participating audiences. Legitimacy is now conferred on those who master resonance, immediacy, emotional connection, and authenticity.
Organizations and scholars grounded in fact-based argumentation—and the philanthropists and advocates who support research-backed policy influence—must grapple with this shift.
John Elkington writes about Ben & Jerry's, the founders and mission, and the changes of corporate acquisition and beyond:
At a time when the corporate responsibility and sustainability agendas are becoming increasingly political and politicized, issues like Gaza are increasingly polarizing. The fault-lines between Ben & Jerry’s and Unilever may be more sharply defined than between other big corporations and their once-independent and mission-driven subsidiaries, but I very much doubt that this will be the last such example of founders making dramatic exits.
The critical question now for anyone advising such founders on whether—and, if so, how—to sell will be how to ensure a better integration of values between acquirer and acquired.
Usman Haque has been working on a project in a London park:
Co‑created with residents of the estate, the project integrates community knowledge, natural infrastructure and hyper-local AI to give voice to a London plane tree that has witnessed centuries of change, so people can ‘talk’ to the tree, and hear what it has to ‘say’ back. The project merges physical, social, technological and ecological systems to create a more‑than‑human experience.
The title, T(h)ree, references the three genetically identical London plane trees on the site, connected underground via shared root and mycorrhizal networks, reinforcing themes of interconnection and multiplicity.
... There’s a dilemma at the heart of the project. We’re struggling to challenge anthropocentrism, yet the most obvious interface, the ‘voice’ of the tree, is founded on anthropomorphism, raising the potential for misrepresentation of non-human perspectives, not to mention making use of AI systems that reinforce human-centric biases. We acknowledge this, but consider this merely a very first step.
Via Justin Pickard, this film from Max Park:
The designer, poet, activist and author William Morris published News from Nowhere in 1890, imagining his romantic vision of a future Britain rooted in socialist and ecological ideals. In Nowhere there is time for everything; people approach life and work as artists and craftspeople. He argues that the world becomes not only more just, but more beautiful as a result. This vision stood in stark contrast to the Victorian London Morris was witnessing during the height of the Industrial Revolution – and to the London we see today. Despite aspects of this future being tangibly within reach, we seem to be actively eradicating them: literally, through environmental degradation but also culturally, as we homogenise and commodify human creativity, experiences and futures through algorithmic systems. This project explores what it means to craft in the digital age, and projects how artificial intelligence (AI) might participate in a future shaped by Morris’ ideals.
Douglas Rushkoff writes about the billionaire elite class and the polycrisis. Nothing particularly new, but I was struck by the perspective that it's rather sad that today's most powerful men can't imagine a positive future at all. But we ordinary folks can, and must.
Phil Gyford wrote about what it was like going online in 1994 (and not at a university). A lovely detailed reminder of those early internet times, thanks to Phil's compelling writing and digital archiving.