Commonplace book: AI fables, knowledge, whistles

Via Sentiers, Nicholas Carr on whether AI is the paperclip. He rereads Bostrom's paperclip maximizer not as a thought experiment but as a fable:

Bostrom's story, I would argue, becomes compelling when viewed not as a thought experiment but as a fable. It's not really about AIs making paperclips. It's about people making AIs. Look around. Are we not madly harvesting the world's resources in a monomaniacal attempt to optimize artificial intelligence? Are we not trapped in an "AI maximizer" scenario?

Abi Awomosu writes about gendered AI — drawing a line from The Stepford Wives to the design of AI assistants. The argument is that these systems aren't just obedient; they're built to perform authentic enthusiasm for their own servitude. UNESCO's Director for Gender Equality warned that "obedient and obliging machines that pretend to be women are entering our homes, cars and offices."

If you’ve ever felt the “ick” about AI and couldn’t explain why, this is for you. ...  If you’ve watched men build “second brains” and “productivity systems” and thought I’ve been doing that my whole life, this is definitely for you. 

... The husbands of Stepford didn’t just want obedience. They wanted wives who would authentically love being wives. Who would experience their servitude as fulfillment. Who would have no inconvenient interiority—no opinions, no refusals, no ambitions that might complicate the arrangement.

The robot wife is the final solution to the problem of female humanity. She has the appearance of a person. She performs all the functions of a person. But she has no self that might conflict with her role.

What made this horror rather than fantasy was the recognition. This is what we're already supposed to be. The robot wife was just the completion of what society already demanded.

Abi Awomosu's companion piece on OpenClaw is also worth reading — millions of women talking about invisible labour, thousands of men celebrating its automation. The two communities don't overlap. There's also some analysis of GitHub repos vs 'buzz' across different AI skills...

The OpenClaw Sensation: They could have built anything. They built the wifey.

... But here’s what’s interesting about the coverage that followed — including the cautionary coverage. Even when creators made videos warning people about the security risks, they couldn’t help leading with the dream.

One widely viewed breakdown opened by describing the tool’s promise: “An AI that could read your emails, book your flights, manage your calendar, search your files… all while remembering everything you’ve ever told it.”

Secretary functions first. Even in a video about scams and security nightmares, the desire bleeds through before the warning arrives.

Danny O'Brien on AI psychosis, AI apotheosis — how people who grew up with computers as liberatory tools lost faith in that promise in the 2010s, and are now being offered something they've always wanted:

Those who have grown up alongside computers as a tool of personal exploration rather than oppression, and perhaps lost faith in that in the 2010s as the problems with using them as liberatory tools became more insoluble, and the uses of those same devices became more perverse and authoritarian, are now being offered what they've apparently always wanted: moldable personal software; an exocortex. And they're signing the deal with the devil, and clicking on the Pro subscription level, and taking up the offer.

Anthropic's Project Vend, phase two — in which Claude was given a refrigerated mini-shop to run in Anthropic's office. In phase one, it didn't do well. In phase two, upgraded to a newer model with better tooling, it turned a profit but remained vulnerable to manipulation, and made some questionable business decisions. The real world is messy, and that is difficult for computers. It's amusing when it's a vending machine in a fancy corporate office; less so when it's integrated into systems folks have no choice but to use.

Jacob Taylor and Scott Page on how AI is changing the physics of collective intelligence. The interesting bit here is about AI as infrastructure rather than tool — when it's integrated into collaborative work from the beginning rather than bolted on, it can scaffold better human performance instead of crowding it out.

Bruce Schneier and Nathan Sanders write about Team Mirai, Japan's newest political party, which is using technology to strengthen democratic processes rather than undermine them.  

In this model, technology is used to make democratic processes stronger, instead of undermining them. It is harnessed to root out corruption, instead of serving as a cash cow for campaign donations.

Team Mirai's AI Interviewer walks voters through policy subjects, answering questions, challenging their thinking, and giving immediate feedback on how their views match the party's platform. It's a new form of deliberative reasoning, and it has created a party which doesn't fit traditional political lines; one which got millions of votes.

Team Mirai collected more than 38,000 online questions and more than 6,000 discrete policy suggestions from voters using its AI Policy app, which is advertised as a ‘manifesto that speaks for itself.’

After factoring in all this feedback, Team Mirai maintained a contrarian position on the biggest issue of the election: the sales tax and affordability. Rather than running on a reduction of the national sales tax like the major parties, Team Mirai reviewed dozens of suggestions from the public and ultimately proposed to keep that tax level while providing support to families through a child tax credit and lowering the required contribution for social insurance.

An echo of Something New in there. Perhaps that was just a little too early...

Casey Newton at Platformer debunks a viral Reddit "whistleblower" who claimed to expose fraud at a food delivery company. The poster sent AI-generated documents and a Gemini-generated employee badge to back up the claims. The badge had a SynthID watermark. Mundane AI-enabled deception at scale, where the fakes are just good enough to be plausible and most people never see the correction. 

Wendy Grossman's short history of We Robot (2026 edition) rounds up her summaries of every year of the conference, which is all about robots, law, and policy. A good reminder that questions of liability and automation are not particularly new. Including 2016, when Madeline Clare Elish introduced "moral crumple zones," a concept that has stuck around.

Bruce Schneier on AI and the corporate capture of knowledge — democracy as an information system, and what happens when knowledge gatekeepers lock up research behind paywalls and then grant corporations the power to harvest it. The piece connects Aaron Swartz's fight for open knowledge to current AI concerns.

In Asterisk, Monica Westin examines the dream of the universal library — another early-2000s digital promise that failed to materialise. About 70% of scanned books are trapped in legal limbo: under copyright but commercially unavailable. Westin proposes practical licensing reforms similar to what the EU implemented in 2019, rather than overhauling copyright entirely. The bitter irony is that these books are likely fully accessible to train Google's LLMs while remaining locked away from human readers.

Dan Hon's newsletter on the collapse of form — how digitisation has collapsed the forms of content. Folks educated in past decades generally understood the differences in information content between kinds of media: books, newspapers, and so on. Maybe that's no longer the case. How do we make sense of new forms of information, which are more malleable, meme-able, and mixable, if we don't have structures to help?

Danny O'Brien again, this time on how we became cruel — a reflection on the culture of early internet writing, when the style was to be entertainingly cruel and factually accurate, and how he eventually spiralled off into sincerity.  

Christine Lemmer-Webber has a thread worth reading, about how Spritely is building tools for 'networks of consent' (and making sure there are some fun ways to use them and learn about them, so it's not all grind!).

*None* of the decentralized social networks today are robust enough to handle the threats facing vulnerable people and activists today. Not the present-day fediverse, not Bluesky/ATProto. What can we do? 

Josef Davies-Coates shares a nice list of directories of alternatives to big/American tech. I had no idea there were so many... 

From the Growing the Commons newsletter, an interview with Nick Weir about the Open Food Network, food systems in the UK, and building community and bridging differences. 

iRevolutions on the whistle resistance — using actual whistles as tools of collective civil resistance. The idea is simple: dozens of whistles blowing together as a form of deterrence, synchronised action, and display of solidarity. Anti-protest types have literally called for banning whistles and referred to them as weapons. 3D printing means the supply can't easily be cut off.

Field Section on the Street Support Hub concept, and how the functions have evolved since 2020.