Commonplace book: mental models, autonomy, a poem
Doug Belshaw has a thoughtful post about the financial/work advice often being given to young folks today:
The financial guidance given to young people today comes from people who experienced an entirely different economic reality. Boomers hold fundamentally different mental models about how wealth accumulation works, learned in a completely different era. For example, my dad bought a house for 1x his annual salary in his twenties, whereas my first house cost me about 5x my annual salary. ...
Wages are currently growing at an average of 0.1% annually rather than the 2.7% my parents' generation could expect. ...
Traditional career advice, the kind that I received when I was younger, assumes linear progression. The world is no longer like that and this model no longer exists. What some might call “job-hopping” is actually strategic career building in a market that's changed fundamentally. ... 39% of Gen Z are juggling a part-time or full-time job with freelance work. This is a rational approach when any single income stream might disappear.
... The good news, I think, is that mental models can be updated. Structures can be rebuilt. Pathways can be created that don't require young people to either accept permanent precarity or take moonshot bets on prediction markets. What we will need is political bravery requiring different thinking, different policies, different advice.
Our first step is to admit that the current advice isn't working. And that conversation hasn't even started yet.
Steve Taylor draws an unexpected parallel between... local transport in the 1800s, and Level 3 autonomy :)
Here in Cambridge, horse-drawn trams followed fixed routes. The horses knew the stops. Drivers were present mainly for exceptions. The system ran inside a tightly defined operational design domain.
In modern language: constrained autonomy with human fallback.
And then, in 1897, the entire system nearly collapsed.
... The tram system did not fail because horses forgot how to walk the route.
It failed because the infrastructure was fragile, undercapitalised, and underappreciated...
An interesting point for modern autonomous vehicles.
James Bridle has a lovely thread about #actuallyexistingsolarpunk - great stuff.
A while ago, there was some news doing the rounds that Microsoft had turned off the email of Karim Khan at the ICC in the Hague, after Trump issued an executive order. Turns out this probably wasn't quite what happened, and:
Smith at the end of April said Microsoft would push back on orders to suspend European cloud operations, in an attempt to assuage fears about a Trump-ordered kill switch.
The company announced then that it would add a binding clause to its contracts with European governments and the European Commission, stating that it would keep the option open to go to court in the event other governments ordered it to suspend or cease cloud operations.
Garbage Day notes the rise of the troll state:
Maduro’s arrest is connected to a new kind of politics I’ve spent the last year struggling to describe. A profoundly embarrassing collision of violent nationalism, illiterate social chatter, and memetic fascism that’s been spreading across the globe since the pandemic. We’ve seen hints for years now that the elites of the world are just as addicted to — and dependent on — the same social platforms that we are. Ignoring the near-constant public embarrassments of our shitposter ruling class that play out on platforms like X every day, our leaders are also digitally networking with each other behind closed doors.
Amanda Sloat writes to European friends with a call to action, and a very clear summary of what the current US administration has been doing in international policy.
Back in November, Careful Industries wrote up the Careful Consequence Check, which "can be used to make a rapid assessment of the potential risks and issues involved in using or adopting an AI-powered product, feature, or service." It is delightful to see that something I had some small involvement with in its very early days is still evolving, and being used and useful.

This seems less than optimal:
https://tldr.nettime.org/@remixtures/115464596177491635
One of the benefits of a diverse network and professional history is that you sometimes get fascinating news from fields you wouldn't normally encounter. This article by Rachel Brazil, who formerly worked at Nesta on the Crucible fellowship, was a delightful glimpse into how there are still radical new frontiers in how computing chips might work in the future.
Lauren Leek looked into Google Maps restaurant ratings...
The most important result isn’t which neighbourhood tops the rankings - it’s the realisation that platforms now quietly structure survival in everyday urban markets. London’s restaurant scene is no longer organised by taste alone. It is organised by visibility that compounds, rent that rises when discovery arrives, and algorithms that allocate attention long before consumers ever show up. What looks like “choice” is increasingly the downstream effect of ranking systems.
For policy, that shifts the frame. If discovery now shapes small-business survival, then competition, fairness, and urban regeneration can no longer ignore platform ranking systems. Councils can rebuild streets and liberalise licensing all they like - but algorithmic invisibility can still leave places economically stranded. Platform transparency and auditability are no longer niche tech debates; they are quietly becoming tools of local economic policy. At minimum, ranking algorithms with this much economic consequence should be auditable. We audit financial markets. We should audit attention markets too.
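Leek's point about "visibility that compounds" is essentially a preferential-attachment loop. As a toy sketch (my own illustration, not Leek's model and not Google's actual algorithm): even identical restaurants end up wildly unequal if discovery is proportional to existing review counts:

```python
import random

random.seed(0)

# Toy model: N restaurants of identical quality, but the platform
# surfaces places in proportion to their existing review counts.
# Visibility brings reviews; reviews bring visibility.
N = 20
reviews = [1] * N  # everyone starts with one review

for _ in range(10_000):
    # "Discovery" weighted by current review count - a crude stand-in
    # for ranking position (preferential attachment / Polya urn).
    chosen = random.choices(range(N), weights=reviews)[0]
    reviews[chosen] += 1

reviews.sort(reverse=True)
print("review counts, most to least visible:", reviews)
print(f"top restaurant's share of all attention: {reviews[0] / sum(reviews):.0%}")
```

The early random wobble gets locked in and amplified: which restaurant "wins" is arbitrary, but the concentration itself is almost guaranteed, which is exactly why auditability of the ranking matters more than any individual outcome.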
Via Alex Deschamps-Sonsino, I learned of the Receiver of Wreck. What a job title. Strangely, the role is newer than I initially imagined.
My old Fairphone could now be a webserver.
In the meantime AI is sucking all the air out of the room. So here's some AI notes.
Tim O'Reilly and Mike Loukides write about different possible AI futures and ways to think about signals and scenario planning. Nice big picture perspective without any particular angle to push. (Some bits of it already feel out of date, and I'm not paying close attention, which points to the importance of finding signals in the noise.) Jon Evans discusses the AI bubble:
…You’d think there’d be quite a range between “end of the species” and “immortal luxury for all,” but no, those are the two main camps. (If you’re thinking, hey, those both sound insufficiently weird: you’re right.) Kind of hard to talk about those outcomes and also, in the same breath, include sober quantified analysis of investment growth by sector and how long a recession might last, you know?
But let’s try. Another way to frame AI beliefs is as a spectrum:
1. AI is a once-in-a-species tech that will utterly transform the world by 2030.
3. AI is useless and counterproductive, the technological equivalent of asbestos.
…There’s a sort of horseshoe theory here where people on either extreme end of this spectrum, despite having what are theoretically opposed views, sound a lot more like each other than they probably want to admit: evangelistic, hectoring, suspiciously disingenuous when it comes to ignoring countervailing evidence, convinced their opposite counterparts are scamming grifters, and ultimately thoroughly unconvincing.
The problem is that the reality is probably neither an endpoint nor the midpoint on that spectrum, and where exactly it falls will dictate just how ill-advised this bubble... Nobody really knows, yet, and anybody who says that they do is selling something.
And where we are on that 1-3 spectrum determines how things go when the bubble pops. Evans also notes the potential significant damage done to other sectors, either because of the perception that jobs in the sector might disappear, or because there's no capital available for anything that isn't AI, or of course both.
A similar topic came up in this interview:
Kedrosky: historically, the U.S. has been very good at speculative bubbles. This is one of our main core competencies here. They tend to be about real estate, or they tend to be about technology, or they tend to be about loose credit, and sometimes they even have a government role with respect to some kind of perverse incentive that was created. This is the first bubble to have all four. We’ve got a real estate component, we have a loose credit component, we have a technology component, we have a huge government component, because we’re told “we’re in an existential crisis with China, and we must win this at any cost.”... This is the first one where you end up in the rational bubble theory of all of this, where everyone feels like they’re doing something rational. Yet in aggregate, all of these different people who are looking at the problem through their own lenses are actually profligate contributors to the problem, because it’s the first one that combines all of the forces that historically have made some of the largest bubbles in U.S. history.
... So this is probably one of the narrowest moments with respect to risk capital in the last 30 years in terms of either the money is going to one thing or it’s going to nothing, which is to say venture, secondary credit, growth capital, it’s all going into AI, which is having this impact in those centers which are most prone to having companies doing this kind of work... it’s narrow geographically and it’s narrow sectorally which is really unusual.
I think the flip side of that, and the point I always make, is that whenever all of this capital is flowing to a single thing, it also means that it’s not flowing somewhere else. I think that’s incredibly important to understand. I gave the Taiwan example earlier, where if you’re in AI or semiconductor manufacturing in Taiwan, you’re awash in capital. If you’re a manufacturer of literally everything else, you cannot get a loan. The same thing is true in the U.S., where if you’re an early stage company or a mid-stage company looking for growth capital for almost anything and it doesn’t have an AI component, you’re out of luck, my friend.
This notion of starving not just manufacturers, but growth companies for capital because of the narrowness of the spending almost always has historical consequences. We saw this in the 90s, with the rise of China sort of coincident with the telecom bubble, and how U.S. manufacturers were increasingly starved of capital because it was all flowing sectorally to telecom. We’re seeing the same thing now. That will play out over the next few years. But it’s dramatic right now.
Krugman: ... in international economics it’s “the Dutch disease.” There was this famous period when after the Netherlands discovered natural gas, it really killed their manufacturing sector.
This week also saw Simon Willison's annual review of LLMs. A nice summary of many things, and for me useful because I had missed October's launch of Claude Code for web, which has the useful feature of offering a nice safe sandbox environment. From there to this blog post about how there may be a worrying complacency about AI security:
However, we see more and more systems allowing untrusted output to take consequential actions. Most of the time it goes well, and over time vendors and organizations lower their guard or skip human oversight entirely, because “it worked last time.” This dangerous bias is the fuel for normalization: organizations confuse the absence of a successful attack with the presence of robust security.

On security: the threat is not always the obvious thing. Koi Research found:
We asked Wings, our agentic-AI risk engine, to scan for browser extensions with the capability to read and exfiltrate conversations from AI chat platforms. We expected to find a handful of obscure extensions - low install counts, sketchy publishers, the usual suspects.
The results came back with something else entirely.
Near the top of the list: Urban VPN Proxy. A Chrome extension with over 6 million users. A 4.7-star rating from 58,000 reviews. A "Featured" badge from Google, meaning it had passed manual review and met what Google describes as "a high standard of user experience and design."
A startling poem from 1961 (really!) about AI. Wow. That final stanza.
