Notes: tech safety, loudspeakers, infraordinary
Taps microphone Is this thing on?
Some of these notes are well aged, like a cheese. A whole lot of life, and death, since last notes.
mySociety proposes reframing civic tech as pro-democracy tech.
Ethan Zuckerman has many excellent thoughts on how the Signalgate story is forking reality.
Rob Horning doesn't listen to podcasts. Neither do I, mostly. Some interesting thoughts on parasocial relationships, conversation, etc.
Nick Hunn on the government's push for net zero, judged by how well smart metering has gone:
I wonder how many UK householders know that part of their electricity bill is a payment to the Hebrew University of Jerusalem? ..... So far, the Smart Metering marketing campaign has spent about £500 million trying to persuade us to do something which is free. If the UK is going to meet the Government’s decarbonisation targets for home energy, their next task is to persuade us all to sign up for something which will cost millions of home owners tens of thousands of pounds each. The prospects are not looking good. ... The marketing campaign has been an abject failure, but that hasn’t stopped Smart Energy GB spending around £500 million, all of which has been added as an invisible charge on our electricity bills. Despite that, it looks as if the Government is planning on the same approach to their next imaginary target, which is decarbonising home heating by persuading us all to use heat pumps.
... Everyone has heard of ambulance chasers – the lawyers who chase round after ambulances, hoping that they can persuade the patient in the back to sue someone for causing whatever accident they’ve had. We’ve now got a new variant – the Cavity Wall Insulation chaser.
Nathan Matias has many excellent thoughts on the science of technology safety:
Here’s how the traffic jam happens. High standards of evidence slow down the search for public safety when we use science as a gate-keeper: setting a high threshold of evidence before searching for safer alternatives. In the over-simplified version of this (illustrated below), scientists only start studying something after significant public concern. Then we only start testing interventions after the harm is proven and shown to be widespread. Next, scientists advise that interventions for safer technology be adopted only after further testing, efforts that are hindered by the tech industry’s stranglehold on research.
With each step, scientists are making principled decisions under the circumstances. But from the bird’s eye view, that’s how 60 years can pass before PFAS are replaced and how decades can pass between the first voiced concerns about social media and our current intense debates on what to do about it.
It's more acceptable to critique AI than it was. Thanks Sentiers for this:
The author connects modern defences of AI to Lewis Mumford’s concept of “the magnificent bribe.” Mumford argued that people accept potentially harmful technological systems because they are offered what appears to be a “generous bargain”—material benefits in exchange for accepting the expansion of what he called “megatechnics.” (I wish I’d heard about this before I started using and tagging with “big tech.”) The bribe works by promising a share of technological benefits while distracting from how these systems concentrate power in few hands—and then share very little.
(also check out the lovely Tony Cragg sculpture depicted at the top of that article)
The loudspeaker was invented just 100 years ago.
A script to turn a YouTube video into a text web page with screenshots.
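The linked script's internals aren't shown, but the final assembly step of a tool like that might look something like this minimal sketch, assuming the transcript segments and the screenshots have already been extracted upstream (e.g. with yt-dlp and ffmpeg). `Segment`, `frame_name`, and `render_page` are hypothetical names, not from the linked script.

```python
from dataclasses import dataclass
from html import escape

@dataclass
class Segment:
    start: float  # seconds into the video where this chunk begins
    text: str     # transcript text for this chunk

def frame_name(seconds: float) -> str:
    """Filename of the screenshot grabbed at this timestamp, assumed to
    have been extracted earlier, e.g.:
    ffmpeg -ss 42 -i video.mp4 -frames:v 1 frame_0042.jpg"""
    return f"frame_{int(seconds):04d}.jpg"

def render_page(title: str, segments: list[Segment]) -> str:
    """Interleave each transcript chunk with its nearest screenshot."""
    parts = [
        f"<html><head><title>{escape(title)}</title></head><body>",
        f"<h1>{escape(title)}</h1>",
    ]
    for seg in segments:
        parts.append(
            f'<img src="{frame_name(seg.start)}" alt="frame at {seg.start:.0f}s">'
        )
        parts.append(f"<p>{escape(seg.text)}</p>")
    parts.append("</body></html>")
    return "\n".join(parts)
```

The nice property of this shape is that the page is skimmable like an article but each paragraph is anchored to a visual moment from the video.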
A lovely radio station of calm, mundane infrastructure news. (Unclear whether this is now a recording rather than live, though.) Discovered at Interesting, thanks Russell for another great year! There were so many good things. Watershed's pioneering toilets, of course. And the nascent Society for Hopeful Technologists - this survey is open for another week at the time of typing.
I continue to find Garbage Day makes me think:
But the most fitting synchronicity of all might be that the day that 4chan died — which is also the same day the Titanic sank fwiw — was the same day it was revealed by The Verge that OpenAI is building a social network. A literal changing of eras right before our very eyes. The demise of the text-based, anonymous website that overran the rest of the internet happening the same week we discover the company that continues to promise a new internet may be actually trying to build one. A new internet not just full of autoplaying videos and verified user names, but one where a machine would sort through the human chaos we upload every second of the day. Chaos that, thanks to 4chan, we have to begrudgingly accept is somehow innate to what people will just do when they are safely anonymous behind a computer.
And so, yes, we did lose something this week. And it is almost certainly a better world without it. But it’s also possible we look back one day and wish the internet still felt as messy and, more importantly, human as it did when 4chan ruled the world.
Here's one on the AI Super Bowl ads:
Here is something Silicon Valley has decided you need and you’re going to have to use it. And it can be hard to remember that up until around 2020, Silicon Valley did not typically operate this way.
Most of the big tech companies that are now shoving AI down our throats got as big as they did, not because they sold us a revolutionary new product they dreamed up out of nothing, but because they found, oftentimes, insidious ways to solve a digital infrastructure problem with a private business. Google figured out how to help us find content we were already looking for, Facebook figured out how to help us find people we already knew, Amazon, physical products we already wanted, etc. Yes, these companies would eventually flood the airwaves with ad campaigns, but Google was already a multi-billion-dollar tech company and Chrome had over 100 million active users when they dropped their first Super Bowl commercial, “Parisian Love,” in 2010. That still very clever ad told a story about someone falling in love through the mundane Google searches everyone makes every day. Google’s Super Bowl ad last night, “Dream Job,” depicted a dad getting ready for a job interview by talking out loud in his kitchen to an AI voice assistant, something I am very confident no one has done ever. But that doesn’t matter because Silicon Valley believes they are big enough now to create the future, rather than scale up to meet it.
... Thanks to apps like TikTok, Shein, Temu, and, most recently, DeepSeek, we know that China has caught up to the US and its tech industry has figured out how to innovate in ways ours can't or won’t. You might not like machine-learning-based short-form video apps or gamified social shopping platforms, but they are genuinely new ways of interacting with the web. And US regulators can’t actually stop the tide from turning — at best, the US will become an island surrounded by a global internet run by Chinese software. But what elevates this from lame to genuinely dangerous is that this delusion that Silicon Valley can now decide how the future should look has infiltrated the highest levels of the US government. And AI is the technology powering this delusion.
... All kinds of stupid, awful, ugly shit is popular. That doesn’t mean we have to accept it as inevitable. And, most importantly, it doesn’t make it above the law.
Cat and Girl, excellent as ever. Check out the Great Wave which includes the latest lazy marketing archetypes: