Fortnightnotes: care, degrowth, hype

Starting with a perceptive and interesting idea from incredible women:

https://twitter.com/CassieRobinson/status/1181139137212174336

Doteveryone released a report on Better care in the age of automation. It's a stonker - lots of really insightful points (and worth reading the detail).

Superflux produced an accompanying video which illustrates one possible care future. It doesn't go where you'd expect the story to end up.  Recommended. (Find out more about the film.)
There's a short video featuring some of the experts involved in the report - including carers and those receiving care.


Dan Hon looks into StreetScore as an example of machine learning, of how papers get published, and of how new projects get started. His title says it all: Streetscore, or: how I learned to stop worrying and love machine learning.
I am just not sure that it is responsible to create a dataset for 21 cities, based on 3,000 images from two cities, while simultaneously admitting that the model under-scores images due to features missing from the admittedly anemic dataset and that the datasets should be useful. Forgive me for being a bit weird, but would it not be easier to ask people how they feel about the perceived safety of their neighborhoods?

There is a bit about scaling here. In the abstract, the paper talks about the need for data about the appearance of a neighborhood and that while crowdsourcing is good, it only has a limited throughput. A phrase like a limited throughput is roughly equivalent, in my mind, of it does not scale, and one thing I feel like I’ve learned lately is it does not scale means something like this is hard and we would like to cheat, or for it to be easier, or for it to be less expensive. It feels like there is less rigor involved, and that scaling or increased throughput via the novel application of machine learning and image classification techniques is just code for creating more data.
If you missed it last year, check out Joy Buolamwini's AI, Ain't I A Woman, and wonder about all the money thrown at these human applications of AI, how weak they are, and how they are permeating our lives.


In a recent BIG newsletter, Matt Stoller includes personal stories from corporates and startups, highlighting some of the simple effects of financialisation on what actually happens in organisations. Perhaps this is why productivity is low (not to mention why we end up with the technologies and services we do). 

A new exit route for startups is proposed by Nathan Schneider - "exit to community":
When a startup company takes early investment, typically the expectation is that everyone is working toward one of two “exit” events: selling the company to a bigger company or selling to retail investors in an initial public offering. In either case, the startup is a hot potato. One group of investors buys in order to sell to another group of investors who buy in to sell to the fools down the road. There’s something sort of pyramid-scheme-ish about all this. The exit event, also, is often the beginning of the end of any positive social vision that the company might have held.

What if startups had the option to mature in a way that gets them out of the investors’ hamster wheel?

In the coming months, I will be exploring strategies and stories that could help create a new option for startups: Exit to community. In E2C, the company would transition from investor ownership to ownership by the people who rely on it most. Those people might be users, workers, customers, participant organizations, or a combination of such stakeholder groups. The mechanism for co-ownership might be a cooperative, a trust, or even crypto-tokens. The community might own the whole company when the process is over, or just a substantial-enough part of it to make a difference.
This is an interesting way to recognise the risk and uncertainty of early-stage startups, where a small group needs to be agile in decision-making to get through the toughest times, whilst moving towards a more equitable model as the company scales.

Some months back I wrote about age in technology jobs.  I was reminded of this again by a NYTimes piece about the salary differences between STEM majors and arts and humanities graduates over the course of a career.

Nick Hunn on government IT and the excitement of hype:
As with most Government IT projects, it always looks better to add a version number, as it gives the impression that they know what they’re doing.  In practice, it generally indicates that they’re going to jump on whatever new technology bandwagon is passing.  Which is where Smart Border 2.0 is likely to be headed.
Alex Deschamps-Sonsino monitors the state of IoT hype, often calling out claims about the future which are ridiculously outdated.  In a tweet poking at yet another mention of the classic "internet fridge", she links to a blog post she wrote in 2014 with a better idea for how smarts could support better kitchen organisation - "pantry".  I'd pay good money for this.

You should buy her book on smarter homes.


I've not read Smil's Growth, but I have read an interview in the Guardian.

I feel I've only just started learning about degrowth, or decoupling. (We touched on this at the Festival of Maintenance, briefly summarised in my report. In the meantime I can recommend Shannon Mattern's essay version of her talk for the Festival this year.) Smil points out that we in the UK and other highly developed nations probably need degrowth, but others such as Nigeria still need growth. It's unevenly distributed. The interview alone is packed with facts.

Shannon writes:
Facebook, Apple, Amazon, Netflix and Google depend as much on the extraction and the expenditure of environmental resources as any other growth-oriented industry. By that same token, their “limits to growth” will, similarly, confront us on our city streets, our coastlines, and our farm towns, on private properties and in the commons. As we contemplate legal, economic, and ethical strategies for limiting tech’s rampant growth, we need to look beyond privatised and individual solutions like setting “screen-time limits” or quitting Facebook. As with other degrowth endeavours, we need to strategise at the community, national, infrastructural, and ecological scale—and to acknowledge the crucial importance of maintenance and care at each of those scales.
That said: if we were to de-grow a digital universe monopolised by Alphabet and Verizon, how might we start to repair the vast disparities in informational resources and sustain widespread—and critical—digital literacy? How would we build and maintain infrastructures that promote community-responsive connectivity? How would we recognise the legacies of digital redlining and data colonialism, offer reparations, and care for those communities that have historically been marginalised and exploited? How can we develop regulations and digital pedagogies that prioritise “sharing,” “simplicity,” “conviviality,” “care” and “commoning” above growth?

Whether we find ourselves amidst the vast terrain of the commercial internet; in our libraries, archives and museums; or between the parks, public housing facilities and utility infrastructures of our cities, thinking beyond growth as an end in itself requires attending to maintenance and care: who deserves it, who performs it, and to what end. This new world is one that we can choose to build deliberately and in incremental steps—at a Triennale or a brainstorm at a conference–or it could be forced upon us, necessitating triage and reactionary care. We should start planning for the former.

In the US, a major driver for 'tech ethics' is concern about Immigration and Customs Enforcement (ICE) use of tech.  This Twitter thread explores open source, ethics, and the extent to which licences might be a tool for change.


The general sense is that licences aren't the right enforcement mechanism and we should not dilute the open source definition. Andrew Katz covers this nicely in a response to the proposed Ethical Open Source Licence.


What we call things matters.
Although Autopilot may have serious consequences for the driver, passengers, and those in the vicinity, car manufacturers have nearly free rein when it comes to labeling those systems. Contrast that with the USDA's food labeling standards, which are regulated with the seriousness of an application, approval, and review process. Specific labels are given tight boundary controls (“Arroz con pollo must contain at least 15% cooked chicken meat”). But if you want to brand your car's systems as Auto-magic-pilot-drive-yourself, there is little today that the US Department of Transportation or Federal Trade Commission will do to prevent you.

If food labeling is important enough for such scrutiny, vehicle systems should be given equal attention. Errors caused by mislabeling in the food category tend to be limited to an individual consumer. Driving motor vehicles, on the other hand, can have a very large amplification effect for non-consenting pedestrians, bicyclists, and other drivers.

Language drives expectations, which establish our relationship with technology. With terms like Autopilot (Tesla), ProPILOT (Nissan), and Pilot Assist (Volvo) in use, research shows that 40% of survey respondents believe those systems “drive themselves.”
From Trucks newsletter, an interesting update from the world of safety standards:
UL releases the preliminary draft of UL4600, the first comprehensive safety standard for autonomous products (link, link to draft). 'Other existing safety standards prescribe 'how to do safety' by following step 1, step 2 and step 3. UL 4600, in contrast, is about 'how you’ve done [safety] enough...If you can't say what it means to be safe, and you can't explain why you think the system is actually safe, then probably your system is not safe.' 
In an ideal world I could link to the standard, but this (as far as I can tell) is like many standards in requiring a steep fee to access.

Cyber attacks aren't just about data breaches: the costs of ransomware can be astonishing, and we don't hear about much of it.

A thoughtful thread from Nathaniel Heller, as he moves on from the Open Government Partnership, on what it means to play a long game in policy and activism.

OMG Climate is coming to London for an event on October 18th.

You can support the world's first recycled ceramic tableware - from a Liverpool group - Granbyware.

There's a whole conference coming on the computer mouse. One day, maybe there will be one about touchpads.

Boring technology is great.  Here's a short article on how kettles turn themselves off. This sort of innovation isn't as flash as a hyped startup, but it pays well over time. John Taylor went on to fund and create the Corpus Clock.

Phil Gyford tried to help a Lime scooter, but it ended badly.