Latenotes: open IP, roadmaps, re-engineering humanity, and the challenges of sustainable design

Turns out having a new job significantly delays writing things. But here we are, with a backlog.

http://catandgirl.com/at-work-in-the-content-mines/

I've been reading a recent issue of the Journal of Peer Production, which focusses on the theme of "open" (starting with some reflections on the journal's development). One article dives into open hardware, looking beyond the common forms (electronics, mechanical) to ask what open means in other areas - such as larger-scale equipment, or architecture - and how you might assess and reason about it. Sadly it doesn't dive down to the silicon layer...

In software:
Analytically, the more distributed, informal and open organizations tend to produce the more modular codebases. On the contrary, the more centralized, hierarchical and closed organizations produce more monolithic software. 
What does this mean for hardware?

Francesca Pick writes about the tensions between chaos and order for emerging organisations.
When things are messy and unclear, most of us tend to want to tidy up. I of all people love to create structure and find it hard to resist the urge to organize everything around me. Could it ever make sense to purposefully maintain a status of chaos?
...
Organizations like OuiShare and Enspiral are trying to operate across borders and sectors, as well as outside of binary non-profit / for-profit categories, and the more we grow, the larger the pressure becomes to replace chaos with order to conform with the administrative and legal requirements of the various countries we operate in.
I do think this is a really interesting space, having juggled similar questions at Open Knowledge and the Digital Life Collective, and to a lesser extent at Field Ready.

Jonas Ehrnsperger spoke at a recent open IP event (check out his working paper). He set out some ways of thinking about open IP, including a taxonomy of IP pledges (with examples!).


Extract from Jonas's paper (taxonomy of IP pledges): https://www.repository.cam.ac.uk/handle/1810/289453
and an IP licensing taxonomy:
Extract from Jonas's paper (IP licensing taxonomy): https://www.repository.cam.ac.uk/handle/1810/289453

Thanks to Jonas for permission to include these diagrams. They really helped our discussion and set out the landscape of licensing and pledges in a comprehensible way.

One of the papers our open IP research group has discussed recently is about the science and technology studies (STS) perspective of being involved in the synthetic biology roadmap. If that sounds dry, it's not! This struck me as an interesting counterpoint to the previously discussed topic of "roadmaps" as a way of coordinating across industry, showing the difficulties of bringing a more social angle to roadmapping (and therefore to sector planning). I struggled to pick which bit to quote - so much good material -
First, the future is always uncertain, so attempts to predict or control the development of any technology are necessarily fraught with difficulties. Collingridge’s “dilemma of control” was a pioneering attempt to engage with this unpredictability from an STS perspective. The dilemma arises because, in its early stages, it is hard to predict how a technology is likely to develop, so it is difficult to intervene and shape it, although the power to control and influence its development is high. However, once the consequences of the technology become apparent, the power to control its development is limited because it will have become part of an entangled material, economic, and social fabric (Collingridge 1980). Other authors have built upon Collingridge’s dilemma to demonstrate how innovation is an irreversibly branching evolutionary process that is “shaped” by or “coproduced” with society and that choices between alternative technological pathways tend to get closed down over time. As a result, not all that is scientifically realistic, technically practicable, economically feasible, or socially viable will be historically realizable. Technologies are path-dependent and can become “locked-in” (Bijker and Law 1992; David 1985; Jasanoff 2004; Stirling 2008). The (often misunderstood) lesson of the Collingridge dilemma of control is not that we should give up on attempts to govern the future but that it is necessary to act while acknowledging uncertainty. Collingridge (1980, 12) argues that instead of focusing on better predictions, we should develop a “theory of decision making under ignorance”. Since prediction and control will not be possible, it becomes necessary to incorporate flexibility, resilience, and diversity into technological developments to avoid lock-in.
Second, as literature on the sociology of expectations demonstrates, discourses about the future are not mere speculation, they are performative: they have real effects in the present because actions in the present are made legitimate through promises about the future (Brown 2003). If a technology is expected to succeed, people will invest (time, energy, political support, finances) into it, meaning it is more likely to succeed. Expectations also attach hopes and concerns to new technologies, in this way embedding specific roles for different actors, and thus influencing what the technology becomes.
The paper goes on to talk about the power landscape in the interdisciplinary work, how presumed public concerns were a dominant topic ... thought of as a roadblock for innovation and business, an obstacle to be surmounted. Also - synthetic biology became reified as a technology that will necessarily deliver promised (economic) goods, as long as it is given appropriate support. 

But why are these framings so entrenched? Even when alternative arguments are put forward and appear to be heard, why do they seem to have no lasting effects? We think this resistance to change is built upon four interlocked layers of assumptions about relationships between science and society that reinforce one another in a cumulative manner like the layers of an onion. These layers are (1) the ELSI model of social scientific engagement, (2) the technocratic model of risk, (3) the deficit model of public understanding of science, and (4) the linear model of innovation. Each of these layers of assumptions acts to push the “social” outside of the realm of the “scientific,” and all of them were at work in the Synthetic Biology Roadmap. Addressing one set of assumptions alone can only scratch the surface because each layer builds on the others.

It would be great to see today's internet tech issues being subject to similarly reflective roadmapping and policy discussion.

Even people who I think of as keen on innovation seem to be changing their views. For example, Ben Thompson at Stratechery, as part of a review of Google's announcements, says:
many of the problems with YouTube, for example, stem from The Pollyannish Assumption that treats technology as an inherent good instead of an amoral force that makes everything — both positive outcomes and negative ones — easier and more efficient to achieve.

Brett Frischmann came to Cambridge and spoke about his book - written with Evan Selinger - Re-engineering Humanity.  This was a good excuse to read it in advance, and I was impressed with the diversity of examples showing different issues where tech is affecting us, and the various perspectives on each.  I often seem to need a reference for tech folks who simply can't see any problems with various aspects of social technology and a connected world, and this feels like a great overview for them - clear, not too academic, lots of digestible nuggets. We tweeted key points as best we could through a lively presentation (even if we didn't end with Rage Against The Machine's Wake Up blasting out, as the speaker would have wished!).

What if people are becoming more like machines, just as machines are designed to be more like people?


A good talk, for me, leaves you with some ideas, and Brett managed this well.

On to small bits and pieces.

The best article I've seen explaining how misinformation arrives and propagates online - a lovely long piece by danah boyd. She covers the vulnerabilities of search; the way people can be swept up into strange views of the world not by fake news but by content seeding doubts; social networks as a fabric, not captured by the 'social graph' of tech platforms; and tech's focus on generalisation and abstraction as a problem when it comes to communicating information. Fascinating.

The FT's Lex discusses Extinction Rebellion. The paper has had some great coverage of XR; this snippet seemed to capture why the movement matters -
Local dissent can help dispel the collective action problem that discourages vote-hungry politicians from acting. Transition to a low-carbon economy might then occur more smoothly, less chaotically and at lower cost to everyone.
Tom Coates's thread on social media and how we use it.

An anarchist HCI (human-computer interaction). I agree with the assertion that HCI lacks a political position - at least, viewed from outside HCI at present, that is my impression. There's always an interesting question when helping startups that are seeking to be ethical (or, as I prefer to think about it, responsible) - they want to do the right thing, but often realise that this requires them to set out a politics, and they really don't want to do that. So it's good to have a (counter-?) example.

An essay about Medium.com and typography.  Whatever you think of Medium, it's entertaining to see an example of typical Silicon Valley puff about a web service picked apart.
When I originally wrote this piece, I didn’t understand the motivations of Mr. Williams. It seemed to me that if you were a billionaire who really wanted to “move thinking forward,” you’d have so many wonderful options—fund scholarships, endow professorships, start research programs, open nonprofit organizations. Yet Mr. Williams chose to sidestep all those possibilities. Instead, he created another tech startup. The billionaire’s typewriter.
I was amused by this -
As someone who had a good run in the tech world, I buy the theory that the main reason successful tech founders start another company is to find out if they were smart or merely lucky the first time. Of course, the smart already know they were also lucky, so further evidence is unnecessary. It’s only the lucky who want proof they were smart.
From Warren Ellis's newsletter:
And let me tell you, I am (re)building a hell of a film and tv library over here, and so much of it isn't available on streaming.  The days of "well, everything's going to be on the internet, right?" are long gone.
A long personal narrative of living through the acceleration of the internet from the 1990s until now, seeing tech go from full of potential to everywhere (and with mixed outcomes).

Technical debt is nodded to in this XKCD. Talking with Luke Church, we discussed the technical debt incurred when a technical person volunteers to help someone or some organisation. By setting up a website, or a social media account, or installing some software, or introducing a tool, however helpful the intent or near-term effect, we are creating a liability for the person we help. They will have to maintain the website or account, update the software and figure out how to use the new interface, or pay people to assist with these things. Volunteered technical support is the same as open source software - "free as in puppy". We rarely, if ever, mention this when we offer help.

Via Ben Laurie, I spotted
https://twitter.com/noahsussman/status/1117919975350648832

"As someone who seems to have had to salvage more than my fair share of projects tech debt seems like a catch all term to avoid discussing at any greater depth in both directions of the reporting chain."
 -- https://twitter.com/garrybodsworth/status/1114988770464620550

Via Adrian McEwen, an article about free software and open source, and the differences between them in terms of production and consumption. Open source is also a feature of Trustable.io, which is exploring models for certification and possibly insurance for software, to drive up standards and trustworthiness. It will be interesting to see how this develops.

Chris Moller gave a nice talk for IET Cambridge about sustainable electronics, the reasons we don't have much sustainable design today, and the tough choices engineers face in designing for sustainability.

First of all, the context in which products are made doesn't drive design for sustainability.

Let's assume there's a business model to support sustainable design. What do you do? Well, some things are out of your control. The chips you choose in today's design will probably only be made for two years, so after that they may become unobtainable (or very costly). (Someone mentioned one particular vintage computer fetching great prices on eBay, because some critical American infrastructure still relies on a component from it, and they try to keep spares in stock.) If the chip isn't made any more, you might have to update your design anyway.

Ultimately, though, it's a tough choice for the individual or team concerned.

Chris did offer some concrete advice, though. Use common parts, not things only available from one source. Take up all-time-buy offers if you must use a single source. Provide a service manual. Reclaim broken or unused items from the field if you can. Not everyone can provide all the information a repairer might want, because of proprietary information or company policy; but the components most likely to fail might be in the power system, so even if you can't share much else, a block diagram of the power components which might go bang would be useful for future repairers.

A good talk, and definitely a useful introduction for people who perhaps haven't thought about why so few modern products are sustainable or repairable, and a useful contribution to the growing maintenance movement.

We've been announcing speakers for the Festival of Maintenance - some really interesting people - and tickets for 28th September are selling nicely.