Weeknotes: ethics and social organisation, leeway, open source

Why you probably don't need blockchain for your use case - a nice summary, highlighting the critical point that the physical world, even with sensors, is not as tamper-proof as blockchains.

An FT (paywall) article suggesting "data dividends" as a potential policy tool in California makes me think I might need to actually read Radical Markets properly to understand the data-related ideas, rather than just the highlights from Glen Weyl's talk last year.

I enjoyed the Internet Society's report on the state of consolidation in the internet - well, the executive summary anyway, which is a genuinely useful summary of the underlying architectural issues (actual topology, and deep dependencies) so often neglected in the (many) responses to consumer-level concerns today.
 ...interoperability, and standard development and deployment are increasingly becoming a function of scale. In this case, open, collaborative, and interoperable Internet is influenced by a small number of large companies, and organisational scale and market share play a significant role in the development and deployment of the open technical standards on which the Internet depends.
A blast from the past this week bumping into Jonathan Gray, formerly an Open Knowledge colleague, at King's College London, where Stefaan Verhulst (another familiar face!) was talking about his work on data collaboratives. Data collaboratives are a range of forms of public/private partnership which allow private datasets to be used for public good purposes. It's an interesting idea, making the most of information we have in the world, both on a voluntary basis and potentially via policy mandate. Access to raw data is also important for reproducibility of research, and for quality control of insight products (often used by government and the public sector).

Stefaan touched on a couple of other interesting ideas. One was the apparent slow demise of think tanks, and the growth in corporate-controlled 'insight' entities, eg JP Morgan Chase Institute. Whilst it's nice that this Institute has access to Chase bank data, and so can research things like the real earnings of Uber drivers, you get less independent analysis. Another was the point that the history of corporations, in the US at least, had origins in creating a way for risks to be taken for public benefit - hence the "limited liability" in "LLC".

All of this encourages us to think about secondary uses of data, too, as the concept of 'consent' for specific uses looks more and more unhelpful. Maybe we actually need to use data - even if it was not originally collected for that purpose - in aggregate forms to tackle the big, wicked, interdependent problems we face around climate and so on. Stefaan noted that whilst much debate about data use focusses on personal data and (although I've not come across this so much myself) satellite imagery, there are loads of other data types which could be really useful. A brewery in South Africa, for instance, will hold private data on road conditions (from distribution trucks), water usage, and maybe even community health (via sales figures), which could be very useful for public services there.


A thoughtful piece from M Feldstein on hype in educational technologies, what has stuck, and what has gone away. A good reminder that hype cycles do pass, in time.
One could argue that we hit peak ed tech hype in 2012. The Year of the MOOC. Remember how there were only going to be 10 universities in the world, and only one lecture for every subject, given by the very best lecturer in the world? Remember how everyone was going to get a Stanford education for free?
Yeah. Good times.
..... Mostly, people seem to be approaching all of these things—learning analytics, adaptive learning, OER, inclusive access, etc.—with a little more sobriety. These developments are all getting attention, but not a lot of hype (though not always for lack of trying). The general attitude among educators and institutions seems to be more like, "Huh. So that's a thing now. Good to know. What can I do with it?"
A useful reminder, indirectly via Bill Janeway, of how Silicon Valley has not always been how it is now.
https://twitter.com/Dylan_Bftn/status/1067371896650588160  


There's a new whistleblowing platform for tech workers.

Susan Silbey at MIT writes about the stories we tell about tech ethics (HT Nathan Matias). One thing I took away from this was the historical validation of how personal ethics (and the focus on training in this as response to technology and business crises) mean little in the face of the wider context, particularly of social organisation.
We might begin by noting that crises of corporate and professional responsibility have been endemic to American society, at least since the last quarter of the nineteenth century. With each chapter of professional misconduct – from the robber barons and the Teapot Dome scandals, through the progressive era up through Watergate, Iran Contra, the financial crisis of 2008, and to the recent epidemic of research scandals in political science and psychology – the response has been the same: calls for education in ethical responsibilities, and specifically training in ethics as part of professional education. 
This cycle of scandal and responsive calls for better training has been so often repeated that one can be surprised only by the paucity of models for providing that education. The standard model – required in law and medical schools now leaking into engineering and computer science programs with minor variations – teaches ethics as problems in individual decision-making, personal values, and choices. Training focuses on formalized rules of professional conduct, punctuated by appeals for social responsibility. It has not proved to be a successful regimen, if the repeated cycles of corporate and professional misconduct are any gauge. 
Such standard models fail because the diagnosis and cure share a basic misconception: that corporate and professional misconduct are problems caused by rotten apples; some few weak, uninformed, or misguided individuals making independently poor choices.  
.... From this perspective, we might think of the task of ethics education as socio-cultural analysis, and preparation for a career as a scientist or engineer as requiring lessons in history, organization, cultural exploration, and management. 
... Preparation for a career in science, for example, might include attention to the organization of laboratories (including perhaps how they have changed over time), the incentives and pitfalls of different forms of funding (including the differences between grants and contracts), as well as the role of gender in both local group and external professional activities. 
The whole article is great.


I also enjoyed this 2015 article by Stefan Czerniawski about the concept and importance of leeway and the risks around losing it through computerisation. (Highlights mine)
The best rules are simple and explicit. The best way of applying rules is with an element of judgement about the context. Computers (and, for different reasons, bureaucrats) are good at the first part, rather less so at the second. Computerised bureaucrats (who can be found far beyond the public sector) are a case of their own. So there is a dilemma. We can try to create a system which is perfectly rule bound, where total fairness is ensured by the complete absence of discretion – but that complete fairness is almost certain to look (and be) unfair in a whole range of difficult edge cases. Or we can try to create a system based on the application of principles and judgement, where fairness is ensured by tailoring decisions to precise circumstances – but that fairness is almost certain to result in similar cases getting dissimilar outcomes. That dilemma does not just apply at the level of individual entitlements and obligations. It – or something very like it – also applies in broader collective decision making. We demand that service provision should be tailored to local needs and circumstances but decry the postcode lottery. 
... Computerisation tends to make all this worse, for two big reasons. The first is that humans become interface devices not autonomous agents, not able to offer leeway even if they want to (indeed, preventing them from doing so may be part of the point). That’s not limited to government, of course, as anybody who has done battle over a mobile phone contract or a dodgy gas bill knows. The second is that computerised rules need to be computable. Binary conditions are easier to code than fine assessments. More subtly, the act of computerisation can be a prompt to ‘simplify’ systems in ways which risk creating much cruder boundaries, so exacerbating the first problem.
So we come back to leeway, being careful to follow Weinberger’s approach to what it does and doesn’t mean. Leeway doesn’t mean that there are no rules or that some people are entitled to ignore the rules, it means that at the margin it may be more important to respect the spirit of a rule than the letter.
 
Bureaucracies often have something that computers do not: logical escape valves. When the inevitable cases arise that break the logic of the bureaucratic machine, these escape valves can provide crucial relief from its heartless and implacable nature. Every voicemail system needs the option to press zero. 
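Czerniawski's contrast between binary conditions and fine assessments, and the "press zero" escape valve, can be sketched in a few lines of Python. This is purely a toy illustration: the `Claim` type, the threshold, and the referral margin are all invented for the example, not taken from any real system.

```python
from dataclasses import dataclass

@dataclass
class Claim:
    income: int   # monthly income in pence (hypothetical unit)
    notes: str = ""  # free-text context only a human can weigh

THRESHOLD = 100_000  # invented eligibility cut-off

def decide(claim: Claim) -> str:
    """The strict binary rule: perfectly consistent, crude at the margin."""
    return "approve" if claim.income < THRESHOLD else "reject"

def decide_with_escape_valve(claim: Claim) -> str:
    """The same rule, but borderline or unusual cases are referred to a
    human reviewer - the software equivalent of 'press zero'."""
    near_the_boundary = abs(claim.income - THRESHOLD) < 5_000
    if near_the_boundary or claim.notes:
        return "refer-to-human"
    return decide(claim)
```

The point of the sketch is that leeway doesn't mean abandoning the rule: `decide_with_escape_valve` still applies it to clear-cut cases, and only routes the awkward edge cases - exactly the ones where the letter and the spirit of the rule may diverge - out of the machine.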

An impressive map showing, supposedly, the flight paths of an eagle over 20 years, was going to appear here, but it turned out to be inaccurate - most likely showing a range of paths of different eagles. Check out some science on eagles showing individual paths instead, and still be impressed by how far they range.

A super post from my Doteveryone colleague Lydia Nicholas about things to think about when you are working with users/communities who can't just rock up and spend time at your workshops. Lydia is working on our Better Care Systems project. Actually considering participants who are not like you is something too many events fail to do - I notice this especially when academic and industry events assume people from small NGOs and charities have unlimited time to contribute, and don't realise that often if you are funded via restricted grants, you are literally not paid to do some stuff.

Cassie's interview with Anna Laycock from the Finance Innovation Lab touched on some important aspects of trying to change complex systems; I remember trying to manage the breadth and depth and community energy at Open Knowledge, and how often our stated value of "pragmatic not fanatic" was useful.
... challenge: balancing ambition and opportunity with the realities of our limited capacity. There’s a need for our work everywhere in the financial system — in start-ups, in established organisations, in policymaking and regulation, in campaigning, and beyond — and we know how powerful it can be in catalysing change, but we simply can’t be everywhere at once. Patience is one of the hardest, but most important, aspects of our work.....
We have to do it all with integrity and optimism, to be both a beacon of hope and a bearer of truth.
 

I'm looking forward to watching the recording of Allison Randal's Turing lecture about open source influences on technology innovation, when it's available. Jon Crowcroft was there and posted the questions he would have liked to ask.

One is about repair - and the Restart Project's insights show the need for openness to enable better sustainability:
1/ as well as open source software, we're seeing open hardware - not just processors (risc V) but peripherals, but also affordable 3D printing means even things like electric guitars - so the maker community that does a lot of this stuff (c.f. Cory Doctorow's books:) maybe exemplify the open collaborative ecosystems even more than coders, no? and they get to have a really good story about sustainability (repairing stuff is so much better than replacement).
One is about machine learning and sustainability. I'd like to know more about this.
2/ Sustainability - so machine learning (particularly deep learning) appears to be badly unsustainable in terms of compute resource training takes - this argues strongly for sharing trained classifiers - perhaps a carbon tax on neural networks could be turned into an incentive (carbon trading for AIs)
And one is about when open goes wrong.
3/ some open ideas have horrible consequences - simple things like pagerank (which made google's search very hard to game compared to predecessors like Altavista) led to clickthrough which led to two-sided markets which led to surveillance capitalism. H-Index, which is supposed to replace publish-or-perish with citation count weight as a measure of quality, just leads to citation gaming. And so on - can we encourage replacements from the open source community, please, asap.
UNESCO are working on preserving software source code - the Paris Call is an effort to raise awareness and increase access to code.
In today’s world digital technology has become for many an essential tool for social existence, communication, creation, sharing, and is increasingly indispensable for accessing public services. However, the role of software development is still largely underrated, as is the recognition of software source code as an intellectual effort and as the receptacle and expression of part of our knowledge.
That is why it is crucial to work towards preserving the technological and scientific knowledge embodied in software source code.
.....
challenges include the importance of raising awareness among decision-makers, and the recognition of software creators as well as of the contribution of women and minorities to digital innovation and software. ....
These efforts, however, are just starting. It is our collective responsibility and we all must ensure that the knowledge accumulated – and constantly being generated – is not lost.
Code is being collected on the Software Heritage site.


Excerpt of data from https://www.softwareheritage.org/



And finally, from Wondermark - is it just an elaborate ruse to redistribute venture capital to small businesses? I kind of like this idea of subverted marketing.

These weeknotes are actually fortnightnotes again, and in fact I'm trying something different - taking two larger topics out of here into separate posts. In retrospect I should have written one of these last week, as all the content has been building up and making me procrastinate.