Brains trust: notes from my session at UK Healthcamp

A couple of Saturdays ago, still buzzing from a week of NHS website and service manual launches, and the NHS Expo, I took part in my first UK Healthcamp.

I learned loads, put faces to names I’d long followed from afar, and posed a question of my own to a windowless basement room full of thoughtful healthcampers: “What do people need to be able to trust a digital health service?”

Trust session at UK Healthcamp

It’s a question I’ve been thinking about a lot, because the fifth of our new NHS design principles is “Design for trust”.

I ran the session as a loose variant of the 1-2-4-all liberating structure, asking people to think about the question first as individuals, then in growing groups. The format was a great way of eliciting contributions from everyone in the room, then distilling them down to some common themes.

At the end of the session, I left the room with a stack of sticky notes on which I had scribbled the key themes as groups reported back. Below is a summary with my own grouping and interpretation of the themes after the event.

The weighing of trust starts before we use a service, as we evaluate it to see if it’s going to meet our needs.

A few groups in the session talked about relevance: will it help me achieve what I need to do? To be relevant, a digital service will likely have to be part of an end-to-end journey, quite possibly including both digital and non-digital elements. Even in this digital world, having an offline presence is one of the things that can give a service credibility.

Once we believe a service might be useful, the next question, not far behind, is “has it been tested – for safety, practicality, and effectiveness?”

We trust things that come recommended by people we trust. Reputation matters, especially when expressed through peer recommendation. We make decisions about services in a web of relationships; a service will be more trusted if it is “culturally embedded”. In the context of British healthcare, there’s nothing more culturally embedded than the NHS.

To earn trust fully, there are things a service has to demonstrate in use.

Is it confidential? People set high standards for data protection, security and privacy. A service shouldn’t collect data it doesn’t need, and must be totally anonymous when you need it to be.

Is it personal? Provided confidentiality is assured, one of the ways a service can gain credibility is by showing information that only it should know. While anonymity is sometimes necessary, so too can be personalisation.

Is it transparent? Transparency of intent and clarity of operation are essential for any digital health service. Why is it asking me this? How did it get to that answer? Is it clear what I’m consenting to?

Is it professional? The boring qualities of stability, reliability and consistency should not be underrated. If they go missing, trust in a service will be rapidly undermined.

Finally, there’s a quality of continuous improvement. Without this any trust gained is likely to be short-lived. Does the service take feedback? Is it accountable for its actions? Can you see in its present state the traces of past user feedback? “You said… we did…”

Those were the combined ideas of a self-selecting group one Saturday in Manchester. But tell me: what would you add? What would you need to be able to trust a digital health service?


AI, black boxes, and designerly machines

On my holiday, I started reading into some topics I ought to know more about: artificial intelligence, genomics, healthcare, and the fast approaching intersection of the above. Here follow some half-baked reckons for your critical appraisal. Please tell me what’s worth digging into more. Also where I’m wrong and what I might be missing.

1. Opening the black box

large ribosomal subunit (50S) of Haloarcula marismortui, facing the 30S subunit. The ribosomal proteins are shown in blue, the rRNA in ochre, the active site (A 2486) in red. Data were taken from PDB: 3CC2, rendered with PyMOL.
By Yikrazuul CC BY-SA 3.0, from Wikimedia Commons

Reading Siddhartha Mukherjee’s ‘The Gene: An Intimate History’, I discovered the amazing trajectory of human understanding of DNA, RNA, enzymes, proteins, the genome, and the mechanisms by which they interact. There’s no doubt that this stuff will transform – is already transforming – our relationships with medicine. Crucially this generation of scientists are looking inside a black box, where their predecessors could observe its effects but not its inner workings.

At the same time, fuelled by petabytes of readily available data to digest, computer science risks going the other way in the framing of artificial intelligences: moving from explicable, simple systems to ones where it’s allowed to say, “this stuff is so complex that we don’t know how it works. You have to take it on trust.”

When we apply artificial intelligence (AI) to healthcare, transparency is essential; black boxes must be considered harmful.

It’s not just me saying this. Here are the words of the Institute of Electrical and Electronics Engineers (IEEE):

“Software engineers should employ black-box software services or components only with extraordinary caution and ethical care, as they tend to produce results that cannot be fully inspected, validated or justified by ordinary means, and thus increase the risk of undetected or unforeseen errors, biases and harms.” — Ethics of Autonomous & Intelligent Systems [PDF]

Transparency must be the order of the day. It comes in (at least) two flavours: the first is clear intent; the second, understandable operation. Both are under threat, and designers have a vital role to play in saving them.

2. The opacity of intent

It’s a commonplace to say that technology is not neutral. I won’t labour that point here because Sara Wachter-Boettcher, Ellen Broad and others do a good job of highlighting how bias becomes embedded, “AI-washed” into seemingly impartial algorithms. As the title of Ellen’s wonderful book has it, AI is ‘Made By Humans’.

That doesn’t seem to stop stock definitions from attempting to wall off AI beyond the purview of human control:

“In computer science, AI research is defined as the study of ‘intelligent agents’: any device that perceives its environment and takes actions that maximise its chance of successfully achieving its goals.” — Wikipedia

But what goals exactly? And how did the AI get them? The Wikipedia definition is silent about how goals are set because, in the words of Professor Margaret Boden, “the computer couldn’t care less.”

“…computers don’t have goals of their own. The fact that a computer is following any goals at all can always be explained with reference to the goals of some human agent. (That’s why responsibility for the actions of AI systems lies with their users, manufacturers and/or retailers – not with the systems themselves.)” — Robot says: Whatever
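Boden’s point can be made concrete in a few lines of code. The sketch below is purely illustrative, with an invented triage scenario and invented names: even a textbook “intelligent agent” loop does nothing except maximise a goal function that a human wrote and handed to it.

```python
from typing import Callable, List

def run_agent(observe: Callable[[], dict],
              actions: List[str],
              goal: Callable[[dict, str], float],
              steps: int = 5) -> List[str]:
    """Textbook 'intelligent agent': perceive the environment, then take the
    action that maximises the goal it was given. It never chooses that goal."""
    chosen = []
    for _ in range(steps):
        state = observe()
        chosen.append(max(actions, key=lambda action: goal(state, action)))
    return chosen

# The goal arrives from outside the loop -- a person wrote it down:
def triage_goal(state: dict, action: str) -> float:
    # Hypothetical objective: prefer escalation once someone has waited over an hour.
    return 1.0 if action == "escalate" and state["wait_minutes"] > 60 else 0.0

picks = run_agent(observe=lambda: {"wait_minutes": 75},
                  actions=["wait", "escalate"],
                  goal=triage_goal)
print(picks)  # ['escalate', 'escalate', 'escalate', 'escalate', 'escalate']
```

Responsibility for what that loop does rests with whoever supplied triage_goal, not with the loop itself.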

When any technology moves from pure to applied science, intent must be centre stage. If we fixate too much on the computer science of AI, and not enough on the context of its application, intent will always be unintentionally obscured.

Many discussions about the “ethics” of AI or genomics are really, I think, discussions about the opacity of intent. If we don’t know who’s setting the goals for the machine, or how those goals are derived, how can we know if the intent is good or bad?

Moreover, true human intent may be difficult to encode. In a domain as complex as health and care, intent is rarely straightforward. It can be changing, conflicting and challenging to untangle:

  • a boy was triaged on first contact as being in less urgent need, but has suddenly taken a turn for the worse
  • an elderly woman wants to get home from hospital, but her doctors need first to be sure she’ll be safe there
  • the parents want to help their children lose weight, but know that pester power always leads them back to the burger chain.

In these situations, even Moore’s Law is no match for empathy, and actual human care.

3. Designers to the rescue

Design, in Jared Spool’s wonderfully economical definition, is “the rendering of intent.” Intent without rendering gives us a strategy but cannot make it real. Rendering without intent may be fun – may even be fine art – but is, by definition, ineffective.

It’s time for designers to double down on intent, and – let’s be honest – this is not an area where design has always covered itself in glory.

We know what design without intent looks like, right? It’s an endless scroll of screenshots presented without context – the Dribbblisation of design.  If you think that was bad, just wait for the Dribbblisation of AI. Or the Dribbblisation of genomics. (“Check out my cool CRISPR hacks gallery, LOL!”)

Thoughtful designers on the other hand can bust their way out of any black box. Even if they’re only called in to work on a small part of a process, they make it their business to understand the situation holistically, from the user’s point of view, and that of the organisation.

Design comes in many specialisms, but experienced designers are confident moving up and down the stack – through graphic design, interaction design and service design problem spaces. Should we point an AI agent at optimising the colour of the “book now” buttons? Or address the capacity bottlenecks in our systems that make appointments hard to find?

One of my team recently talked me through a massive service map they had on their wall. We discussed the complexity in the back-end processes, the push and pull of factors that affected the system. Then they pointed at a particular step of the process: “That’s the point where we could use machine learning, to help clinicians be confident they’re making a good recommendation.” Only by framing the whole service could they narrow in on a goal that had value to users and could be usefully delegated to AI.

4. How do you know? Show your thinking.

School exam paper

Crucially, designers are well placed to show the workings of their own (and others’) processes, in a way that proponents of black box AI never will.

This is my second flavour of transparency, clarity of operation.

How might we:

  • communicate probabilities and uncertainties to help someone decide what to do about their predisposition to a form of cancer?
  • show someone exactly how their personal data can be used in research to develop a new treatment?
  • involve people waiting for treatment in the co-design of a fair process for prioritisation?

In a world of risks and probabilities, not black and white answers, we should look for design patterns and affordances that support people’s understanding and help them take real, fully informed, control of the technologies on offer.

This is not an optional extra. It’s a vital part of the bond of trust on which our public service depends.
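Take the first of those “how might we” questions. One pattern worth prototyping, sketched here with invented figures rather than clinical data: express a probability as a natural frequency, which many people find easier to weigh than a bare percentage.

```python
def as_natural_frequency(probability: float, out_of: int = 100) -> str:
    """Turn a probability such as 0.03 into 'about 3 in 100 people like you',
    a common risk-communication pattern that avoids false precision."""
    count = round(probability * out_of)
    if count == 0:
        return f"fewer than 1 in {out_of} people like you"
    return f"about {count} in {out_of} people like you"

# Invented figures, for illustration only -- not clinical guidance:
print(as_natural_frequency(0.03))   # about 3 in 100 people like you
print(as_natural_frequency(0.004))  # fewer than 1 in 100 people like you
```

The value isn’t in this particular wording; it’s in letting the design pattern, not the user, absorb the work of making a probability meaningful.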

5. Designerly machines

Applying fifty iterations of DeepDream, the network having been trained to perceive dogs – CC0 MartinThoma

The cultural ascendancy of AI poses both a threat and an opportunity to human-centred design. It moves computers into territory where designers should already be strong: exploration and iteration.

I’m critically optimistic because many features of AI processes look uncannily like a repackaging of classic design technique. These are designerly machines.

Dabbers ready, eyes down…

  • Finding patterns in a mass of messy data? Check!
  • Learning from experiments over many iterations? Check!
  • Sifting competing options according to emerging heuristics? House!

Some diagrams explaining AI processes even resemble mangled re-imaginings of the divergent/convergent pattern in the Design Council’s famous double diamond.

Diagram showing how design moves from problem to solution in four stages, shown as one diamond after another. There are two pairs of divergence and convergence: Discover and Define, Develop and Deliver
© Design Council 2014 – https://www.designcouncil.org.uk/news-opinion/design-process-what-double-diamond

A diagram outlining a forward pass through three 3D generative systems; data diverges and then converges
“A diagram outlining a forward pass though our three 3D generative systems.” – Improved Adversarial Systems for 3D Object Generation and Reconstruction [PDF]
The threat is that black box AI methods are seen as a substitute for intentional design processes. I’ve heard it suggested that AI could be used to help people navigate a complex website. But if the site’s underlying information architecture is broken, then an intelligent agent will surely just learn the experience of being lost. (Repeat after me: “No AI until we’ve fixed the IA!”)

The opportunity is to pair the machines with designers in the service of better, faster, clearer, more human-centred exploration and iteration.

Increased chatter about AI will bring new, more design-like metaphors of rendering that designers should embrace. We should talk more about our processes for discovering and framing problems, generating possible solutions and whittling them down with prototypes and iteration. As a profession, we have a great story to tell.
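Here is that generate-and-whittle rhythm written down as code: a toy random search, standing in for no particular AI technique, which fans out lots of candidate options, whittles them down against a score, and goes round again.

```python
import random

def design_loop(score, mutate, seed, generations=10, fan_out=20, keep=3):
    """Toy divergent/convergent loop: each generation fans out many candidate
    options (diverge), then whittles them down to the best few (converge)."""
    shortlist = [seed]
    for _ in range(generations):
        candidates = [mutate(s) for s in shortlist for _ in range(fan_out)]  # diverge
        shortlist = sorted(candidates, key=score, reverse=True)[:keep]       # converge
    return shortlist[0]

# Stand-in brief: get close to a target number. A real brief would be far messier.
target = 42
best = design_loop(score=lambda x: -abs(x - target),
                   mutate=lambda x: x + random.uniform(-5, 5),
                   seed=0.0)
print(round(best, 1))  # lands close to 42 after a few generations
```

Squint and it’s the double diamond drawn as a for-loop; the interesting design work is in choosing the score and the starting point.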

A resurgent interest in biology, evolution and inheritance might also open up space for conversations about how design solutions evolve in context. Genetic organism, intelligent software agent, or complex public service – we’re all entangled in sociotechnical systems now.

What do Wardley maps really map? A settler writes

On the last day of Foocamp 2011, after a whirlwind of other fascinating conversations, Edd Dumbill introduced me to the business strategist and researcher Simon Wardley. Over a tasty Californian street food lunch Simon proceeded to draw me a literal back-of-a-napkin sketch of his “pioneers, settlers, town planners” model.

I was intrigued because this tripartite structure seemed to mirror my own experience at Orange/France Telecom Group, in a division dedicated to the “industrialisation” of solutions pushed by the company’s powerful research and development division. At the end of our chat, Simon took my business card; on it he wrote, “Settler”.

Ever since, I’ve followed Simon’s writing and, more recently, the well-deserved success of his mapping technique as a way for large organisations to acquire some semblance of situational awareness. You should check out his highly readable and enlightening series of posts. In fact, the rest of this post will only make sense if you have at least a passing familiarity with Simon’s model. But there are some aspects of the model that bother me, and this long-overdue post is to share them with you.

I write from the perspective of a serial settler. I’m the one who moved from print to new media just as America Online was carpet-bombing the developed world with connection CDs. I joined a mobile operator the year its customer base doubled thanks to first-time, pay-as-you-go phone buyers. I arrived at the Government Digital Service the week the first 24-department transition to GOV.UK was completed.

We settlers occupy a precarious yet privileged position. Simon’s other two archetypes can always reach for a handrail at one or other edge of the map. There’s always something so bleeding edge that only the pioneers geek out about it, and something so commoditised that only the town planners can get excited. But settlers are stuck in the middle, constantly jostled by both of the other tribes. I reckon this positions settlers well to see the others’ points of view, as well as to appreciate the pitfalls of the model.

My first big lesson is this: do not structure your large organisation by pioneers, settlers and town planners.

I know because I’ve been there. In 2009-10, Orange Group wasted valuable months on a turf war over whether to treat the app store as a site of innovation, industrialisation or commodity – months that could have been spent learning about user needs for iPhone and Android apps. Rather than seek consensus about where on the map any given component sat, each group was incentivised to claim it for their own. This set up a permanent three-way tug-of-war between the tribes.

As anti-patterns go, it’s not much better than the rightly derided “bimodal” approach to managing technology. By all means recognise that the map has different kinds of context, with different attitudes required. But put all the attitudes into cross-functional teams – that way they have a fighting chance of being able to respond as one when the world changes.

My second insight is that evolution is complicated – much more so than you’d guess by seeing the simple x-axis of Simon’s maps. As I traced in my post on the three lives of the front-facing camera, actor-networks form and re-form; unlikely components gain visibility; others recede. Things flip from commodity to innovation and back again.

The use of the term “evolution” bothers me. It carries a strong implication of an inevitable, unidirectional process. Only with the benefit of hindsight can evolution be said to be a straight line, and that’s just a trick of perspective.

In the words of Michael Mullaney, who analysed 20 years of Gartner hype cycles, “we’re terrible at making predictions. Especially about the future.” Gaining a better grasp of the here and now – “situational awareness” – seems more useful. But by implying that things naturally move from left to right, mapping risks being mistaken for a tool with predictive power.

Figure 20 from Simon’s Medium post titled “Everything evolves”

It also troubles me that the axes on a Wardley map are not truly independent. There’s a clear correlation between visibility and commodification, which sees most maps take on a top-left to bottom-right drift. This raises some questions. Is there a causal link between the two axes, and if so in which direction? Or might there be a third factor at play?
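If you wanted to test that drift on a map of your own, the check fits in a few lines; the component names and coordinates below are invented purely for illustration.

```python
# Hypothetical component positions, each scored 0..1 on the two Wardley axes:
# (visibility to the user, stage of 'evolution' towards commodity)
components = {
    "online booking page": (0.95, 0.55),
    "appointment logic":   (0.60, 0.45),
    "patient record API":  (0.35, 0.70),
    "compute and storage": (0.10, 0.95),
}

def pearson(xs, ys):
    """Plain Pearson correlation, no libraries needed."""
    n = len(xs)
    mx, my = sum(xs) / n, sum(ys) / n
    cov = sum((x - mx) * (y - my) for x, y in zip(xs, ys))
    sx = sum((x - mx) ** 2 for x in xs) ** 0.5
    sy = sum((y - my) ** 2 for y in ys) ** 0.5
    return cov / (sx * sy)

visibility, evolution = zip(*components.values())
print(round(pearson(visibility, evolution), 2))  # strongly negative in this toy data
```

A strongly negative figure is just that top-left to bottom-right slope, written as a number.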

My experience of organisational dysfunction and my dissatisfaction with “evolution” as an axis combine into one big conclusion. Accept this conclusion and I think mapping can be a valuable practice.

Here goes: All maps are socially constructed. Wardley maps are therefore an artefact of social science, not (despite the Darwinian metaphor) a life science. The x-axis shows not evolution but level of consensus.

Exhibit: this table of the different characteristics at different stages. It’s a chart that can only have come from years of astute people-watching. Whether he knows it or not, Simon is an ethnographer par excellence.

Table showing characteristics and general properties of the different stages in Simon Wardley’s model

I say that as a good thing, not a criticism, but it has two important implications…

  1. the process of mapping is itself part of the social construction. The act of observing always changes the outcome
  2. a common manoeuvre to secure consensus is to create an illusion of objectivity, so maps contain the seeds of their own misinterpretation.

When we map, we are never disinterested observers. We all have agendas, and whether consciously or not, will use the mapping process to advance them. Elements only move from left to right on Simon’s maps because people move them! And not moving them, or moving them backwards is always an option.

Likewise, whether things are visible or invisible is often a matter of contention. People seeking prematurely to commoditise an element may claim that “users don’t care about x so we can treat it as a commodity.” For example, when it comes to matters like encryption, or where their data is held, user research shows that users don’t care – until one day suddenly they do.

As I was puzzling over this point a few weeks ago, Simon tweeted: “All maps are imperfect representations. Their value is in exposing assumptions, allowing challenge and creating consensus.”

That is true. But one could just as easily use maps to launder assumptions into facts, delegitimise challenge (and still create consensus). If I wanted to lie with a map, the implied inevitability of evolution would be very convenient to me.

“Evolution. What’s it like?” The three lives of the front-facing camera

“Evolution. What’s it like? So one day you’re a single-celled amoeba and then, whoosh! A fish, a frog, a lizard, a monkey, and, before you know it, an actress.
[On-screen caption: “Service limitations apply. See three.co.uk”]
I mean, look at phones. One, you had your wires. Two, mobile phones. And three, Three video mobile.
Now I can see who I’m talking to. I can now be where I want, when I want, even when I’m not. I can laugh, I can cry, I can look at life in a completely different way.
I don’t want to be a frog again. Do you?”

— Anna Friel, 3 UK launch advert, 2003

Today, in 2016, that ad feels so right, and yet so wrong. Of course phones have changed massively in the intervening decade-and-a-bit — just not how the telecoms marketeers of the early Noughties fantasised. In this post I want to trace what evolution of technology might really be like. I’ll do it by following the unstable twists and turns around one small element of the construct we now call a smartphone.

Something was missing from the Anna Friel commercial. All the way through, the director was at pains to avoid even the tiniest glimpse of something the audience was eager to see. You know, a phone. At the time I worked for Three’s competitor Orange whose brand rules also forbade the appearance of devices in marketing. The coyness was partly aesthetic: mobiles in those days were pig-ugly. Moreover, the operators had just paid £4 billion each for the right to run 3G networks in the UK. They wanted consumers to think of the phone as a means to an end, a mere conduit for telecommunications service, delivered over licensed spectrum.

To see a device in all its glory, we must turn to the manufacturer’s literature. Observe the product manual of the NEC e606, one of three models offered by Three at its launch on 3 March 2003:

NEC e606 product manual

Notice where a little starburst has been Photoshopped onto the otherwise strictly functional product shot? That’s the only tangible hint of the phone’s central feature, the thing that makes it worth buying despite being pricier and weightier than all the other matte grey clamshells on the market. By this point, loads of phones have digital cameras built in, but they are always on the back, facing away so the holder can use the tiny colour screen as a viewfinder. This is something different: a front-facing camera. It exists so that Anna Friel can be seen by the person she is talking to.

Let’s map* this network.


Loosely, the vertical axis answers the question “how much do users care about this thing?” The nearer the top, the more salient the concept. The horizontal concerns stability of the concept – the further to the right, the less controversial. But at this point the choice of nodes and the connections between them matters more to me than their precise placement. This forms an actor-network – a set of concepts that belong together, in at least one contested interpretation.

  • Phone calls are over on the top right, a very stable concept. Users understand what phone calls are for, know how to access them, and accept that they cost money.
  • If the operators can persuade users to add pictures, to see who they’re talking to, they have a reason to sell not just plain old telephony service but 3G, that thing they’ve just committed billions of pounds to building. Cue the front-facing camera.
  • Video calling and 3G cellular networks rely on each other, but both are challenged. Do users really need them? Will they work reliably enough to be a main selling point for the device? Whisper it softly, “service limitations apply”.
  • Because of this weakness, the assemblage is bolstered by a less glamorous but more stable concept – asynchronous video messaging. This at least can be delivered by the more reliable and widespread 2.5G cellular. Users don’t care much about this, but it’s an important distinction to our network.
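For what it’s worth, here is the same network jotted down as data, a sketch only: the scores are my rough guesses and the node names my own shorthand, but each concept carries a salience, a stability and its links to the rest of the assemblage.

```python
# Rough 2003 actor-network: salience = how much users care (0..1),
# stability = how settled the concept is (0..1). All figures are illustrative guesses.
network_2003 = {
    "phone call":          {"salience": 0.9, "stability": 0.9, "links": ["video calling"]},
    "front-facing camera": {"salience": 0.5, "stability": 0.3, "links": ["video calling", "video messaging"]},
    "video calling":       {"salience": 0.6, "stability": 0.2, "links": ["3G network", "front-facing camera"]},
    "3G network":          {"salience": 0.4, "stability": 0.2, "links": ["video calling"]},
    "video messaging":     {"salience": 0.2, "stability": 0.6, "links": ["2.5G network", "front-facing camera"]},
    "2.5G network":        {"salience": 0.2, "stability": 0.8, "links": ["video messaging"]},
}

# The interesting question is not the exact coordinates but which nodes and
# links survive when the network is redrawn a decade later.
contested = [name for name, node in network_2003.items() if node["stability"] < 0.5]
print(contested)  # the challenged parts of the 2003 assemblage
```

Nothing in that structure obliges the elements to drift rightwards; the later maps differ because the nodes and the links themselves get rewritten.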

What then remains for the telco executive of 2003 to do? Maybe just wait for the technology to “evolve”?

  • More 3G base stations will be built and the bandwidth will increase
  • Cameras and screens will improve in resolution
  • People will take to the idea of seeing who they’re talking to, if not on every call, then at least on ones that really matter.

All these things have come to pass. But could I draw the same network 10 years later with everything just a bit further over to the right? No, because networks come apart.

Nokia’s first 3G phone, the 6630, had no front-facing camera. Operators used their market muscle and subsidies to push phones capable of video calling. Yet many of the hit devices of the next few years didn’t bother with them. The first two versions of the Apple iPhone likewise. Even the iPhone 3G was missing a front-facing camera. Finally, in 2010, the operators had to swallow their pride and market an iPhone 4 with Apple’s exclusive FaceTime video calling service that ran only over unlicensed spectrum wifi.

This is the social construction of technology in action. Maybe evolution is a helpful metaphor, maybe not. Whatever we call it, this is the story of how, over the course of a decade, by their choices what to buy and what to do, users taught the technology sector what phones were for. Hint: it wasn’t video calling.

Just when we think the front-facing camera is out of the frame, it makes a surprising comeback. This time it’s not shackled to either video calls or mobile messaging. Instead it emerges as a tool of self-presentation in social media.

Some rights reserved – Ashraf Siddiqui

“Are you sick of reading about selfies?” asks an article in The Atlantic, announcing that selfies are now boring and thus finally interesting. “Are you tired of hearing about how those pictures you took of yourself on vacation last month are evidence of narcissism, but also maybe of empowerment, but also probably of the click-by-click erosion of Culture at Large?” “Indeed, for all its usage, the term — and more so the practice(s) — remain fundamentally ambiguous, fraught, and caught in a stubborn and morally loaded hype cycle.”

‘What Does the Selfie Say? Investigating a Global Phenomenon’, Theresa M. Senft and Nancy K. Baym

Time for another map.


  • By 2013, 3G (now also 4G) cellular mobile is no longer in doubt, but its salience to users is diminished. It is a bearer of last resort when wifi is not an option for accessing the Internet.
  • The lynchpin at the top right is not the phone call but social media, with its appetite for videos and photos. In their service we find the front-facing camera, though now rarely used for calling.
  • Only a fraction of selfies even leave the phone. Many of them are shared in person, in the moment, on the bright, HD screen. They are accumulated and enhanced with storage and processing powers that barely figured on the phones of 2003.

Call it evolution if you like, this total dissolution and reassembly of concepts.

We’re not done yet. Here’s another commercial for your consideration. One for the Samsung Galaxy S4 mapped above. Can you spot the third incarnation of the front-facing camera?

Man 1: “Hey, sorry I was just checking out your phone. That’s the Galaxy S4, right?”
Man 2: “Yeah, I just got it.”
Woman: “Did your video just pause on its own?”
Man 2: “Yeah it does it every time you look away from the screen.”
Man 1: “And that’s a big screen too.”
Man 2: “Yeah, HD.”
Man 3: “Is that the phone you answer by waving your hand over it?”
Man 2: “Yeah.”
Man 1: [waves hand over Man 2’s phone] “Am I doing it right?”
Man 2: “Someone has to call you first…”

Samsung Galaxy S4 TV advert, 2013

See how far a once-secure concept has fallen? The guy needs reminding (in jest at least) how phone calls work! Compared to the 3G launch video, this scene is more quotidian; the phone itself is present as an actor.

And what is the front-facing camera up to now? Playing stooge in the S4’s new party trick: the one where the processor decides for itself when to pause videos and answer calls. If the user never makes another video call or takes another selfie, it’ll still be there as the enabler of gesture control. Better add that to my map.


We used to think the phone had a front-facing camera so we could see each other. Then it became a mirror in which we could see ourselves. Now, it turns out, our phones will use it so they can observe us.

Maybe that’s what evolution is like.


* These maps are not Wardley value chain maps though I see much value in that technique. More on that in a later post.

And yet it moves! Digital and self-organising teams with a little help from Galileo

This summer, after a lovely 2 week holiday in Tuscany, I returned to Leeds and straight into a classroom full of government senior leaders discussing agile and user-centred design. Their challenges set me thinking once more about the relationship between technology and social relations in the world of work. One well-known story from the Italy of 400 years ago is helping me make sense of it all.

Galileo’s sketches of the moon

1. Magnification

Galileo Galilei did not invent the telescope but he greatly improved it, reaching more than 20x magnification and pointing it for the first time at the seemingly smooth celestial bodies of the night sky. In March 1610, he published drawings of the universe as never seen before. What seemed to the naked eye a handful of constellations appeared through Galileo’s telescope as thousands of teeming stars. He showed the moon pocked with craters, mountain ranges and plains. He used his observations and calculations of the planets to confirm a long-held but never proven conjecture: that the earth and other planets travel around the sun.

With its twin, the microscope, the telescope was a transformative technology of Galileo’s age, affording new ways of seeing things that people thought they already knew well. Our tools are the smartphone and the web. They too change how we see the world in many ways. Most of all they shed new light upon, and throw into relief, the detail of the social. Minutiae of conversations and interactions that used to occur fleetingly in private before disappearing into thin air can now be shared, stored and searched in previously unimaginable ways.

So let’s focus our gaze upon the world of work. (I am not the first to draw this parallel. Steve Denning writes eloquently about what he calls the “Copernican Revolution In Management”.) In a pre-digital era, organisations appeared to be made of smooth reporting lines, opaque meeting agendas and crisp minutes. Now the wrinkles and pits of communication and interaction are exposed in detail for all to see – every email, every message, every line of code.

Digital communications facilitate, magnify and expose people’s timeless habits of co-operation. These social phenomena are not new. It’s just that, until recently, indicators of productive informality were hidden from view. In the absence of evidence, we focused more attention, and founded our theories of management, on things that were immediately obvious: explicit hierarchies and formal plans.


Now by observing the details, we can confirm a long-held theory: that self-organisation is rife in the workplace. The new communications tools reveal…

  • the human voices of individuals and interactions in Slack groups, wikis and code repositories
  • the depth of customer collaboration in Twitter replies and support forums
  • the endless resourcefulness of teams responding to change in Trello boards and live product roadmaps.
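If you wanted to collect and share some of that evidence, the traces are already sitting in the tools. Here is a rough sketch that uses only a code repository’s commit history (point it at whichever repo you like): count how many different people have touched the codebase each month.

```python
import subprocess
from collections import defaultdict

def contributors_per_month(repo_path="."):
    """Count distinct commit authors per month from a git history --
    one crude trace of self-organisation in a code repository."""
    log = subprocess.run(
        ["git", "-C", repo_path, "log", "--pretty=format:%ad|%an", "--date=format:%Y-%m"],
        capture_output=True, text=True, check=True,
    ).stdout
    authors = defaultdict(set)
    for line in log.splitlines():
        month, author = line.split("|", 1)
        authors[month].add(author)
    return {month: len(people) for month, people in sorted(authors.items())}

if __name__ == "__main__":
    for month, count in contributors_per_month().items():
        print(month, count)
```

A Slack export or a wiki’s revision history would yield the same kind of picture.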


We should be careful not to over-claim for this shift. As a student of history and the social sciences, I am instinctively suspicious of any narrative which has human nature suddenly change its spots. I come to bury mumbo-jumbo, not to praise it. I reject the teal-coloured fantasy of Frederick Laloux’s “next stage of human consciousness.” More likely the behaviours Laloux identifies have always been with us, only hidden from view. Future generations may judge that we are living through a paradigm shift, but such things can only be confirmed after the fact.

2. Empiricism

The day after Galileo’s publication, the stars and planets carried on doing their thing, much as they had for the billions of days before. After all, heliocentrism was not even an original idea. Aristarchus of Samos had proposed it in the 3rd Century BC; Islamic scholars discussed it on and off throughout the middle ages; and Nicolaus Copernicus himself had revived it more than 20 years before Galileo was born. In one way, nothing had changed. In another, everything had changed. As with another famous experiment – dropping different objects from the Leaning Tower of Pisa to test the speed of falling bodies – Galileo was all about empiricism. He did not ask whether a proposition was more elegant to the mind’s eye or more convenient to the powerful. He designed tests to see whether it was true.

The Manifesto for Agile Software Development is itself an empirical text, founded in the real-world experiences of its authors. It begins (my emphasis): “We are uncovering better ways of developing software by doing it and helping others do it.” The authors set out four pairs of value statements in the form “this over that“, stressing “that while there is value in the items on the right, we value the items on the left more”.

We are uncovering better ways of developing software by doing it and helping others do it. Through this work we have come to value:

Individuals and interactions over processes and tools
Working software over comprehensive documentation
Customer collaboration over contract negotiation
Responding to change over following a plan

That is, while there is value in the items on the right, we value the items on the left more.

These were the values of 17 balding and bearded early Noughties software professionals who gathered at the Snowbird ski resort in Utah. It would be easy to mistake the manifesto for a creed – a set of assertions that true followers must accept as gospel. But they’re not that at all. This is not a religion. Empiricism says we have the power to see for ourselves.

In scores of learning and development sessions over the past couple of years, my associates and I have conducted a little experiment of our own. This is the method:

  • Without sharing the text of the manifesto, we hand out eight randomly ordered cards each showing a different value statement – “contract negotiation”, “working software”, “following a plan” and so on.
  • Then we ask participants to rank them in the order that they would value when delivering a service.
  • There are no right or wrong answers. We just want to understand what they value.

The result: 90% of the time the items on the left bubble to the top of the list – regardless of participants’ roles and experiences. Of course many project managers say they value “following a plan”, but most of them value “responding to change” more highly. I had a couple of contract managers on one course. They ranked the “contract negotiation” card pretty high up their list. But they put “customer collaboration” at the top.
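For anyone who wants to score the same exercise, here is the shape of the tally in code; the scoring is my own simplification of the card sort, not a validated instrument.

```python
# The four manifesto pairs: (valued more, valued less) in the original text.
PAIRS = [
    ("individuals and interactions", "processes and tools"),
    ("working software", "comprehensive documentation"),
    ("customer collaboration", "contract negotiation"),
    ("responding to change", "following a plan"),
]

def left_leaning(ranking):
    """Given one participant's ranking of the eight cards (most valued first),
    return how many of the four pairs have the left-hand item ranked higher."""
    position = {card: i for i, card in enumerate(ranking)}
    return sum(position[left] < position[right] for left, right in PAIRS)

# One hypothetical contract manager's ranking:
example = ["customer collaboration", "contract negotiation", "working software",
           "individuals and interactions", "responding to change",
           "comprehensive documentation", "processes and tools", "following a plan"]
print(left_leaning(example))  # 4 -- every left-hand value outranks its partner
```

The example mirrors those contract managers: “contract negotiation” near the top of the list, but “customer collaboration” above it.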

When people recall their best experiences at work, the things they describe are invariably the things on the left. For the ones who have been around big organisations for 20 years or more, they often speak in terms of “that’s how we used to do things” – before the so-called professionalisation of “information technology” tried to replace trust and teamwork with contracts and stage gates. For others there are more recent stories of emergencies and turnarounds when everyone pulled together around a common cause and just got stuff done in an amazingly productive, naturally iterative rhythm.

3. Reaction

From the time of Copernicus in the 1540s until Galileo’s work in the 1610s, Catholic Church leaders were mostly comfortable with heliocentricity. While Copernicus’ propositions remained “just a theory”, they were interesting but unthreatening. But Galileo’s evidence, his assertion of empiricism over the authority of Aristotelian ideas, provoked a backlash. They accused him of heresy and threatened him with torture until he solemnly recanted his view that the earth moved round the sun. This he did, though he allegedly muttered under his breath, “And yet it moves.”

That’s the thing about this set of propositions we call “agile”, or “lean”, or “post-agile” or whatever. Often we contrast these with something called “waterfall” as if these were equally valid, alternative ways of getting things done. I think that’s a mistake. They’re not things we pick and choose, any more than Galileo chose to make the earth travel round the sun. Agile and waterfall are alternative theories of how things get done – how things have always got done.

Digging a little into the history, it turns out that “waterfall” was never meant to be taken literally:

“Dr Winston Royce, the man who is often but mistakenly called the “father of waterfall” and the author of the seminal 1970 paper Managing the Development of Large Software Systems, apparently never intended for the waterfall caricature of his model to be anything but part of his paper’s academic discussion leading to another, more iterative version.” – Parallel Worlds: Agile and Waterfall Differences and Similarities

But when people feel threatened by new ideas, there’s a risk, as happened with astronomy, that they back further into their corner and end up espousing more extreme views than they would have held if left unchallenged.

Some who attribute their successes to top-down command-and-control management may fear they have a lot to lose from the growing evidence base for self-organisation. We need to find unthreatening ways to talk to the small group of people – in my experience less than 10% – for whom the values of the left-hand side do not spring naturally to the top of the list.

Coexistence is possible. Equivalence is not. Many religious believers, for example, manage to square their faith in a divine creator with the iterative circle of Darwinian evolution. What’s not credible though is a like-for-like, pick-and-mix approach to agile and waterfall. Nobody argues for evolution of the flea and creation of the elephant. Because one of these is an account that is based on empiricism, the other on an appeal to authority.

4. Conclusion

It took more than a century for the Catholic Church to overcome its aversion to heliocentrism. Meanwhile scientists in the Protestant world continued to circulate and build on Galileo’s findings. Remember Isaac Newton: “If I have seen further, it is by standing on the shoulders of giants.” The last books by Copernicus and Galileo were finally removed from the Church’s banned list in 1835.

If the last few years of domestic and international affairs have taught us anything, it should be that the arrow of progress can go backwards as well as forwards. Rightness and rationality can easily lose out to conflicting interests. If we believe there’s a better way, then it’s down to every one of us to model that better way, in how we work, and how we talk about our work. We can do this by:

  • working out loud to make our collaboration visible and legible
  • collecting and sharing evidence of self-organisation in action
  • resisting mumbo jumbo with simple, factual accounts of how we get stuff done
  • accepting coexistence with other theories but never false equivalence.

In 1992, Pope John Paul II expressed regret for how the Galileo affair was handled. But plans to put a statue of the astronomer in the grounds of the Vatican proved controversial, and were scrapped in 2009.

Technology enables variation


HT to Emma Bearman for tweeting me this Imperica article on Cedric Price.

It’s so important to see change as a thing people demand of technology, not, as often framed, the other way round.

“Technology enables variation” – that’s basically what I meant in appropriating John Ruskin’s term “changeful.”

dConstruct 2013: “It’s the Future. Take it.”

It puzzles me that technology so easily becomes the dominant metaphor for explaining society, and not the other way round. “Self-organise like nanobots into the middle,” exhorts dConstruct host Jeremy Keith as we assemble for the afternoon session at the Brighton Dome. We shuffle obligingly to make room for the latecomers, because everyone here accepts without question that nanobots really do self-organise, even if they’re so tiny we can’t see them with our puny, unaugmented eyes.

“It’s the Future. Take it.” Dan Williams mocks strident techno-determinism and refuses to take anything at face value: “I find the concept of wonder to be problematic.” Even Wenlock, the Olympic Mascot, conceals in plain sight a sinister surveillance camera eye, homage perhaps to London’s insouciant acceptance of closed-circuit television. Maybe we should “take it” like the CCTV filmmakers whose manifesto includes the use of subject access requests to wrest footage of themselves from surveillance authorities unaware of their role in an art phenomenon.

Other speakers also touched on this theme of acceptance – the ease with which we come to terms with new tools in the environment and extensions of the physical and mental self.

For cyborg anthropologist Amber Case “design completely counts.” Just contrast reactions to the in-your-face Google Glass and the “calm”, unobtrusive Memoto Lifelogging Camera. I love the history lesson too, starting with Steve Mann‘s 40lbs of hacked-together heads-up-display rig from 1981. This stuff is shape-shifting fast, from the 1950s mainframe to the “bigger on the inside”, Mary Poppins smartphones we’ve so readily come to rely on as extensions of the mental self.

Digital designer Luke Wroblewski seems more matter-of-factly interested in the quantity of change than in its qualitative implications. Designers who have struggled to cope with just one new interface, touch, now face up to 13 distinct input types. Luke’s our tour guide to a dizzying variety of input methods – each with its own quirks and affordances – from 9-axis motion orientation sensing to Samsung’s Smart Stay gaze detection to Siri’s role as a whole other “parallel interface layer”. No wonder, I reckon, that minimal “flat UI” is the order of the day. What with all these new interactions to figure out, designers simply lack the time and energy to spend on surface decoration.

Simone Rebaudengo imaginatively plays out the internet of things. He’s against a utilitarian future, and for one in which objects tease their way into their users’ affections. “Rather than demonstrating their buying power, people have to prove their keeping power.” He imagines a world in which toasters experience anxiety and addiction. People apply to look after them (though they can never be owned, only hosted) by answering questions of interest to the toasters. Hosts throw parties with copious sliced bread to make their toasters feel wanted. No, really. Simone has a unique and playful take on the service-dominant world. (I just wish he would stop calling things “products”. It’s so last century.)

However, conflict and repression are always nearby.

Nicole Sullivan presents a taxonomy of internet trolls: the jealous, the grammar Nazi, the biased, and the scary. Women in tech experience trolling far more and far worse than men. And we all need to challenge our biases. Fortunately there’s a handy online tool for that.

After watching ‘Hackers’ and ‘Ghost in the Shell’ at a formative age, Keren Elazari makes a passionate defence of the hacker, tracing a line from Guy Fawkes through V for Vendetta to the masked legion of Anonymous. Quoting Oscar Wilde: “Man is least himself when he talks in his own person. Give him a mask and he will tell you the truth.”

Pinboard-founder Maciej Cegłowski (stand-out phrase “social is not syrup”) voices admiration for the often derided fan-fiction community. Fans fight censorship, defend privacy and improve our culture. They have also developed elaborate tagging systems, and when alienated, like so many of us, by a Delicious re-design, they created a 52-page-long Google Doc of Pinboard feature requests. “It was almost noon when Pinboard stumbled into the office, eyes bleary. His shirt, Delicious noted, was buttoned crooked.”

Visibility is a central concern of our optically-obsessed culture. Much conflict arises from our suspicion of hidden biases and agendas, and our struggle to reveal them. Dan: “Every time we put software into objects they behave in ways that aren’t visible.” People who neglect to read the press releases of bin manufacturers may have missed the appearance on City of London streets of MAC address-snooping litter bins. Fortunately we have James Bridle to war-chalk them and Tom Taylor to consider stuffing them with rapidly changing random MAC address junk.

Amber wants to render the visible invisible – like Steve Mann’s “diminished reality” billboard-cancelling eyewear – and to make the invisible visible, by exposing un-noticed behaviours of smart objects. There can be unintended consequences in the human world, such as a touching conversation between student and construction worker sparked by Amber’s inadvertent placing of a target for GPS game MapAttack in the middle of a building site.

Making the invisible visible is what Timo Arnall’s celebrated ‘Immaterials’ films are all about. I’d seen them online, of course, but during the dConstruct lunch break I popped into the Lighthouse where they’re beautifully displayed in the gallery setting they deserve. Dan talks of Buckminster Fuller “creating solutions where the problem isn’t quite ready to be solved”. Which is exactly how I feel re-watching Timo’s 2009 work on RFID. Creatives and “critical engineers” see this stuff in many more dimensions than the mainstream imagines possible.

Not just seeing but hearing. Robot musician and sound historian Sarah Angliss tells of instruments that died out – the Serpent, the Giraffe Piano, the castrato’s voice – and of the way we’ve become accustomed to things our ancestors would have considered uncanny, unheimliche. Feel the fear induced by massive infrasonic church organ pipes. Look at a photo of people hearing a phonogram for the first time. Listen to Florence Nightingale’s voice recorded, musing about mortality.

And yet, towards the end of the day, something unexpected happens that makes me optimistic about our present condition. Dan Williams shows ‘The Conjurer’ by magician-turned-cinematographer Georges Méliès – he of Scorsese’s ‘Hugo’ – performing disappearing tricks on the silver screen. We all know exactly how they’re done. They’d be trivial to recreate in iMovie. In spite of this we delight and laugh together at the tricks, as if the film was only made yesterday. This stuff has been the future for a long time now, and we seem to be taking it quite well.

Thanks to all the speakers, organisers and volunteers. dConstruct was brilliant as ever.