AI, black boxes, and designerly machines

On my holiday, I started reading up on some topics I ought to know more about: artificial intelligence, genomics, healthcare, and the fast-approaching intersection of the above. Here follow some half-baked reckons for your critical appraisal. Please tell me what’s worth digging into more, where I’m wrong, and what I might be missing.

1. Opening the black box

Large ribosomal subunit (50S) of Haloarcula marismortui, facing the 30S subunit. The ribosomal proteins are shown in blue, the rRNA in ochre, the active site (A 2486) in red. Data were taken from PDB: 3CC2, rendered with PyMOL.
By Yikrazuul, CC BY-SA 3.0, from Wikimedia Commons

Reading Siddhartha Mukherjee’s ‘The Gene: An Intimate History’, I discovered the amazing trajectory of human understanding of DNA, RNA, enzymes, proteins, the genome, and the mechanisms by which they interact. There’s no doubt that this stuff will transform – is already transforming – our relationships with medicine. Crucially, this generation of scientists is looking inside a black box whose effects their predecessors could observe, but whose inner workings they could not.

At the same time, fuelled by petabytes of readily available data to digest, computer science risks going the other way in how it frames artificial intelligence: moving from simple, explicable systems to ones where it is acceptable to say, “this stuff is so complex that we don’t know how it works. You have to take it on trust.”

When we apply artificial intelligence (AI) to healthcare, transparency is essential; black boxes must be considered harmful.

It’s not just me saying this. Here are the words of the Institute of Electrical and Electronics Engineers (IEEE):

“Software engineers should employ black-box software services or components only with extraordinary caution and ethical care, as they tend to produce results that cannot be fully inspected, validated or justified by ordinary means, and thus increase the risk of undetected or unforeseen errors, biases and harms.” — Ethics of Autonomous & Intelligent Systems [PDF]

Transparency must be the order of the day. It comes in (at least) two flavours: the first is clear intent; the second, understandable operation. Both are under threat, and designers have a vital role to play in saving them.

2. The opacity of intent

It’s a commonplace to say that technology is not neutral. I won’t labour that point here because Sara Wachter-Boettcher, Ellen Broad and others do a good job of highlighting how bias becomes embedded, “AI-washed” into seemingly impartial algorithms. As the title of Ellen’s wonderful book has it, AI is ‘Made By Humans’.

That doesn’t seem to stop stock definitions from attempting to wall off AI beyond the purview of human control:

“In computer science, AI research is defined as the study of ‘intelligent agents’: any device that perceives its environment and takes actions that maximise its chance of successfully achieving its goals.” — Wikipedia

But what goals exactly? And how did the AI get them? The Wikipedia definition is silent about how goals are set because, in the words of Professor Margaret Boden, “the computer couldn’t care less.”

“…computers don’t have goals of their own. The fact that a computer is following any goals at all can always be explained with reference to the goals of some human agent. (That’s why responsibility for the actions of AI systems lies with their users, manufacturers and/or retailers – not with the systems themselves.)” — Robot says: Whatever

When any technology moves from pure to applied science, intent must be centre stage. If we fixate too much on the computer science of AI, and not enough on the context of its application, intent will always be unintentionally obscured.

Many discussions about the “ethics” of AI or genomics are really, I think, discussions about the opacity of intent. If we don’t know who’s setting the goals for the machine, or how those goals are derived, how can we know if the intent is good or bad?

Moreover, true human intent may be difficult to encode. In a domain as complex as health and care, intent is rarely straightforward. It can be changing, conflicting and challenging to untangle:

  • a boy was triaged at first contact as less urgent, but has suddenly taken a turn for the worse
  • an elderly woman wants to get home from hospital, but her doctors need first to be sure she’ll be safe there
  • the parents want to help their children lose weight, but know that pester power always leads them back to the burger chain.

In these situations, even Moore’s Law is no match for empathy, and actual human care.

3. Designers to the rescue

Design, in Jared Spool’s wonderfully economical definition, is “the rendering of intent.” Intent without rendering gives us a strategy but cannot make it real. Rendering without intent may be fun – may even be fine art – but is, by definition, ineffective.

It’s time for designers to double down on intent, and – let’s be honest – this is not an area where design has always covered itself in glory.

We know what design without intent looks like, right? It’s an endless scroll of screenshots presented without context – the Dribbblisation of design. If you think that was bad, just wait for the Dribbblisation of AI. Or the Dribbblisation of genomics. (“Check out my cool CRISPR hacks gallery, LOL!”)

Thoughtful designers, on the other hand, can bust their way out of any black box. Even if they’re only called in to work on a small part of a process, they make it their business to understand the situation holistically, from the user’s point of view, and that of the organisation.

Design comes in many specialisms, but experienced designers are confident moving up and down the stack – through graphic design, interaction design and service design problem spaces. Should we point an AI agent at optimising the colour of the “book now” buttons? Or address the capacity bottlenecks in our systems that make appointments hard to find?

One of my team recently talked me through a massive service map they had on their wall. We discussed the complexity in the back-end processes, the push and pull of factors that affected the system. Then, pointing at a particular step of the process: “That’s the point where we could use machine learning, to help clinicians be confident they’re making a good recommendation.” Only by framing the whole service could they home in on a goal that had value to users and could be usefully delegated to AI.

4. How do you know? Show your thinking.

School exam paper question

Crucially, designers are well placed to show the workings of their own (and others’) processes, in a way that proponents of black box AI never will.

This is my second flavour of transparency, clarity of operation.

How might we:

  • communicate probabilities and uncertainties to help someone decide what to do about their disposition to a form of cancer?
  • show someone exactly how their personal data can be used in research to develop a new treatment?
  • involve people waiting for treatment in the co-design of a fair process for prioritisation?

In a world of risks and probabilities, not black and white answers, we should look for design patterns and affordances that support people’s understanding and help them take real, fully informed control of the technologies on offer.
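
Take the first of those “how might we” questions. One well-studied pattern for communicating risk is the natural frequency (“about 7 in every 100 people like you”) rather than a bare percentage. Here is a minimal, hypothetical sketch of that pattern in Python – the function name, wording and figures are mine for illustration, not clinical guidance:

```python
def describe_risk(probability, denominator=100, group="people like you"):
    """Render a probability as a natural frequency statement.

    Natural frequencies ("about 7 in 100") are generally easier to grasp
    than percentages or odds ratios. The wording here is illustrative
    only, not validated patient-facing copy.
    """
    if not 0 <= probability <= 1:
        raise ValueError("probability must be between 0 and 1")
    affected = round(probability * denominator)
    return (
        f"Out of every {denominator} {group}, about {affected} would be "
        f"affected and about {denominator - affected} would not."
    )

# Example: an estimated 7% risk
print(describe_risk(0.07))
# -> Out of every 100 people like you, about 7 would be affected and about 93 would not.
```

The point is not the code but the affordance: the same number, rendered as “7%” or as “7 in every 100 people like you”, supports very different levels of understanding and control.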

This is not an optional extra. It’s a vital part of the bond of trust on which our public service depends.

5. Designerly machines

Applying fifty iterations of DeepDream, the network having been trained to perceive dogs – CC0 MartinThoma

The cultural ascendancy of AI poses both a threat and an opportunity to human-centred design. It moves computers into territory where designers should already be strong: exploration and iteration.

I’m critically optimistic because many features of AI processes look uncannily like a repackaging of classic design technique. These are designerly machines.

Dabbers ready, eyes down…

  • Finding patterns in a mass of messy data? Check!
  • Learning from experiments over many iterations? Check!
  • Sifting competing options according to emerging heuristics? House!

Some diagrams explaining AI processes even resemble mangled re-imaginings of the divergent/convergent pattern in the Design Council’s famous double diamond.

Diagram showing how design moves from problem to solution in four stages, shown as one diamond after another. There are two pairs of divergence and convergence: Discover and Define, Develop and Deliver
© Design Council 2014 – https://www.designcouncil.org.uk/news-opinion/design-process-what-double-diamond

A diagram outlining a forward pass through three 3D generative systems; data diverges and then converges
“A diagram outlining a forward pass though our three 3D generative systems.” – Improved Adversarial Systems for 3D Object Generation and Reconstruction [PDF]
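
To make the resemblance concrete, here is a tiny, hypothetical sketch of the generate, evaluate and select loop that sits inside many such diagrams (the names and the toy scoring function are invented for illustration): each round diverges by generating candidate options, then converges by scoring them against a heuristic and keeping the best – a pocket-sized double diamond, run over and over.

```python
import random

def score(option):
    """Stand-in heuristic: in a real system this might be a learned loss,
    a usability metric, or clinician feedback."""
    return -abs(option - 42)  # pretend 42 is the answer we are converging on

def generate_candidates(seed, n=20, spread=10.0):
    """Diverge: produce many variations around the current best option."""
    return [seed + random.uniform(-spread, spread) for _ in range(n)]

def iterate(best=0.0, rounds=50):
    """Converge: keep only the highest-scoring candidate in each round."""
    for _ in range(rounds):
        best = max(generate_candidates(best), key=score)
    return best

print(iterate())  # drifts towards 42 over successive diverge/converge rounds
```
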
The threat is that black box AI methods are seen as a substitute for intentional design processes. I’ve heard it suggested that AI could be used to help people navigate a complex website. But if the site’s underlying information architecture is broken, then an intelligent agent will surely just learn the experience of being lost. (Repeat after me: “No AI until we’ve fixed the IA!”)

The opportunity is to pair the machines with designers in the service of better, faster, clearer, more human-centred exploration and iteration.

Increased chatter about AI will bring new, more design-like metaphors of rendering that designers should embrace. We should talk more about our processes for discovering and framing problems, generating possible solutions and whittling them down with prototypes and iteration. As a profession, we have a great story to tell.

A resurgent interest in biology, evolution and inheritance might also open up space for conversations about how design solutions evolve in context. Genetic organism, intelligent software agent, or complex public service – we’re all entangled in sociotechnical systems now.
