“Evolution. What’s it like?” The three lives of the front-facing camera

“Evolution. What’s it like? So one day you’re a single-celled amoeba and then, whoosh! A fish, a frog, a lizard, a monkey, and, before you know it, an actress.
[On-screen caption: “Service limitations apply. See three.co.uk”]
I mean, look at phones. One, you had your wires. Two, mobile phones. And three, Three video mobile.
Now I can see who I’m talking to. I can now be where I want, when I want, even when I’m not. I can laugh, I can cry, I can look at life in a completely different way.
I don’t want to be a frog again. Do you?”

— Anna Friel, 3 UK launch advert, 2003

Today, in 2016, that ad feels so right, and yet so wrong. Of course phones have changed massively in the intervening decade-and-a-bit — just not how the telecoms marketeers of the early Noughties fantasised. In this post I want to trace what evolution of technology might really be like. I’ll do it by following the unstable twists and turns around one small element of the construct we now call a smartphone.

Something was missing from the Anna Friel commercial. All the way through, the director was at pains to avoid even the tiniest glimpse of something the audience was eager to see. You know, a phone. At the time I worked for Three’s competitor Orange, whose brand rules also forbade the appearance of devices in marketing. The coyness was partly aesthetic: mobiles in those days were pig-ugly. Moreover, the operators had just paid £4 billion each for the right to run 3G networks in the UK. They wanted consumers to think of the phone as a means to an end, a mere conduit for telecommunications service, delivered over licensed spectrum.

To see a device in all its glory, we must turn to the manufacturer’s literature. Observe the product manual of the NEC e606, one of three models offered by Three at its launch on 3 March 2003:

NEC e606 product manual

Notice where a little starburst has been Photoshopped onto the otherwise strictly functional product shot? That’s the only tangible hint of the phone’s central feature, the thing that makes it worth buying despite being pricier and weightier than all the other matte grey clamshells on the market. By this point, loads of phones have digital cameras built in, but they are always on the back, facing away so the holder can use the tiny colour screen as a viewfinder. This is something different: a front-facing camera. It exists so that Anna Friel can see who she is talking to.

Let’s map* this network.

maps3.001.png

Loosely, the vertical axis answers the question “how much do users care about this thing?” The nearer the top, the more salient the concept. The horizontal concerns stability of the concept – the further to the right, the less controversial. But at this point the choice of nodes and the connections between them matters more to me than their precise placement. This forms an actor-network – a set of concepts that belong together, in at least one contested interpretation.

  • Phone calls are over on the top right, a very stable concept. Users understand what phone calls are for, know how to access them, and accept that they cost money.
  • If the operators can persuade users to add pictures, to see who they’re talking to, they have a reason to sell not just plain old telephony service but 3G, that thing they’ve just committed billions of pounds to building. Cue the front-facing camera.
  • Video calling and 3G cellular networks rely on each other, but both are challenged. Do users really need them? Will they work reliably enough to be a main selling point for the device? Whisper it softly, “service limitations apply”.
  • Because of this weakness, the assemblage is bolstered by a less glamorous but more stable concept – asynchronous video messaging. This at least can be delivered over the more reliable and widespread 2.5G cellular. Users don’t care much about this, but the distinction matters to our network.

What then remains for the telco executive of 2003 to do? Maybe just wait for the technology to “evolve”?

  • More 3G base stations will be built and the bandwidth will increase
  • Cameras and screens will improve in resolution
  • People will take to the idea of seeing who they’re talking to, if not on every call, then at least on ones that really matter.

All these things have come to pass. But could I draw the same network 10 years later with everything just a bit further over to the right? No, because networks come apart.

Nokia’s first 3G phone, the 6630, had no front-facing camera. Operators used their market muscle and subsidies to push phones capable of video calling, yet many of the hit devices of the next few years didn’t bother with one. The first two versions of the Apple iPhone did likewise: even the model branded “iPhone 3G” lacked a front-facing camera. Finally, in 2010, the operators had to swallow their pride and market an iPhone 4 with Apple’s exclusive FaceTime video-calling service, which ran only over unlicensed-spectrum wifi.

This is the social construction of technology in action. Maybe evolution is a helpful metaphor, maybe not. Whatever we call it, this is the story of how, over the course of a decade, users taught the technology sector what phones were for, through their choices of what to buy and what to do. Hint: it wasn’t video calling.

Just when we think the front-facing camera is out of the frame, it makes a surprising comeback. This time it’s not shackled to either video calls or mobile messaging. Instead it emerges as a tool of self-presentation in social media.

Some rights reserved – Ashraf Siddiqui

“Are you sick of reading about selfies?” asks an article in The Atlantic, announcing that selfies are now boring and thus finally interesting. “Are you tired of hearing about how those pictures you took of yourself on vacation last month are evidence of narcissism, but also maybe of empowerment, but also probably of the click-by-click erosion of Culture at Large?” “Indeed, for all its usage, the term — and more so the practice(s) — remain fundamentally ambiguous, fraught, and caught in a stubborn and morally loaded hype cycle.”

‘What Does the Selfie Say? Investigating a Global Phenomenon’, Theresa M. Senft and Nancy K. Baym

Time for another map.

maps3.002.png

  • By 2013, 3G (now also 4G) cellular mobile is no longer in doubt, but its salience to users is diminished. It is a bearer of last resort when wifi is not an option for accessing the Internet.
  • The lynchpin at the top right is not the phone call but social media, with its appetite for videos and photos. In their service, we find the front-facing camera, though it is now rarely used for calling.
  • Only a fraction of selfies even leave the phone. Many of them are shared in person, in the moment, on the bright, HD screen. They are accumulated and enhanced with storage and processing powers that barely figured on the phones of 2003.

Call it evolution if you like, this total dissolution and reassembly of concepts.

We’re not done yet. Here’s another commercial for your consideration. One for the Samsung Galaxy S4 mapped above. Can you spot the third incarnation of the front-facing camera?

Man 1: “Hey, sorry I was just checking out your phone. That’s the Galaxy S4, right?”
Man 2: “Yeah, I just got it.”
Woman: “Did your video just pause on its own?”
Man 2: “Yeah it does it every time you look away from the screen.”
Man 1: “And that’s a big screen too.”
Man 2: “Yeah, HD.”
Man 3: “Is that the phone you answer by waving your hand over it?”
Man 2: “Yeah.”
Man 1: [waves hand over Man 2’s phone] “Am I doing it right?”
Man 2: “Someone has to call you first…”

Samsung Galaxy S4 TV advert, 2013

See how far a once-secure concept has fallen? The guy needs reminding (in jest at least) how phone calls work! Compared to the 3G launch video, this scene is more quotidian; the phone itself is present as an actor.

And what is the front-facing camera up to now? Playing stooge in the S4’s new party trick: the one where the processor decides for itself when to pause videos and answer calls. If the user never makes another video call or takes another selfie, it’ll still be there as the enabler of gesture control. Better add that to my map:

maps3.003.png

We used to think the phone had a front-facing camera so we could see each other. Then it became a mirror in which we could see ourselves. Now, it turns out, our phones will use it so they can observe us.

Maybe that’s what evolution is like.


* These maps are not Wardley value chain maps, though I see much value in that technique. More on that in a later post.

And yet it moves! Digital and self-organising teams with a little help from Galileo

IMG_20160810_171835.jpg

This summer, after a lovely two-week holiday in Tuscany, I returned to Leeds and straight into a classroom full of government senior leaders discussing agile and user-centred design. Their challenges set me thinking once more about the relationship between technology and social relations in the world of work. One well-known story from the Italy of 400 years ago is helping me make sense of it all.

Galileo’s sketches of the moon

1. Magnification

Galileo Galilei did not invent the telescope but he greatly improved it, reaching more than 20x magnification and pointing it for the first time at the seemingly smooth celestial bodies of the night sky. In March 1610, he published drawings of the universe as never seen before. What seemed to the naked eye a handful of constellations appeared through Galileo’s telescope as thousands of teeming stars. He showed the moon pocked with craters, mountain ranges and plains. He used his observations and calculations of the planets to confirm a long-held but never proven conjecture: that the earth and the other planets travel around the sun.

With its twin, the microscope, the telescope was a transformative technology of Galileo’s age, affording new ways of seeing things that people thought they already knew well. Our tools are the smartphone and the web. They too change how we see the world in many ways. Most of all they shed new light upon, and throw into relief, the detail of the social. Minutiae of conversations and interactions that used to occur fleetingly in private before disappearing into thin air can now be shared, stored and searched in previously unimaginable ways.

So let’s focus our gaze upon the world of work. (I am not the first to draw this parallel. Steve Denning writes eloquently about what he calls the “Copernican Revolution In Management”.) In a pre-digital era, organisations appeared to be made of smooth reporting lines, opaque meeting agendas and crisp minutes. Now the wrinkles and pits of communication and interaction are exposed in detail for all to see – every email, every message, every line of code.

Digital communications facilitate, magnify and expose people’s timeless habits of co-operation. These social phenomena are not new. It’s just that, until recently, indicators of productive informality were hidden from view. In the absence of evidence, we focused more attention, and founded our theories of management, on things that were immediately obvious: explicit hierarchies and formal plans.

up close.jpg

Now by observing the details, we can confirm a long-held theory: that self-organisation is rife in the workplace. The new communications tools reveal…

  • the human voices of individuals and interactions in Slack groups, wikis and code repositories
  • the depth of customer collaboration in Twitter replies and support forums
  • the endless resourcefulness of teams responding to change in Trello boards and live product roadmaps.

social.jpg

We should be careful not to over-claim for this shift. As a student of history and the social sciences, I am instinctively suspicious of any narrative which has human nature suddenly change its spots. I come to bury mumbo-jumbo, not to praise it. I reject the teal-coloured fantasy of Frederic Laloux’s “next stage of human consciousness.” More likely the behaviours Laloux identifies have always been with us, only hidden from view. Future generations may judge that we are living through a paradigm shift, but such things can only be confirmed after the fact.

2. Empiricism

The day after Galileo’s publication, the stars and planets carried on doing their thing, much as they had for the billions of days before. After all, heliocentrism was not even an original idea. Aristarchus of Samos had proposed it in the 3rd Century BC; Islamic scholars discussed it on and off throughout the middle ages; and Nicolaus Copernicus himself had revived it more than 20 years before Galileo was born. In one way, nothing had changed. In another, everything had changed. As with another famous experiment – dropping different objects from the Leaning Tower of Pisa to test the speed of falling bodies – Galileo was all about empiricism. He did not ask whether a proposition was more elegant to the mind’s eye or more convenient to the powerful. He designed tests to see whether it was true.

The Manifesto for Agile Software Development is itself an empirical text, founded in the real-world experiences of its authors. It begins (my emphasis): “We are uncovering better ways of developing software by doing it and helping others do it.” The authors set out four pairs of value statements in the form “this over that”, stressing “that while there is value in the items on the right, we value the items on the left more”.

We are uncovering better ways of developing software by doing it and helping others do it. Through this work we have come to value:

Individuals and interactions over processes and tools
Working software over comprehensive documentation
Customer collaboration over contract negotiation
Responding to change over following a plan

That is, while there is value in the items on the right, we value the items on the left more.

These were the values of 17 balding and bearded early Noughties software professionals who gathered at the Snowbird ski resort in Utah. It would be easy to mistake the manifesto for a creed – a set of assertions that true followers must accept as gospel. But it’s nothing of the sort. This is not a religion. Empiricism says we have the power to see for ourselves.

In scores of learning and development sessions over the past couple of years, my associates and I have conducted a little experiment of our own. This is the method:

  • Without sharing the text of the manifesto, we hand out eight randomly ordered cards, each showing a different value statement – “contract negotiation”, “working software”, “following a plan” and so on.
  • Then we ask participants to rank them in the order that they would value when delivering a service.
  • There are no right or wrong answers. We just want to understand what they value.

The result: 90% of the time the items on the left bubble to the top of the list – regardless of participants’ roles and experiences. Of course many project managers say they value “following a plan”, but most of them value “responding to change” more highly. I had a couple of contract managers on one course. They ranked the “contract negotiation” card pretty high up their list. But they put “customer collaboration” at the top.

When people recall their best experiences at work, the things they describe are invariably the things on the left. Those who have been around big organisations for 20 years or more often speak in terms of “that’s how we used to do things” – before the so-called professionalisation of “information technology” tried to replace trust and teamwork with contracts and stage gates. For others there are more recent stories of emergencies and turnarounds, when everyone pulled together around a common cause and just got stuff done in an amazingly productive, naturally iterative rhythm.

3. Reaction

From the time of Copernicus in the 1540s until Galileo’s work in the 1610s, Catholic Church leaders were mostly comfortable with heliocentricity. While Copernicus’ propositions remained “just a theory” they were interesting but unthreatening. But Galileo’s evidence, his assertion of empiricism over the authority of Aristotelian ideas, provoked a backlash. They accused him of heresy and threatened him with torture until he solemnly recanted his view that the earth moved round the sun. This he did, though he allegedly muttered under his breath, “And yet it moves.”

That’s the thing about this set of propositions we call “agile”, or “lean”, or “post-agile” or whatever. Often we contrast these with something called “waterfall” as if these were equally valid, alternative ways of getting things done. I think that’s a mistake. They’re not things we pick and choose, any more than Galileo chose to make the earth travel round the sun. Agile and waterfall are alternative theories of how things get done – how things have always got done.

Digging a little into the history, it turns out that “waterfall” was never meant to be taken literally:

“Dr Winston Royce, the man who is often but mistakenly called the “father of waterfall” and the author of the seminal 1970 paper Managing the Development of Large Software Systems, apparently never intended for the waterfall caricature of his model to be anything but part of his paper’s academic discussion leading to another, more iterative version.” – Parallel Worlds: Agile and Waterfall Differences and Similarities

But when people feel threatened by new ideas, there’s a risk, as happened with astronomy, that they back further into their corner and end up espousing more extreme views than they would have held if left unchallenged.

Some who attribute their successes to top-down command-and-control management may fear they have a lot to lose from the growing evidence base for self-organisation. We need to find unthreatening ways to talk to the small group of people – in my experience less than 10% – for whom the values of the left-hand side do not spring naturally to the top of the list.

Coexistence is possible. Equivalence is not. Many religious believers, for example, manage to square their faith in a divine creator with the iterative circle of Darwinian evolution. What’s not credible, though, is a like-for-like, pick-and-mix approach to agile and waterfall. Nobody argues for evolution of the flea and creation of the elephant, because one of these accounts is based on empiricism and the other on an appeal to authority.

4. Conclusion

It took more than a century for the Catholic Church to overcome its aversion to heliocentrism. Meanwhile scientists in the Protestant world continued to circulate and build on Galileo’s findings. Remember Isaac Newton: “If I have seen further, it is by standing on the shoulders of giants.” The last books by Copernicus and Galileo were finally removed from the Church’s banned list in 1835.

If the last few years of domestic and international affairs have taught us anything, it should be that the arrow of progress can go backwards as well as forwards. Rightness and rationality can easily lose out to conflicting interests. If we believe there’s a better way, then it’s down to every one of us to model that better way, in how we work, and how we talk about our work. We can do this by:

  • working out loud to make our collaboration visible and legible
  • collecting and sharing evidence of self-organisation in action
  • resisting mumbo jumbo with simple, factual accounts of how we get stuff done
  • accepting coexistence with other theories but never false equivalence.

In 1992, Pope John Paul II expressed regret for how the Galileo affair was handled. But plans to put a statue of the astronomer in the grounds of the Vatican proved controversial, and were scrapped in 2009.

Technology enables variation


HT to Emma Bearman for tweeting me this Imperica article on Cedric Price.

It’s so important to see change as a thing people demand of technology, not, as often framed, the other way round.

“Technology enables variation” – that’s basically what I meant in appropriating John Ruskin’s term “changeful.”

dConstruct 2013: “It’s the Future. Take it.”

It puzzles me that technology so easily becomes the dominant metaphor for explaining society, and not the other way round. “Self-organise like nanobots into the middle,” exhorts dConstruct host Jeremy Keith as we assemble for the afternoon session at the Brighton Dome. We shuffle obligingly to make room for the latecomers, because everyone here accepts without question that nanobots really do self-organise, even if they’re so tiny we can’t see them with our puny, unaugmented eyes.

“It’s the Future. Take it.” Dan Williams mocks strident techno-determinism and refuses to take anything at face value: “I find the concept of wonder to be problematic.” Even Wenlock, the Olympic Mascot, conceals in plain sight a sinister surveillance camera eye, homage perhaps to London’s insouciant acceptance of closed-circuit television. Maybe we should “take it” like the CCTV filmmakers whose manifesto includes the use of subject access requests to wrest footage of themselves from surveillance authorities unaware of their role in an art phenomenon.

Other speakers also touched on this theme of acceptance – the ease with which we come to terms with new tools in the environment and extensions of the physical and mental self.

For cyborg anthropologist Amber Case “design completely counts.” Just contrast reactions to the in-your-face Google Glass and the “calm”, unobtrusive Memoto Lifelogging Camera. I love the history lesson too, starting with Steve Mann’s 40lb hacked-together heads-up-display rig from 1981. This stuff is shape-shifting fast, from the 1950s mainframe to the “bigger on the inside”, Mary Poppins smartphones we’ve so readily come to rely on as extensions of the mental self.

Digital designer Luke Wroblewski seems more matter-of-factly interested in the quantity of change than in its qualitative implications. Designers who have struggled to cope with just one new interface, touch, now face up to 13 distinct input types. Luke’s our tour guide to a dizzying variety of input methods – each with its own quirks and affordances – from 9-axis motion orientation sensing to Samsung’s Smart Stay gaze detection to Siri’s role as a whole other “parallel interface layer”. No wonder, I reckon, that minimal “flat UI” is the order of the day. What with all these new interactions to figure out, designers simply lack the time and energy to spend on surface decoration.

Simone Rebaudengo imaginatively plays out the internet of things. He’s against a utilitarian future, and for one in which objects tease their way into their users’ affections. “Rather than demonstrating their buying power, people have to prove their keeping power.” He imagines a world in which toasters experience anxiety and addiction. People apply to look after them (though they can never be owned, only hosted) by answering questions of interest to the toasters. Hosts throw parties with copious sliced bread to make their toasters feel wanted. No, really. Simone has a unique and playful take on the service-dominant world. (I just wish he would stop calling things “products”. It’s so last century.)

However, conflict and repression are always nearby.

Nicole Sullivan presents a taxonomy of internet trolls: the jealous, the grammar Nazi, the biased, and the scary. Women in tech experience trolling far more and far worse than men. And we all need to challenge our biases. Fortunately there’s a handy online tool for that.

After watching ‘Hackers’ and ‘Ghost in the Shell’ at a formative age, Keren Elazari makes a passionate defence of the hacker, tracing a line from Guy Fawkes through V for Vendetta to the masked legion of Anonymous. Quoting Oscar Wilde: “Man is least himself when he talks in his own person. Give him a mask and he will tell you the truth.”

Pinboard-founder Maciej Cegłowski (stand-out phrase “social is not syrup”) voices admiration for the often derided fan-fiction community. Fans fight censorship, defend privacy and improve our culture. They have also developed elaborate tagging systems, and when alienated, like so many of us, by a Delicious re-design, they created a 52-page-long Google Doc of Pinboard feature requests. “It was almost noon when Pinboard stumbled into the office, eyes bleary. His shirt, Delicious noted, was buttoned crooked.”

Visibility is a central concern of our optically-obsessed culture. Much conflict arises from our suspicion of hidden biases and agendas, and our struggle to reveal them. Dan: “Every time we put software into objects they behave in ways that aren’t visible.” People who neglect to read the press releases of bin manufacturers may have missed the appearance on City of London streets of MAC address-snooping litter bins. Fortunately we have James Bridle to war-chalk them and Tom Taylor to consider stuffing them with rapidly changing random MAC address junk.

Amber wants to render the visible invisible – like Steve Mann’s “diminished reality” billboard-cancelling eyewear – and to make the invisible visible, by exposing un-noticed behaviours of smart objects. There can be unintended consequences in the human world, such as a touching conversation between student and construction worker sparked by Amber’s inadvertent placing of a target for GPS game MapAttack in the middle of a building site.

Making the invisible visible is what Timo Arnall’s celebrated ‘Immaterials’ films are all about. I’d seen them online, of course, but during the dConstruct lunch break I popped into the Lighthouse where they’re beautifully displayed in the gallery setting they deserve. Dan talks of Buckminster Fuller “creating solutions where the problem isn’t quite ready to be solved”. Which is exactly how I feel re-watching Timo’s 2009 work on RFID. Creatives and “critical engineers” see this stuff in many more dimensions than the mainstream imagines possible.

Not just seeing but hearing. Robot musician and sound historian Sarah Angliss tells of instruments that died out – the Serpent, the Giraffe Piano, the castrato’s voice – and of the way we’ve become accustomed to things our ancestors would have considered uncanny, unheimliche. Feel the fear induced by massive infrasonic church organ pipes. Look at a photo of people hearing a phonogram for the first time. Listen to Florence Nightingale’s voice recorded, musing about mortality.

And yet, towards the end of the day, something unexpected happens that makes me optimistic about our present condition. Dan Williams shows ‘The Conjurer’ by magician-turned-cinematographer Georges Méliès – he of Scorsese’s ‘Hugo’ – performing disappearing tricks on the silver screen. We all know exactly how they’re done. They’d be trivial to recreate in iMovie. In spite of this we delight and laugh together at the tricks, as if the film was only made yesterday. This stuff has been the future for a long time now, and we seem to be taking it quite well.

Thanks to all the speakers, organisers and volunteers. dConstruct was brilliant as ever.

A {$arbitrary_disruptive_technology} In Every Home

The fantastic culmination of James Burke’s talk at dConstruct last week set me thinking about a misleading trope that seems to recur with regularity in our discourse about technology.

Through his 70s TV series James was a childhood hero of mine. I wrote about his talk in my summary of the event, and thanks to the generous and well-organised dConstruct team you can also listen to the whole thing online.

With a series of stories, James showed how chance connections have led to important new discoveries and paradigm shifts – how, for example, a wrecked ship gave rise to the invention of toilet roll. So far, so serendipitous.

But then he set off on a flight of fancy that I found harder to follow, on the implications of nanotechnologies still gestating in the R&D labs. How this stuff would transform the world in the next 30 to 40 years! Not, thankfully, with a Prince Charles grey goo apocalypse but with a triumph of anarchist equilibrium.

How would the Authorities cope when their subjects no longer needed them to arbitrate Solomon-like over scarce resources? How would society be structured in a new world of abundant everything (except maybe mud, apparently the basic element of nanoconstruction)? How would Everything be changed by the arrival of A Nanotechnology Factory In Every Home?

A {$arbitrary_disruptive_technology} In Every Home!

I wondered what other innovations have held such promise. Cue Google Book Search where among the random pickings I find:

  • 1984: “modem in every home”
  • 1978: “robot in every home”
  • 1976: “microprocessor in every home”
  • 1975: “wireless telegraphy in every home”
  • 1943: “television in every home”
  • 1937: “telephone in every home”
  • 1915: “water supply in every home”
  • 1908: “sewing machine in every home”
  • 1900: “piano and good pictures in every home”

And some of the most popular charted with the wonderful Ngram Viewer:

It seems that whenever a transformative technology comes along there are some who dare to dream of its widespread adoption. On paper of course, they are right. I live in a home with all the above (though we in the developed world can all too easily overlook the 780 million people who still rely on unsafe water supplies).

Yet the focus on the domestic obscures the fact that all these technologies and resources are still employed as much, if not more, in our public spaces and workplaces as in our private homes.

  • A fountain in every town square
  • A screen at every bus stop
  • A server in every server farm
  • A robot in every loading bay
  • A sewing machine in every sweat shop

So why the allure of a domestic context?

200 years ago the Luddites found themselves, with good reason, resisting the wrenching of textile trades out of their cottages and into the factories. They responded by machine breaking, not machine making.

Now, however, we look to technology to redress the balance in the other direction. By limboing under a bar of low price and simple operation, goes the narrative, each new technology will find its place in every home, thus setting people free from the tyranny of mass production.

Except that’s not how it really works out. Even for the cheap and plentiful, large-scale industrialisation trumps cozy domestication.

The printing press managed to change society drastically between 1500 and 1800 without the need to deliver hot metal to the home. One in every town appears to have been plenty disruptive enough. And while computer-connected home printers have been a reality for decades the use cases for large-scale industrial printing continue to expand.

The Computer In Every Home was a vision held early on by the pioneers of the Homebrew Computer Club. But as a by-product of ushering in a new era of small-scale tinkering, homebrew hackers Jobs and Wozniak also happened to grow the most valuable single public company of all time!

And while I write this post in a living room stuffed with processing power and data storage, the services that I value most run in and from the network – Gmail, Facebook, WordPress.com, Dropbox – not so much a computer in every home as a home in every computer. What’s more, for this convenience, it appears people are prepared to put their faith in the most unaccountable, parvenu providers.

The same may be true of the 3d printer, current poster child of post-industrialism. The sector is within spitting distance of sub-$1,000 desktop units, but those are unlikely to prove the most popular or productive way to disperse the benefits of this new technology. Far simpler to pack rows of bigger units into factories where they can be more easily serviced and efficiently employed round the clock.

So if the alchemy of nanotechnology does come to pass (and I’ll be stocking up on Maldon mud just in case) then – like it or not – it seems as likely to be a centralised and centralising force as a decentralising one.

Or am I missing something? Economists, historians of science, help me out, please.

dConstruct threads: Arrogance, uncertainty and the interconnectedness of (nearly) all things

The web is 21, says Ben Hammersley, it can now legally drink in America. And yet, as it strides out into young adulthood, it has much to learn. At dConstruct we hear some of those lessons – ones about humility, unpredictability and the self-appointed tech community’s responsibilities to the rest of humankind.

I agree with Ben when he advocates a layered approach to the web and its next, next, next, larger contexts – the single user, groups of users, society and the world at large. “Make the world more interconnected, more human, more passionate, more understanding.”

“Don’t become enamoured of the tools,” he urges. Think of the people looking at the painting, not the paintbrushes and the pigments.

And then he throws it all away with a breath-taking streak of arrogance: “How do we as a community [of practitioners] decide what to do?” To which the world might legitimately respond: who gave you the authority to decide?

And then comes the most arrogant over-claim of all: “We are the first generation in history to experience exponential change!” Exponential change, don’t get me started on exponential change. That was last year’s dConstruct-inspired strop.

Jenn Lukas shows a little more self-awareness. Teaching people to code is a Good Thing. That much is motherhood and Raspberry Pi. And as she digs deeper into the plethora of resources now coming online, she takes a balanced view.

Some learn-to-code evangelists have taken the time to swot up on pedagogic principles that traditional teachers have known all along, like the one that says a problem-based learning approach built on students’ existing motivations is more likely to be retained. Others simply dump their knowledge on an unsuspecting world in a naive splurge of information-based learning. “It’s a tool seeking a problem,” says Jenn.

It strikes me that the learn-to-code advocates who bother to Do It Right could be at the vanguard of the new disruptive wave in education. The approaches they pioneer in online code classes may easily be extended to other domains of learning.

What I missed in Jenn’s talk was a rationale for why learning to code might be important, besides solving specific personal pain points or seeking to earn a living as a developer. For me learning at least the rudiments of computer science is a prerequisite for empowered citizens in the 21st Century. I want my children to grow up as active masters, not passive recipients of information technology. Also, they should know how to make a rubber band jump fingers.

Jason Scott puts those pesky twenty-one-at-heart Facebook developers in their place with his talk on digital preservation. “We won! The jocks are checking their sports scores on computers, but you dragged in a lot of human lives.” Facebook is “the number one place that humans on Earth store our histories. But there are no controls, no way of knowing what they’re up to.”

One day, the law will catch up with the web and mandate backups and data export as standard. “Pivoting” will not absolve businesses of commitments they made to customers under previous, abandoned business models. Until then we can simply implore those responsible to be, well, responsible, with people’s personal data. Or stage daring multi-terabyte rescue missions when companies that should know better shut down Geocities, MobileMe and Muammar Gaddafi’s personal website.

I also loved the concept of “sideways value,” the unexpected things you learn by zooming in on high-def scans of old pictures, or sifting through data that people uploaded for quite different purposes.

Revelling in the unexpected was a big theme of Ariel Waldman’s talk about science and hacking. Hacker spaces, like black holes, have had a bad press but it turns out they’re really cool. Both suck matter in and spit it out as new stuff, creating as much as consuming. In Ariel’s world of delicious uncertainty, satellites inspire new football designs, citizens pitch for experiments to send into space, and algorithmic beard detection turns out to be good for spotting cosmic rays in a cloud chamber.

Plenty of sideways value too in the sheer joy of making stuff. It came across viscerally in Seb Lee-Delisle’s demo of glowsticks and computer vision and fireworks and kittens on conveyor belts. Then intellectually in Tom Armitage’s thoughtful, beautiful reflection on the value of toys, toying and toy making. Tom is the master of the understated, profound observation, such as noticing the hand movement he makes when talking about toys: it’s small, fiddling, taking stuff apart and putting it back together to see what happens. Only by making stuff and playing with it can we really understand and learn. What does it feel like to interact with time-lapse photography, or to follow your past self on Foursquare? Make it, play with it, and see.

Tom also touches on connections between stuff, with Twitter as a supreme platform for linking one thing – Tower Bridge opening times – with another – the web. Thereafter new uses emerge. Expats seek the comfort of a distant but familiar landmark while taxi drivers consult it to route round the problem of a communications link that’s always going up and down.

While other presenters tackle big picture subjects, Scott Jenson’s talk is the most UX-ey of the day but none the worse for it. Every word rang true to me. In my time at Orange, I frequently found myself pushing back against the knee-jerk request for an app. If only, as Scott says, we could strip away the “thin crust of effort” that comes as standard with native apps, then we could empower users with a more natural experience of “discover, use, forget”. Instead with siloed apps we spend time “gardening our phones.” I glance down at the status bar of my needy Android, which is currently pestering me for 28 different app updates.

All this becomes even more pressing when we consider the coming plethora of smart devices that use GPS, wifi, Bluetooth and NFC to link to mobile platforms. Before we even start trying to chain together complex sequences of connected services to second-guess the user’s intent, it makes eminent sense to take Scott’s approach and solve the simpler problem of discovery. Let devices detect their neighbours and expose simple functionality through HTML standards-based interfaces. It may be a tough challenge to liberate such interactivity, but it will be worth it. If smart toasters are mundane, there’s all the more reason for them to work elegantly and without making a fuss.

James Burke takes the interconnection theme to a whole new level. As a child I loved his pacey, unorthodox TV histories of science. Looking back, I think that’s where my own fascination with the Age of Enlightenment was first kindled. Now he seeks to inspire a new generation of school children with exercises in interdisciplinary rounders. He tickles my Ruskinian sensibilities with the suggestion that “focus may turn out to be what the machine is best for, and a waste of human brainpower”.

Only connect. “It is now much easier for noodlers to be exposed to other noodlers with explosive effects.” Children should be schooled in skipping from chewing gum to Isaac Newton in six simple steps, or Mozart to the helicopter in 10. Two hours of daily compulsory wikiracing for every Key Stage 2 pupil, say I.

Sadly James ends the day with a classic “never meet your heroes” moment. Having reeled me in with the unpredictable, wiggly “ripple effects” of innovation, he proceeds to lose me completely at the end of a 40-year-long straight line to a world where autonomous self-copying nano-technology has brought about an abundant, anarchic equilibrium. It is, I suppose, one possible path, but how a social historian of science can jump from so delightful an account of past serendipities to such a techno-determinist vision of the future turned out to be, for me, the biggest mystery of the day.

As ever, I had a great time at dConstruct, saw some old friends and great talks. Thanks to Jeremy Keith and everyone who made it happen. I’m already looking forward to next year.

The Dissolution of the Factories, or Lines Composed a Few Days After Laptops and Looms

In the corner of an attic room in one of Britain’s oldest factories a small group are engaged in the assembly of a Makerbot Thing-O-Matic. They – it – all of us – are there for Laptops and Looms, a gathering of people whose crafts cross the warp of the digital networked world with the weft of making and holding real stuff.

It’s a privilege to be here. Projects are shown, stories shared, frustrations vented. This is the place to be if you’ve ever wondered how to:

  • get funding for projects not considered “digital enough”
  • break out from the category of hand-craft without entering the globalised game of mass-scale manufacture
  • create a connected object that will still be beautiful when the Internet is switched off
  • avoid queuing at the Post Office
  • make a local car.

The inspired move of holding Laptops and Looms in a world heritage site dares us to draw parallels with the makers, hackers and inventors of the past. We are at once nostalgic for the noble, human-scale labour of the weaver’s cottage and awestruck by the all-consuming manufactories that supplanted it.

The nearby city of Derby has just hit the reset button on its Silk Mill industrial museum, mothballed for two years while they work out what to do with it. Rolls-Royce aero engines rub shoulders with Midland Railway memorabilia on the site of a silk mill with a claim to be the world’s first factory.

Like Derby itself, the museum needs to find a way to build upon these layers of history, or be crushed by the weight of them. Water wheels, millstones, silk frames, steam locomotives, jet engines  – they all go round in circles.

Skimming stones on the river at Matlock Bath, someone mentions how the beautiful Derwent Valley reminds him of Tintern Abbey. And I realise that to understand where we are now, 30 years on from the last great Tory recession, we need to twist the dial back another whole turn, to the age of the English monasteries.

Abbeys such as Fountains, Rievaulx and Kirkstall began humbly enough, as offshoots of the French Cistercian movement. Their needs were simple: tranquility, running water and some land for agriculture. But over time they grew, soaring higher, sucking in labour, expanding their estates, diversifying their industries and dominating their localities. Imagine the noise, imagine the power! Until a greedy monarch who would brook no opposition made a grab for their riches and sent the monks packing.

England’s monasteries were washed away in a freshwater confluence of printing presses and Protestant ideology. The clergy who had used the Latin tongue to obscure the word of God were cut down to size, disintermediated by the Bible in English. They still had a role, but no longer a monopoly on the invention of new meanings.

In the shadow of the Gothic ruins, sometimes literally from their rubble, arose smaller vernacular working class dwellings, cottage industries. Among the cottage-dwellers’ most prized possessions was the family Bible, not as grand and illuminated as the monks’ Latin one, but there, accessible to anyone who could read.

To our modern eyes, there was much wrong with the cottage industries: patriarchal, piecework-driven and still at the mercy of merchants higher up the pyramid. But economically this seems closest to the model to which some laptops-and-loomers aspire, (dread phrase) a “lifestyle business” bigger than a hobby but smaller than a factory.

It was 200 years before Britain’s gorges would see the rise of new monsters: water wheels and spinning frames and looms and five storey factories. Something in the cottage industries had got out of kilter. With the invention of the flying shuttle, home-spinning could no longer feed the weaver’s demand for thread. The forces of industrialisation seemed unstoppable, pressed home by a new ideology, Adam Smith’s invisible hand and the productivity gains from de-humanising division of labour. The pattern was repeated elsewhere in Europe with local variations: Revolutionary France threw out its monks and turned the Abbey of Fontenay directly into a paper-mill.

By then the ruined abbeys had lost their admonishing power; some became romantic ornaments in the faux-wild gardens of the aristocracy. Gothic became the go-to architectural style of the sentimental idealist. I’m still a sucker for it today.

There were warnings, of course. Just six years after William Wordsworth’s “Lines Composed a Few Miles above Tintern Abbey”  we got William Blake’s “And did those feet in ancient time”. But still the dark Satanic mills grew. They outgrew the valleys and by means of canals and steam engines dispensed with the need for water power. They swept aside the Arts and Crafts objections of Ruskin and Morris, who fought in vain to revive a labour theory of value.

Until one day some time in the second half of the 20th Century, the tide turned. And here we are today picking our way through the rockpools of the anthropocene for glinting sea-glass, smooth abraded shards of blue pottery and rounded red brick stones. Look closely in those rockpools – the railway arches, hidden yards and edge-of-town industrial parks – and you’ll see that Britain is still teeming with productive life, but on a smaller scale, more niche than before. No longer the workshop of the world.

What comes after the dissolution of Britain’s factories?

That 3d printer in the corner could hold some answers. Despite its current immaturity, 3d printing seems an emblematic technology – perhaps as powerful as the vernacular Bible. It may never be the cheapest way to make stuff, nor turn out the finest work. But it speaks powerfully of the democratisation of making, now within reach of anyone who can use a graphics programme or write a little code. Factories still have a role, but no longer a monopoly on the invention of meanings.

These things move slowly. A straw poll in the pub reveals that many of us already come from the second generation of geeks in our families. Some of us are raising the third. A child who grows up with a laptop and a 3d printer knows she can make a future spinning software, hardware, and the services that bind the two.

This time around the abbeys and the factories should stand equally as inspirations and warnings.

Their makers’ inventiveness and determination have left us a rich seam of narrative capital. And for all the current Western angst over the rise of Chinese manufacturing, the Victorians were nothing if not outward-looking. Leeds’ engineers willingly gave a leg-up to Germany’s Krupp brothers and motorcycle pioneer Gottlieb Daimler.

Yet the overbearing influence and precipitous declines of monasteries and mills should make us wary of future aggrandisements. Want to know how that last bit pans out? Please check back on this blog in August 2211.

Thanks to Russell, Toby, Greg and everyone else who made Laptops and Looms happen. And thanks to you, dear reader, for making it to the end of this ramble. As a reward, check out Paul Miller’s proper take-out from Laptops and Looms. He has a numbered list and everything.