Technology enables variation

HT to Emma Bearman for tweeting me this Imperica article on Cedric Price.

It’s so important to see change as a thing people demand of technology, not, as often framed, the other way round.

“Technology enables variation” – that’s basically what I meant in appropriating John Ruskin’s term “changeful.”

dConstruct 2013: “It’s the Future. Take it.”

It puzzles me that technology so easily becomes the dominant metaphor for explaining society, and not the other way round. “Self-organise like nanobots into the middle,” exhorts dConstruct host Jeremy Keith as we assemble for the afternoon session at the Brighton Dome. We shuffle obligingly to make room for the latecomers, because everyone here accepts without question that nanobots really do self-organise, even if they’re so tiny we can’t see them with our puny, unaugmented eyes.

“It’s the Future. Take it.” Dan Williams mocks strident techno-determinism and refuses to take anything at face value: “I find the concept of wonder to be problematic.” Even Wenlock, the Olympic Mascot, conceals in plain sight a sinister surveillance camera eye, homage perhaps to London’s insouciant acceptance of closed-circuit television. Maybe we should “take it” like the CCTV filmmakers whose manifesto includes the use of subject access requests to wrest footage of themselves from surveillance authorities unaware of their role in an art phenomenon.

Other speakers also touched on this theme of acceptance – the ease with which we come to terms with new tools in the environment and extensions of the physical and mental self.

For cyborg anthropologist Amber Case “design completely counts.” Just contrast reactions to the in-your-face Google Glass and the “calm”, unobtrusive Memoto Lifelogging Camera. I love the history lesson too, starting with Steve Mann’s 40lbs of hacked-together heads-up-display rig from 1981. This stuff is shape-shifting fast, from the 1950s mainframe to the “bigger on the inside”, Mary Poppins smartphones we’ve so readily come to rely on as extensions of the mental self.

Digital designer Luke Wroblewski seems more matter-of-factly interested in the quantity of change than in its qualitative implications. Designers who have struggled to cope with just one new interface, touch, now face up to 13 distinct input types. Luke’s our tour guide to a dizzying variety of input methods – each with its own quirks and affordances – from 9-axis motion orientation sensing to Samsung’s Smart Stay gaze detection to Siri’s role as a whole other “parallel interface layer”. No wonder, I reckon, that minimal “flat UI” is the order of the day. What with all these new interactions to figure out, designers simply lack the time and energy to spend on surface decoration.

Simone Rebaudengo imaginatively plays out the internet of things. He’s against a utilitarian future, and for one in which objects tease their way into their users’ affections. “Rather than demonstrating their buying power, people have to prove their keeping power.” He imagines a world in which toasters experience anxiety and addiction. People apply to look after them (though they can never be owned, only hosted) by answering questions of interest to the toasters. Hosts throw parties with copious sliced bread to make their toasters feel wanted. No, really. Simone has a unique and playful take on the service-dominant world. (I just wish he would stop calling things “products”. It’s so last century.)

However, conflict and repression are always nearby.

Nicole Sullivan presents a taxonomy of internet trolls: the jealous, the grammar Nazi, the biased, and the scary. Women in tech experience trolling far more and far worse than men. And we all need to challenge our biases. Fortunately there’s a handy online tool for that.

After watching ‘Hackers’ and ‘Ghost in the Shell’ at a formative age, Keren Elazari makes a passionate defence of the hacker, tracing a line from Guy Fawkes through V for Vendetta to the masked legion of Anonymous. Quoting Oscar Wilde: “Man is least himself when he talks in his own person. Give him a mask and he will tell you the truth.”

Pinboard founder Maciej Cegłowski (stand-out phrase: “social is not syrup”) voices admiration for the often-derided fan-fiction community. Fans fight censorship, defend privacy and improve our culture. They have also developed elaborate tagging systems, and when alienated, like so many of us, by a Delicious re-design, they created a 52-page-long Google Doc of Pinboard feature requests. “It was almost noon when Pinboard stumbled into the office, eyes bleary. His shirt, Delicious noted, was buttoned crooked.”

Visibility is a central concern of our optically-obsessed culture. Much conflict arises from our suspicion of hidden biases and agendas, and our struggle to reveal them. Dan: “Every time we put software into objects they behave in ways that aren’t visible.” People who neglect to read the press releases of bin manufacturers may have missed the appearance on City of London streets of MAC address-snooping litter bins. Fortunately we have James Bridle to war-chalk them and Tom Taylor to consider stuffing them with rapidly changing random MAC address junk.

Amber wants to render the visible invisible – like Steve Mann’s “diminished reality” billboard-cancelling eyewear – and to make the invisible visible, by exposing un-noticed behaviours of smart objects. There can be unintended consequences in the human world, such as a touching conversation between student and construction worker sparked by Amber’s inadvertent placing of a target for GPS game MapAttack in the middle of a building site.

Making the invisible visible is what Timo Arnall’s celebrated ‘Immaterials’ films are all about. I’d seen them online, of course, but during the dConstruct lunch break I popped into the Lighthouse, where they’re beautifully displayed in the gallery setting they deserve. Dan talks of Buckminster Fuller “creating solutions where the problem isn’t quite ready to be solved”. Which is exactly how I feel re-watching Timo’s 2009 work on RFID. Creatives and “critical engineers” see this stuff in many more dimensions than the mainstream imagines possible.

Not just seeing but hearing. Robot musician and sound historian Sarah Angliss tells of instruments that died out – the Serpent, the Giraffe Piano, the castrato’s voice – and of the way we’ve become accustomed to things our ancestors would have considered uncanny, unheimlich. Feel the fear induced by massive infrasonic church organ pipes. Look at a photo of people hearing a phonograph for the first time. Listen to Florence Nightingale’s voice recorded, musing about mortality.

And yet, towards the end of the day, something unexpected happens that makes me optimistic about our present condition. Dan Williams shows ‘The Conjuror’ by magician-turned-cinematographer Georges Méliès – he of Scorsese’s ‘Hugo’ – performing disappearing tricks on the silver screen. We all know exactly how they’re done. They’d be trivial to recreate in iMovie. In spite of this we delight and laugh together at the tricks, as if the film had been made only yesterday. This stuff has been the future for a long time now, and we seem to be taking it quite well.

Thanks to all the speakers, organisers and volunteers. dConstruct was brilliant as ever.

A {$arbitrary_disruptive_technology} In Every Home

The fantastic culmination of James Burke’s talk at dConstruct last week set me thinking about a misleading trope that seems to recur with regularity in our discourse about technology.

Through his 1970s TV series, James was a childhood hero of mine. I wrote about his talk in my summary of the event, and thanks to the generous and well-organised dConstruct team you can also listen to the whole thing online.

With a series of stories, James showed how chance connections have led to important new discoveries and paradigm shifts – how, for example, a wrecked ship gave rise to the invention of toilet roll. So far, so serendipitous.

But then he set off on a flight of fancy that I found harder to follow, on the implications of nanotechnologies still gestating in the R&D labs. How this stuff would transform the world in the next 30 to 40 years! Not, thankfully, with a Prince Charles grey goo apocalypse but with a triumph of anarchist equilibrium.

How would the Authorities cope when their subjects no longer needed them to arbitrate Solomon-like over scarce resources? How would society be structured in a new world of abundant everything (except maybe mud, apparently the basic element of nanoconstruction)? How would Everything be changed by the arrival of A Nanotechnology Factory In Every Home?

A {$arbitrary_disruptive_technology} In Every Home!

I wondered what other innovations have held such promise. Cue Google Book Search, where among the random pickings I find:

  • 1984: “modem in every home”
  • 1978: “robot in every home”
  • 1976: “microprocessor in every home”
  • 1975: “wireless telegraphy in every home”
  • 1943: “television in every home”
  • 1937: “telephone in every home”
  • 1915: “water supply in every home”
  • 1908: “sewing machine in every home”
  • 1900: “piano and good pictures in every home”

And some of the most popular charted with the wonderful Ngram Viewer:

It seems that whenever a transformative technology comes along there are some who dare to dream of its widespread adoption. On paper of course, they are right. I live in a home with all the above (though we in the developed world can all too easily overlook the 780 million people who still rely on unsafe water supplies).

Yet the focus on the domestic obscures the fact that all these technologies and resources are still employed as much, if not more, in our public spaces and workplaces as in our private homes.

  • A fountain in every town square
  • A screen at every bus stop
  • A server in every server farm
  • A robot in every loading bay
  • A sewing machine in every sweat shop

So why the allure of a domestic context?

200 years ago the Luddites found themselves, with good reason, resisting the wrenching of textile trades out of their cottages and into the factories. They responded by machine breaking, not machine making.

Now, however, we look to technology to redress the balance in the other direction. By limboing under a bar of low price and simple operation, goes the narrative, each new technology will find its place in every home, thus setting people free from the tyranny of mass production.

Except that’s not how it really works out. Even for the cheap and plentiful, large-scale industrialisation trumps cosy domestication.

The printing press managed to change society drastically between 1500 and 1800 without the need to deliver hot metal to the home. One in every town appears to have been plenty disruptive enough. And while computer-connected home printers have been a reality for decades, the use cases for large-scale industrial printing continue to expand.

The Computer In Every Home was a vision held early on by the pioneers of the Homebrew Computer Club. But as a by-product of ushering in a new era of small-scale tinkering, homebrew hackers Jobs and Wozniak also happened to grow the most valuable single public company of all time!

And while I write this post in a living room stuffed with processing power and data storage, the services that I value most run in and from the network – Gmail, Facebook, WordPress.com, Dropbox – not so much a computer in every home as a home in every computer. What’s more, for this convenience, it appears people are prepared to put their faith in the most unaccountable, parvenu providers.

The same may be true of the 3d printer, current poster child of post-industrialism. The sector is within spitting distance of sub-$1000 desktop units, but those are unlikely to prove the most popular or productive way to disperse the benefits of this new technology. Far simpler to pack rows of bigger units into factories where they can be more easily serviced and efficiently employed round the clock.

So if the alchemy of nanotechnology does come to pass (and I’ll be stocking up on Maldon mud just in case) then – like it or not – it seems as likely to be a centralised and centralising force as a decentralising one.

Or am I missing something? Economists, historians of science, help me out, please.

dConstruct threads: Arrogance, uncertainty and the interconnectedness of (nearly) all things

The web is 21, says Ben Hammersley, it can now legally drink in America. And yet, as it strides out into young adulthood, it has much to learn. At dConstruct we hear some of those lessons – ones about humility, unpredictability and the self-appointed tech community’s responsibilities to the rest of humankind.

I agree with Ben when he advocates a layered approach to the web and its next, next, next, larger contexts – the single user, groups of users, society and the world at large. “Make the world more interconnected, more human, more passionate, more understanding.”

“Don’t become enamoured of the tools,” he urges. Think of the people looking at the painting, not the paintbrushes and the pigments.

And then he throws it all away with a breath-taking streak of arrogance: “How do we as a community [of practitioners] decide what to do?” To which the world might legitimately respond: who gave you the authority to decide?

And then comes the most arrogant over-claim of all: “We are the first generation in history to experience exponential change!” Exponential change, don’t get me started on exponential change. That was last year’s dConstruct-inspired strop.

Jenn Lukas shows a little more self-awareness. Teaching people to code is a Good Thing. That much is motherhood and Raspberry Pi. And as she digs deeper into the plethora of resources now coming online, she takes a balanced view.

Some learn-to-code evangelists have taken the time to swot up on pedagogic principles that traditional teachers have known all along, like the one that says a problem-based learning approach built on students’ existing motivations is more likely to be retained. Others simply dump their knowledge on an unsuspecting world in a naive splurge of information-based learning. “It’s a tool seeking a problem,” says Jenn.

It strikes me that the learn-to-code advocates who bother to Do It Right could be at the vanguard of the new disruptive wave in education. The approaches they pioneer in online code classes may easily be extended to other domains of learning.

What I missed in Jenn’s talk was a rationale for why learning to code might be important, besides solving specific personal pain points or seeking to earn a living as a developer. For me learning at least the rudiments of computer science is a prerequisite for empowered citizens in the 21st Century. I want my children to grow up as active masters, not passive recipients of information technology. Also, they should know how to make a rubber band jump between their fingers.

Jason Scott puts those pesky twenty-one-at-heart Facebook developers in their place with his talk on digital preservation. “We won! The jocks are checking their sports scores on computers, but you dragged in a lot of human lives.” Facebook is “the number one place that humans on Earth store our histories. But there are no controls, no way of knowing what they’re up to.”

One day, the law will catch up with the web and mandate backups and data export as standard. “Pivoting” will not absolve businesses of commitments they made to customers under previous, abandoned business models. Until then we can simply implore those responsible to be, well, responsible, with people’s personal data. Or stage daring multi-terabyte rescue missions when companies that should know better shut down Geocities, MobileMe and Muammar Gaddafi’s personal website.

I also loved the concept of “sideways value,” the unexpected things you learn by zooming in on high-def scans of old pictures, or sifting through data that people uploaded for quite different purposes.

Revelling in the unexpected was a big theme of Ariel Waldman’s talk about science and hacking. Hacker spaces, like black holes, have had a bad press but it turns out they’re really cool. Both suck matter in and spit it out as new stuff, creating as much as consuming. In Ariel’s world of delicious uncertainty, satellites inspire new football designs, citizens pitch for experiments to send into space, and algorithmic beard detection turns out to be good for spotting cosmic rays in a cloud chamber.

Plenty of sideways value too in the sheer joy of making stuff. It came across viscerally in Seb Lee-Delisle’s demo of glowsticks and computer vision and fireworks and kittens on conveyor belts. Then intellectually in Tom Armitage’s thoughtful, beautiful reflection on the value of toys, toying and toy making. Tom is the master of the understated, profound observation, such as noticing the hand movement he makes when talking about toys: it’s small, fiddling, taking stuff apart and putting it back together to see what happens. Only by making stuff and playing with it can we really understand and learn. What does it feel like to interact with time-lapse photography, or to follow your past self on Foursquare? Make it, play with it, and see.

Tom also touches on connections between stuff, with Twitter as a supreme platform for linking one thing – Tower Bridge opening times – with another – the web. Thereafter new uses emerge. Expats seek the comfort of a distant but familiar landmark while taxi drivers consult it to route round the problem of a communications link that’s always going up and down.

While other presenters tackle big picture subjects, Scott Jenson’s talk is the most UX-ey of the day but none the worse for it. Every word rang true to me. In my time at Orange, I frequently found myself pushing back against the knee-jerk request for an app. If only, as Scott says, we could strip away the “thin crust of effort” that comes as standard with native apps, then we could empower users with a more natural experience of “discover, use, forget”. Instead with siloed apps we spend time “gardening our phones.” I glance down at the status bar of my needy Android which is currently pestering me for 28 different app updates.

All this becomes even more pressing when we consider the coming plethora of smart devices that use GPS, wifi, Bluetooth and NFC to link to mobile platforms. Before we even start trying to chain together complex sequences of connected services to second-guess the user’s intent, it makes eminent sense to take Scott’s approach and solve the simpler problem of discovery. Let devices detect their neighbours and expose simple functionality through HTML standards-based interfaces. It may be a tough challenge to liberate such interactivity, but it will be worth it. If smart toasters are mundane, there’s all the more reason for them to work elegantly and without making a fuss.

James Burke takes the interconnection theme to a whole new level. As a child I loved his pacey, unorthodox TV histories of science. Looking back, I think that’s where my own fascination with the Age of Enlightenment was first kindled. Now he seeks to inspire a new generation of school children with exercises in interdisciplinary rounders. He tickles my Ruskinian sensibilities with the suggestion that “focus may turn out to be what the machine is best for, and a waste of human brainpower”.

Only connect. “It is now much easier for noodlers to be exposed to other noodlers with explosive effects.” Children should be schooled in skipping from chewing gum to Isaac Newton in six simple steps, or Mozart to the helicopter in 10. Two hours of daily compulsory wikiracing for every Key Stage 2 pupil, say I.

Sadly James ends the day with a classic “never meet your heroes” moment. Having reeled me in with the unpredictable, wiggly “ripple effects” of innovation, he proceeds to lose me completely at the end of a 40-year-long straight line to a world where autonomous self-copying nanotechnology has brought about an abundant, anarchic equilibrium. It is, I suppose, one possible path, but how a social historian of science can jump from so delightful an account of past serendipities to such a techno-determinist vision of the future turned out to be, for me, the biggest mystery of the day.

As ever, I had a great time at dConstruct, saw some old friends and great talks. Thanks to Jeremy Keith and everyone who made it happen. I’m already looking forward to next year.

The Dissolution of the Factories, or Lines Composed a Few Days After Laptops and Looms

In the corner of an attic room in one of Britain’s oldest factories a small group are engaged in the assembly of a MakerBot Thing-O-Matic. They – it – all of us – are there for Laptops and Looms, a gathering of people whose crafts cross the warp of the digital networked world with the weft of making and holding real stuff.

It’s a privilege to be here. Projects are shown, stories shared, frustrations vented. This is the place to be if you’ve ever wondered how to:

  • get funding for projects not considered “digital enough”
  • break out from the category of hand-craft without entering the globalised game of mass-scale manufacture
  • create a connected object that will still be beautiful when the Internet is switched off
  • avoid queuing at the Post Office
  • make a local car.

The inspired move of holding Laptops and Looms in a world heritage site dares us to draw parallels with the makers, hackers and inventors of the past. We are at once nostalgic for the noble, human-scale labour of the weaver’s cottage and awestruck by the all-consuming manufactories that supplanted it.

The nearby city of Derby has just hit the reset button on its Silk Mill industrial museum, mothballed for two years while they work out what to do with it. Rolls-Royce aero engines rub shoulders with Midland Railway memorabilia on the site of a silk mill with a claim to be the world’s first factory.

Like Derby itself, the museum needs to find a way to build upon these layers of history, or be crushed by the weight of them. Water wheels, millstones, silk frames, steam locomotives, jet engines – they all go round in circles.

Skimming stones on the river at Matlock Bath, someone mentions how the beautiful Derwent Valley reminds him of Tintern Abbey. And I realise that to understand where we are now, 30 years on from the last great Tory recession, we need to twist the dial back another whole turn, to the age of the English monasteries.

Abbeys such as Fountains, Rievaulx and Kirkstall began humbly enough, as offshoots of the French Cistercian movement. Their needs were simple: tranquility, running water and some land for agriculture. But over time they grew, soaring higher, sucking in labour, expanding their estates, diversifying their industries and dominating their localities. Imagine the noise, imagine the power! Until a greedy monarch who would brook no opposition made a grab for their riches and sent the monks packing.

England’s monasteries were washed away in a freshwater confluence of printing presses and Protestant ideology. The clergy who had used the Latin tongue to obscure the word of God were cut down to size, disintermediated by the Bible in English. They still had a role, but no longer a monopoly on the invention of new meanings.

In the shadow of the Gothic ruins, sometimes literally from their rubble, arose smaller vernacular working class dwellings, cottage industries. Among the cottage-dwellers’ most prized possessions was the family Bible, not as grand and illuminated as the monks’ Latin one, but there, accessible to anyone who could read.

To our modern eyes, there was much wrong with the cottage industries: patriarchal, piecework-driven and still at the mercy of merchants higher up the pyramid. But economically this seems closest to the model to which some laptops-and-loomers aspire: (dread phrase) a “lifestyle business”, bigger than a hobby but smaller than a factory.

It was 200 years before Britain’s gorges would see the rise of new monsters: water wheels and spinning frames and looms and five-storey factories. Something in the cottage industries had got out of kilter. With the invention of the flying shuttle, home-spinning could no longer feed the weaver’s demand for thread. The forces of industrialisation seemed unstoppable, pressed home by a new ideology, Adam Smith’s invisible hand and the productivity gains from de-humanising division of labour. The pattern was repeated elsewhere in Europe with local variations: Revolutionary France threw out its monks and turned the Abbey of Fontenay directly into a paper-mill.

By then the ruined abbeys had lost their admonishing power; some became romantic ornaments in the faux-wild gardens of the aristocracy. Gothic became the go-to architectural style of the sentimental idealist. I’m still a sucker for it today.

There were warnings, of course. Just six years after William Wordsworth’s “Lines Composed a Few Miles above Tintern Abbey”  we got William Blake’s “And did those feet in ancient time”. But still the dark Satanic mills grew. They outgrew the valleys and by means of canals and steam engines dispensed with the need for water power. They swept aside the Arts and Crafts objections of Ruskin and Morris, who fought in vain to revive a labour theory of value.

Until one day some time in the second half of the 20th Century, the tide turned. And here we are today picking our way through the rockpools of the anthropocene for glinting sea-glass, smooth abraded shards of blue pottery and rounded red brick stones. Look closely in those rockpools – the railway arches, hidden yards and edge-of-town industrial parks – and you’ll see that Britain is still teeming with productive life, but on a smaller scale, more niche than before. No longer the workshop of the world.

What comes after the dissolution of Britain’s factories?

That 3d printer in the corner could hold some answers. Despite its current immaturity, 3d printing seems an emblematic technology – perhaps as powerful as the vernacular Bible. It may never be the cheapest way to make stuff, nor turn out the finest work. But it speaks powerfully of the democratisation of making, now within reach of anyone who can use a graphics program or write a little code. Factories still have a role, but no longer a monopoly on the invention of meanings.

These things move slowly. A straw poll in the pub reveals that many of us already come from the second generation of geeks in our families. Some of us are raising the third. A child who grows up with a laptop and a 3d printer knows she can make a future spinning software, hardware, and the services that bind the two.

This time around the abbeys and the factories should stand equally as inspirations and warnings.

Their makers’ inventiveness and determination have left us a rich seam of narrative capital. And for all the current Western angst over the rise of Chinese manufacturing, the Victorians were nothing if not outward-looking. Leeds’ engineers willingly gave a leg-up to Germany’s Krupp brothers and motorcycle pioneer Gottlieb Daimler.

Yet the overbearing influence and precipitous declines of monasteries and mills should make us wary of future aggrandisements. Want to know how that last bit pans out? Please check back on this blog in August 2211.

Thanks to Russell, Toby, Greg and everyone else who made Laptops and Looms happen. And thanks to you, dear reader, for making it to the end of this ramble. As a reward, check out Paul Miller’s proper take-out from Laptops and Looms. He has a numbered list and everything.

And te tide and te time þat tu iboren were, schal beon iblescet

The depths of winter, two weeks off to take stock of where we are and where we’re going, a chance to catch up with family and friends. We travelled through blizzards, cooked and ate good food, lit fires, drank wine, fiddled with MP3 playlists, time-shifted TV, and made one (thankfully minor) visit to Accident and Emergency. We – friends, family, all – talked about our lives in early Twenteenage Britain: public sector insecurity, the choice of good schools, distant relatives, our new phones and other devices. The confection that follows is made from the leftovers.

Our current preoccupations seem to boil down to two resources, both of which are unequally distributed within families, communities, our nation and world at large. To understand these resources is to see where opportunities and conflicts lie, to look for unlikely allies and unexpected lines of agreement.

The first of the two resources is disposable time – the uncommitted minutes and hours in which we make our own choices.

The clichéd “cash rich, time poor” professional classes are not alone in their want of this resource. The pressure on the “squeezed middle” is as much a temporal crunch as a financial one. As Ed Miliband said: “If you are holding down two jobs, working fourteen hour days, worrying about childcare, anxious about elderly relatives, how can you find the time for anything else? … Until we address the conditions that mean that people’s lives are dominated by long hours, then the big society will always remain a fiction.”

Time wealth ebbs and flows as we move through life-stages, and is at least partially subjective – there are huge variations in people’s estimations of their own and others’ busy-ness. But, whether acknowledged or not, the debate over fairness and equality – over social security, pensions and the division of unpaid labour within families – must be as much about time and energy as it is about money.

The second resource, sometimes a skill, but as often a learned attitude, is tech mastery, a belief that computers, the internet and mobile phones exist to help us achieve our goals, not to enslave or bewilder us.

Tech mastery is the toolkit to take control in the modern world, to “program or be programmed.” Good technology products and services increase the mastery of their users; poor ones sap it. That tech mastery tends to rise and fall with age, and to be more concentrated among men than women, says more about the biases of tech implementation than about the innate abilities or preferences of those demographic groups.

I believe 2011 will be a year when people get angry about bad usability and the failure of the new media to meet the needs of all but a narrow section of society. As the web becomes more mobile and more, genuinely, worldwide, it has to do better at empowering all its users, young and old, rich and poor, not all of whom have the latest device designed in California.

The interactions between disposable time and tech mastery reveal (via sweeping generalisations, I know) some interesting gulfs in understanding to be overcome…

When free tech culture meets the law it’s more than a matter of understanding the “what.” There’s also the “why”.

One person’s innocent checking of their mobile phone is another’s gross intrusion into quality time.

We also find some opportunities…

What services could bridge the gaps between the generations and social groups by drawing on what they have in common?

How could two groups of people make the most of their complementary resources?

To square this circle, we need to pay attention to the different characteristics demanded at each point, and find ways to spread the wealth more equally. Something like…

Right now, at the start of 2011, I have many more questions than answers about disposable time and tech mastery inequalities. But I reckon we’ll see a lot more of these themes before the year is out.

On the way to dConstruct: a social constructionist thought for the day

A desire to put some theoretical Acrow props under my vague unease with the determinist narrative of so much of our technology discourse has led me to the writing of the French anthropologist Bruno Latour. His work on the social construction of science, an ethnography of the R&D lab, has a special resonance for me, a humanities graduate who finds himself colleague to a legion of French engineers.

I’m stumbling intermittently through Catherine Porter’s translation of Latour’s 1991 work “We Have Never Been Modern”, as a prelude to David Edgerton’s “The Shock of the Old”. At times it feels a bit like eating up the broccoli before allowing myself dessert, but rich, buttery morsels like the following make it all worthwhile.

The story so far.

Latour argues that modernity, from Civil War England onwards, managed its contradictions by placing boundaries between nature and society. Thomas Hobbes, author of Leviathan, was taken up as a founder of political philosophy while Robert Boyle, he of the air pumps, was channelled as a natural philosopher and pioneer of scientific method. In truth both men speculated on both politics and science, but this inconsistency was whitewashed by their modern successors seeking only the pure narrative of one or the other.

And so we are today in a world still riven by CP Snow’s two cultures, where right-wing bloggers can grab acres of media coverage against climate scientists by finding just the tiniest trace of political “contamination” on the lab’s email servers.

But I wonder if the disconnection and reconnection of nature and society is also a useful way to understand some of the ideas I’m expecting to hear today at dConstruct, a conference at the cutting edge of technology and media convergence.

The 19 years since Latour published “Nous n’avons jamais été moderne” roughly spans my working life so far. I’ve witnessed the amazing things that can happen when you expose the humanities-soaked world of newspapers, books and TV to the attentions of software engineers and computer scientists. The results have been delightful and depressing, often both at the same time. Who knew back then that floaty copywriters would have to cohabit – for better or for worse – with the number-crunchers of search engine optimisation?

This fusing of the worlds of media and technology is only just beginning, and the next step is evident in the hand-held, touch-sensitive, context-aware marvel of creation that is the latest smartphone.

Hitherto we have seen the world of human-created information, the texts of the ancients and the tussles of our own times, through the pure window of the newspaper, the book, the TV, the PC screen. But the smartphone is a game-changer, like Robert Boyle’s air pump. With its bundle of sensors, of location, of proximity, and in the future no doubt heat, light, pressure and humidity, it becomes a mini-lab through which we measure our world as we interact with it.

All manner of things could be possible once these facts of nature start to mix with the artifacts of society. My Foursquare checkins form a pattern of places created by me, joined with those of my friends to co-create something bigger and more valuable. My view of reality through the camera of the phone can be augmented with information. We will all be the scientists, as well as the political commentators, of our own lives. This is the role of naturalism in my “Mobile Gothic” meander.

To recycle Latour on Robert Boyle’s account of his air pump experiments:

“Here in Boyle’s text we witness the intervention of a new actor recognised by the new [modern] Constitution: inert bodies, incapable of will and bias but capable of showing, signing, writing and scribbling on laboratory instruments before trustworthy witnesses. These nonhumans, lacking souls but endowed with meaning, are even more reliable than ordinary mortals, to whom will is attributed but who lack the capacity to indicate phenomena in a reliable way. According to the Constitution, in case of doubt, humans are better off appealing to nonhumans. Endowed with their new semiotic powers, the latter contribute to a new form of text, the experimental science article, a hybrid of the age-old style of biblical exegesis – which had previously been applied only to the Scriptures and classical texts – and the new instrument that produces new inscriptions. From this point on, witnesses will pursue their discussions in its enclosed space, discussions about the meaningful behaviour of nonhumans. The old hermeneutics will persist, but it will add to its parchments the shaky signature of scientific instruments.”

I don’t yet know where I stand in this picture. Am I the experimenter, his audience, or the chick in the jar?

An Experiment on a Bird in an Air Pump by Joseph Wright of Derby, 1768

Ten years on, can we stop worrying now?

Ten years ago this month the Sunday Times published an article by Douglas Adams called “How to Stop Worrying and Learn to Love the Internet”. You can read it here.

Some starting observations:

  1. It’s a tragedy that Adams died, aged 49, in 2001, depriving us of more great literature in the vein of the Hitchhiker’s Guide, of genuinely innovative new media projects such as H2G2, and of the witty, insightful commentary we find in the Sunday Times column.
  2. Adams’ insights have stood the test of time. Everything he wrote at the end of the Nineties stands true as we near the start of the Tens.
  3. We still haven’t stopped worrying.

Adams from 1999:

… there’s the peculiar way in which certain BBC presenters and journalists (yes, Humphrys Snr., I’m looking at you) pronounce internet addresses. It goes ‘wwwDOT … bbc DOT… co DOT… uk SLASH… today SLASH…’ etc., and carries the implication that they have no idea what any of this new-fangled stuff is about, but that you lot out there will probably know what it means.

I suppose earlier generations had to sit through all this huffing and puffing with the invention of television, the phone, cinema, radio, the car, the bicycle, printing, the wheel and so on…

2009: John Humphrys is still huffing and puffing [Update 3/9/09 - further proof provided!], and…

you would think we would learn the way these things work, which is this:

1) everything that’s already in the world when you’re born is just normal;

2) anything that gets invented between then and before you turn thirty is incredibly exciting and creative and with any luck you can make a career out of it;

3) anything that gets invented after you’re thirty is against the natural order of things and the beginning of the end of civilisation as we know it until it’s been around for about ten years when it gradually turns out to be alright really.

Apply this list to movies, rock music, word processors and mobile phones to work out how old you are.

The moral panic continues, now transferred to social networking and camera phones.

And Douglas Adams hit the nail on the head in his taking to task of the term “interactive”:

the reason we suddenly need such a word is that during this century we have for the first time been dominated by non-interactive forms of entertainment: cinema, radio, recorded music and television. Before they came along all entertainment was interactive: theatre, music, sport – the performers and audience were there together, and even a respectfully silent audience exerted a powerful shaping presence on the unfolding of whatever drama they were there for. We didn’t need a special word for interactivity in the same way that we don’t (yet) need a special word for people with only one head.

The same fallacy persists, now transferred from the term “interactive” to “social”.

Ten years ago, Douglas Adams identified a few problems.

  • “Only a minute proportion of the world’s population is so far connected” – this one’s well on the way to being fixed, as much by the spread of internet-capable mobile devices as by desktop or laptop PCs.
  • It was still “technology,” defined as “‘stuff that doesn’t work yet.’ We no longer think of chairs as technology, we just think of them as chairs.” – has the internet in 2009 reached the same level of everyday acceptance as chairs? Almost, I think, though the legs still fall off with disappointing regularity.

The biggest problem, wrote Adams, is that “we are still the first generation of users, and for all that we may have invented the net, we still don’t really get it”. Invoking Steven Pinker’s The Language Instinct (read this too, if you haven’t already), he argued that it would take the next generation of children born into the world of the web to become really fluent. And for me that’s been the most amazing part. Reflecting the other day on Tom Armitage’s augmented reality post to the Schulze and Webb blog, I realised that I see that development in my own children’s engagement with technology.

  • At birth a child may assume that anything is possible: a handheld projector holds no special amazement for my three-year-old.
  • Through childhood we are trained, with toys among other things, to limit our expectations about how objects should behave. My six-year-old, who has been trained by the Wii, waves other remote controls about in a vain attempt to use gestures.
  • My nine-year-old, more worldly-wise, mocks him for it.

We arrive in the world Internet-enabled and AR-ready, it’s just that present-day technology beats it out of us. I work for the day when this is no longer the case.

Last words to Douglas Adams, as true today as in 1999:

Interactivity. Many-to-many communications. Pervasive networking. These are cumbersome new terms for elements in our lives so fundamental that, before we lost them, we didn’t even know to have names for them.

Update 3/9/09: Debate about Twitter on the Today programme, and Kevin Anderson takes up the theme.

Why I took part in Ada Lovelace Day

Last month’s Ada Lovelace Day, an international day of blogging to draw attention to women excelling in technology, was based on the insight that women need to see female role models more than men need to see male ones. I was pleased to help meet that need in a very small way, with a short post about the 18th-century venture capitalist Elizabeth Montagu.

At the time I tried to keep my personal pontification to a minimum. It’s hard for men to address this topic without sounding patronising. Anyway, on the day itself many others were making the case for higher visibility of women in technology far more eloquently than I ever can.

But this post from Paul Walsh, trailing a Techcrunch Europe panel on the subject of women and start-ups, prompted me to speak up more directly. In Paul’s opinion:

… the books of males vs females doesn’t need to be balanced in favour of more females. Why? Well, because there are plenty of females in tech and those that aren’t, don’t want to be. Ok, so there might be a small percent who would like to be in tech, but don’t make it. But can’t the same be said for any industry?

Are we trying to balance the books to encourage more males to become nurses?

To which I reply the following (originally posted as a comment on Paul’s blog):

Of course there are some very talented and successful women in our industry. That’s not in doubt. It’s not so much that there are too few women in the tech sectors as that there are too many men! Look around you at most tech conferences and you’ll see mainly male audiences listening mainly to other men.

This is not just bad because women are missing out on opportunities (though I believe some are) but also because, in the words of David Ogilvy, “diversity is the mother of invention.” We are all missing out on the creativity and customer-centricity that a more diverse culture would engender. Think, for example, of how long the games industry remained stuck on the demands of a small power user niche while the needs of the much bigger casual user segments went unmet. What other business opportunities might be there for the taking right now?

The nursing analogy is an interesting one. Why not doctors, one might ask, and how much by nature, how much by nurture? (And yes, I do think men should be encouraged to consider nursing as a career!) Either way, if, as I think [Paul is] suggesting, there are deeply engrained differences between men and women then there’s a clear imperative for us to capitalise on those differences.

That means taking active steps such as ensuring that girls can see strong female role models in our industry, as it seems they might in medicine. It means making work more compatible with family life (from which men also stand to benefit). It means changing our business culture so women’s voices can be heard, and their contributions recognised and rewarded equally with those of men.

The conversation will be more profitable as a result.

Sous les pavés, la plage

The payphone has bluescreened…

Payphone, London King's Cross

… the departure board has 404ed…

Departure board at Edgware Road Tube Station

… the giant TV screen is somebody’s Windows desktop…

Big screen, Millennium Square, Leeds

Pay no attention to that man behind the curtain!

Since posting my three broken technology pictures, I’ve been suffering the blogger’s equivalent of what the French call “l’esprit de l’escalier,” and for which German has the deeply satisfying word “Treppenwitz”.

If I’d paused just a little longer before hitting the publish button, I’d have added a discussion of the optimistic message contained in every one of those man-behind-the-curtain snapshots.

I’d have waxed lyrical on how in every case the authorities’ intentions to constrain processing power to a single task – the kiosk, the departure board, the TV screen – were subverted by its very malfunctioning, revealing the endless possibilities implicit in this most malleable and interconnected technology.

If only I’d waited a while, I’d have told the story of how my seven-year-old son, on seeing an unattended till terminal in Ikea, grabs the mouse and tries to log on to Adventure Quest.

I’d have invoked the situationist May 68 slogan “sous les pavés, la plage” – “beneath the paving stones, the beach” – which speaks of the unconstrained liberty that lurks just below the locked-down surface of our civilisation.

And I’d have ended in an overblown flourish and a bold font: beneath the pixels, the silicon!

Probably just as well I didn’t.