Future Human: Transparent Life
17th February 12
A version of this post originally appeared in the 16.02.12 edition of Campaign magazine.
Billed as a dive into the “rapid evolution of data visualisation tools”, last week’s ‘Future Human: Transparent Life’ could have lost its audience at ‘hello’. Data viz may have become a hot topic in recent years, but there was also plenty of healthy scepticism in the room about its publicity-hungry offspring, AR. Ah yes, Augmented Reality… which, until very recently, has had to work hard not to be dubbed Awkward Reality.
Yet a few minutes in, the event’s organiser and first speaker, the journalist Ben Beaumont-Thomas, had won the audience’s attention, wise-cracking his way through a history of the human motivations behind how we portray ourselves in public (the 1970s neatly summarised as a ‘me’ decade of solipsistic confusion; the 1990s as an ‘us’ decade, the start of social transmission and an accompanying loss of privacy). He then moved swiftly up to date, focusing on how we consciously and unconsciously allow increasing amounts of information about ourselves to be generated and left in the public domain: the ‘transparent life’ of the event’s title. And with that, the talk became less about bytes of visualised data and more about something both simpler and more profound: human identity and the blurring boundaries between our private and public selves.
Facebook’s Timeline as an online record of our life history and CCTV cameras (jacked into facial recognition technology) recording our every move offline are simply the start: what trade-offs are we prepared to make between protecting our privacy and freely accessing digital services that purport to improve society and/or our social lives? We hold onto just a few crumpled photographs of our grandparents today, but our own grandchildren, poor blighters, will be able to find out what we had for breakfast any day of any given year this millennium. Are we, as we’ve said here before, the last generation to care about what is ‘virtual’ versus what is ‘real’?
As the talk continued into a panel discussion with Keiichi Matsuda (designer and filmmaker behind the series Augmented (Hyper) Reality), Callum Rex Reid (CEO, Digicave) and Luke Robert Mason (Research Director, Philter Phactory), two quite different interpretations of a data-dominated future started to emerge.
The case for Dystopia
Beaumont-Thomas drew our attention to the subterranean layers of personal data used for commercial or security reasons, scraped from places you’d expect (social networks, mobile network operators) and places you might not (tollbooths, even keycard entry systems). Not to mention services like the “people search engine” Spokeo, a frankly creepy directory that displays any data it can find about an individual, including their estimated wealth, photos and contact details.
And if you needed more evidence that our public personas are increasingly not our own: we heard the story of the two British tourists summarily deported from America on arrival in Los Angeles a few weeks ago, having joked on Twitter about partying there (“@MelissaWalton free for a quick gossip/prep before I go and destroy America?” went one tweet). The couple also had their luggage searched for shovels by Homeland Security agents in response to another tweet about “diggin’ Marilyn Monroe up” (a joke from Family Guy, in case you were wondering).
Keyword scraping for security purposes has become the norm, and while we may look on with a mixture of amusement and despair at the tourists’ judgement, or at the common-sense radar of the agents in this case, questions still arise: where should the line be drawn between public good and personal privacy? Have we lost all reasonable right to privacy once something appears outside our own homes or, in fact, outside our own heads? And who’s watching the watchmen?
The latter gets more interesting when faced with the very early prototype bionic contact lenses recently announced by the University of Washington in collaboration with Aalto University in Finland. With wiring a thousandth the width of a human hair, these Terminator-style accessories promise, in theory, a route to a better augmented reality user experience, right before our very eyes. And if the thought of “emails transmitted straight to your retinas” sounds like your idea of hell, it only gets more controversial when we consider the opportunity, so often explored in near-future fiction by the authors William Gibson and Cory Doctorow, to overlay data about the person standing in front of you. Will anyone talk to you if they know your credit rating, or can evaluate your social standing just by looking at you? As Beaumont-Thomas put it, will our “sexy mystique evaporate”? Well, in the past few months we’ve inched closer to that reality.
Stepping closer to adland for a moment, you may have seen filmmaker Keiichi Matsuda’s short film “Domestic Robocop”, which portrays a future world in which a plethora of commercial messages are projected into ambient space. Initially visually mind-blowing, it’s an extreme version of a data-dominated world that would be unbearably invasive (witness Charlie Brooker’s Black Mirror episode, “Fifteen Million Merits”, for a similar experience).
The case for Utopia
It’s Matsuda’s second film in the series, “Augmented City”, however, that starts to hint at the positive implications of what’s possible with advanced access to data. Here the user is able to overlay their own data in public, customising their own identity. You’re able to switch through different layers of the augmented space on your terms and at your own pace (“you can’t eat the whole thing at once”). He imagines a creative synthesis of data when a group of people come together: a unique environment everyone there has contributed to. As Matsuda said simply: “This real-time 3D future can give us agency over the city we live in.”
Callum Rex Reid described 3D scanning technology so life-like it could enable a collection like the British Museum’s, say, to be distributed to schools in 3D, and wearable technologies like a next-gen Nike+ that take into account environmental factors such as terrain. He stressed that this is absolutely about creativity, citing the Conflux Festival, which filled MoMA NY with new work by superimposing an augmented layer of imagery on the existing collection, and the ‘sculptural photography’ created when a high-end fashion photographer recently put his company’s tools to creative use.
Flipping several years forward, the panel discussed the unimaginably rich data future historians will have at their fingertips (imagine looking back from 2050 at something like Street View today) compared with the scant data we have on previous centuries. And the questions that arise when we’re faced with what William Gibson describes as our generation’s “consensual hallucination” about what we consume or own: who inherits your iTunes when you die? If you can capture virtually the things we don’t need to touch (e.g. a picture on a wall), do you need to own them? Does more value get ascribed to things that are physically there?
Luke Robert Mason talked about his company’s Weavrs (we’re fans of Weavrs around here: here are one or two we’re particularly fond of), bots that use web APIs to find and remix social data, effectively becoming the digital alter egos or research agents of their creators. If that weren’t enough, he recounted how they’d placed Weavrs modelled on Alice in Wonderland characters in Berlin, brought them together to enact a story (“digital replicants that ran through the streets virtually”) and allowed users to interact with them via the AR app Layar.
Back to life, back to identity
This brought us back to an exploration of identity, specifically our human right to anonymity versus our sociocultural need for authenticity. Beaumont-Thomas was quick to point out that, despite the importance of appearing authentic to one another, the “personas we create are an easily consumable version of ourselves, not the crazy thoughts in our head”.
Are data tools actually distorting human identity online, or offering a better representation of it? Google and Facebook would argue the latter: neither allows a user to participate on its social platform unless they are open about precisely who they are. In their view, this improves the search experience for users, ensures fair and accurate attribution of credit to a source, and helps deter trolls and spambots, whilst also, of course, protecting their own revenue streams, which rely on ‘you being you’ so that you can be effectively targeted and reached with advertising. 4chan’s founder, Christopher Poole, puts forward a counter-argument:
“Google and Facebook sort of think of you as a mirror. That you have one reflection. And the reflection that you see in that mirror is what everyone else sees. Not true: people are multi-faceted, people are more like diamonds. Identity is very complex… we ought to allow that flexibility in a web product.”
Poole goes on to stress that you can incorporate something like Facebook Connect to authenticate a user, but that doing so doesn’t necessarily require a business to make a user post with a full name and a profile photo. As one of the speakers at Future Human put it: “Identity is not real names… it’s about accountability in a community, anonymous or not.”
Whatever happens, this stuff isn’t going away
As Matsuda put it, “Bridges between our online persona and physical self are going to happen, it’s just a matter of HOW it’s done and when.”
And boundaries continue to be pushed at the fringes – whether that’s biologists at Berkeley pioneering ways to read our minds or Anonymous’ latest release of a hacked call between the Met and the FBI – providing radical provocations around data and privacy that society will take some time to come to terms with.
“Everything is okay in the end. If it’s not okay, it’s not the end” (Anon)
So why shouldn’t we run for the hills? Well, it strikes us that we humans are programmed from birth to seek out choice and difference. We tend to self-regulate when backed into a corner… eventually. For most of us right now, our digital social experiences are governed by our own personal rules about how much we wish to reveal about ourselves, within a framework dictated to us by the existing mainstream social platforms. Platforms that are themselves, to be fair, in a constant state of evolution. Together, we’re going to have to deal with what we’ve got and slowly find new ways to create more sophisticated personas to suit us.
As Mason added in closing last Thursday night: “The tools we have today are not the tools we’ll have tomorrow – we’re naive if we think so.” To which Reid responded: “It’ll be our fault at the end of the day. There is no swooping external force, tearing at our freedom. [The good news is..] We’re extremely good at surviving.”
Ben Beaumont-Thomas’s Storify of the event here.
For more about Future Human and their events in London, check out their site here.
Tom Uglow and I will probably meander into this topic and more in our talk “Skynet vs Mad Max: Battle For The Future” at this year’s SXSW. We’re on 5–6pm, Sunday 11th March, in Ballroom A at the Intercontinental. If you’re in Austin, please do come and find us there. Cheers.