The thing we like most about Mary Meeker’s annual Internet Trends presentation is that it’s just packed with data. The charts are sometimes *too* intense, in fact, carrying too much data. But it’s always revealing, and usually inspiring. Because it’s fact, not fiction.
Slide 7 is especially impactful. I was born on the left-hand side of the chart, probably around when there were 5 million computing-capable units globally. On the right, just ten years from today, the forecast is for 10 billion+ units. Extraordinary.
Came across this today. Tweet-o-Meter (link) is the beta version of a platform created by University College London’s Centre for Advanced Spatial Analysis. The Tweet-o-Meter supposedly updates every ten seconds (not sure it quite does that right now), showing the number of tweets in each city per minute. The ambition is to log and analyze all geo-located tweets in these major cities. Once logged, they will be used to show Twitter activity over time and space. Various kinds of maps will be the main output. I imagine a variety of delicious visualizations will be forthcoming.
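For the technically curious, a per-city tweets-per-minute dial is, at heart, a rolling one-minute count refreshed on a short cycle. Here’s a minimal Python sketch of that logic; the `fetch_recent_geotagged_tweets` helper and the city bounding boxes are our own illustrative assumptions, not how CASA actually built it:

```python
import time
from collections import deque

# Illustrative (not official) bounding boxes: (west, south, east, north) in lon/lat.
CITY_BOXES = {
    "London":   (-0.51, 51.28, 0.33, 51.69),
    "New York": (-74.26, 40.48, -73.70, 40.92),
}

def fetch_recent_geotagged_tweets():
    """Assumed helper returning (timestamp, lon, lat) tuples for tweets seen since
    the last call; the real Tweet-o-Meter presumably sits on a Twitter geo feed."""
    return []

def city_for(lon, lat):
    for city, (w, s, e, n) in CITY_BOXES.items():
        if w <= lon <= e and s <= lat <= n:
            return city
    return None

recent = {city: deque() for city in CITY_BOXES}  # tweet timestamps from the last 60 seconds

while True:
    now = time.time()
    for ts, lon, lat in fetch_recent_geotagged_tweets():
        city = city_for(lon, lat)
        if city:
            recent[city].append(ts)
    for city, stamps in recent.items():
        while stamps and now - stamps[0] > 60:   # drop anything older than a minute
            stamps.popleft()
        print(f"{city}: {len(stamps)} tweets/min")
    time.sleep(10)  # refresh roughly every ten seconds, as the dials claim to
```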
We are possibly attracted partly by the simple analogue-feel, dial-based interface. But we’re also struck by yet another work-in-progress attempt to bring life to the data spawned by Twitter (see also Getting to Know Your Twitter Followers & Why that Matters from earlier this week).
And of course it also reminds us of the work by Google’s Aaron Koblin on visualizing SMS messages sent on New Year’s Eve in Amsterdam in 2007 (see below). We imagine that as Tweet-o-Meter moves forward through beta, they’ll need to figure out how to marry Koblin-esque visualizations to their gushing pipe of data. Bringing magic to the mayhem.
This is a good summary of some of the key shifts in music retail (although US-only data).
But what’s also really interesting is that it’s coming from a financial services company: mint.com
mint.com’s service – already brilliant on the web and in a very strong iPhone app – now seems to be extending into data visualization and cultural commentary.
I first came across this last year, and found it to be one of the best written and most insightful papers of the year.
At first glance, this year’s presentation, posted yesterday (20th Oct), looks like equally essential reading. See what you think.
We’re moderately obsessed with the world of data visualisation here at Labs for a number of reasons: the ability to generate fresh insight from extraordinarily complex data sets, the ability to trigger radical reappraisal of familiar problems, the ability to put consumers in control of the vast quantities of personal data they generate every day. Not to mention the extraordinary fusion of technology and creativity it represents.
We firmly believe that data visualisation has a wealth of exciting commercial applications, from communicating in new ways to developing new tools, apps and utilities for clients and consumers alike. So we’ve grown slightly frustrated by the rise of visualisations that are moderately pretty but add little in terms of real insight, utility or illumination.
We’re also, as we may have mentioned, big fans of Manuel Lima here at Labs. So we were intrigued to see that he has authored an “Information Visualisation Manifesto”, a provocative (but characteristically generous and nuanced) take on the future of data visualisation which tackles head on the thorny questions at the heart of this ever-expanding field:
- Art versus Science
- Intrigue versus Immediacy
- Aesthetics versus Apprehension
Manuel comes down firmly on the side of clarity of communication versus visualisation for visualisation’s sake, citing the discipline’s roots in the desire “to facilitate understanding and aid cognition” and a growing frustration with the “eye candy” approach to the craft. Many of his principles are rooted in this utilitarian approach, reading almost like a Bauhaus manifesto (and none the worse for that):
- Form follows Function
- Do not glorify Aesthetics
- Look for relevancy
- Aspire for Knowledge
It’s a bold, purist and punchy vision yet also acknowledges the power of narrative and the role of intrigue. Indeed the question of narrative seems to lie at the heart of this Manifesto; the need to pose a specific question of the data and to weave coherent themes and stories from it. These themes then drive the aesthetic approach. As Manuel puts it:
“Form doesn’t follow data. Data is incongruent by nature. Form follows a purpose, and in the case of Information Visualisation, Form follows Revelation”
This is perhaps the key distinction between Information Visualisation as defined here and what Manuel suggests we start thinking of as “Information Art”. Within this approach, artists will freely allow form to follow data, using the randomness this creates to add texture and interest. Take, for example, Aaron Koblin’s desire to embrace the randomness of a data set and indeed the richness and texture added to his famous Radiohead video by “interrupting the data”:
“I think it really gives character, because I think it’s really that kind of intricacy and detail that builds character and in a sense it’s the errors and flaws that make art”.
As ever, then, we need to return to objectives, to ask what we are trying to achieve:
- Do we want to educate around an issue, making complex questions simple?
- To shift perceptions and provoke a response?
- To offer a fresh perspective on an infrastructure question for our clients?
- To offer our consumers better comprehension and control of their behaviours?
Simply put, are we going to offer something that is either very, very useful or very, very beautiful? Either way, greater clarity of intent and greater discipline throughout the industry can only be an advantage in building credibility and engagement. Building that credibility is vital if data viz is going to become not just an entertaining diversion but a vital tool for navigating a world generating more and richer data by the second.
If what we are building is neither very beautiful nor very useful, to Manuel’s final point “Avoid Gratuitous visualisations”: “Simply conveying data in a visual form, without shedding light on the portrayed subject, or even making it more complex, can only be considered a failure”.
Or as William Morris put it: “Have nothing in your house that you do not know to be useful, or believe to be beautiful”.
Author: Jim Carroll, Chairman, BBH London
I recently attended an excellent Made by Many event hosted at BBH which featured a re-presentation by Manuel Lima of his 2009 TED talk on data visualisation. Manuel is the curator of visualcomplexity.com and is an eloquent, modest, charming pioneer in this fascinating field.
As a novice myself, I could not help wondering why we are all so immediately and instinctively attracted to the best of data visualisation. To start with, I’m sure there is some fundamental truth that for most of us data become meaningful only when we can see scale, change, patterns and relationships. Seeing is understanding.
It’s also very reassuring to discover that complex, seemingly chaotic data sets and networks can be expressed as elegant, colourful, ordered maps and models. Perhaps there’s something akin to what the Enlightenment scientists felt as every new discovery revealed the endless beauty of nature.
Indeed the best examples of data visualisation have their own aesthetic beauty. (I felt a nostalgic pang as I recalled time spent with spirograph in my bedroom as a child.)
That’s how Rupert Murdoch recently summed up the current relationship between online publishers and aggregators during a call with his shareholders.
He ended with a shot across the bow: “The current days of the Internet will soon be over.”
It’s about to get interesting.
Back when we were floating high inside the web 1.0 bubble, it became accepted as indisputable that online content was going to be free and advertising was going to pay for it. And until recently this worked, since there was still enough media money in circulation to fuel experimentation and allow digital to continue as a loss leader with an eye toward the future.
But things have changed. Quickly.
Long tail economics are working swimmingly for the aggregators – the blog networks, ad networks, search engines, etc. – which prosper through triangulation, while those that actually create the content that gives these engines their value die a little more each day. Watch in the coming months as the providers, who are now quite literally in a fight for survival, begin to circle the wagons and shoot back.
But within this climate there is also real promise. Necessity being the mother of invention, we may now (finally) begin to see the growth of micropayments in our near future.
Numerous companies have already tried and failed to introduce these systems, but please keep in mind that only a few years ago, it was predicted that consumers wouldn’t trust online security in large enough numbers to sustain retail on a mass level. Consumers are increasingly willing to pay for great online content; it is the high one-time price tag and the hassle of inputting credit card info that are the barriers keeping publishers from our money.
When the barriers are removed, we are generally more than willing to pay 25 cents for a text, 99 cents for a song, so why not 1 cent for an article?
With the introduction of internet EZ-Pass-style payments, users will be able to pass through web pages fractions of a cent at a time. From video games to recipes, from pornography to journalism, this will allow the actual creators to be properly compensated for their work.
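To make the mechanics concrete: the usual trick is to let fractional-cent charges accrue in a meter and only settle them once they cross a threshold, so transaction fees don’t swallow the payment. A minimal Python sketch, with invented publisher names, prices and threshold:

```python
from collections import defaultdict

class MicropaymentMeter:
    """Toy 'EZ-Pass'-style meter: page views accrue in fractions of a cent and are
    only settled (e.g. charged to a stored card) once a threshold is crossed.
    The prices and threshold below are illustrative assumptions, not real rates."""

    def __init__(self, settle_threshold_cents=100):
        self.balances = defaultdict(float)   # publisher -> accrued cents
        self.settle_threshold = settle_threshold_cents

    def record_view(self, publisher, price_cents):
        self.balances[publisher] += price_cents
        if self.balances[publisher] >= self.settle_threshold:
            self.settle(publisher)

    def settle(self, publisher):
        amount = self.balances.pop(publisher)
        print(f"Charging ${amount / 100:.2f} to the reader's account for {publisher}")

meter = MicropaymentMeter()
meter.record_view("dailypaper.example", 1.0)    # one cent for an article
for _ in range(500):
    meter.record_view("recipes.example", 0.2)   # a fifth of a cent per recipe page
```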
Individuals like former Time editor Walter Isaacson and start-ups like Kachingle are pushing just such systems. But leading this charge will likely require new habit-changing products like the Kindle, which is already beginning to do for print what the iPhone did for music. Or, more immediately, the new iPhone itself, which will change the whole game again this summer by allowing for third-party micropayments within its upcoming software update.
In our new data-driven world, micropayments might begin to apply to how creative agencies are compensated as well. Creative and media will likely increasingly begin merging services, molding to a more performance-based system. This doesn’t need to adversely affect creativity though, since appealing to more sophisticated eyeballs might pay better than the blunderbuss approach.
Watch for Labs to be dabbling in exactly these kinds of methods in the months ahead. We welcome further conversation with potentially interested partners and clients here.
Our interview to follow soon, but to whet your appetite, a quick download of our (and your) key questions for the rock star of the data visualisation world.
Crowd-sourcing versus the wisdom of the crowd: Koblin’s recent work experiments with crowd-sourcing but suggests an ambivalence about the process. While a central theme of data visualisation is the wisdom of the crowd, how does it skew the data if the crowd knows it’s being watched? Is the unconscious wisdom of the crowd purer and more compelling or is conscious collaboration of the masses the future? How important is the role of the curator in that process?
Answers – or at least compelling and considered answers – on a blogpost near you shortly….
I transitioned from tinyurl.com to bit.ly earlier this year. Probably way after most people started using it. It’s awesome. But I’m guessing the reason I love bit.ly is not the reason most people would give. Yes, bit.ly delivers super utility simply by shortening a link of seemingly any length to virtually no length. And it makes it easy and quick. That’s part of it.
But I’ve become addicted to the data which bit.ly provides on every link you shorten. Because with bit.ly the shortening is just the beginning of its magic. If you register on the site you have a record of all the links you’ve shortened. And if you hit the ‘Info’ function underneath a link you are presented with a treasure trove of metrics & insight. Traffic (clicks) with time & date information, geographical location, platform used to access the link, conversations the link featured within, RTs, and so on.
So one learns that a link posted on Twitter that touches on industrial design is 50% more likely to be clicked on in Brazil than in the UK. Or a link that relates to LEGO is three times as likely to be clicked on in Denmark as in Canada. Or that the optimum time to post is 10pm ET, or that actually one needs to re-post because the two peaks are 10pm ET and 10pm GMT, or that if you want to provoke an Australian audience you should post after 11pm ET. Much of this might seem intuitive, but accessing the data that proves (or refutes) some of the assumptions we work with when we share links is a revealing exercise. Above all, it provides much greater depth of feedback on what’s popular (or not) than simply the crude measure of how often your message is retweeted on Twitter. And it’s not just Twitter – you can add a bit.ly add-on to your Gmail (http://bit.ly/Xd1yM).
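If you’re inclined to dig, reproducing that kind of analysis from the click data takes only a few lines. A minimal Python sketch, assuming each click record carries a UTC timestamp and a country code (the sample rows are invented, not real bit.ly output):

```python
from collections import Counter
from datetime import datetime, timedelta

# Assumed shape of the exported click data -- the records below are made up.
clicks = [
    {"utc": "2009-11-20 03:05", "country": "US"},
    {"utc": "2009-11-20 22:10", "country": "GB"},
    {"utc": "2009-11-20 22:15", "country": "BR"},
]

by_country = Counter(c["country"] for c in clicks)

def hour_histogram(records, utc_offset_hours):
    """Count clicks per local hour for a given fixed UTC offset (DST ignored)."""
    hist = Counter()
    for c in records:
        t = datetime.strptime(c["utc"], "%Y-%m-%d %H:%M") + timedelta(hours=utc_offset_hours)
        hist[t.hour] += 1
    return hist

et_hours = hour_histogram(clicks, -5)   # Eastern Time
gmt_hours = hour_histogram(clicks, 0)   # GMT

print("Clicks by country:", by_country.most_common())
print("Peak ET hour:", max(et_hours, key=et_hours.get))
print("Peak GMT hour:", max(gmt_hours, key=gmt_hours.get))
```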
Bit.ly allows you to do a whole lot more than fire-and-forget; it promotes smart linking, and that makes it cool in my (Excel work) book.