What’s next? Social Media and the Information Life Cycle

10 10 2011
The Crystal Ball

Image via Wikipedia

As previously seen in Biznology:

Back in the days when “Web 2.0” was a hot buzzword, many people asked what “Web 3.0” would look like. Even though that question now sounds as outdated as an X-Files rerun, the quest for “what’s next?” is always on our minds. I’m no better a prognosticator than anybody else, but my best answer would be: look at where the inefficiencies are. That is by no means a good indicator of what 2012 or 2013 will look like; some inefficiencies may take decades to be properly addressed. But if you are trying to guess where we’ll see the next revolutionary step in social media–as opposed to the much-easier-to-predict incremental improvements–you have to focus on where the biggest opportunities are, the undiscovered country somewhere out there. Analyzing the current landscape and the evolution of information markets by borrowing from a product life cycle framework may help develop some educated guesses about what the future may bring.

This is the third post in the current series discussing the differences between making and moving information and knowledge versus physical goods.

Since the first businessperson started bartering goods in some cave thousands of years ago, products went through a more or less standard life cycle: they were conceived, designed, produced, filtered for quality, distributed, sold, consumed, serviced, and finally disposed of. Over the course of history, that life cycle underwent several enhancements, most of them minor and mundane. A few of them, like the introduction of currency, steam power, and the production line, had a huge impact on the overall efficiency of moving products around. Naturally, even though in retrospect those inventions seem like no-brainer milestones in the history of business and commerce, in reality their impact was scattered over a number of years. It’s likely that many who lived through those transitions did not realize how big those changes in fact were. One important indication of their revolutionary nature is how each of them enabled an incredible amplification in the volume or reach of one or more stages of the product life cycle. Currency obviously made selling and trading more practical, steam power reduced the dependency on animals and people for mechanical work, and the production line improved both manufacturing capacity and quality control.

Non-physical items, such as ideas, information, and knowledge, also go through a life cycle of their own, but some of its steps are still not as sophisticated as the ones described above. Roughly speaking, that cycle consists of: creating, verbalizing (coding), capturing, distributing, filtering, consuming (decoding), and disposing of.

Creating and Coding
Thoughts and emotions run through everybody’s heads all the time: raw, unruly, uncensored. Some are then internally processed by the conscious mind and made sense of. A subset of those are mentally verbalized, and an even smaller portion is finally vocalized and communicated to others. The effect of technology on this early part of the cycle has been modest. It still happens mostly the way it did thousands of years ago.

Capturing and Distributing
Capturing knowledge in the early days of humanity basically meant drawing on cave walls. Drawing became more elaborate with the advent of writing–which could be described as the codified doodling of one’s thoughts in an enduring medium–ensuring that a person’s knowledge could survive far beyond its physical constraints in time and space. More recently, photographs, movies, and audio recordings permitted the capture of an increasing variety of information, but in practice they were still just enabling us to more efficiently capture a concept or a memorable moment, or to tell a story.

The introduction of movable type removed many of the limitations around the distribution of that content. Radio, cinema, TV, and the Internet of the 1990s did more of the same for a broader set of records of human knowledge. It was an expansion in scope, not in nature, but each new medium added to the mix represented yet another powerful distribution amplifier.

Filtering and Consuming
All that capturing and distributing of information still relied heavily either on physical media or on the renting of expensive properties and expertise. Whole industries were created to exploit those high transaction costs. The inefficiencies of the system were one of the most powerful filtering mechanisms in that information life cycle. The big mass media players essentially dictated what was “media worthy” for you to consume. In fact, that’s what most of us were known for: consumers of goods and information.

Disposing
Most information that is digitized today is not formally disposed of; it’s just abandoned or forgotten. Archived emails and SharePoint sites are examples of that: how often do you search your company’s intranet and find information that is completely outdated and irrelevant by now?

The social media role, so far
Much of what we call social media today–be it Internet forums, blogs, podcasts, photo, video and file sharing, social networks, or microblogging–along with the advances in mobile communications, contributed significantly to bringing those transaction costs closer to zero. Billions of people can now not only capture and distribute information at low cost, but also consume more of it, faster. Not only that: the ability to add links to status updates, retweet, and share enabled regular people to filter and distribute content too. Everybody became a content curator and distributor of sorts, often without even realizing it.

So what’s next?
Most likely, we’ll continue to see an increasing sophistication of the inner steps of the information life cycle. We’re already witnessing better ways to capture (the IBM Smarter Planet initiative is a good example of that), filter (Google Plus’ circles), distribute (Tumblr, StumbleUpon), and consume (Flipboard, Zite) it. However, the obvious under-served stages in the information life cycle are the two extremities: creating, coding, and capturing on one side, and disposing on the other.

On the creating, coding, and capturing end, the major current inefficiency is loss of information. Of the millions of ideas and thoughts that cross people’s minds, the vast majority vanish without a trace. Twitter and status updates offered a very raw way of capturing some of that, but they are still cumbersome to use–and often impractical:

In case of fire, exit the building before tweeting about it

Via funnysigns.net

Apps like Evernote and Remember The Milk are evolving to make it much easier to record our impromptu thoughts, but the real potential is enormous (suggested reading: The Future of Evernote). Even a true brain dump may be more feasible than most of us initially thought. As we learned a few days ago, UC Berkeley scientists have developed a system that captures visual activity in human brains and reconstructs it as digital video clips. The results are mesmerizing:

Reconstructing brain images into video

But it does not stop there. Ideas generate ideas. Capturing makes indexing and sharing possible. Imagine how much more knowledge could be created if we had better ways to share and aggregate atomic pieces of information. We may not like Facebook’s timeline and Open Graph real-time apps in their first incarnations, but they are just giving us a peek at what the future–and the past–may look like.

Decoding the existing content out there would be a good start. Since my first language is Portuguese, I often find amazing content on Brazilian websites, or brilliant posts shared by my Portuguese-speaking colleagues, that is still not easily consumable by most people in the world–making me wonder how much I’m missing for not knowing German, Chinese, Hindi, or Japanese. One can always argue that there are plenty of tools out there for translating Internet content. True, but the truly user-friendly experience would be browsing websites or watching videos in Punjabi or Arabic without even noticing that they were originally produced in another language.

Finally, one of the unsolved problems of the information age is the proper disposal of information. Since storage is so cheap, we keep most of what is created, and this is increasingly becoming an annoyance. I often wish my Google searches could default to the last year or the last month only, as the standard results normally surface ancient content that is no longer relevant to me. Also, most of us just accept that whatever we say online stays online forever, but that is just a limitation of our current technology. If we could for a second convert all that digital information into a physical representation, we would see a landfill of unwanted items and plenty of clutter. Of course, “disposal” in the digital world does not need to mean complete elimination–it could just be a better way to put things on the back burner or in the backroom archive. For example, we could use sensory cues for aging information. You can always tell whether a physical book was freshly published or printed 20 years ago, based on its appearance and smell. Old Internet content could be shown with increasingly yellow and brown background tones, so that you could visually gauge its freshness.
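To make the “yellowing” idea concrete, here is a minimal sketch of how such a sensory cue could be computed. Everything here is my own assumption, not an existing API: the function name, the 20-year aging horizon, and the particular “old paper” color are all made up for illustration.

```python
from datetime import date

def aged_background(published: date, today: date, max_years: int = 20) -> str:
    """Return a CSS-style hex color: white for fresh content, drifting
    toward a yellow-brown 'old paper' tone as the content ages."""
    age_years = (today - published).days / 365.25
    t = min(max(age_years / max_years, 0.0), 1.0)  # 0 = fresh, 1 = fully aged
    fresh = (255, 255, 255)   # white
    aged = (222, 203, 155)    # arbitrary aged-paper tone
    r, g, b = (round(f + (a - f) * t) for f, a in zip(fresh, aged))
    return f"#{r:02x}{g:02x}{b:02x}"

print(aged_background(date(2011, 10, 10), date(2011, 10, 10)))  # #ffffff
print(aged_background(date(1991, 10, 10), date(2011, 10, 10)))  # #decb9b
```

A browser extension or feed reader could apply a function like this to the page background, giving old content a visual patina the way paper acquires one on a shelf.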

Some of the above sounds crazy and superfluous, but the idea of Twitter probably sounded insane less than 10 years ago. Are we there yet? Not even close. But that is what makes this journey really interesting: what is coming is much more intriguing than what we have seen so far. As I said before, we’re still witnessing the infancy of social technologies.

Enhanced by Zemanta

Moving things vs. moving ideas

10 10 2011


As previously seen in Biznology:

All of our existing controls around content, intellectual property, and information exchange were developed when moving information around was an ancillary function to what mattered at the time: moving goods efficiently to generate wealth. The most powerful nations and organizations throughout the centuries were the ones that mastered the levers that controlled the flow of things. That pattern may soon be facing its own Napster moment. Information is becoming a good in itself, and our controls have not yet adapted to this new reality. In fact, much of what we call governance consists of ensuring that information moves very slowly–if at all. The entities–countries, companies, individuals–that first realize that a shift has already taken place, and re-think their raison d’être accordingly, might be the ones who will dominate the market in this brave new world.

In my last Biznology post, I used a comparison between information and physical goods to support an argument that social technologies still have a long way to go to be considered mature. When information itself becomes the good, and social interactions become the transportation medium, some new and interesting economic patterns may emerge.

Scarcity is a natural attribute of the physical world: a house, a car, or a TV set cannot be owned by multiple people at the same time, nor can one person provide hairdressing or medical services to several customers simultaneously. Our whole economic model is built on top of it: theories around economies of scale, price elasticity, bargaining, patents, and copyright all depend strongly on “things” or “services” being limited. We even imposed artificial scarcity on digital items such as software and audio files, in the form of license keys and DRM, so that they could fit our “real world” economy.

That model worked OK when being digital was the exception. However, more and more “things” are becoming digital: photos, movies, newspapers, books, magazines, maps, money, annotations, card decks, board games, drawings, paintings, kaleidoscopes–you name it. Furthermore, services are increasingly less dependent on geographical or temporal proximity: online programming, consulting, doctor appointments, tutoring, and teaching are sometimes better than their face-to-face counterparts. While most online services are still provided on a one-off basis, the digitization of those human interactions is just the first step toward making them reusable. TED talks and iTunes University lectures are early examples of that.

Of course, I’m not advocating a world without patents or copyrights. But I do think that it’s important to understand what that world would look like, and assess if the existing controls are playing in our favor or against us. Even if we do not dare to change something that served us so well in the past, others may not have the same incentives to keep the status quo.

Another factor to consider is the leapfrog pattern experienced by the mobile telephony industry: countries that were behind in the deployment of landlines ended up surpassing the developed world in the adoption of cellular phones. Similarly, countries that never developed a sophisticated intellectual property framework may be able to start fresh and put in place a system where broad dissemination and reuse trump authorship and individual success.

Finally, the emergence of social technologies over the last 10 years showed the power of a resource that has been underutilized for centuries: people and their interactions with each other. The essence of what we used to call Web 2.0 was the transformational aspect of leveraging social interactions throughout the information value chain: creation, capture, distribution, filtering and consumption. The crowd now is often the source, the medium, and the destination of information in its multiple forms.

The conclusion is that the sheer number of people that can be mobilized by an entity–a nation, an organization, or an individual–may become a source of wealth in the near future. Of course, peoplenomics is mostly a diamond in the rough for now. A quick comparison between the top 20 countries by GDP per capita (based on Purchasing Power Parity) and the top 20 countries by population shows that the size of a country’s population is still a poor indicator of its wealth: only the United States, Japan, and Germany appear on both lists. Whether the economic value of large populations and efficient information flows will ever be unleashed is anybody’s guess, but keeping an eye on it and being able to adapt quickly may be key survival skills in a rapidly changing landscape.
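The comparison above boils down to a set intersection. Here is a toy sketch, using truncated, illustrative country lists rather than the actual full 2011 top-20 rankings:

```python
# Illustrative subsets only, not the complete top-20 rankings
top_20_by_population = {
    "China", "India", "United States", "Indonesia", "Brazil",
    "Japan", "Germany", "Nigeria", "Bangladesh", "Russia",
}
top_20_by_gdp_per_capita_ppp = {
    "Qatar", "Luxembourg", "Norway", "Switzerland", "United States",
    "Netherlands", "Japan", "Germany", "Austria", "Australia",
}

# Countries appearing in both lists: population alone predicts little
in_both = top_20_by_population & top_20_by_gdp_per_capita_ppp
print(sorted(in_both))  # ['Germany', 'Japan', 'United States']
```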


A Skewed Web: Innovation is in the outskirts of social media

15 09 2010
Honeybees with a nice juicy drone

Image by dni777 via Flickr

As previously seen in Biznology:

As I discussed in my post last month, it’s a skewed Web out there. A multitude of online social filters has been developed over the last 15 years to address our perennial information overload curse. From Google’s PageRank, we went all the way to tag clouds, social bookmarking, Twitter trending topics, and Gmail’s Priority Inbox, trying to find ways to make what matters float to the top. However, most of these social filters are based on some variation of a “majority rules” algorithm. While they all helped keep information input manageable, they also skewed the stream of information reaching us toward uniformity. Will crowdsourcing make us all well-informed drones? Ultimately, it may depend on where you’re looking: the center or the fringe of the beehive.

Almost two years ago, Clay Shirky boldly stated that information overload was not a problem, or at least not a new one: it was just a fact of life, at least as old as the Library of Alexandria. According to Shirky, the actual issue we face in this Internet age is filter failure: our mechanisms for separating the wheat from the chaff are just not good enough. Here is an excerpt from his interview with CJR:

The reason we think that there’s not an information overload problem in a Barnes and Noble or a library is that we’re actually used to the cataloging system. On the Web, we’re just not used to the filters yet, and so it seems like “Oh, there’s so much more information.” But, in fact, from the 1500s on, that’s been the normal case. So, the real question is, how do we design filters that let us find our way through this particular abundance of information? And, you know, my answer to that question has been: the only group that can catalog everything is everybody. One of the reasons you see this enormous move towards social filters, as with Digg, as with del.icio.us, as with Google Reader, in a way, is simply that the scale of the problem has exceeded what professional catalogers can do.

While some still beg to differ on whether information overload really is a non-issue–after all, our email inboxes, RSS readers, and Facebook and Twitter streams never cease to overwhelm us–we all tend to welcome every step in the evolution of smarter filters.

The whole lineage of social filters–from Google’s PageRank, passing through Digg and Delicious, culminating in Twitter’s trending topics–mitigated one problem, information overload, but exacerbated another: we were all getting individually smarter, but collectively dumber. By letting the majority, or the loudest mouths, dictate what was relevant, we ended up with a giant global echo chamber.
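To see why “majority rules” filtering produces an echo chamber, consider a minimal sketch of such a filter. The function and the vote data are hypothetical, not how any of those services actually worked: everyone’s votes are pooled, and everyone is served the same globally popular items.

```python
from collections import Counter

def majority_rules_feed(votes, top_n=3):
    """Pool all users' votes and return the globally most popular items.
    Every user gets the same feed, regardless of individual interests."""
    tally = Counter(item for user_votes in votes.values() for item in user_votes)
    return [item for item, _ in tally.most_common(top_n)]

# Hypothetical votes: the crowd's favorites crowd out the fringe
votes = {
    "alice": ["viral-video", "meme", "research-paper"],
    "bob":   ["viral-video", "meme"],
    "carol": ["viral-video", "obscure-essay"],
    "dave":  ["viral-video", "meme"],
}
print(majority_rules_feed(votes, top_n=2))  # ['viral-video', 'meme']
```

Note that “research-paper” and “obscure-essay”–the fringe items that might spark something new–never make the cut, no matter whose feed you inspect.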

We were all watching Charlie biting Harry’s finger, and Justin Bieber trying to convince (or threaten) us that we will never, ever, ever be apart. That video (featuring Ludacris) surpassed 300 million views on YouTube alone in seven months, taking the site’s all-time #1 spot. An unverified claim about Bieber using 3% of Twitter’s infrastructure being passed off as news by traditional media outlets is just the latest example of how far down the madness-of-crowds road we have gone.

This, of course, is not a new problem. Back in the early 1980s, MTV was running Michael Jackson’s 14-minute “Thriller” video twice an hour. The trouble now is just the magnitude of it. A potential downside of this mass-media-on-steroids uniformity is that a homogeneous environment is not the best place for innovation to flourish. Borrowing from paleontologist Stephen Jay Gould: transformation is rare in the large, stable central populations. Evolution is more likely to happen in the tiny populations living in the geographic corners: “major genetic reorganizations almost always take place in the small, peripherally isolated populations that form new species.”

If you are looking for the next big thing, trying to “think different,” or trying to be creative and innovative, you need to look beyond the center. The core will tell you what’s up, so that you’ll be “in the know.” The fringe will show you what’s coming next. To paraphrase William Gibson, the future is peripherally distributed.


iPad – First impressions

4 04 2010

Yesterday morning, I took my visiting family to Niagara Falls, which is oh-so-conveniently close to the US border, so of course I *had* to pay a visit to the Apple Store at the Walden Galleria mall and buy the iPad I had reserved “just in case” :-) . At least that’s the story I tell myself to justify traveling 400 km just to address this totally illogical gadget lust.

I have not had much time to blog–or do much else, actually–over the last 40 days or so, being busy on both the work and personal fronts: we had a few folks staying with us and others visiting too. So this post is going to be a bit rushed, just collecting my first impressions of the most anticipated iThing of the year. On top of that, I’m typing this on the iPad itself, using the revamped WordPress app, so pardon the clunkiness of this post. There you go, in bullet-point format:

- Overall, a huge thumbs up to Apple for adding a new category to the already crowded portable computing landscape. The person sitting beside me at the mall was completely unaware of what the fuss at the Apple Store was about, thinking they were giving something away. When I opened the box, she gasped: “OMG, that’s a gigantic iPhone!” It definitely looks like that, but after a day of using it, I can honestly say it’s much more than that. As biology has repeatedly shown us, small increments in features can sometimes drive major leaps in innovation–upright posture and opposable thumbs being just two recent examples. The iPad is not just a big iPhone or iPod Touch, not a laptop without a keyboard, not a crippled netbook, not a fancier Kindle, nor a Mac version of the Tablet PC. It’s in a category of its own, and will follow its own evolutionary branch. Personal computing speciation just occurred, and we witnessed it firsthand. Of course, this does not necessarily mean the iPad will succeed in its current incarnation. But it will influence what others will be doing over the next few years.

- The big positives: the device is fast, the screen is crisp, the layout is gorgeous, and it feels good in your hands. Battery life is just unbelievably long. Maps, iBooks, Photos, and the various comics/magazine/newspaper/drawing apps all feel brand new on the big screen. And that’s just a glimpse of what’s coming. The iPad is the best portable device for consuming content that I have ever used.

- The negatives are well publicized already: the iPad would greatly benefit from a front-facing camera, multitasking, and more flexibility for applications to share context and objects, including files. All these limitations have one thing in common: they are related to content creation, not consumption. From a market perspective, it makes a lot of sense to target content consumers first, as they represent the vast majority of buyers. I also suspect those limitations are all part of Apple’s overall strategy to keep us buying the latest and greatest every few years. The Cupertino-based brain trust creates products with enough features to make them desirable, but very rarely offers everything that’s technically feasible in any given release. That way, when an iPad with a camera comes out next year, it will sell by the truckload again. Furthermore, we sometimes waste too much time thinking about what we don’t have, as opposed to what’s there for us to enjoy right now. That’s like being in Paris and complaining about not having a good beach to go to.

That’s it for now!

Felipe Machado and Andrew Keen: Thinking outside the social media echo chamber

7 02 2010

Back in November, I had the pleasure of having lunch with Felipe Machado, multimedia editor for one of the largest newspapers in Brazil, and a former business partner in a short-lived Internet venture in the mid-nineties. The get-together was brokered by Daniel Dystyler, the consummate connector in the Gladwell-esque sense of the word.

Felipe Machado and Daniel Dystyler

Felipe is an accomplished journalist, book author, and musician, and I deeply respect his ability to connect the dots between old and new media. I actually often disagree with him: I tend to analyze the world through a logical framework, while Felipe relies on intuition and passion. That’s exactly why I savour every opportunity to talk to him. If you understand Portuguese, you may want to check out his appearance on “Manhattan Connection” (Rede Globo, the 4th largest TV network in the world), talking about the future of media:

During our lunch conversation, Felipe mentioned Andrew Keen’s “The Cult of the Amateur” as a book that broke away from the sameness of social media authors. Coincidentally, I had read an article about that book the day before, so I took the bait and borrowed the book from the local library during my first week back from Brazil.

This may come as a surprise to anybody who knows me, but if you work in anything related to new media, social media, Web 2.0, or emerging Internet technologies, I highly recommend you read Keen’s book. Make no mistake: the book deserves all the criticism it got–you can start with Lawrence Lessig’s blog post for a particularly heated discussion of the limitations of Keen’s arguments. “The Cult of the Amateur” is, ironically, concrete proof that having editors and a publisher behind a book does not necessarily make it any better than, say, a blog post.

The reason I recommend a not-so-good book is this: Andrew Keen represents a large contingent of people in your circle of friends, co-workers, clients, and audience–people who hear your social media message and deeply disagree with you. They may well be the vast majority that does not blog, does not use Twitter, and couldn’t care less about what you had for dinner last night. They often don’t say it out loud, so as not to be perceived as Luddites, but they are not convinced that social media is making things any better, or that Web 2.0 is inevitable.

Those are the folks you should pay attention to. No matter how much you admire the work of Chris Anderson, Clay Shirky, Jeff Howe, and other social media luminaries, you are probably just hearing the echo of your own voice there. You need to understand the concerns, the points of view, and the anxiety of the Andrew Keens of the world toward the so-called social media revolution. Failing to do that will prevent you from crossing the chasm between early adopters and everybody else.

Reaching out to the members of our social network who are not on Facebook, LinkedIn, and Twitter can go a long way toward helping us all realize that the real world is MUCH BIGGER than Web 2.0 and social media (as I learned from Jean-François Barsoum a long time ago).

The joke, the circus and the soap-opera

14 12 2009

A few people who saw my Enterprise 2.0 Anti-patterns presentation on SlideShare asked what I meant by “the joke, the circus and the soap-opera.” It came from a post I wrote for Biznology a long time ago, on Sep 15, 2008. It’s old news now, but for the sake of completeness I’m republishing it here. I updated some of the broken links and also moved the “I work for” disclaimer from IBM to RBC :-)

What role do timing and duration play in your Web 2.0 strategy? Marketing experts have long emphasized the importance of media selection and scheduling decisions, but seeing how traditional companies have been exploiting the Internet over the last few years shows that there are still lessons to be learned in that arena.

Imitation may be the sincerest form of flattery, but it doesn’t always pay off when it comes to your online marketing strategy. All the hype around Web 2.0 and user-generated content a couple of years ago initially led to some embarrassing attempts at letting regular folks create ads. The Chevy Tahoe Apprentice challenge in 2006 is probably the most prominent example of how not to do it: even after GM wiped out ChevyApprentice.com, a search on YouTube for “Chevy Tahoe Apprentice” brings up plenty of ads that should never have been created in the first place–a sobering reminder that having an exit strategy established up front is a must in your Internet experiments. Eventually marketing teams got it right, and the success of the Doritos Crash the Super Bowl competition in early 2007 led several other companies to jump onto the UGC bandwagon, with varying, but mostly diminishing, returns.

Another case in point was the creation of online places for your customer base to hang around and discuss subjects that take a front seat in their lives. HSBC’s Your Point of View was launched in October 2005 and generated a lot of buzz for quite some time. However, three years later, it has lost its freshness and novelty, giving the casual observer the impression of a failed experiment, when it could have been considered one of the most successful stories of a traditional company building a site based on the architecture of participation. Vancity’s “community powered” Change Everything, launched in September 2006, suffered from a similar problem, but had a longer shelf life, and people still contribute comments to this day. One major difference that may explain the varying longevity of the two similar offerings is that the Vancity experiment established itself as a social networking site, while the HSBC one stayed away from forming an online community and from keeping user profiles. Change Everything is currently announcing a complete revamp of the service, so I’m curious to see what’s coming next.

What’s clear in the examples above is that timing and duration play a crucial role in the success of your online initiatives. This might sound obvious, but it’s often ignored in many of the initiatives we see online. Being too early might prevent you from understanding the dynamics of a new approach, while being too late can position your company as a me-too player. The sweet spot, of course, is hard to determine, but recognizing these patterns can help you sniff out the right moment. Or at least you’ll be better prepared to fail gracefully from the get-go, not as an afterthought.

Influenced by a conversation I had with my colleague Bernie Michalik, I started thinking about three metaphors that highlight the importance of duration in your online strategy. Some initiatives work very well when applied exactly once, as it was the case with the Doritos Super Bowl commercial. Like telling a joke, the second time around people get bored and disengaged.

Other initiatives work better when mimicking a circus pattern: you come, raise your tent, run your dog-and-pony show, and then leave after a week or a month. One or two years from now, you can do it again, but staying on a continuous basis would never work. This is how RBC approached its Next Great Innovator site. In the first edition, back in 2006, they defined up front that it would be a time-boxed experiment, so that when it wrapped up a few months later, retiring the site was perceived as the conclusion of a successful experiment. Every year since, they have come back with new features, still positioning it as a time-limited event (full disclosure: I work for RBC).

The IBM jams are another good example of how the circus pattern can be used effectively to your advantage. Besides helping clients to deliver jams, we eat our own dog food and use them as one of the tools in our innovation strategy. If you are wondering what a jam looks like, the next round begins on Sunday, October 5th at 6 pm EDT, and participation is open to IBM clients.

Over the last few years, many marketers have started using microsites to drive marketing campaigns, as opposed to relying on the main corporate site. One of the advantages here is that microsites can be changed—and retired, if necessary—more easily than the company’s main Web site.

Finally, some of your initiatives might actually work well as a place that’s always open for business, pretty much like a never-ending daytime soap opera. This typically works well for services that draw a steady stream of clients, or whose audience is recycled on a yearly basis, like college students or pre-teens. Procter & Gamble’s Connect + Develop site is a good example, as it serves an audience that has a continuous relationship with the company. I often see initiatives that would operate better under the joke or circus patterns defaulting to soap-opera mode. Despite their initial huge success, they become victims of not selecting the appropriate duration for their endeavor.

When devising your next online initiative, make sure you think about which of these patterns best fits your offering. Timing and duration might end up being the key determinants of how that incredible new site you conceived will be perceived a few years down the road.

Brazilian football: a disregard for the impossible

13 12 2009

(…) regional tournaments are not economically efficient, as small football clubs benefit from revenues without generating them, due to their lack of followers.

(…) to solve several problems in Brazilian football (…):

1. Reduce the importance of regional tournaments, which would include from now on only small clubs on a “promotion and relegation” system.

2. Integrate the national and international tournament schedules (…)

3. Solve the economic issues of football clubs, and consequently, the issues of Brazilian football as a whole.

If you thought the excerpts above were written by Juca Kfouri or some other present-day Brazilian sports writer, think again: they were taken from the first issue of the weekly news magazine Veja, published on September 11 (!), 1968:

Veja No 1 - Sep 11, 1968

Forty-one years later, the administrative problems of Brazilian football are still pretty much the same. Despite the perpetual mess that is the CBF (the national football association), or perhaps because of it, Brazil has won 3 more FIFA World Cups since that article was written, and has been a staple at the top of the FIFA rankings since their inception.

As with anything else in the world, the success of Brazilian football in the international arena can't be attributed to a single factor. The diversity and size of the population, the tropical climate, and the popularity of the game across all socioeconomic classes all played a significant role in the development of the sport in Brazil. That's all nice and logical, but I would argue that chaos and uncertainty were no smaller contributors.

Where else in the world would you find:

On the other hand, football is not a conventional team sport. To win the FIFA World Cup in its current format, a team does not need to score a goal or win a single game in regulation or extra time. Chile qualified for the knock-out phase in 1998 with 3 draws, and theoretically could have gone all the way to the finals just by winning on penalty shootouts. Furthermore, bad refereeing seems only to increase the interest of fans, to the point that football remains one of the few team sports today where modern technology is off-limits. I suspect this kind of logic is unfathomable to the typical sports fan in North America. If the sport itself is so counter-intuitive, maybe being disorganized, irrational and implausible ends up being a competitive advantage :-) .

Marissa Mayer, VP of Search Product and User Experience at Google, once wrote:

Creativity loves constraints but they must be balanced with a healthy disregard for the impossible. (…) Disregarding the bounds of what we know or accept gives rise to ideas that are non-obvious, unconventional, or unexplored. The creativity realized in this balance between constraint and disregard for the impossible is fueled by passion and leads to revolutionary change.

I can’t think of a better description for the jogo bonito. Of course, being creative and fancy is not necessarily the road to success (Netherlands in 1974 and Brazil in 1982 come to mind), but from time to time, that passion for the unconventional gets us gems like these:

Note: This post was updated after its initial publication to add the screenshot of the news magazine and for clarity purposes.

Kindle in Canada: first impressions

11 12 2009

The Kindle and my first e-book purchase

Despite Farley‘s well-reasoned arguments on why buying the Kindle is a bad idea, the Inspector Gadget within me succumbed to the temptation and ordered the #1 bestselling, most-wished-for, and most gifted item from Amazon. My brain simply stops working and reverts to its basic geek mode when it comes to new electronic toys.

“New”, of course, is relative. Following the well-walked path set by the Chumby, the iPhone, Hulu, Pandora and Google Voice, the Kindle was also off-limits for Canadians until very recently, despite being available in 100 other countries, including Zimbabwe, Myanmar and Albania. I don't mean any disrespect to those 3 countries; the point is that we are next-door neighbours to Amazon's headquarters, so it puzzles me why it's easier to get legal wrinkles solved in other continents than here.

Even when the Kindle finally arrived in Canada, on November 17, it was not fully featured: web browsing and blogs are not available north of the US border. Not even the iPhone Kindle app is up for Canucks' grabs yet, unless one's willing to be a bit, err, adventurous. But we Canadians can always get the KindleCandle app for $0.99:

While you wait for the Kindle App in Canada...

Ok, end of rant.

A few months from now, when the elusive Apple tablet is finally revealed, I’ll regret this purchase, but for now, I’m actually very pleased with it.

THE GOOD
  • The screen is very readable, much better than I expected. I had read about e-ink a million times and played with the Sony e-reader for a few minutes, but only when you go through several pages on an e-reader do you start noticing why it's better than your laptop screen.
  • Battery life is really good. Unfortunately, I can’t say the same about the iPhone.
  • It’s much easier to carry and handle than a regular hard-cover. If you are a subway warrior, you know the importance of being able to hold a book and move to the next page using the same hand.
  • The dictionary feature is handy for folks like me, whose English vocabulary came mostly from reading Wolverine and Spider-Man.
  • I could spend days browsing Wikipedia in the Kindle.
  • Amazon finally gave in to no-hassle PDF support. Competition, we love you.
  • Being able to clip excerpts and annotate your favourite paragraphs changes the reading experience. No more dog ears or chicken scratch.
  • Ability to download sample chapters of books for free.
  • Text to speech is a nice touch, but I don’t see myself using it much.

THE BAD
  • The contrast of “e-printed” text and the gray background is not as good as the old black text on white paper.
  • The screen is smaller than it needs to be. That physical keyboard is a waste of real estate.
  • PDF reading is still poor: you can’t zoom in or annotate.
  • Colours, or lack thereof. It has that first-generation iPod feel.
  • The first 2 books I tried to buy were not available in the Kindle store: “The Wisdom of Crowds” and “The Cult of the Amateur”.

THE DREAM (or: is that what they call the iTablet?)

  • Touch screen, no buttons, gestures
  • Colour
  • Comic book viewing
  • Web 2.0 features: sharing reading lists, recommendations, annotations with my network
  • Bookshelf-like interface
  • Voice recording for commentary/annotations

In summary, I give the Kindle a thumbs up for now, at least until the next Apple event, when Farley will I-told-you-so me.

Individually smarter, collectively dumber?

8 12 2009

In my first corporate job back in Brazil, I was part of a large cohort of interns who all ended up being hired together. We were young and well-connected, and always on top of everything happening in the company, from official stuff to the proverbial grapevine telegraph. Rumour conversations used to start like this: “I’ve heard from 3 different sources that…” My pal Alexandre Guimaraes used to joke that none of us had 3 different sources, as we all shared the same connections.

Likewise, I often hear from my Twitter fellows that their RSS feed reader is now abandoned, as most of the interesting online things they find now come from their tweeps. A quick experiment seems to confirm that trend. Here are the results of a Twitter search for “twitter feed reader“:

Search results for "twitter feed reader"

In my recent re-read of The Wisdom of Crowds, the following excerpt caught my attention (highlight is mine):

(…) the more influence a group’s members exert on each other, and the more personal contact they have with each other, the less likely it is that the group’s decisions will be wise ones. The more influence we exert on each other, the more likely it is that we will believe the same things and make the same mistakes. That means it’s possible that we could become individually smarter but collectively dumber.

The first time I read that was many years before Twitter even existed, so it didn’t mean much to me. Now I can relate: I do feel that Twitter is making me individually smarter, as I can quickly consume a whole lot of info from news sources, geeks, NBA players, celebrities, friends and others. I find the Twitscoop cloud in TweetDeck a particularly good way to find what’s going on around the globe right now.

Twitscoop cloud

I used to see that cloud as a visualization of our collective intelligence. But perhaps that cloud is actually something much more humbling: the visualization of our own echo chamber, our herd's brain. By being so intensely connected, we may be losing one of the most basic conditions identified by Surowiecki for a crowd to be wise: independence (the other 2 are diversity and decentralization).

Should we all stop using Twitter and Facebook now? Of course not. But maybe we should invest a bit more of our time going after the unusual, the unpopular, the offline, the old and the out-of-fashion. The core is boring, and the fringe is where real innovation and change tend to appear first.

Twilight: New Moon – Interactive Displays in Brazil

7 12 2009

I started writing this post a month ago, but stopped as I did not have access to the Internet while in Brazil, so pardon the taste of yesterday’s news here.

Unlike Bernie, I don't have a teenage daughter, so I have just a very fuzzy idea of what The Twilight Saga is all about. But it doesn't take a Roger Ebert or Peter Travers to know that it's at least as popular in Brazil as it is in Canada and the US: its second installment ranked as the top box office draw in Brazil this year. Taking the subway in São Paulo 2 weeks before the opening of New Moon, it was hard to miss this eye-catching, vending-machine-like, err, device:

Twilight Interactive Display in São Paulo

Here are some more pictures, in case Twilight is your thing:

The main feature was the embedded camera, which allowed you to take a picture of yourself and edit it to transform yourself into a werewolf or a vampire. Your picture then became part of a gallery for all to see. No, I did not try it, or at least that's what I claim :-) . It actually looked a lot like a very big version of an iPhone app, except that you could not shake it to start over. You could also watch movie trailers and download an app to your cell phone via Bluetooth.

The company behind it was a Brazilian “digital interaction agency”, Ginga. I know the explanation above is as clear as mud, so here’s their own video showing how it works:

How effective is this new media outlet? Hard to tell. But they used a 1.0 version of their displays for the first movie of the series, back in December 2008, and Ginga claims the following:

This solution was integrated with the whole digital campaign: website, banners, and a strong community created for the fans in Brazil.


Over 4.5 million people reached by the subway campaign over a month.

One of the top 10 box-offices in 2008 in Brazil.

Over 180,000 content downloads via Bluetooth.

Not too shabby, eh? Here’s the video of their first version (which, by the way, looks much more impressive than the second one):

P.S.: If you see me blogging next time about Hannah Montana, it’s a sign that the end of the world is coming.

The Circus and le Cirque

7 08 2009

Over the long weekend, my wife and I took L to see the Shrine Circus at the Centre Point Mall in North York.

Looking at the kids on the back of this elephant was a trip down the memory lane:

Despite all the controversy around the use of animals – a Twitter search for that event will return at least as many protests as praises – I have to admit that one of my earliest and fondest memories as a kid is playing with a lion cub in some anonymous circus, duly recorded in a badly preserved picture (I’m the one on my father’s lap):

The last time I went to a traditional circus – i.e., excluding Cirque du Soleil – I was a 9-year-old living in the city where I was born. I vividly recall my friend Drausio petting a camel and getting sprayed with drool all over his face – no picture of that, unfortunately :-P , and no relationship with the delicious Portuguese dessert known as camel drool, or “baba de camelo”.

Back then, having a circus come to our city was a big deal, as the only other mass entertainment available for kids was watching old movies at Sunday matinées. Old is an understatement: I actually remember going to a black-and-white Tarzan movie featuring Johnny Weissmuller. Most Disney cartoons didn't get distributed beyond the large cities, but you don't miss what you never had, so I have no complaints there. The pluses of growing up in the countryside far outweigh the constraints – in my naturally biased view, of course.

Not much has changed since: the Shrine Circus 2009 show was not very different from the ones I used to see so many years ago: no high-tech involved, just the artist, the act and the public, all frozen in time and space. Hopefully I'll be proven wrong here, but I think I just saw the last few breaths of a dying art. I quoted Evan Solomon (CBC) a few months ago saying that “when a new technology comes, the incumbent never dies: it simply goes after deeper efficiencies”. The innovation pipeline does not always work like that, as typewriters and the telegraph can attest. Radio, TV, movies, games and the Internet all fragmented the entertainment space into formats that are more easily consumable, forcing live performances to go after deeper efficiencies. Thus, circus performances will live on through the several forms of Cirque Nouveau, but somehow the amateur spirit is gone, as shown in this Wikipedia excerpt:

Cirque expanded rapidly through the 1990s and 2000s, going from one show to approximately 3,500 employees from over 40 countries producing 15 shows over every continent except Africa and Antarctica, with an estimated annual revenue exceeding US$ 600 million. The multiple permanent Las Vegas shows alone play to more than 9,000 people a night, 5% of the city’s visitors, adding to the 70+ million people who have experienced Cirque. In 2000, Laliberté bought out Gauthier, and with 95% ownership, has continued to expand the brand. Several more shows are in development around the world, along with a television deal, women’s clothing line and the possible venture into other mediums such as spas, restaurants and nightclubs.

I used “amateur”, but the precise word is “mambembe” – I have no idea how to translate that from Brazilian Portuguese. So, in the mambembe spirit, I'd like to conclude this post with this very amateurish video featuring my favourite circus song:

Kiva.org and the future of philanthropy

28 07 2009

Two months ago, Bernie Michalik kindly set up a virtual card-blog for my IBM farewell, complete with a donation widget from ChipIn, raising $165 as a parting gift. After scratching our heads for a few weeks, we finally figured out how to cash that amount via PayPal (after paying quite a hefty fee).

Inspired by Jamie Alexander, of Pass It Along fame, I then decided to use the opportunity to try out Kiva.org. Kiva was recently featured at Time.com as one of “10 Great Ways To Spend Your Tax Refund”.

Kiva’s mission is to connect people through lending for the sake of alleviating poverty.

Kiva is the world’s first person-to-person micro-lending website, empowering individuals to lend directly to unique entrepreneurs around the globe.

I divided the amount among 5 entrepreneurs, and you can follow the progress of those loans here.

Conventional wisdom suggests that good deeds should be kept to oneself, but the more people know about services like Kiva and MicroPlace, the better. Kiva’s success led to an unusual supply-demand situation last year: having more money available to lend than people asking for it, according to this New York Times article. But just to keep things in perspective, take a look at some of the possible shortcomings too, so that you can make a conscious decision.

In the next few years, I expect more and more institutions who depend on public donations to follow Kiva’s “data-rich, transparent lending platform” model, showing exactly what happens to your contributions throughout the whole value chain. Donations are scarce resources, and being transparent goes a long way in gaining credibility and loyalty.

Is failure overrated?

2 04 2009

Web 2.0 Expo San Francisco 2008

As seen in Biznology (slightly modified to avoid overlapping with previous posts in this blog):

Is learning from failures overrated? When emphasizing the importance of learning from errors, are we actually creating a culture of losers? Read on to hear arguments on both sides of this discussion and make up your mind. Your company’s survival in the long term may depend on it.

I'm in San Francisco this week, speaking at and attending the Web 2.0 Expo at Moscone West. In a number of sessions, the speakers emphasized that failure is an important part of the innovation game. Knowing that I also tend to subscribe to that theory, and commenting on the Charlie Brown comic strip I embedded in my previous blog entry, a colleague at IBM pointed me to an interesting piece written by Jason Fried, of 37signals, who challenges that whole concept: “Failure is overrated, a redux”. It's a good post, and the comments are also worth reading. To get a complete picture of the discussion, I suggest you also read the New York Times article Jason refers to, “Try, Try Again, or Maybe Not”.

As is often the case in heated discussions, I initially thought that Jason was defending a completely different perspective on failure and learning, but this comment of his on another related post made me think that the difference is mostly one of emphasis.

“Everything is a learning experience. It’s just that I’ve found learning from your successes to be more advantageous. (…) I’ve always found more value in learning from the things that work than the things that don’t.”

I definitely can live with that position. What I have more trouble with is the cited Harvard Business School working paper. Here are some excerpts from the NYT article:

“The data are absolutely clear,” says Paul A. Gompers, a professor of business administration at the school and one of the study’s authors. “Does failure breed new knowledge or experience that can be leveraged into performance the second time around?” he asks. In some cases, yes, but over all, he says, “We found there is no benefit in terms of performance.”

(…) first-time entrepreneurs who received venture capital funding had a 22 percent chance of success. Success was defined as going public or filing to go public; Professor Gompers says the results were similar when using other measures, like acquisition or merger.

Already-successful entrepreneurs were far more likely to succeed again: their success rate for later venture-backed companies was 34 percent. But entrepreneurs whose companies had been liquidated or gone bankrupt had almost the same follow-on success rate as the first-timers: 23 percent.

If the article is accurate – and that's a big if, considering that this is still a working paper – it seems that the HBS research is not actually proving that “when it comes to venture-backed entrepreneurship, the only experience that counts is success”, as stated in the opening paragraph. It basically demonstrates that entrepreneurs who managed to go public or filed to go public are somewhat more likely (going from 22% to 34%) to repeat the feat, but isn't that expected?

There are several factors that come into play when filing a venture to go public, and having done it once gives an entrepreneur some knowledge of what it takes to get there again. I actually find it surprising that, even with that edge, the rate of failure is still very high. Another way to interpret the same data is: roughly two thirds of entrepreneurs who were successful the first time (and I'm using the same loose definition of success here) fail the second time. If anything, the data tells me that success is also overrated.
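The back-of-the-envelope arithmetic above can be checked in a few lines. The rates are the ones quoted in the NYT article; the variable names and the interpretation are my own:

```python
# Success rates quoted in the NYT article on the HBS working paper
first_timers = 0.22    # first-time, venture-backed entrepreneurs
prior_success = 0.34   # entrepreneurs with a prior IPO or IPO filing
prior_failure = 0.23   # entrepreneurs whose prior venture failed

# The "edge" from prior success is a modest 12 percentage points
edge_from_success = prior_success - first_timers   # 0.12

# And roughly two thirds of previously successful founders still fail
repeat_failure_rate = 1 - prior_success            # 0.66
```

Seen this way, the headline "only success counts" rests on a 12-point bump, while the two-thirds repeat-failure rate goes unmentioned.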

The “learning from failures” approach makes more sense when you take a granular approach to it. Every single initiative you undertake is composed of a vast number of small wins and losses. You definitely can learn from both outcomes, so regardless of which one will teach you the most, embrace successes AND failures. The fundamental message when advocating a culture that allows failure to occur from time to time is to avoid analysis paralysis, or even worse, denial by hiding what went wrong and exaggerating what went right.

The bottom line is that innovation entails good risk management and shares many features with the financial world. Low-risk initiatives are likely to generate low returns and don't give you much of a competitive edge. Being bold may lead you to collect wins and losses along the way, but it can also reward you more handsomely overall. Knowing that, it's important to balance your innovation initiatives the same way you handle a portfolio: diversify them and adjust the mix to your comfort level. During economic downturns like the one we are going through now, it's easy to panic and stop innovating. Keep in mind that a solid and consistent long-term approach to innovation may determine your ability to survive in good times and bad.

Tony Scott, Fernando Pessoa, Michael Jordan, Innovation and Failure: Am I the only vile and errant one on Earth?

10 03 2009

This is an updated version of a post I originally wrote for my internal IBM blog back in 2006. Some of the points there are still relevant today.

As the downturn in the global economy continues, many companies adopt a cautious approach towards innovation. In some ways, tough economic times may actually be a good opportunity for companies to innovate and differentiate themselves from the competition. Borrowing from the punctuated equilibrium theory, innovation may occur in bursts when facing major shifts in the ecosystem. Also, as Google likes to claim, creativity loves constraint.

A few years ago, I was fortunate to hear a talk about innovation by Tony Scott, who carries the impressive track record of having been CTO at GM and CIO at both Disney and Microsoft. From my poorly written notes, here's what he said, or more precisely, my recollection of what he said back then:

  • Innovation is a combination of inspiration, perspiration, persistence and really good marketing.
  • Good architecture principles, according to Vitruvius Pollio (referred to by some as the first architect), are order, eurhythmy, symmetry, propriety, economy, commodity, firmness and delight. We tend to focus the least on the last one.
  • Competition and cooperation can co-exist.
  • Create a culture where you’re allowed to fail from time to time.

Innovation implies exploring new possibilities and learning from mistakes. An error-averse culture cannot expect much innovation to occur.

In our continuous pursuit of innovation in the enterprise, we need a frame of mind where we take some risks, accept failures, admit them, and learn from them. If you don't tolerate errors, or deny them, you are just freezing yourself in your current position. In a world changing at a very fast pace, the status quo means falling behind, and it creates an environment where nobody dares to innovate. Enterprises could learn a lot from:

  • All projects that went over budget
  • All innovative ideas that failed to realize potential gains
  • All bids and proposals lost
  • All products and services exhibiting a shrinking market share

I also like this Michael Jordan quote: “I’ve missed more than 9000 shots in my career. I’ve lost almost 300 games. 26 times, I’ve been trusted to take the game winning shot and missed. I’ve failed over and over and over again in my life. And that is why I succeed.” As basketball wisdom goes, you miss 100% of the shots you don't take.

Note that I’m not proposing that we should create a culture of losers. The idea I’m trying to convey can be summarized by “His Airness” again: “I can accept failure, but I can’t accept not trying.”

One of my favourite writers/poets of all time was the Portuguese author Fernando Pessoa, who wrote, as his heteronym Álvaro de Campos, the gem below about the denial with which we tend to approach failure.

For the Portuguese and Spanish speakers out there, try the original version, “Poema em linha reta”.

Foursquare Poem

I’ve never known anybody who’s had the crap beaten out of them.
All my acquaintances have been champions in everything.

I, so often shabby, so often swinish, so often vile,
I, so often, unforgivably, a parasite.
Inexcusably filthy I,
Who so often haven’t had the patience to shower,
I, who so often have been ridiculous, absurd,
Who have publicly wiped my feet on etiquette’s tapestry,
Who have been grotesque, paltry, servile, and arrogant,
Who have silently suffered besmirching
And when I haven’t been silent, have been even more ridiculous;
I, who have been a clown for chambermaids,
I, who have felt the winks of stevedores,
I, who have been fiscally embarrassed, who have borrowed and forfeited,
I, who when the time for blows arises,
Have recoiled in advance of the possibility of blows;
I who have suffered the anguish of ridiculous little things,
I declare that in all the world I am without par.

Every one I know who speaks to me
Never did a ridiculous thing, never suffered besmirching,
Was never anything but a prince – all of them princes – in life…

If only I could hear another human voice
Confess not sin, but disgrace;
Confess not violence, but cowardice!
No, they’re all The Ideal, to hear them tell it.
Who in this great world will confess to me that even once they were vile?
O princes, my brothers,

God damn it, I’m fed up with semi-gods!
Where are there people in the world?

Am I the only vile and errant one on earth?

Women may not have loved them,
They may have been betrayed – but ridiculous, never!
And I, who have been ridiculous without being betrayed,
How can I speak to my superiors without reeling?
I who have been vile, literally vile,
Vile in the most paltry and infamous meaning of the word.

(I couldn’t find the source for the translation above, so if anybody knows it, please drop me a note so that I can properly credit it and ask for permission to have it here.)

ROI 2.0, Part 3: We don’t need a Social Media ROI model

19 02 2009

Malcolm Gladwell, in his hilarious TED talk on spaghetti sauce, tells the story of Howard Moskowitz’s epiphany while looking for the perfect concentration of aspartame to use in the Diet Pepsi formulation:

Howard does the experiment, and he gets the data back, and he plots it on a curve, and all of a sudden he realizes it’s not a nice bell curve. In fact, the data doesn’t make any sense. It’s a mess. It’s all over the place. (…) Why could we not make sense of this experiment with Diet Pepsi? And one day, he was sitting in a diner in White Plains (…). And suddenly, like a bolt of lightning, the answer came to him. And that is, that when they analyzed the Diet Pepsi data, they were asking the wrong question. They were looking for the perfect Pepsi, and they should have been looking for the perfect Pepsis.

Tangent note: Most TED talks are a treat, but this one is particularly funny and thought-provoking. If you haven’t seen it yet, consider paying it a visit. If you have an iPhone or iPod Touch, you may like the TED app too!

Over the last few years, many in the Social Media space have been on a quest to find the perfect ROI model for blogs, micro-blogs, wikis, social networking, social bookmarking and other animals in the ever-growing Web 2.0 zoo. You'll see opinions ranging from “we don’t need ROI for Social Media” to “Web 2.0 has to rely on a lagging ROI” to “ROI 2.0 comes from time savings”. In a way, they are all right and all wrong at the same time. Paraphrasing Doctor Moskowitz, there is no perfect Social Media ROI model, there are only perfect Social Media ROI models.

Since 2006, I've been talking to senior executives across multiple industries and geographies about the business value of Web 2.0, and have noticed a wide range of approaches to deciding whether (and how much) to invest in social computing. For companies at the forefront of the social media battleground, such as newspapers, book publishers and TV channels, investing heavily in new web technologies has often been a question of survival, and decision makers had significant leeway to try new ways of delivering their products and services, with the full blessing of their stakeholders. On the other end of the spectrum, in sectors such as financial services, social media is not yet unanimously regarded as the way to go. I've heard from a number of banking and insurance clients that, if Social Media advocates don't clearly articulate the returns they expect to achieve, they won't get the funds to realize their vision.

Most players in Government were also very skeptical until the Obama effect took the world by storm, creating a sense of urgency that was not as prevalent before. Since then, government agencies around the globe seem to be a bit more forgiving with high level business cases for social computing initiatives inside and outside the firewall. However, to balance things out, in most of the other industries, investments in innovation are being subject to even more scrutiny than normal due to the tough current economic environment. So, having a few ROI models in your pocket does not hurt.

The following ROI models are emerging, and we can expect a few more to appear in the near future.

1. Lagging ROI

Last year, I spoke to the CIO of a global retail chain and he had an interesting approach towards strategic investments in emerging technologies. Instead of trying to develop a standard business case based on pie-in-the-sky ROI calculations, he managed to convince the board of directors to give him more flexibility to invest in a few projects his team deemed to be essential for the long-term survival of the company. For those, he would provide after-the-fact ROI metrics, so that decision makers could assess whether to keep investing or pull the plug. He also managed expectations by saying upfront that some of those projects would fail, but doing nothing was not an option. By setting aside an innovation bucket and establishing a portfolio of parallel innovation initiatives, you can hedge your bets and improve your overall success rate.

2. Efficiency gains or cost avoidance

Many of the early Social Media ROI models are based on how much time you save by relying on social media, converting that to monetary terms based on the cost of labour. While this is certainly a valid approach, it needs to be supplemented by other sources of business value. Unless you are capable of mapping the saved minutes to other measurable outcomes derived from having more time available, the most obvious way to realize the value of being more efficient is to reduce head count, as in theory the group can do the same work as before with fewer people. If that's the core of your business case justification, it may backfire in the long term, as some people may feel that the more they use social computing, the more likely it is that their department will be downsized.
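A minimal sketch of that kind of business case, with entirely hypothetical figures (the function and every number below are my own illustration, not from any published model):

```python
def efficiency_roi(minutes_saved_per_week, employees, hourly_cost, investment):
    """Annualized cost-avoidance ROI from time saved by social media tools."""
    # Convert minutes saved into annual labour cost avoided
    annual_savings = minutes_saved_per_week / 60 * hourly_cost * employees * 52
    return (annual_savings - investment) / investment

# Hypothetical: 30 min/week saved, 200 employees, $50/h labour, $150,000 spent
roi = efficiency_roi(30, 200, 50, 150_000)  # ~0.73, i.e. roughly 73% in year one
```

The trap noted above is visible in the formula itself: the savings only become real money if the freed-up hours map to measurable output or to reduced head count.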

3. Proxy Metrics

Some of the ROI examples in the Groundswell book and blog rely on proxy marketing metrics, i.e., what would be the corresponding cost of a conventional marketing campaign to achieve the same level of reach or awareness. For example, when calculating the ROI of an executive blog, the authors measure value by calculating the cost of advertising, PR, SEO and word-of-mouth equivalents.
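As an illustration of the proxy approach (all dollar figures below are made up for the sake of the example, not the ones in Groundswell):

```python
# Hypothetical equivalent costs of matching an executive blog's reach
# through conventional channels
equivalents = {
    "advertising": 7_000,
    "pr": 5_000,
    "seo": 10_000,
    "word_of_mouth": 2_000,
}
blog_cost = 18_000  # hypothetical annual cost of writing and running the blog

proxy_value = sum(equivalents.values())      # 24,000
roi = (proxy_value - blog_cost) / blog_cost  # ~0.33, or 33%
```

The weakness of proxy metrics is also visible here: the "value" is whatever you would have spent elsewhere, not revenue actually generated.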

4. Product/Service/Process Innovation

The value of customer or employee insights that end up generating brand new products, services and processes, or improvements to existing ones, needs to be taken into account. Measuring the number of new features is relatively straightforward. Over time, you may want to figure out the equivalent R&D cost to get the same results.

5. Improved Conversions

Back to the Groundswell book, one of the ROI examples there shows how ratings and reviews can improve conversion rates (i.e., from all people visiting your site, how many more buy products because they trust the input from other consumers, compared to typical conversion rates).

6. Digitalization of knowledge

By having employees blogging, contributing to wikis, commenting or rating content, creating videos and podcasts, companies are essentially enabling the digitalization of knowledge. Things that used to exist only in people’s heads are now being converted to text, audio and images that are searchable and discoverable. It’s the realization of the asset that Clay Shirky calls the cognitive surplus. That was an elusive resource that didn’t have much monetary value before the surge in user-generated content. Naturally, a fair portion of that digitalized knowledge has very little business value, so you need to find metrics to determine how much of that truckload of content is actually useful. You can infer that by using cross-links, comments, ratings or even number of visits.
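One crude way to combine those signals (cross-links, comments, ratings, visits) into a single usefulness metric is a weighted score. The weights below are entirely hypothetical and would need tuning against your own data:

```python
# Crude usefulness score for a piece of digitalized content, combining
# engagement signals. Weights are hypothetical and need local tuning.

WEIGHTS = {"inbound_links": 5.0, "comments": 3.0, "ratings": 2.0, "visits": 0.1}

def usefulness(signals: dict) -> float:
    """Weighted sum of engagement signals; missing signals count as zero."""
    return sum(weight * signals.get(name, 0) for name, weight in WEIGHTS.items())

wiki_page = {"inbound_links": 4, "comments": 10, "ratings": 8, "visits": 500}
print(usefulness(wiki_page))  # 20 + 30 + 16 + 50 = 116.0
```

The absolute number means little on its own; it becomes useful when ranking content against each other, so you can estimate what fraction of the truckload is actually worth something.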

7. Social capital and empowerment of the workforce

There is certainly business value in having a workforce composed of well-connected, well-informed and motivated employees. What metrics can be used to assess the degree of connectivity, knowledge and motivation of your human resources? Several social computing tools offer indirect metrics that provide a glimpse of what you can exploit. Atlas for IBM Lotus Connections, for example, lets you see how your social network evolves quarterly, and can help determine how many people are associated with some hot skill (full disclosure: I work for IBM).

As you can see in several of the emerging models listed above, there are often three types of inputs to develop ROI calculations:

  • Quantitative metrics that can be obtained directly from the system data and log files
  • Qualitative metrics that are determined using surveys, questionnaires and polls
  • Dollar multipliers that attribute arbitrary monetary value to hard-to-assess items such as a blog comment or an extra contact in your social network

For the monetary value, I would suggest adopting a sensitivity analysis approach, working with conservative, average and aggressive scenarios and adjusting them over time. Just don't go overboard. As I stated in a previous post, there's an ROI for calculating ROI. ROI models should be easy to understand, as decision makers will often frown upon obscure calculations that require a PhD in financial modeling.
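A minimal sketch of that sensitivity analysis, with hypothetical dollar multipliers and activity counts (none of these numbers come from a real model):

```python
# Three-scenario sensitivity analysis for a social media ROI model.
# The dollar multipliers and activity counts below are hypothetical.

scenarios = {
    "conservative": {"per_comment": 0.50, "per_contact": 1.00},
    "average":      {"per_comment": 2.00, "per_contact": 5.00},
    "aggressive":   {"per_comment": 5.00, "per_contact": 15.00},
}

def roi(comments: int, new_contacts: int, investment: float, m: dict) -> float:
    """Classic ROI: (benefit - investment) / investment."""
    benefit = comments * m["per_comment"] + new_contacts * m["per_contact"]
    return (benefit - investment) / investment

# 10,000 comments, 2,000 new contacts, $20,000 invested
for name, m in scenarios.items():
    print(f"{name:12s} ROI: {roi(10_000, 2_000, 20_000, m):+.0%}")
```

Under these made-up numbers the three scenarios land at roughly -65%, +50% and +300%, which is exactly the point: the spread tells decision makers how sensitive the result is to the multipliers, which you then adjust as real data comes in.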

In summary: we don't need one Social Media ROI model; we need many of them. None of the ones emerging now is perfect, and none ever will be. You may need to keep a few in your toolkit and develop a sense of which one to use in each case.

Previous ROI entries:

ROI 2.0, Part 1: Bean counters vs Innovators – The need for a real exchange of ideas
ROI 2.0, Part 2: Storytelling and Business Cases

IBM: Building a smarter planet

6 11 2008

Note: most of you probably know, but for full disclosure, I work at IBM.

Update: just added some more meat to the post. Succinct is a quality that I definitely don’t have.

Sam Palmisano is speaking this morning at the Council on Foreign Relations. You can find all about it in today's edition of the New York Times: “IBM’s Chief Sees Technology Leading a Recovery”.

Andy Piper has just blogged about it, so I’ll try not to just repeat what he said – but I whole-heartedly agree with him.

In our daily, mundane working life at IBM we go through mostly small peaks and valleys, but from time to time we get inspirational moments like this, when it feels good to be part of IBM. Google claims that their mission is to “organize the world’s information and make it universally accessible and useful”. The smart planet point-of-view tells me that we are paying attention beyond just data. IBM’s reach and breadth positions it uniquely to aim higher than that. We have the potential to be a key enabler of a smarter, sustainable, better world by applying technology and business acumen. Our 3-letter acronym never looked so visionary.

I worked in University research for some time, doing obscure biochemistry work around fireflies, and also on the interactions between ferns and a Brazilian species of moth. When you are deep at work, you keep wondering why you are doing that, and how that is going to change anything in the world. I actually gave up on becoming a scientist mainly because I was not able to see the big picture, and I couldn’t explain to a normal person what my research was all about.

I firmly believe that having an easy-to-articulate vision is fundamental to keeping focus and understanding where we all fit in the big picture. A vision does not accomplish anything by itself, but it fuels our passion, especially during the dull moments of doubt, like when doing expenses or sitting for hours at airports.

Of course, the actual challenge is to go from vision to realization. In a week where change is in everybody’s mind, the announcement’s timing is impeccable. I hope that a few years from now I can come back to this post and grin, seeing that the promise was fulfilled.

Yes, we can. But “will we?” is the question for all of us to answer.

Meritocracy, Pauline Ores and the multi-dimensional IT Professional

30 09 2008

Yesterday, I started reading “Crowdsourcing: why the power of the crowd is driving the future of business”, by Jeff Howe. I did not actually buy the book; it was given to me as part of the attendee package at the IBM Social Media event I attended 2 weeks ago at Ogilvy & Mather.

The book has good insights, covering the emerging reputation economy where, contrary to conventional economics, rewards are often measured not in dollars but in the desire to contribute to a worthwhile cause, or just the “sheer joy of practicing a craft” and some peer recognition for it. I like this quote in particular:

Crowdsourcing turns on the presumption that we are all creators – artists, scientists, architects, and designers, in any combination or order. It holds the promise to unleash the latent potential of the individual to excel at more than one vocation, and to explore new avenues for creative expression. Indeed, it contains the potential – or alternately, the threat – of rendering the idea of a vocation itself an industrial-age artifact.

Many years ago, I had a manager who told me that he could not give me a good rating in my annual assessment because I had done 3 totally different things that year: started as a Unix admin, moved to a performance engineering role, and ended the year as a developer. According to him, you had to pick one role and stick to it, as nobody could do more than one thing really well. Needless to say, I couldn’t disagree more. It would be ok if he thought that I had tried 3 different things and hadn’t done particularly well in some or any of them, but saying that nobody can do that, and recommending that anybody become a one-dimensional professional, sounds very Fordist to me.

Some people ask me why I blog about apparently non-work-related subjects, such as vacation trips, soccer, or Moleskine art. I wish I could blog even more about things not related to Web 2.0 or social media or conferences. We all have multiple vocations. I know IBMers who are great photographers, parents, writers, cooks, graphic artists, actors, athletes and scientists, and there is no reason for any of us to strangle those vocations to focus solely on our current professional role. In fact, both our careers and our workplace can greatly benefit from being more multi-dimensional. As work becomes more virtual, global and dynamic, and the pace of change accelerates, we all need to be more like Da Vinci and Marco Polo than assembly-line workers.

Furthermore, Web 2.0 and Social Media are leveling the professional playing field. Two quotes by Pauline Ores (who is the IBM personification of Social Media Marketing) during the O&M event caught my attention:

1) In the Social Media world, the most powerful person is the one who shares the most.
2) Control in Social Media is like grabbing water: the stronger you grab, the less you hold. There’s a right way to retain water, but not by being forceful.

Disclaimer: that’s my recollection of what she said, so don’t hold her accountable for the exact words :-)

Not too long ago, knowledge workers had incentives to hold what they knew close to their chest, as a way of keeping their employability. The more they kept to themselves, the more their company and fellow employees would depend on them. This happened because the distribution of information was very inefficient, and the higher up you were in the food chain, the more channels you had to be known by others.

In the YouTube age, where anybody and everybody can broadcast themselves inside and outside the firewall, the advantage of saying things from a higher hierarchical post has shrunk considerably. According to Howe, a meritocracy is now in place, where the only thing that matters is the quality of the work itself. If you believe you are the Subject Matter Expert in SOA, Internet Marketing, z/OS or Performance Engineering, you need to make evidence of that widely available. An increasing number of people won’t care much if your title says “The know-all see-all tech guru” or “Executive <something>”. If you know it, it should be made evident by the crumb trails you leave behind you. Your knowledge needs to be searchable and discoverable (not sure if those words exist, but you catch my drift).

Sacha Chua is one of the best examples I see of that trend. I learned a lot just from observing her working habits over the last year or so. Ten years ago, a recent hire straight from university would be years away from being known and respected across the enterprise. By sharing what she knows and what she does to the extreme, she is arguably more influential than others with many years of job tenure. This is not a Generation Y thing, as I see her more as an exception than the rule even among her young cohorts, and there are many boomers and Xers like her at IBM and elsewhere.

The one line summary for this post: If perception is reality, you only know what you share.

Minor update: fixed a typo in the final quote.

Sapere aude: Dare to think on your own

22 07 2008

I remember as a kid my mother explaining to me that, in Japan, people referred to Korea as “cho-sen”, meaning “land of morning calm”. Having been a pain in the neck since my early years, I always wondered how one could possibly say “land of morning calm” using just two syllables – that’s when my mother gently suggested that I shut up :-) .

Latin shares some of that hidden magic with Japanese and can also express a lot in a few words. Ad augusta per angusta, Caveat emptor and Urbi et orbi all seem to have this elastic semantic property. My favourite among the short Latin quotes is sapere aude, which mysteriously means “Dare to think on your own”.

In the last couple of years, I have read my fair share of business books (or at least portions of them, as I’m admittedly a lousy reader):

  • Get things done
  • The long tail
  • The world is flat
  • Wikinomics

and I’m currently reading:

  • Web 2.0: a strategy guide
  • Groundswell: Winning in a World Transformed by Social Technologies
  • Here comes everybody
  • Thinkertoys

While many things can be learned from those books, they are written in a way that can lead us to treat them as gospel rather than simply as sources of opinion.

Likewise, many times we see blanket statements disguised as common wisdom used to justify policies or courses of action. Here are some examples:

  • You can’t teach an old dog new tricks
  • Jack of all trades, master of none
  • Perception is reality

The real world is so much more complex than that. And I don’t mean to say I’m immune to it: from time to time I catch myself unconsciously trapped in that herd mentality. That’s why I enjoy hearing from people who disagree with me, as they may be my only chance to snap out of it.

If we have to choose a blanket statement to adopt, I like this one better: “when everybody thinks alike, nobody thinks much”. If everything looks rosy and everybody is agreeing with you, think twice. And above all, sapere aude.

Santos-Dumont, The Wright Brothers and Innovation

17 07 2008

This is a post I wrote a long time ago in my internal blog at work and decided to publish here too, as it still seems current.

Unless you’re Brazilian or an aviation enthusiast, chances are that you have never heard of Alberto Santos-Dumont. Most people in the world would not hesitate to say that the Wright brothers invented the airplane. However, some claim that “the only witnesses to the Wright brothers flights (…) were typically close friends and family”, while “Santos-Dumont made his flights in public, often accompanied by the scientific elite of the time, then gathered in Paris” (read more about it here and here). The picture above (from Wikimedia Commons) shows one of his flights in the Bagatelle field (close to the Eiffel Tower). PBS aired “Wings of Madness”, a good documentary about Santos-Dumont, last year. Here are some excerpts from the program description:

In the early 1900s, the most acclaimed celebrity in Europe, and arguably the world, was a fashionable, frail, Brazilian-born aviator named Alberto Santos-Dumont. (…)Tiring of balloons, Santos built the 14bis, an ungainly tail-first flying machine that nevertheless made the first powered airplane flight in Europe in 1906. At that time, the Wright brothers’s secret early flights were widely disbelieved, so Santos and his adoring public were convinced he was the first to fly. When Wilbur made his triumphant European tour in 1908, Santos had to face the terrible realization that the Wrights were the true pioneers after all. But just before his long slide into illness began, he designed an exquisite new airplane out of bamboo: the Demoiselle, or Damselfly. One of the classic aircraft of the pioneering era, it was the true forerunner of today’s ultralight planes.

An interesting aside from this discussion is that Gartner’s hype cycle around emerging technologies was already on full display 100 years ago: Santos-Dumont went from the technology trigger all the way to the plateau of productivity in a decade, and was so hyped for a while that the Dayton Daily News stated in 1903 that the Wright brothers were emulating him (Orville and Wilbur lived in Dayton).

In any case, the true answer to the question “Who invented the airplane?” is: none of them. Or better yet, all of them: Orville, Wilbur, Alberto and several other pioneers should all be credited with the invention of the airplane. We tend to like simple answers, so we just accept that Gutenberg invented printing, that Thomas Edison invented the light bulb, and that Christopher Columbus discovered the Americas. In reality, all inventions and findings in the world are composites of ideas and experiments run by several people. That’s why I strongly believe that our current models governing intellectual property are outdated and prevent us from unleashing the true power of innovation. Our copyright laws are way too strict, and patents many times are inhibitors, not drivers, of new inventions.

Note that I’m not advocating that all IP protection should be dropped. However, the big accomplishment that should be rewarded is not the idea, but the execution. Ideas are cheap; good implementation is the real challenge. This concept applies even in the case of artistic works like music, movies or books. Just imagine what would happen if everything was governed by a Creative Commons-like license, where anybody and everybody could share, remix and reuse whatever they want. Oftentimes we see songs that were very flat in their original recording become masterpieces with some novel interpretation. If we lower the barriers, even disasters could be rescued. Can you improve on “The Godfather” I and II? Unlikely. “The Godfather III”, on the other hand, had some good ideas ruined by a few really lame ones. The potential for a great movie was there, but it was never realized. You’re just left wondering “what if”. Of course, movies are not that easy to tweak, but scripts are. I bet that the last three Star Wars movies could benefit from better writing.

It would be interesting as a social experiment to establish a 5-year moratorium on all IP-related claims and see what would happen: chaos and the-end-of-the-world-as-we-know-it or an explosion in innovation. At a minimum, this approach would help us to find out how much control is actually needed to foster innovation.

New York – Part 2 of 2: The business

29 04 2008

The client event I was attending in New York was held at the IBM office in midtown, just a couple of blocks from Central Park. Nice office, even better location, if you ask me.

[Photos: IBM 590 Madison; the former IBM Tower]

In the afternoon, we spent a few hours visiting several retail locations in Manhattan, courtesy of an IBMer who knows that area inside out and was kind enough to pick the cream of the crop. That was a great opportunity to get a glimpse of what retail will look like in the near future by observing what’s being tried in the flagship stores. Here’s the highlight reel.

  • The Cube Apple Store – I’ve been to several of those in Canada and in the U.S., but this one is special. Open 24/7, 365 days a year, this place is incredibly crowded during the day, so I highly suggest you go there after hours – I went twice, at 4 pm and 2 am, and had a much more civilized experience in the wee hours. The store is actually underground, and the glass cube is the street-level entrance. Taking the stairs down gave me the feeling that I was entering the Louvre, as the cube reminds me a lot of the pyramids by I. M. Pei. Somehow, this store feels like a temple dedicated to the Apple brand and technology. I posted some pictures below, but you can see much better ones, and some movies too, here.

[Photos: Apple Store, 5th Avenue]

  • Niketown – Speaking of Apple, the Nike store prominently co-brands Nike+ with Apple. I’m not a runner – in fact I hate running – but this is so cool that I may even try it one day. The whole store is very well thought out, from the colour palette to the overall layout and the glass tubes that transfer items from the storage rooms to the PoS (point-of-sale) stations. Another cool feature is the NIKEiD.STUDIO: you can create shoes customized to your taste and have them delivered to you – if you live in the U.S., of course.
[Photos: Nike store]

  • Nokia Flagship Store – A three-story mecca for cell-phone fans. The huge screens behind the phones are interactive: they can react to actions such as text messaging and handling of the mobile devices. Very cool and blue. You can get more details about it here.

[Photo: Nokia flagship store]

  • Citibank – As city regulations around the world become stricter about visual pollution, retail stores are getting more creative, using colour and shape as brand identity cues. I just mentioned the blueness of Nokia’s store; Citibank uses the chevron shape in the façade of its branches. This particular branch is very modular, with sliding internal walls to provide ample space during business hours and access to ATMs only after hours. Another curiosity there is a terminal for client feedback, which was used to request that a water cooler be brought back after the branch redesign. Who would’ve thunk that clients would miss the good old drinking fountain?
[Photo: Citibank branch]

  • Bank of America – This branch has two interesting features: a bookcase with finance-related books & magazines in a comfortable living-room setting, and banners at the top with a timeline showing how deeply BoA’s history is ingrained in U.S. history. I know it sounds trivial, but it was very well executed. Unfortunately, no pictures could be taken inside.

[Photo: Bank of America branch]
  • Commerce Bank – Open extended hours, including Saturdays and Sundays, this branch has kiosks with free souvenirs (like pens) and also a coin-counting game for kids: if you guess the total amount right, you’re eligible for a prize.

[Photo: Commerce Bank]

  • ING Direct Café – This is the one that blew my mind. It is not a bank office or a branch; it’s more like a Starbucks store, including free Internet access, and it was insanely packed when we visited. Why would a bank do that? Many reasons, including probably some that I haven’t even thought of yet. Having coffee is a very social thing, so people just go there for a break, and while they’re there, there are some cross-selling opportunities. On the second floor, there’s a space for people to meet or learn about financial services. What a great way to associate a pleasant experience with a strong brand. They also sell souvenirs, including toys for kids featuring Cedric and Amy, the ING characters from Planet Orange. If you were wondering why I tagged this entry as “web20forbiz”, there is your link! You can read more about it here and here.

[Photos: ING Direct Café]

This was a really long post, sorry about that. I should give a prize too to anybody getting to the end of it.
