What’s next? Social Media and the Information Life Cycle

10 10 2011
[Image: The Crystal Ball, via Wikipedia]

As previously seen in Biznology:

Back in the days when “Web 2.0” was a hot buzzword, many people asked what “Web 3.0” would look like. Even though that question now sounds as outdated as an X-Files rerun, the quest for “what’s next?” is always on our minds. I’m no better prognosticator than anybody else, but my best answer would be: look at where the inefficiencies are. That is in no way a good indicator of what 2012 or 2013 will look like; some inefficiencies may take decades to be properly addressed. But if you are trying to guess where we’ll see the next revolutionary step in social media–as opposed to the much-easier-to-predict incremental improvements–you have to focus on where the biggest opportunities are, the undiscovered country somewhere out there. Analyzing the current landscape and the evolution of information markets through a product life cycle framework may help in developing some educated guesses about what the future may bring.

This is the third post in the current series discussing the differences between making and moving information and knowledge versus physical goods.

Since the first businessperson started bartering goods in some cave thousands of years ago, products have gone through a more or less standard life cycle: they were conceived, designed, produced, filtered for quality, distributed, sold, consumed, serviced, and finally disposed of. Over the course of history, that life cycle underwent several enhancements, most of them minor and mundane. A few of them, like the introduction of currency, steam power and the production line, had a huge impact on the overall efficiency of moving products around. Naturally, even though in retrospect those inventions seem like no-brainer milestones in the history of business and commerce, in reality their impact was scattered over a number of years. It’s likely that many who lived through those transitions did not realize how big those changes in fact were. One important indication of their revolutionary nature is how each one of them enabled an incredible amplification in the volume or reach of one or more stages of the product life cycle. Currency obviously made selling and trading more practical, steam power reduced the dependency on animals and people for mechanical work, and the production line improved both manufacturing capacity and quality control.

Non-physical items, such as ideas, information and knowledge, also go through a life cycle of their own, but some of the steps are still not as sophisticated as the ones described above. Roughly speaking, that cycle consists of: creating, verbalizing (coding), capturing, distributing, filtering, consuming (decoding), and disposing.

Creating and Coding
Thoughts and emotions run through everybody’s heads all the time, raw, unruly, uncensored. Some are then internally processed by the conscious mind and made sense of. A subset of those are mentally verbalized, and an even smaller portion is finally vocalized and communicated to others. The effect of technology on this early part of the cycle has been modest: it still happens mostly the way it did thousands of years ago.

Capturing
Capturing knowledge in the early days of humanity basically meant drawing on cave walls. Drawing became more elaborate with the advent of writing–which could be described as the codified doodling of one’s thoughts in an enduring medium–ensuring that a person’s knowledge could survive well beyond the physical constraints of time and space. More recently, photographs, movies, and audio recordings permitted the capture of an increasing variety of information, but in practice they were still just enabling us to capture a concept or a memorable moment, or to tell a story, more efficiently.

Distributing
The introduction of movable type removed many of the limitations around the distribution of that content. Radio, cinema, TV, and the Internet of the 1990s did more of the same for a broader set of records of human knowledge. It was an expansion in scope, not in nature, but each new medium added to the mix represented yet another powerful distribution amplifier.

Filtering and Consuming
All that capturing and distributing of information still relied heavily either on physical media or on renting expensive properties and expertise. Whole industries were created to exploit those high transaction costs. The inefficiencies of the system were one of the most powerful filtering mechanisms in that information life cycle. The big mass media players essentially dictated what was “media-worthy” for you to consume. In fact, that’s what most of us were known as: consumers of goods and information.

Disposing
Most information that is digitized today is not formally disposed of; it’s just abandoned or forgotten. Archived emails and SharePoint sites are examples of that: how often do you search your company’s intranet and find information that is completely outdated and irrelevant?

The social media role, so far
Much of what we call social media today, be it Internet forums, blogs, podcasts, photo, video and file sharing, social networks, or microblogging, along with the advances in mobile communications, contributed significantly to bringing those transaction costs closer to zero. Billions of people can now not only capture and distribute information at low cost, but also consume more of it and consume it faster. Not only that: the ability to add links to status updates, retweet, and share enabled regular people to filter and distribute content too. Everybody became a content curator and distributor of sorts, often without even realizing it.

So what’s next?
Most likely, we’ll continue to see an increasing sophistication of the inner steps of the information life cycle. We’re already witnessing better ways to capture (the IBM Smarter Planet initiative is a good example), filter (Google Plus’ circles), distribute (Tumblr, StumbleUpon) and consume (Flipboard, Zite) information. However, the obvious under-served stages in the information life cycle are the two extremities: creating, coding, and capturing on one side, and disposal on the other.

On the creating, coding and capturing end, the major current inefficiency is loss of information. Of the millions of ideas and thoughts that come to people’s minds, the vast majority vanishes without a trace. Twitter and status updates showed a very raw way of capturing some of that, but they are still cumbersome to use–and often impractical:

[Sign via funnysigns.net: “In case of fire, exit the building before tweeting about it”]

Apps like Evernote and Remember The Milk are evolving to make it much easier to record our impromptu thoughts, but the actual potential is enormous (suggested reading: The Future of Evernote). Even a real brain dump may be more feasible than most of us initially thought. As we learned a few days ago, UC Berkeley scientists have developed a system to capture visual activity in human brains and reconstruct it as digital video clips. The results are mesmerizing:

[Video: Reconstructing brain images into video]

But it does not stop there. Ideas generate ideas. Capturing makes indexing and sharing possible. Imagine how much more knowledge could be created if we had better ways to share and aggregate atomic pieces of information. We might not like Facebook’s timeline and Open Graph real-time apps in their first incarnations, but they are just giving us a peek at what the future–and the past–may look like.

Decoding the existing content out there would be a good start. My first language being Portuguese, I often find amazing content on Brazilian websites, or brilliant posts shared by my Portuguese-speaking colleagues, that is still not easily consumable by most people in the world, making me wonder how much I’m missing by not knowing German, Chinese, Hindi, or Japanese. One can always argue that there are plenty of tools out there for translating Internet content. True, but the truly user-friendly experience would be simply browsing websites or watching videos in Punjabi or Arabic without even noticing that they were originally produced in another language.

Finally, one of the unsolved problems of the information age is the proper disposal of information. Since storage is so cheap, we keep most of what is created, and this is increasingly becoming an annoyance. I often wish that my Google search could default to limiting results to the last year or the last month only, as the standard results normally surface ancient content that is no longer relevant to me. Also, most of us just accept that whatever we say online stays online forever, but that is just a limitation of our current technology. If we could for a second convert all that digital information into a physical representation, we would see a landfill of unwanted items and plenty of clutter. Of course, “disposal” in the digital world does not need to mean complete elimination–it could just be better ways to put things on the back burner or in the back-room archive. For example, we could use sensory cues of aging information. You can always tell whether a physical book was freshly published or printed 20 years ago, based on its appearance and smell. Old Internet content could be shown with increasingly yellow and brown background tones, so that you could visually tell the freshness of the content.
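To make that idea a bit more concrete, here is a minimal sketch of the “yellowing” cue in TypeScript. It assumes pages expose their publication date in a data-published attribute; the attribute name, the 20-year horizon and the color ramp are all illustrative assumptions, not an existing standard:

```typescript
// Minimal sketch of the "yellowing content" idea: map a document's age to a
// progressively warmer (yellow/brown) background tint. Names and thresholds
// are illustrative assumptions, not part of any real product.

function ageInYears(published: Date, now: Date = new Date()): number {
  const msPerYear = 1000 * 60 * 60 * 24 * 365.25;
  return Math.max(0, (now.getTime() - published.getTime()) / msPerYear);
}

// Interpolate from fresh white (0 years) toward an aged, yellowed tone (20+ years).
function agedBackground(published: Date): string {
  const t = Math.min(ageInYears(published) / 20, 1); // 0 = new, 1 = 20+ years old
  const r = 255;
  const g = Math.round(255 - 30 * t); // drifts toward 225
  const b = Math.round(255 - 90 * t); // drifts toward 165
  return `rgb(${r}, ${g}, ${b})`;
}

// Example: tint every element that declares a publication date.
document.querySelectorAll<HTMLElement>("[data-published]").forEach((el) => {
  const published = new Date(el.dataset.published!);
  if (!Number.isNaN(published.getTime())) {
    el.style.backgroundColor = agedBackground(published);
  }
});
```

A browser extension or reader app could apply something along these lines to search results or intranet pages, giving stale content the visual patina of an old paperback.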

Some of the above sounds crazy and superfluous, but the idea of Twitter probably sounded insane less than 10 years ago. Are we there yet? Not even close. But that is what makes this journey really interesting: what is coming is much more intriguing than what we have seen so far. Like I said before, we’re still witnessing the infancy of social technologies.





Moving things vs. moving ideas

10 10 2011


As previously seen in Biznology:

All of our existing controls around content, intellectual property, and information exchange were developed when moving information around was an ancillary function to what mattered at the time: moving goods efficiently to generate wealth. The most powerful nations and organizations throughout the centuries were the ones that mastered the levers that controlled the flow of things. That pattern may soon be facing its own Napster moment. Information is becoming a good in itself, and our controls have not yet adapted to this new reality. In fact, much of what we call governance consists of ensuring that information moves very slowly–if at all. The entities–countries, companies, individuals–that first realize that a shift has already taken place, and re-think their raison d’être accordingly, might be the ones who will dominate the market in this brave new world.

In my last Biznology post, I used a comparison between information and physical goods to support an argument that social technologies still have a long way to go to be considered mature. When information itself becomes the good, and social interactions become the transportation medium, some new and interesting economic patterns may emerge.

Scarcity is a natural attribute of the physical world: a house, a car, or a TV set cannot be owned by multiple people at the same time, nor can one person provide hairdressing or medical services to several customers simultaneously. Our whole economic model is built on top of that: theories around economies of scale, price elasticity, bargaining, patents and copyright all depend strongly on “things” or “services” being limited. We even created artificial scarcity for digital items such as software and audio files, in the form of license keys and DRM, so that they could fit our “real world” economy.

That model worked OK when being digital was the exception. However, more and more “things” are becoming digital: photos, movies, newspapers, books, magazines, maps, money, annotations, card decks, board games, drawings, paintings, kaleidoscopes–you name it. Furthermore, services are increasingly less dependent on geographical or temporal proximity: online programming, consulting, doctor appointments, tutoring, and teaching are sometimes better than their face-to-face counterparts. While most online services are still provided on a one-off basis, the digitization of those human interactions is just the first step toward making them reusable. TED talks and iTunes University lectures are early examples of that.

Of course, I’m not advocating a world without patents or copyrights. But I do think that it’s important to understand what that world would look like, and assess if the existing controls are playing in our favor or against us. Even if we do not dare to change something that served us so well in the past, others may not have the same incentives to keep the status quo.

Another factor to consider is the leapfrog pattern experienced by the mobile telephony industry: countries that were behind in the deployment of phone landlines ended up surpassing those in the developed world in the adoption of cellular phones. Similarly, countries that never developed a sophisticated intellectual property framework may be able to start fresh and put a system in place where broad dissemination and re-use trumps authorship and individual success.

Finally, the emergence of social technologies over the last 10 years showed the power of a resource that has been underutilized for centuries: people and their interactions with each other. The essence of what we used to call Web 2.0 was the transformational aspect of leveraging social interactions throughout the information value chain: creation, capture, distribution, filtering and consumption. The crowd now is often the source, the medium, and the destination of information in its multiple forms.

The conclusion is that the sheer number of people that can be mobilized by an entity–a nation, an organization or an individual–may become a source of wealth in the near future. Of course, peoplenomics is mostly a diamond in the rough for now. A quick comparison between the top 20 countries by GDP per capita (based on Purchasing Power Parity) and the top 20 countries by population shows that the size of a country’s population is still a poor indicator of its wealth–only the United States, Japan and Germany appear on both lists. Whether the economic value of large populations and efficient information flows will ever be unleashed is anybody’s guess, but keeping an eye out for it and being able to adapt quickly may be key survival skills in a rapidly changing landscape.





A Skewed Web: Innovation is on the outskirts of social media

15 09 2010
[Image: Honeybees with a nice juicy drone, by dni777 via Flickr]

As previously seen in Biznology:

As I discussed in my post last month, it’s a skewed Web out there. A multitude of online social filters were developed over the last 15 years to address our perennial information overload curse. From Google’s PageRank, we went all the way to tag clouds, social bookmarking, Twitter trending topics and Gmail’s Priority Inbox, trying to find ways to make what matters float to the top. However, most of these social filters are based on some variation of a “majority rules” algorithm. While they all contributed to keeping information input manageable, they also skewed the stream of information reaching us toward something more uniform. Will crowdsourcing make us all well-informed drones? Ultimately, it may depend on where you’re looking: the center or the fringe of the beehive.

Almost two years ago, Clay Shirky boldly stated that information overload was not a problem, or at least not a new one; it was just a fact of life at least as old as the Library of Alexandria. According to Shirky, the actual issue we faced in this Internet age was filter failure: our mechanisms to separate the wheat from the chaff were just not good enough. Here is an excerpt from his interview with CJR:

The reason we think that there’s not an information overload problem in a Barnes and Noble or a library is that we’re actually used to the cataloging system. On the Web, we’re just not used to the filters yet, and so it seems like “Oh, there’s so much more information.” But, in fact, from the 1500s on, that’s been the normal case. So, the real question is, how do we design filters that let us find our way through this particular abundance of information? And, you know, my answer to that question has been: the only group that can catalog everything is everybody. One of the reasons you see this enormous move towards social filters, as with Digg, as with del.icio.us, as with Google Reader, in a way, is simply that the scale of the problem has exceeded what professional catalogers can do.

While some still beg to differ on information overload not being an issue–after all, our email inboxes, RSS readers, and Facebook and Twitter streams never cease to overwhelm us–we tend to welcome every step in the evolution of smarter filters.

The whole lineage of social filters, from Google’s PageRank through Digg and Delicious to Twitter’s trending topics, mitigated one problem–information overload–but exacerbated another: we were all getting individually smarter, but collectively dumber. By letting the majority or the loud mouths dictate what was relevant, we ended up with a giant global echo chamber.
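To see why that tends to happen, here is a deliberately naive sketch of a “majority rules” filter in TypeScript. The Item shape, the exposure boost and the round-by-round simulation are illustrative assumptions, not a description of any particular site’s algorithm:

```typescript
// A naive "majority rules" filter: rank items purely by an aggregate
// popularity signal, then feed exposure back into that signal, so whatever
// is already popular keeps surfacing.

interface Item {
  id: string;
  title: string;
  votes: number; // likes, diggs, retweets, links... any popularity signal
}

// Show the N most popular items, nothing else.
function majorityRulesFeed(items: Item[], topN = 10): Item[] {
  return [...items].sort((a, b) => b.votes - a.votes).slice(0, topN);
}

// The feedback loop: what gets shown gets more votes,
// which gets it shown again next round.
function simulateRound(items: Item[], exposureBoost = 5): Item[] {
  const shown = new Set(majorityRulesFeed(items).map((i) => i.id));
  return items.map((i) =>
    shown.has(i.id) ? { ...i, votes: i.votes + exposureBoost } : i
  );
}
```

Run simulateRound a few times and the early leaders pull further and further ahead–the echo chamber dynamic in miniature.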

We were all watching Charlie biting Harry’s finger, and Justin Bieber trying to convince (or threaten) us that we will never, ever, ever be apart. That video, featuring Ludacris, surpassed 300 million views in seven months on YouTube alone, taking the all-time #1 spot. An unverified claim about Bieber using 3% of Twitter’s infrastructure being passed off as news by traditional media outlets is just the latest example of how far we have gone down the madness-of-crowds road.

This of course is not a new problem. Back in the early 1980s, MTV was running Michael Jackson’s 14-minute “Thriller” video twice an hour. The trouble here is just the magnitude of it. A potential downside of this mass-media-on-steroids uniformity is that a homogeneous environment is not the best place for innovation to flourish. Borrowing from paleontologist Stephen Jay Gould: transformation is rare in the large, stable central populations. Evolution is more likely to happen in the tiny populations living in the geographic corners: “major genetic reorganizations almost always take place in the small, peripherally isolated populations that form new species.”

If you are looking for the next big thing, or trying to “think different,” or to be creative and innovative, you need to look beyond the center. The core will tell you what’s up, so that you’ll be “in the know.” The fringe will show you what’s coming next. To paraphrase William Gibson, the future is peripherally distributed.





iPad – First impressions

4 04 2010

Yesterday morning, I took my visiting family to Niagara Falls, which is oh-so-conveniently close to the US border, so of course I *had* to pay a visit to the Apple Store at the Walden Galleria mall and buy the iPad I had reserved “just in case” :-). At least that’s the story I tell myself to justify traveling 400 km just to address this totally illogical gadget lust.

I have not had much time to blog, or do much else actually, over the last 40 days or so, being busy on both the work and personal fronts–we had a few folks staying with us and others visiting us too. So this post is going to be a bit rushed, just collecting my first impressions of the most anticipated iThing of the year. On top of that, I’m typing this on the iPad itself, using the revamped WordPress app, so pardon the clunkiness of this post. There you go, in bullet point format:

- Overall, a huge thumbs up to Apple for adding a new category to the already crowded portable computing landscape. The person sitting beside me at the mall was completely unaware of what the fuss at the Apple Store was about, thinking they were giving something away. When I opened the box, she gasped: “OMG, that’s a gigantic iPhone!” It definitely looks like that, but after a day using it, I can honestly say that it’s much more than that. As biology has repeatedly shown us, small increments in features can sometimes drive major leaps in innovation–upright posture and opposable thumbs being just two recent examples. The iPad is not just a big iPhone or iPod Touch, not a laptop without a keyboard, not a crippled netbook, not a fancier Kindle, nor a Mac version of the Tablet PC. It’s in its own category, and it will follow its own evolutionary branch. Personal computing speciation just occurred, and we witnessed it firsthand. Of course, this does not necessarily mean that the iPad will succeed in its current incarnation. But it will influence what others will be doing over the next few years.

- The big positives: the device is fast, the screen is crisp, the layout is gorgeous, and it feels good in your hands. Battery life is just unbelievably long. Maps, iBooks, Photos, and the various comics/magazine/newspaper/drawing apps all feel brand new on the big screen. And that’s just a glimpse of what’s coming. The iPad is the best portable device for consuming content that I have ever used.

- The negatives are well publicized already: the iPad would greatly benefit from a front-facing camera, multitasking, and more flexibility for applications to share context and objects, including files. All these limitations have one thing in common: they are related to content creation, not consumption. From a market perspective, it makes a lot of sense to target content consumers first, as they represent the vast majority of buyers. I also suspect those limitations are all part of Apple’s overall strategy to keep us buying the latest and greatest every few years or so. The Cupertino-based brain trust creates products with enough features to make them desirable, but very rarely offers everything that’s technically feasible in any given release. That way, when an iPad with a camera comes out next year, it will sell in loads again. Furthermore, we sometimes waste too much time thinking about what we don’t have, as opposed to what’s there for us to enjoy right now. That’s like being in Paris and complaining about not having a good beach to go to.

That’s it for now!





Felipe Machado and Andrew Keen: Thinking outside the social media echo chamber

7 02 2010

Back in November, I had the pleasure of having lunch with Felipe Machado, multimedia editor for one of the largest newspapers in Brazil, and a former business partner in a short-lived Internet venture in the mid-nineties. The get-together was brokered by Daniel Dystyler, the consummate connector in the Gladwell-esque sense of the word.


[Photo: Felipe Machado and Daniel Dystyler]

Felipe is an accomplished journalist, book author and musician, and I deeply respect his ability to connect the dots between the old and the new media. I actually often disagree with him: I tend to analyze the world through a logical framework, while Felipe relies on intuition and passion. That’s exactly why I savour every opportunity to talk to him. If you understand Portuguese, you may want to check out his appearance on “Manhattan Connection” (Rede Globo, the 4th largest TV network in the world), talking about the future of media:

During our lunch conversation, Felipe mentioned Andrew Keen’s “The Cult of the Amateur” as a book that broke away from the sameness of social media authors. Coincidentally, I had read an article about that book the day before, so I took the bait and borrowed it from the local library the first week after I came back from Brazil.

This may come as a surprise to anybody who knows me, but if you work in anything related to new media, social media, Web 2.0 or emerging Internet technologies, I highly recommend you read Keen’s book. Make no mistake: the book deserves all the criticism it got–you can start with Lawrence Lessig’s blog post for a particularly heated discussion of the limitations of Keen’s arguments. “The Cult of the Amateur” is, ironically, concrete proof that having editors and a publisher behind a book does not necessarily make it any better than, say, a blog post.

The reason I recommend a not-so-good book is this: Andrew Keen represents a large contingent of people in your circle of friends, co-workers, clients and audience–people who hear your social media message and deeply disagree with you. They may well be the vast majority that does not blog, does not use Twitter, and couldn’t care less about what you had for dinner last night. They often don’t say it out loud, so as not to be perceived as Luddites, but they are not convinced that social media is making things any better, or that Web 2.0 is inevitable.

Those are the folks you should pay attention to. No matter how much you admire the work of Chris Anderson, Clay Shirky, Jeff Howe and other social media luminaries, you are probably just hearing the echo of your own voice there. You need to understand the concerns, the points of view and the anxiety of the Andrew Keens of the world toward the so-called social media revolution. Failing to do that will prevent you from crossing the chasm between early adopters and everybody else.

Reaching out to the members of our social network who are not on Facebook, LinkedIn or Twitter can go a long way toward helping us all realize that the real world is MUCH BIGGER than Web 2.0 and social media (as I learned from Jean-François Barsoum a long time ago).





The joke, the circus and the soap-opera

14 12 2009

A few people who saw my Enterprise 2.0 Anti-patterns presentation on SlideShare asked what I meant by “the joke, the circus and the soap-opera”. That came from a post I wrote for Biznology a long time ago, on Sep 15, 2008. It’s old news now, but for the sake of completeness I’m republishing it here. I updated some of the broken links and also moved the “I work for” disclaimer from IBM to RBC :-)

What role do timing and duration play in your Web 2.0 strategy? Marketing experts have long emphasized the importance of media selection and scheduling decisions, but seeing how traditional companies have been exploiting the Internet over the last few years shows that there are still lessons to be learned in that arena.

Imitation may be the sincerest form of flattery, but it doesn’t always pay off when it comes to your online marketing strategy. All the hype around Web 2.0 and user-generated content a couple of years ago initially led to some embarrassing attempts at letting regular folks create ads. The Chevy Tahoe Apprentice challenge in 2006 is probably the most prominent example of how not to do it: even after GM wiped out ChevyApprentice.com, a search on YouTube for “Chevy Tahoe Apprentice” brings up plenty of ads that should never have been created in the first place, a sobering reminder that having an exit strategy established up front is a must in your Internet experiments. Eventually marketing teams got it right, and the success of the Doritos Crash the Super Bowl competition in early 2007 led several other companies to jump onto the UGC bandwagon, with varying, but mostly diminishing, returns.

Another case in point was the creation of online places for your customer base to hang around and discuss subjects that take a front seat in their lives. HSBC’s Your Point of View was launched in October 2005 and generated a lot of buzz for quite some time. However, three years later, it has lost its freshness and novelty, giving the casual observer the impression of a failed experiment, when it could have been considered one of the most successful stories of a traditional company building a site based on the architecture of participation. Vancity’s “community powered” Change Everything, launched in September 2006, suffered from a similar problem, but had a longer shelf life, and people still contribute comments to this day. One major difference that may explain the varying longevity of the two similar offerings is that the Vancity experiment established itself as a social networking site, while the HSBC one stayed away from forming an online community and keeping user profiles. Change Everything is currently announcing a complete revamp of the service, so I’m curious to see what’s coming next.

What’s clear in the examples above is that timing and duration play a crucial role in the success of your online initiatives. This might sound obvious, but it’s often ignored in many of the initiatives we see online. Being too early might prevent you from understanding the dynamics of a new approach, but being too late can position your company as just a me-too player. The sweet spot, of course, is hard to determine, but recognizing these patterns can help you sniff out the right moment. Or you might at least be better prepared to fail gracefully from the get-go, not as an afterthought.

Influenced by a conversation I had with my colleague Bernie Michalik, I started thinking about three metaphors that highlight the importance of duration in your online strategy. Some initiatives work very well when applied exactly once, as was the case with the Doritos Super Bowl commercial. Like telling a joke, the second time around people get bored and disengaged.

Other initiatives work better when mimicking a circus pattern: you come, raise your tent, run your dog-and-pony show, and then leave after a week or a month. One or two years from now, you can do it again, but staying there on a continuous basis would never work. This is how RBC approached its Next Great Innovator site. In the first edition, back in 2006, they defined up front that it would be a time-boxed experiment, so that when they were done a few months later, retiring the site was perceived as the conclusion of a successful experiment. Every year since, they have come back with new features, still positioning it as a time-limited event (full disclosure: I work for RBC).

The IBM jams are another good example of how the circus pattern can be efficiently used to your advantage. Besides helping clients deliver jams, we eat our own dog food and use them as one of the tools in our innovation strategy. If you are wondering what a jam looks like, the next round begins on Sunday, October 5th at 6 pm EDT, and participation is open to IBM clients.

Over the last few years, many marketers have started using microsites to drive marketing campaigns, as opposed to relying on the main corporate site. One of the advantages here is that microsites can be changed—and retired, if necessary—more easily than the company’s main Web site.

Finally, some of your initiatives might actually work well as a place that’s always open for business, pretty much like a never-ending daytime soap opera. This typically works well for services that attract a steady stream of clients, or whose audience is recycled on a yearly basis, like college students or pre-teens. Procter & Gamble’s Connect + Develop site is a good example of that, as it serves an audience that has a continuous relationship with the company. I often see initiatives that would operate better following the joke or circus patterns defaulting to the soap opera mode. Despite their initial huge success, they become victims of not selecting the appropriate duration for their endeavor.

When devising your next online initiative, make sure you think about which of those patterns best fits your offering. Timing and duration might end up being the key determinants of how that incredible new site you conceived will be perceived a few years down the road.





Brazilian football: a disregard for the impossible

13 12 2009

(…) regional tournaments are not economically efficient, as small football clubs benefit from revenues without generating them, due to their lack of followers.

(…) to solve several problems in Brazilian football (…):

1. Reduce the importance of regional tournaments, which would include from now on only small clubs on a “promotion and relegation” system.

2. Integrate the national and international tournament schedules (…)

3. Solve the economic issues of football clubs, and consequently, the issues of Brazilian football as a whole.

If you thought the excerpts above were written by Juca Kfouri or some other present-day Brazilian sports writer, think again: they were taken from the first issue of the weekly news magazine Veja, published on September 11 (!), 1968:

[Image: Veja No. 1, Sep 11, 1968]

Forty-one years later, the administrative problems of Brazilian football are still pretty much the same. Despite the perpetual mess that is the CBF (the national football association), or perhaps because of it, Brazil has won three more FIFA World Cups since that article was written, and has been a staple at the top of the FIFA rankings since their inception.

As with anything else in the world, the success of Brazilian football in the international arena can’t be linked to a single factor. The diversity and size of the population, the tropical climate, and the popularity of the game across all socio-economic classes all played a significant role in the development of the sport in Brazil. That’s all nice and logical, but I would argue that chaos and uncertainty were no smaller contributors.

Where else in the world would you find:

On the other hand, football is not a conventional team sport. To win the FIFA World Cup in its current format, a team does not need to score a goal or win a single game in regulation or extra time. Chile qualified for the knockout stage in 1998 with three draws, and could theoretically have gone all the way to the final just by winning penalty shootouts. Furthermore, bad refereeing seems only to increase the interest of fans, to the point that football remains one of the few team sports today where modern technology is off-limits. I suspect this kind of logic is unfathomable to the typical sports fan in North America. If the sport itself is so counter-intuitive, maybe being disorganized, irrational and implausible ends up being a competitive advantage :-).

Marissa Mayer, VP of Search Product and User Experience at Google, once wrote:

Creativity loves constraints but they must be balanced with a healthy disregard for the impossible. (…) Disregarding the bounds of what we know or accept gives rise to ideas that are non-obvious, unconventional, or unexplored. The creativity realized in this balance between constraint and disregard for the impossible is fueled by passion and leads to revolutionary change.

I can’t think of a better description of the jogo bonito. Of course, being creative and fancy is not necessarily the road to success (the Netherlands in 1974 and Brazil in 1982 come to mind), but from time to time that passion for the unconventional gives us gems like these:

Note: This post was updated after its initial publication to add the screenshot of the news magazine and for clarity purposes.







