What’s next? Social Media and the Information Life Cycle

10 10 2011
[Image: The Crystal Ball, via Wikipedia]

As previously seen in Biznology:

Back in the days when “Web 2.0” was a hot buzzword, many people asked what “Web 3.0” would look like. Even though that question now sounds as outdated as an X-Files re-run, the quest for “what’s next?” is always on our minds. I’m no better prognosticator than anybody else, but my best answer would be: look at where the inefficiencies are. That is in no way a good indicator of what 2012 or 2013 will look like; some inefficiencies may take decades to be properly addressed. But if you are trying to guess where we’ll see the next revolutionary step in social media–as opposed to the much-easier-to-predict incremental improvements–you have to focus on where the biggest opportunities are, the undiscovered country somewhere out there. Analyzing the current landscape and the evolution of information markets through a product life cycle framework may help us develop some educated guesses about what the future may bring.

This is the third post in the current series discussing the differences between making and moving information and knowledge and making and moving physical goods.

Since the first businessperson started bartering goods in some cave thousands of years ago, products have gone through a more or less standard life cycle: they were conceived, designed, produced, filtered for quality, distributed, sold, consumed, serviced, and finally disposed of. Over the course of history, that life cycle underwent several enhancements, most of them minor and mundane. A few of them, like the introduction of currency, steam power and the production line, had a huge impact on the overall efficiency of moving products around. Naturally, even though in retrospect those inventions seem like no-brainer milestones in the history of business and commerce, in reality their impact was scattered over a number of years. It’s likely that many who lived through those transitions did not realize how big those changes in fact were. One important indication of their revolutionary nature is how each one of them enabled an incredible amplification in the volume or reach of one or more stages of the product life cycle. Currency obviously made selling and trading more practical, steam power reduced the dependency on animals and people for mechanical work, and the production line improved both the ability to manufacture at scale and the ability to control quality.

Non-physical items, such as ideas, information and knowledge, also go through a life-cycle of their own, but some of the steps are still not as sophisticated as the ones described above. Roughly speaking, that cycle consists of: creating, verbalizing (coding), capturing, distributing, filtering, consuming (decoding), and disposing of.

Creating and Coding
Thoughts and emotions run through everybody’s heads all the time, raw, unruly, uncensored. Then some are internally processed by the conscious mind and made sense of. A subset of those are mentally verbalized, and an even smaller portion is finally vocalized and communicated to others. The effect of technology on this early part of the cycle has been modest. It still happens mostly the way it did thousands of years ago.

Capturing
Capturing knowledge in the early days of humanity basically meant drawing on cave walls. Drawing became more elaborate with the advent of writing–which could be described as the codified doodling of one’s thoughts in an enduring medium–ensuring that a person’s knowledge could survive well beyond the physical constraints of time and space. More recently, photographs, movies, and audio recording permitted the capture of an increasing variety of information, but in practice they were still just enabling us to capture a concept or a memorable moment, or to tell a story, more efficiently.

Distributing
The introduction of movable type removed many of the limitations around the distribution of that content. Radio, cinema, TV, and the Internet of the 1990s did more of the same for a broader set of records of human knowledge. It was an expansion in scope, not in nature, but each new medium added to the mix represented yet another powerful distribution amplifier.

Filtering and Consuming
All that capturing and distributing of information still relied heavily either on physical media or on renting expensive properties and expertise. Whole industries were created to exploit those high transaction costs. The inefficiencies of the system were one of the most powerful filtering mechanisms in that information life cycle: the big mass media players essentially dictated what was “media-worthy” for you to consume. In fact, that’s what most of us were known as: consumers of goods and information.

Disposing
Most information that is digitized today is not formally disposed of; it’s just abandoned or forgotten. Archived emails and SharePoint sites are examples of that: how often do you search your company’s intranet and find information that is completely outdated and irrelevant by now?

The social media role, so far
Much of what we call social media today, be it Internet forums, blogs, podcasts, photo, video and file sharing, social networks, or microblogging, along with the advances in mobile communications, contributed significantly to bringing those transaction costs closer to zero. Billions of people can now not only capture and distribute information at low cost, but also consume more of it and consume it faster. Not only that: the ability to add links to status updates, retweet, and share enabled regular people to filter and distribute content too. Everybody became a content curator and distributor of sorts, often without even realizing it.

So what’s next?
Most likely, we’ll continue to see an increasing sophistication of the inner steps of the information life cycle. We’re already witnessing better ways to capture (the IBM Smarter Planet initiative is a good example of that), filter (Google Plus’ circles), distribute (Tumblr, StumbleUpon) and consume (Flipboard, Zite) information. However, the obvious under-served stages in the information life cycle are the two extremities: creating, coding, and capturing on one side, and disposal on the other.

On the creating, coding and capturing end, the major current inefficiency is loss of information. Of the millions of ideas and thoughts that come to people’s minds, the vast majority vanish without a trace. Twitter and status updates offered a very raw way of capturing some of that, but they are still cumbersome to use — and often impractical:

In case of fire, exit the building before tweeting about it

Via funnysigns.net

Apps like Evernote and Remember The Milk are evolving to make it much easier to record our impromptu thoughts, but the untapped potential is enormous (suggested reading: The Future of Evernote). Even a real brain dump may be more feasible than most of us initially thought. As we learned a few days ago, UC Berkeley scientists have developed a system to capture visual activity in human brains and reconstruct it as digital video clips. The results are mesmerizing:

Reconstructing brain images into video

But it does not stop there. Ideas generate ideas. Capturing makes indexing and sharing possible. Imagine how much more knowledge could be created if we had better ways to share and aggregate atomic pieces of information. We might not like Facebook’s timeline and Open Graph real-time apps in their first incarnations, but they are just giving us a peek at what the future — and the past — may look like.

Decoding the existing content out there would be a good start. My first language being Portuguese, I often find amazing content on Brazilian websites, or brilliant posts shared by my Portuguese-speaking colleagues, that is still not easily consumable by most people in the world, making me wonder how much I’m missing by not knowing German, Chinese, Hindi, or Japanese. One can always argue that there are plenty of tools out there for translating Internet content. True, but the truly user-friendly experience would be browsing websites or watching videos in Punjabi or Arabic without even noticing that they were originally produced in another language.

Finally, one of the unsolved problems of the information age is the proper disposal of information. Since storage is so cheap, we keep most of what is created, and this is increasingly becoming an annoyance. I often wish that my Google search could default to limiting results to the last year or the last month only, as the standard results normally surface ancient content that is no longer relevant to me. Also, most of us just accept that whatever we say online stays online forever, but that is just a limitation of our current technology. If we could for a second convert all that digital information into a physical representation, we would see a landfill of unwanted items and plenty of clutter. Of course, “disposal” in the digital world does not need to mean complete elimination–it could just be better ways to move things to the back burner or the back-room archive. For example, we could use sensory cues for aging information. You can always tell whether a physical book was freshly published or printed 20 years ago, based on its appearance and smell. Old Internet content could be shown with increasingly yellow and brown background tones, so that you could visually tell the freshness of the content.
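As a thought experiment, here is a minimal sketch of how such a visual aging cue could work on a web page, assuming the page knows each item’s publication date. The ten-year scale and the colour ramp are arbitrary choices of mine, not an existing standard:

```typescript
// Minimal sketch of the "yellowing content" idea: map a document's age to a
// progressively warmer background tone. The ten-year scale and the colour ramp
// are arbitrary assumptions, not an existing standard.
function agedBackground(publishedOn: Date, today: Date = new Date()): string {
  const msPerYear = 365.25 * 24 * 60 * 60 * 1000;
  const ageYears = (today.getTime() - publishedOn.getTime()) / msPerYear;

  // 0 = freshly published, 1 = ten years old or older.
  const t = Math.min(Math.max(ageYears / 10, 0), 1);

  // Fade from white (255, 255, 255) toward a yellowed-paper tone (240, 225, 180).
  const r = Math.round(255 - t * (255 - 240));
  const g = Math.round(255 - t * (255 - 225));
  const b = Math.round(255 - t * (255 - 180));
  return `rgb(${r}, ${g}, ${b})`;
}

// Example: tint an article element according to its publication date.
// document.getElementById("post")!.style.backgroundColor =
//   agedBackground(new Date("2005-03-17"));
```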

Some of the above may sound crazy and superfluous, but the idea of Twitter probably sounded insane less than 10 years ago. Are we there yet? Not even close. But that is what makes this journey really interesting: what is coming is much more intriguing than what we have seen so far. Like I said before, we’re still witnessing the infancy of social technologies.





Moving things vs. moving ideas

10 10 2011

[Image by Getty Images via @daylife]

As previously seen in Biznology:

All of our existing controls around content, intellectual property, and information exchange were developed when moving information around was an ancillary function to what mattered at the time: moving goods efficiently to generate wealth. The most powerful nations and organizations throughout the centuries were the ones that mastered the levers that controlled the flow of things. That pattern may soon be facing its own Napster moment. Information is becoming a good in itself, and our controls have not yet adapted to this new reality. In fact, much of what we call governance consists of ensuring that information moves very slowly–if at all. The entities–countries, companies, individuals–that first realize that a shift has already taken place, and re-think their raison d’être accordingly, might be the ones who will dominate the market in this brave new world.

In my last Biznology post, I used a comparison between information and physical goods to support an argument that social technologies still have a long way to go to be considered mature. When information itself becomes the good, and social interactions become the transportation medium, some new and interesting economic patterns may emerge.

Scarcity is a natural attribute of the physical world: a house, a car, or a TV set cannot be owned by multiple people at the same time, nor can one person provide hairdressing or medical services to several customers simultaneously. Our whole economic model is built on top of that: theories around economies of scale, price elasticity, bargaining, patents and copyright all depend strongly on “things” or “services” being limited. We even created artificial scarcity for digital items such as software and audio files, in the form of license keys and DRM, so that they could fit our “real world” economy.

That model worked OK when being digital was the exception. However, more and more “things” are becoming digital: photos, movies, newspapers, books, magazines, maps, money, annotations, card decks, board games, drawings, paintings, kaleidoscopes–you name it. Furthermore, services are increasingly less dependent on geographical or temporal proximity: online programming, consulting, doctor appointments, tutoring, and teaching are sometimes better than their face-to-face counterparts. While most online services are still provided on a one-off basis, the digitization of those human interactions is just the first step toward making them reusable. TED talks and iTunes University lectures are early examples of that.

Of course, I’m not advocating a world without patents or copyrights. But I do think it’s important to understand what that world would look like, and to assess whether the existing controls are working in our favor or against us. Even if we do not dare to change something that served us so well in the past, others may not have the same incentives to keep the status quo.

Another factor to consider is the leapfrog pattern experienced by the mobile telephony industry: countries that were behind in the deployment of phone landlines ended up surpassing the developed world in the adoption of cellular phones. Similarly, countries that never developed a sophisticated intellectual property framework may be able to start fresh and put in place a system where broad dissemination and re-use trump authorship and individual success.

Finally, the emergence of social technologies over the last 10 years showed the power of a resource that has been underutilized for centuries: people and their interactions with each other. The essence of what we used to call Web 2.0 was the transformational aspect of leveraging social interactions throughout the information value chain: creation, capture, distribution, filtering and consumption. The crowd now is often the source, the medium, and the destination of information in its multiple forms.

The conclusion is that the sheer number of people that can be mobilized by an entity–a nation, an organization or an individual–may become a source of wealth in the near future. Of course, peoplenomics is mostly a diamond in the rough for now. A quick comparison between the top 20 countries by GDP per capita (based on purchasing power parity) and the top 20 countries in the world by population shows that the size of a country’s population is still a poor indicator of its wealth–only the United States, Japan and Germany appear on both lists. Whether or not the economic value of large populations and efficient information flows will ever be unleashed is anybody’s guess, but keeping an eye out for it and being able to adapt quickly may be key survival skills in a rapidly changing landscape.





The infancy of social technologies

3 08 2011

Note: I’m resuscitating this blog one more time, but slowly: copying my posts from Biznology and other places to here and applying minor edits. Naturally, they lost their freshness, but I want to make this WordPress blog an archive of all my posts.

As previously seen in Biznology:

[Image: Alex Pickering Transfer Company, early moving ..., via Wikipedia]

The last 20 years saw knowledge workers adding a steady stream of tools to their repertoire: increasingly sophisticated office suite software, email, the Internet, instant messaging, voice over IP, Web conferences, and, in the last decade, a number of social technologies in the form of blogs, wikis, social networks, microblogging and others. Google+ is just the latest addition to the mix, introducing some interesting ideas to a space that already seemed quite mature. Nobody knows for sure if Google+ will ever dethrone Facebook and Twitter, but the buzz it created has already shown something: our allegiance to any social platform in particular is as reliable as that of a mercenary waiting for the highest bidder. Taking a step back, it becomes clear that we have come a long way since the days when Wikipedia sounded like a misplaced hippie idea transplanted from the 60s. But make no mistake: we are still witnessing the infancy of social technologies, and there is much more to come.

David Allen, of Getting Things Done fame, said in an interview with Harvard Business Review earlier this year (May 2011):

Peter Drucker said that the toughest job for knowledge workers is defining the work. A century ago, 80% of the world made and moved things. You worked as long as you could, and then you slept, and then you got up and worked again. You didn’t have to triage or make executive decisions. It’s harder to be productive today because the work has become much more complex.

I have no idea how much that percentage has changed since then, but I suspect that in much of the world, a significant number of workers now “make and move” knowledge and information, as opposed to physical goods. Of course, this is no earth-shattering statement, but what is sometimes missed in this obvious assertion is that the same kinds of inefficiencies and constraints that limited the production and distribution of “things” one hundred years ago can be observed in the way we deal with knowledge and information today. By visualizing information as a “thing” that can be produced, distributed and consumed, we can better understand how far we still are from an efficient knowledge marketplace.

While we spend countless hours debating whether email is dead, whether IM is a productivity booster or killer, and whether Twitter and Facebook and Google+ will be here 5 years from now, the fact of the matter is that each new social technology brings new mechanisms trying to solve the same problem: reducing inefficiencies in the way we create, capture and move information. While MySpace has likely gone the way of the dodo, like Geocities did before it, they both introduced memes and patterns that are still alive today. Wikipedia, blogs, podcasts, Friendster, Facebook, Twitter and FourSquare all contributed to this mix, and social business platforms are continuously incorporating several of those concepts and making them available to knowledge workers.

FedEx, Amazon, and Walmart all created very efficient ecosystems to move goods by reducing or eliminating obstacles to efficiency. They make the complex task of moving goods a painless experience–at least most of the time. For non-physical goods, we’re not even close to that. Information flows are inefficient across the value chain. Compared to their counterparts in the physical world, our mechanisms to digitize information are precarious, the channels to distribute it are cumbersome, and our filters to screen it are primitive.

However, eliminating inefficiencies does not necessarily mean eliminating barriers altogether. Sticking to the physical goods metaphor: while there are items that you want to distribute to everybody, like water, food, sanitation, and medication, there are others that you need to target more selectively (flowers for your wife, or Punjabi-language TV shows for a Punjabi-speaking population). Some of the problems we attribute to email or Facebook communications are simply a mismatch between the medium and the nature of the message, not an intrinsic failure of the tools themselves. The Google+ concept of circles and streams is a good start, but still very far from perfect. After spending a few minutes there, you will notice that in some cases you are still getting more information than you wanted, and in others not even a small fraction of what you need. That would be unacceptable today for physical goods: just imagine receiving all sorts of unwanted books, groceries, and clothes at your door every day, while having no way to get the few things you actually need to live a good life.

Thus, before you get too carried away with the latest and greatest social technology darling, be it FourSquare, Tumblr, Quora, Zynga, or Google+, know that we still have a long way to go. If the knowledge mountain is Everest and social technologies are the tools to climb it, we have not even reached Kathmandu yet.





The Age of Disinformation

2 08 2011

Note: I’m resuscitating this blog one more time, but slowly: copying my posts from Biznology and other places to here and applying minor edits. Naturally, they lost their freshness, but I want to make this WordPress blog an archive of all my posts.

As previously seen in Biznology:

[Image: My Room - Looks Like I've Got My Work Cut Out ..., by raider3_anime via Flickr]

Coincidentally or not, after I covered the topic of Q&A services in my last Biznology post, I heard complaints from three different acquaintances about the low quality of knowledge in Yahoo! Answers, one of them mockingly calling this world where everybody is an expert “the age of disinformation.” Another friend of mine recently complained about getting mostly useless content–with no editorial reviews and no user reviews–from reputable sites whenever he Googles “<non-mainstream product> review”. Has filter failure become so prevalent that, despite all the information available to us, we are no better off than we were 20 years ago, when content was scarce, difficult to produce and difficult to access?

Three months ago, my wife called me from the grocery store asking: if a product has an expiry date of “11 MA 10”, does that mean May 10, 2011 (which would be good, since it was still April), or March 10, 2011 (which would mean that the product was way past its “best before” date)?

Naturally, my first instinct was to Google it, and inevitably I ended up getting a bunch of entries in Yahoo! Answers. Here are some of the pearls of wisdom I found:

“March. May has no abbreviation”

“I think it means May 11. Unless it’s on something that lasts a long time, like toothpaste. Then it’s probably 2011”

“march” (wrong, the right answer, I found later, was “May 10, 2011”)

“most likely March cuz May is so short they can just put the full month”

“I believe it’s May… I think March would be Mar”

I finally started ignoring any result coming from Yahoo! and found the definitive answer: the format NN AA NN is a Canadian thing–I live in Toronto–and it’s the doing of the Canadian Food Inspection Agency. You can find the whole reference here. Apparently, to get month abbreviations that work in both English and French, that government agency decided to use “monthly bilingual symbols.” The problem is, if you don’t know the context, and if you are not accustomed to that convention, you might mistakenly assume that MA is March, JN is June, or that the two numbers at the beginning are the day, not the year. When it comes to food safety, relying on a standard that is so easily subject to misinterpretation is something you would probably want to avoid.
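For the programmatically inclined, here is a rough sketch of how that YY-symbol-DD convention could be decoded. Only MA = May is confirmed by the episode above; the rest of the symbol table is my recollection of the CFIA list and should be checked against the official reference before being trusted:

```typescript
// Rough sketch: decode a Canadian "best before" code of the form "YY SYM DD",
// e.g. "11 MA 10" -> May 10, 2011. Only MA = May is confirmed by the story above;
// the remaining bilingual symbols are my recollection of the CFIA table and
// should be verified against the official reference.
const BILINGUAL_MONTHS: Record<string, number> = {
  JA: 1, FE: 2, MR: 3, AL: 4, MA: 5, JN: 6,
  JL: 7, AU: 8, SE: 9, OC: 10, NO: 11, DE: 12,
};

function decodeBestBefore(code: string): Date {
  const match = code.trim().match(/^(\d{2})\s+([A-Z]{2})\s+(\d{2})$/);
  if (!match) throw new Error(`Unrecognized date code: ${code}`);

  const [, yy, symbol, dd] = match;
  const month = BILINGUAL_MONTHS[symbol];
  if (!month) throw new Error(`Unknown month symbol: ${symbol}`);

  // The two leading digits are the year (2000-based), not the day.
  return new Date(2000 + Number(yy), month - 1, Number(dd));
}

// decodeBestBefore("11 MA 10") -> May 10, 2011
```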

At the other end of the spectrum, the product reviews posted on Amazon are typically very reliable. Amazon reveals a lot of information about reviewers, such as their “real name,” their other reviews, and the “verified purchase” stamp. It also provides many filtering and ranking mechanisms: other users can comment on reviews, vote on their helpfulness, indicate whether a comment added to the discussion or is abusive, and even choose to ignore a given reviewer altogether.

Unfortunately, Amazon is the exception, not the rule–one of the few sites out there where everybody knows when you are a dog. Twitter’s verified accounts seemed promising, but since the program was closed to the general public, unless you are a celebrity you are out of luck trying to prove that you are not the person behind that account carrying your name and your photo. Of course, sometimes having a verified account may play against you, as Rep. Anthony Weiner found out over the last few weeks.

Reflecting on the low quality of information generally available, I concede that skeptics have reasons not to hop onto the social media bandwagon mindlessly. But what we are really observing is just an amplification phenomenon, and a moment in time that, many decades from now, will be seen as the infancy of social technologies.

Since the first pieces of “persistent” content started being produced as rough drawings in some pre-historic cave thousands of years ago, the bad has outnumbered the good by orders of magnitude. Creating good content is the exception, and social media amplifies all kinds of content. In part, there are lots of bad Yahoo! Answers because we have always had a high degree of disinformation in the world. The only difference is that disinformation can now be spread easily–but the same applies to the good content.

On top of that, the same way natural ecosystems are in a constant state of imbalance but trend towards an equilibrium, information ecosystems will find themselves in apparent disarray from time to time. The original Yahoo! Search, editorialized by real people, once dominated the Internet. It soon became inefficient, and the PageRank-driven Google search took over. It worked really well for several years, but it’s now also showing its age. Better filters will be developed to overcome the current deficiencies, and this battle will never end. The dynamic between quality of content and quality of filters will perpetually swing like a pendulum, as it always has.

Is this the age of disinformation? Yes, but no more than any other in the past. The fact that, by producing more content in general, we also increase the quantity of good content, should make us optimistic that we are better off today than we were yesterday. If the cost of coming up with one more Mozart is to produce thousands of Salieris, so be it: we may end up finding that Salieris are not that bad after all.





From the batcomputer to Quora: the quest for the perfect answering machine

1 08 2011

Note: I’m resuscitating this blog one more time, but slowly: copying my posts from Biznology and other places to here and applying minor edits. Naturally, they lost their freshness, but I want to make this WordPress blog an archive of all my posts.

As previously seen in Biznology:

When Quora announced earlier this month that it was eliminating its policy against self-promoting questions and answers, some analysts wondered if that was opening the gates for spammers to dominate the conversation. The reality is that the whole evolution of Q&A services is not much different from what Google and other search engines have been experiencing over the years. It’s a battle to separate the wheat from the chaff, where the chaff keeps finding creative ways to look like the wheat. Keep reading, and you’ll see why developing the perfect Q&A engine should not be our real objective here.

As a kid, I spent my fair share of hours watching re-runs of camp TV shows, including the classic Batman TV series from the ’60s. I remember how the batcomputer was able to answer any question you asked it, no matter how weird or convoluted it was. For those of you who never had the privilege (?) of seeing the precursor of IBM’s Watson, here it is, courtesy of YouTube (it’s a long video, so you may want to jump directly to the 2:20 mark):

Yes, you saw it right. The batcomputer was fed a bunch of alphabet soup letters and gave the dynamic duo the answer they were looking for: where they should go next to complete their mission. However, as a sign of things to come, Batman then tried to go to extremes and fed the batcomputer the Yellow Pages directory, but—oh the horror—the batcomputer failed miserably at getting them a more precise answer to their subsequent question.

More than 40 years later, our quest for the infallible computer has not changed much. Watson could easily answer Jeopardy! questions about song lyrics and book topics, but choked when facing more nuanced themes. That was not very different from the 18th century “Mechanical Turk”, which was capable of winning chess games, solving puzzles, conversing in English, French and German and even answering questions about people’s age and marital status, but had its fair share of defeats.

I concede that services like Wolfram Alpha, ChaCha and Quora raised the bar compared to early players such as Yahoo! Answers and WikiAnswers, but they all fall short when addressing complex, subtle or fringe questions.

If you don’t believe me, just try it yourself. Use your favorite online Q&A service to ask a question whose answer you can’t easily find in Wikipedia or via a quick Google search, and let me know if you get anything meaningful back.

Quora gave many of us hope that we would finally be getting a high-quality, well-curated Q&A service. It’s becoming increasingly clear now that, albeit a step forward, Quora is not the all-knowing oracle we were looking for.

Are we ever going to find the perfect Q&A service, where more nuanced questions get satisfactory responses? My guess is “no”, but not even Adam West’s noodle-eating batcomputer would know the answer to that.

In fact, at the end of the day, that answer is not relevant at all. As we make strides in the information technology journey, our fundamental objective is not to replace people with machines. Our real target is to free us all from as many mundane and “automatable” tasks as possible, so that we can focus our efforts and energy more and more on the tasks that only humans can do. Having increasingly smarter systems that can answer most of our trivial questions is not a sign of defeat by “our new computer overlords.” Rather, it’s a great opportunity to re-define what being human actually means.





A Skewed Web: Innovation is in the outskirts of social media

15 09 2010
[Image: Honeybees with a nice juicy drone, by dni777 via Flickr]

As previously seen in Biznology:

As I discussed in my post last month, it’s a skewed Web out there. A multitude of online social filters have been developed over the last 15 years to address our perennial information overload curse. From Google’s PageRank, we went all the way to tag clouds, social bookmarking, Twitter trending topics and Gmail’s Priority Inbox, trying to find ways to make what matters float to the top. However, most of these social filters are based on some variation of a “majority rules” algorithm. While they all contributed to keeping information input manageable, they also skewed the stream of information getting to us toward something more uniform. Will crowdsourcing make us all well-informed drones? Ultimately, it may depend on where you’re looking: the center or the fringe of the beehive.

Almost two years ago, Clay Shirky boldly stated that information overload was not a problem, or at least not a new one. It was just a fact of life, at least as old as the Library of Alexandria. According to Shirky, the actual issue we face in this Internet age is filter failure: our mechanisms to separate the wheat from the chaff are simply not good enough. Here is an excerpt from his interview at CJR:

The reason we think that there’s not an information overload problem in a Barnes and Noble or a library is that we’re actually used to the cataloging system. On the Web, we’re just not used to the filters yet, and so it seems like “Oh, there’s so much more information.” But, in fact, from the 1500s on, that’s been the normal case. So, the real question is, how do we design filters that let us find our way through this particular abundance of information? And, you know, my answer to that question has been: the only group that can catalog everything is everybody. One of the reasons you see this enormous move towards social filters, as with Digg, as with del.icio.us, as with Google Reader, in a way, is simply that the scale of the problem has exceeded what professional catalogers can do.

While some still beg to differ on whether information overload is an issue–after all, our email inboxes, RSS readers and Facebook and Twitter streams never cease to overwhelm us–we tend to welcome every step in the evolution of smarter filters.

The whole lineage of social filters, from Google’s PageRank, passing through Digg and Delicious, and culminating with Twitter’s trending topics, mitigated one problem–information overload–but exacerbated another: we were all getting individually smarter, but collectively dumber. By letting the majority or the loudmouths dictate what was relevant, we ended up with a giant global echo chamber.
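To make that concrete, here is a toy sketch of what most of those filters boil down to: rank by raw popularity and keep the top of the list, no matter who is doing the voting. The item shape and the sample numbers are made up for illustration:

```typescript
// Toy illustration of a "majority rules" social filter: items are ranked purely
// by popularity, so whatever the crowd already likes keeps floating to the top.
// The Item shape and the sample data are made up for illustration.
interface Item {
  title: string;
  votes: number; // diggs, retweets, likes -- any popularity count
}

function majorityRulesFeed(items: Item[], topN: number): Item[] {
  return [...items].sort((a, b) => b.votes - a.votes).slice(0, topN);
}

const stream: Item[] = [
  { title: "Celebrity gossip of the day", votes: 98_000 },
  { title: "Viral music video", votes: 75_000 },
  { title: "Niche post on a genuinely new idea", votes: 40 },
];

// The fringe item never surfaces, no matter how novel it is.
console.log(majorityRulesFeed(stream, 2).map((i) => i.title));
```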

We were all watching Charlie biting Harry’s finger, and Justin Bieber trying to convince (or threaten) us that we will never, ever, ever be apart. That Ludacris video surpassed 300 million views in seven months on YouTube alone, taking the site’s all-time #1 spot. An unverified claim about Bieber using 3% of Twitter’s infrastructure being passed off as news by traditional media outlets is just the latest example of how far we have gone down the madness-of-crowds road.

This, of course, is not a new problem. Back in the early 1980s, MTV was running Michael Jackson’s 14-minute “Thriller” video twice an hour. The trouble now is just the magnitude of it. A potential downside of this mass-media-on-steroids uniformity is that a homogeneous environment is not the best place for innovation to flourish. Borrowing from paleontologist Stephen Jay Gould: transformation is rare in large, stable central populations. Evolution is more likely to happen in the tiny populations living in the geographic corners: “major genetic reorganizations almost always take place in the small, peripherally isolated populations that form new species.”

If you are looking for the next big thing, or trying to “think different,” or to be creative and innovative, you need to look beyond the center. The core will tell you what’s up, so that you’ll be “in the know.” The fringe will show you what’s coming next. To paraphrase William Gibson, the future is peripherally distributed.





A Skewed Web: Are you an outlier?

14 09 2010

[Image: 70/365 - It. Was. Amazing., by BLW Photography via Flickr]

As previously seen at Biznology:

Relying solely on social news or social bookmarking services such as Digg, Reddit, Fark, Slashdot, and Delicious might leave you with a very peculiar version of the world. A glance at the Twitter hot topics or Google Trends suggests that our collective Web brain is that of a tween. It’s a skewed Web out there, and sometimes you might just feel like you don’t belong. But is that real, or just a distorted view of the social media world?

If you believe that Google’s Zeitgeist is a good proxy for “the spirit of the times,” as its name claims, last year we apparently cared more about Jon and Kate and Twilight’s New Moon than about the presidential inauguration, and there was also a quite unusual interest in paranormal activity:

Google Zeitgeist (US) – 2009 News – Fastest Rising
Google Zeitgeist (US) – 2009 News – Overview

A quick glimpse at the current Twitter trending topics, or at the top 50 topics of all time (which, in social media terms, means since September 2008), may also leave you wondering how wise the crowds really are:

Twitter Trending Stats (Source: TweetStats.com, Aug 7, 2010, 12:07 AM)
Twitter Trending Stats – “All time”: Sep 24, 2008 to Aug 7, 2010 (Source: TweetStats.com)

Collectively, our social media activity seems to be closer to People Magazine and Sports Illustrated than to The New York Times or National Geographic. Of course, there’s nothing wrong with that; it is what it is—and I’m as guilty of taking the occasional look at TMZ as the next person.

Is this definitive proof that users of social media are more interested in celebrities, athletes and gadgets than in politics, science and, you know, “serious stuff”? Well, not necessarily. Both Google Zeitgeist and Twitter Trending Topics show “deltas” of interest: subjects that, for one reason or another, are suddenly becoming popular. A quick look at Google Trends shows that, for all its popularity in 2009, “New Moon” doesn’t hold a candle to other popular terms:

Google Trends snapshot (taken on Aug 7, 2010)

Furthermore, people obviously search for things they don’t know where to find. Sites you visit often are likely already bookmarked or just get resolved by your browser when you start typing related keywords in the navigation box.

So, before you lose all faith in humanity, or at least in the online portion of it, take a deep breath and think again. There is a social Web out there that is much more diverse than what is revealed by Twitter or Google trending topics. If you are an outlier, rest assured that you are in good company ;-)





Felipe Machado and Andrew Keen: Thinking outside the social media echo chamber

7 02 2010

Back in November, I had the pleasure of having lunch with Felipe Machado, multimedia editor for one of the largest newspapers in Brazil, and a former business partner in a short-lived Internet venture in the mid-nineties. The get-together was brokered by Daniel Dystyler, the consummate connector in the Gladwell-esque sense of the word.


Felipe Machado and Daniel Dystyler

Felipe is an accomplished journalist, book author and musician, and I deeply respect his ability to connect the dots between old and new media. I actually often disagree with him: I tend to analyze the world through a logical framework, while Felipe relies on intuition and passion. That’s exactly why I savour every opportunity to talk to him. If you understand Portuguese, you may want to check out his participation in “Manhattan Connection” (Rede Globo, the 4th largest TV network in the world), talking about the future of media:

During our lunch conversation, Felipe mentioned Andrew Keen’s “The Cult of the Amateur” as a book that broke away from the sameness of social media authors. Coincidentally, I had read an article about that book the day before, so I took the bait and borrowed the book from the local library the first week I was back from Brazil.

This may come as a surprise to anybody who knows me, but if you work in anything related to new media, social media, Web 2.0 and emerging Internet technologies, I highly recommend you read Keen’s book. Make no mistake: the book deserves all the criticism it got–you can start with Lawrence Lessig’s blog post for a particularly heated discussion of the limitations of Keen’s arguments. “The Cult of the Amateur” is, ironically, concrete proof that having editors and a publisher behind a book does not necessarily make it any better than, say, a blog post.

The reason I recommend a not-so-good book is this: Andrew Keen represents a large contingent of people in your circle of friends, co-workers, clients and audience–people who hear your social media message and deeply disagree with you. They may well be the vast majority that does not blog, does not use Twitter and couldn’t care less about what you had for dinner last night. They often don’t say it out loud, so as not to be perceived as Luddites, but they are not convinced that social media is making things any better, or that Web 2.0 is inevitable.

Those are the folks you should pay attention to. No matter how much you admire the work of Chris Anderson, Clay Shirky, Jeff Howe and other social media luminaries, you are probably just hearing the echo of your own voice there. You need to understand the concerns, the points of view and the anxiety of the Andrew Keens of the world toward the so-called social media revolution. Failing to do that will prevent you from crossing the chasm between early adopters and everybody else.

Reaching out to the members of our social networks who are not on Facebook, LinkedIn and Twitter can go a long way toward helping us all realize that the real world is MUCH BIGGER than Web 2.0 and social media (as I learned from Jean-François Barsoum a long time ago).





Individually smarter, collectively dumber?

8 12 2009

In my first corporate job back in Brazil, I was part of a large cohort of interns who all ended up being hired together. We were young and well-connected, and always on top of everything that was happening in the company, from official stuff to the proverbial grapevine telegraph. Rumour conversations used to start like this: “I’ve heard from 3 different sources that…” My pal Alexandre Guimaraes used to joke that none of us had 3 different sources, as we all shared the same connections.

Likewise, I often hear from my Twitter friends that their RSS feed reader is now abandoned, as most of the interesting online things they find now come from their tweeps. A quick experiment seems to confirm that trend. Here are the results of a Twitter search for “twitter feed reader”:

Search results for "twitter feed reader"

In my recent re-read of The Wisdom of Crowds, the following excerpt caught my attention (emphasis is mine):

(…) the more influence a group’s members exert on each other, and the more personal contact they have with each other, the less likely it is that the group’s decisions will be wise ones. The more influence we exert on each other, the more likely it is that we will believe the same things and make the same mistakes. That means it’s possible that we could become individually smarter but collectively dumber.

The first time I read that was many years before Twitter even existed, so it didn’t mean much to me. Now I can relate: I do feel that Twitter is making me individually smarter, as I can quickly consume a whole lot of info from news sources, geeks, NBA players, celebrities, friends and others. I find the Twitscoop cloud in TweetDeck a particularly good way to see what’s going on around the globe right now.

Twitscoop cloud

I used to see that cloud as a visualization of our collective intelligence. But perhaps that cloud is actually something much more humbling: a visualization of our own echo chamber, our herd’s brain. By being so intensely connected, we may be losing one of the most basic conditions identified by Surowiecki for a crowd to be wise: independence (the other two are diversity and decentralization).

Should we all stop using Twitter and Facebook now? Of course not. But maybe we should invest a bit more of our time going after the unusual, the unpopular, the offline, the old and the out-of-fashion. The core is boring, and the fringe is where real innovation and change tend to appear first.





Business Books: The cover vs. the core

7 12 2009

For a person who deeply loves Biology and keeps blogging about Darwin, I have to confess: I never read The Origin of Species, only parts of it. There, I said it. I actually tried to go through it a few times, the last attempt being via Stanza on my iPhone:

Stanza for the iPhone: Origin of Species

Heck, I haven’t even skimmed Origin’s CliffsNotes (that’s just a figure of speech: there are none, actually), so you could say that my knowledge of what Darwin said or thought is like second-hand smoking or back-seat driving: mostly hearsay. My saving grace is the five years I spent at university studying Biology. Furthermore, I would guess that most Biology students (at least in Brazil) have never seen a copy of Origin either.

On a smaller scale, many of us have a similar approach to business books. We have not read most of them–well, except maybe Sacha did :-) –but we often have opinions about them, typically based on indirect evidence.

I usually don’t go through the same book twice–life is short and time is at a premium–but I recently made an exception for The Wisdom of Crowds (2004) and The Long Tail (2006), two books that have been much maligned for supposedly championing the advent of new business models that never materialized or that failed to deliver on their promise.

The Long Tail and The Wisdom of Crowds

Their respective authors even had faceoffs of sorts with the excellent Malcolm Gladwell, of The Tipping Point and Blink fame–one friendly, the other not so much. By the way, if you are unfamiliar with Slate’s Book Club feature, you are in for a treat. It’s kind of a The Next Supermodel for the written world. I know that doesn’t sound very enticing, but the series is really good.

The major problem I see with both books is not their content: it’s their covers. Both books are fairly balanced at their core and depict scenarios showing both supporting evidence and possible shortcomings of their arguments. But their covers are not as nuanced. Why the future of business is selling less of more and Why the many are smarter than the few, besides sounding like catchphrases written by the same marketing wiz, are hardly shy in the over-promising department.

What I learned from the re-reading process is that I have a much better appreciation for the content of these books now that they don’t have all the buzz around them. It’s like listening to popular songs from years past after they have fallen into oblivion. You can see their actual merits and limitations more clearly, without being so influenced by the media. So, if you haven’t yet, give them a try: you may still learn a thing or two, whether you believe in their premises or not.

I can’t help but think that, if The Origin of Species were published today, instead of the dull subtitle The preservation of favoured races in the struggle for life, it would carry something like: Why everything you knew about life will change forever.

The Origin of Species, original cover (Darwin Online)





The Apple logo, Annie Hall and the single version of the truth

9 08 2009

Last week, CreativeBits published a good interview with Rob Janoff, the designer of the Apple logo (thanks to TUAW for the pointer). Over the years, I’ve heard several theories explaining the bitten apple, from the obvious (Eve’s bite of the forbidden fruit, representing the lust for knowledge), to the nerdy (a reference to the computer term byte), to the convoluted (like the one below, from Wikipedia).

Another explanation exists that the bitten apple pays homage to the mathematician Alan Turing, who committed suicide by eating an apple he had laced with cyanide.

Then you learn directly from the horse’s mouth that all of the above are just BS (his term, not mine). The real explanation turns out to be much more mundane and simple:

Anyway, when I explain the real reason why I did the bite it’s kind of a let down. But I’ll tell you. I designed it with a bite for scale, so people get that it was an apple not a cherry. Also it was kind of iconic about taking a bite out of an apple. Something that everyone can experience. It goes across cultures. If anybody ever had an apple he probably bitten into it and that’s what you get.

All those fancy theories about the bitten apple logo, and the real reason is that Janoff didn’t want people mistaking his stylized apple for a cherry??? “Kind of a let down” is the understatement of the year.

This whole discussion reminds me of this classic scene from Woody Allen’s Annie Hall movie:

The video above is a bit long, so here is a description for the time-starved among you:

In one scene, Allen’s character, standing in a cinema queue with Annie and listening to someone behind him expound on Marshall McLuhan’s work, leaves the line to speak to the camera directly. The man then speaks to the camera in his defense, and Allen resolves the dispute by pulling McLuhan himself from behind a free-standing movie posterboard to tell the man that his interpretation is wrong.

A great literature teacher told me many years ago that what an artist meant when creating a work matters if you are interested in history or passing an exam, but all the possible interpretations by consumers of that art are as legitimate as the author’s own, whether the author is a writer, a musician, a painter or a sculptor. The bottom line is that once the art is out in public, the audience owns its meaning, and that meaning will evolve as time and context keep building on top of it, regardless of the author’s original intention.

Revisiting the Annie Hall scene from that perspective, Allen’s character, McLuhan and the Columbia U professor were all right in their distinct interpretations, and all wrong in assuming that only one was possible.

In the fields of IT and Business Intelligence, we often hear the (terrible) acronym SVOT, or Single Version of the Truth (sometimes referred to as “one version of the truth”). While in very technical terms that may make sense–a person cannot have two different places of birth, for example–SVOT in anything above bits and bytes is just an urban myth.

A personal story to illustrate this: my maternal uncle’s place of birth was supposed to be some Japanese city named Keijo, according to old documents from my grandfather. As many of you know, my mother is Japanese, and I always just assumed that my uncle was born in Japan, so I never bothered looking for Keijo on a map. Last month, talking to my sister over Skype, I googled it and found that Keijo is actually the former Japanese name for Seoul, the capital of South Korea, used during the period of Japanese rule! In a few seconds, SVOT became, to me, IDWTYART, as in “it depends on what truth you are referring to” :-)

Just to bring this post back to its original subject, I want to conclude it with a pictorial representation of SVOT vs. IDWTYART juxtaposing the iconic logo and its corresponding pwned version:





Is failure overrated?

2 04 2009


Web 2.0 Expo San Francisco 2008

As seen in Biznology (slightly modified to avoid overlapping with previous posts in this blog):

Is learning from failures overrated? When emphasizing the importance of learning from errors, are we actually creating a culture of losers? Read on to hear arguments on both sides of this discussion and make up your mind. Your company’s survival in the long term may depend on it.

I’m in San Francisco this week, speaking at and attending the Web 2.0 Expo at Moscone West. In a number of sessions, the speakers emphasized that failure is an important part of the innovation game. Knowing that I also tend to subscribe to that theory, and commenting on the Charlie Brown comic strip I embedded in my previous blog entry, a colleague at IBM pointed me to an interesting piece written by Jason Fried, of 37signals, who challenges that whole concept: “Failure is overrated, a redux”. It’s a good post, and the comments are also worth reading. To get a complete picture of the discussion, I suggest you also read the New York Times article Jason refers to, “Try, Try Again, or Maybe Not”.

As is often the case in heated discussions, I initially thought that Jason was defending a completely different perspective on failure and learning, but this comment of his on another related post made me think that the difference is mostly one of emphasis.

“Everything is a learning experience. It’s just that I’ve found learning from your successes to be more advantageous. (…) I’ve always found more value in learning from the things that work than the things that don’t.”

I can definitely live with that position. What I have more trouble with is the cited Harvard Business School working paper. Here are some excerpts from the NYT article:

“The data are absolutely clear,” says Paul A. Gompers, a professor of business administration at the school and one of the study’s authors. “Does failure breed new knowledge or experience that can be leveraged into performance the second time around?” he asks. In some cases, yes, but over all, he says, “We found there is no benefit in terms of performance.”

(…) first-time entrepreneurs who received venture capital funding had a 22 percent chance of success. Success was defined as going public or filing to go public; Professor Gompers says the results were similar when using other measures, like acquisition or merger.

Already-successful entrepreneurs were far more likely to succeed again: their success rate for later venture-backed companies was 34 percent. But entrepreneurs whose companies had been liquidated or gone bankrupt had almost the same follow-on success rate as the first-timers: 23 percent.

If the article is accurate–and that’s a big if, considering that this is still a working paper–it seems that the HBS research is not actually proving that “when it comes to venture-backed entrepreneurship, the only experience that counts is success”, as stated in the opening paragraph. It basically demonstrates that entrepreneurs who managed to go public or file to go public are somewhat more likely (going from a 22% to a 34% success rate) to repeat the feat, but isn’t that expected?

There are several factors that come into play when taking a venture public, and having done it once gives an entrepreneur some knowledge of what it takes to get there again. I actually find it surprising that, even with that edge, the rate of failure is still very high. Another way to interpret the same data is: roughly two thirds of entrepreneurs who were successful the first time (and I’m using the same loose definition of success here) fail the second time. If anything, the data tells me that success is also overrated.
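Spelling out the arithmetic behind that remark, using the success rates quoted in the NYT excerpts above:

```typescript
// Back-of-the-envelope check of the "two thirds" remark, using the success
// rates quoted in the NYT article above.
const repeatAfterSuccess = 0.34; // already-successful entrepreneurs, second time around
const firstTimers = 0.22;        // first-time, venture-backed entrepreneurs
const repeatAfterFailure = 0.23; // entrepreneurs whose first venture failed

console.log(1 - repeatAfterSuccess); // ~0.66: roughly two thirds still fail
console.log(1 - firstTimers);        // ~0.78
console.log(1 - repeatAfterFailure); // ~0.77
```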

The “learning from failures” approach makes more sense when you take it at a granular level. Every single initiative you undertake is composed of a vast number of small wins and losses. You can definitely learn from both outcomes, so regardless of which one teaches you the most, embrace successes AND failures. The fundamental message when advocating a culture that allows failure to occur from time to time is to avoid analysis paralysis or, even worse, denial–hiding what went wrong and exaggerating what went right.

The bottom line is that innovation entails good risk management and shares many features with the financial world. Low-risk initiatives are likely to generate low returns and don’t give you much of a competitive edge. Being bold may lead you to collect wins and losses along the way, but it can also reward you more handsomely overall. Knowing that, it’s important to balance your innovation initiatives the same way you handle a portfolio: diversify them and adjust the mix to your comfort level. During economic downturns like the one we are going through now, it’s easy to panic and stop innovating. Keep in mind that a solid and consistent long-term approach to innovation may determine your ability to survive in good times and bad.





On learning and losing

26 03 2009

This is a great counterpoint to my previous post on learning from failures :-)

Mr. Charles M. Schulz, we miss you and ol’ Charlie.

Peanuts





Ctrl + X and Scissors: Share, even if you think everybody knows it already

19 03 2009

Having worked with Bernie Michalik for a few years now, we have changed our behaviour when sharing knowledge–and also other trivial things that don’t deserve to be called “knowledge,” more like gossip or useless tidbits of information. At the beginning, we would not share tips about interesting Web 2.0 sites or pieces of news because we just assumed the other party would have heard about them already, as we are both avid consumers of new geeky stuff.

Over time, we noticed that more often than not our assumption was wrong. Even though we share quite a bit of our network and sources of information, we still find that a good deal of what one of us knows is not as universally known as we expected. Come to think of it, the most popular YouTube video of all time as of this writing is Avril Lavigne’s “Girlfriend”, with 117 million views–it just passed the long-time favourite “Evolution of Dance”. Even if you assume that each view was by a different person–very unlikely, by the way–that music video would still have failed to reach the remaining 883,000,000 of the roughly one billion people with Internet access. I know, people could have seen it on Vimeo or Metacafe, but you catch my drift. No matter how many people know about something, there are always more people who don’t know about it.

That’s one of the beauties of blogging or tweeting–or re-tweeting, for that matter. You share without actually knowing whether people care or not, a “To Whom It May Concern” note to the world. Sometimes it’s a hit, sometimes it’s a miss. Sometimes it’s a miss that becomes a hit a few months later, as that shared knowledge becomes digitized and searchable.

One silly example. In the early nineties, somebody told me the handy logic behind having Ctrl + X and Ctrl + V as the shortcuts for “cut” and “paste”, respectively. The letter “X” resembles an open pair of scissors–thus “cut”–and the letter “V” is like the handwritten mark most of us use to signal an insertion point in the middle of a text–thus “paste”. Even 15 years later, there are still a fair number of people who have never heard about the mnemonic aspect of those shortcuts.

The bottom line? Don’t be afraid to share what you learn. You’ll quickly find you are almost always the “second last to learn”.





Enterprise Blogging Inhibitors: writer’s block, making a fool of oneself and lack of feedback

26 01 2009

This is an updated version of a blog post I wrote for my internal IBM blog back in April 2006. It shows its age, but it may still be relevant for folks starting to blog inside the corporation.

When I ask colleagues at IBM why they don’t blog, or why they don’t blog more often, the most common answers are “I don’t have time”, “I don’t know what to blog about” and “no one cares about my thoughts”. In a survey I ran three years ago, not a single respondent mentioned writer’s block or fear of making a fool of oneself as a blogging inhibitor.

Many of my fellow IBMers are quick-witted, bright and have plenty of good ideas. They are typically well-read, inquisitive and very open to hearing other people’s opinions. Most of them are good writers too, and they would probably be good bloggers. However, many of them don’t blog. There’s a somewhat unfounded idea that blogging is going to take a lot of time and effort. Some of them even started a blog, but stopped after a while. They got discouraged by the low number of daily hits on their blogs, by the few comments their early posts generated, or by the time they spent just to write a few paragraphs. Or they simply don’t know what to write about on a regular basis.

If any readers of this blog are wondering whether to start or resume blogging inside the enterprise, here’s my take on it. Don’t forget that we are all learning, so take it with a grain of salt (as you should with anything you read). Also, you’ll find lots of – sometimes conflicting – advice out there on how to blog effectively. Be confident that you’ll eventually find what works best for you.

  • Don’t liken enterprise blogging to writing an article for a magazine. In blogs, you can afford to put unpolished thoughts out there. Writing them down may actually help you structure your ideas, and sharing them may enrich a reflection that existed only as a raw piece of clay inside your brain, as others may share an interest in the topic. So, while your post may not be earning you a Pulitzer Prize any time soon, it may well trigger a good discussion with others in your company. I see blogging more as chatting in a bar after hours (minus the drinks and the hangover) than as giving a lecture to a demanding audience.
  • Approach blogging like reading and writing e-mails, with the advantage that there’s no serious harm if you skip reading some posts from time to time, and that nobody ever expects you to reply to blog entries. It’s something you do on a best-effort basis. Time-box the time you spend reading and writing blogs to, say, 15 minutes a day, or 30 minutes a week. Or just harness your interstitial time, blogging whenever you have a few minutes to spare. As you get used to doing it, you’ll become more efficient. Remember, don’t approach it as one more task to squeeze into your already busy schedule. It’s a learning and networking venue where you can get a lot accomplished just by dedicating 15 minutes a day to it.
  • Be aware that many in your company will consume your internal blog via an RSS reader (the small sketch after this list shows why that matters). This means that even though people are reading your blog, the hit counter may not show it. Also, as is the case with most blogs, expect a very low comment-to-post ratio, at least at the beginning. Some of your interesting posts will not necessarily generate any comments, even though people are paying attention. I found over the years that some of my “comment-less” posts were actually “dog-eared” by colleagues, proving that the number of comments is not necessarily an indication of whether readers found a post relevant. Most days, like many other blog addicts, I skim through all the posts in my feed reader. Whatever you write about, you’ll have the attention of a fair number of readers for at least a few moments. Therefore, make sure the title of your blog entry and its first few lines give a good idea of what you are writing about.
  • Blogging is a two-way street. If you blog but don’t read other people’s blogs, you may not “get” it. Reading internal and external blogs is actually crucial for you to REALLY understand why blogs are not the same as newsgroups, instant messaging or social networking web sites. As you start commenting on other people’s blogs and observing how some topics generate more interest or discussion, you’ll probably gain a better understanding of the dynamics of this medium. You’ll also establish your own network of bloggers who are more attuned to your interests and area of expertise. Make sure you reply to comments when appropriate, showing your appreciation for other people’s time and effort. It’s pretty much like going from high school to university: it takes time to adapt to the new environment.
  • At first, you may not want to limit yourself to a single theme. Some of my favourite blogs cover a wide variety of subjects: technology, working-environment experiences, “fluffy” stuff, the latest news, photography, parenthood, jokes. The proverbial writer’s block only happens if you see yourself as a writer with a theme or a deadline to meet. If the whole world is “in scope” for your blog, and you are just “chatting”, not “authoring”, you’ll probably end up with a backlog of things you want to blog about. I’m not suggesting that you blog about overly personal things all the time, but variety is a good thing. Keep in mind the “virtual watercooler” analogy: in real offices, you do sometimes talk about things that are not strictly work-related, and that helps build rapport with your colleagues.
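
To make the RSS point above more concrete, here is a minimal sketch of what a feed reader does behind the scenes. It is written in Python; the feed URL is purely hypothetical and the third-party feedparser library is just one convenient way to parse a feed. The key point is that the reader fetches the feed XML directly, so the blog page itself, and therefore its hit counter, is never touched:

    # Minimal sketch of an RSS reader's fetch loop (hypothetical feed URL).
    # The reader downloads the feed XML directly; the blog's web page, and
    # thus its hit counter, is never loaded.
    import feedparser  # third-party library: pip install feedparser

    FEED_URL = "https://w3.example.com/blogs/myblog/feed"  # hypothetical internal feed

    feed = feedparser.parse(FEED_URL)

    print("Feed:", feed.feed.get("title", "untitled"))
    for entry in feed.entries[:10]:
        # Title and link are shown inside the RSS client itself,
        # without a single visit to the blog page.
        print("-", entry.get("title", "(no title)"), entry.get("link", ""))

If a hundred colleagues read your posts this way, the page counter may not move at all, which is exactly why it is a poor measure of readership.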

In my first Social Media presentation ever, back in 2006, I mentioned that Charles Darwin wondered many times whether it was worth publishing his ideas (note that some scholars dispute this as a myth):

Darwin feared putting the theory out in an incomplete form, as his ideas about evolution would be highly controversial if any attention was paid to them at all.

I keep imagining how many good ideas are kept private just because people are afraid of making fools of themselves. As I said before, everybody has something to say, and nobody says brilliant things all the time. What if Shakespeare, Einstein, Martin Luther King and Gandhi all had blogs where they could share their reflections with others? It takes ideas to generate ideas, so just let your ideas out: many of them will probably soon be forgotten, but a few good ones may flourish and persist (if you are not familiar with the concept, you may want to read about memes). Innovation is most often just a way of aggregating independent ideas into a new cohesive structure.





Sapere aude: Dare to think on your own

22 07 2008

I remember, as a kid, my mother explaining to me that, in Japan, people referred to Korea as “cho-sen”, meaning “Land of morning calm”. Having been a pain in the neck since my early years, I always wondered how one could possibly say “land of morning calm” using just two syllables – that’s when my mother gently suggested that I shut up :-).

Latin shares some of that hidden magic with Japanese and can also express a lot in a few words. Ad augusta per angusta, Caveat emptor and Urbi et orbi all seem to have this elastic semantic property. My favourite among the short Latin quotes is sapere aude, which somehow packs “Dare to think on your own” into just two words.

In the last couple of years, I have read my fair share of business books (or at least portions of them, as I’m admittedly a lousy reader):

  • Getting Things Done
  • The Long Tail
  • The World Is Flat
  • Wikinomics

and I’m currently reading:

  • Web 2.0: A Strategy Guide
  • Groundswell: Winning in a World Transformed by Social Technologies
  • Here Comes Everybody
  • Thinkertoys

While many things can be learned from those books, they are written in a way that can lead us to treat them as gospel rather than simply as sources of opinion.

Likewise, we often see blanket statements disguised as common wisdom being used to justify policies or courses of action. Here are some examples:

  • You can’t teach an old dog new tricks
  • Jack of all trades, master of none
  • Perception is reality

The real world is so much more complex than that. And I don’t mean to say I’m immune to it: from time to time I catch myself unconsciously trapped in that herd mentality. That’s why I enjoy hearing from people who disagree with me, as they may be my only chance to snap out of it.

If we have to choose a blanket statement to adopt, I like this one better: “when everybody thinks alike, nobody thinks much”. If everything looks rosy and everybody is agreeing with you, think twice. And above all, sapere aude.







