What’s next? Social Media and the Information Life Cycle

10 10 2011
[Image: The Crystal Ball, via Wikipedia]

As previously seen in Biznology:

Back in the days when "Web 2.0" was a hot buzzword, many people asked what "Web 3.0" would look like. Even though that question now sounds as outdated as an X-Files re-run, the quest for "what's next?" is always on our minds. I'm no better a prognosticator than anybody else, but my best answer would be: look at where the inefficiencies are. That is not a good indicator of what 2012 or 2013 will look like; some inefficiencies may take decades to be properly addressed. But if you are trying to guess where we'll see the next revolutionary step in social media, as opposed to the much-easier-to-predict incremental improvements, you have to focus on where the biggest opportunities are, the undiscovered country somewhere out there. Analyzing the current landscape and the evolution of information markets through a product life cycle framework may help us develop some educated guesses about what the future may bring.

This is the third post in the current series discussing the differences between making and moving information and knowledge versus physical goods.

Since the first businessperson started bartering goods in some cave thousands of years ago, products have gone through a more or less standard life cycle: they were conceived, designed, produced, filtered for quality, distributed, sold, consumed, serviced, and finally disposed of. Over the course of history, that life cycle underwent several enhancements, most of them minor and mundane. A few of them, like the introduction of currency, steam power, and the production line, had a huge impact on the overall efficiency of moving products around. Naturally, even though in retrospect those inventions seem like no-brainer milestones in the history of business and commerce, their impact was in reality spread over many years. It's likely that many who lived through those transitions did not realize how big the changes actually were. One important indication of their revolutionary nature is how each one enabled an incredible amplification in the volume or reach of one or more stages of the product life cycle. Currency made selling and trading more practical, steam power reduced the dependency on animals and people for mechanical work, and the production line improved both manufacturing capacity and quality control.

Non-physical items, such as ideas, information, and knowledge, also go through a life cycle of their own, but some of its steps are still not as sophisticated as the ones described above. Roughly speaking, that cycle consists of: creating, verbalizing (coding), capturing, distributing, filtering, consuming (decoding), and disposing.

Creating and Coding
Thoughts and emotions run through everybody's heads all the time: raw, unruly, uncensored. Some are then processed by the conscious mind and made sense of. A subset of those are mentally verbalized, and an even smaller portion is finally vocalized and communicated to others. The effect of technology on this early part of the cycle has been modest; it still happens mostly the way it did thousands of years ago.

Capturing
Capturing knowledge in the early days of humanity basically meant drawing on cave walls. Drawing became more elaborate with the advent of writing, which could be described as the codified doodling of one's thoughts in an enduring medium, ensuring that a person's knowledge could survive well beyond the physical constraints of time and space. More recently, photographs, movies, and audio recordings permitted the capture of an increasing variety of information, but in practice they still just enabled us to more efficiently capture a concept, record a memorable moment, or tell a story.

Distributing
The introduction of movable type removed many of the limitations around the distribution of that content. Radio, cinema, TV, and the Internet of the 1990s did more of the same for a broader set of records of human knowledge. It was an expansion in scope, not in nature, but each new medium added to the mix represented yet another powerful distribution amplifier.

Filtering and Consuming
All that capturing and distributing of information still relied heavily either on physical media or on renting expensive properties and expertise. Whole industries were created to exploit those high transaction costs. The inefficiencies of the system were among the most powerful filtering mechanisms in that information life cycle: the big mass media players essentially dictated what was "media-worthy" for you to consume. In fact, that's what most of us were known for: consumers of goods and information.

Disposing
Most information that is digitized today is not formally disposed of; it's just abandoned or forgotten. Archived emails and SharePoint sites are an example of that: how often do you search your company's intranet and find information that is completely outdated and irrelevant?

The social media role, so far
Much of what we call social media today, be it Internet forums, blogs, podcasts, photo, video, and file sharing, social networks, or microblogging, along with advances in mobile communications, contributed significantly to bringing those transaction costs closer to zero. Billions of people can now not only capture and distribute information at low cost, but also consume more of it, and consume it faster. Beyond that, the ability to add links to status updates, retweet, and share enabled regular people to filter and distribute content too. Everybody became a content curator and distributor of sorts, often without even realizing it.

So what’s next?
Most likely, we'll continue to see increasing sophistication in the inner steps of the information life cycle. We're already witnessing better ways to capture information (the IBM Smarter Planet initiative is a good example), filter it (Google Plus's circles), distribute it (Tumblr, StumbleUpon), and consume it (Flipboard, Zite). However, the obviously under-served stages of the information life cycle are the two extremities: creating, coding, and capturing on one side, and disposal on the other.

On the creating, coding, and capturing end, the major current inefficiency is loss of information. Of the millions of ideas and thoughts that come to people's minds, the vast majority vanishes without a trace. Twitter and status updates showed a very raw way of capturing some of that, but they are still cumbersome to use, and often impractical:

In case of fire, exit the building before tweeting about it

Via funnysigns.net

Apps like Evernote and Remember The Milk are evolving to make it much easier to record our impromptu thoughts, but the real potential is enormous (suggested reading: The Future of Evernote). Even a real brain dump may be more feasible than most of us initially thought. As we learned a few days ago, UC Berkeley scientists have developed a system to capture visual activity in human brains and reconstruct it as digital video clips. The results are mesmerizing:

Reconstructing brain images into video

But it does not stop there. Ideas generate ideas. Capturing makes indexing and sharing possible. Imagine how much more knowledge could be created if we had better ways to share and aggregate atomic pieces of information. We might not like Facebook's timeline and Open Graph real-time apps in their first incarnations, but they are just giving us a peek at what the future, and even the past, may look like.

Decoding the existing content out there would be a good start. My first language being Portuguese, I often find amazing content on Brazilian websites, or brilliant posts shared by my Portuguese-speaking colleagues, that is still not easily consumable by most people in the world, and it makes me wonder how much I'm missing by not knowing German, Chinese, Hindi, or Japanese. One can always argue that there are plenty of tools out there for translating Internet content. True, but the truly user-friendly experience would be browsing websites or watching videos in Punjabi or Arabic without even noticing that they were originally produced in another language.

Finally, one of the unsolved problems of the information age is the proper disposal of information. Since storage is so cheap, we keep most of what is created, and this is increasingly becoming an annoyance. I often wish my Google searches could default to the last year or the last month only, as the standard results normally surface ancient content that is no longer relevant to me. Also, most of us just accept that whatever we say online stays online forever, but that is just a limitation of our current technology. If we could for a second convert all that digital information to a physical representation, we would see a landfill of unwanted items and plenty of clutter. Of course, "disposal" in the digital world does not need to mean complete elimination; it could just be a better way to move things to the back burner or the backroom archive. For example, we could use sensory cues to signal aging information. You can always tell whether a physical book was freshly published or printed 20 years ago, based on its appearance and smell. Old Internet content could be shown with increasingly yellow and brown background tones, so that you could visually tell how fresh it is.
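
Purely as an illustration of that last idea, here is a minimal sketch in Python. The color values and the ten-year horizon are hypothetical choices of mine, not anything an existing site implements; the point is simply to show how the age of a piece of content could be mapped to a progressively "yellowed" background, the digital equivalent of aging paper:

    from datetime import date
    from typing import Optional

    # Hypothetical endpoints: fresh content on a white background,
    # decade-old content on a yellowed, paper-like tone.
    FRESH_RGB = (255, 255, 255)
    AGED_RGB = (222, 203, 160)
    MAX_AGE_DAYS = 10 * 365  # beyond roughly ten years, content looks fully "aged"

    def aged_background(published: date, today: Optional[date] = None) -> str:
        """Return a hex background color that yellows as the content gets older."""
        today = today or date.today()
        age_days = max((today - published).days, 0)
        t = min(age_days / MAX_AGE_DAYS, 1.0)  # 0.0 = fresh, 1.0 = fully aged
        channels = (round(f + (a - f) * t) for f, a in zip(FRESH_RGB, AGED_RGB))
        return "#{:02x}{:02x}{:02x}".format(*channels)

    # A post from 2011 rendered ten years later gets the full sepia treatment.
    print(aged_background(date(2011, 10, 10), date(2021, 10, 10)))  # -> #decba0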

Some of the above sounds crazy and superfluous, but the idea of Twitter probably sounded insane less than 10 years ago. Are we there yet? Not even close. But that is what makes this journey really interesting: what is coming is much more intriguing than what we have seen so far. Like I said before, we are still witnessing the infancy of social technologies.





Moving things vs. moving ideas

10 10 2011


As previously seen in Biznology:

All of our existing controls around content, intellectual property, and information exchange were developed when moving information around was an ancillary function to what mattered at the time: moving goods efficiently to generate wealth. The most powerful nations and organizations throughout the centuries were the ones that mastered the levers that controlled the flow of things. That pattern may soon be facing its own Napster moment. Information is becoming a good in itself, and our controls have not yet adapted to this new reality. In fact, much of what we call governance consists of ensuring that information moves very slowly–if at all. The entities–countries, companies, individuals–that first realize that a shift has already taken place, and re-think their raison d’être accordingly, might be the ones who will dominate the market in this brave new world.

In my last Biznology post, I used a comparison between information and physical goods to support an argument that social technologies still have a long way to go to be considered mature. When information itself becomes the good, and social interactions become the transportation medium, some new and interesting economic patterns may emerge.

Scarcity is a natural attribute of the physical world: a house, a car, or a TV set cannot be owned by multiple people at the same time, nor can one person provide hairdressing or medical services to several customers simultaneously. Our whole economic model is built on top of that: theories around economies of scale, price elasticity, bargaining, patents, and copyright all depend strongly on "things" or "services" being limited. We even created artificial scarcity for digital items such as software and audio files, in the form of license keys and DRM, so that they could fit our "real world" economy.

That model worked OK when being digital was the exception. However, more and more "things" are becoming digital: photos, movies, newspapers, books, magazines, maps, money, annotations, card decks, board games, drawings, paintings, kaleidoscopes, you name it. Furthermore, services are increasingly less dependent on geographical or temporal proximity: online programming, consulting, doctor appointments, tutoring, and teaching are sometimes better than their face-to-face counterparts. While most online services are still provided on a one-off basis, the digitization of those human interactions is just the first step toward making them reusable. TED talks and iTunes University lectures are early examples of that.

Of course, I’m not advocating a world without patents or copyrights. But I do think that it’s important to understand what that world would look like, and assess if the existing controls are playing in our favor or against us. Even if we do not dare to change something that served us so well in the past, others may not have the same incentives to keep the status quo.

Another factor to consider is the leapfrog pattern experienced by the mobile telephony industry: countries that were behind in the deployment of phone landlines ended up surpassing those in the developed world in the adoption of cellular phones. Similarly, countries that never developed a sophisticated intellectual property framework may be able to start fresh and put a system in place where broad dissemination and re-use trumps authorship and individual success.

Finally, the emergence of social technologies over the last 10 years showed the power of a resource that has been underutilized for centuries: people and their interactions with each other. The essence of what we used to call Web 2.0 was the transformational aspect of leveraging social interactions throughout the information value chain: creation, capture, distribution, filtering and consumption. The crowd now is often the source, the medium, and the destination of information in its multiple forms.

The conclusion is that the sheer number of people that can be mobilized by an entity (a nation, an organization, or an individual) may become a source of wealth in the near future. Of course, peoplenomics is mostly a diamond in the rough for now. A quick comparison between the top 20 countries by GDP per capita (based on purchasing power parity) and the top 20 countries by population shows that the size of a country's population is still a poor indicator of its wealth: only the United States, Japan, and Germany appear on both lists. Whether the economic value of large populations and efficient information flows will ever be unleashed is anybody's guess, but keeping an eye out for it and being able to adapt quickly may be key survival skills in a rapidly changing landscape.





The infancy of social technologies

3 08 2011

Note: I’m resuscitating this blog one more time, but slowly: copying my posts from Biznology and other places to here and applying minor edits. Naturally, they lost their freshness, but I want to make this WordPress blog an archive of all my posts.

As previously seen in Biznology:

[Image: Alex Pickering Transfer Company, via Wikipedia]

The last 20 years saw knowledge workers adding a steady stream of tools to their repertoire: increasingly sophisticated office suites, email, the Internet, instant messaging, voice over IP, Web conferences, and, in the last decade, a number of social technologies in the form of blogs, wikis, social networks, microblogging, and others. Google+ is just the latest addition to the mix, introducing some interesting ideas to a space that already seemed quite mature. Nobody knows for sure whether Google+ will ever dethrone Facebook and Twitter, but the buzz it created has already shown something: our allegiance to any particular social platform is about as reliable as that of a mercenary waiting for the highest bidder. Taking a step back, it becomes clear that we have come a long way since the days when Wikipedia sounded like a misplaced hippie idea transplanted from the '60s. But make no mistake: we are still witnessing the infancy of social technologies, and there is much more to come.

David Allen, of Getting Things Done fame, said in an interview with Harvard Business Review earlier this year (May 2011):

Peter Drucker said that the toughest job for knowledge workers is defining the work. A century ago, 80% of the world made and moved things. You worked as long as you could, and then you slept, and then you got up and worked again. You didn’t have to triage or make executive decisions. It’s harder to be productive today because the work has become much more complex.

I have no idea how much that percentage has changed since then, but I suspect that in much of the world, a significant number of workers now "make and move" knowledge and information, as opposed to physical goods. Of course, this is no earth-shattering statement, but what is sometimes missed in this obvious assertion is that the same kinds of inefficiencies and constraints that limited the production and distribution of "things" one hundred years ago can be observed in the way we deal with knowledge and information today. By visualizing information as a "thing" that can be produced, distributed, and consumed, we can better understand how far we still are from an efficient knowledge marketplace.

While we spend countless hours debating whether email is dead, whether IM is a productivity booster or killer, and whether Twitter, Facebook, and Google+ will be here five years from now, the fact of the matter is that each new social technology brings new mechanisms aimed at the same problem: reducing inefficiencies in the way we create, capture, and move information. While MySpace has likely gone the way of the dodo, as Geocities did before it, they both introduced memes and patterns that are still alive today. Wikipedia, blogs, podcasts, Friendster, Facebook, Twitter, and Foursquare all contributed to this mix, and social business platforms are continuously incorporating several of those concepts and making them available to knowledge workers.

FedEx, Amazon, and Walmart all created very efficient ecosystems to move goods by reducing or eliminating obstacles to efficiency. They make the complex task of moving goods a painless experience, at least most of the time. For non-physical goods, we're not even close to that. Information flows are inefficient across the value chain. Compared to their counterparts in the physical world, our mechanisms to digitize information are precarious, the channels to distribute it are cumbersome, and our filters to screen it are primitive.

However, eliminating inefficiencies does not necessarily mean eliminating barriers altogether. Sticking to the physical goods metaphor: while there are items that you want to distribute to everybody, like water, food, sanitation, and medication, there are others that you need to target more selectively (flowers for your wife, or Punjabi-language TV shows for a Punjabi-speaking audience). Some of the problems we attribute to email or Facebook communications are simply a mismatch between the medium and the nature of the message, not an intrinsic failure of the tools themselves. The Google+ concept of circles and streams is a good start, but still very far from perfect. After spending a few minutes there, you will notice that you still get more information than you wanted in some cases, and not even a small percentage of what you need in others. That would be unacceptable today for physical goods: imagine receiving all sorts of unwanted books, groceries, and clothes at your door every day, but having no way to get the few things you actually need to live a good life.

Thus, before you get too carried away with the latest and greatest social technology darling, be it Foursquare, Tumblr, Quora, Zynga, or Google+, know that we still have a long way to go. If the knowledge mountain is Everest and social technologies are the tools to climb it, we have not even gotten to Kathmandu yet.





The Age of Disinformation

2 08 2011

Note: I’m resuscitating this blog one more time, but slowly: copying my posts from Biznology and other places to here and applying minor edits. Naturally, they lost their freshness, but I want to make this WordPress blog an archive of all my posts.

As previously seen in Biznology:

[Image: "My Room - Looks Like I've Got My Work Cut Out ...", by raider3_anime via Flickr]

Coincidentally or not, after I covered the topic of Q&A services in my last Biznology post, I heard complaints from three different acquaintances about the low quality of knowledge in Yahoo! Answers, one of them mockingly calling this world where everybody is an expert "the age of disinformation." Another friend of mine recently complained about getting mostly useless content, with zero editorial and zero user reviews, from reputable sites whenever he Googles "<non-mainstream product> review". Has filter failure become so prevalent that, despite all the information available to us, we are no better off than we were 20 years ago, when content was scarce, difficult to produce, and difficult to access?

Three months ago, my wife called me from the grocery store with a question: if a product has an expiry date of "11 MA 10", does that mean May 10, 2011 (which would be good, since it was still April), or March 10, 2011 (which would mean the product was way past its "best before" date)?

Naturally, my first instinct was to Google it, and inevitably I ended up with a bunch of entries from Yahoo! Answers. Here are some of the pearls of wisdom I found:

"March. May has no abbreviation"

"I think it means May 11. Unless it's on something that lasts a long time, like toothpaste. Then it's probably 2011"

"march" (wrong; the right answer, I found out later, was May 10, 2011)

"most likely March cuz May is so short they can just put the full month"

"I believe it's May… I think March would be Mar"

I finally started ignoring any result coming from Yahoo! and found the definitive answer: the NN AA NN format is a Canadian thing (I live in Toronto), and it's the doing of the Canadian Food Inspection Agency. You can find the whole reference here. Apparently, to get month abbreviations that work in both English and French, that government agency decided to use "monthly bilingual symbols." The problem is, if you don't know the context and are not accustomed to that convention, you might mistakenly assume that MA is March, or that the two numbers at the beginning are the day, not the year. When it comes to food safety, relying on a standard that is so easily misinterpreted is something you would probably want to avoid.
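
To make the ambiguity concrete, here is a minimal decoding sketch in Python. The month table is my own reading of the commonly cited bilingual symbols, so treat it as an assumption and verify it against the official reference linked above; only "MA" (May) is confirmed by the episode described here.

    from datetime import date

    # Assumed bilingual month symbols (verify against the official CFIA reference).
    BILINGUAL_MONTHS = {
        "JA": 1, "FE": 2, "MR": 3, "AL": 4, "MA": 5, "JN": 6,
        "JL": 7, "AU": 8, "SE": 9, "OC": 10, "NO": 11, "DE": 12,
    }

    def parse_best_before(code: str) -> date:
        """Decode a 'YY AA DD' code such as '11 MA 10' (assumes years 2000-2099)."""
        year_part, month_symbol, day_part = code.split()
        month = BILINGUAL_MONTHS[month_symbol.upper()]
        return date(2000 + int(year_part), month, int(day_part))

    print(parse_best_before("11 MA 10"))  # -> 2011-05-10, i.e., May 10, 2011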

On the other end of the spectrum, the product reviews posted on Amazon are typically very reliable. Amazon reveals a lot of information about the reviewers, such as their "real name," their other reviews, and the "verified purchase" stamp. It also provides many filtering and ranking mechanisms, such as the ability for other users to comment on reviews, vote on their helpfulness, flag whether a comment added to the discussion or is abusive, and indicate whether a given reviewer should be ignored.

Unfortunately, Amazon is the exception, not the rule; it is one of the few sites out there where everybody knows when you are a dog. Twitter's verified accounts seemed promising, but since the program was closed to the general public, unless you are a celebrity, you are out of luck proving that you are not the person behind that account with your name and your photo. Of course, sometimes having a verified account may play against you, as Rep. Anthony Weiner found out in recent weeks.

Reflecting on the low quality of information generally available, I concede that skeptics have reasons not to hop onto the social media bandwagon mindlessly. But what we are really observing is just an amplification phenomenon, and a moment in time that, many decades from now, will be seen as the infancy of social technologies.

Since the first pieces of "persistent" content started being produced as rough drawings in some prehistoric cave thousands of years ago, the bad has outnumbered the good by orders of magnitude. Creating good content is the exception, and social media amplifies all kinds of content. In part, there are lots of bad Yahoo! Answers because there has always been a high degree of disinformation in the world. The only difference is that disinformation can now be spread easily, but the same applies to the good content.

On top of that, in the same way that natural ecosystems are in a constant state of imbalance but trend towards equilibrium, information ecosystems will find themselves in apparent disarray from time to time. The original Yahoo! Search, editorialized by real people, once dominated the Internet. It eventually became inefficient, and the PageRank-driven Google search took over. That worked really well for several years, but it is now also showing its age. Better filters will be developed to overcome the current deficiencies, and this battle will never end. The dynamic between the quality of content and the quality of filters will perpetually swing like a pendulum, as it always has.

Is this the age of disinformation? Yes, but no more than any other age in the past. The fact that, by producing more content in general, we also increase the quantity of good content should make us optimistic that we are better off today than we were yesterday. If the cost of coming up with one more Mozart is producing thousands of Salieris, so be it: we may end up finding that Salieris are not that bad after all.





From the batcomputer to Quora: the quest for the perfect answering machine

1 08 2011

Note: I’m resuscitating this blog one more time, but slowly: copying my posts from Biznology and other places to here and applying minor edits. Naturally, they lost their freshness, but I want to make this WordPress blog an archive of all my posts.

As previously seen in Biznology:

When Quora announced in May that it was eliminating its policy against self-promoting questions and answers, some analysts wondered if that was opening the gates for spammers to dominate the conversation. The reality is that the evolution of Q&A services is not much different from what Google and other search engines have been experiencing over the years: a battle to separate the wheat from the chaff, where the chaff keeps finding creative ways to look like the wheat. Keep reading, and you'll see why developing the perfect Q&A engine should not be our real objective here.

As a kid, I spent my fair share of hours watching re-runs of campy TV shows, including the classic Batman TV series from the '60s. I remember how the batcomputer was able to answer any question you asked it, no matter how weird or convoluted it was. For those of you who never had the privilege (?) of seeing the precursor of IBM's Watson, here it is, courtesy of YouTube (it's a long video, so you may want to jump directly to the 2:20 mark):

Yes, you saw it right. The batcomputer was fed a bunch of alphabet soup letters and gave the dynamic duo the answer they were looking for: where they should go next to complete their mission. However, as a sign of things to come, Batman then goes to extremes and feeds the batcomputer the Yellow Pages directory, but, oh the horror, the batcomputer fails miserably at getting them a more precise answer to their follow-up question.

More than 40 years later, our quest for the infallible computer has not changed much. Watson could easily answer Jeopardy! questions about song lyrics and book topics, but choked when facing more nuanced themes. That was not very different from the 18th century “Mechanical Turk”, which was capable of winning chess games, solving puzzles, conversing in English, French and German and even answering questions about people’s age and marital status, but had its fair share of defeats.

I concede that services like Wolfram Alpha, ChaCha, and Quora raised the bar compared to early players such as Yahoo! Answers and WikiAnswers, but they all fall short when addressing complex, subtle, or fringe questions.

If you don't believe me, just try it yourself. Use your favorite online Q&A service to ask a question whose answer you can't easily find in Wikipedia or via a quick Google search, and let me know if you get anything meaningful back.

Quora gave many of us hope that we would finally get a high-quality, well-curated Q&A service. It's becoming increasingly clear that, albeit a step forward, Quora is not the all-knowing oracle we were looking for.

Are we ever going to find the perfect Q&A service, where even nuanced questions get satisfactory responses? My guess is "no," but not even Adam West's noodle-eating batcomputer would know the answer to that.

In fact, at the end of the day, that answer is not relevant at all. As we make strides in the information technology journey, our fundamental objective is not to replace people with machines. Our real target is to free us all from as many mundane and "automatable" tasks as possible, so that we can focus our efforts and energy more and more on the tasks that only humans can do. Having increasingly smarter systems that can answer most of our trivial questions is not a sign of our defeat to "our new computer overlords." It is rather a great opportunity to redefine what being human actually means.





Data lust, tacit knowledge, and social media

27 07 2011

Note: I’m resuscitating this blog one more time, but slowly: copying my posts from Biznology and other places to here and applying minor edits. Naturally, they lost their freshness, but I want to make this WordPress blog an archive of all my posts.

As previously seen in Biznology:

[Image: Data Center Lobby, by WarzauWynn via Flickr]

We are all witnessing the dawn of a new information technology era: the hyper-digitization of the world around us. While the physical world is being captured and monitored via smart sensors, human interactions in both the personal and business domains are making their way to the binary realm via social media. Did we finally find the treasure map that will lead us to the Holy Grail of information nirvana? Is the elusive tacit knowledge finally within the reach of this generation? Those are questions that not even Watson can answer, but I would dare say that we are still very far from getting anywhere close to that.

The Internet has come a long way since its early days of consumerization in the 1990s, and we're often amazed by how disruptive it has been, and still is, in several aspects of our personal and business lives. The more people and information get connected, the more value is derived, and we often hear that there's much more to come. This is nothing new, of course: the lure of the new has led us to believe that technology will eventually solve all our problems ever since the days when "techne" was more about art, skill, and craft than space travel, Jeopardy!-champion computers, and nuclear science. In the last few years, as our ability to digitize the world around us improved, our data lust was awakened, and we are currently seeing an explosion of information moving from the offline world to bits and bytes.

The expectations are high. A recent article at Mashable stated:

Do you think there’s a lot of data on the Internet? Imagine how much there is in the offline world: 510 million square kilometers of land, 6.79 billion people, 18 million kilometers of paved roads, and countless objects inhabit the Earth. The most exciting thing about all this data? Technologists are now starting to chart and index the offline world, down to street signs and basketball hoops.

Tragedies like the earthquake-tsunami-nuclear plant combo in Japan are powerful reminders that data alone won't save us. Digitizing information is an important first step, but it's the easy one. A good proxy for understanding the difference between collecting the data and changing the world is the human genome sequencing effort: once that big effort was finished, the question morphed from "how fast can we do it?" to "what's next?" We got the book, but it's written in an unknown language that will take generations to decipher.

Raising the stakes even further, the promise of finally getting the keys to tacit knowledge—defined as “knowledge that is difficult to transfer to another person by means of writing it down or verbalising it” (Wikipedia) or, more broadly, “the accumulated knowledge that is stored in our heads and in our immediate personal surroundings” (PwC article)—has often been used as a carrot to justify social media investments in the corporate world. The same PwC article says:

Tacit knowledge can be unleashed and shared as never before by connecting people ubiquitously through social networking and its closely related partner, collaboration. In large and small companies alike, tacit knowledge is stored in the heads and personal information collections of thousands of employees of all levels, not to mention their clients’ personal stores of information. Up until now, tacit knowledge has scarcely been captured in conventional computer-based databases because it has not been easy to “tap,” summarize, save, and use in day-to-day business.

After years of observing companies aiming for that moving target, it has become clear to me that most tacit knowledge will remain out of bounds to us for the time being. This is not meant as a blow to the importance of social media in the enterprise; in the long term, having reasonable expectations will only help us all. If you use the Wikipedia definition, the conclusion is actually easy and obvious: if tacit knowledge is the knowledge that is difficult to write down or verbalize, it is clearly not a good candidate for digitization.

The actual low-hanging fruit of social media in corporations is not tacit knowledge. Using the widespread iceberg metaphor, the tip of the iceberg is the so-called explicit knowledge, i.e., "knowledge that is codified and conveyed to others through dialog, demonstration, or media such as books, drawings, and documents." Much of that information is already digitized in e-mails, bookmarks, documents, and IM conversations, but it is often inaccessible to those who need it when they need it. Moving conversations away from those traditional channels to more shareable and "spreadable" media, and improving the filtering and distribution mechanisms, will enable us to harvest the early fruits of our corporate social media efforts.

What about tacit knowledge? This four-year-old article provides a good analysis of it. Much of it will remain for years in the "can't be recorded" or "too many resources required to record" buckets. Social media can help by uncovering hooks hinting that some of that knowledge exists, and by suggesting the individuals or groups most likely to possess it, but the technology and processes to fully discover and digitize it are not here yet. Even if you are an avid user of Twitter, Facebook, or social business platforms and operate in hyper-sharing mode, how much of your knowledge is actually available there? Very little, I would guess.

So, before declaring that you are about to unleash the tacit knowledge in your company, take a deep breath and a step back. That iceberg might be much bigger than you thought. Data lust can be inebriating, but reality will soon take over.







