The infancy of social technologies

3 08 2011

Note: I’m resuscitating this blog one more time, but slowly: copying my posts from Biznology and other places to here and applying minor edits. Naturally, they lost their freshness, but I want to make this WordPress blog an archive of all my posts.

As previously seen in Biznology:

Alex Pickering Transfer Company, early moving ...

Image via Wikipedia

The last 20 years saw knowledge workers adding a steady stream of tools to their repertoire: increasingly sophisticated office suite software, email, the Internet, instant messaging, voice over IP, Web conferences, and, in the last decade, a number of social technologies in the form of blogs, wikis, social networks, microblogging and others. Google+ is just the latest addition to the mix, introducing some interesting ideas to a space that already seemed quite mature. Nobody knows for sure if Google+ will ever dethrone Facebook and Twitter, but the buzz it created has already shown something: our allegiance to any particular social platform is as reliable as that of a mercenary waiting for the highest bidder. Taking a step back, it becomes clear that we have come a long way since the days when Wikipedia sounded like a misplaced hippie idea transplanted from the ’60s. But make no mistake: we are still witnessing the infancy of social technologies, and there is much more to come.

David Allen, of Getting Things Done fame, said in an interview with the Harvard Business Review magazine earlier this year (May 2011):

Peter Drucker said that the toughest job for knowledge workers is defining the work. A century ago, 80% of the world made and moved things. You worked as long as you could, and then you slept, and then you got up and worked again. You didn’t have to triage or make executive decisions. It’s harder to be productive today because the work has become much more complex.

I have no idea how much that percentage has changed since then, but I suspect that in much of the world, a significant number of workers now “make and move” knowledge and information, as opposed to physical goods. Of course, this is no earth-shattering statement, but what is sometimes missed in this obvious assertion is that the same kinds of inefficiencies and constraints that limited the production and distribution of “things” one hundred years ago can be observed in the way we deal with knowledge and information today. By visualizing information as a “thing” that can be produced, distributed and consumed, we can better understand how far we still are from an efficient knowledge marketplace.

While we spend countless hours debating whether email is dead, whether IM is a productivity booster or killer, and whether Twitter and Facebook and Google+ will be here 5 years from now, the fact of the matter is that each new social technology brings new mechanisms trying to solve the same problem: reducing inefficiencies in the way we create, capture and move information. While MySpace has likely gone the way of the Dodo, like GeoCities did before it, they both introduced some memes and patterns that are still alive today. Wikipedia, blogs, podcasts, Friendster, Facebook, Twitter and FourSquare all contributed to this mix, and social business platforms are continuously incorporating several of those concepts and making them available to knowledge workers.

FedEx, Amazon, and Walmart have all created very efficient ecosystems to move goods by reducing or eliminating obstacles to efficiency. They make the complex task of moving goods a painless experience–at least most of the time. For non-physical goods, we’re not even close to that. Information flows are inefficient across the value chain. Compared to their counterparts in the physical world, our mechanisms to digitize information are precarious, the channels to distribute it are cumbersome, and our filters to screen it are primitive.

However, eliminating inefficiencies does not necessarily mean eliminating barriers altogether. Sticking to the physical goods metaphor, while there are items that you want to distribute to everybody, like water, food, sanitation, and medication, there are others that you need to control more selectively (flowers for your wife, or Punjabi-language TV shows for a Punjabi-speaking population). Some of the problems we attribute to email or Facebook communications are simply a mismatch between the medium and the nature of the message, not an intrinsic failure of the tools themselves. The Google+ concept of circles and streams is a good start, but still very far from perfect. After spending a few minutes there, you will notice that in some cases you are still getting more information than you wanted, and in others not even a small percentage of what you need. That would be unacceptable today for physical goods: just imagine receiving all sorts of unwanted books or groceries or clothes at your door every day, but having no way to get the few things you need to live a good life.

Thus, before you get too carried away with the latest and greatest social technology darling, be it FourSquare, Tumblr, Quora, Zynga, or Google+, know that we still have a long way to go. If the knowledge mountain is Everest and social technologies are the tools to climb it, we have not even gotten to Kathmandu yet.





The Age of Disinformation

2 08 2011


As previously seen in Biznology:

My Room - Looks Like I've Got My Work Cut Out ...

Image by raider3_anime via Flickr

Coincidentally or not, after I covered the topic of Q & A services in my last Biznology post, I’ve heard complaints from three different acquaintances about the low quality of knowledge in Yahoo! Answers, one of them mockingly calling this world where everybody is an expert “the age of disinformation.” Another friend of mine has recently complained about getting mostly useless content–with zero editorial and zero user reviews–from reputable sites whenever he Googles “<non-mainstream product> review”. Has filter failure become so prevalent that, despite all the information available to us, we are no better off than we were 20 years ago, when content was scarce, difficult to produce and difficult to access?

Three months ago, my wife called me from the grocery store with a question: if a product has an expiry date of “11 MA 10”, does that mean May 10, 2011 (which would be good, since it was still April), or March 10, 2011 (which would mean the product was way past its “best before” date)?

Naturally, my first instinct was to Google it, and inevitably I ended up getting a bunch of entries in Yahoo! Answers. Here are some of the pearls of wisdom I found:

“March. May has no abbreviation”

“I think it means May 11. Unless it’s on something that lasts a long time, like toothpaste. Then it’s probably 2011”

“march” (wrong; the right answer, I found later, was “May 10, 2011”)

“most likely March cuz May is so short they can just put the full month”

“I believe it’s May… I think March would be Mar”

I finally started ignoring any result coming from Yahoo! and found the definitive answer: the format NN AA NN is a Canadian thing–I live in Toronto–and it’s the doing of the Canadian Food Inspection Agency. You can find the whole reference here. Apparently, to get to month abbreviations that work both in English and French, that government agency decided to use “monthly bilingual symbols.” The problem is, if you don’t know the context, and if you are not accustomed to that convention, you might mistakenly assume that MA is March, that JN is January, or that the two numbers at the beginning are the day, not the year. When it comes to food safety, relying on a standard that is so easily subject to misinterpretation is something you would probably like to avoid.
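As an aside, once you know the agency’s symbol table, decoding these codes is mechanical. Here is a minimal Python sketch based on the CFIA’s bilingual month symbols as I understand them, assuming the year-first convention from the “11 MA 10” example above (the function name is mine, not anything official):

```python
from datetime import date

# Bilingual (English/French) month symbols published by the CFIA.
CFIA_MONTHS = {
    "JA": 1, "FE": 2, "MR": 3, "AL": 4, "MA": 5, "JN": 6,
    "JL": 7, "AU": 8, "SE": 9, "OC": 10, "NO": 11, "DE": 12,
}

def parse_best_before(code: str) -> date:
    """Parse a 'YY SYMBOL DD' best-before code, e.g. '11 MA 10'."""
    yy, symbol, dd = code.split()
    return date(2000 + int(yy), CFIA_MONTHS[symbol.upper()], int(dd))

print(parse_best_before("11 MA 10"))  # -> 2011-05-10, i.e. May 10, 2011
```

With the table in hand, the grocery-store question answers itself: MA is May, not March.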

At the other end of this spectrum, the product reviews posted at Amazon are typically very reliable. Amazon reveals a lot of information about the reviewers, such as their “real name,” their other reviews, and the “verified purchase” stamp. Also, many filtering and ranking mechanisms are provided, such as the ability for other users to comment on reviews, vote for helpfulness, and say whether a comment added to the discussion, whether it is abusive, or whether a given reviewer should be ignored.

Unfortunately, Amazon is the exception, not the rule: one of the few sites out there where everybody knows when you are a dog. Twitter’s verified accounts seemed promising, but since they closed the program to the regular public, unless you are a celebrity, you are out of luck proving that you are not the person behind that account with your name and your photo. Of course, sometimes having a verified account may play against you, as Rep. Anthony Weiner found out in the last few weeks.

Reflecting on the low quality of information generally available, I concede that skeptics have reasons not to hop onto the social media bandwagon mindlessly. But what we are really observing is just an amplification phenomenon, and a moment in time that, many decades from now, will be seen as the infancy of social technologies.

Since the first pieces of “persistent” content started being produced as rough drawings in some prehistoric cave thousands of years ago, the bad has outnumbered the good by orders of magnitude. Creating good content is the exception, and social media amplifies all kinds of content. In part, there are lots of bad Yahoo! Answers entries because we have always had a high degree of disinformation in the world. The only difference is that disinformation can now be easily spread, but that also applies to the good content.

On top of that, the same way natural ecosystems are in a constant state of imbalance but trend towards an equilibrium, information ecosystems will find themselves in apparent disarray from time to time. The original Yahoo! Search, editorialized by real people, once dominated the Internet. It soon became inefficient, and then the PageRank-driven Google search took over. It worked really well for several years, but it is now also showing its age. Better filters will be developed to overcome the current deficiencies, and this battle will never end. The dynamic between quality of content and quality of filters will perpetually behave like a pendulum, as it always has.
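To make the filter idea concrete, here is a toy sketch of the link-analysis approach behind PageRank: a page earns rank from the pages that link to it, so good pages can be found without any human editor. The four-page link graph below is made up for illustration; real search ranking obviously involves far more signals than this.

```python
# Hypothetical link graph: page -> pages it links to.
links = {
    "a": ["b", "c"],
    "b": ["c"],
    "c": ["a"],
    "d": ["c"],
}

def pagerank(links, damping=0.85, iterations=50):
    """Plain power iteration over a link graph (no dangling nodes)."""
    pages = list(links)
    rank = {p: 1.0 / len(pages) for p in pages}
    for _ in range(iterations):
        # Every page keeps a small baseline rank...
        new = {p: (1 - damping) / len(pages) for p in pages}
        # ...and passes the rest of its rank along its outgoing links.
        for page, outgoing in links.items():
            share = rank[page] / len(outgoing)
            for target in outgoing:
                new[target] += damping * share
        rank = new
    return rank

ranks = pagerank(links)
# "c" collects the most inbound links, so it ends up ranked highest.
```

The pendulum point above is visible even in this toy: the moment creators learn that inbound links drive rank, they start manufacturing links, and the filter has to evolve again.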

Is this the age of disinformation? Yes, but no more than any other in the past. The fact that, by producing more content in general, we also increase the quantity of good content, should make us optimistic that we are better off today than we were yesterday. If the cost of coming up with one more Mozart is to produce thousands of Salieris, so be it: we may end up finding that Salieris are not that bad after all.





From the batcomputer to Quora: the quest for the perfect answering machine

1 08 2011


As previously seen in Biznology:

When Quora announced earlier this month that it was eliminating its policy against self-promoting questions and answers, some analysts wondered if that was opening the gates for spammers to dominate the conversation. The reality is that the whole evolution of Q&A services is not much different from what Google and other search engines have been experiencing throughout the years. It is a battle to separate the wheat from the chaff, where the chaff keeps finding creative ways to look like the wheat. Keep reading, and you will see why developing the perfect Q&A engine should not be our real objective here.

As a kid, I spent a fair number of hours watching re-runs of campy TV shows, including the classic Batman TV series from the ’60s. I remember how the batcomputer was able to answer any question you asked it, no matter how weird or convoluted it was. For those of you who never had the privilege (?) of seeing the precursor of IBM’s Watson, here it is, courtesy of YouTube (it’s a long video, so you may want to jump directly to the 2:20 mark):

Yes, you saw it right. The batcomputer was fed a bunch of alphabet soup letters and gave the dynamic duo the answer they were looking for: where they should go next to complete their mission. However, as a sign of things to come, Batman then tries to go extreme and feeds the batcomputer the Yellow Pages directory book, but–oh the horror–the batcomputer fails miserably trying to get a more precise answer to their subsequent question.

More than 40 years later, our quest for the infallible computer has not changed much. Watson could easily answer Jeopardy! questions about song lyrics and book topics, but choked when facing more nuanced themes. That was not very different from the 18th-century “Mechanical Turk”, which was capable of winning chess games, solving puzzles, conversing in English, French and German, and even answering questions about people’s age and marital status, but had its fair share of defeats.

I concede that services like Wolfram Alpha, ChaCha and Quora have raised the bar compared to early players such as Yahoo! Answers and WikiAnswers, but they all fall short when addressing complex, subtle or fringe questions.

If you don’t believe me, just try it yourself. Use your favorite online Q&A service to ask a question whose answer you can’t easily find in Wikipedia or via a quick Google search, and let me know if you get anything meaningful back.

Quora gave many of us the hope that we would be finally getting a high-quality, well-curated Q&A service. It’s becoming increasingly clear now that, albeit a step forward, Quora is not the know-all oracle that we were looking for.

Are we ever going to find the perfect Q&A service, where more nuanced questions get satisfactory responses? My guess is “no”, but not even Adam West’s noodle-eating batcomputer would know the answer to that.

In fact, at the end of the day, that answer is not relevant at all. As we make strides in the information technology journey, our fundamental objective is not to replace people with machines. Our real target is to free us all from as many mundane and “automatable” tasks as possible, so that we can focus our efforts and energy more and more on the tasks that only humans can do. Having increasingly smarter systems that can answer most of our trivial questions is not a sign of our defeat to “our new computer overlords.” It is rather a great opportunity to redefine what being human actually means.





Data lust, tacit knowledge, and social media

27 07 2011


As previously seen in Biznology:

Data Center Lobby

Data Center Lobby by WarzauWynn via Flickr

We are all witnessing the dawn of a new information technology era: the hyper-digitization of the world around us. While the physical world is being captured and monitored via smart sensors, human interactions in both personal and business domains are making their way to the binary realm via social media. Did we finally find the treasure map that will lead us to the Holy Grail of information nirvana? Is the elusive tacit knowledge finally within the reach of this generation? Those are questions that not even Watson can answer, but I would dare to say that we are still very far from getting anywhere close to that.

The Internet has come a long way since its early days of consumerization in the 1990s, and we’re often amazed by how disruptive it has been–and still is–in several aspects of our personal and business lives. The more people and information get connected, the more value is derived–and we often hear that there’s much more to come. This is nothing new, of course: the lure of the new has led us to believe that technology will eventually solve all our problems ever since the days when “techne” was more art, skill and craft than space travel, Jeopardy!-champion computers, and nuclear science. In the last few years, as our ability to digitize the world around us improved, our data lust was awakened, and we are currently seeing an explosion of information moving from the offline world to bits and bytes.

The expectations are high. A recent article at Mashable stated:

Do you think there’s a lot of data on the Internet? Imagine how much there is in the offline world: 510 million square kilometers of land, 6.79 billion people, 18 million kilometers of paved roads, and countless objects inhabit the Earth. The most exciting thing about all this data? Technologists are now starting to chart and index the offline world, down to street signs and basketball hoops.

Tragedies like the earthquake-tsunami-nuclear plant combo in Japan are powerful reminders that data alone won’t save us. Digitizing information is an important first step, but it’s the easy one. A good proxy for understanding the difference between collecting the data and changing the world is the human genome sequencing effort: once we finished that big effort, the question morphed from “how fast can we do it?” to “what’s next?” We got the book, but it’s written in an unknown language that will take generations to decipher.

Raising the stakes even further, the promise of finally getting the keys to tacit knowledge—defined as “knowledge that is difficult to transfer to another person by means of writing it down or verbalising it” (Wikipedia) or, more broadly, “the accumulated knowledge that is stored in our heads and in our immediate personal surroundings” (PwC article)—has often been used as a carrot to justify social media investments in the corporate world. The same PwC article says:

Tacit knowledge can be unleashed and shared as never before by connecting people ubiquitously through social networking and its closely related partner, collaboration. In large and small companies alike, tacit knowledge is stored in the heads and personal information collections of thousands of employees of all levels, not to mention their clients’ personal stores of information. Up until now, tacit knowledge has scarcely been captured in conventional computer-based databases because it has not been easy to “tap,” summarize, save, and use in day-to-day business.

After years of observing companies aiming for that moving target, it became clear to me that most tacit knowledge will remain out of bounds to us for the time being. This is not meant to be a blow to the importance of social media in the enterprise; in the long term, having reasonable expectations will only help us all. If you use the Wikipedia definition, this actually turns out to be an easy and obvious conclusion: if tacit knowledge is the knowledge that is difficult to write down or verbalize, it is clearly not a good candidate for digitization.

The actual low-hanging fruit of social media in corporations is not tacit knowledge. Using the widespread iceberg metaphor, the tip of the iceberg is the so-called explicit knowledge, i.e., “knowledge that is codified and conveyed to others through dialog, demonstration, or media such as books, drawings, and documents”. Much of that information is already digitized in e-mails, bookmarks, documents and IM conversations, but it is often inaccessible to those who need it when they need it. Moving conversations away from those traditional channels to more shareable and “spreadable” media, and improving the filtering and distribution mechanisms, will enable us to harvest the early fruits of our corporate social media efforts.

What about tacit knowledge? This four-year-old article provides a good analysis of it. Much of it will remain for years in the “can’t be recorded” or “too many resources required to record” buckets. Social media can help by uncovering the hooks hinting that some of that knowledge exists, and by suggesting the individuals or groups most likely to possess it, but the technology and processes to fully discover and digitize it are not here yet. Even if you are an avid user of Twitter or Facebook or social business platforms, operating in hyper-sharing mode, how much of your knowledge is actually available there? Very little, I would guess.

So, before declaring that you are about to unleash the tacit knowledge in your company, take a deep breath and a step back. That iceberg might be much bigger than you thought. Data lust can be inebriating, but reality will soon take over.





Twitter and Politics

14 12 2010
Election night crowd, Wellington, 1931 

Image by National Library NZ on The Commons via Flickr

As previously seen on Biznology on Nov 9, 2010:

Back in the summer, I wrote a pair of posts about social media and the FIFA World Cup in South Africa, and Mike Moran talked about reaching your audience during that big event. Well, one can argue that elections and politics generate passionate discussions that rival or even surpass those of soccer fans, and last much longer than the 30 days of the popular tournament. Following a Twitter list of World Cup players is entertaining, but a list of actual heads of state can give us a unique glimpse into how diplomacy is shaping up in the social media space. If you want to know which heads of state are using Twitter, how active they are, and their following/follower patterns, you have come to the right place.

In the past month, I had the unusual experience of being in Brazil during the first round of the general elections (October 3), then back in Toronto during the mayoral election (October 25) and the second round of the Brazilian presidential elections (October 31). On top of that, the US midterm elections were a hot topic around the globe last week, making it almost impossible for me to ignore politics for the last six weeks or so. Naturally, politics is a hot topic, but not one that I’m particularly keen on discussing in this blog. Having said that, I find it fascinating to analyze the social media layer that is permeating the political scene globally.

Inspired by a TechCrunch article on Twitter diplomacy prior to the G20 meeting in Seoul this week, I used the @davos/g20 list curated by the World Economic Forum as a starting point to visualize how the heads of state are using Twitter. Being from Latin America, I supplemented that list with a few other verified accounts to have a better view of regional politics as well.

I tend to write long and convoluted posts, but in this particular case, a picture is definitely worth a thousand words. So, instead of navigating the troubled waters of political analysis, I’ll just leave you with a number of visualizations covering different aspects of the Twitter social networking dynamics among this very select group, courtesy of my niece, Gabriela Passos, who’s visiting me this month.

Some points to consider when looking at these charts:

  • They don’t show a complete picture: there might be more heads of state using Twitter, but I preferred to use a curated list from a reliable source as my main reference.
  • The number of people followed by these accounts is relevant in at least one subtle, but important way: you can only send direct Twitter messages to people who follow you. By following a large number of Twitter users, these leaders open a private channel that may reveal interesting insights that they would not have access to otherwise.
  • The number of tweets shown below is a historical cumulative total as of this writing, a metric that favors early adopters. A more interesting metric would be the frequency of tweets over time, but that would be too time-consuming for me to compute. I bet there will be online tools covering that aspect some time soon.
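For what it’s worth, computing that frequency is simple once you have the raw timestamps; the hard part is collecting them. A small sketch with made-up timestamps (a real version would pull them from the Twitter API for each account):

```python
from collections import Counter
from datetime import datetime

# Hypothetical tweet timestamps for a single account.
timestamps = [
    datetime(2010, 9, 14), datetime(2010, 9, 20),
    datetime(2010, 10, 3), datetime(2010, 10, 25), datetime(2010, 10, 31),
    datetime(2010, 11, 2),
]

def tweets_per_month(stamps):
    """Count tweets per (year, month) bucket."""
    return Counter((t.year, t.month) for t in stamps)

for (year, month), count in sorted(tweets_per_month(timestamps).items()):
    print(f"{year}-{month:02d}: {count} tweets")
```

Bucketing by month (or week) removes the early-adopter bias of the cumulative total: an account that tweeted heavily in 2008 but went quiet would no longer look active.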

The first infographic shows how active each of these head-of-state Twitter accounts is. It’s interesting to note that @whitehouse (1.8 million followers), @PresidenciaMX (150 thousand followers) and @Laura_Ch (11 thousand followers), despite being orders of magnitude apart in the attention they get, all tweet a lot. On the other hand, @JuliaGillard, Prime Minister of Australia, has only 166 tweets, but a considerable number of followers.

Number of Tweets
(c) Gabriela Passos 2010

The second set of charts reveals the followers / following pattern. France and Turkey tie on the low end, both of them not following anybody at all, while @Number10gov and @BarackObama follow over half a million people.

Legend for Twitter charts below
(c) Gabriela Passos 2010

North America (Canada, U.S. and Mexico)
(c) Gabriela Passos 2010


Central and South America (Costa Rica, Chile, Ecuador, Venezuela, Argentina, Brazil)
(c) Gabriela Passos 2010

Europe (UK, France, Russia, Turkey)
(c) Gabriela Passos 2010

Oceania (Australia), Africa (South Africa), Asia (South Korea)
(c) Gabriela Passos 2010

Finally, the last diagram shows that, even among heads of state, following your peer is not always reciprocated in kind. Cristina Kirchner, president of Argentina, follows a considerable number of her Latin American colleagues, but only two of them follow her back.


Cristina Kirchner follows / followers
(c) Gabriela Passos 2010
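Incidentally, the reciprocity shown in that diagram is easy to compute once the follow relationships are available as data. Here is a sketch with made-up account names, mirroring the “follows many, few follow back” pattern; only the general idea, not any real follow graph, is taken from the charts above:

```python
# Hypothetical directed follow graph: account -> set of accounts it follows.
follows = {
    "leader_a": {"leader_b", "leader_c", "leader_d"},
    "leader_b": {"leader_a"},
    "leader_c": set(),
    "leader_d": {"leader_a"},
}

def follow_backs(graph, account):
    """Accounts that `account` follows and that follow it back."""
    return {peer for peer in graph.get(account, set())
            if account in graph.get(peer, set())}

# leader_a follows three peers, but only two of them reciprocate.
print(follow_backs(follows, "leader_a"))
```

Given the full follow graph for the @davos/g20 list, the same two-line set comprehension would reproduce the reciprocity analysis behind the hand-drawn diagram.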

One interesting side note: seeing my niece create all these infographics by hand, it became painfully clear to me that, despite all the efforts to develop better social networking visualization tools (Mashable has a good list here), we still have a long way to go to get the most from the information hidden under the surface in Twitter and elsewhere in the social media landscape.

After seeing the finished product, you can’t help but conclude that politics is but one more area where social media (and Twitter in particular) has become the place for activity that would have happened elsewhere, and has spawned activity that would not have happened at all.





On Leaks, Privacy and Social Media

13 12 2010
Logo used by Wikileaks

Image via Wikipedia

As previously seen on Biznology:

Last week, we learned more about world leaders and diplomacy than some of us would care to know–with revelations about Gaddafi and his nurse being the front-runner candidate for the most TMZ-like material made available, courtesy of Wikileaks. All the buzz and panic that ensued following the release of the US Embassy diplomatic cables motivated a colleague of mine to ask me: are social media’s mantras of transparency, information sharing, and digitization of conversations, relationships and activities saving us, or are they dooming us all?

Even though Wikileaks has been called “social media journalism” by some, the seeds that enabled its model were planted long before the age of Twitter or Facebook. Clay Shirky had already stated in his book Here Comes Everybody:

(…) in an age of infinite perfect copyability to many people at once, the very act of writing and sending an e-mail can be a kind of publishing, because once an e-mail is sent, it is almost impossible to destroy all the copies, and anyone who has a copy can broadcast it to the world at will, and with ease. Now, and presumably from now on, the act of creating and circulating evidence of wrongdoing to more than a few people, even if they all work together, will be seen as a delayed but public act.

A clever Venn diagram circulated around the Internet in late October, suggesting that nothing on the Internet is actually private. The picture below was based on that diagram:

“A helpful Venn diagram” – derivation work

I would argue that the Internet is not the culprit either. I still remember a good friend of mine saying in the early 1990s: “the only things that are really private are the ones you never shared with anybody”. Thus, a Venn diagram depicting privacy vs. the Internet would look more like this:

A relatively recent case of private information leaking to the general public corroborates that point of view: Tiger Woods’ challenging year started with verbal accounts of marital infidelity and a phone voice message. No email, no tweets, no Facebook status updates took any part in it. In fact, the now number-two golfer in the world has finally turned on his Twitter account, seemingly to restore his public image. Ironically enough, by checking the Wikipedia entry on this subject, I learned that Woods and his former wife own a 155-foot yacht called Privacy!

Any concern that by adopting social media practices a company or person will be more vulnerable to this kind of exposure is not well supported by actual evidence. By adding transparency to conversations, relationships and activities, the use of social media may actually help reduce opportunities for doing the wrong or the hidden thing in the first place. Furthermore, it puts you in the driver’s seat–the remaining question being how good a driver you are.

Perhaps we are taking ourselves too seriously here. To keep all this in perspective, and also for a good laugh, I highly recommend reading Julien Smith’s post, All social media experts “are actually the same person,” Wikileaks documents reveal. Here’s a sneak preview:

“Paul from Miami,” as he is identified in Wikileaks documents, appears to be the source of an entire industry of Twitter experts who seemingly give the same advice and yet somehow all have over 20,000 Twitter followers each. (…) Meanwhile, Wikileaks founder Julian Assange suggested more bombshells might be on the way. Speculation was rampant that “SEO experts” and “marketing gurus” might also all be sourced from a single individual, or worse, be “Paul from Miami” as well. Paranoia is on the rise.

Anyway, this is Aaron from Toronto, who I promise is not the same person as Paul from Miami, but on the Internet, I guess you would never know.





Unskewing the Web: Curators as filters

11 12 2010
Sieves (40/365) 

Image by prettyflower via Flickr

As previously seen on Biznology:

This is my final post on the skewed Web. In the early days of Web 2.0 awareness, much was said about the new–now old–Web being all about participation: in the age of user-generated content, everybody and their mother became a publisher, leveling the playing field. An independent blogger could potentially be more influential than a New York Times columnist, and the role of editors in identifying and promoting relevant content seemed to be in its last days. What was unclear back then was that social media was not only lowering the barriers for content creators: it would eventually enable a new breed of editors, the social media curators.

In my previous post, I cited Clay Shirky’s assertion that the Internet did not bring us an information overload problem: we just needed better filters. However, the wholesale online sieves, like Google Search and Digg, created a different kind of problem: a giant global echo chamber, where we were all becoming collectively dumber. An online world dominated by PageRank and skewed crowdsourcing had the potential to dethrone TV as the ultimate idiot box.

As some of you may know, my academic background is in Biology–thus my frequent comparisons between life sciences and social media in my posts. Conservation Biology advocates that Biodiversity “is essential for the maintenance of vital ecosystem services, and ultimately for human survival”, and that we all need to focus on the conservation of all species, not only the cute ones. E.O. Wilson, renowned biologist and Harvard Professor, stated in his book “The Creation”:

“The pauperization of Earth’s fauna and flora was an acceptable price until recent centuries, when Nature seemed all but infinite, and an enemy to explorers and pioneers. (…) History now teaches a different lesson, but only to those who will listen. (…) The homogenization of the biosphere is painful and costly to our own species and will become more so.”

Likewise, the health and long-term viability of our knowledge ecosystems depend on diversity of ideas and opinions. Online content curators are playing an increasingly crucial role in preserving that diversity beyond the mainstream. But despite all the talk around online content curation, there’s still a long way to go.

Online content creators are well served today. Gone are the GeoCities and “home page” days, when you pretty much had to build everything from scratch or rely on professional help to generate content. You can go fancy and rely on a content management system, or just open a Twitter account and go crazy, 140 characters at a time.

Online content curators, on the other hand, are still poorly served. Robert Scoble has recently compiled a list of “The Seven Needs of Real-Time Curators”. One of my favourite online content curators, Bernie Michalik, uses a variety of social media channels to highlight interesting things he finds daily, both in the core and in the fringes of the online world. However, very few of us are keen enough to write elaborate blog posts or to create neat websites about fringe subject matters. Most of us tend to go for the quick and dirty: a quick retweet, a shortened URL, or a Facebook link, resulting in somewhat cryptic, hard-to-consume messages like this one:


Every time my wife sees my Twitter stream, full of messages like that, she says that it looks geeky and uninteresting. And she is right. Thankfully, a number of websites and apps are starting to make the online content curators’ life easier, the same way it happened to online content creators several years ago. Even Twitter and Facebook have recently made efforts to add a layer of translation, rendering links to photos and videos to more attractive thumbnails or embedded players.

The iPad app Flipboard gives us a glimpse of what is coming. This is a snapshot of how the tweet above is rendered in Flipboard:


The newspaper format and the rendering of the actual content (as opposed to just showing a shortened URL) go a long way toward making tweets more consumable.

Social media tools play a crucial role in lowering barriers to entry, allowing more people to become online content curators, and enabling diverse content to be easily absorbed and propagated. By preventing the extinction of diverse ideas, content curation tools will increasingly become instrumental in preserving our global online knowledge ecosystem, a.k.a. our collective intelligence.







