The infancy of social technologies

3 08 2011

Note: I’m resuscitating this blog one more time, but slowly: copying my posts from Biznology and other places to here and applying minor edits. Naturally, they lost their freshness, but I want to make this WordPress blog an archive of all my posts.

As previously seen in Biznology:

Alex Pickering Transfer Company, early moving ...

Image via Wikipedia

The last 20 years saw knowledge workers adding a steady stream of tools to their repertoire: increasingly sophisticated office suite software, email, the Internet, instant messaging, voice over IP, Web conferences, and, in the last decade, a number of social technologies in the form of blogs, wikis, social networks, microblogging and others. Google+ is just the latest addition to the mix, introducing some interesting ideas to a space that already seemed quite mature. Nobody knows for sure if Google+ will ever dethrone Facebook and Twitter, but the buzz it created showed something already: our allegiance to any particular social platform is as reliable as that of a mercenary waiting for the highest bidder. Taking a step back, it becomes clear that we have come a long way since the days when Wikipedia sounded like a misplaced hippie idea transplanted from the ’60s. But make no mistake: we are still witnessing the infancy of social technologies, and there is much more to come.

David Allen, of Getting Things Done fame, said in an interview with Harvard Business Review earlier this year (May 2011):

Peter Drucker said that the toughest job for knowledge workers is defining the work. A century ago, 80% of the world made and moved things. You worked as long as you could, and then you slept, and then you got up and worked again. You didn’t have to triage or make executive decisions. It’s harder to be productive today because the work has become much more complex.

I have no idea how much that percentage has changed since then, but I suspect that in much of the world, a significant number of workers now “make and move” knowledge and information, as opposed to physical goods. Of course, this is no earth-shattering statement, but what is sometimes missed in this obvious assertion is that the same kinds of inefficiencies and constraints that limited the production and distribution of “things” one hundred years ago can be observed in the way we deal with knowledge and information today. By visualizing information as a “thing” that can be produced, distributed and consumed, we can better understand how far we still are from an efficient knowledge marketplace.

While we spend countless hours debating whether email is dead, whether IM is a productivity booster or killer, and whether Twitter and Facebook and Google+ will be here five years from now, the fact of the matter is that each new social technology brings new mechanisms aimed at the same problem: reducing inefficiencies in the way we create, capture and move information. While MySpace has likely gone the way of the dodo, as Geocities did before it, they both introduced memes and patterns that are still alive today. Wikipedia, blogs, podcasts, Friendster, Facebook, Twitter and FourSquare all contributed to this mix, and social business platforms are continuously incorporating several of those concepts and making them available to knowledge workers.

FedEx, Amazon, and Walmart all created very efficient ecosystems to move goods by reducing or eliminating obstacles to efficiency. They make the complex task of moving goods a painless experience–at least most of the time. For non-physical goods, we’re not even close to that. Information flows are inefficient across the value chain. Compared to their counterparts in the physical world, our mechanisms to digitize information are precarious, the channels to distribute it are cumbersome, and our filters to screen it are primitive.

However, eliminating inefficiencies does not necessarily mean eliminating barriers altogether. Sticking to the physical goods metaphor, while there are items that you want to distribute to everybody, like water, food, sanitation, and medication, there are others that you need to control more selectively (flowers for your wife, or Punjabi-language TV shows for a Punjabi-speaking population). Some of the problems we attribute to email or Facebook communications are simply a mismatch between the medium and the nature of the message, not an intrinsic failure of the tools themselves. The Google+ concept of circles and streams is a good start, but still very far from perfect. After spending a few minutes there, you will notice that you are still getting more information than you wanted in some cases, and not even a small percentage of what you need in others. That would be unacceptable today for physical goods: imagine receiving all sorts of unwanted books or groceries or clothes at your door every day, while having no way to get the few things you need to live a good life.

Thus, before you get too carried away with the latest and greatest social technology darling, be it FourSquare, Tumblr, Quora, Zynga, or Google+, know that we still have a long way to go. If the knowledge mountain is Everest and social technologies are the tools to climb it, we have not even reached Kathmandu yet.





The Age of Disinformation

2 08 2011


As previously seen in Biznology:

My Room - Looks Like I've Got My Work Cut Out ...

Image by raider3_anime via Flickr

Coincidentally or not, after I covered the topic of Q&A services in my last Biznology post, I heard complaints from three different acquaintances about the low quality of knowledge in Yahoo! Answers; one of them mockingly called this world where everybody is an expert “the age of disinformation.” Another friend recently complained about getting mostly useless content–with zero editorial and zero user reviews–from reputable sites whenever he Googles “<non-mainstream product> review”. Has filter failure become so prevalent that, despite all the information available to us, we are no better off than we were 20 years ago, when content was scarce, difficult to produce and difficult to access?

Three months ago, my wife called me from the grocery store with a question: if a product has an expiry date of “11 MA 10”, does that mean May 10, 2011 (which would be good, since it was still April), or March 10, 2011 (which would mean the product was way past its “best before” date)?

Naturally, my first instinct was to Google it, and inevitably I ended up getting a bunch of entries in Yahoo! Answers. Here are some of the pearls of wisdom I found:

“March. May has no abbreviation”

“I think it means May 11. Unless it’s on something that lasts a long time, like toothpaste. Then it’s probably 2011”

“march” (wrong; the right answer, I found out later, was “May 10, 2011”)

“most likely March cuz May is so short they can just put the full month”

“I believe it’s May… I think March would be Mar”

I finally started ignoring any result coming from Yahoo! and found the definitive answer: the NN AA NN format is a Canadian thing–I live in Toronto–and it’s the doing of the Canadian Food Inspection Agency. You can find the whole reference here. Apparently, to get month abbreviations that work in both English and French, that government agency decided to use “monthly bilingual symbols.” The problem is, if you don’t know the context and are not accustomed to the convention, you might mistakenly assume that MA is March, that JN is June, or that the two numbers at the beginning are the day, not the year. When it comes to food safety, relying on a standard that is so easily misinterpreted is something you would probably want to avoid.
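For the curious, the convention fits in a few lines of code. This is just an illustrative sketch: the symbol table reflects the bilingual abbreviations as I understand them (MR for March and MA for May being the confusing pair), and `parse_cfia_date` is a hypothetical helper, not anything the agency publishes:

```python
from datetime import date

# Bilingual (English/French) month symbols, one per month.
# Note the trap: MR is March and MA is May.
CFIA_MONTHS = {
    "JA": 1, "FE": 2, "MR": 3, "AL": 4, "MA": 5, "JN": 6,
    "JL": 7, "AU": 8, "SE": 9, "OC": 10, "NO": 11, "DE": 12,
}

def parse_cfia_date(label: str, century: int = 2000) -> date:
    """Parse a 'YY MM DD' best-before label such as '11 MA 10'."""
    yy, symbol, dd = label.split()
    return date(century + int(yy), CFIA_MONTHS[symbol], int(dd))
```

With that table in hand, “11 MA 10” resolves unambiguously to May 10, 2011–but only because the code knows the context that a shopper in the grocery aisle does not.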

On the other side of the spectrum, the product reviews posted on Amazon are typically very reliable. Amazon reveals a lot of information about reviewers, such as their “real name,” their other reviews, and the “verified purchase” stamp. Many filtering and ranking mechanisms are also provided: other users can comment on reviews, vote on their helpfulness, flag a comment as adding to the discussion or as abusive, and even indicate that a given reviewer should be ignored.

Unfortunately, Amazon is the exception, not the rule: one of the few sites out there where everybody knows when you are a dog. Twitter’s verified accounts seemed promising, but since they closed the program to the general public, unless you are a celebrity you are out of luck proving that you are not the person behind that account bearing your name and your photo. Of course, sometimes having a verified account may play against you, as Rep. Anthony Weiner found out over the last few weeks.

Reflecting on the low quality of information generally available, I concede that skeptics have reasons not to hop on the social media bandwagon mindlessly. But what we are really observing is just an amplification phenomenon, and a moment in time that, many decades from now, will be seen as the infancy of social technologies.

Since the first pieces of “persistent” content were produced as rough drawings in some prehistoric cave thousands of years ago, the bad has outnumbered the good by orders of magnitude. Creating good content is the exception, and social media amplifies all kinds of content. In part, there are lots of bad Yahoo! Answers because we have always had a high degree of disinformation in the world. The only difference is that disinformation can now be spread easily–but the same applies to the good content.

On top of that, in the same way that natural ecosystems are in a constant state of imbalance but trend towards equilibrium, information ecosystems will find themselves in apparent disarray from time to time. The original Yahoo! Search, editorialized by real people, once dominated the Internet. It soon became inefficient, and the PageRank-driven Google search took over. It worked really well for several years, but it is now also showing its age. Better filters will be developed to overcome the current deficiencies, and this battle will never end. The dynamic between quality of content and quality of filters will perpetually swing like a pendulum, as it always has.

Is this the age of disinformation? Yes, but no more than any other in the past. The fact that, by producing more content in general, we also increase the quantity of good content should make us optimistic that we are better off today than we were yesterday. If the cost of coming up with one more Mozart is to produce thousands of Salieris, so be it: we may end up finding that Salieris are not that bad after all.





From the batcomputer to Quora: the quest for the perfect answering machine

1 08 2011


As previously seen in Biznology:

When Quora announced earlier this month that it was eliminating its policy against self-promoting questions and answers, some analysts wondered if that was opening the gates for spammers to dominate the conversation. The reality is that the whole evolution of Q&A services is not much different from what Google and other search engines have been experiencing over the years: a battle to separate the wheat from the chaff, where the chaff keeps finding creative ways to look like the wheat. Keep reading, and you’ll see why developing the perfect Q&A engine should not be our real objective here.

As a kid, I spent my fair share of hours watching reruns of campy TV shows, including the classic Batman TV series from the ’60s. I remember how the batcomputer was able to answer any question you asked it, no matter how weird or convoluted it was. For those of you who never had the privilege (?) of seeing the precursor of IBM’s Watson, here it is, courtesy of YouTube (it’s a long video, so you may want to jump directly to the 2:20 mark):

Yes, you saw it right. The batcomputer was fed a bunch of alphabet-soup letters and gave the dynamic duo the answer they were looking for: where they should go next to complete their mission. However, in a sign of things to come, Batman then goes to extremes and feeds the batcomputer the Yellow Pages directory, but–oh the horror–the batcomputer fails miserably at getting them a more precise answer to their subsequent question.

More than 40 years later, our quest for the infallible computer has not changed much. Watson could easily answer Jeopardy! questions about song lyrics and book topics, but choked when facing more nuanced themes. That was not very different from the 18th-century “Mechanical Turk,” which was capable of winning chess games, solving puzzles, conversing in English, French and German, and even answering questions about people’s ages and marital status, but had its fair share of defeats.

I concede that services like Wolfram Alpha, ChaCha and Quora have raised the bar compared to early players such as Yahoo! Answers and WikiAnswers, but they all fall short when addressing complex, subtle or fringe questions.

If you don’t believe me, try it yourself. Use your favorite online Q&A service to ask a question whose answer you can’t easily find in Wikipedia or via a quick Google search, and let me know if you get anything meaningful back.

Quora gave many of us hope that we would finally get a high-quality, well-curated Q&A service. It’s becoming increasingly clear now that, albeit a step forward, Quora is not the know-it-all oracle we were looking for.

Will we ever find the perfect Q&A service, where more nuanced questions get satisfactory responses? My guess is “no”, but not even Adam West’s noodle-eating batcomputer would know the answer to that.

In fact, at the end of the day, that answer is not relevant at all. As we make strides in the information technology journey, our fundamental objective is not to replace people with machines. Our real target is to free us all from as many mundane and “automatable” tasks as possible, so that we can focus our efforts and energy more and more on the tasks that only humans can do. Having increasingly smarter systems that can answer most of our trivial questions is not a sign of our defeat to “our new computer overlords.” It’s rather a great opportunity to redefine what being human actually means.





Data lust, tacit knowledge, and social media

27 07 2011


As previously seen in Biznology:

Data Center Lobby

Data Center Lobby by WarzauWynn via Flickr

We are all witnessing the dawn of a new information technology era: the hyper-digitization of the world around us. While the physical world is being captured and monitored via smart sensors, human interactions in both personal and business domains are making their way to the binary realm via social media. Did we finally find the treasure map that will lead us to the Holy Grail of information nirvana? Is the elusive tacit knowledge finally within the reach of this generation? Those are questions that not even Watson can answer, but I would dare to say that we are still very far from getting anywhere close to that.

The Internet has come a long way since its early days of consumerization in the 1990s, and we’re often amazed by how disruptive it has been–and still is–in several aspects of our personal and business lives. The more people and information get connected, the more value is derived–and we often hear that there’s much more to come. This is nothing new, of course: the lure of the new has led us to believe that technology will eventually solve all our problems ever since the days when “techne” was more about art, skill and craft than space travel, Jeopardy!-champion computers, and nuclear science. In the last few years, as our ability to digitize the world around us improved, our data lust was awakened, and we are currently seeing an explosion of information moving from the offline world to bits and bytes.

The expectations are high. A recent article at Mashable stated:

Do you think there’s a lot of data on the Internet? Imagine how much there is in the offline world: 510 million square kilometers of land, 6.79 billion people, 18 million kilometers of paved roads, and countless objects inhabit the Earth. The most exciting thing about all this data? Technologists are now starting to chart and index the offline world, down to street signs and basketball hoops.

Tragedies like the earthquake-tsunami-nuclear plant combo in Japan are powerful reminders that data alone won’t save us. Digitizing information is an important first step, but it’s the easy one. A good proxy for understanding the difference between collecting data and changing the world is the human genome sequencing effort: once that big push was finished, the question morphed from “how fast can we do it?” to “what’s next?” We got the book, but it’s written in an unknown language that will take generations to decipher.

Raising the stakes even further, the promise of finally getting the keys to tacit knowledge—defined as “knowledge that is difficult to transfer to another person by means of writing it down or verbalising it” (Wikipedia) or, more broadly, “the accumulated knowledge that is stored in our heads and in our immediate personal surroundings” (PwC article)—has often been used as a carrot to justify social media investments in the corporate world. The same PwC article says:

Tacit knowledge can be unleashed and shared as never before by connecting people ubiquitously through social networking and its closely related partner, collaboration. In large and small companies alike, tacit knowledge is stored in the heads and personal information collections of thousands of employees of all levels, not to mention their clients’ personal stores of information. Up until now, tacit knowledge has scarcely been captured in conventional computer-based databases because it has not been easy to “tap,” summarize, save, and use in day-to-day business.

After years of observing companies aiming at that moving target, it has become clear to me that most tacit knowledge will remain out of bounds for the time being. This is not meant as a blow to the importance of social media in the enterprise; in the long term, having reasonable expectations will only help us all. If you use the Wikipedia definition, the conclusion is actually easy and obvious: if tacit knowledge is, by definition, difficult to write down or verbalize, it is clearly not a good candidate for digitization.

The actual low-hanging fruit of social media in corporations is not tacit knowledge. Using the widespread iceberg metaphor, the tip of the iceberg is the so-called explicit knowledge, i.e., “knowledge that is codified and conveyed to others through dialog, demonstration, or media such as books, drawings, and documents.” Much of that information is already digitized in e-mails, bookmarks, documents and IM conversations, but it is often inaccessible to those who need it when they need it. Moving conversations away from those traditional channels to more shareable and “spreadable” media, and improving the filtering and distribution mechanisms, will enable us to harvest the early fruits of our corporate social media efforts.

What about tacit knowledge? This four-year-old article provides a good analysis of it. Much of it will remain for years in the “can’t be recorded” or “too many resources required to record” buckets. Social media can help by uncovering hooks hinting that some of that knowledge exists, and by suggesting the individuals or groups most likely to possess it, but the technology and processes to fully discover and digitize it are not here yet. Even if you are an avid user of Twitter, Facebook or social business platforms and operate in hyper-sharing mode, how much of your knowledge is actually available there? Very little, I would guess.

So, before declaring that you are about to unleash the tacit knowledge in your company, take a deep breath and a step back. That iceberg might be much bigger than you thought. Data lust can be inebriating, but reality will soon take over.





Business Books: The cover vs. the core

7 12 2009

For a person who deeply loves Biology and keeps blogging about Darwin, I have a confession to make: I never read The Origin of Species, only parts of it. There, I said it. I actually tried to get through it a few times, the last attempt being via Stanza on my iPhone:

Stanza for the iPhone: Origin of Species

Heck, I haven’t even skimmed Origin‘s Cliff’s Notes (that’s just a figure of speech: there are none, actually), so you could say that my knowledge of what Darwin said or thought is like second-hand smoking or back-seat driving: mostly hearsay. My saving grace is the five years I spent at university studying Biology. Furthermore, I would guess that most Biology students (at least in Brazil) have never seen a copy of Origin either.

On a smaller scale, many of us take a similar approach to business books. We have not read most of them – well, except maybe Sacha 🙂 – but we often have an opinion about them, typically based on indirect evidence.

I usually don’t go through the same book twice – life is short and time is at a premium – but I recently made an exception for The Wisdom of Crowds (2004) and The Long Tail (2006), two books that have been much maligned for supposedly championing the advent of new business models that never materialized or failed to deliver on their promise.

The Long Tail and The Wisdom of Crowds

Their respective authors even had face-offs of sorts with the excellent Malcolm Gladwell, of The Tipping Point and Blink fame – one friendly, the other not so much. By the way, if you are unfamiliar with Slate’s Book Club feature, you are in for a treat. It’s kind of like The Next Supermodel for the written world. I know that doesn’t sound very enticing, but the series is really good.

The major problem I see with both books is not their content: it’s their covers. Both books are fairly balanced at their core, depicting both supporting evidence and possible shortcomings for their arguments. But their covers are not as nuanced. Why the future of business is selling less of more and Why the many are smarter than the few, besides sounding like catchphrases written by the same marketing wiz, are hardly shy in the over-promising department.

What I learned from the re-reading process is that I have a much better appreciation for the content of these books now that the buzz around them is gone. It’s like listening to popular songs from years past after they have fallen into oblivion: you can see their actual merits and limitations more clearly, without being so influenced by the media. So, if you haven’t yet, give them a try; you may still learn a thing or two, whether you believe their premises or not.

I can’t help but think that, if The Origin of Species were published today, instead of the dull subtitle The preservation of favoured races in the struggle for life, it would carry something like: Why everything you knew about life will change forever.

The Origin of Species, original cover (Darwin Online)





ROI 2.0, Part 3: We don’t need a Social Media ROI model

19 02 2009

Malcolm Gladwell, in his hilarious TED talk on spaghetti sauce, tells the story of Howard Moskowitz’s epiphany while looking for the perfect concentration of aspartame to use in the Diet Pepsi formulation:

Howard does the experiment, and he gets the data back, and he plots it on a curve, and all of a sudden he realizes it’s not a nice bell curve. In fact, the data doesn’t make any sense. It’s a mess. It’s all over the place. (…) Why could we not make sense of this experiment with Diet Pepsi? And one day, he was sitting in a diner in White Plains (…). And suddenly, like a bolt of lightning, the answer came to him. And that is, that when they analyzed the Diet Pepsi data, they were asking the wrong question. They were looking for the perfect Pepsi, and they should have been looking for the perfect Pepsis.

Tangent note: Most TED talks are a treat, but this one is particularly funny and thought-provoking. If you haven’t seen it yet, consider paying it a visit. If you have an iPhone or iPod Touch, you may like the TED app too!

Over the last few years, many in the social media space have been on a quest to find the perfect ROI model for blogs, micro-blogs, wikis, social networking, social bookmarking and the other animals in the ever-growing Web 2.0 zoo. You’ll see opinions ranging from “we don’t need ROI for Social Media” to “Web 2.0 has to rely on a lagging ROI” to “ROI 2.0 comes from time savings”. In a way, they are all right and all wrong at the same time. Paraphrasing Doctor Moskowitz, there is no perfect Social Media ROI model; there are only perfect Social Media ROI models.

Since 2006, I’ve been talking to senior executives across multiple industries and geographies about the business value of Web 2.0, and have noticed a wide range of approaches to deciding whether (and how much) to invest in social computing. For companies at the forefront of the social media battleground, such as newspapers, book publishers and TV channels, investing heavily in new web technologies has often been a question of survival, and decision makers have had significant leeway to try new ways of delivering their products and services, with the full blessing of their stakeholders. On the other side of the spectrum, in sectors such as financial services, social media is not yet unanimously regarded as the way to go. I’ve heard from a number of banking and insurance clients that, if social media advocates don’t clearly articulate the returns they expect to achieve, they won’t get the funds to realize their vision.

Most players in government were also very skeptical until the Obama effect took the world by storm, creating a sense of urgency that was not as prevalent before. Since then, government agencies around the globe seem to be a bit more forgiving of high-level business cases for social computing initiatives inside and outside the firewall. To balance things out, however, in most other industries investments in innovation are being subjected to even more scrutiny than usual due to the tough current economic environment. So having a few ROI models in your pocket does not hurt.

The following ROI models are emerging, and we can expect a few more to appear in the near future.

1. Lagging ROI

Last year, I spoke to the CIO of a global retail chain who had an interesting approach to strategic investments in emerging technologies. Instead of trying to develop a standard business case based on pie-in-the-sky ROI calculations, he managed to convince the board of directors to give him the flexibility to invest in a few projects his team deemed essential to the long-term survival of the company. For those, he would provide after-the-fact ROI metrics, so that decision makers could assess whether to keep investing or pull the plug. He also managed expectations by saying upfront that some of those projects would fail, but that doing nothing was not an option. By setting aside an innovation bucket and establishing a portfolio of parallel innovation initiatives, you can hedge your bets and improve your overall success rate.

2. Efficiency gains or cost avoidance

Many of the early social media ROI models are based on how much time you save by relying on social media, converted to monetary terms based on the cost of labour. While this is certainly a valid approach, it needs to be supplemented by other sources of business value. Unless you are capable of mapping the saved minutes to other measurable outcomes derived from having more time available, the most obvious way to realize the value of being more efficient is to reduce head count, as in theory the group can do the same work as before with fewer people. If that is the core of your business case, it may backfire in the long term, as some people may feel that the more they use social computing, the more likely it is that their department will be downsized.
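As a back-of-envelope illustration, the typical time-savings calculation looks something like this (all the numbers below are hypothetical, and `time_savings_value` is my own sketch, not a standard formula):

```python
def time_savings_value(minutes_saved_per_week: float,
                       employees: int,
                       hourly_cost: float,
                       weeks_per_year: int = 48) -> float:
    """Convert time saved through social tools into an annual dollar figure."""
    hours_per_year = minutes_saved_per_week / 60 * weeks_per_year
    return hours_per_year * employees * hourly_cost

# Hypothetical scenario: 30 minutes saved per week, 200 employees,
# $50/hour fully loaded labour cost -> 24 hours x 200 x $50 per year
annual_value = time_savings_value(30, 200, 50.0)
```

The calculation itself is trivial; the hard part, as noted above, is mapping those saved hours to outcomes other than head-count reduction.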

3. Proxy Metrics

Some of the ROI examples in the Groundswell book and blog rely on proxy marketing metrics, i.e., what would be the corresponding cost of a conventional marketing campaign to achieve the same level of reach or awareness. For example, when calculating the ROI of an executive blog, the authors measure value by calculating the cost of advertising, PR, SEO and word-of-mouth equivalents.

4. Product/Service/Process Innovation

The value of customer or employee insights that end up generating brand-new products, services and processes, or improvements to existing ones, needs to be taken into account. Measuring the number of new features is relatively straightforward. Over time, you may want to figure out the equivalent R&D cost of getting the same results.

5. Improved Conversions

Back to the Groundswell book, one of the ROI examples there shows how ratings and reviews can improve conversion rates (i.e., from all people visiting your site, how many more buy products because they trust the input from other consumers, compared to typical conversion rates).

6. Digitalization of knowledge

By having employees blog, contribute to wikis, comment on or rate content, and create videos and podcasts, companies are essentially enabling the digitalization of knowledge. Things that used to exist only in people’s heads are now being converted to text, audio and images that are searchable and discoverable. It’s the realization of the asset Clay Shirky calls the cognitive surplus, an elusive resource that had little monetary value before the surge in user-generated content. Naturally, a fair portion of that digitalized knowledge has very little business value, so you need metrics to determine how much of that truckload of content is actually useful. You can infer that from cross-links, comments, ratings or even the number of visits.
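One way to operationalize that inference is to combine those signals into a single usefulness score. This is a hypothetical sketch with made-up weights that each company would need to calibrate against its own data:

```python
def usefulness_score(cross_links: int, comments: int,
                     avg_rating: float, visits: int) -> float:
    """Proxy for content usefulness: weight deliberate signals
    (links, comments, ratings) more heavily than raw traffic."""
    return (3.0 * cross_links + 2.0 * comments
            + 5.0 * avg_rating + 0.01 * visits)
```

A piece of content cited by two other pages, with four comments, a 4.5-star average rating and a thousand visits would score 46.5 under these weights; the absolute number means little, but the ranking it produces helps separate the useful slice from the truckload.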

7. Social capital and empowerment of the workforce

There is certainly business value in having a workforce composed of well-connected, well-informed and motivated employees. What metrics can be used to assess the degree of connectivity, knowledge and motivation of your human resources? Several social computing tools offer indirect metrics that give you a glimpse of what you can exploit. Atlas for IBM Lotus Connections, for example, lets you see how your social network evolves quarter over quarter, and can help determine how many people are associated with a given hot skill (full disclosure: I work for IBM).

As you can see in several of the emerging models listed above, there are typically three types of inputs to ROI calculations:

  • Quantitative metrics that can be obtained directly from the system data and log files
  • Qualitative metrics that are determined using surveys, questionnaires and polls
  • Dollar multipliers that attribute arbitrary monetary value to hard to assess items such as a blog comment or an extra contact in your social network

For the monetary value, I would suggest adopting a sensitivity analysis approach: work with conservative, average and aggressive scenarios, and adjust them over time. Just don’t go overboard. As I stated in a previous post, there’s an ROI for calculating ROI. ROI models should be easy to understand, as decision makers will often frown upon obscure calculations that require a PhD in financial modeling.
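Here is a minimal sketch of that sensitivity analysis. The multipliers and activity counts below are invented for illustration; real ones would come from your own surveys and system data:

```python
# Hypothetical dollar multipliers per contribution type, three scenarios
MULTIPLIERS = {
    "conservative": {"blog_comment": 0.50, "new_contact": 1.00},
    "average":      {"blog_comment": 2.00, "new_contact": 5.00},
    "aggressive":   {"blog_comment": 5.00, "new_contact": 15.00},
}

def roi_scenarios(activity: dict, investment: float) -> dict:
    """Estimated ROI (%) under each scenario: (value - cost) / cost."""
    results = {}
    for scenario, weights in MULTIPLIERS.items():
        value = sum(weights[k] * count for k, count in activity.items())
        results[scenario] = 100.0 * (value - investment) / investment
    return results

# e.g. 10,000 blog comments and 2,000 new contacts vs. a $20,000 investment
estimates = roi_scenarios({"blog_comment": 10000, "new_contact": 2000}, 20000.0)
```

Presenting the three numbers side by side keeps the conversation honest: if even the aggressive scenario barely breaks even, the business case needs more work.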

In summary: we don’t need one Social Media ROI model, we need many of them. None of the models emerging now is perfect, and none ever will be. You may need to keep a few in your toolkit and develop a sense of which one to use in each case.

Previous ROI entries:

ROI 2.0, Part 1: Bean counters vs Innovators – The need for a real exchange of ideas
ROI 2.0, Part 2: Storytelling and Business Cases





Government 2.0: The smarter planet initiative and Obama’s inauguration speech

21 01 2009

Yesterday at 12:00 noon EST, several parts of the world came to a standstill to watch Obama’s inauguration ceremony. It felt pretty much like a FIFA World Cup game in Brazil. I found it interesting that, at 12:01, the White House site published a blog post entitled “Change has come to WhiteHouse.gov”, written by Macon Phillips, who has the revealing title of “Director of New Media for the White House”. Macon wrote:

Participation — President Obama started his career as a community organizer on the South Side of Chicago, where he saw firsthand what people can do when they come together for a common cause. Citizen participation will be a priority for the Administration, and the internet will play an important role in that. One significant addition to WhiteHouse.gov reflects a campaign promise from the President: we will publish all non-emergency legislation to the website for five days, and allow the public to review and comment before the President signs it.

We’d also like to hear from you — what sort of things would you find valuable from WhiteHouse.gov? If you have an idea, use this form to let us know. Like the transition website and the campaign’s before that, this online community will continue to be a work in progress as we develop new features and content for you. So thanks in advance for your patience and for your feedback.

That’s promising, but still a 1.0 approach: online forms are very 1994. I’m looking forward to seeing what they mean by “new features”. I would expect a conversation that’s more transparent than e-mail and forms, something like the very cool service provided by debategraph. If you’ve never heard of it, I highly recommend a visit now.

In 2008 we saw a major surge of interest in Government 2.0 in Canada. I spent a good part of the year working in Ottawa, and also speaking at events directed at all levels of government. However, just by visiting the public websites of federal and provincial government agencies, you won’t see much change yet. I would really like to see that move from interest and words to action, and I hope 2009 is the year it happens in Canada and around the globe. The White House site will certainly be a major influencer, one way or the other.

Another fact that came to my attention: this is the first time “digital” is mentioned in an inaugural speech. This is not surprising, as the term was not widely used 16 years ago, but it was not accidental either.

This is an excerpt from Obama’s speech:

For everywhere we look, there is work to be done. The state of the economy calls for action, bold and swift, and we will act — not only to create new jobs, but to lay a new foundation for growth. We will build the roads and bridges, the electric grids and digital lines that feed our commerce and bind us together. We will restore science to its rightful place, and wield technology’s wonders to raise health care’s quality and lower its cost. We will harness the sun and the winds and the soil to fuel our cars and run our factories. And we will transform our schools and colleges and universities to meet the demands of a new age. All this we can do. And all this we will do.

The words above seem to align nicely with this piece IBM published yesterday in The Washington Post, The Wall Street Journal and The New York Times:

In the past, we had to make trade-offs between the imperatives of energy, transportation, infrastructure, security, commerce, the environment and more. But in an ever-more interconnected world, these vast, complex systems are no longer separate from one another. They are now interwoven and interdependent. Which is good news—because the solutions we develop for one system will ripple across many others.

Those solutions are possible because we now have the tools to literally change the way the world works. Computational power is being put into things we wouldn’t recognize as computers: phones, cameras, cars, appliances, roadways, power lines, clothes. We are interconnecting all of this through the Internet, which has come of age. And we are applying sophisticated analytics to make sense of the world’s digital knowledge and pulse.

As we look at investments to stimulate our economies, we have a lot more options and can get a lot more bang for our buck. We can ask ourselves: Do we want an airport, or a smart airport? A highway, or a smart highway? A hospital, or a smart hospital? We can think about new industries and societal benefits spawned by a smart power grid, a smart water system, a smart city. About how innovation across all these systems will multiply the number of new jobs and spread new skills.

As I said before, while I find the two excerpts above inspiring and encouraging, nothing has been done yet, so it’s still not time for celebration. But we certainly need vision and charisma to avoid getting lost during execution, so the first step was a good one.

Update: I forgot to mention that the White House blog does not seem to allow comments either (please let me know if I missed how to do it, other than sending emails). That’s also very Web 1.0. I hope they open it up a bit by allowing at least moderated comments there. It’s not a two-way conversation when only one side has the mike.