
Will customers adopt your innovation? Hope, fear and jobs-to-be-done

When will a customer decide your innovative product or service is worth adopting? It’s a question that marketers, strategists and others spend plenty of time thinking about. The factors are myriad and diverse. In this post, let’s examine two primary elements that influence both whether an innovation will be adopted and when that will happen:

  1. Decision weights assigned to probabilities
  2. Probability of job-to-be-done improvement

A quick primer on both factors follows. These factors are then mapped to the innovation adoption curve. Finally, they are used to analyze the adoption of smartwatches and DVRs.

Decision weights assigned to probabilities

Let’s start with decision weights, as that’s probably new for many of us. In his excellent book, Thinking, Fast and Slow, Nobel laureate Daniel Kahneman describes research he and a colleague did that examined the way people think about probabilities. Specifically, given different probabilities for a gain, how do people weight those probabilities?

Why?

Classic economics holds that if an outcome has a 25% probability, then 25% is the weight a rational person should assign to that outcome. If you’ve taken economics or statistics, you may recall being taught something along these lines. However, Kahneman and his colleague had anecdotally seen evidence that people didn’t act that way. So they conducted field experiments to determine how people actually incorporate probabilities into their decision making. The table below summarizes their findings:

Decision weights vs probability

The left side of the table shows that people assign greater weight to low probabilities than they should. Kahneman calls this the possibility effect. The mere fact that something could potentially happen has a disproportionate weight in decision-making. Maybe we should call this the “hope multiplier”. It’s strongest at the low end, with the effect eroding as probabilities increase. When the probability of a given outcome increases to 50% and beyond, we see the emergence of the uncertainty effect. In this case, the fact that something might not happen starts to loom larger in our psyche. This is because we are loss averse. We prefer avoiding losses to acquiring gains.

Because of loss aversion, an outcome that has an 80% probability isn’t weighted that way by people. We look at that 20% possibility that something will not happen (essentially a “loss”), and fear of that looms large. We thus discount the 80% probability to a too-low decision weight of 60.1.
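The gap between stated probabilities and felt weights can be sketched in a few lines of code. The data points below are the decision weights Kahneman reports in Thinking, Fast and Slow; the function name and the linear interpolation between points are my own illustration, not part of the research:

```python
# Decision weights (in percent) observed for selected probabilities of a gain,
# as reported in Thinking, Fast and Slow. Note the possibility effect at the
# low end (1% -> 5.5) and the discounting near certainty (80% -> 60.1).
DECISION_WEIGHTS = {
    0: 0.0, 1: 5.5, 2: 8.1, 5: 13.2, 10: 18.6, 20: 26.1,
    50: 42.1, 80: 60.1, 90: 71.2, 95: 79.3, 98: 87.1, 99: 91.2, 100: 100.0,
}

def decision_weight(probability: float) -> float:
    """Approximate the weight people assign to a probability (0-100),
    linearly interpolating between the observed data points."""
    points = sorted(DECISION_WEIGHTS.items())
    for (p0, w0), (p1, w1) in zip(points, points[1:]):
        if p0 <= probability <= p1:
            frac = (probability - p0) / (p1 - p0)
            return w0 + frac * (w1 - w0)
    raise ValueError("probability must be between 0 and 100")

print(round(decision_weight(80), 1))  # 60.1: an 80% chance "feels" like 60%
print(round(decision_weight(1), 1))   # 5.5: a 1% chance feels far larger than it is
```

Interpolating between the published points is a convenience for illustration; the original research only reports the discrete values in the table.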

Probability of job-to-be-done improvement

A job-to-be-done is something we want to accomplish. It consists of individual tasks and our expectation for each of those tasks. You rate the fulfillment of the expectations to determine how satisfied you are with a given job-to-be-done. This assessment is a cornerstone of the “job-to-be-done improvement” function:

Job-to-be-done improvement function

Dissatisfaction: How far away from customers’ expectations is the incumbent way that they fulfill a job-to-be-done? The further away, the greater the dissatisfaction. This analysis is really dependent on the relative importance of the individual job tasks. More important tasks have greater influence on the overall level of satisfaction.

Solution improvement: How does the proposed innovation (product, service) address the entirety of the existing job? It will be replacing at least some, if not all, of the incumbent solution. What are the better ways it fulfills the different job tasks?

Cost: How much does the innovation cost? There’s the out-of-pocket expense. But there are other costs as well. Learning costs. Things you cannot do with the new solution that you currently can. The costs will be balanced against the increased satisfaction the new solution delivers.

These three elements are the basis of determining the fit with a given job-to-be-done. Because of their complexities, determining precise measures for each is challenging. But it is reasonable to assert a probability. In this case, the probability that the proposed solution will provide a superior experience to the incumbent solution.
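To make the three elements concrete, here is a toy scoring sketch. The function, its parameters and the clamped scoring formula are my own illustration of how dissatisfaction, solution improvement and cost might combine into a single probability, not a formula from the jobs-to-be-done literature:

```python
# Toy sketch: combine task-level satisfaction gains and cost into a rough
# probability that an innovation beats the incumbent solution.
def improvement_probability(tasks, cost_penalty):
    """tasks: list of (importance, incumbent_satisfaction, new_satisfaction),
    each value in [0, 1]. cost_penalty in [0, 1] captures price, learning
    curve and lost capabilities. Returns a rough probability in [0, 1]."""
    total_importance = sum(imp for imp, _, _ in tasks)
    # Importance-weighted gain in satisfaction across the job's tasks
    gain = sum(imp * (new - old) for imp, old, new in tasks) / total_importance
    # Net the cost against the gain and clamp to [0, 1]
    return max(0.0, min(1.0, 0.5 + gain - cost_penalty))

# Hypothetical smartwatch-style case: one task improves a lot, others barely
tasks = [(0.5, 0.6, 0.9),   # glanceable notifications: big improvement
         (0.3, 0.8, 0.8),   # messaging: unchanged
         (0.2, 0.7, 0.5)]   # navigation: worse on a tiny screen
print(round(improvement_probability(tasks, cost_penalty=0.25), 2))  # 0.36
```

The exact weighting is arbitrary; the point is that more important tasks move the score more, and costs pull it back down.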

Mapping decision weights across the innovation adoption curve

The decision weights described earlier are averages across a population; there is variance around them. The decision weights for each probability of gain in job-to-be-done satisfaction will differ by adoption segment, as shown below:

Decision weights across innovation adoption curve

The green and red bars along the bottom of each segment indicate the different weights assigned to the same probabilities for each segment. For Innovators and Early Adopters, any possibility of an improvement in job-to-be-done satisfaction is overweighted significantly. At the right end, Laggards are hard-pressed to assign sufficient decision weights to anything but an absolutely certain probability of increased satisfaction.
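One simple way to model this segment-by-segment difference is with a single risk-attitude exponent per segment. This parameterization is entirely my own illustration (the post’s chart is not built this way): an exponent below 1 overweights small chances, the “hope multiplier”, while an exponent above 1 means only near-certain gains carry much weight.

```python
# Hypothetical per-segment weighting of a probability p in [0, 1],
# modeled as w(p) = p ** alpha. Alpha values are illustrative guesses.
SEGMENT_ALPHA = {
    "Innovators": 0.4,
    "Early Adopters": 0.6,
    "Early Majority": 1.0,   # roughly rational weighting
    "Late Majority": 1.8,
    "Laggards": 3.0,
}

def segment_weight(segment: str, p: float) -> float:
    return p ** SEGMENT_ALPHA[segment]

# How a 20% chance of improvement is weighted across segments:
for segment in SEGMENT_ALPHA:
    print(f"{segment:15s} weighs 20% as {segment_weight(segment, 0.2):.0%}")
```

Under this sketch, Innovators treat a 20% chance as better than even money relative to Laggards, who weight it as nearly nothing, which matches the shape of the bars in the chart.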

Studies have shown that our preferences for risk-aversion and risk-seeking are at least somewhat genetically driven. My own experience also says that there can be variance in when you’re risk averse or not. It depends on the arena and your own experience in it. I believe each of us has a baseline of risk tolerance, and we vary from that baseline depending on circumstances.

Two cases in point: smartwatches and DVRs

The two factors – decision weights and probability of improved job-to-be-done satisfaction – work in tandem to determine how far the reach of a new innovation will go. Generally,

  • If the probability of job-to-be-done improvement is low, you’re playing primarily to the eternal optimists, Innovators and Early Adopters.
  • If the probability of improvement is high, reach will be farther, but steps are needed to make later segments aware of the benefits, and even to alter their decision weights.

Let’s look at two innovations in the context of these factors.

Smartwatches

Smartwatches have a cool factor. If you think of a long-term trend of progressively smaller computing devices – mainframes, minicomputers, desktops, laptops, mobile devices – then the emergence of smartwatches is the logical next wave. Finally, it’s Dick Tracy time.

The challenge for the current generation of smartwatches is distinguishing themselves from the incumbent solution for people: smartphones. Not regular wristwatches, but smartphones. How much do smartwatches improve the jobs-to-be-done currently fulfilled by smartphones?

Some key jobs-to-be-done by smartphones today:

  • Email
  • Texting
  • Calls
  • Social apps (Facebook, Twitter, etc.)
  • Navigation
  • Games
  • Many, many more

When you consider current smartphone functionality, what job tasks are under-satisfied? In a Twitter discussion about smartwatches, the most compelling proposition was that the watch makes it easier to see updates as soon as they happen. Eliminate the pain of taking your phone out of your pocket or purse. Better satisfaction of the task of knowing when, who and what for emails, texts, social updates, etc.

But improvement in this task comes at a cost. David Breger wrote that he had to stop wearing his smartwatch. Why? The updates pulled his eyes to his watch. Constantly. To the point where his conversational companions noticed, affecting their interactions. What had been an improvement came with its own cost. There are, of course, those people who bury their faces in their phones wherever they are. The smartwatch is a win for them.

If I were to ballpark the probability that a smartwatch will deliver improvement in its targeted jobs-to-be-done, I’d say it’s 20%. Still, that’s good enough for the Innovators segment. I imagine their decision weights look something like this:

Decision weights - Innovators

The mere possibility of improvement drives these early tryers-of-new-things. It explains who was behind Pebble’s successful Kickstarter campaign. But the low probability of improving the targeted jobs-to-be-done dooms the smartwatch, as currently conceived, to the left side of the adoption curve.

DVRs

Digital video recorders make television viewing easier. Much easier. Back when TiVo was the primary game in town, early adopters passionately described how incredible the DVR was. It was life-changing. I recall hearing the praise back then, and I admit I rolled my eyes at these loons.

Not so these days.

DVRs have become more commonplace. With good reason. They offer a number of features which improve various aspects of the television viewing job-to-be-done:

  • Pause a live program
  • Rewind to watch something again (your own instant replay for sports)
  • Set it and forget it scheduling
  • Easy playback of recorded shows
  • Easy recording without needing to handle separate media (VCR tape, DVD)

But there are costs. If you’ve got a big investment in VCR tapes or DVDs, you want to play those. It does cost money to purchase a DVR plan. The storage of the DVR has a ceiling. You have to learn how to set up and work with a DVR. It becomes part of the room decor. What happens if the storage drive crashes?

My estimate is that the DVR has an 80% probability of being better than incumbent solutions. Indeed, this has been recognized in the market. A recent survey estimates U.S. household adoption of DVRs at 44%. Basically, knocking on the door of the Late Majority. I imagine their decision weights look like this:

Decision weights - Late Majority

On the probability side of the ledger, the Late Majority will need to experience DVRs themselves to understand their potential. This happens by seeing the innovation in use among their Early Majority friends. They become aware of how much an innovation can improve their satisfaction.

On the decision-weight side, vendors must do the work of addressing the uncertainty that comes with the innovation. This means understanding the forces – allegiance to the incumbent solution, anxiety about the proposed solution – that must be overcome.

Two influential factors

As you consider your product or service innovation, pay attention to these two factors. The first – jobs-to-be-done – is central to getting adoption of anything. Without the proper spadework there, you will be flying blind into the market. The second factor is our human psyche, and how we harbor hope (possibility) and fear (uncertainty). Because people are geared differently, you’ll need to construct strategies (communication channels, messaging, product enhancements) that pull people toward your idea, overcoming their natural risk aversion.

I’m @bhc3 on Twitter, and I’m a Senior Consultant with HYPE Innovation.


Why crowdsourcing works

Crowdsourcing is a method of solving problems through the distributed contributions of multiple people. It’s used to address tough problems that arise every day. Ideas for new opportunities. Ways to solve problems. Uncovering an existing approach that addresses your need.

Time and again, crowdsourcing has been used successfully to solve challenges. But…why does it work? What’s the magic? What gives it an advantage over talking with your pals at work, or doing some brainstorming on your own? In a word: diversity. Cognitive diversity. Specifically these two principles:

  • Diverse inputs drive superior solutions
  • Cognitive diversity requires spanning gaps in social networks

These two principles work in tandem to deliver results.

Diverse inputs drive superior solutions

When trying to solve a challenge, what is the probability that any one person will have the best solution for it? It’s a simple mathematical reality: the odds of any single person providing the top answer are low.

How do we get around this? Partly by more participants; increased shots on goal. But even more important is diversity of thinking. People contributing based on their diverse cognitive toolkits:

Cognitive toolkit

As described by University of Michigan Professor Scott Page in The Difference, our cognitive toolkits consist of different knowledge, perspectives and heuristics (problem-solving methods). Tapping into people’s cognitive toolkits brings fresh perspectives and novel approaches to solving a challenge. Indeed, a research study found that the probability of solving tough scientific challenges is three times higher if a person’s field of expertise is seven degrees outside the domain of the problem.

In another study, researchers analyzed the results of an online protein-folding game, Foldit. Proteins fold themselves, but no one fully understands how they do so, not even experts in the field of biochemistry. So the online game allows users to simulate the folding, with an eye towards better understanding the ways proteins fold themselves. As reported by Andrew McAfee, the top players of Foldit were better than both computers and experts in the field at understanding the folding sequence. The surprising finding? None had taken chemistry beyond a high school course. It turns out spatial skills are more important to solving the problem than deep domain knowledge of proteins.

Those two examples provide real-world proof for the models and solution-seeking benefits of cognitive diversity described by Professor Page.

Problem solving can be thought of as building a solutions landscape, planted with different ideas. Each person achieves their local optimum, submitting the best idea they can for a given challenge based on their cognitive assets.

But here’s the rub: any one person’s idea is unlikely to be the best one that could be uncovered. This makes sense as both a probabilistic outcome, and based on our own experiences. However in aggregate, some ideas will stand out clearly from the rest. Cognitive diversity is the fertile ground where these best ideas will sprout.
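This probabilistic argument can be made tangible with a small Monte Carlo sketch. The model below is my own illustration (not from Professor Page’s work): each contributor’s best idea is a draw from a distribution, and cognitive diversity is represented as a wider spread of that distribution.

```python
import random

# Monte Carlo sketch: average quality of the best idea surfaced by a group.
def best_idea(n_people: int, diversity: float, trials: int = 10_000,
              seed: int = 42) -> float:
    """Each person's idea quality ~ Gaussian(0.5, diversity), clamped to
    [0, 1]. Returns the top idea's quality, averaged over many trials."""
    rng = random.Random(seed)
    total = 0.0
    for _ in range(trials):
        ideas = (min(1.0, max(0.0, rng.gauss(0.5, diversity)))
                 for _ in range(n_people))
        total += max(ideas)
    return total / trials

# A lone contributor versus a larger, cognitively diverse crowd:
print(round(best_idea(1, diversity=0.10), 2))    # solo: hovers near the mean
print(round(best_idea(30, diversity=0.25), 2))   # crowd's top idea is far better
```

Both levers matter in this toy model: more participants raise the expected maximum, and greater diversity (a wider distribution) raises it further still, which is the "fertile ground" intuition in numerical form.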

In addition to being a source of novel ideas, cognitive diversity is incredibly valuable as feedback on others’ ideas. Ideas are improved as people contribute their distinct points of view. The initial idea is the seedling, and feedback provides the nutrients that allow it to grow.

Cognitive diversity requires spanning gaps in social networks

Cognitive diversity clearly has a significant positive effect on problem-solving. Generally when something has proven value to outcomes, companies adopt it as a key operating principle. Yet getting this diversity has not proven to be as easy and common as one might expect.

Why?

Because it’s dependent on human behavior. Left to our own devices, we tend to turn to our close connections for advice and feedback. These strong ties are the core of our day-in, day-out interactions.

But this natural human tendency to turn to our strong ties is why companies are challenged to leverage their cognitive diversity. University of Chicago Professor Ron Burt describes the issue as one of structural holes between nodes in a corporate social network in his paper, Structural Holes and Good Ideas (pdf). A structural hole is a gap between different groups in the organization. Information does not flow across structural holes.

In and of themselves, structural holes are not the problem. Rather, the issue is that when people operate primarily within their own node, their information sources are redundant. Over time, the people in the node know the same facts, develop the same assumptions and optimize to work together in harmony. Sort of like a silo of social ties.

The impact of this is a severe curtailment of fresh thinking, which degrades the quality of ideas. Professor Burt found empirical evidence for this in a study of Raytheon’s Supply Chain Group. 673 employees were characterized by their social network connections and plotted on a spectrum from insular to diverse. These employees then each provided one idea to improve supply chain management at Raytheon. Their ideas were then assessed by two senior executives.

The results? Employees with more diverse social connections provided higher quality ideas. To the right is a graph of the rated ideas, with a curve of average idea rating versus the submitter’s level of network diversity. The curve shows that the more diverse a person’s connections, the higher the value of their ideas.

Employees with access to diverse sources of information provided better ideas. Their access to nonredundant information allowed them to generate more novel, higher-potential ideas. Inside organizations, there are employees who excel at making diverse connections across the organization. These people are the ones who will provide better ideas. They are brokers across the structural holes in social networks.

Professor Burt provides the key insight about these brokers:

People connected to groups beyond their own can expect to find themselves delivering valuable ideas, seeming to be gifted with creativity. This is not creativity born of genius; it is creativity as an import-export business. An idea mundane in one group can be a valuable insight in another.

An “import-export business”. Consider that for a moment. It’s a metaphor that well describes the key value of the brokers. They are exchange mechanisms for cognitive diversity. They are incredibly valuable to moving things forward inside organizations. But are organizations overly dependent on these super-connectors? Yes. Companies are leaving millions on the table by not enabling a more scalable, comprehensive and efficient means for exchanges of cognitive diversity.

What if we could systematize what the most connected employees do?

Systematize the diverse connections

Crowdsourcing doesn’t eliminate the need for the super-connectors. They play a number of valuable roles inside organizations. But by crowdsourcing to solve problems, companies gain the following:

  • Deeper reach into the cognitive assets of all employees
  • Avoidance of the strong-ties trap of problem-solving
  • Faster surfacing of the best insights
  • Neutralization of the biases that the super-connectors naturally have

As you consider ways to improve your decision-making and to foster greater cross-organizational collaboration, make crowdsourcing a key element of your strategic approach.

I’m @bhc3 on Twitter, and I’m a Senior Consultant with HYPE Innovation.

Bell Labs Created Our Digital World. What They Teach Us about Innovation.

What do these following crucial, society-altering innovations have in common?

  • Transistors
  • Silicon-based semiconductors
  • Mobile communication
  • Lasers
  • Solar cells
  • UNIX operating system
  • Information theory (link)

They all have origins in the amazing Idea Factory, AT&T’s Bell Labs. I’ve had a chance to learn about Bell Labs via Jon Gertner’s new book, The Idea Factory: Bell Labs and the Great Age of American Innovation. (Disclosure: I was given a free copy of the book for review by TLC Book Tours.)

I don’t know about you, but really, I had no sense of the impact Bell Labs had on our current society. Gertner writes a compelling narrative intermingling the distinctive personalities of the innovators with layman points of view about the concepts they developed. In doing so, he brings alive an incredible institution that was accessible only as old black-and-white photos of men wearing ties around lab equipment.

For the history alone, read this book. You will gain knowledge about how the products that define life today came into being back in the 1940s, ’50s and ’60s. I say that as someone who really wasn’t “in” to learning about these things. Gertner, a writer for Wired and the New York Times, invites you into the world of these fascinating, brilliant people and the challenges they overcame in developing some damn amazing technological achievements.

Those stories really carry the book. But just as interesting for innovation geeks are the lessons imparted from their hands-on work. There are several principles that created the conditions for innovation. Sure, the steady cash flow from the phone service monopoly AT&T held for several decades was a vital element. But that alone was not sufficient to drive innovation. How many companies with a strong, stable cash flow have frittered away that advantage?

Looking beyond that obvious advantage, several elements determined the Labs’ success. They are described in detail below.

#1: Inhabit a problem-rich environment

In an interview with a Bell Labs engineer, Gertner got this wonderful observation. Bell Labs inhabited “a problem-rich environment”.

“A problem-rich environment.” Yes.

Bell Labs’ problems were the build-out of the nation’s communications infrastructure. How do you maintain signal fidelity over long distances? How will people communicate the number they want? How can vacuum tube reliability be improved for signal transmission? How to maximize spectrum for mobile communications?

I really like this observation, because it sounds obvious, but really isn’t. Apply efforts to solving problems related to the market you serve. It’s something a company like 3M has successfully done for decades.

Where you see companies get this wrong is they stray from the philosophy of solving customer needs, becoming internally focused in their “problems”. For instance, what problem did New Coke solve for customers? And really, what problems is Google+ solving for people that aren’t handled by Facebook and Twitter?

A problem of, “our company needs to increase revenues, market share, profits, etc.” isn’t one that customers give a damn about. Your problem-rich environment should focus on the jobs-to-be-done of customers.

A corollary to inhabiting a problem-rich environment: focus innovation on solving identified problems. This vignette about John Pierce, a leader in Bell Labs, resonates with me:

Pierce was given free rein to pursue any ideas he might have. He considered the experience equivalent to being cast adrift without a compass. “Too much freedom is horrible.”

#2: Cognitive diversity gets breakthroughs

Bell Labs’ first president, Frank Jewett, saw the value of the labs in this way:

Modern industrial research “is likewise an instrument which can bring to bear an aggregate of creative force on any particular problem which is infinitely greater than any force which can be conceived of as residing in the intellectual capacity of an individual.”

The labs were deliberately stocked with scientists from different disciplines. The intention was to bring together people with different perspectives and knowledge to innovate on the problems they wanted solved.

For example, in developing the solid state transistor, Labs researchers were stumped to break through something called the “surface states barrier”. Physicist Walter Brattain worked with electrochemist Robert Gibney to discover a way to do so. Two separate fields working together to solve a critical issue in the development of semiconductors.

The value of cognitive diversity was systematically modeled by professor Scott Page. Bell Labs shows its value in practice.

#3: Expertise and HiPPOs can derail innovation

Ever seen some of these famously wrong predictions?

Ken Olson, President & Founder, Digital Equipment Corp. (1977): “There is no reason anyone would want a computer in their home.”

Albert Einstein (1932): “There is not the slightest indication that nuclear energy will ever be obtainable. It would mean that the atom would have to be shattered at will.”

Western Union internal memo (1876): “This ‘telephone’ has too many shortcomings to be seriously considered as a means of communication.”

Now, before we get too smug here…haven’t you personally been off on predictions before? I know I have. The point here is not to assume fundamental deficiencies of character and intellect. Rather, to point out that they will occur.

What makes wrong predictions more harmful is the position of the person who makes them. Experts are granted greater license to determine the feasibility and value of an idea. HiPPOs (highest-paid person’s opinion) are granted similarly vaunted positions. In both cases, when they get it wrong, their positions can undermine innovation.

Bell Labs was not immune. Two examples demonstrate this. One did not derail innovation, one did.

Mobile phones

In the late 1950s, Bell Labs engineers considered the idea that mobile phones would one day be small and portable to be utopian. Most considered mobile phones as necessarily bulky and limited to cars, due to the power required to transmit signals from the phone to a nearby antenna.

In this case, the engineers’ expertise on wireless communications was proved wrong. And AT&T became an active participant in the mobile market.

Semiconductors

In the late 1950s, Bell Labs faced a fork in the road for developing transistors. The Labs had pioneered the development of the transistor. Over time, the need for ever smaller transistors was seen as a critical element to their commercialization. Bell Labs vice president of device development, Jack Morton, had a specific view on how transistor miniaturization should happen. He believed a reduction in components was the one right way. Even as development of his preferred methodology was proving technically difficult, he was unwilling to hear alternative ideas for addressing the need.

Meanwhile, engineers at Texas Instruments and Fairchild Semiconductor, simultaneously and independently, developed a different methodology for miniaturization, one that involved constructing all components within one piece of silicon. Their approach was superior, and both companies went on to success in semiconductors.

Bell Labs, pioneers in transistors, lost its technological lead and did not become a major player in the semiconductor industry.

With mobile phones, the experts could not see how sufficient power could be generated. Fortunately, their view did not derail AT&T’s progress in the mobile market. In the case of semiconductors, Bell Labs engineers were aware of the integrated circuit concept, before Texas Instruments and Fairchild introduced it. But the HiPPO, Jack Morton, held the view that such an approach could never be reliable. HiPPO killed innovation.

#4: Experiment and learn when it comes to new ideas

When you think you’ve got a big, disruptive idea, what’s the best way to handle it? Go big or go home? Sure, if you’re the type to put the whole bundle on ‘19’ at the roulette table.

Otherwise, take a cue from how Bell Labs handled the development of the first communications satellite. Sputnik had been launched a few years earlier, and the satellite race was on. The basics of what a satellite had to do? Take a signal from Location A and relay it to Location B. Turns out, there were a couple of models for how to do this: ‘passive’ and ‘active’ satellites.

Passive satellites could do one thing: intercept a signal from Location A and reflect it down to Location B. In so doing, they scattered the signal into millions of little bits, requiring high-powered receptors on the ground. Active satellites were much more capable. They could take a signal, amplify it and direct it to wherever it had to go. This focused approach required much lower-powered receiving apparatus on the ground, a clear advantage.

But Bell Labs was just learning the dynamics of satellite technology. While active satellites were the obvious future for top business and military value, they were much more complicated to develop. Rather than try to do all of that at the outset, John Pierce directed his team to start with the passive satellite. To start with an experiment. He explained his thinking:

“There’s a difference, you see, in thinking idly about something, and in setting out to do something. You begin to see what the problems are when you set out to do things, and that’s why we thought [passive] would be a good idea.”

#5: Innovation can sow the seeds of one’s own destruction

Two observations by the author, Jon Gertner, show that even the good fortune of innovation can open a company up to problems. First:

“In any company’s greatest achievements one might, with clarity of hindsight, locate the beginnings of its own demise.”

One sees this in the demise of formerly great companies that “make it”, then fail to move beyond what got them there (something noted in a previous post, It’s the Jobs-to-Be-Done, Stupid!). In a recent column, the New York Times’ Nick Bilton related this story:

“In a 2008 talk at the Yale School of Management, Gary T. DiCamillo, a former chief executive at Polaroid, said one reason that the company went out of business was that the revenue it was reaping from film sales acted like a blockade to any experimentation with new business models.”

Gertner’s second observation was this, with regard to Bell Labs’ various innovations that were freely taken up by others:

“All the innovations returned, ferociously, in the form of competition.”

This is generally going to be true. Even patented innovations will find substitute methodologies emerging to compete. Which fits a common meme: ideas are worthless, execution is everything. It’s also seen in the dynamic of the first-to-market firm losing the market to subsequent entrants. After the innovation, relentless execution is the key to winning the market.

Excellent History and Innovation Insight

Wrapping this up, I recommend The Idea Factory. It delivers an excellent history of an institution, and its quirky personalities, that literally has defined our digital age. No, they didn’t invent the Internet. But all the pieces that have led to our ability to utilize the Internet can be traced to Bell Labs. Innovation students will also enjoy the processes and approaches taken to achieve all that Bell Labs did. Jon Gertner’s book is a good read.

I’m @bhc3 on Twitter.

Is Google+ More Facebook or More Twitter? Yes

Quick, what existing social network is Google+ most likely to displace in terms of people’s time?

Another Try by Google to Take On Facebook

Claire Cain Miller, New York Times

This isn’t a Facebook-killer, it’s a Twitter-killer.

Yishan Wong, Google+ post

A hearty congrats to Google for creating an offering that manages to be compared to both Facebook and Twitter. The initial press focused on Google+ as a Facebook competitor. But as people have gotten to play with it, more and more they are realizing that it’s just as much a Twitter competitor.

I wanted to understand how that’s possible. How is it Google+ competes with both of those services? To do so, I plotted Google+’s features against comparable features in both Facebook and Twitter. The objective was to understand:

  • Why are people thinking of Google+ as competitor to both existing social networks?
  • How did the Google team make use of the best of both services?

The chart below shows whether Google+ is more like Facebook or Twitter. The red check marks and gray shading highlight which service each Google+ feature more closely resembles.

A few notes about the chart.

Circles for tracking: Twitter has a very comparable feature with its Lists. Facebook also lets you put connections into lists; I know because I’ve put connections into lists (e.g. Family, High School, etc.). But I had a hard time figuring out where those lists are in the Facebook UI. Seriously, where do you access them? They may be available somewhere, but they’re not readily accessible. So I didn’t consider Facebook as offering this as a core experience.

+1 voting on posts: Both Google+ and Facebook allow up votes on people’s posts. Twitter has the ‘favorite’ feature. Which is sort of like up voting. But not really. It’s not visible to others, and it’s more a bookmarking feature.

Posts in web search results: Google+ posts, the public ones, show up in Google search results. Not surprising there. Tweets do as well. Facebook posts for the most part do not. I understand some posts on public pages can. But the vast majority of Wall posts never show up in web search results.

Google+ One-Way Following Defines Its Experience

When you look at the chart above, on a strict feature count, Google+ is more like Facebook. It’s got comment threading, video chat, inline media, and limited sharing.

But for me, the core defining design of Google+ is the one-way following. I can follow anyone on Google+. They may not follow back (er…put me in a circle), but I can see their public posts. This one-way following is what makes the experience more like Twitter for me. Knowing your public posts are out there for anyone to find and read is both a boon and a caution. For instance, I’ll post pics of my kids on Facebook, because I know who can see those pics – the people I’ve connected with. I don’t tend to post their pics on Twitter. Call me an old fashioned protective parent.

That’s my initial impression. Now as Google+ circles gain ground in terms of usage, they will become the Facebook equivalent of two-way following. Things like sharing and +mentions are issues that are hazy to me right now. Can someone reshare my “circle-only” post to others outside my circle? Do I have to turn off reshare every time? Does +mentioning someone outside my circle make them aware of the post?

Google has created quite a powerful platform here. While most features are not new innovations per se, Google+ benefits from the experience of both Twitter and Facebook. They’re off to a good start.

I’m @bhc3 on Twitter.

Four reasons enterprise software should skip native mobile apps

The desire to “consumerize” mobile apps for their own sake is stoking today’s outsized enthusiasm with device-specific enterprise mobile apps at a time when HTML5 is right there staring us all in the face.

Tony Byrne, Enterprise 2.0 B.S. List: Term No. 1 Consumerization

The runaway success of the iPhone app store has demonstrated that people love mobile, and seek the great user experiences that mobile apps provide. You see these wonderful little icons, beckoning you to give ‘em a tap on your phone. You browse the app store, find an app that interests you, you decide to try it on and see if it fits.

And all the cool kids are doing the native app thing. Path is an iPhone app. Facebook wins high praise for its iPhone app. And Wired ran a story declaring, essentially, that apps killed the web star.

There has been a clear market shift to the apps market, and consumers have gotten comfortable with the different apps on their phones. It’s come to define the mobile experience.

So why doesn’t that logic extend to the enterprise? Because the native app experience isn’t a good fit with enterprise software. Four reasons why.

1. Lists, clicks, text, images

Think about your typical enterprise software. Its purpose is to get a job done. What does it consist of? Lists, clicks, text and images. And that’s just right. You are presented efficient ways of getting things done, and getting *to* things.

This is the stuff of the web.

For the most part, the on-board functionality afforded by a mobile OS and native features is not relevant for enterprise software. When trying to manage a set of projects, or to track expenses, or to run a financial analysis…do you really need that awesome accelerometer? The camera?

The functions of mobile hardware and OS are absolutely fantastic. They’re great for so many amazing apps. But they’re overkill for enterprise software.

2. Enterprise adoption is not premised on the app store

A key value of the app store is visibility for iPhone and Android users. A convenient, ready-to-go market where downloads are easy and you get to experience them as soon as they’re loaded. This infrastructure lets apps “find their way” with their target markets.

An AdMob survey looked at how consumers find the mobile apps they download. Check out the top 3 below:

Users find apps by search, rankings and word-of-mouth. Great! As it should be. Definitely describes how I’ve found apps to download.

Irrelevant, however, for enterprise software. Distribution and usage of enterprise software is not an app store process. Employees will use the software because:

  • It’s the corporate standard
  • They’re already using it
  • They need to use it
  • It’s already achieved network effects internally, it’s the “go to” place

Adoption via the app store is not needed. The employee will already have a URL for accessing the app. For example, I use Gmail for both my personal and work email. For whatever reason, the second (work) Gmail account will not “take” in the native email app on my iPhone. So I’ve been using the web version of Gmail the last several months. It’s been easy, and I didn’t need to download any app. I knew where to access the site.

3. Mobile HTML looks damn good

Visually, native apps can look stunning. They are beautiful, and functional. No limitations of web constructs means freedom to create incredible user experiences.

But you know what? You can do a lot with HTML5. Taking a mobile web approach to styling the page and optimizing the user experience, one can create an experience to rival that of native apps.

As you can see on the right, an enterprise software page presented in a mobile browser need not be a sanitized list of things. It can pop, provide vibrant colors, present tap targets sized for the fattest fingers, and be indistinguishable from a native app.

Indeed, designing for a mobile experience is actually a great exercise for enterprise software vendors. It puts the focus on simplicity and the most commonly used functions. It’s also a chance to re-imagine the UX of the software. It wouldn’t surprise me if elements of the mobile-optimized HTML find their way back to the main web experience.

4. Too many mobile OS’s to account for

We all know that Apple’s iOS has pushed smartphone usage dramatically. And corporations are looking at iOS for both iPhones and iPads. Meanwhile, Android has made a strong run and is now the leading mobile OS. However, in corporate environments, RIM’s various BlackBerry flavors continue to have a strong installed base. And as for Microsoft’s Windows Phone 7 OS, “developer momentum on Windows Phone 7 is already incredibly strong.” (Ars Technica)

Four distinct OS’s, each with their own versions. Now, enterprise software vendors, you ready to staff up to maintain your version of native apps for each?

37signals recently announced it was dropping native apps for mobile. Instead, they’re focusing on mobile web versions of their software. In that announcement, they noted the challenge of having to specialize for both iOS and Android.

Meanwhile, the Trulia CEO noted the burden of maintaining multiple native apps for mobile:

“As a brand publisher, I’m loathe to create native apps,” he told me, “it just adds massive overhead.” Indeed, those developers need to learn specific skills to build native mobile apps, skills arguably having nothing to do with his core business. They have to learn the different programming code, simulators and tech capabilities of each platform, and of each version of the platform. By diverting so much money into this, he’s having to forgo investment in other core innovation.

A balkanized world of OS variants creates administrative, operational support and development costs. Not good for anybody.

While I’m sure there are enterprise software apps that can benefit from the native OS capabilities, such as integrated photos, for most enterprise software, mobile should be an HTML5 game.

I’m @bhc3 on Twitter.

Three Pluses, Three Minuses of Quora as a KM System

This question was posted on Quora, “In 10 words or less, what is Quora?” My answer:

Powerful application of crowdsourcing and social networking to knowledge management

Knowledge Management (aka “KM”) is a field that I don’t have personal experience in. It comprises the practices, processes and systems by which the valuable knowledge of workers is collected and made available for others. KM continues to be an important topic for enterprises these days, but it is also freighted with many failures and disappointments.

Without the benefit of a KM history, I wanted to look at Quora in the context of someone with an objective today: how do I make it easier for employees to find and share their knowledge?

In that context, I see three really good things about Quora, and three things that distort its value.

The Pluses

Purpose-Built: A premise of Enterprise 2.0 is that tools need to be lightweight and flexible for multiple purposes. That’s what you get with microblogging, wikis, blogs, forums. The problem there is that the flexibility undermines their value for delivering on specific needs. One must wade through a lot of other stuff to get to what you want.

Quora is purpose-built. It’s not a place for sharing links you find interesting or talking about the American Idol selection process. It’s a place where you know there will be relevant questions, and often good answers. Which means they can focus on delivering to the purpose, not try to be all things to all people. Important for KM.

Crowdsourcing: Very, very important. Quora leverages the principles of crowdsourcing to elicit knowledge. It’s not just a system for experts. Too often the focus is on getting “the experts” on the record, assuming most others have little to add. That is a shame.

The ability to follow topics allows people to track areas of either interest (to find answers) or expertise (to provide answers). As Professor Scott Page notes in his book, The Difference, everyone has a unique set of cognitive skills. To assume there are the “masters” and then there’s the “riff raff” is to lose a significant percentage of knowledge. Crowdsourcing ensures a broader opportunity to get at all relevant knowledge.

Social networking: We have people we like to follow. They may be friends, and we enjoy their takes on things. Or they may be people we admire, and who have demonstrated a capacity to provide valuable answers. The personal connection here, that we have an interest in a person as opposed to a topic, is valuable.

By letting me follow people, I am exposed to things that have a higher likelihood of interest to me. We can’t all be on Quora, or a KM site. But some portion of our networks will be, and seeing what they’ve been up to keeps me interested and contributes to a serendipity in acquiring knowledge.

It’s also encouraging to know I have a set of people who are receptive to my questions and my answers. Much better than a cold system of questions and answers only.

The Minuses

Discerning the wheat from the chaff: Quora gets noisy. For some, too noisy. That happens in an open platform. There will be some great answers to questions, but some pretty bad ones too. In terms of KM, some argue for restricting participation to only the known experts:

Few are blessed with serious, specifically relevant knowledge or know-how. Any system which facilitates overly broad participation will inextricably bury any expert knowledge under a pile of low value chatter. I am persuaded that for valuable ideas & thoughts to produce innovation there need to be a highly afferent and efferent system capable of synthesizing powerful multidimensional analytical databases with the know-how of subject matter experts, the imagination of visionaries and the creative mind of innovators who do not fret from the challenge of thinking.

The community culture needs to have a strict sense of what’s valuable, what’s not. And up-vote and down-vote accordingly.

Lots of followers means lots of up-votes: This is the downside of social networking. Some people have HUGE numbers of connections. Which means they have a built-in audience for their answers above and beyond the topic followers. An army of followers can come in and cause an answer to move to first position based on that alone, regardless of answer quality.

A good solution here is to employ a form of reputation to weight those votes. Don’t let just the volume of votes determine the top answer, look at the reputation of those who are voting.

You could also weight the answers themselves according to reputation, although I’m a little wary of that. Makes it harder for new voices with quality contributions to get traction.
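To make the idea concrete, here is a minimal Python sketch of reputation-weighted vote scoring. It is purely illustrative: the voter names, the reputation values and the simple sum-of-reputations formula are my own assumptions, not anything Quora actually implements.

```python
# Illustrative sketch: rank answers by the reputation of their voters
# rather than by raw vote counts. All names and numbers are invented.

def weighted_score(voters, reputation):
    """Sum each voter's reputation; unknown voters count as a baseline 1.0."""
    return sum(reputation.get(voter, 1.0) for voter in voters)

reputation = {"alice": 5.0, "bob": 1.0, "carol": 1.0, "dan": 1.0}

# answer_1 gets three low-reputation up-votes; answer_2 gets one high-reputation up-vote.
answer_votes = {
    "answer_1": ["bob", "carol", "dan"],
    "answer_2": ["alice"],
}

ranked = sorted(answer_votes,
                key=lambda a: weighted_score(answer_votes[a], reputation),
                reverse=True)
# By raw count, answer_1 wins 3 votes to 1; by reputation weight,
# answer_2 (5.0) outranks answer_1 (3.0).
```

The design choice is that an army of followers piling on no longer guarantees the top slot; the scores of the people voting matter as much as their number.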

Incentives to participate: I’m busy. You’re busy. We’re all busy. Who has time to participate? This will always be an issue. With things like microblogging, there’s a core communication need they satisfy. So that more closely aligns with my day-in-, day-out work. But answering some distant colleague’s question?

There are a lot of ways to address this. Getting participation early on from enthusiasts goes a long way in terms of demonstrating value (something Quora has done). Getting kudos for good answers is a huge motivator. Obviously, getting a good answer just once is critical to seeing the value. And Q&A seems like a perfect activity for applying game mechanics.

All in all, I really like the KM potential for Quora. It doesn’t need to be as heavily active as Twitter, but benefits from a broader participation than what is seen in Wikipedia. The minuses are challenges to overcome, but they are not insurmountable.

Three Reasons Google Should Acquire Delicious from Yahoo

So the news is out. Yahoo plans to shutter Delicious, the largest social bookmarking site. Which is shocking, particularly among the tech savvy and socially oriented. Delicious is iconic for its application of social sharing and collective intelligence. Hard to believe Yahoo wants to shut it down.

But wait…this doesn’t have to be the end. Why not seek alternatives to shutting down the service? Might there be a logical company to take on Delicious, and all the value it holds? Why yes, one company comes to mind.

Google.

Delicious fits Google’s mission

Hmmm…what is it Google wants to do? What defines their corporate philosophy? Ah yes, here’s Google’s mission:


“Organize the world’s information.” Now, doesn’t that sound like the kind of thing that applies to Delicious? Millions of people organizing the world’s information, according to their own tags. Which makes it easier to find for others. Crowdsourced curation.

For that reason alone, Google would be wise to take on Delicious.

Glean new insights about what people value

Google’s PageRank is amazing. It’s incredibly good at finding nuggets. But it’s not perfect, as anyone who regularly uses it knows. The use of links is powerful, but it is a limited basis for identifying valuable web pages.

What people elect to bookmark is a different sort of valuation. Which is important, because not everyone blogs, or creates web pages with links to their favorite sites. But there is a distributed effort of indicating value via bookmarking.

This activity would be a valuable addition to Google’s search results. Take a look at this thread on Hacker News (a bunch of tech savvy types) about Delicious:

I added that highlighting. And here’s what Michael Arrington said when Yahoo experimented with adding Delicious bookmarks to its search results:

I have previously written that Delicious search is one of the best ways of searching for things when a standard search doesn’t pull up what you are looking for. After Google, it is my favorite “search engine.” Adding this information into Yahoo search is a great idea.

Google could leverage the activity of Delicious users to improve its search results, or at least give users an additional place to find content. Mine the tags to provide more context and connections among pages.
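As a rough illustration of what folding bookmark activity into ranking could look like, here is a minimal Python sketch. The pages, scores, weights and the log-damped blending formula are all invented for this example; this is not how Google actually combines signals.

```python
import math

# Illustrative only: blend a link-based relevance score (PageRank-style)
# with a bookmark-count signal. All values here are made up.

def blended_score(link_score, bookmark_count, bookmark_weight=0.3):
    """Combine link authority with a log-damped bookmark signal,
    so huge bookmark counts help but do not dominate."""
    return ((1 - bookmark_weight) * link_score
            + bookmark_weight * math.log1p(bookmark_count))

pages = {
    "example.com/obscure-gem": (0.2, 4000),  # few inbound links, heavily bookmarked
    "example.com/linked-page": (0.8, 10),    # well linked, rarely bookmarked
}

ranked = sorted(pages, key=lambda p: blended_score(*pages[p]), reverse=True)
# The heavily bookmarked page surfaces first, even though links alone
# would have buried it.
```

The point of the log damping is exactly the Delicious case: bookmarking is a distributed, long-tail signal, so a page a few thousand people saved should rise, without letting raw counts swamp link authority.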

Note that Google, and Bing, are exploring different ways to apply social signals from Twitter and Facebook. Inclusion of Delicious in the search process would be consistent with that.

And Google would still benefit from its AdWords program here. That would also give Delicious, which currently runs no ads, a monetization strategy.

Great PR move with the tech community

Google finds itself in a fight with Facebook for employees. Google is public, Facebook is pre-IPO. Social is hot, and Facebook is dominant in that. Google isn’t.

But as Allen Stern notes, Google does have a special appeal to the tech crowd for its developer-friendly moves. Stepping in and taking over a legendary Web 2.0 site like Delicious would be a good fit with that reputation. Enhance the usage of the data and make it easy for developers to access.

More importantly, Delicious holds a special appeal among the geekier set. Many of us are still active bookmarkers, and use the service. Google is known for being a geek-centric paradise, with a bunch of high-GPA, advanced degree types on its campuses.

What do you think it costs to run Delicious “as is”? I’d hazard a guess that it’s not too much. And Google is throwing off some serious cash ($10 billion in the last 12 months):

So they do have some capacity, but obviously need to invest it wisely.

For a relatively low cost, they gain a treasure trove of data on relevance and value, and a solid boost to their PR. Seems like a big win to me. How about it Google? Why not step in and take over Delicious?

Phone Cameras + Social Are Expanding the Historical Record

"There's a plane in the Hudson. I'm on the ferry going to pick up the people. Crazy."

In a critique of the rise of Instagram (current photo sharing app du jour), Laurie Voss argues that the rise of cheap, low fidelity cameras on phones is undermining the data contained in them. And it’s not just that these pictures are lower quality now, it’s affecting their value for future generations:

With these rubbish phone cameras we take terrible photos of some of our most important moments and cherished memories. I am not complaining about composition and lighting here; I’m not a photographer. I am talking about the quantity of meaningful visual data contained in these files. Future historians will decry forever the appalling lack of visual fidelity in the historical record of the last decade.

I read that, and at first thought, “Yeah, that could be an issue.” But then I realized that, well no, it’s actually the opposite. The rise of cheap phone cameras is actually increasing the historical record. This even has disruptive innovation undertones to it.

Why?

Picture = Moment + Equipment

When thinking about recording data for history pictorially, I consider two elements:

  • Moment
  • Equipment

"The line at 9 am at the Pleasanton @sfbart stretches for blocks. Huge crowd downtown today for #sfgiants parade."

Now moments are always going to arise. They may be significant moments, such as Janis Krums’ iconic picture above after a US Airways plane crash-landed on the Hudson. Recently, the San Francisco Giants were celebrated for their 2010 World Series title with a ticker tape parade in downtown San Francisco. When I arrived at the Dublin/Pleasanton BART station the morning of the victory parade, I was shocked by the number of people waiting in line to get to SF.

Just as important as the moment is the equipment. I’m not talking about the quality of the photographic equipment. I’m saying, “do you have something to take the picture?”

Before I got a phone with a camera on it, I had no way of photographing any moments. I could tweet about them, email a description of them and tell people about them. But there was no visual record at all.

I wasn’t carrying a camera around with me. Just not something I wanted to deal with as I also carried my ‘dumb’ phone. And wallet. And keys. Just too much to deal with.

But a camera included with my mobile phone? Oh yeah, that works. I’ll have that with me at all times.

Which is a much better fit with the notion of capturing moments. They are unpredictable, and do not schedule themselves to when you’re carrying a separate camera.

As for the “quantity of meaningful visual data” being reduced, I think of it mathematically as the product of two ratios, (X/Y) × (B/A):

The X/Y variable represents the decrease in data per picture. If Y is the “full” data from a high resolution photo, then X is the reduced data set. The loss of scene details, the inability to discern people’s expressions, etc. Yeah, that is a loss due to low quality cameras.

The B/A variable represents the increased number of pictures enabled by the proliferation of convenient low quality cameras. If A is the quantity of photos with high resolution cameras, B is the overall number of photos inclusive of the low quality cameras.

Multiply the ratios, and I believe the overall historical record has been improved by the advent of phone cameras. In other words, the product is greater than 1.
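To make the arithmetic explicit, here is the same argument with invented numbers (the 0.25 and 20 are purely illustrative, not measurements):

```python
# Back-of-the-envelope version of the ratio argument above.
# X/Y: data captured per picture (phone camera vs. high-res camera).
# B/A: number of pictures taken (with phones vs. high-res cameras only).

data_ratio = 0.25      # X/Y: say a phone photo captures a quarter of the detail
quantity_ratio = 20.0  # B/A: but twenty times as many photos get taken

historical_record_change = data_ratio * quantity_ratio  # 5.0, i.e. > 1
```

As long as the explosion in photo volume outpaces the loss of per-photo fidelity, the product stays above 1 and the historical record comes out ahead.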

Sharing Is Caring

Something the higher quality, standalone cameras have lacked is connectivity. They miss that urge we have to share something in the moment. The fact that I can share a picture just as soon as I take it is extra incentive to take the picture in the first place.

I share my kids’ pics with family via email, and other pics end up in my Twitter and Facebook streams. You know how painful it is to upload photos from the camera and share them? Very.

Standalone cameras are like computer hard drives, locking data off in some siloed storage device somewhere. Good luck to historians in extracting that photographic data.

Convenience Wins Out

This is the disruptive innovation of convenience. People are swapping the separate cameras for the all-in-one mobile devices. And like any good low-end innovation, the quality will increase. Meaning more pictures with better detail and fidelity.

I mean, imagine if there were a bunch of phone cameras at Gettysburg?

Only known photo of Abraham Lincoln (center, without hat) at Gettysburg

We’d have thousands of pics, and it’d be a Twitter Trending Topic. As for the lower data per picture, damn the torpedoes, full steam ahead. Phone cameras will enrich the historical record for future generations.

iPad’s Climb Up the Disruptive Innovation Cycle

Blockbuster’s recent bankruptcy filing was yet another chapter in the Clayton Christensen annals of disruptive innovation. A major brand with convenient locations that got disrupted by a website and the U.S. Mail. Note that we’re seeing the backend of the disruption, when it all seems so clear.

How easy is it to see such a disruption beforehand? “Not very” would be the honest answer. What distinguishes a truly disruptive technology or business model from a flash-in-the-pan idea? Keep in mind the basis of a disruptive innovation:

A technology initially addressing low-end market needs that slowly moves upstream as its capabilities evolve.

From that perspective, think of all the things out there that have stayed low level and did not disrupt industries. Disruptive innovation is like a Category 5 hurricane: powerful, slow-moving and rare.

Which brings me to the Apple iPad. Are we witnessing a disruptive tropical depression?

DISRUPTIVE INNOVATION LADDER

The graphic below (from Wikipedia, reproduced in the TouchDraw iPad app) describes the levels of usage for disruptive technologies.

Disruptive Innovation Cycle

The target of the iPad here is the global laptop market. In that context, the beautiful, sublime, innovative iPad is solidly…in the low quality usage band of the chart above.

What represents the iPad’s “low quality use”?

  • Email
  • Surfing the web
  • Facebooking
  • Tweeting
  • Playing music

“Low quality” is not a pejorative term here. It’s a reflection of the computing power needed for the listed activities. This is the iPad’s entrée into the laptop market. Consider how much of your own digital activity is covered by the items listed above. The iPad already offers a great experience here.

Indeed, the Best Buy CMO recently confirmed the iPad’s move into this end of the market.

MEDIUM QUALITY USE

When you see those low quality uses, they’re primarily consumption oriented. If they are production oriented, they’re pretty basic. But there are things that can be done at the next level, medium quality use.

Games are well done on the iPad. They take advantage of the touch aspect of the device. In my opinion, games on the iPad are quickly moving up the quality ladder.

For the office, there are Apple’s apps. The Pages word processing app looks like a winner. For document production, Pages appears to fill the bill. Especially without a Microsoft Word app on the iPad. The other major office apps – spreadsheets and presentations – are available as well.

I really like the graphics program TouchDraw on the iPad. You can create very nice graphics, for business use, with just your finger. The simple graphic above was done with TouchDraw.

While I couldn’t possibly survey all apps that address different activities, I get the sense that a number of them qualify for medium or high quality uses. The question is the breadth of apps addressing the “power use cases” of laptop owners.

Finally, a word about the keyboard. I love it. I find it very easy to type out this post. It’s not without its imperfections, but generally I’m flying around it as I type. One disclaimer: I hunt-n-peck to type. I’ve never learned real typing.

GAPS THAT NEED TO BE CLOSED

In my experimenting to see how much I could do with an iPad instead of a laptop, I’ve found several areas that need to be shored up to move the overall experience to the medium quality use level.

Safari usage: Safari is the browser used for the web on the iPad. It is surprising how many sites aren’t built for usage via Safari. For example, wordpress.com, surprisingly in my view, doesn’t work well with Safari. Google Docs? Similar issue. Doesn’t work well, or at all, with Safari. I’m embedding HTML tags in this post-by-email blog post.

I cannot accept an event into my Google Calendar via Safari. I cannot create a WebEx meeting from Safari, and the WebEx iPad app doesn’t allow you to create an event. In short, doing business via iPad is tough.

As the iPad continues to gain market share, expect better support by websites for Safari. Which will dramatically improve the end user experience with the iPad.

Graphics uploads: Want to add a graphic to a document, presentation, wiki, blog or email? Hard to do. We’re used to having graphics on our local drive, and a simple button to upload/embed that graphic.

Where’s my master upload button on the iPad?!!

Answer: there isn’t one. The graphic above is one that I emailed to Flickr, grabbed the embed code and pasted it into this post. Which works fine for publicly accessible graphics. But not so much in the work context.

I’d like to see the native Photos app become a universal location for accessing graphics in any app.

Stuff at my fingertips: The ability to easily click around different apps on the PC tray at the bottom of my screen, and to click quickly among different websites via tabs, is a great productivity benefit. If you’re like me, you’re zipping around easily.

With iPad, it’s slower going back-n-forth. A lot of clicking the home button to get to other apps, or clicking the button on Safari to view other sites. Which is a pain, reducing the pace of work.

iPAD’S INEVITABLE CLIMB UP THE DISRUPTION CYCLE

So the iPad is still fundamentally in the low quality usage band, but with some clear indications of moving up. I’ve taken to using my iPad for my non-work hours computing needs.

My full expectation is that slowly, but surely, Apple and the third party app developers will improve the utility of the iPad experience. It will take some time.

But the key observation is this: Apple has the time to enhance the iPad.

That’s why I expect iPad to get better over time: market momentum. How about you? Are you thinking the iPad, and even the new crop of competitor tablets, will disrupt the laptop industry?

Sent from my iPad

Should BP crowdsource solutions to solve the Gulf oil spill?

Clifford Krauss of the New York Times reports on BP’s latest effort to cap the oil leak, called “top kill”. He notes the following:

The consequences for BP are profound: A successful capping of the leaking well could finally begin to mend the company’s brittle image after weeks of failed efforts, and perhaps limit the damage to wildlife and marine life from reaching catastrophic levels.

A failure could mean several months more of leaking oil, devastating economic and environmental impacts across the gulf region, and mounting financial liabilities for the company. BP has already spent an estimated $760 million in fighting the spill, and two relief wells it is drilling as a last resort to seal the well may not be completed until August.

Let’s hope for the best. Given the challenges of the previous efforts, it sounds like it will take a monumental effort to stop the leaking well.

Which raises a question…should BP be tapping a larger set of minds to help stop the leaking well? Can they crowdsource a solution?

In a way, they’re already doing it. Sort of. You can call an idea hotline to suggest ways to stop the oil. They even have the number posted on their home page.

But why not take it a step further? A formal crowdsourcing effort. I’ve heard that the folks at Innocentive raised this in an NPR report. Another vendor also pitched its idea management software, but BP didn’t bite. Spigit hasn’t pitched BP, but would certainly be willing to help.

There are some very good reasons to open it more publicly, and cast a call across the globe for ideas:

  • Diversity of ideas increases the odds of finding something that will be useful
  • While no one idea may solve it, visibility (as opposed to private phone calls) increases the odds of finding parts of ideas that lead to viable solutions
  • The brain power of enthusiastic participants across the globe is a good match to BP’s in-house experts
  • Potentially a good PR move, as the company demonstrates that it’s leaving no stone unturned to solve the leak

Crowdsourcing has proven its value in other endeavors, such as products, government services, technical problems and marketing. Surely it could do well here. But what might hold BP back? Three reasons:

  1. Little previous experience with crowdsourcing
  2. Deep technical domain experience is required
  3. Site becomes a place for public criticism

Are they valid? Let’s see.

Little Previous Crowdsourcing Experience

If a company hasn’t previously mastered open innovation and crowdsourcing, a crisis is a hell of a time to give it a go. This is far from comprehensive, but I did find a couple of examples of BP’s forays into the world of crowdsourcing and open innovation.

Headshift wrote up a case study about BP’s Beacon Awards. The internal awards recognize innovative marketing initiatives, and BP created a site for employees to submit ideas and vote on them. This example has a couple elements of note:

  • It’s an internal effort, where “mistakes” can be made as the company gets comfortable with the process of crowdsourcing
  • It was for marketing ideas in a time of relative calm, not time-is-ticking ideas during a crisis

BP also touts its open innovation efforts. Open innovation means working with others outside your organization to come up with new ways of tackling problems. In a post on its website, it discusses its work with partners:

The need to work with others to solve tricky problems has most likely been around since humans learned to communicate, pooling their skills to achieve a desired mutual goal. In today’s world, collaboration between partner organisations has become highly sophisticated, particularly so in the energy industry where new challenges abound, be those in security of supply, cleaner energy sources, or the bringing together of different scientific and engineering disciplines to focus on a common problem.

Certainly the oil spill qualifies as a tricky problem.

So BP has experience in crowdsourcing internally on marketing ideas, and in open innovation with academia and industry partners. Not too shabby, and that argues for their having a favorable disposition toward crowdsourcing.

Deep Technical Domain Expertise Is Required

OK, I’ll admit. I have no idea how I’d stop the oil leak. Maybe I could come up with an idea as I give my kids a bath (“so you take the rubber duckie, and move it over the drain…”).

The BP oil leak occurred deep underwater, an environment with different conditions than oil companies have typically had to deal with. BP is sparing no level of expertise in fixing the issue, reports the New York Times:

Several veterans of that operation are orchestrating technicians in the Gulf of Mexico. To lead the effort, BP has brought in Mark Mazzella, its top well-control expert, who was mentored by Bobby Joe Cudd, a legendary Oklahoma well firefighter.

Didn’t even know one could be a legendary well firefighter. But the challenges of doing this in the Gulf are different. Popular Mechanics has a scorecard of each previous effort by BP to stop the leaking well. Do you remember one effort called “The Straw”? It captures a portion of the oil, siphoning it to a surface ship. But it’s not without its risks:

The real gamble was in the original insertion—the damaged riser’s structural integrity is unknown, and any prodding could have worsened the spill, or prevented any hope of other riser- or BOP-related fixes.

Given the highly technical nature of these efforts, and the myriad complexities, does it make sense to crowdsource? I’d say it does, in that a proposed idea need not satisfy all elements of risk mitigation and possible complications. That puts too high a burden on idea submitters. Start with the idea, let the domain experts evaluate its feasibility.

Keep in mind that people outside a company can solve technical challenges. Jeff Howe wrote in Wired about a guy who tinkers in a one-bedroom apartment above an auto body shop. This guy solved a vexing problem for Colgate involving the insertion of fluoride powder into a toothpaste tube.

Site Becomes a Place for Public Criticism

If BP were to set up a public site that allowed anyone to participate, I can guarantee that some percentage of ideas and comments would be devoted to excoriating BP. In fact, it wouldn’t surprise me if much of the site became that: a free-for-all that has nothing to do with solving the oil well leak.

A public forum receiving press attention during an extreme crisis presents angry individuals with a too-tempting target for mischief. BP could spend more time deleting or responding to comments than gleaning value from them. The anger is too strong, too visceral on the part of many across the world.

Charlene Li talks about meeting criticism head-on in her book Open Leadership. Perhaps one way BP could handle this would be to set up a companion forum to which criticism could be moved, keeping the idea site dedicated to just that…ideas.

But I can see how BP understandably would not want to deal with such a site, as it potentially becomes a major PR pain on top of the existing maelstrom.

This reason strikes me as the one most likely to keep BP away from a crowdsourcing initiative to complement their other efforts. What do you think? Should BP be crowdsourcing solutions to the Gulf oil spill?

Wanted: Cars that Use Collective Intelligence to Improve Driving

Credit: woodleywonderworks

Every week, I drive in my car from Pleasanton, CA to San Francisco. You get some time to think when you make that drive. An idea that has occurred to me is…

We ought to be making better use of the data our cars generate.

It could make a difference in terms of driver awareness and safety.

This notion is consistent with something I heard Tim O’Reilly describe at the Web 2.0 Summit last year: “web squared”. Which is an odd-sounding term, I’ll admit.

Odd, but important. Here’s how O’Reilly and John Battelle describe “web squared” in a white paper:

The Web is no longer a collection of static pages of HTML that describe something in the world. Increasingly, the Web is the world – everything and everyone in the world casts an “information shadow,” an aura of data which, when captured and processed intelligently, offers extraordinary opportunity and mind bending implications. Web Squared is our way of exploring this phenomenon and giving it a name.

In the white paper, the increased use of sensors is a driver of this new trend. Sensors can track data on machinery and objects that can be turned into collective intelligence. Stanford futurist Paul Saffo sees sensors as the next great wave of technology innovation.

That’s some background for you. Now…how would this web squared collective intelligence be applied to driving?

Useful Data Goes Uncollected

As we drive, our cars produce a treasure trove of information:

  • Speed
  • Braking
  • Use of windshield wipers
  • Windshield wiper cleaning fluid usage
  • Steering wheel turning
  • Headlight usage

But none of it is collected. We see it and control it on board as we drive, but that’s it. It’s not shared with anyone else. It’s just something we do while we drive.

Turning this Data into Collective Intelligence for Better Driving

Here’s what I would love to see. As we drive along, various data about our cars is quietly collected and transmitted to the cloud, where it is tabulated in real time. What such a system looks for is variances. Points of change. Because it’s these points of change that present the biggest headaches and safety issues for drivers.

Below are several ways that the data from cars can be used for effective collective intelligence to make driving safer.

  • Speed: Alert that traffic slows dramatically in 5 miles
  • Braking: Alert that cars are slamming on their brakes in 1 mile
  • Windshield wipers: High-frequency wiper use 1 mile ahead
  • Wiper cleaning fluid: Drivers unexpectedly cleaning their windshields in 1 mile
  • Steering wheel turning: Drivers veering sharply left in 1 mile
  • Headlights: Drivers turning on headlights in 1 mile
Notice the way this should work: not an alert for conditions right where you are. After all, you’ll know about those. It’s the road ahead of you where such a system delivers its value.

In the examples above, I imagine alerts for things happening 1 mile ahead, or even 5 miles. There’d be a visual and audio system of alerts. Think of it like a Twitter stream of data about conditions ahead. It’d generally be quiet and unobtrusive, unless something materially changes in the road ahead of the driver. Kind of like a Garmin GPS unit telling you to “turn right in 1 mile”.

Such a system would take full advantage of GPS. As the data is relayed from cars, their location is noted. As a person drives, her location is noted, and plotted relative to identified upcoming changes.
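To make the idea concrete, here’s a minimal sketch of how such a system might pool per-car reports and surface look-ahead alerts. It’s Python, and everything in it (the `TrafficAggregator` name, integer mile-long road segments, the slowdown threshold) is a hypothetical assumption for illustration, not a description of any real system:

```python
from collections import defaultdict
from statistics import mean

# Hypothetical sketch: cars report (segment_id, speed) as they drive.
# Segments are assumed to be consecutive one-mile stretches of road,
# so a driver on segment s cares about segments s+1 .. s+5 ahead.

class TrafficAggregator:
    def __init__(self, window=20):
        self.window = window                 # keep the last N reports per segment
        self.reports = defaultdict(list)

    def report(self, segment_id, speed_mph):
        """A car transmits its current segment and speed."""
        buf = self.reports[segment_id]
        buf.append(speed_mph)
        if len(buf) > self.window:
            buf.pop(0)                       # drop the oldest report

    def alerts_for(self, current_segment, lookahead=5, slowdown_ratio=0.5):
        """Flag upcoming segments whose average reported speed has
        dropped sharply relative to the driver's current segment."""
        here = self.reports.get(current_segment)
        if not here:
            return []                        # no baseline to compare against
        baseline = mean(here)
        alerts = []
        for offset in range(1, lookahead + 1):
            seg = current_segment + offset
            ahead = self.reports.get(seg)
            if ahead and mean(ahead) < baseline * slowdown_ratio:
                alerts.append((seg, f"Traffic slows dramatically {offset} mile(s) ahead"))
        return alerts

agg = TrafficAggregator()
for _ in range(5):
    agg.report(10, 65)   # free-flowing traffic at our segment
for _ in range(5):
    agg.report(12, 20)   # cars crawling two segments ahead
print(agg.alerts_for(10))
```

A real implementation would key segments by GPS coordinates rather than integers, and would run the same variance check over braking, wiper use and the other signals. But the pattern is the same: compare what the cars ahead are reporting against the driver’s current baseline, and stay quiet unless something changes.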

Collective Intelligence Works at Scale

Collective intelligence requires a reasonably high participation rate to be of value. Sporadic, spot updates don’t provide sufficient data for this desired innovation to work.

Which means these systems would need to be built into cars. On-board computers that systematically track these variables and have the ability to transmit them to satellites. Like a Garmin GPS or GM OnStar unit.

And since scale is required, you’d want common standards among the automakers – GM, Ford, Chrysler, Toyota, Honda, Volkswagen, etc. No need to balkanize such a system.

It’s Just an Idea

As I noted at the start of this post, it’s just an idea for now. But it seems like a really good application of the web squared concept. I’d love to have better information on driving conditions, and there’s a wealth of data that can provide highly localized reports. We just need to be able to tap it.

Foursquare Check-in Etiquette

Anyone remember the early complaints about Twitter? That people were posting updates about what they’re eating for lunch? Robert Scoble noted this phenomenon in a blog post from last September about Twitter’s rise:

It tells me that Twitter isn’t lame anymore. Remember those days when Twitter was for telling all your friends you were having a tuna sandwich at Subway in Half Moon Bay?

I do.

Yes, Twitter has grown up and become much more than the report of what you’re eating for lunch. Which brings us to Foursquare and Gowalla.

These services are in their early stages, with Foursquare outnumbering Gowalla four-to-one in members. Some of us are experimenting with these location-based services. For me personally, it feels like those early days of Twitter (“What should I tweet?”).

The biggest difference since my early Twitter days is that I’ve got more experience with this sharing behavior, and I’m comfortable trying different approaches.

With that in mind, I wanted to describe some early thoughts on Foursquare and Gowalla etiquette.

The Check-in Sharing Hierarchy

Louis Gray wrote a post recently asking whether people are censoring their check-ins to maintain hipster cred. It’s a good, if somewhat painful, examination of the fact that we do have some serious hum-drum in our lives. People’s comments on the post are illuminating, as some admit this behavior, but also note that they don’t want to bore everyone.

There are three levels of sharing check-ins that Foursquare provides (Gowalla only has the latter two):

  1. Share with no one
  2. Share only with your Foursquare/Gowalla connections
  3. Share with your Twitter and Facebook friends

The three levels each have their own unique use cases, and their own check-in etiquette.

Share It with No One

I’ve done this before. I check in, but I don’t share it with anyone. Why? Two reasons:

  1. To maintain a record of my day’s activities
  2. To stay on top of the mayorships, badges and points

See, a valuable use case of checking in with Foursquare and Gowalla is the maintenance of a personal activity history. The combination of GPS location, pre-existing locations and one-click check-in makes it quite easy to create your personal record. Now, some of those check-ins are less-than-interesting. Like…

Checking in at a gas station

Now it may be boring, but I’ll bet there’s a badge out there for multiple gas station check-ins. Maybe someone will earn a Gas Guzzler badge (as opposed to the Douchebag badge). It’s all part of the fun. A festooned Foursquare profile.

But there is a role for curating your check-ins. I really don’t need to know about your gas station check-ins. That applies to my interests, and it applies to what I assume to be the interests of my connections on the location-based services. Sure, share your whereabouts, but please have some mercy on those who follow you. We successfully graduated past the “What are you eating for lunch?” stage of Twitter.

And good luck with that Gas Guzzler badge.

Share Only with Foursquare, Gowalla Connections

People that follow you on Foursquare and Gowalla are participating in another aspect of location-based social networks. The “keeping tabs” aspect. You see what others are doing in the course of their day. For instance, I was able to see that Techcrunch’s MG Siegler was in Japan a few weeks back, via his various Gowalla updates.

One commenter on Louis Gray’s blog post noted this use case:

I’ve also found a use case in ethically “stalking” various tech pundits (I hate that word) and found a couple of high value events I would otherwise have missed.

Personally, I look at things like work check-ins as de rigueur for this level of sharing. Whereas gas station check-ins may bore your connections, the work stuff is of greater interest. I’ll often see Socialtext CEO Eugene Lee’s check-ins at the company’s headquarters. As the head of a major software company, he surely has to travel a fair amount. So the check-ins to HQ tell me he’s working away in the office.

I check in to Spigit every day. Proud to say I’m the Foursquare “mayor” of Spigit, oh yes. But I’m competing with several colleagues for that title. I share these check-ins with my Foursquare and Gowalla connections.

But not with my Twitter/Facebook connections. Those folks didn’t decide to follow me based on my daily work check-ins.

Share with Twitter, Facebook Friends

However, I do share check-ins, even mundane ones, on Twitter at times. I’ll explain in a second.

First, interesting ones are a no-brainer. Should you find yourself with Anne Hathaway at a post-Oscars party, by all means, share that check-in! Or maybe you’re in a working session at the White House. Definitely passes the interestingness test.

There’s also a good use case for alerting your wider social networks as to your location for meet-ups. It’s a commonly cited use case for Foursquare/Gowalla.

However, I’ll admit as a father with a full-time job and a mortgage, my “interesting” check-ins are few and far between, and I rarely am trying to connect with others at Trader Joe’s. And I’m not alone. The majority of people will have mundane check-ins as they go about daily life.

Making the mundane interesting is where the Foursquare/Gowalla art lies.

Create “tweetable” check-ins. What’s going on around you that would be worth sharing? What will some people on Twitter and Facebook find interesting?

It’s something I do, and I admit it’s a bit of a game for me. “What can I tweet with this check-in?” I find it forces me to observe what’s around me, or step back from where I am and consider the larger moment.

A couple of examples below:

I’ll never do a straight tweet of my check-in at a BART station. At least, not unless I fat-finger my iPhone. But if I can report out the unusually cold weather we’re experiencing, yeah, tweet that!

As I said above, we’re early in this location-based check-in thing. Consider the observations above a start.

I’m @bhc3 on Twitter.
