Avoiding innovation errors through jobs-to-be-done analysis

The lean startup movement was developed to address an issue that bedeviled many entrepreneurs: how to introduce something new without blowing all your capital and time on the wrong offering. The premise is that someone has a vision for a new thing, and needs to iteratively test that vision (“fail fast”) to find product-market fit. It’s been a success as an innovation theory, and has penetrated the corporate world as well.

In a recent post, Mike Boysen takes issue with the fail fast approach. He argues that a better understanding of customers’ jobs-to-be-done (i.e. what someone is trying to get done, regardless of what solutions are used) at the front end is superior to guessing continuously about whether something will be adopted by the market. To quote:

How many hypotheses does it take until you get it right? Is there any guarantee that you started in the right Universe? Can you quantify the value of your idea?

How many times does it take to win the lottery?

Mike advocates for organizations to invest more time at the front end understanding their customers’ jobs-to-be-done rather than iteratively guessing. I agree with him in principle. However, in my work with enterprises, I know that such an approach is a long way off as a standard course of action. There’s the Ideal vs. the Reality:

[Figure: JTBD analysis - innovation ideal vs. reality]

The top process – Ideal – shows the right point to understand your target market’s jobs-to-be-done. It’s similar to what Strategyn’s Tony Ulwick outlines for outcome-driven innovation. In the Ideal flow, proper analysis has uncovered opportunities for underserved jobs-to-be-done. You then ideate ways to address the underserved outcomes. Finally, a develop-test-learn approach is valuable for identifying an optimal way to deliver the product or service.

However, here’s the Reality: most companies aren’t doing that. They don’t invest time in ongoing research to understand the jobs-to-be-done. Instead, ideas are generated in multiple ways. The bottom flow marked Reality highlights a process with more structure than most organizations actually have. Whether an organization follows all of these processes or not, the key is this: ideas are being generated continuously from a number of sources divorced from deep knowledge of jobs-to-be-done.

Inside-out analysis

In my experience working with large organizations, I’ve noticed that ideas tend to go through what I call “inside-out” analysis. Ideas are evaluated first on criteria that reflect the company’s own internal concerns. What’s important to us inside these four walls? Examples of such criteria:

  • Fits current plans?
  • Feasible with current assets?
  • Addresses key company goal?
  • Financials pencil out?
  • Leverages core competencies?

For operational, low-level ideas, inside-out analysis can work. Most of the decision parameters are knowable, and the impact of a poor decision can be reversed. But as the scope of the idea increases, it’s insufficient to rely on inside-out analysis.

False positives, false negatives

Starting with the organization’s own needs first leads to two types of errors:

  • False positive: the idea passes the organization’s internal criteria with flying colors. That creates a too-quick mindset of ‘yes’ without understanding the customer perspective. This opens the door for bad ideas to be greenlighted.
  • False negative: the idea falls short on the internal criteria, or, even more likely, on someone’s personal agenda. It never gets a fair hearing on whether the market would value it. The idea is rejected prematurely.

In both cases, the lack of perspective about the idea’s intended beneficiaries leads to innovation errors. False positives are part of a generally rosy view of innovation: it’s good to try things out, it’s how we find our way forward. But spending money on that pursuit is not, by itself, an objective of companies. Mitigating the risk of investing limited resources in the wrong ideas is important.

In the realm of corporate innovation, false negatives are the bigger sin. They are the missed opportunities. The cases where someone actually had a bead on the future, but the idea was snuffed out by entrenched executives, sclerotic processes or heavy-handed evaluations. Kodak, a legendary company sunk by the digital revolution, actually invented the digital camera in the 1970s. As the inventor, Steven Sasson, related to the New York Times:

“My prototype was big as a toaster, but the technical people loved it,” Mr. Sasson said. “But it was filmless photography, so management’s reaction was, ‘that’s cute — but don’t tell anyone about it.’ ”

It’s debatable whether the world was ready for digital photography at the time, as there was not yet much in the way of supporting infrastructure. But Kodak’s inside-out analysis focused on the idea’s effect on its core film business. And thus a promising idea was killed.

Start with outside-in analysis

Thus organizations find themselves with a gap in the innovation process. In the ideal world, rigor is brought to understanding the jobs-to-be-done opportunities at the front-end. In reality, much of innovation is generated without analysis of customers’ jobs beforehand. People will always continue to propose and to try out ideas on their own. Unfortunately, the easiest, most available basis of understanding the idea’s potential starts with an inside-out analysis. The gap falls between those few companies that invest in understanding customers’ jobs-to-be-done, and the majority who go right to inside-out analysis.

What’s needed is a way to bring the customers’ perspective into the process much earlier. Get that outside-in look quickly.

Three jobs-to-be-done tests

In my work with large organizations, I have been advising a switch in the process of evaluating ideas. The initial assessment of an idea should be outside-in focused. Specifically, there are three tests that any idea beyond the internal incremental level should pass:

[Figure: The three jobs-to-be-done tests]

Each of the tests examines a critical part of the decision chain for customers.

Targets real job of enough people

The first test is actually two tests:

  1. Do people actually have the job-to-be-done that the idea intends to address?
  2. Are there enough of these people?

This is the simplest, most basic test. Most ideas should pass this, but not all. As written here previously, the Color app was developed to allow anyone – strangers, friends – within a short range to share pictures taken at a location. While a novel application of the Social Local Mobile (SoLoMo) trends, Color actually didn’t address a job-to-be-done of enough people.

A lot better than current solution

Assuming a real job-to-be-done, consideration must next be given to the incumbent solution used by the target customers. On what points does the proposed idea better satisfy the job-to-be-done than what is being done today? This should be a clear analysis. The improvement doesn’t have to be purely functional. It may better satisfy emotional needs. The key is that there is a clear understanding of how the proposed idea is better.

And not just a little better. It needs to be materially better to overcome people’s natural conservatism. Nobel Laureate Daniel Kahneman discusses two factors that drive this conservatism in his book, Thinking, Fast and Slow:

  • Endowment effect: We overvalue something we have currently over something we could get. Think of that old saying, “a bird in the hand is worth two in the bush”.
  • Uncertainty effect: Our bias shifts toward loss aversion when we consider how certain the touted benefits of something new are. The chance that something won’t live up to its potential looms larger in our psyche, and our aversion to loss causes us to overweight that possibility.

In innovation, the rule-of-thumb that something needs to be ten times better than what it would replace reflects our inherent conservatism. I’ve argued that the problem with bitcoin is that it fails to substantially improve our current solutions to payments: government-issued currency.

Value exceeds cost to beneficiary

The final test is the most challenging. It requires you to walk in the shoes of your intended beneficiaries (e.g. customers). It’s an analysis of marginal benefits and costs:

Value of improvement over current solution > Incremental costs of adopting your new idea

Not the costs to the company of providing the idea, but those borne by the customer. These costs include money, learning curves, lost connections to other solutions, loss of existing data, and so on. It’s a holistic look at tangible and intangible costs. Which, admittedly, is the hardest analysis to do.
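To make the comparison concrete, here is a minimal sketch of that marginal analysis in Python. Every category and number below is an illustrative assumption, not data from a real evaluation; the point is simply that the costs are tallied from the customer's side of the ledger.

    # Illustrative only: hypothetical numbers for a proposed idea, viewed
    # from the customer's perspective, normalized to a common 0-100 scale.
    value_of_improvement = 70   # how much better the idea satisfies the job

    # Incremental costs of adopting the new idea, borne by the customer
    adoption_costs = {
        "purchase price": 30,
        "learning curve": 20,
        "integration with other solutions": 15,
        "loss of existing data": 10,
    }

    total_cost = sum(adoption_costs.values())
    print(f"Value of improvement:      {value_of_improvement}")
    print(f"Incremental adoption cost: {total_cost}")
    print("Passes the third test" if value_of_improvement > total_cost
          else "Fails the third test")

In this toy example the intangible costs tip the balance, and the idea fails the test. That is exactly the pattern in the example that follows.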

An example where the improvements didn’t cover the incremental costs is a tire that Michelin introduced in the 1990s. The tire had a sensor and could run for 125 miles after being punctured. A sensor in the car would let the driver know about the issue. But for drivers, a daunting issue emerged: how do you get those tires fixed or replaced? They required special equipment that garages didn’t have and weren’t going to purchase. The value of these superior tires did not outweigh the cost of not being able to get them fixed or replaced.

Recognize your points of uncertainty

While I present the three jobs-to-be-done tests as a sequential flow of yes/no decisions, in reality they are better utilized as measures of uncertainty. Think of them as gauges:

[Figure: JTBD tests as certainty meters]

Treat innovation as a learning activity. Develop an understanding of what’s needed to get to ‘yes’ for each of the tests. This approach is consistent with the lean startup philosophy, and it provides guidance for developing a promising idea.
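One lightweight way to work with the gauges is to track a confidence level for each test instead of a yes/no verdict, and to let the weakest gauge set the next learning priority. The Python sketch below is a hypothetical illustration; the gauge readings and the 0.7 threshold are arbitrary assumptions.

    # Hypothetical confidence gauges for the three tests
    # (0.0 = no evidence yet, 1.0 = certain).
    gauges = {
        "targets a real job of enough people": 0.8,
        "a lot better than the current solution": 0.5,
        "value exceeds cost to the beneficiary": 0.3,
    }

    CONFIDENT = 0.7  # arbitrary threshold for treating a gauge as a 'yes'

    for test, confidence in gauges.items():
        status = "ok" if confidence >= CONFIDENT else "needs more learning"
        print(f"{test}: {confidence:.0%} ({status})")

    # The weakest gauge points to where the next experiment should focus.
    weakest = min(gauges, key=gauges.get)
    print(f"Next learning priority: {weakest}")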

Mike Boysen makes the fundamental point about understanding customers’ jobs-to-be-done to drive innovation. Use these three tests for those times when you cannot invest the time/resources at the front end to understand the opportunities.

I’m @bhc3 on Twitter.

Will customers adopt your innovation? Hope, fear and jobs-to-be-done

When will a customer decide your innovative product or service is worth adopting? It’s a question that marketers, strategists and others spend plenty of time thinking about. The factors are myriad and diverse. In this post, let’s examine two primary elements that influence both whether an innovation will be adopted, and when that would happen:

  1. Decision weights assigned to probabilities
  2. Probability of job-to-be-done improvement

A quick primer on both factors follows. These factors are then mapped to the innovation adoption curve. Finally, they are used to analyze the adoption of smartwatches and DVRs.

Decision weights assigned to probabilities

Let’s start with decision weights, as that’s probably new for many of us. In his excellent book, Thinking, Fast and Slow, Nobel laureate Daniel Kahneman describes research he and a colleague did that examined the way people think about probabilities. Specifically, given different probabilities for a gain, how do people weight those probabilities?


Classic economics says that if an outcome has a 25% probability, then 25% is the weight a rational person should assign to that outcome. If you’ve taken economics or statistics, you may recall being taught something along these lines. However, Kahneman and his colleague had anecdotally seen evidence that people didn’t act that way. So they conducted field experiments to determine how people actually incorporated probabilities into their decision making. The table below summarizes their findings:

[Table: Decision weights vs. probability]

The left side of the table shows that people assign greater weight to low probabilities than they should. Kahneman calls this the possibility effect. The mere fact that something could potentially happen has a disproportionate weight in decision-making. Maybe we should call this the “hope multiplier”. It’s strongest at the low end, with the effect eroding as probabilities increase. When the probability of a given outcome increases to 50% and beyond, we see the emergence of the uncertainty effect. In this case, the fact that something might not happen starts to loom larger in our psyche. This is because we are loss averse. We prefer avoiding losses to acquiring gains.

Because of loss aversion, an outcome that has an 80% probability isn’t weighted that way by people. We look at that 20% possibility that something will not happen (essentially a “loss”), and fear of that looms large. We thus discount the 80% probability to a too-low decision weight of 60.1.
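The decision weights in that table come from empirical work Kahneman did with Amos Tversky, and their 1992 prospect theory paper also fit a simple formula to them. The Python sketch below uses that weighting function with their published parameter for gains (gamma of roughly 0.61); treat the output as an approximation of the table, not a reproduction of it.

    # Tversky-Kahneman (1992) probability weighting function for gains:
    #   w(p) = p^gamma / (p^gamma + (1 - p)^gamma) ^ (1 / gamma)
    # gamma ~= 0.61 is their fitted estimate for gains, not a universal constant.
    GAMMA = 0.61

    def decision_weight(p: float, gamma: float = GAMMA) -> float:
        """Approximate decision weight people assign to a probability p of a gain."""
        return p**gamma / (p**gamma + (1 - p)**gamma) ** (1 / gamma)

    for p in (0.01, 0.05, 0.20, 0.50, 0.80, 0.95, 0.99):
        print(f"probability {p:5.0%} -> decision weight {decision_weight(p):5.1%}")

    # Low probabilities are overweighted (the possibility effect); high ones are
    # underweighted, e.g. an 80% probability maps to a weight of roughly 60%.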

Probability of job-to-be-done improvement

A job-to-be-done is something we want to accomplish. It consists of individual tasks and our expectation for each of those tasks. You rate the fulfillment of the expectations to determine how satisfied you are with a given job-to-be-done. This assessment is a cornerstone of the “job-to-be-done improvement” function:

[Figure: Job-to-be-done improvement function]

Dissatisfaction: How far away from customers’ expectations is the incumbent way that they fulfill a job-to-be-done? The further away, the greater the dissatisfaction. This analysis is really dependent on the relative importance of the individual job tasks.  More important tasks have greater influence on the overall level of satisfaction.

Solution improvement: How does the proposed innovation (product, service) address the entirety of the existing job? It will be replacing at least some, if not all, of the incumbent solution. What are the better ways it fulfills the different job tasks?

Cost: How much does the innovation cost? There’s the out-of-pocket expense. But there are other costs as well. Learning costs. Things you cannot do with the new solution that you currently can. The costs will be balanced against the increased satisfaction the new solution delivers.

These three elements are the basis of determining the fit with a given job-to-be-done. Because of their complexities, determining precise measures for each is challenging. But it is reasonable to assert a probability. In this case, the probability that the proposed solution will provide a superior experience to the incumbent solution.
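As a crude illustration of asserting such a probability, one can score each job task for the incumbent and the proposed solution, weight by task importance, and then net out adoption cost. The Python sketch below is hypothetical; the task names, weights, scores and cost penalty are all assumptions, and the mapping to "likely superior or not" is deliberately simplistic.

    # Hypothetical job tasks: (task, importance weight, incumbent score, proposed score),
    # with satisfaction scored 0-10 against the customer's expectations.
    tasks = [
        ("know when an update arrives", 5, 4, 8),
        ("respond quickly",             3, 6, 6),
        ("avoid interrupting others",   2, 7, 4),
    ]

    total_weight = sum(w for _, w, _, _ in tasks)
    incumbent = sum(w * old for _, w, old, _ in tasks) / total_weight
    proposed  = sum(w * new for _, w, _, new in tasks) / total_weight

    dissatisfaction = 10 - incumbent     # how far today's solution falls short
    improvement = proposed - incumbent   # solution improvement, importance-weighted
    cost_penalty = 1.5                   # money, learning, lost data (assumed)

    print(f"Dissatisfaction with incumbent: {dissatisfaction:.1f} / 10")
    print(f"Net gain after adoption costs:  {improvement - cost_penalty:+.1f}")
    # A positive net gain argues for a higher probability that customers will
    # experience the proposed solution as superior; a negative one, the opposite.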

Mapping decision weights across the innovation adoption curve

The decision weights described earlier are an average across a population; there is variance around that average. The decision weights for a given probability of a job-to-be-done gain will differ by adoption segment, as shown below:

[Figure: Decision weights across the innovation adoption curve]

The green and red bars along the bottom of each segment indicate the different weights assigned to the same probabilities for each segment. For Innovators and Early Adopters, any possibility of an improvement in job-to-be-done satisfaction is overweighted significantly. At the right end, Laggards are hard-pressed to assign sufficient decision weights to anything but an absolutely certain probability of increased satisfaction.

Studies have shown that our preferences for risk-aversion and risk-seeking are at least somewhat genetically driven. My own experience also says that there can be variance in when you’re risk averse or not. It depends on the arena and your own experience in it. I believe each of us has a baseline of risk tolerance, and we vary from that baseline depending on circumstances.

Two cases in point: smartwatches and DVRs

The two factors – decision weights and probability of improved job-to-be-done satisfaction – work in tandem to determine how far the reach of a new innovation will go. Generally,

  • If the probability of job-to-be-done improvement is low, you’re playing primarily to the eternal optimists, Innovators and Early Adopters.
  • If the probability of improvement is high, reach will be farther but steps are needed to get later segments aware of the benefits, and to even alter their decision weights.

Let’s look at two innovations in the context of these factors.


Smartwatches have a cool factor. If you think of a long-term trend of progressively smaller computing devices – mainframes, minicomputers, desktops, laptops, mobile devices – then the emergence of smartwatches is the logical next wave. Finally, it’s Dick Tracy time.

The challenge for the current generation of smartwatches is distinguishing themselves from the incumbent solution for people, smartphones. Not regular time wristwatches. But smartphones.  How much do smartwatches improve the jobs-to-be-done currently fulfilled by smartphones?

Some key jobs-to-be-done by smartphones today:

  • Email
  • Texting
  • Calls
  • Social apps (Facebook, Twitter, etc.)
  • Navigation
  • Games
  • Many, many more

When you consider current smartphone functionality, what job tasks are under-satisfied? In a Twitter discussion about smartwatches, the most compelling proposition was that the watch makes it easier to see updates as soon as they happen. Eliminate the pain of taking your phone out of your pocket or purse. Better satisfaction of the task of knowing when, who and what for emails, texts, social updates, etc.

But improvement in this task comes at a cost. David Breger wrote that he had to stop wearing his smartwatch. Why? The updates pulled his eyes to his watch. Constantly. To the point where his conversational companions noticed, affecting their interactions. What had been an improvement came with its own cost. There are, of course, those people who bury their faces in their phones wherever they are. The smartwatch is a win for them.

If I were to ballpark the probability that a smartwatch will deliver improvement in its targeted jobs-to-be-done, I’d say it’s 20%. Still, that’s good enough for the Innovators segment. I imagine their decision weights look something like this:

[Figure: Decision weights - Innovators]

The mere possibility of improvement drives these early tryers-of-new-things. It explains who was behind Pebble’s successful Kickstarter campaign. But the low probability of improving the targeted jobs-to-be-done dooms the smartwatch, as currently conceived, to the left side of the adoption curve.


Digital video recorders make television viewing easier. Much easier. Back when TiVo was the primary game in town, early adopters passionately described how incredible the DVR was. It was life-changing. I recall hearing the praise back then, and I admit I rolled my eyes at these loons.

Not so these days.

DVRs have become more commonplace. With good reason. They offer a number of features which improve  various aspects of the television viewing job-to-be-done:

  • Pause a live program
  • Rewind to watch something again (your own instant replay for sports)
  • Set it and forget it scheduling
  • Easy playback of recorded shows
  • Easy recording without needing to handle separate media (VCR tape, DVD)

But there are costs. If you’ve got a big investment in VCR tapes or DVDs, you want to play those. It does cost money to purchase a DVR plan. The storage of the DVR has a ceiling. You have to learn how to set up and work with a DVR. It becomes part of the room decor. What happens if the storage drive crashes?

My estimate is that the DVR has an 80% probability of being better than incumbent solutions. Indeed, this has been recognized in the market. A recent survey estimates U.S. household adoption of DVRs at 44%. Basically, knocking on the door of the Late Majority. I imagine their decision weights look like this:

[Figure: Decision weights - Late Majority]

On the probability side of the ledger, the Late Majority will need to experience DVRs themselves to understand their potential. This happens by seeing the innovation in use among their Early Majority friends. They become aware of how much an innovation can improve their satisfaction.

On the decision-weight side, vendors must do the work of addressing the uncertainty that comes with the innovation. This means understanding the forces – allegiance to the incumbent solution, anxiety about the proposed solution – that must be overcome.

Two influential factors

As you consider your product or service innovation, pay attention to these two factors. The first – jobs-to-be-done – is central to getting adoption of anything. Without the proper spade work there, you will be flying blind into the market. The second factor is our human psyche, and how we harbor hope (possibility) and fear (uncertainty). Because people are geared differently, you’ll need to construct strategies (communication channels, messaging, product enhancements) that pull people toward your idea, overcoming their natural risk aversion.

I’m @bhc3 on Twitter, and I’m a Senior Consultant with HYPE Innovation.

Why crowdsourcing works

Crowdsourcing is a method of solving problems through the distributed contributions of multiple people. It’s used to address tough problems that come up every day. Ideas for new opportunities. Ways to solve problems. Uncovering an existing approach that addresses your need.

Time and again, crowdsourcing has been used successfully to solve challenges. But…why does it work? What’s the magic? What gives it an advantage over talking with your pals at work, or doing some brainstorming on your own? In a word: diversity. Cognitive diversity. Specifically these two principles:

  • Diverse inputs drive superior solutions
  • Cognitive diversity requires spanning gaps in social networks

These two principles work in tandem to deliver results.

Diverse inputs drive superior solutions

When trying to solve a challenge, what is the probability that any one person will have the best solution for it? It’s a simple mathematical reality: the odds of any single person providing the top answer are low.
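A back-of-envelope calculation shows why. Suppose, purely as an assumption, that any one contributor has a 2% chance of holding the best available solution to a challenge. The Python sketch below shows how the odds change as more independent contributors weigh in.

    # Assumed, not measured: each contributor independently has a small chance
    # of holding the best available solution to the challenge.
    P_SINGLE = 0.02

    def chance_best_surfaces(contributors: int, p: float = P_SINGLE) -> float:
        """Probability that at least one of n independent contributors has it."""
        return 1 - (1 - p) ** contributors

    for n in (1, 10, 50, 200):
        print(f"{n:>3} contributors -> {chance_best_surfaces(n):.0%} chance "
              "the best solution surfaces")

The independence assumption is where cognitive diversity comes in: a homogeneous group's answers are highly correlated, so adding more of the same kind of people buys far less than this toy formula suggests.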

How do we get around this? Partly by more participants; increased shots on goal. But even more important is diversity of thinking. People contributing based on their diverse cognitive toolkits:

[Figure: The cognitive toolkit]

As described by University of Michigan Professor Scott Page in The Difference, our cognitive toolkits consist of different knowledge, perspectives and heuristics (problem-solving methods). Tapping into people’s cognitive toolkits brings fresh perspectives and novel approaches to solving a challenge. Indeed, a research study found that the probability of solving tough scientific challenges is three times higher when a person’s field of expertise is seven degrees outside the domain of the problem.

In another study, researchers analyzed the results of an online protein-folding game, Foldit. Proteins fold themselves, but no one understands exactly how they do so, not even experts in the field of biochemistry. So the online game allows users to simulate it, with an eye towards better understanding the ways the proteins fold themselves. As reported by Andrew McAfee, the top players of Foldit were better than both computers and experts in the field at understanding the folding sequence. The surprising finding? None had taken chemistry beyond a high school course. It turns out spatial skills are more important for solving the problem than deep domain knowledge of proteins.

Those two examples provide real-world proof for the models and solution-seeking benefits of cognitive diversity described by Professor Page.

[Figure: Solution landscape as cornstalks]

Problem solving can be thought of as building a solutions landscape, planted with different ideas. Each person achieves their local optimum, submitting the best idea they can for a given challenge based on their cognitive assets.

But here’s the rub: any one person’s idea is unlikely to be the best one that could be uncovered. This makes sense as both a probabilistic outcome, and based on our own experiences. However in aggregate, some ideas will stand out clearly from the rest. Cognitive diversity is the fertile ground where these best ideas will sprout.

In addition to being a source of novel ideas, cognitive diversity is incredibly valuable as feedback on others’ ideas. Ideas are improved as people contribute their distinct points of view. The initial idea is the seedling, and feedback provides the nutrients that allow it to grow.

Cognitive diversity requires spanning gaps in social networks

Cognitive diversity clearly has a significant positive effect on problem-solving. Generally when something has proven value to outcomes, companies adopt it as a key operating principle. Yet getting this diversity has not proven to be as easy and common as one might expect.


[Figure: Strong ties, weak ties and no ties]

Why? Because it depends on human behavior. Left to our own devices, we tend to turn to our close connections for advice and feedback. These strong ties are the core of our day-in, day-out interactions.

But this natural human tendency to turn to our strong ties is why companies are challenged to leverage their cognitive diversity. University of Chicago Professor Ron Burt describes the issue as one of structural holes between nodes in a corporate social network in his paper, Structural Holes and Good Ideas (pdf). A structural hole is a gap between different groups in the organization. Information does not flow across structural holes.

In and of themselves, structural holes are not the problem. Rather, the issue is that when people operate primarily within their own node, their information sources are redundant. Over time, the people in the node know the same facts, develop the same assumptions and optimize to work together in harmony. Sort of like a silo of social ties.

[Figure: Idea quality vs. diversity of connections]

The impact of this is a severe curtailment of fresh thinking, which lowers the quality of ideas. Professor Burt found empirical evidence for this in a study of Raytheon’s Supply Chain Group. 673 employees were characterized by their social network connections, placing them on a spectrum from insular to diverse. These employees then each provided one idea to improve supply chain management at Raytheon. Their ideas were then assessed by two senior executives.

The results? Employees with more diverse social connections provided higher quality ideas. The graph above plots the rated ideas, with a curve of the average idea rating against the submitter’s level of network diversity. The curve shows that the more diverse a person’s connections, the higher the value of their idea.

Employees with access to diverse sources of information provided better ideas.  Their access to nonredundant information allowed them to generate more novel, higher potential ideas. Inside organizations, there are employees who excel at making diverse connections across the organization. These people are the ones who will provide better ideas. They are brokers across the structural holes in social networks.

Professor Burt provides the key insight about these brokers:

People connected to groups beyond their own can expect to find themselves delivering valuable ideas, seeming to be gifted with creativity. This is not creativity born of genius; it is creativity as an import-export business. An idea mundane in one group can be a valuable insight in another.

An “import-export business”. Consider that for a moment. It’s a metaphor that well describes the key value of the brokers. They are exchange mechanisms for cognitive diversity. They are incredibly valuable to moving things forward inside organizations. But are organizations overly dependent on these super-connectors? Yes. Companies are leaving millions on the table by not enabling a more scalable, comprehensive and efficient means for exchanges of cognitive diversity.

What if we could systematize what the most connected employees do?

Systematize the diverse connections

Crowdsourcing doesn’t eliminate the need for the super-connectors. They play a number of valuable roles inside organizations. But by crowdsourcing to solve problems, companies gain the following:

  • Deeper reach into the cognitive assets of all employees
  • Avoidance of the strong-ties trap in problem-solving
  • Faster surfacing of the best insights
  • Neutralization of the biases that super-connectors naturally have

As you consider ways to improve your decision-making and to foster greater cross-organizational collaboration, make crowdsourcing a key element of your strategic approach.

I’m @bhc3 on Twitter, and I’m a Senior Consultant with HYPE Innovation.

Bell Labs Created Our Digital World. What They Teach Us about Innovation.

What do the following crucial, society-altering innovations have in common?

  • Transistors
  • Silicon-based semiconductors
  • Mobile communication
  • Lasers
  • Solar cells
  • UNIX operating system
  • Information theory

They all have origins in the amazing Idea Factory, AT&T’s Bell Labs. I’ve had a chance to learn about Bell Labs via Jon Gertner’s new book, The Idea Factory: Bell Labs and the Great Age of American Innovation. (Disclosure: I was given a free copy of the book for review by TLC Book Tours.)

I don’t know about you, but really, I had no sense of the impact Bell Labs had on our current society. Gertner writes a compelling narrative, intermingling the distinctive personalities of the innovators with layman’s explanations of the concepts they developed. In doing so, he brings alive an incredible institution that had been accessible only as old black-and-white photos of men wearing ties around lab equipment.

For the history alone, read this book. You will gain knowledge about how the products that define life today came into being back in the 1940’s, 50’s and 60’s. I say that as someone who really wasn’t “in” to learning about these things. Gertner, a writer for Wired and the New York Times, invites you into the world of these fascinating, brilliant people and the challenges they overcame in developing some damn amazing technological achievements.

Those stories really carry the book. But just as interesting for innovation geeks are the lessons imparted from their hands-on work. There are several principles that created the conditions for innovation. Sure, the steady cash flow from the phone service monopoly AT&T held for several decades was a vital element. But that alone was not sufficient to drive innovation. How many companies with a strong, stable cash flow have frittered away that advantage?

Looking beyond that obvious advantage, several elements determined the Labs’ success. They are described in detail below.

#1: Inhabit a problem-rich environment

In an interview with a Bell Labs engineer, Gertner got this wonderful observation. Bell Labs inhabited “a problem-rich environment”.

“A problem-rich environment.” Yes.

Bell Labs’ problems were the build-out of the nation’s communications infrastructure. How do you maintain signal fidelity over long distances? How will people communicate the number they want? How can vacuum tube reliability be improved for signal transmission? How to maximize spectrum for mobile communications?

I really like this observation, because it sounds obvious, but really isn’t. Apply efforts to solving problems related to the market you serve. It’s something a company like 3M has successfully done for decades.

Where you see companies get this wrong is when they stray from the philosophy of solving customer needs, becoming internally focused on their own “problems”. For instance, what problem did New Coke solve for customers? And really, what problems is Google+ solving for people that aren’t handled by Facebook and Twitter?

A problem of, “our company needs to increase revenues, market share, profits, etc.” isn’t one that customers give a damn about. Your problem-rich environment should focus on the jobs-to-be-done of customers.

A corollary to inhabiting a problem-rich environment: focus innovation on solving identified problems. This vignette about John Pierce, a leader in Bell Labs, resonates with me:

Pierce was given free rein to pursue any ideas he might have. He considered the experience equivalent to being cast adrift without a compass. “Too much freedom is horrible.”

#2: Cognitive diversity gets breakthroughs

Bell Labs’ first president, Frank Jewett, saw the value of the labs in this way:

Modern industrial research “is likewise an instrument which can bring to bear an aggregate of creative force on any particular problem which is infinitely greater than any force which can be conceived of as residing in the intellectual capacity of an individual.”

The labs were deliberately stocked with scientists from different disciplines. The intention was to bring together people with different perspectives and bodies of knowledge to innovate on the problems they wanted solved.

For example, in developing the solid state transistor, Labs researchers were stumped to break through something called the “surface states barrier”. Physicist Walter Brattain worked with electrochemist Robert Gibney to discover a way to do so. Two separate fields working together to solve a critical issue in the development of semiconductors.

The value of cognitive diversity was systematically modeled by Professor Scott Page. Bell Labs shows its value in practice.

#3: Expertise and HiPPOs can derail innovation

Ever seen some of these famously wrong predictions?

Ken Olson, President & Founder, Digital Equipment Corp. (1977): “There is no reason anyone would want a computer in their home.”

 Albert Einstein (1932): “There is not the slightest indication that nuclear energy will ever be obtainable. It would mean that the atom would have to be shattered at will.”

Western Union internal memo (1876): “This ‘telephone’ has too many shortcomings to be seriously considered as a means of communication.”

Now, before we get too smug here…haven’t you personally been off on predictions before? I know I have. The point here is not to assume fundamental deficiencies of character and intellect. Rather, to point out that they will occur.

What makes wrong predictions more harmful is the position of the person who makes them. Experts are granted greater license to determine the feasibility and value of an idea. HiPPOs (the highest paid person’s opinion) are granted similarly vaunted positions. In both cases, when they get it wrong, their positions can undermine innovation.

Bell Labs was not immune. Two examples demonstrate this. One did not derail innovation, one did.

Mobile phones

In the late 1950s, Bell Labs engineers considered the idea that mobile phones would one day be small and portable to be utopian. Most considered mobile phones as necessarily bulky and limited to cars, due to the power required to transmit signals from the phone to a nearby antenna.

In this case, the engineers’ expertise on wireless communications was proved wrong. And AT&T became an active participant in the mobile market.


Semiconductors

In the late 1950s, Bell Labs faced a fork in the road for developing transistors. The Labs had pioneered the development of the transistor. Over time, the need for ever smaller transistors was seen as critical to their commercialization. Bell Labs’ vice president of device development, Jack Morton, had a specific view on how transistor miniaturization should happen. He believed a reduction in components was the one right way. Even as development of his preferred methodology was proving technically difficult, he was unwilling to hear alternative ideas for addressing the need.

Meanwhile, engineers at Texas Instruments and Fairchild Semiconductor, simultaneously and independently, developed a different methodology for miniaturization, one that involved constructing all components within one piece of silicon. Their approach was superior, and both companies went on to success in semiconductors.

Bell Labs, pioneers in transistors, lost its technological lead and did not become a major player in the semiconductor industry.

With mobile phones, the experts could not see how sufficient power could be generated. Fortunately, their view did not derail AT&T’s progress in the mobile market. In the case of semiconductors, Bell Labs engineers were aware of the integrated circuit concept, before Texas Instruments and Fairchild introduced it. But the HiPPO, Jack Morton, held the view that such an approach could never be reliable. HiPPO killed innovation.

#4: Experiment and learn when it comes to new ideas

When you think you’ve got a big, disruptive idea, what’s the best way to  handle it? Go big or go home? Sure, if you’re the type to put the whole bundle on ’19’ at the roulette table.

Otherwise, take a cue from how Bell Labs handled the development of the first communications satellite. Sputnik had been launched a few years earlier, and the satellite race was on. The basics of what a satellite had to do? Take a signal from Location A and relay it to Location B. Turns out, there were a couple of models for how to do this: ‘passive’ and ‘active’ satellites.

Passive satellites could do one thing: intercept a signal from Location A and reflect it down to Location B. In so doing, they scattered the signal into millions of little bits, requiring high-powered receptors on the ground. Active satellites could do much more. They could take a signal, amplify it and direct it to the different places it had to go. This focused approach required much lower-powered receiving apparatus on the ground, a clear advantage.

But Bell Labs was just learning the dynamics of satellite technology. While active satellites were the obvious future for top business and military value, they were much more complicated to develop. Rather than try to do all of that at the outset, John Pierce directed his team to start with the passive satellite. To start with an experiment. He explained his thinking:

“There’s a difference, you see, in thinking idly about something, and in setting out to do something. You begin to see what the problems are when you set out to do things, and that’s why we though [passive] would be a good idea.”

#5: Innovation can sow the seeds of one’s own destruction

Two observations by the author, Jon Gertner, show that even the good fortune of innovation can open a company up to problems. First:

“In any company’s greatest achievements one might, with clarity of hindsight, locate the beginnings of its own demise.”

One sees this in the demise of formerly great companies who “make it”, then fail to move beyond what got them there (something noted in a previous post, It’s the Jobs-to-Be-Done, Stupid!). In a recent column, the New York Times’ Nick Bilton related this story:

“In a 2008 talk at the Yale School of Management, Gary T. DiCamillo, a former chief executive at Polaroid, said one reason that the company went out of business was that the revenue it was reaping from film sales acted like a blockade to any experimentation with new business models.”

Gertner’s second observation was this, with regard to Bell Labs’ various innovations that were freely taken up by others:

“All the innovations returned, ferociously, in the form of competition.”

This is generally going to be true. Even patented innovations will find substitute methodologies emerging to compete. Which fits a common meme: ideas are worthless, execution is everything. It’s also seen in the dynamic of the first-to-market firm losing the market to subsequent entrants. After the innovation, relentless execution is the key to winning the market.

Excellent History and Innovation Insight

Wrapping this up, I recommend The Idea Factory. It delivers an excellent history of an institution, and its quirky personalities, that literally has defined our digital age. No, they didn’t invent the Internet. But all the pieces that have led to our ability to utilize the Internet can be traced to Bell Labs. Innovation students will also enjoy the processes and approaches taken to achieve all that Bell Labs did. Jon Gertner’s book is a good read.

I’m @bhc3 on Twitter.

Is Google+ More Facebook or More Twitter? Yes

Quick, what existing social network is Google+ most likely to displace in terms of people’s time?

Another Try by Google to Take On Facebook

Claire Cain Miller, New York Times

This isn’t a Facebook-killer, it’s a Twitter-killer.

Yishan Wong, Google+ post

A hearty congrats to Google for creating an offering that manages to be compared to both Facebook and Twitter. The initial press focused on Google+ as a Facebook competitor. But as people have gotten to play with it, more and more they are realizing that it’s just as much a Twitter competitor.

I wanted to understand how that’s possible. How is it Google+ competes with both of those services? To do so, I plotted Google+’s features against comparable features in both Facebook and Twitter. The objective was to understand:

  • Why are people thinking of Google+ as competitor to both existing social networks?
  • How did the Google team make use of the best of both services?

The chart below shows where Google+ is more like Facebook or Twitter. The red check marks and gray shading highlight which service a Google+ feature is more like.

A few notes about the chart.

Circles for tracking: Twitter has a very comparable feature with its Lists. Facebook also lets you put connections into lists; I know because I’ve put connections into lists (e.g. Family, High School, etc.). But I had a hard time figuring out where those lists are in the Facebook UI. Seriously, where do you access them? They may be available somewhere, but they’re not readily accessible. So I didn’t consider Facebook as offering this as a core experience.

+1 voting on posts: Both Google+ and Facebook allow up votes on people’s posts. Twitter has the ‘favorite’ feature. Which is sort of like up voting. But not really. It’s not visible to others, and it’s more a bookmarking feature.

Posts in web search results: Google+ posts, the public ones, show up in Google search results. Not surprising there. Tweets do as well. Facebook posts for the most part do not. I understand some posts on public pages can. But the vast majority of Wall posts never show up in web search results.

Google+ One-Way Following Defines Its Experience

When you look at the chart above, on a strict feature count, Google+ is more like Facebook. It’s got comment threading, video chat,  inline media, and limited sharing.

But for me, the core defining design of Google+ is the one-way following. I can follow anyone on Google+. They may not follow back (er…put me in a circle), but I can see their public posts. This one-way following is what makes the experience more like Twitter for me. Knowing your public posts are out there for anyone to find and read is both boon and caution. For instance, I’ll post pics of my kids on Facebook, because I know who can see those pics – the people I’ve connected with. I don’t tend to post their pics on Twitter. Call me an old fashioned protective parent.

That’s my initial impression. Now as Google+ circles gain ground in terms of usage, they will become the Facebook equivalent of two-way following. Things like sharing and +mentions are issues that are hazy to me right now. Can someone reshare my “circle-only” post to others outside my circle? Do I have to turn off reshare every time? Does +mentioning someone outside my circle make them aware of the post?

Google has created quite a powerful platform here. While most features are not new innovations per se, Google+ benefits from the experience of both Twitter and Facebook. They’re off to a good start.

I’m @bhc3 on Twitter.

Four reasons enterprise software should skip native mobile apps

The desire to “consumerize” mobile apps for their own sake is stoking today’s outsized enthusiasm with device-specific enterprise mobile apps at a time when HTML5 is right there staring us all in the face.

Tony Byrne, Enterprise 2.0 B.S. List: Term No. 1 Consumerization

The runaway success of the iPhone app store has demonstrated that people love mobile, and seek the great user experiences that mobile apps provide. You see these wonderful little icons, beckoning you to give ’em a tap on your phone. You browse the app store, find an app that interests you, you decide to try it on and see if it fits.


And all the cool kids are doing the native app thing. Path is an iPhone app. Facebook wins high praise for its iPhone app. And Wired ran a story declaring, essentially, that apps killed the web star.

There has been a clear market shift to the apps market, and consumers have gotten comfortable with the different apps on their phones. It’s come to define the mobile experience.

So why doesn’t that logic extend to the enterprise? Because the native app experience isn’t a good fit with enterprise software. Four reasons why.

1. Lists, clicks, text, images

Think about your typical enterprise software. Its purpose is to get a job done. What does it consist of? Lists, clicks, text and images. And that’s just right. You are presented efficient ways of getting things done, and getting *to* things.

This is the stuff of the web.

For the most part, the on-board functionality afforded by a mobile OS and native features are not relevant for enterprise software. When trying to manage a set of projects, or to track expenses, or to run a financial analysis…do you really need that awesome accelerator function? The accelerometer? The camera?

The functions of mobile hardware and OS are absolutely fantastic. They’re great for so many amazing apps. But they’re overkill for enterprise software.

2. Enterprise adoption is not premised on the app store

A key value of the app store is visibility to iPhone and Android users. It’s a convenient, ready-to-go market where downloads are easy and you get to experience apps as soon as they’re loaded. This infrastructure lets apps “find their way” with their target markets.

An AdMob survey looked at how consumers find the mobile apps they download. Check out the top 3 below:

Users find apps by search, rankings and word-of-mouth. Great! As it should be. Definitely describes how I’ve found apps to download.

Irrelevant, however,  for enterprise software. Distribution and usage of enterprise software is not an app store process. Employees will use the software because:

  • It’s the corporate standard
  • They’re already using it
  • They need to use it
  • It’s already achieved network effects internally, it’s the “go to” place

Adoption via the app store is not needed. The employee will already have a URL for accessing the app. For example, I use gmail for both my personal and work emails. For whatever reason, the second work gmail will not “take” on the native email function of my iPhone. So I’ve been using the web version of gmail the last several months. It’s been easy, and I didn’t need to download any app. I knew where to access the site.

3. Mobile HTML looks damn good

Visually, native apps can look stunning. They are beautiful, and functional. No limitations of web constructs means freedom to create incredible user experiences.

But you know what? You can do a lot with HTML5. Taking a mobile web approach to styling the page and optimizing the user experience, one can create an experience to rival that of native apps.

As you can see on the right, an enterprise software page presented in a mobile browser need not be a sanitized list of things. It can pop, provide vibrant colors, present a form factor for accessing with the fattest fingers and be indistinguishable from a native app.

Indeed, designing for a mobile experience is actually a great exercise for enterprise software vendors. It puts the focus on simplicity and the most commonly used functions. It’s also a chance to re-imagine the UX of the software. It wouldn’t surprise me if elements of the mobile-optimized HTML find their way back to the main web experience.

4. Too many mobile OS’s to account for

We all know that Apple’s iOS has pushed smart phone usage dramatically. And corporations are looking at iOS for both iPhones and iPads. Meanwhile, Android has made a strong run and is now the leading mobile OS. However, in the corporate world, RIM’s various Blackberry flavors continue to have a strong installed base. And for Microsoft’s Windows Phone 7, “developer momentum on Windows Phone 7 is already incredibly strong.” (Ars Technica)

Four distinct OS’s, each with their own versions. Now, enterprise software vendors, you ready to staff up to maintain your version of native apps for each?

37signals recently announced it was dropping native apps for mobile. Instead, they’re focusing on mobile web versions of their software. In that announcement, they noted the challenge of having to specialize for both iOS and Android.

Meanwhile, Trulia’s CEO noted the burden of maintaining multiple native apps for mobile:

“As a brand publisher, I’m loathe to create native apps,” he told me, “it just adds massive overhead.” Indeed, those developers need to learn specific skills to building native mobile apps, arguably having nothing to do with his core business. They have to learn the different programming code, simulators and tech capabilities of each platform, and of each version of the platform. By diverting so much money into this, he’s having to forgo investment in other core innovation.

A balkanized world of OS variants creates administrative, operational support and development costs. Not good for anybody.

While I’m sure there are enterprise software apps that can benefit from the native OS capabilities, such as integrated photos, for most enterprise software, mobile should be an HTML5 game.

I’m @bhc3 on Twitter.

Three Pluses, Three Minuses of Quora as a KM System

This question was posted on Quora, “In 10 words or less, what is Quora?” My answer:

Powerful application of crowdsourcing and social networking to knowledge management


Knowledge Management (aka “KM”) is a field that I don’t have personal experience in. It’s supposed to be the practices, processes and systems by which the valuable knowledge of workers is collected and made available to others. KM continues to be an important topic for enterprises these days, but it is also freighted with many failures and disappointments.

Without the benefit of a KM history, I wanted to look at Quora in the context of someone with an objective today: how do I make it easier for employees to find and share their knowledge?

In that context, I see three really good things about Quora, and three things that detract from its value.

The Pluses

Purpose-Built: A premise of Enterprise 2.0 is that tools need to be lightweight and flexible for multiple purposes. That’s what you get with microblogging, wikis, blogs, forums. The problem there is that the flexibility undermines their value for delivering on specific needs. One must wade through a lot of other stuff to get to what you want.

Quora is purpose-built. It’s not a place for sharing links you find interesting or talking about the American Idol selection process. It’s a place where you know there will be relevant questions, and often good answers. Which means they can focus on delivering to the purpose, not try to be all-things to all people. Important for KM.

Crowdsourcing: Very, very important. Quora leverages the principles of crowdsourcing to elicit knowledge. It’s not just a system for experts. Too often the focus is on getting “the experts” on the record, assuming most others have little to add. That is a shame.

The ability to follow topics allows people to track areas of either interest (to find answers) or expertise (to provide answers). As Professor Scott Page notes in his book, The Difference, everyone has a unique set of cognitive skills. To assume there are the “masters” and then there’s the “riff raff” is to lose a significant percentage of knowledge. Crowdsourcing ensures a broader opportunity to get at all relevant knowledge.

Social networking: We have people we like to follow. They may be friends, and we enjoy their takes on things. Or they may be people we admire, and who have demonstrated a capacity to provide valuable answers. The personal connection here, that we have an interest in a person as opposed to a topic, is valuable.

By letting me follow people, I am exposed to things that have a higher likelihood of interest to me. We can’t all be on Quora, or a KM site. But some portion of our networks will be, and seeing what they’ve been up to keeps me interested and contributes to a serendipity in acquiring knowledge.

It’s also encouraging to know I have a set of people who are receptive to my questions and my answers. Much better than a cold system of questions and answers only.

The Minuses

Discerning the wheat from the chaff: Quora gets noisy. For some, too noisy. That happens in an open platform. There will be some great answers to questions, but some pretty bad ones too.  In terms of KM, some argue for restricting participation to only the known experts:

Few are blessed with serious, specifically relevant knowledge or know-how. Any system which facilitates overly broad participation will inextricably bury any expert knowledge under a pile of low value chatter. I am persuaded that for valuable ideas & thoughts to produce innovation there need to be a highly afferent and efferent system capable of synthesizing powerful multidimensional analytical databases with the know-how of subject matter experts, the imagination of visionaries and the creative mind of innovators who do not fret from the challenge of thinking.

The community culture needs to have a strict sense of what’s valuable, what’s not. And up-vote and down-vote accordingly.

Lots of followers means lots of up-votes: This is the downside of social networking. Some people have HUGE numbers of connections. Which means they have a built-in audience for their answers above and beyond the topic followers. An army of followers can come in and cause an answer to move to first position based on that alone, regardless of answer quality.

A good solution here is to employ a form of reputation to weight those votes. Don’t let just the volume of votes determine the top answer, look at the reputation of those who are voting.

You could also weight the answers themselves according to reputation, although I’m a little wary of that. Makes it harder for new voices with quality contributions to get traction.

Incentives to participate: I’m busy. You’re busy. We’re all busy. Who has time to participate? This will always be an issue. With things like microblogging, there’s a core communication need they satisfy. So that more closely aligns with my day-in, day-out work. But answering some distant colleague’s question?

There are a lot of ways to address this. Getting participation early on from enthusiasts goes a long way in terms of demonstrating value (something Quora has done). Getting kudos for good answers is a huge motivator. Obviously, getting a good answer just once is critical to seeing the value. And Q&A seems like a perfect activity for applying game mechanics.

All in all, I really like the KM potential for Quora. It doesn’t need to be as heavily active as Twitter, but benefits from a broader participation than what is seen in Wikipedia. The minuses are challenges to overcome, but they are not insurmountable.


