Stats say Hot Hand is real | Klay Thompson’s amazing quarter

Want to see something amazing? Take two minutes to watch what Golden State Warriors guard Klay Thompson did in the third quarter of a January 2015 game:

Klay is a great shooter in general, but that performance was unreal. He hit 13 shots in a row, 9 of which were three-pointers. It’s what I characterize as a “Hot Hand”. If you’ve played basketball, you likely understand the Hot Hand. Every shot feels like it’s going in, and it does. You can “feel” the definitive arc to the basket. There’s a hyper confidence in every shot. I like the way Jordan Ellenberg describes it on Deadspin:

The hot hand, after all, isn’t a general tendency for hits to follow hits and misses to follow misses. It’s an evanescent thing, a brief possession by a superior basketball being that inhabits a player’s body for a short glorious interval on the court, giving no warning of its arrival or departure.

The Hot Hand arrives when it damn well chooses, and leaves the same way. Clearly, it had an extended stay with Klay Thompson that night.

But, there is research that dismisses the notion of the Hot Hand. In 1985 Cornell and Stanford researchers – Thomas Gilovich, Robert Vallone, Amos Tversky – conducted an analysis (pdf) of their definition of the Hot Hand. Here’s how they describe a Hot Hand:

First, these terms imply that the probability of a hit should be greater following a hit than following a miss. Second, they imply that the number of streaks of successive hits or misses should exceed the number produced by a chance process with a constant hit rate [i.e. field goal percentage].

They also polled fans, and got these responses:

The fans were asked to consider a hypothetical player who shoots 50% from the field. Their average estimate of his field goal percentage was 61% “after having just made a shot,” and 42% “after having just missed a shot.” Moreover, the former estimate was greater than or equal to the latter for every respondent.

Their research, some of which is replicated below for analyzing Klay Thompson, showed that the belief in the Hot Hand is misguided. The streaks of made shots by players are all part of the normal distribution of shots one would expect statistically.

Oh, but that Klay Thompson quarter…! Let’s look at Klay Thompson’s performance to see if he really did have a Hot Hand.


Klay’s shooting history

To analyze Klay’s performance, we need data. Fortunately, the NBA provides handy stats on its players. I was able to get two seasons (2013-14, 2014-15) of shooting data for Thompson. The data includes the game, shot sequence, shot distance and whether the shot was made or not.

Klay is a 45.4% shooter from 2013-2015, taking 2,514 shots in 157 regular season games. I wanted to get a sense of how he shoots from different ranges, so I categorized his shots in four buckets:

  • Short: < 9 ft
  • Midshort: 9 – 16 ft
  • Midlong: 16 – 23.7 ft
  • Long: >= 23.7 ft

With the ranges, we gain some insights about how Klay operates:

Klay Thompson shot stats by range

You can see in the left chart that he prefers longer shots. However, his shooting percentage is significantly better for close-in shots. Because of the marked difference between his short range shooting (55.7%) and the other distances, I’m going to remove shots taken from less than 9 feet away. Making close-range shots (e.g. layups) is not an example of a Hot Hand; that designation is reserved for longer shots.

Geeking out

You can see the data files on Github: 2013-14, 2014-15. They were combined to do the analysis. This R function categorizes the shots according to their distance.
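The linked R function isn’t reproduced here, but the bucketing logic is simple. Here is the same logic sketched in Python (the boundary handling at exactly 9, 16 and 23.7 feet is my assumption, not taken from the original code):

```python
def categorize_shot(distance_ft):
    """Bucket a shot by distance, per the four ranges above.

    Boundary values are assigned to the longer bucket (an assumption).
    """
    if distance_ft < 9:
        return "Short"
    elif distance_ft < 16:
        return "Midshort"
    elif distance_ft < 23.7:
        return "Midlong"
    else:
        return "Long"

# Example: a 24-foot three-pointer falls in the Long bucket.
print(categorize_shot(24))   # Long
```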


Comparing Klay against the researchers’ definition of Hot Hand

Let’s look at a couple aspects of Klay’s performance vis-a-vis the analytical framework of the Cornell and Stanford researchers.

First, they looked at what happens after a player makes a shot. Does he shoot a higher percentage after a made shot? In a modified version of this test, I looked at the set of longer shots that were taken consecutively. If there is a tendency to shoot a higher percentage after a made shot, there must be a corresponding lower shooting percentage after missed shots.

This table shows the longer shot FG% in aggregate for Klay Thompson after (i) made longer shots; (ii) missed longer shots; and (iii) when he hasn’t taken a prior longer shot. That third (iii) scenario includes both the first shot of a game, and cases where he shot a short ( < 9 ft) basket prior to taking a longer shot.

Scenario                                                                 FG%
When Klay makes a shot, the FG% on his next shot is…                   43.2%
When Klay misses a shot, the FG% on his next shot is…                  42.5%
When Klay hasn’t taken a prior longer shot, the FG% on his shot is…    41.3%
His overall FG% for longer shots (>= 9 feet) is…                       42.4%

Looking at the table, what do you notice? His highest field goal percentage occurs after a made shot! Does that validate the belief that shooters “get hot” after making a shot? Unfortunately, no. Statistical analysis dismisses that possibility, which, if we’re honest with ourselves, is the right call: there’s not enough of a difference in FG% after a made shot. This is consistent with the Cornell and Stanford researchers’ findings.

Next, there’s the question of whether Klay’s individual shots show any kind of pattern that’s not random. As an example, look at the graph of makes and misses for his first 100 longer (>= 9 ft) shots of the 2014-15 season:

Klay Thompson - first 100 longer shots 2014-15 season

You can see there are a number of runs in there. For instance, shots 12 – 19 are all misses, followed by shots 20 – 23, which are all makes. Maybe Klay’s shooting does exhibit a pattern misses-follow-misses, and makes-follow-makes? Turns out…no. Statistical analysis shows that the patterns of makes and misses are entirely random. This is consistent with the findings of the Cornell and Stanford researchers.

Finally, we look at an analysis that the researchers did on some NBA players back in 1985. They broke up the players’ shooting histories into sets of four shots. They then looked at each set of four shots. They wanted to see if there were higher frequencies of 3 or 4 made shots per set, as well as higher frequencies of 0 and 1 missed shot per set, than one would expect to happen normally. If so, that would indicate “streakiness” (i.e. evidence of Hot Hand).

I’ve run that analysis for Klay’s two seasons below:

           0 made   1 made   2 made   3 made   4 made
Actual       55      152      184       77       18
Expected     54      158      174       85       16

As you see, Klay’s actual “made” frequencies are close to what would have been expected. Statistical analysis confirms: there is no unexpected streakiness in his shooting. This is consistent with the findings of the Cornell and Stanford researchers.
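The “Expected” row can be reproduced from the binomial distribution, using Klay’s longer-shot FG% of 42.365% (given in the Geeking out notes below) and the 486 sets of four. A quick Python check:

```python
from math import comb

p = 0.42365          # Klay's FG% on longer shots
n_sets = 486         # sets of four shots over two seasons

# Binomial probability of k makes in a set of 4, scaled to 486 sets
expected = [round(n_sets * comb(4, k) * p**k * (1 - p)**(4 - k))
            for k in range(5)]
print(expected)      # [54, 158, 174, 85, 16]
```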

At this point, maybe the researchers were right! Klay hasn’t shown any evidence of a Hot Hand.

Geeking out

Prior shot analysis: For the prior shot analysis, I looked at only longer shots (i.e. >= 9 ft). But I also wanted them to be actual prior-next pairs of shots. That means I didn’t want to analyze the impact of a prior short shot on a subsequent longer shot. To keep the data set focused only on consecutive longer shots, I wrote this R program.

Both the prior shot and the next shot were coded as 0 (miss) or 1 (make). These values were aggregated into a 2×2 table:

                  next miss (0)   next make (1)
prior miss (0)         436             366
prior make (1)         322             278

A Fisher’s exact test was then run on the table. The p-value is 0.8285, so the null hypothesis that next misses and makes are uncorrelated with prior misses and makes is not rejected.
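The original test was run in R. The same two-sided Fisher’s exact test can be reproduced in pure Python by summing hypergeometric probabilities over all tables at least as extreme as the observed one (a sketch, not the author’s original code):

```python
from math import comb

def fisher_exact_two_sided(a, b, c, d):
    """Two-sided Fisher's exact test on the 2x2 table [[a, b], [c, d]]."""
    r1, r2, c1, n = a + b, c + d, a + c, a + b + c + d
    denom = comb(n, c1)

    def pmf(k):
        # Hypergeometric probability of k in the top-left cell
        return comb(r1, k) * comb(r2, c1 - k) / denom

    p_obs = pmf(a)
    lo, hi = max(0, c1 - r2), min(r1, c1)
    # Sum probabilities of all tables no more likely than the observed one
    return sum(pmf(k) for k in range(lo, hi + 1)
               if pmf(k) <= p_obs * (1 + 1e-9))

p = fisher_exact_two_sided(436, 366, 322, 278)
print(round(p, 4))   # two-sided p-value; the post reports 0.8285
```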

Runs of makes and misses: The entire sequence of 1,945 longer shots over two seasons was subjected to a test of randomness. I held off breaking the sequence into individual games, believing that such extra work would not yield materially different results. The specific test of randomness was the Bartels Ratio Test (background here) in R. The null hypothesis of randomness is not rejected, with a p-value of 0.9935.
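The Bartels ratio test comes from R. For readers without R, a closely related check, the Wald–Wolfowitz runs test, can be sketched in a few lines of Python (this is a substitute illustration, not the test the author ran):

```python
from math import sqrt

def runs_test_z(seq):
    """Wald-Wolfowitz runs test: z-score of the observed number of runs
    in a binary sequence versus the count expected under randomness."""
    n1 = sum(seq)            # makes
    n2 = len(seq) - n1       # misses
    runs = 1 + sum(1 for a, b in zip(seq, seq[1:]) if a != b)
    mu = 2 * n1 * n2 / (n1 + n2) + 1
    var = (2 * n1 * n2 * (2 * n1 * n2 - n1 - n2)) / ((n1 + n2) ** 2 * (n1 + n2 - 1))
    return (runs - mu) / sqrt(var)

# A perfectly alternating sequence has far more runs than chance predicts:
z = runs_test_z([1, 0] * 20)
print(round(z, 2))   # ~6.09: decidedly non-random
```

A z-score near 0 means the sequence looks random; large positive or negative values indicate too many or too few runs, respectively.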

Sets of four shots: For the actual number of made shots per set, first I broke the two seasons of Klay’s shots into 486 groups of four, via this R program. I then aggregated these into a table. For the expected number of made shots for 486 simulated groups of four, I created a binomial tree. The tree took Klay’s longer shot FG% for its probabilities: make @ 42.365%, miss @ 57.635%.

The actual numbers of made shots per set (i.e. 0 – 4) were compared to the expected numbers with simple correlation. The correlation coefficient is .99, with a p-value of 0.0005. The null hypothesis that the numbers aren’t covariant is rejected, which in this case means Klay’s sets of four didn’t show any unexpected variances (i.e. concentrations of makes and misses).


Evidence for Klay’s Hot Hand

OK, I’ve managed to show that Klay Thompson’s performance matches the results found by researchers Gilovich, Vallone and Tversky. But we saw what he did in that third quarter. Excluding the two short shots from that third quarter, Klay made 11 shots in a row. He was shooting out of his mind. What makes that evidence of a Hot Hand?

We can start by assessing the probability of hitting 11 longer shots in a row. As an example of how to do this, consider the coin flip. 50/50 odds of heads or tails. What are the odds of getting 3 heads in a row? You multiply the .5 probability three times: 12.5% probability of three heads in a row.

Klay’s FG% for longer shots is 42.365%. Let’s see what the probability of 11 made shots in a row is:

0.42365 ^ 11 = 0.00789%.

Whoa! Talk about small odds. Over the course of Klay’s two seasons, he had roughly 177 sets of 11 shots. The expected number of times he’d make all 11 shots in a set is: 0.00789% of 177 = 0.014 times. Or looked at another way, we’d expect 1 such streak for every 12,674 sets of 11 shots, or 140,404 shots! Klay would need to play 144 years to shoot that many times.
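The arithmetic above is easy to check. Using the precise longer-shot FG% of 42.365% from the Geeking out notes earlier:

```python
p_make = 0.42365                     # FG% on longer (>= 9 ft) shots
p_streak = p_make ** 11              # probability of 11 straight makes
print(f"{p_streak:.5%}")             # ~0.00789%

sets_of_11 = 1945 / 11               # ~177 sets over two seasons
print(round(p_streak * sets_of_11, 3))   # expected 11-for-11 sets: ~0.014
```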

Next, let’s look at the streaks Klay actually had over the two seasons:

Klay Thompson - shooting streaks 2013-15

The most common streak (292 of them) is 1 basket, which may violate your sense of a “streak”. But it’s important to measure all the intervals of streaks, including those with only 1 basket.

At the other end of the spectrum, look at that 11-shot streak. It really stands out for a couple reasons:

  • There’s only one of them. It’s very rare to hit 11 longer shots in a row.
  • There’s a yawning gap between it and the next longest streak, 7 shots. There was only one seven-shot streak as well.

When Klay starts a new streak (i.e. hits an initial longer basket), his average streak is 1.68 made shots in a row. I ran a separate statistical analysis from the 11 “coin flips” example above. Using his average streak of 1.68 made shots, the odds that he would hit 11 in a row are 0.0000226%. This ridiculously low probability complements the “coin flips” probability of 0.00789% described above.

Using either number, the point is this:

Klay’s streak of 11 made shots is statistically not supposed to happen. The fact that he had an 11-shot streak cannot be explained by normal chance. Something else was at work. This is the Hot Hand.

It is true that Hot Hands are rarer than fans think. Much of Klay’s shooting history validates the findings of the Cornell and Stanford researchers. But it is undeniable that Klay did have a Hot Hand in the third quarter of the January 2015 game.

I’m @bhc3 on Twitter.

Geeking out

To determine the number of streaks, I wrote an R program. The program looks at streaks only within games. There’s no credit for streaks that span from the end of one game to the start of the next game.

On a regular X-Y graph, the distribution of streaks is strongly right-skewed. Rather than use normal distribution analysis, I assumed a Poisson distribution. Poisson is most commonly used for analyzing events-per-unit-of-time; the unit of time is the interval. However, Poisson can be applied beyond time-based intervals. I treated each streak as its own interval. I then used the average number of shots per streak – 1.68 – as my lambda value. To get the probability of 11 shots in a streak, I ran this function in R: ppois(11, lambda = 1.68, lower = FALSE).
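For non-R readers, the same upper-tail Poisson probability can be computed with nothing but the standard library (a Python equivalent of the ppois call above):

```python
from math import exp, factorial

lam = 1.68   # average makes per streak, used as the Poisson lambda

# ppois(11, lambda = 1.68, lower = FALSE) is P(X > 11), i.e. the upper tail
p_tail = 1 - sum(exp(-lam) * lam**k / factorial(k) for k in range(12))
print(f"{p_tail:.7%}")   # ~0.0000226%
```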

Avoiding innovation errors through jobs-to-be-done analysis

The lean startup movement was developed to address an issue that bedeviled many entrepreneurs: how to introduce something new without blowing all your capital and time on the wrong offering. The premise is that someone has a vision for a new thing, and needs to iteratively test that vision (“fail fast”) to find product-market fit. It’s been a success as an innovation theory, and has penetrated the corporate world as well.

In a recent post, Mike Boysen takes issue with the fail fast approach. He argues that better understanding of customers’ jobs-to-be-done (i.e. what someone is trying to get done, regardless of which solutions are used) at the front-end is superior to guessing continuously about whether something will be adopted by the market. To quote:

How many hypotheses does it take until you get it right? Is there any guarantee that you started in the right Universe? Can you quantify the value of your idea?

How many times does it take to win the lottery?

Mike advocates for organizations to invest more time at the front end understanding their customers’ jobs-to-be-done rather than iteratively guessing. I agree with him in principle. However, in my work with enterprises, I know that such an approach is a long way off as a standard course of action. There’s the Ideal vs. the Reality:

JTBD analysis - innovation ideal vs reality

The top process – Ideal – shows the right point to understand your target market’s jobs-to-be-done. It’s similar to what Strategyn’s Tony Ulwick outlines for outcome-driven innovation. In the Ideal flow, proper analysis has uncovered opportunities for underserved jobs-to-be-done. You then ideate ways to address the underserved outcomes. Finally, a develop-test-learn approach is valuable for identifying an optimal way to deliver the product or service.

However, here’s the Reality: most companies aren’t doing that. They don’t invest time in ongoing research to understand the jobs-to-be-done. Instead, ideas are generated in multiple ways. The bottom flow marked Reality highlights a process with more structure than most organizations actually have. Whether an organization follows all processes or not, the key is this: ideas are being generated continuously from a number of sources divorced from deep knowledge of jobs-to-be-done.

Inside-out analysis

In my experience working with large organizations, I’ve noticed that ideas tend to go through what I call “inside-out” analysis. Ideas are evaluated first on criteria that reflect the company’s own internal concerns. What’s important to us inside these four walls? Examples of such criteria:

  • Fits current plans?
  • Feasible with current assets?
  • Addresses key company goal?
  • Financials pencil out?
  • Leverages core competencies?

For operational, low-level ideas, inside-out analysis can work. Most of the decision parameters are knowable and the impact of a poor decision can be reversed. But as the scope of the idea increases, it’s insufficient to rely on inside-out analysis.

False positives, false negatives

Starting with the organization’s own needs first leads to two types of errors:

  • False positive: the idea matches the internal needs of the organization, with flying colors. That creates a too-quick mindset of ‘yes’ without understanding the customer perspective. This opens the door for bad ideas to be greenlighted.
  • False negative: the idea falls short on the internal criteria, or even more likely, on someone’s personal agenda. It never gets a fair hearing in terms of whether the market would value it. The idea is rejected prematurely.

In both cases, the lack of perspective about the idea’s intended beneficiaries leads to innovation errors. False positives are part of a generally rosy view about innovation. It’s good to try things out; it’s how we find our way forward. But it isn’t necessarily an objective of companies to spend money in such a pursuit. Mitigating the risk of investing limited resources in the wrong ideas is important.

In the realm of corporate innovation, false negatives are the bigger sin. They are the missed opportunities. The cases where someone actually had a bead on the future, but the idea was snuffed out by entrenched executives, sclerotic processes or heavy-handed evaluations. Kodak, a legendary company sunk by the digital revolution, actually invented the digital camera in the 1970s. As the inventor, Steven Sasson, related to the New York Times:

“My prototype was big as a toaster, but the technical people loved it,” Mr. Sasson said. “But it was filmless photography, so management’s reaction was, ‘that’s cute — but don’t tell anyone about it.’ ”

It’s debatable whether the world was ready for digital photography at the time, as there was not yet much in the way of supporting infrastructure. But Kodak’s inside-out analysis focused on the digital camera’s effect on its core film business. And thus a promising idea was killed.

Start with outside-in analysis

Thus organizations find themselves with a gap in the innovation process. In the ideal world, rigor is brought to understanding the jobs-to-be-done opportunities at the front-end. In reality, much of innovation is generated without analysis of customers’ jobs beforehand. People will always continue to propose and to try out ideas on their own. Unfortunately, the easiest, most available basis of understanding the idea’s potential starts with an inside-out analysis. The gap falls between those few companies that invest in understanding customers’ jobs-to-be-done, and the majority who go right to inside-out analysis.

What’s needed is a way to bring the customers’ perspective into the process much earlier. Get that outside-in look quickly.

Three jobs-to-be-done tests

In my work with large organizations, I have been advising a switch in the process of evaluating ideas. The initial assessment of an idea should be outside-in focused. Specifically, there are three tests that any idea beyond the internal incremental level should pass:

jobs-to-be-done three tests

Each of the tests examines a critical part of the decision chain for customers.

Targets real job of enough people

The first test is actually two tests:

  1. Do people actually have the job-to-be-done that the idea intends to address?
  2. Are there enough of these people?

This is the simplest, most basic test. Most ideas should pass this, but not all. As written here previously, the Color app was developed to allow anyone – strangers, friends – within a short range to share pictures taken at a location. While it was a novel application of the Social Local Mobile (SoLoMo) trend, Color didn’t actually address a job-to-be-done of enough people.

A lot better than current solution

Assuming a real job-to-be-done, consideration must next be given to the incumbent solution used by the target customers. On what points does the proposed idea better satisfy the job-to-be-done than what is being done today? This should be a clear analysis. The improvement doesn’t have to be purely functional. It may better satisfy emotional needs. The key is that there is a clear understanding of how the proposed idea is better.

And not just a little better. It needs to be materially better to overcome people’s natural conservatism. Nobel Laureate Daniel Kahneman discusses two factors that drive this conservatism in his book, Thinking, Fast and Slow:

  • Endowment effect: We overvalue something we have currently over something we could get. Think of that old saying, “a bird in the hand is worth two in the bush”.
  • Uncertainty effect: Our bias shifts toward loss aversion when we consider how certain the touted benefits of something new are. The chance that something won’t live up to its potential looms large in our psyche, and our aversion to loss causes us to overweight that chance.

In innovation, the rule-of-thumb that something needs to be ten times better than what it would replace reflects our inherent conservatism. I’ve argued that the problem with bitcoin is that it fails to substantially improve on our current solution for payments: government-issued currency.

Value exceeds cost to beneficiary

The final test is the most challenging. It requires you to walk in the shoes of your intended beneficiaries (e.g. customers). It’s an analysis of marginal benefits and costs:

Value of improvement over current solution > Incremental costs of adopting your new idea

Not the costs to the company of providing the idea, but those that are borne by the customer. These costs include money, learning curves, connections to other solutions, loss of existing data, etc. It’s a holistic look at tangible and intangible costs. Which, admittedly, is the hardest analysis to do.

An example where the improvement didn’t cover the incremental costs is a tire Michelin introduced in the 1990s. The tire could run for 125 miles after being punctured; a sensor in the car would let the driver know about the issue. But for drivers, a daunting issue emerged: how do you get those tires fixed or replaced? They required special equipment that garages didn’t have and weren’t going to purchase. The benefits of these superior tires did not outweigh the cost of not being able to get them fixed or replaced.

Recognize your points of uncertainty

While I present the three jobs-to-be-done tests as a sequential flow of yes/no decisions, in reality they are better utilized as measures of uncertainty. Think of them as gauges:

JTBD tests - certainty meters

Treat innovation as a learning activity. Develop an understanding for what’s needed to get to ‘yes’ for each of the tests. This approach is consistent with the lean startup philosophy. It provides guidance to the development of a promising idea.

Mike Boysen makes the fundamental point about understanding customers’ jobs-to-be-done to drive innovation. Use these three tests for those times when you cannot invest the time/resources at the front end to understand the opportunities.

I’m @bhc3 on Twitter.

Will customers adopt your innovation? Hope, fear and jobs-to-be-done

When will a customer decide your innovative product or service is worth adopting? It’s a question that marketers, strategists and others spend plenty of time thinking about. The factors are myriad and diverse. In this post, let’s examine two primary elements that influence both whether an innovation will be adopted, and when that would happen:

  1. Decision weights assigned to probabilities
  2. Probability of job-to-be-done improvement

A quick primer on both factors follows. These factors are then mapped to the innovation adoption curve. Finally, they are used to analyze the adoption of smartwatches and DVRs.

Decision weights assigned to probabilities

Let’s start with decision weights, as that’s probably new for many of us. In his excellent book, Thinking, Fast and Slow, Nobel laureate Daniel Kahneman describes research he and a colleague did that examined the way people think about probabilities. Specifically, given different probabilities for a gain, how do people weight those probabilities?

Why?

Classic economics indicates that if an outcome has a 25% probability, then 25% is the weight a rational person should assign to that outcome. If you’ve taken economics or statistics, you may recall being taught something along these lines. However, Kahneman and his colleague had anecdotally seen evidence that people didn’t act that way. So they conducted field experiments to determine how people actually incorporated probabilities into their decision making. The table below summarizes their findings:

Decision weights vs probability

The left side of the table shows that people assign greater weight to low probabilities than they should. Kahneman calls this the possibility effect. The mere fact that something could potentially happen has a disproportionate weight in decision-making. Maybe we should call this the “hope multiplier”. It’s strongest at the low end, with the effect eroding as probabilities increase. When the probability of a given outcome increases to 50% and beyond, we see the emergence of the uncertainty effect. In this case, the fact that something might not happen starts to loom larger in our psyche. This is because we are loss averse. We prefer avoiding losses to acquiring gains.

Because of loss aversion, an outcome that has an 80% probability isn’t weighted that way by people. We look at that 20% possibility that something will not happen (essentially a “loss”), and fear of that looms large. We thus discount the 80% probability to a too-low decision weight of 60.1.
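The full set of decision weights Kahneman reports in Thinking, Fast and Slow can be put in a small lookup table. The values below are from his published table (population averages); the table itself quantifies both effects described above:

```python
# Probability (%) -> decision weight, from Kahneman's published table
decision_weights = {
    0: 0, 1: 5.5, 2: 8.1, 5: 13.2, 10: 18.6, 20: 26.1,
    50: 42.1, 80: 60.1, 90: 71.2, 95: 79.3, 98: 87.1, 99: 91.2, 100: 100,
}

# Low probabilities are overweighted (the possibility effect)...
print(decision_weights[1])    # 5.5, far above 1
# ...while high probabilities are underweighted (the uncertainty effect)
print(decision_weights[80])   # 60.1, well below 80
```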

Probability of job-to-be-done improvement

A job-to-be-done is something we want to accomplish. It consists of individual tasks and our expectation for each of those tasks. You rate the fulfillment of the expectations to determine how satisfied you are with a given job-to-be-done. This assessment is a cornerstone of the “job-to-be-done improvement” function:

Job-to-be-done improvement function

Dissatisfaction: How far away from customers’ expectations is the incumbent way that they fulfill a job-to-be-done? The further away, the greater the dissatisfaction. This analysis is heavily dependent on the relative importance of the individual job tasks. More important tasks have greater influence on the overall level of satisfaction.

Solution improvement: How does the proposed innovation (product, service) address the entirety of the existing job? It will be replacing at least some, if not all, of the incumbent solution. What are the better ways it fulfills the different job tasks?

Cost: How much does the innovation cost? There’s the out-of-pocket expense. But there are other costs as well. Learning costs. Things you cannot do with the new solution that you currently can. The costs will be balanced against the increased satisfaction the new solution delivers.

These three elements are the basis of determining the fit with a given job-to-be-done. Because of their complexities, determining precise measures for each is challenging. But it is reasonable to assert a probability. In this case, the probability that the proposed solution will provide a superior experience to the incumbent solution.

Mapping decision weights across the innovation adoption curve

The decision weights described earlier are averages across a population. There is variance among individuals. The decision weights for each probability of a job-to-be-done gain will differ by adoption segment, as shown below:

Decision weights across innovation adoption curve

The green and red bars along the bottom of each segment indicate the different weights assigned to the same probabilities for each segment. For Innovators and Early Adopters, any possibility of an improvement in job-to-be-done satisfaction is overweighted significantly. At the right end, Laggards are hard-pressed to assign sufficient decision weights to anything but an absolutely certain probability of increased satisfaction.

Studies have shown that our preferences for risk-aversion and risk-seeking are at least somewhat genetically driven. My own experience also says that there can be variance in when you’re risk averse or not. It depends on the arena and your own experience in it. I believe each of us has a baseline of risk tolerance, and we vary from that baseline depending on circumstances.

Two cases in point: smartwatches and DVRs

The two factors – decision weights and probability of improved job-to-be-done satisfaction – work in tandem to determine how far the reach of a new innovation will go. Generally,

  • If the probability of job-to-be-done improvement is low, you’re playing primarily to the eternal optimists, Innovators and Early Adopters.
  • If the probability of improvement is high, reach will be farther but steps are needed to get later segments aware of the benefits, and to even alter their decision weights.

Let’s look at two innovations in the context of these factors.

Smartwatches

Smartwatches have a cool factor. If you think of a long-term trend of progressively smaller computing devices – mainframes, minicomputers, desktops, laptops, mobile devices – then the emergence of smartwatches is the logical next wave. Finally, it’s Dick Tracy time.

The challenge for the current generation of smartwatches is distinguishing themselves from people’s incumbent solution: smartphones. Not regular wristwatches. But smartphones. How much do smartwatches improve the jobs-to-be-done currently fulfilled by smartphones?

Some key jobs-to-be-done by smartphones today:

  • Email
  • Texting
  • Calls
  • Social apps (Facebook, Twitter, etc.)
  • Navigation
  • Games
  • Many, many more

When you consider current smartphone functionality, what job tasks are under-satisfied? In a Twitter discussion about smartwatches, the most compelling proposition was that the watch makes it easier to see updates as soon as they happen. Eliminate the pain of taking your phone out of your pocket or purse. Better satisfaction of the task of knowing when, who and what for emails, texts, social updates, etc.

But improvement in this task comes at a cost. David Breger wrote that he had to stop wearing his smartwatch. Why? The updates pulled his eyes to his watch. Constantly. To the point where his conversational companions noticed, affecting their interactions. What had been an improvement came with its own cost. There are, of course, those people who bury their faces in their phones wherever they are. The smartwatch is a win for them.

If I were to ballpark the probability that a smartwatch will deliver improvement in its targeted jobs-to-be-done, I’d say it’s 20%. Still, that’s good enough for the Innovators segment. I imagine their decision weights look something like this:

Decision weights - Innovators

The mere possibility of improvement drives these early tryers-of-new-things. It explains who was behind Pebble’s successful Kickstarter campaign. But the low probability of improving the targeted jobs-to-be-done dooms the smartwatch, as currently conceived, to the left side of the adoption curve.

DVRs

Digital video recorders make television viewing easier. Much easier. Back when TiVo was the primary game in town, early adopters passionately described how incredible the DVR was. It was life-changing. I recall hearing the praise back then, and I admit I rolled my eyes at these loons.

Not so these days.

DVRs have become more commonplace. With good reason. They offer a number of features that improve various aspects of the television viewing job-to-be-done:

  • Pause a live program
  • Rewind to watch something again (your own instant replay for sports)
  • Set it and forget it scheduling
  • Easy playback of recorded shows
  • Easy recording without needing to handle separate media (VCR tape, DVD)

But there are costs. If you’ve got a big investment in VCR tapes or DVDs, you want to play those. It does cost money to purchase a DVR plan. The storage of the DVR has a ceiling. You have to learn how to set up and work with a DVR. It becomes part of the room decor. What happens if the storage drive crashes?

My estimate is that the DVR has an 80% probability of being better than incumbent solutions. Indeed, this has been recognized in the market. A recent survey estimates U.S. household adoption of DVRs at 44%. Basically, knocking on the door of the Late Majority. I imagine their decision weights look like this:

Decision weights - Late Majority

On the probability side of the ledger, the Late Majority need to see the potential of DVRs for themselves. Typically this happens through exposure to the innovation at their Early Majority friends’ homes. They become aware of how much an innovation can improve their satisfaction.

On the decision weight side, vendors must do the work of addressing the uncertainty that comes with the innovation. This means understanding the forces – allegiance to the incumbent solution, anxiety about the proposed solution – that must be overcome.
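The probability-times-decision-weight framing can be sketched as a toy calculation. Everything in it is an illustrative assumption: the weighting functions merely mimic the pattern of overweighting possibility (Innovators) versus discounting anything short of near-certainty (Late Majority), and the 10% and 80% figures echo the smartwatch and DVR estimates above.

```python
# Toy model: subjective appeal = decision_weight(p) * size of improvement.
# The weighting functions are illustrative assumptions, not fitted data.

def innovator_weight(p):
    # Innovators inflate small chances: the mere possibility looms large.
    return p ** 0.5

def late_majority_weight(p):
    # The Late Majority discounts anything short of near-certainty.
    return p ** 2

def subjective_appeal(p_improvement, weight_fn, improvement=1.0):
    return weight_fn(p_improvement) * improvement

# Smartwatch-like innovation: low probability of improving the job-to-be-done.
print(round(subjective_appeal(0.10, innovator_weight), 2))      # 0.32
print(round(subjective_appeal(0.10, late_majority_weight), 2))  # 0.01

# DVR-like innovation: high probability of improvement.
print(round(subjective_appeal(0.80, innovator_weight), 2))      # 0.89
print(round(subjective_appeal(0.80, late_majority_weight), 2))  # 0.64
```

The same one-unit improvement feels worth trying at a 10% chance to an Innovator, and negligible to the Late Majority until the probability is high, which matches the smartwatch and DVR stories above.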

Two influential factors

As you consider your product or service innovation, pay attention to these two factors. The first – jobs-to-be-done – is central to getting adoption of anything. Without the proper spade work there, you will be flying blind into the market. The second factor is our human psyche, and how we harbor hope (possibility) and fear (uncertainty). Because people are geared differently, you’ll need to construct strategies (communication channels, messaging, product enhancements) that pull people toward your idea, overcoming their natural risk aversion.

I’m @bhc3 on Twitter, and I’m a Senior Consultant with HYPE Innovation.

Why crowdsourcing works

Crowdsourcing is a method of solving problems through the distributed contributions of multiple people. It’s used to address the tough problems that come up every day: ideas for new opportunities, ways to solve problems, uncovering an existing approach that addresses your need.

Time and again, crowdsourcing has been used successfully to solve challenges. But…why does it work? What’s the magic? What gives it an advantage over talking with your pals at work, or doing some brainstorming on your own? In a word: diversity. Cognitive diversity. Specifically these two principles:

  • Diverse inputs drive superior solutions
  • Cognitive diversity requires spanning gaps in social networks

These two principles work in tandem to deliver results.

Diverse inputs drive superior solutions

When trying to solve a challenge, what is the probability that any one person will have the best solution for it? It’s a simple mathematical reality: the odds of any single person providing the top answer are low.

How do we get around this? Partly by more participants; increased shots on goal. But even more important is diversity of thinking. People contributing based on their diverse cognitive toolkits:

Cognitive toolkit

As described by University of Michigan Professor Scott Page in The Difference, our cognitive toolkits consist of: different knowledge, perspectives and heuristics (problem-solving methods). Tapping into people’s cognitive toolkits brings fresh perspectives and novel approaches to solving a challenge. Indeed, a research study found that the probability of solving tough scientific challenges is three times higher if a person’s field of expertise is seven degrees outside the domain of the problem.
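The shots-on-goal arithmetic can be simulated directly. The sketch below is a deliberately crude model of my own (uniform random idea quality, independent solvers), not Professor Page’s: it just shows how quickly a crowd’s best draw outpaces a lone solver’s.

```python
import random

def best_of_crowd(n_solvers, rng):
    """Each solver's best idea is a random quality draw; the crowd keeps the max."""
    return max(rng.random() for _ in range(n_solvers))

rng = random.Random(42)
trials = 10_000

# How often does a crowd of 20 independent solvers beat a single solver?
crowd_wins = sum(
    best_of_crowd(20, rng) > rng.random() for _ in range(trials)
)
print(crowd_wins / trials)  # close to 20/21, roughly 0.95
```

With independent draws, a crowd of n beats a single solver with probability n/(n+1). The further payoff from cognitive diversity, making the draws less correlated with one another, is what Page’s models layer on top.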

In another study, researchers analyzed the results of an online protein-folding game, Foldit. Proteins fold themselves, but no one fully understands how they do so, not even experts in the field of biochemistry. So the online game lets users simulate folding, with an eye toward better understanding how proteins do it. As reported by Andrew McAfee, the top Foldit players were better than both computers and experts in the field at working out folding sequences. The surprising finding? None had taken chemistry beyond a high school course. It turns out spatial reasoning skills matter more for this problem than deep domain knowledge of proteins.

Those two examples provide real-world proof for the models and solution-seeking benefits of cognitive diversity described by Professor Page.

Problem solving can be thought of as building a solutions landscape, planted with different ideas. Each person achieves their local optimum, submitting the best idea they can for a given challenge based on their cognitive assets.

But here’s the rub: any one person’s idea is unlikely to be the best one that could be uncovered. This makes sense as both a probabilistic outcome, and based on our own experiences. However in aggregate, some ideas will stand out clearly from the rest. Cognitive diversity is the fertile ground where these best ideas will sprout.

In addition to being a source of novel ideas, cognitive diversity is incredibly valuable as feedback on others’ ideas. Ideas are improved as people contribute their distinct points of view. The initial idea is the seedling, and feedback provides the nutrients that allow it to grow.

Cognitive diversity requires spanning gaps in social networks

Cognitive diversity clearly has a significant positive effect on problem-solving. Generally, when something has proven value to outcomes, companies adopt it as a key operating principle. Yet getting this diversity has not proven as easy or common as one might expect.

Why?

Because it’s dependent on human behavior. Left to our own devices, we tend to turn to our close connections for advice and feedback. These strong ties are the core of our day-in, day-out interactions.

But this natural human tendency to turn to our strong ties is why companies are challenged to leverage their cognitive diversity. University of Chicago Professor Ron Burt describes the issue as one of structural holes between nodes in a corporate social network in his paper, Structural Holes and Good Ideas (pdf). A structural hole is a gap between different groups in the organization. Information does not flow across structural holes.

In and of themselves, structural holes are not the problem. Rather, the issue is that when people operate primarily within their own node, their information sources are redundant. Over time, the people in the node know the same facts, develop the same assumptions and optimize to work together in harmony. Sort of like a silo of social ties.
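Burt’s idea can be illustrated with a toy network. The graph and the scoring below are my own simplified sketch, not Burt’s actual measure (he uses a “constraint” index): two silos of redundant ties, plus one broker whose contacts don’t know each other.

```python
from itertools import combinations

# A toy network (the graph is an illustrative assumption, not Burt's data):
# two tight groups, each a silo of redundant ties, joined by one broker "g".
edges = {
    ("a", "b"), ("a", "c"), ("b", "c"),   # group 1
    ("d", "e"), ("d", "f"), ("e", "f"),   # group 2
    ("g", "a"), ("g", "d"),               # the broker spans the hole
}

def neighbors(node):
    return {v for e in edges for v in e if node in e} - {node}

def open_ratio(node):
    """Fraction of a node's contact pairs that are NOT connected to each
    other: a crude proxy for the structural holes the node spans."""
    pairs = list(combinations(sorted(neighbors(node)), 2))
    if not pairs:
        return 0.0
    open_pairs = sum(1 for u, v in pairs
                     if (u, v) not in edges and (v, u) not in edges)
    return open_pairs / len(pairs)

for node in sorted({v for e in edges for v in e}):
    print(node, round(open_ratio(node), 2))
# The broker "g" scores 1.0 (all of its contacts are strangers to each
# other); members buried inside a clique score 0.
```

In a real analysis you’d compute Burt’s constraint measure over the full organizational network. The point here is only that brokerage is visible from structure alone: the broker is the node whose contacts are otherwise disconnected.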

The impact of this is a severe curtailment of fresh thinking, which hurts the quality of ideas. Professor Burt found empirical evidence for this in a study of Raytheon’s Supply Chain Group. A total of 673 employees were characterized by their social network connections and plotted on a spectrum from insular to diverse. These employees then each provided one idea to improve supply chain management at Raytheon. Their ideas were then assessed by two senior executives.

The results? Employees with more diverse social connections provided higher-quality ideas. To the right is a graph of the rated ideas, with a curve of average idea ratings plotted against the submitter’s level of network diversity. The curve shows that the value of an employee’s ideas rises with each increase in the diversity of their connections.

Employees with access to diverse sources of information provided better ideas. Their access to nonredundant information allowed them to generate more novel, higher-potential ideas. Inside organizations, there are employees who excel at making diverse connections across the organization. These people are the ones who will provide better ideas. They are brokers across the structural holes in social networks.

Professor Burt provides the key insight about these brokers:

People connected to groups beyond their own can expect to find themselves delivering valuable ideas, seeming to be gifted with creativity. This is not creativity born of genius; it is creativity as an import-export business. An idea mundane in one group can be a valuable insight in another.

An “import-export business”. Consider that for a moment. It’s a metaphor that well describes the key value of the brokers. They are exchange mechanisms for cognitive diversity. They are incredibly valuable for moving things forward inside organizations. But are organizations overly dependent on these super-connectors? Yes. Companies are leaving millions on the table by not enabling a more scalable, comprehensive and efficient means of exchanging cognitive diversity.

What if we could systematize what the most connected employees do?

Systematize the diverse connections

Crowdsourcing doesn’t eliminate the need for the super-connectors. They play a number of valuable roles inside organizations. But by crowdsourcing to solve problems, companies gain the following:

  • Deeper reach into the cognitive assets of all employees
  • Avoiding the strong ties trap of problem-solving
  • Faster surfacing of the best insights
  • Neutralizing the biases that the super-connectors naturally have

As you consider ways to improve your decision-making and to foster greater cross-organizational collaboration, make crowdsourcing a key element of your strategic approach.

I’m @bhc3 on Twitter, and I’m a Senior Consultant with HYPE Innovation.

Bell Labs Created Our Digital World. What They Teach Us about Innovation.

What do these following crucial, society-altering innovations have in common?

  • Transistors
  • Silicon-based semiconductors
  • Mobile communication
  • Lasers
  • Solar cells
  • UNIX operating system
  • Information theory (link)

They all have origins in the amazing Idea Factory, AT&T’s Bell Labs. I’ve had a chance to learn about Bell Labs via Jon Gertner’s new book, The Idea Factory: Bell Labs and the Great Age of American Innovation. (Disclosure: I was given a free copy of the book for review by TLC Book Tours.)

I don’t know about you, but really, I had no sense of the impact Bell Labs had on our current society. Gertner writes a compelling narrative intermingling the distinctive personalities of the innovators with layman points of view about the concepts they developed. In doing so, he brings alive an incredible institution that was accessible only as old black-and-white photos of men wearing ties around lab equipment.

For the history alone, read this book. You will gain knowledge of how the products that define life today came into being back in the 1940s, ’50s and ’60s. I say that as someone who really wasn’t “in” to learning about these things. Gertner, a writer for Wired and the New York Times, invites you into the world of these fascinating, brilliant people and the challenges they overcame in developing some damn amazing technological achievements.

Those stories really carry the book. But just as interesting for innovation geeks are the lessons imparted from their hands-on work. There are several principles that created the conditions for innovation. Sure, the steady cash flow from the phone service monopoly AT&T held for several decades was a vital element. But that alone was not sufficient to drive innovation. How many companies with a strong, stable cash flow have frittered away that advantage?

Looking beyond the obvious advantage, several elements determined the Labs’ success. They are described in detail below.

#1: Inhabit a problem-rich environment

In an interview with a Bell Labs engineer, Gertner got this wonderful observation. Bell Labs inhabited “a problem-rich environment”.

“A problem-rich environment.” Yes.

Bell Labs’ problems were the build-out of the nation’s communications infrastructure. How do you maintain signal fidelity over long distances? How will people communicate the number they want? How can vacuum tube reliability be improved for signal transmission? How to maximize spectrum for mobile communications?

I really like this observation, because it sounds obvious, but really isn’t. Apply efforts to solving problems related to the market you serve. It’s something a company like 3M has successfully done for decades.

Where you see companies get this wrong is they stray from the philosophy of solving customer needs, becoming internally focused in their “problems”. For instance, what problem did New Coke solve for customers? And really, what problems is Google+ solving for people that aren’t handled by Facebook and Twitter?

A problem of, “our company needs to increase revenues, market share, profits, etc.” isn’t one that customers give a damn about. Your problem-rich environment should focus on the jobs-to-be-done of customers.

A corollary to inhabiting a problem-rich environment: focus innovation on solving identified problems. This vignette about John Pierce, a leader in Bell Labs, resonates with me:

Pierce was given free rein to pursue any ideas he might have. He considered the experience equivalent to being cast adrift without a compass. “Too much freedom is horrible.”

#2: Cognitive diversity gets breakthroughs

Bell Labs’ first president, Frank Jewett, saw the value of the labs in this way:

Modern industrial research “is likewise an instrument which can bring to bear an aggregate of creative force on any particular problem which is infinitely greater than any force which can be conceived of as residing in the intellectual capacity of an individual.”

The labs were deliberately stocked with scientists from different disciplines. The intention was to bring together people with different perspectives and bodies of knowledge to innovate on the problems they wanted solved.

For example, in developing the solid state transistor, Labs researchers were stumped to break through something called the “surface states barrier”. Physicist Walter Brattain worked with electrochemist Robert Gibney to discover a way to do so. Two separate fields working together to solve a critical issue in the development of semiconductors.

The value of cognitive diversity was systematically modeled by professor Scott Page. Bell Labs shows its value in practice.

#3: Expertise and HiPPOs can derail innovation

Ever seen some of these famously wrong predictions?

Ken Olson, President & Founder, Digital Equipment Corp. (1977): “There is no reason anyone would want a computer in their home.”

Albert Einstein (1932): “There is not the slightest indication that nuclear energy will ever be obtainable. It would mean that the atom would have to be shattered at will.”

Western Union internal memo (1876): “This ‘telephone’ has too many shortcomings to be seriously considered as a means of communication.”

Now, before we get too smug here…haven’t you personally been off on predictions before? I know I have. The point here is not to assume fundamental deficiencies of character and intellect. Rather, to point out that they will occur.

What makes wrong predictions more harmful is the position of the person who makes them. Experts are granted greater license to determine the feasibility and value of an idea. HiPPOs (highest paid person’s opinion) are granted similarly vaunted positions. In both cases, when they get it wrong, their positions can undermine innovation.

Bell Labs was not immune. Two examples demonstrate this. One did not derail innovation, one did.

Mobile phones

In the late 1950s, Bell Labs engineers considered the idea that mobile phones would one day be small and portable to be utopian. Most considered mobile phones as necessarily bulky and limited to cars, due to the power required to transmit signals from the phone to a nearby antenna.

In this case, the engineers’ expertise on wireless communications was proved wrong. And AT&T became an active participant in the mobile market.

Semiconductors

In the late 1950s, Bell Labs faced a fork in the road for developing transistors. The Labs had pioneered the development of the transistor. Over time, the need for ever smaller transistors was seen as a critical element to their commercialization. Bell Labs vice president of device development, Jack Morton, had a specific view on how transistor miniaturization should happen. He believed a reduction in components was the one right way. Even as development of his preferred methodology was proving technically difficult, he was unwilling to hear alternative ideas for addressing the need.

Meanwhile, engineers at Texas Instruments and Fairchild Semiconductor, simultaneously and independently, developed a different methodology for miniaturization, one that involved constructing all components within one piece of silicon. Their approach was superior, and both companies went on to success in semiconductors.

Bell Labs, pioneers in transistors, lost its technological lead and did not become a major player in the semiconductor industry.

With mobile phones, the experts could not see how sufficient power could be generated. Fortunately, their view did not derail AT&T’s progress in the mobile market. In the case of semiconductors, Bell Labs engineers were aware of the integrated circuit concept, before Texas Instruments and Fairchild introduced it. But the HiPPO, Jack Morton, held the view that such an approach could never be reliable. HiPPO killed innovation.

#4: Experiment and learn when it comes to new ideas

When you think you’ve got a big, disruptive idea, what’s the best way to handle it? Go big or go home? Sure, if you’re the type to put the whole bundle on ’19’ at the roulette table.

Otherwise, take a cue from how Bell Labs handled the development of the first communications satellite. Sputnik had been launched a few years earlier, and the satellite race was on. The basics of what a satellite had to do? Take a signal from Location A and relay it to Location B. Turns out, there were a couple of models for how to do this: ‘passive’ and ‘active’ satellites.

Passive satellites could do one thing: intercept a signal from Location A and reflect it down to Location B. In doing so, they scattered the signal into millions of little bits, requiring high-powered receptors on the ground. Active satellites were far more capable. They could take a signal, amplify it and direct it wherever it needed to go. This focused approach required much lower-powered receiving apparatus on the ground, a clear advantage.

But Bell Labs was just learning the dynamics of satellite technology. While active satellites were the obvious future for top business and military value, they were much more complicated to develop. Rather than try to do all of that at the outset, John Pierce directed his team to start with the passive satellite. To start with an experiment. He explained his thinking:

“There’s a difference, you see, in thinking idly about something, and in setting out to do something. You begin to see what the problems are when you set out to do things, and that’s why we thought [passive] would be a good idea.”

#5: Innovation can sow the seeds of one’s own destruction

Two observations by the author, Jon Gertner, show that even the good fortune of innovation can open a company up to problems. First:

“In any company’s greatest achievements one might, with clarity of hindsight, locate the beginnings of its own demise.”

One sees this in the demise of formerly great companies that “make it”, then fail to move beyond what got them there (something noted in a previous post, It’s the Jobs-to-Be-Done, Stupid!). In a recent column, the New York Times’ Nick Bilton related this story:

“In a 2008 talk at the Yale School of Management, Gary T. DiCamillo, a former chief executive at Polaroid, said one reason that the company went out of business was that the revenue it was reaping from film sales acted like a blockade to any experimentation with new business models.”

Gertner’s second observation was this, with regard to Bell Labs’ various innovations that were freely taken up by others:

“All the innovations returned, ferociously, in the form of competition.”

This is generally going to be true. Even patented innovations will find substitute methodologies emerging to compete. This fits a common meme: ideas are worthless, execution is everything. It’s also seen in the dynamic of the first-to-market firm losing the market to subsequent entrants. After the innovation, relentless execution is the key to winning the market.

Excellent History and Innovation Insight

Wrapping this up, I recommend The Idea Factory. It delivers an excellent history of an institution, and its quirky personalities, that literally defined our digital age. No, they didn’t invent the Internet. But all the pieces that have led to our ability to utilize the Internet can be traced to Bell Labs. Innovation students will also enjoy the processes and approaches taken to achieve all that Bell Labs did. Jon Gertner’s book is a good read.

I’m @bhc3 on Twitter.

Is Google+ More Facebook or More Twitter? Yes

Quick, what existing social network is Google+ most likely to displace in terms of people’s time?

Another Try by Google to Take On Facebook

Claire Cain Miller, New York Times

This isn’t a Facebook-killer, it’s a Twitter-killer.

Yishan Wong, Google+ post

A hearty congrats to Google for creating an offering that manages to be compared to both Facebook and Twitter. The initial press focused on Google+ as a Facebook competitor. But as people have gotten to play with it, more and more they are realizing that it’s just as much a Twitter competitor.

I wanted to understand how that’s possible. How is it Google+ competes with both of those services? To do so, I plotted Google+’s features against comparable features in both Facebook and Twitter. The objective was to understand:

  • Why are people thinking of Google+ as competitor to both existing social networks?
  • How did the Google team make use of the best of both services?

The chart below shows where Google+ is more like Facebook or Twitter. The red check marks and gray shading highlight which service a Google+ feature more closely resembles.

A few notes about the chart.

Circles for tracking: Twitter has a very comparable feature in its Lists. Facebook also lets you put connections into lists; I know because I’ve put connections into lists (e.g. Family, High School, etc.). But I had a hard time figuring out where those lists live in the Facebook UI. Seriously, where do you access them? They may be available somewhere, but they’re not readily accessible. So I didn’t count Facebook as offering this as a core experience.

+1 voting on posts: Both Google+ and Facebook allow up votes on people’s posts. Twitter has the ‘favorite’ feature, which is sort of like up voting. But not really: it’s not visible to others, and it’s more a bookmarking feature.

Posts in web search results: Google+ posts, the public ones, show up in Google search results. Not surprising there. Tweets do as well. Facebook posts for the most part do not. I understand some posts on public pages can. But the vast majority of Wall posts never show up in web search results.

Google+ One-Way Following Defines Its Experience

When you look at the chart above, on a strict feature count, Google+ is more like Facebook. It’s got comment threading, video chat, inline media, and limited sharing.

But for me, the core defining design of Google+ is the one-way following. I can follow anyone on Google+. They may not follow back (er…put me in a circle), but I can see their public posts. This one-way following is what makes the experience more like Twitter for me. Knowing your public posts are out there for anyone to find and read is both boon and caution. For instance, I’ll post pics of my kids on Facebook, because I know who can see those pics – the people I’ve connected with. I don’t tend to post their pics on Twitter. Call me an old fashioned protective parent.

That’s my initial impression. Now as Google+ circles gain ground in terms of usage, they will become the Facebook equivalent of two-way following. Things like sharing and +mentions are issues that are hazy to me right now. Can someone reshare my “circle-only” post to others outside my circle? Do I have to turn off reshare every time? Does +mentioning someone outside my circle make them aware of the post?

Google has created quite a powerful platform here. While most features are not new innovations per se, Google+ benefits from the experience of both Twitter and Facebook. They’re off to a good start.

I’m @bhc3 on Twitter.

Four reasons enterprise software should skip native mobile apps

The desire to “consumerize” mobile apps for their own sake is stoking today’s outsized enthusiasm with device-specific enterprise mobile apps at a time when HTML5 is right there staring us all in the face.

Tony Byrne, Enterprise 2.0 B.S. List: Term No. 1 Consumerization

The runaway success of the iPhone app store has demonstrated that people love mobile, and seek the great user experiences that mobile apps provide. You see these wonderful little icons, beckoning you to give ’em a tap on your phone. You browse the app store, find an app that interests you, you decide to try it on and see if it fits.


And all the cool kids are doing the native app thing. Path is an iPhone app. Facebook wins high praise for its iPhone app. And Wired ran a story declaring, essentially, that apps killed the web star.

There has been a clear market shift to the apps market, and consumers have gotten comfortable with the different apps on their phones. It’s come to define the mobile experience.

So why doesn’t that logic extend to the enterprise? Because the native app experience isn’t a good fit with enterprise software. Four reasons why.

1. Lists, clicks, text, images

Think about your typical enterprise software. Its purpose is to get a job done. What does it consist of? Lists, clicks, text and images. And that’s just right. You are presented efficient ways of getting things done, and getting *to* things.

This is the stuff of the web.

For the most part, the on-board functionality afforded by a mobile OS and native hardware features isn’t relevant for enterprise software. When trying to manage a set of projects, or to track expenses, or to run a financial analysis…do you really need that awesome accelerometer? The camera?

The functions of mobile hardware and OS are absolutely fantastic. They’re great for so many amazing apps. But they’re overkill for enterprise software.

2. Enterprise adoption is not premised on the app store

A key value of the app store is visibility for iPhone and Android users. A convenient, ready-to-go market where downloads are easy and you get to experience them as soon as they’re loaded. This infrastructure lets apps “find their way” with their target markets.

An AdMob survey looked at how consumers find the mobile apps they download. Check out the top 3 below:

Users find apps by search, rankings and word-of-mouth. Great! As it should be. Definitely describes how I’ve found apps to download.

Irrelevant, however, for enterprise software. Distribution and usage of enterprise software is not an app store process. Employees will use the software because:

  • It’s the corporate standard
  • They’re already using it
  • They need to use it
  • It’s already achieved network effects internally, it’s the “go to” place

Adoption via the app store is not needed. The employee will already have a URL for accessing the app. For example, I use gmail for both my personal and work emails. For whatever reason, the second work gmail will not “take” on the native email function of my iPhone. So I’ve been using the web version of gmail the last several months. It’s been easy, and I didn’t need to download any app. I knew where to access the site.

3. Mobile HTML looks damn good

Visually, native apps can look stunning. They are beautiful, and functional. No limitations of web constructs means freedom to create incredible user experiences.

But you know what? You can do a lot with HTML5. Taking a mobile web approach to styling the page and optimizing the user experience, one can create an experience to rival that of native apps.

As you can see on the right, an enterprise software page presented in a mobile browser need not be a sanitized list of things. It can pop, provide vibrant colors, present a form factor for accessing with the fattest fingers and be indistinguishable from a native app.

Indeed, designing for a mobile experience is actually a great exercise for enterprise software vendors. It puts the focus on simplicity and the most commonly used functions. It’s also a chance to re-imagine the UX of the software. It wouldn’t surprise me if elements of the mobile-optimized HTML found their way back to the main web experience.

4. Too many mobile OS’s to account for

We all know that Apple’s iOS has pushed smart phone usage dramatically. And corporations are looking at iOS for both iPhones and iPads. Meanwhile, Android has made a strong run and is now the leading mobile OS. However, in corporations, RIM’s various Blackberry flavors continue to have a strong installed base. And on Microsoft’s Windows Phone 7 OS, “developer momentum on Windows Phone 7 is already incredibly strong.” (Ars Technica)

Four distinct OS’s, each with their own versions. Now, enterprise software vendors, you ready to staff up to maintain your version of native apps for each?

37signals recently announced it was dropping native apps for mobile. Instead, they’re focusing on mobile web versions of their software. In that announcement, they noted the challenge of having to specialize for both iOS and Android.

Meanwhile, Trulia’s CEO noted the burden of maintaining multiple native apps for mobile:

“As a brand publisher, I’m loathe to create native apps,” he told me, “it just adds massive overhead.” Indeed, those developers need to learn specific skills to build native mobile apps, skills arguably having nothing to do with his core business. They have to learn the different programming languages, simulators and technical capabilities of each platform, and of each version of the platform. By diverting so much money into this, he has to forgo investment in other core innovation.

A balkanized world of OS variants creates administrative, operational support and development costs. Not good for anybody.

While I’m sure there are enterprise software apps that can benefit from the native OS capabilities, such as integrated photos, for most enterprise software, mobile should be an HTML5 game.

I’m @bhc3 on Twitter.

Three Pluses, Three Minuses of Quora as a KM System

This question was posted on Quora, “In 10 words or less, what is Quora?” My answer:

Powerful application of crowdsourcing and social networking to knowledge management


Knowledge Management (aka “KM”) is a field I don’t have personal experience in. It comprises the practices, processes and systems by which the valuable knowledge of workers is collected and made available to others. KM continues to be an important topic for enterprises these days, but it is also freighted with many failures and disappointments.

Without the benefit of a KM history, I wanted to look at Quora in the context of someone with an objective today: how do I make it easier for employees to find and share their knowledge?

In that context, I see three really good things about Quora, and three things that distort its value.

The Pluses

Purpose-Built: A premise of Enterprise 2.0 is that tools need to be lightweight and flexible for multiple purposes. That’s what you get with microblogging, wikis, blogs, forums. The problem there is that the flexibility undermines their value for delivering on specific needs. One must wade through a lot of other stuff to get to what you want.

Quora is purpose-built. It’s not a place for sharing links you find interesting or talking about the American Idol selection process. It’s a place where you know there will be relevant questions, and often good answers. Which means they can focus on delivering to the purpose, not try to be all-things to all people. Important for KM.

Crowdsourcing: Very, very important. Quora leverages the principles of crowdsourcing to elicit knowledge. It’s not just a system for experts. Too often the focus is on getting “the experts” on the record, assuming most others have little to add. That is a shame.

The ability to follow topics allows people to track areas of either interest (to find answers) or expertise (to provide answers). As Professor Scott Page notes in his book, The Difference, everyone has a unique set of cognitive skills. To assume there are the “masters” and then there’s the “riff raff” is to lose a significant percentage of knowledge. Crowdsourcing ensures a broader opportunity to get at all relevant knowledge.

Social networking: We have people we like to follow. They may be friends, and we enjoy their takes on things. Or they may be people we admire, who have demonstrated a capacity to provide valuable answers. The personal connection here, that we have an interest in a person as opposed to a topic, is valuable.

By letting me follow people, I am exposed to things that have a higher likelihood of interest to me. We can’t all be on Quora, or a KM site. But some portion of our networks will be, and seeing what they’ve been up to keeps me interested and contributes to serendipity in acquiring knowledge.

It’s also encouraging to know I have a set of people who are receptive to my questions and my answers. Much better than a cold system of questions and answers only.

The Minuses

Discerning the wheat from the chaff: Quora gets noisy. For some, too noisy. That happens on an open platform. There will be some great answers to questions, but some pretty bad ones too. In terms of KM, some argue for restricting participation to only the known experts:

Few are blessed with serious, specifically relevant knowledge or know-how. Any system which facilitates overly broad participation will inextricably bury any expert knowledge under a pile of low value chatter. I am persuaded that for valuable ideas & thoughts to produce innovation there need to be a highly afferent and efferent system capable of synthesizing powerful multidimensional analytical databases with the know-how of subject matter experts, the imagination of visionaries and the creative mind of innovators who do not fret from the challenge of thinking.

The community culture needs to have a strict sense of what’s valuable, what’s not. And up-vote and down-vote accordingly.

Lots of followers means lots of up-votes: This is the downside of social networking. Some people have HUGE numbers of connections. Which means they have a built-in audience for their answers above and beyond the topic followers. An army of followers can come in and cause an answer to move to first position based on that alone, regardless of answer quality.

A good solution here is to employ a form of reputation to weight those votes. Don’t let just the volume of votes determine the top answer, look at the reputation of those who are voting.

You could also weight the answers themselves according to reputation, although I’m a little wary of that. Makes it harder for new voices with quality contributions to get traction.
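As a rough sketch of the reputation-weighted voting idea, an answer’s rank could be driven by the total reputation of its up-voters rather than the raw vote count. All names, reputation values and vote lists below are hypothetical, purely for illustration:

```python
# Hypothetical sketch of reputation-weighted voting: instead of counting
# every up-vote equally, sum the reputation of the voters behind it.
# Names, reputations, and votes are invented for illustration.

def weighted_score(voters, reputation, default=1.0):
    """Score an answer by the total reputation of its up-voters."""
    return sum(reputation.get(v, default) for v in voters)

reputation = {"alice": 5.0, "bob": 1.0, "carol": 1.0, "dave": 1.0}

votes = {
    "expert_answer": ["alice"],                  # one vote, high reputation
    "popular_answer": ["bob", "carol", "dave"],  # three votes, low reputation
}

ranking = sorted(votes, key=lambda a: weighted_score(votes[a], reputation),
                 reverse=True)
print(ranking)  # the single expert vote (5.0) outweighs three casual votes (3.0)
```

Under this scheme, an army of low-reputation followers can no longer push a weak answer to the top position on volume alone.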

Incentives to participate: I’m busy. You’re busy. We’re all busy. Who has time to participate? This will always be an issue. With things like microblogging, there’s a core communication need being satisfied, so it more closely aligns with my day-in, day-out work. But answering some distant colleague’s question?

There are a lot of ways to address this. Getting participation early on from enthusiasts goes a long way in terms of demonstrating value (something Quora has done). Getting kudos for good answers is a huge motivator. Obviously, getting a good answer just once is critical to seeing the value. And Q&A seems like a perfect activity for applying game mechanics.

All in all, I really like the KM potential for Quora. It doesn’t need to be as heavily active as Twitter, but benefits from a broader participation than what is seen in Wikipedia. The minuses are challenges to overcome, but they are not insurmountable.

Three Reasons Google Should Acquire Delicious from Yahoo

So the news is out. Yahoo plans to shutter Delicious, the largest social bookmarking site. Which is shocking, particularly among the tech savvy and socially oriented. Delicious is iconic for its application of social sharing and collective intelligence. Hard to believe Yahoo wants to shut it down.

But wait…this doesn’t have to be the end. Why not seek alternatives to shutting down the service? Might there be a logical company to take on Delicious, and all the value it holds? Why yes, one company comes to mind.

Google.

Delicious fits Google’s mission

Hmmm…what is it Google wants to do? What defines their corporate philosophy? Ah yes, here’s Google’s mission:


“Organize the world’s information.” Now, doesn’t that sound like the kind of thing that applies to Delicious? Millions of people organizing the world’s information, according to their own tags. Which makes it easier to find for others. Crowdsourced curation.

For that reason alone, Google would be wise to take on Delicious.

Glean new insights about what people value

Google’s PageRank is amazing. It’s incredibly good at finding nuggets. But it’s not perfect, as anyone who regularly uses it knows. The use of links is powerful, but it is a limited basis for identifying valuable web pages.

What people elect to bookmark is a different sort of valuation. Which is important, because not everyone blogs, or creates web pages with links to their favorite sites. But there is a distributed effort of indicating value via bookmarking.

This activity would be a valuable addition to Google’s search results. Take a look at this thread on Hacker News (a bunch of tech savvy types) about Delicious:

I added that highlighting. And here’s what Michael Arrington said when Yahoo experimented with adding Delicious bookmarks to its search results:

I have previously written that Delicious search is one of the best ways of searching for things when a standard search doesn’t pull up what you are looking for. After Google, it is my favorite “search engine.” Adding this information into Yahoo search is a great idea.

Google could leverage the activity of Delicious users to improve its search results, or at least give users an additional place to find content. Mine the tags to provide more context and connections among pages.

Note that Google, and Bing, are exploring different ways to apply social signals from Twitter and Facebook. Inclusion of Delicious in the search process would be consistent with that.

And Google would still benefit from its AdWords program here. That would provide a monetization strategy for Delicious, which currently carries no ads.

Great PR move with the tech community

Google finds itself in a fight with Facebook for employees. Google is public, Facebook is pre-IPO. Social is hot, and Facebook is dominant in that. Google isn’t.

But as Allen Stern notes, Google does have a special appeal to the tech crowd for its developer-friendly moves. Stepping in and taking over a legendary Web 2.0 site like Delicious would be a good fit with that reputation. Enhance the usage of the data and make it easy for developers to access.

More importantly, Delicious holds a special appeal among the geekier set. Many of us are still active bookmarkers, and use the service. Google is known for being a geek-centric paradise, with a bunch of high-GPA, advanced degree types on its campuses.

What do you think it costs to run Delicious “as is”? I’d hazard a guess that it’s not too much. And Google is throwing off some serious cash ($10 billion in the last 12 months):

So they do have some capacity, but obviously need to invest it wisely.

For a relatively low cost, they gain a treasure trove of data on relevance and value, and a solid boost to their PR. Seems like a big win to me. How about it Google? Why not step in and take over Delicious?

Phone Cameras + Social Are Expanding the Historical Record

"There's a plane in the Hudson. I'm on the ferry going to pick up the people. Crazy."

In a critique of the rise of Instagram (the current photo sharing app du jour), Laurie Voss argues that the rise of cheap, low fidelity cameras on phones is undermining the data contained in our photos. It’s not just that these pictures are lower quality now; he argues it affects their value for future generations:

With these rubbish phone cameras we take terrible photos of some of our most important moments and cherished memories. I am not complaining about composition and lighting here; I’m not a photographer. I am talking about the quantity of meaningful visual data contained in these files. Future historians will decry forever the appalling lack of visual fidelity in the historical record of the last decade.

I read that, and at first thought, “Yeah, that could be an issue.” But then I realized that, well no, it’s actually the opposite. The rise of cheap phone cameras is actually increasing the historical record. This even has disruptive innovation undertones to it.

Why?

Picture = Moment + Equipment

When thinking about recording data for history pictorially, I consider two elements:

  • Moment
  • Equipment

"The line at 9 am at the Pleasanton @sfbart stretches for blocks. Huge crowd downtown today for #sfgiants parade."

Now moments are always going to arise. They may be significant moments, such as Janis Krums’ iconic picture above, taken after a US Airways plane crash-landed on the Hudson. Recently, the San Francisco Giants were celebrated for their 2010 World Series title with a ticker tape parade in downtown San Francisco. When I arrived at the Dublin/Pleasanton BART station the morning of the victory parade, I was shocked by the number of people waiting in line to get to SF.

Just as important as the moment is the equipment. I’m not talking about the quality of the photographic equipment. I’m saying, “do you have something to take the picture?”

Before I got a phone with a camera on it, I had no way of photographing any moments. I could tweet about them, email a description of them and tell people about them. But there was no visual record at all.

I wasn’t carrying a camera around with me. It just wasn’t something I wanted to deal with as I also carried my ‘dumb’ phone. And wallet. And keys. Just too much to juggle.

But a camera included with my mobile phone? Oh yeah, that works. I’ll have that with me at all times.

Which is a much better fit with the notion of capturing moments. They are unpredictable, and do not schedule themselves to when you’re carrying a separate camera.

As for the “quantity of meaningful visual data” being reduced, I think of it mathematically:

The X/Y variable represents the decrease in data per picture. If Y is the “full” data from a high resolution photo, then X is the reduced data set. The loss of scene details, the inability to discern people’s expressions, etc. Yeah, that is a loss due to low quality cameras.

The B/A variable represents the increased number of pictures enabled by the proliferation of convenient low quality cameras. If A is the quantity of photos with high resolution cameras, B is the overall number of photos inclusive of the low quality cameras.

Multiply the ratios, and I believe the overall historical record has been improved by the advent of phone cameras. In other words, “> 1”.
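Written out, the claim is that the product of the two ratios exceeds one (my notation, reconstructing the formula the paragraphs above describe):

```latex
\underbrace{\frac{X}{Y}}_{\substack{\text{data per picture} \\ (<\,1)}}
\;\times\;
\underbrace{\frac{B}{A}}_{\substack{\text{number of pictures} \\ (>\,1)}}
\;>\; 1
```

The data lost per photo is more than made up for by the sheer growth in the number of photos taken.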

Sharing Is Caring

Something the higher quality, standalone cameras have lacked is connectivity. They miss the urge we have to share something in the moment. The fact that I can share a picture as soon as I take it is extra incentive to take the picture in the first place.

I share my kids’ pics with family via email, and other pics end up in my Twitter and Facebook streams. You know how painful it is to upload photos from the camera and share them? Very.

Standalone cameras are like computer hard drives, locking data off in some siloed storage device somewhere. Good luck to historians in extracting that photographic data.

Convenience Wins Out

This is the disruptive innovation of convenience. People are swapping the separate cameras for the all-in-one mobile devices. And like any good low-end innovation, the quality will increase. Meaning more pictures with better detail and fidelity.

I mean, imagine if there were a bunch of phone cameras at Gettysburg?

Only known photo of Abraham Lincoln (center, without hat) at Gettysburg

We’d have thousands of pics, and it’d be a Twitter Trending Topic. As for the lower data per picture, damn the torpedoes, full steam ahead. Phone cameras will enrich the historical record for future generations.

iPad’s Climb Up the Disruptive Innovation Cycle

Blockbuster’s recent bankruptcy filing was yet another chapter in the Clayton Christensen annals of disruptive innovation. A major brand with convenient locations that got disrupted by a website and the U.S. Mail. Note that we’re seeing the backend of the disruption, when it all seems so clear.

How easy is it to see such a disruption beforehand? “Not very” would be the honest answer. What distinguishes a truly disruptive technology or business model from a flash-in-the-pan idea? Keep in mind the basis of a disruptive innovation:

A technology initially addressing low-end market needs that slowly moves upstream as its capabilities evolve.

From that perspective, think of all the things out there that have stayed low level and did not disrupt industries. Disruptive innovation is like a Category 5 hurricane: powerful, slow-moving and rare.

Which brings me to the Apple iPad. Are we witnessing a disruptive tropical depression?

DISRUPTIVE INNOVATION LADDER

The graphic below (via Wikipedia, reproduced with the TouchDraw iPad app) describes the levels of usage for disruptive technologies.

Disruptive Innovation Cycle

The target of the iPad here is the global laptop market. In that context, the beautiful, sublime, innovative iPad is solidly…in the low quality usage band of the chart above.

What represents the iPad’s “low quality use”?

  • Email
  • Surfing the web
  • Facebooking
  • Tweeting
  • Playing music

“Low quality” is not a pejorative term here. It’s a reflection of the computing power needed for the listed activities. This is the iPad’s entrée into the laptop market. Consider how much of your own digital activity is covered by the items listed above. The iPad already offers a great experience here.

Indeed, the Best Buy CMO recently confirmed the iPad’s move into this end of the market.

MEDIUM QUALITY USE

When you see those low quality uses, they’re primarily consumption oriented. If they are production oriented, they’re pretty basic. But there are things that can be done at the next level, medium quality use.

Games are well done on the iPad. They take advantage of the touch aspect of the device. In my opinion, games on the iPad are quickly moving up the quality ladder.

For the office, there are Apple’s apps. The Pages word processing app looks like a winner. For document production, Pages appears to fill the bill. Especially without a Microsoft Word app on the iPad. The other major office apps – spreadsheets and presentations – are available as well.

I really like the graphics program TouchDraw on the iPad. You can create very nice graphics, for business use, with just your finger. The simple graphic above was done with TouchDraw.

While I couldn’t possibly survey all apps that address different activities, I get the sense that a number of them qualify for medium or high quality uses. The question is the breadth of apps addressing the “power use cases” of laptop owners.

Finally, a word about the keyboard. I love it. I find it very easy to type out this post. It’s not without its imperfections, but generally I’m flying around it as I type. One disclaimer: I hunt-and-peck to type; I’ve never learned touch typing.

GAPS THAT NEED TO BE CLOSED

In my experimenting to see how much I could do with an iPad instead of a laptop, I’ve found several areas that need to be shored up to move the overall experience to the medium quality use level.

Safari usage: Safari is the browser used for the web on the iPad. It is surprising how many sites aren’t built for usage via Safari. For example, wordpress.com doesn’t work well with Safari, surprisingly in my view. Google Docs? Similar issue: it doesn’t work well, or at all, with Safari. I’m embedding HTML tags in this post-by-email blog post.

I cannot accept an event into my Google Calendar via Safari. I cannot create a WebEx meeting from Safari, and the WebEx iPad app doesn’t allow you to create an event. In short, doing business via iPad is tough.

As the iPad continues to gain market share, expect better support by websites for Safari. Which will dramatically improve the end user experience with the iPad.

Graphics uploads: Want to add a graphic to a document, presentation, wiki, blog or email? Hard to do. We’re used to having graphics on our local drive, and a simple button to upload/embed that graphic.

Where’s my master upload button on the iPad?!!

Answer: there isn’t one. The graphic above is one that I emailed to Flickr, grabbed the embed code and pasted it into this post. Which works fine for publicly accessible graphics. But not so much in the work context.

I’d like to see the native Photos app become a universal location for accessing graphics in any app.

Stuff at my fingertips: The ability to easily click around different apps on the PC tray at the bottom of my screen, and to click quickly among different websites via tabs, is a great productivity benefit. If you’re like me, you’re zipping around easily.

With iPad, it’s slower going back-n-forth. A lot of clicking the home button to get to other apps, or clicking the button on Safari to view other sites. Which is a pain, reducing the pace of work.

iPAD’S INEVITABLE CLIMB UP THE DISRUPTION CYCLE

So the iPad is still fundamentally in the low quality usage band, but with some clear indications of moving up. I’ve taken to using my iPad for my non-work hours computing needs.

My full expectation is that slowly, but surely, Apple and the third party app developers will improve the utility of the iPad experience. It will take some time.

But the key observation is this: Apple has the time to enhance the iPad.

That’s why I expect iPad to get better over time: market momentum. How about you? Are you thinking the iPad, and even the new crop of competitor tablets, will disrupt the laptop industry?

Sent from my iPad

Should BP crowdsource solutions to solve the Gulf oil spill?

Clifford Krauss of the New York Times reports on BP’s latest effort to cap the oil leak, called “top kill”. He notes the following:

The consequences for BP are profound: A successful capping of the leaking well could finally begin to mend the company’s brittle image after weeks of failed efforts, and perhaps limit the damage to wildlife and marine life from reaching catastrophic levels.

A failure could mean several months more of leaking oil, devastating economic and environmental impacts across the gulf region, and mounting financial liabilities for the company. BP has already spent an estimated $760 million in fighting the spill, and two relief wells it is drilling as a last resort to seal the well may not be completed until August.

Let’s hope for the best. Given the challenges of the previous efforts, it sounds like it will take a monumental effort to stop the leaking well.

Which raises a question…should BP be tapping a larger set of minds to help stop the leaking well? Can they crowdsource a solution?

In a way, they’re already doing it. Sort of. You can call an idea hotline to suggest ways to stop the oil. They even have the number posted on their home page.

But why not take it a step further? A formal crowdsourcing effort. I’ve heard that the folks at Innocentive asked this in an NPR report. Another vendor also pitched its idea management software, but BP didn’t bite. Spigit hasn’t pitched BP, but would certainly be willing to help.

There are some very good reasons to open it more publicly, and cast a call across the globe for ideas:

  • Diversity of ideas increases the odds of finding something that will be useful
  • While no one idea may solve it, visibility (as opposed to private phone calls) increases the odds of finding parts of ideas that lead to viable solutions
  • The brain power of enthusiastic participants across the globe is a good match to BP’s in-house experts
  • Potentially a good PR move, as the company demonstrates that it’s leaving no stone unturned to solve the leak

Crowdsourcing has proven its value in other endeavors, such as products, government services, technical problems and marketing. Surely it could do well here. But what might hold BP back? Three reasons:

  1. Little previous experience with crowdsourcing
  2. Deep technical domain experience is required
  3. Site becomes a place for public criticism

Are they valid? Let’s see.

Little Previous Crowdsourcing Experience

If a company hasn’t previously mastered open innovation and crowdsourcing, a crisis is a hell of a time to give it a go. This is far from comprehensive, but I did find a couple of examples of BP’s forays into the world of crowdsourcing and open innovation.

Headshift wrote up a case study about BP’s Beacon Awards. The internal awards recognize innovative marketing initiatives, and BP created a site for employees to submit ideas and vote on them. This example has a couple elements of note:

  • It’s an internal effort, where “mistakes” can be made as the company gets comfortable with the process of crowdsourcing
  • It was for marketing ideas in a time of relative calm, not time-is-ticking ideas during a crisis

BP also touts its open innovation efforts. Open innovation means working with others outside your organization to come up with new ways of tackling problems. In a post on its website, BP discusses its work with partners:

The need to work with others to solve tricky problems has most likely been around since humans learned to communicate, pooling their skills to achieve a desired mutual goal. In today’s world, collaboration between partner organisations has become highly sophisticated, particularly so in the energy industry where new challenges abound, be those in security of supply, cleaner energy sources, or the bringing together of different scientific and engineering disciplines to focus on a common problem.

Certainly the oil spill qualifies as a tricky problem.

So BP has experience in crowdsourcing internally on marketing ideas, and in open innovation with academia and industry partners. Not too shabby, and that argues for their having a favorable disposition toward crowdsourcing.

Deep Technical Domain Expertise Is Required

OK, I’ll admit. I have no idea how I’d stop the oil leak. Maybe I could come up with an idea as I give my kids a bath (“so you take the rubber duckie, and move it over the drain…”).

The BP oil leak occurred deep underwater, in conditions different from those oil companies have typically had to deal with. BP is sparing no level of expertise to fix the issue, reports the New York Times:

Several veterans of that operation are orchestrating technicians in the Gulf of Mexico. To lead the effort, BP has brought in Mark Mazzella, its top well-control expert, who was mentored by Bobby Joe Cudd, a legendary Oklahoma well firefighter.

Didn’t even know one could be a legendary well firefighter. But the challenges of doing this in the Gulf are different. Popular Mechanics has a scorecard of each previous effort by BP to stop the leaking well. Do you remember one effort called “The Straw”? It is capturing a part of the oil, siphoning it to a surface ship. But it’s not without its risks:

The real gamble was in the original insertion—the damaged riser’s structural integrity is unknown, and any prodding could have worsened the spill, or prevented any hope of other riser- or BOP-related fixes.

Given the highly technical nature of these efforts, and the myriad complexities, does it make sense to crowdsource? I’d say it does, in that a proposed idea need not satisfy all elements of risk mitigation and possible complications. That puts too high a burden on idea submitters. Start with the idea, let the domain experts evaluate its feasibility.

Keep in mind that people outside a company can solve technical challenges. Jeff Howe wrote in Wired about the guy who tinkers in a one-bedroom apartment above an auto body shop. This guy solved a vexing problem for Colgate involving the insertion of fluoride powder into a toothpaste tube.

Site Becomes a Place for Public Criticism

If BP were to set up a public site that allows anyone to participate, I can guarantee that some percentage of ideas and comments will be devoted to excoriating BP. In fact, it wouldn’t surprise me if much of it became that. A free-for-all that has nothing to do with solving the oil well leak.

A public forum receiving press attention during an extreme crisis presents angry individuals with a too-tempting target to make mischief. BP could spend more time deleting or responding to comments than getting much from it. The anger is too strong, too visceral on the part of many across the world.

Charlene Li talks about meeting criticism head-on in her book Open Leadership. Perhaps one way BP could handle this would be to set up a companion forum to which criticism could be moved. Keep an idea site dedicated to just that…ideas.

But I can see how BP understandably would not want to deal with such a site, as it potentially becomes a major PR pain on top of the existing maelstrom.

This reason strikes me as the one most likely to keep BP away from a crowdsourcing initiative to complement their other efforts. What do you think? Should BP be crowdsourcing solutions to the Gulf oil spill?
