The Migration of Web Techniques to In-Store Retail Practices

Via ralphbijker on Flickr

Think about the companies doing the most technologically advanced stuff. Amazon. Google.

Grocery stores.

Say what…? The place where oranges sit in piles in the produce section. Where boxes of cereal line the aisles. The frigid ice cream aisle.

Well, they’re not in the league of Google and Amazon. But grocers are more than those aisles of food and ceilings of fluorescent lights you see. Two trends in the industry borrow heavily from the advancements on the Web:

  1. Website optimization
  2. Recommendations

I’m not talking about monitors with web pages inside stores. I mean the shopping experience has been affected by these developments. Here’s how.

Website Optimization => Store Layout and Merchandising

E-commerce sites live and die by their conversion rates. A key piece of the conversion rate puzzle is effective navigation and presentation of items to site visitors. One company that helps with that is Tealeaf, which records and analyzes visitor behavior to help site owners optimize conversions and return visits.

In a physical space, you can’t record people’s clicks and actions. Or can you?

As reported in a recent Economist article, retailers are starting to video record shoppers’ behavior in the aisles. For instance, here’s how one supermarket used technology provided by VideoMining to understand visitor behavior in its juice section:

In another study in a supermarket, some 12% of people spent 90 seconds looking at juices, studying the labels but not selecting any. In supermarket decision-making time, that is forever. This implies that shoppers are very interested in juices as a healthy alternative to carbonated drinks, but are not sure which to buy. So there is a lot of scope for persuasion.

These are exactly the kind of metrics that e-commerce sites track to improve their conversion rates. Use of cameras in-store to do the same thing is analogous to tracking visitors to your website.
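The parallel to web analytics can be made concrete. Here’s a minimal sketch of computing a “looked but didn’t buy” rate from in-store observation events; the event fields, threshold and numbers are hypothetical illustrations, not VideoMining’s actual data model:

```python
from dataclasses import dataclass

@dataclass
class AisleEvent:
    shopper_id: str
    dwell_seconds: float  # time spent in front of the section
    purchased: bool       # did an item go into the cart?

def browse_no_buy_rate(events, min_dwell=60.0):
    """Share of shoppers who studied the section at length but bought
    nothing -- the in-store analog of 'viewed but did not convert'."""
    if not events:
        return 0.0
    stalled = sum(1 for e in events
                  if e.dwell_seconds >= min_dwell and not e.purchased)
    return stalled / len(events)

events = [
    AisleEvent("a", 90, False),   # studied the labels, walked away
    AisleEvent("b", 5, True),     # quick grab-and-go purchase
    AisleEvent("c", 75, False),
    AisleEvent("d", 12, False),
]
print(browse_no_buy_rate(events))  # 0.5
```

A high rate on a category, like the 12% in the juice example, is the same signal an e-commerce analyst would read from a product page with many views and few add-to-carts.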

Personalized Recommendations

Amazon.com really led the movement to provide effective recommendations to existing customers. One report I’ve seen says that Amazon derives 35% of its sales from these recommendations. Amazon’s recommendations are generated from your shopping history, compared to others via collaborative filtering. The success of these recommendations has inspired others to build recommendation engine services, including Aggregate Knowledge, Baynote, MyBuys, RichRelevance and others.

The same thing is happening in-store as well. You know that loyalty card you present to your grocer to get discounts? It’s used to record your shopping history. Historically, grocers have done little with that information. It was more of a device to keep you coming back to the store.

But in the past few years, grocers have been getting hip to the idea that their customers’ shopping history can be used to personalize the shopping experience.

Once, I was product manager for just such a system, called SmartShop. Pay By Touch’s SmartShop used a Bayesian model to compare your purchases against those of other shoppers, and determine whether you exhibited stronger or weaker preferences for a category or product than the overall average. A set of 10 personalized item discounts were then selected for you based on your specific purchase preferences.
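SmartShop’s actual model isn’t spelled out here, so as a hedged sketch: one common Bayesian-flavored approach is to smooth a shopper’s category purchase rate toward the population rate, then compare the two. All names and numbers below are illustrative, not Pay By Touch’s implementation:

```python
def category_preference(shopper_buys, shopper_total, pop_buys, pop_total,
                        prior_strength=5.0):
    """Ratio > 1 means the shopper prefers this category more strongly
    than the average shopper; < 1 means more weakly.

    The population rate acts as a prior, so a shopper with only a
    short purchase history isn't over-interpreted."""
    pop_rate = pop_buys / pop_total
    shopper_rate = ((shopper_buys + prior_strength * pop_rate)
                    / (shopper_total + prior_strength))
    return shopper_rate / pop_rate

# 8 of a shopper's 40 items were organic dairy, vs. a 10% population rate
print(round(category_preference(8, 40, 1_000, 10_000), 2))  # 1.89
```

Ranking categories by a score like this, then picking discounts from the strongest ones, would yield a personalized set of offers along the lines the post describes.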

On a website, returning customers are presented with a set of recommendations as they shop. In-store, what’s the analog? Kiosks, the main point of digital interaction with shoppers. SmartShop notified you of discounts via a print-out from a kiosk at the front of the store. This was key: get you the discounts right at the point of decision, while you’re shopping. Not unlike e-commerce recommendations.

Prior to Pay By Touch’s demise, SmartShop was getting good traction among grocers, who were looking for ways to increase basket size, increase loyalty and differentiate themselves. And it wasn’t just SmartShop. Price Chopper and Ukrop’s use a recommendation system from Entry Point Communications. UK-based Tesco is the granddaddy of personalized recommendations, provided through Dunnhumby.

Teaching Old Dogs New Tricks

While e-commerce benefits from being all-digital, with built-in ways to identify customers, grocery historically lacked both. But that’s changing. Retailers have picked up the best practices of their online brethren. Things are now much more measurable, and personalization is no longer the province of the online players.

Looking forward to grocers introducing Twitter into the shopping experience…


For reference, here’s a white paper I wrote about SmartShop when I was at Pay By Touch:


See this post on FriendFeed: http://friendfeed.com/search?q=%22The+Migration+of+Web+Techniques+to+In-Store+Retail+Practices%22&who=everyone

Supply-Demand Curves for Attention

The basic ideas behind the Attention Economy are simple. Such an economy facilitates a marketplace where consumers agree to receive services in exchange for their attention.

Alex Iskold, ReadWriteWeb, The Attention Economy: An Overview

The attention economy. It’s a natural evolution of our ever-growing thirst for information, and the ever-easier means of creating it. It’s everywhere, and it’s not going anywhere: the democratization of content production, the endless array of choices for consumption.

In Alex’s post, he listed four attention services, as they relate to e-commerce: alerts, news, search, shopping.  In the world of information, I focus on three use cases for the consumption of information:

  1. Search = you have a specific need now
  2. Serendipity = you happen across useful information
  3. Notifications = you’re tracking specific areas of interest

I’ve previously talked about these three use cases. Over on the Connectbeam blog, I wrote a longer post about the supply-demand curves for content in the Attention Economy: what are the different ways to increase share of mind for workers’ contributions, in the context of those three consumption use cases?

The chart below is from that post. It charts the content demand curves for search, serendipity and notifications.


Following the blue dotted line…

  • For a given quantity of user generated content, people are willing to invest more attention on Search than on Notifications or Serendipity
  • For a given “price” of attention, people will consume more content via Search than for Notifications or Serendipity

Search has always been a primary use case. Google leveraged the power of that attention to dominate online ads.

Serendipity is a relatively new entry in the world of consumption: putting content in front of someone that they had not expressed any prior interest in. A lot of e-commerce recommendation systems are built on this premise, such as Amazon.com’s recommendations. And companies like Aggregate Knowledge put related content in front of readers of media websites.

Notifications are content you have expressed a prior interest in, but don’t have an acute, immediate need for like you do with Search. I use the Enterprise 2.0 Room on FriendFeed for this purpose.

The demand curves above have two important qualities that differentiate them:

  • Where they fall in relation to each other on the X and Y axes
  • Their curves

As you can see with how I’ve drawn them, Search and Notifications are still the best way to command someone’s attention. Search = relevance + need. Notifications = relevance.

Serendipity commands less attention, but it can have the property of not requiring opt-in by a user. Which means you can put a lot of content in front of users, and some percentage of it will be useful. The risk is that a site overdoes it, and dumps too much Serendipitous-type content in front of users. That’s a good way to drive them away because they have to put too much attention on what they’re seeing. Hence the Serendipity curve. If you demand too much attention, you will greatly reduce the amount of content consumed. Aggregate Knowledge typically puts a limited number of recommendations in front of readers.

On the Connectbeam blog post, I connect these subjects to employee adoption of social software. Check it out if that’s an area of interest for you.


See this post on FriendFeed: http://friendfeed.com/search?q=%22Supply-Demand+Curves+for+Attention%22&who=everyone

Defrag 2008 Notes – Picasso, Information Day Trading, Stowe “The Flow” Boyd


One of the most consistently provocative conferences I attended last year — my own Money:Tech 2008 aside, of course — was Eric Norlin’s Defrag conference. Oodles of interesting people, lots of great conversation and all of it aimed at one of my favorite subjects: How we cope with the information tsunami.

Paul Kedrosky, Defrag 2008 Conference

I spent two days out in Denver earlier this week at Defrag 2008 with Connectbeam. As Kedrosky notes above, the conference is dedicated to managing the increasing amount of information we’re all exposed to. Now my conference experience is limited. I’ve been to five of them, all in 2008: Gartner Portals, BEA Participate, TechCrunch50, KMWorld, Defrag.

Defrag was my favorite by far. Both for the subject matter discussed and the attendees. The conference has an intimate feel to it, but a high wattage set of attendees.

In true information overflow style, I wanted to jot down some notes from the conference.

Professor William Duggan: He’s a professor at Columbia Business School. He gave the opening keynote: “Strategic Intuition”, which is the name of his book.  Duggan talked about how studies of the brain showed that we can over-attribute people’s actions as being left-brained or right-brained. Scientists are seeing that both sides of the brain are used in tackling problems.

He then got into the meat of his session – that people innovate by assembling unrelated data from their past experience. For example, he talked about how Picasso’s style emerged. Picasso’s original paintings were not like those for which he became famous. The spark? First, meeting with Henri Matisse, and admiring his style. In that meeting, Picasso happened to become fascinated with a piece of African sculpture. In one of those “aha!” moments, Picasso combined the styles of Matisse and African folk art to create his own distinctive style. He combined two unrelated influences to create his own style.

Duggan also described how all innovation is fundamentally someone “stealing” ideas from others. By “stealing”, he means that people assemble parts of what they’re exposed to. This is opposed to imitating, which is copying something in whole. That’s not innovation.

Re-imagining the metaphors behind collaborative tools: This session examined whether we need new ways of thinking about collaboration inside the enterprise. The premise here is that we need to come up with new metaphors that drive use cases and technology design. I’ll hold off on describing most of what was said. My favorite moment was when Jay Simons of Atlassian rebutted the whole notion of re-imagining the metaphors. He said the ones we have now are fine, e.g. “the water cooler”. What we need is to stop chasing new metaphors, and execute on the ones we have.

Rich Hoeg, Honeywell: Rich is a manager in Honeywell’s corporate IT group (and a Connectbeam customer). He talked about the adoption path of social software inside Honeywell, going from a departmental implementation to much wider implementation, and how his own career path mirrored that transition. He’s also a BarCamp guy. Cool to hear an honest-to-goodness geek making changes in the enterprise world.

Yatman Lai, Cisco: Yatman discussed Cisco’s initiatives around collaboration and tying together their various enterprise 2.0 apps. I think this is something we’ll see more of as time goes along. Companies are putting in place different social software apps, but they’re still siloed. Connecting these social computing apps will become more important in the future.

Stowe “The Flow”: Stowe Boyd apparently gave quite the interesting talk. I didn’t attend it, because Connectbeam had a presentation opposite his. But from what I gather, the most memorable claim Stowe made was that there’s no such thing as attention overload. That we all can be trained to watch a constant flow of information and activities go by, and get our work done. I think there will be a segment of the population that does indeed do this. If you can swing it, you’re going to be well-positioned to be in-the-know about the latest happenings and act on them.

But in talking with various people after the presentation, there was a sense that Stowe was overestimating the general population’s ability and desire to train their minds to handle both the work they need to do for their employers, and to take in the cascade of information flowing by (e.g. Twitter, FriendFeed). Realistically, we’ll asynchronously take in information, not in constant real-time.

We’re Becoming Day Traders in Information: I heard this quote a few times, not sure who said it (maybe someone from Sxipper or Workstreamr). It’s an intriguing idea. Each unit of information has value, and that value varies by person and circumstances. Things like Twitter are the trading platform. Of course, the problem with this analogy is that actual day traders work with stocks, cattle futures, options, etc. Someone has to actually produce something. If all we do is trade in information and conversations, who’s making stuff?

Mark Koenig: Mark is an analyst with Saugatuck Technology. He gave the closing keynote for Day 1, Social Computing and the Enterprise: Closing the Gaps. What are the gaps?

  1. Social network integration
  2. Information relevance
  3. Integration with enterprise applications
  4. The culture shift

Mark also believes that in the enterprise market, externally focused social computing will grow more than internally focused. Why? Easier ROI, and more of a sales orientation.

Charlene Li: Former Forrester analyst Charlene Li led off Day 2 with her presentation, Harnessing the Implicit Value of the Social Graph. Now running her own strategic consulting firm, Altimeter Group, Charlene focused on how future applications will weave “social” into everything they do. It will be a part of the experience, not a distinct, standalone social network thing. As she says, “social networks will be like air”. She ran the gamut of technologies in this presentation. You can see some tweets from the presentation here.

One thing she said was to “prepare for the demise of the org chart”. When I see things like that, I do laugh a bit. The org chart isn’t going anywhere. Enterprises will continue to have reporting structures for the next hundred years and beyond. What will change is the siloed way in which people only work with people within their reporting structures. Tearing down those walls will be an ongoing theme inside companies.

Neeraj Mathur, Sun Micro: Neeraj talked about Sun’s internal initiatives around social computing in his session, “Building Social Capital in an Enterprise”. Sun is pretty advanced in its internal efforts. One particular element stuck with me: the rating that each employee receives based on their participation in Sun’s social software. Called Community Equity, the personal rating is built on these elements (thanks to Lawrence Liu for tweeting them):

Contribution Q + Skills Q + Participation Q + Role Q = Personal Q
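Taken literally, Community Equity is a simple additive score. A minimal sketch of the formula above, with hypothetical component values and no claim to Sun’s actual scale or weighting:

```python
def personal_q(contribution_q, skills_q, participation_q, role_q):
    """Contribution Q + Skills Q + Participation Q + Role Q = Personal Q."""
    return contribution_q + skills_q + participation_q + role_q

# hypothetical component scores for one employee
print(personal_q(contribution_q=3.0, skills_q=2.5,
                 participation_q=4.0, role_q=1.5))  # 11.0
```

The interesting design questions are upstream of this sum: how each Q component is measured, and whether the components should be weighted rather than added equally.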

Sun’s approach is an implementation of an idea that Harvard Professor Andrew McAfee put out there, Should Knowledge Workers Have Enterprise 2.0 Ratings? It’s an interesting idea – companies can gain a lot of value from social computing, why not recognize those that do it well? Of course, it’s also got potential for unintended consequences, so it needs to be monitored.

Laura “Pistachio” Fitton: Twitter-ologist Laura Fitton led a panel called “Finding Serendipitous Content Through Context”. The session covered the value of serendipity, and the ways in which it happens. The panel included executives from Aggregate Knowledge and Zemanta, as well as Carla Thompson from Guidewire.

What interested me was the notion of what serendipity really is. For example, Zemanta does text matching on your blog post to find other blog posts that are related. So there’s an element of structured search in bringing in related articles.

So I asked this question: does persistent keyword search, delivered as RSS or email, count as “serendipity”? Carla’s response was no, it doesn’t; serendipity is based on randomness. It’s an interesting topic, potentially worth a future blog post.

And of course, Laura encouraged people to tweet during the session, using the hash tag #serendip. The audience tweets are a good read.

Daniela Barbosa, Dow Jones, DataPortability.org: Daniela works for Dow Jones, with coverage of their Synaptica offering. She’s also an ardent supporter of data portability, serving as Chairperson of DataPortability.org. Her session was titled Pulling the Threads on User Data. She’s a librarian by training, but she kicks butt in leading edge thinking about data portability and organization. In her presentation, she says she’s just like you. She then pops up this picture of her computer at work:


Wow – now that’s some flow. Stowe Boyd would be proud.

Wrapping up: Those are some notes from what I heard there. I couldn’t get to everything, as I had booth duties for Connectbeam. Did plenty of demos for people. And got to meet many people in real life that I have followed and talked with online. Looking forward to Defrag 2009.


See this post on FriendFeed: http://friendfeed.com/search?q=%22Defrag+2008+Notes+-+Picasso%2C+Information+Day+Trading%2C+Stowe+%E2%80%9CThe+Flow%E2%80%9D+Boyd%22&who=everyone

Tim O’Reilly Course Corrects the Definition of Web 2.0

eBay was Web 2.0 before Web 2.0 was cool.

Tim O’Reilly wrote a nice piece the other day Why Dell.com (was) More Enterprise 2.0 Than Dell IdeaStorm. In the post, he re-asserted the proper definition of Web 2.0. Here’s a quote:

I define Web 2.0 as the design of systems that harness network effects to get better the more people use them, or more colloquially, as “harnessing collective intelligence.” This includes explicit network-enabled collaboration, to be sure, but it should encompass every way that people connected to a network create synergistic effects.

The impetus for Tim’s post was that people leave Google and its search engine off the list of Web 2.0 companies. As Tim writes, seeing the power of what Google’s search engine did was part of the notion of Web 2.0.

Here’s a way to represent what Tim is talking about:

I like that Tim sent out this reminder about Web 2.0. Here’s how Web 2.0 has become defined over the years:

  • Social networking
  • Ad supported
  • Bootstrapped
  • Fun and games
  • Anything that’s a web service

This seems to have fundamentally altered Web 2.0. I’m reminded of a post that Allen Stern wrote back in July, CenterNetworks Asks: How Many Web 2.0 Services Have Gone Mainstream? In that post, he wondered how many Web 2.0 companies will really ever go mainstream.

Check out the comments on Allen’s blog and on FriendFeed:

I would say MySpace but that really came before Web 2.0

mainstream – Facebook/hi5/bebo, Flickr, Youtube, Slide, Photobucket, Rockyou

Oh and you’ll have to add Gmail to the list as well.

I’ve yet to see one, really. ;)

Is eBay web 2.0-ish? [this was mine]

Agree with Facebook, MySpace, YouTube. I’d add Blogs as another 2.0 winner. I’d put eBay and Amazon as 1.0 success stories

A better way to ask this is “which web services since 2000 have gone mainstream?” Blogger. Flickr. Gmail. Facebook. MySpace. Digg. YouTube. WordPress. Live Spaces

Look at those responses! You can see a massive disconnect between Tim O’Reilly’s original formulation of Web 2.0 and where we are today.

One example I see in there: Gmail. Gmail is a hosted email application. Does Gmail get better the more people use it? No. There’s no internal Gmail application functionality that makes it better the more people use it. It’s just an email app, the way Yahoo Mail is an email app. Being an ad-supported web service doesn’t, strictly speaking, make a company Web 2.0.

Terms do take on a life of their own, and if the societal consensus for a definition changes over time, then that’s the new definition. But the responses to Allen Stern’s post highlight two problems:

  • People discount or ignore key components of the Web 2.0 definition
  • Web 2.0 is slowly coming to mean everything. Which means nothing.

Finally, Tim’s post helps me differentiate the times I should use “social media” as opposed to “Web 2.0”.

What do you think? Should we go back to first principles in defining what really is “Web 2.0”?


See this post on FriendFeed: http://friendfeed.com/search?q=%22Tim+O%E2%80%99Reilly+Course+Corrects+the+Definition+of+Web+2.0%22&who=everyone

Improving Search and Discovery: My Explicit Is Your Implicit

Two recent posts on the implicit web provide two different takes, and good context for what the implicit web is.

Richard MacManus of ReadWriteWeb asks, Aggregate Knowledge’s Content Discovery – How Good is it, Really? Aggregate Knowledge runs a large-scale wisdom-of-crowds application, suggesting content for readers of a given article based on what others also viewed. For instance, on the Business Week site, you might be reading an article about the Apple iPod. Next to the article are the articles that readers of the Apple iPod article also viewed. MacManus finds the Aggregate Knowledge recommendations to be not very relevant. The recommended articles had no relationship to Apple or the iPod.

Over at CenterNetworks, Allen Stern writes that Toluu Helps You Like What Your Friends Like. Toluu lets you import your RSS feeds and connect with friends who have uploaded theirs. It applies some secret sauce to analyze your friends’ feeds and create recommendations for you. Stern finds the service a bit boring, as all the recommendations based on his friends’ feeds were the same.

In the case of Aggregate Knowledge, the recommendations were based on too wide a pipe. The implicit actions – clicks by everybody – led to irrelevant results because you essentially get the most popular items. In the case of Toluu, the recommendations were based on too narrow a pipe. The common perspectives of like-minded friends meant the recommendations were too homogeneous.

Both of these companies leverage the activities of others to deliver recommendations. The actions of others are the implicit activities used to improve search and discovery. A great, familiar example of applying implicit activities is Google search. Google analyzes links among websites and clicks in response to search results. Those links and clicks are the implicit actions that fuel its search relevance.

Which leads to an important consideration about implicit activities. You need a lot of explicit activity to have implicit activity.


That’s right. Implicit activities don’t exist in a vacuum. They start life as the explicit actions of somebody. This is a point that Harvard’s Andrew McAfee makes in a recent post.

Let’s take this thought a step further. Not all explicit actions are created equal. There are those that occur “in-the-flow” and those that occur “above-the-flow”, a smart distinction described by Michael Idinopulos. In-the-flow actions are part of the normal course of consumer activity, while above-the-flow actions require an extra step by the user. A couple of examples describe this further:

  • In-the-flow: clicks, purchases, bookmarks
  • Above-the-flow: tags, links, import of friends

Above-the-flow actions are hard to elicit from consumers. There needs to be something in it for them. Websites that require a majority of above-the-flow actions will find themselves challenged to grow quickly. They’d better have something really good to offer (such as Amazon.com’s purchase experience). Otherwise, the website should be able to survive on the participation of just a few users who provide value to the majority (e.g. YouTube).

So with all that in mind, let’s look at a few companies with actual or potential uses of the explicit-implicit duality:

Google Search

In an interview with VentureBeat, Google VP Marissa Mayer talks about two different forms of social search:

  1. Users label search results and share labels with friends. This labeling becomes the implicit activity that helps improve search results for others. This model is way too above-the-flow. Labeling? Sharing with friends? After experimenting with this, Mayer states that “overall the annotation model needs to evolve.” Not surprising.
  2. Google looks at your in-the-flow activity of emailing friends (via Gmail). It then marries the search histories of your most frequent email contacts to subtly alter the search result rankings. All of this implicit activity is derived from in-the-flow activities. For searches on specific topics, the more narrow implicit activity pipe of just your Gmail contacts is an interesting idea.
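The second model can be sketched as a small re-ranking step over ordinary search results; the scoring formula and boost factor below are my own assumptions, not Google’s:

```python
def rerank(results, contact_clicks, boost=0.2):
    """Subtly promote results that frequent email contacts have clicked.

    results: list of (url, base_score) from the normal ranker.
    contact_clicks: url -> number of clicks by the user's top contacts."""
    def score(item):
        url, base = item
        return base * (1.0 + boost * contact_clicks.get(url, 0))
    return sorted(results, key=score, reverse=True)

results = [("a.example", 1.0), ("b.example", 0.95)]
# two contact clicks lift b.example past the higher base score
print(rerank(results, {"b.example": 2}))
```

Because the boost multiplies the base score rather than replacing it, the contacts’ in-the-flow signal only nudges rankings, which matches the “subtly alter” framing above.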


ThisNext

ThisNext is a platform for users to build out their own product recommendations. They find products on the web, grab an image, and rate and write about the product. Power users emerge as style mavens. The site is open to non-members for searching and browsing of products.

ThisNext probably relies a bit too heavily on above-the-flow activities. It takes a lot of work to find products, add them to your list and provide reviews. It also suffers from too wide a pipe, in that there are a lot of people whose recommendations I wouldn’t trust. How do I know whom to trust on ThisNext?

Amazon Grapevine

Amazon, on the other hand, has a leg up in this sort of model. First, its recommendations are built on a high level of in-the-flow activities – users purchasing things they need. This is the “people who bought this also bought that” recommendation model. Rather than depend on the product whims of individuals, it uses good ol’ sales numbers (plus some secret sauce as well) for recommendations. This is a form of collaborative filtering.
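The “people who bought this also bought that” pattern can be sketched as item-to-item co-occurrence counting over purchase baskets; Amazon’s production system layers much more secret sauce on top of something like this:

```python
from collections import defaultdict
from itertools import combinations

def build_cooccurrence(baskets):
    """Count how often each pair of items appears in the same basket."""
    co = defaultdict(lambda: defaultdict(int))
    for basket in baskets:
        for a, b in combinations(sorted(set(basket)), 2):
            co[a][b] += 1
            co[b][a] += 1
    return co

def also_bought(co, item, k=3):
    """Top-k items most often purchased alongside `item`."""
    ranked = sorted(co[item].items(), key=lambda kv: -kv[1])
    return [other for other, _ in ranked[:k]]

baskets = [["book", "lamp"], ["book", "lamp", "desk"], ["book", "lamp"]]
co = build_cooccurrence(baskets)
print(also_bought(co, "book"))  # ['lamp', 'desk']
```

The appeal of this in-the-flow approach is exactly the point above: the input is ordinary purchase data that shoppers generate anyway, with no extra effort asked of them.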

Amazon Grapevine is a way of setting the pipe for implicit activities. The explicit activity is the review or rating. These activities are fed to your friends on Facebook. One possibility for Amazon down the road is to use the built-up reviews and ratings of your friends to influence the recommendations it provides on its website. Such a model would require some above-the-flow actions – add the Grapevine application, maintain your account and connections on Facebook. But these aren’t that onerous; the Facebook social network continues to be an explicit activity that has high value for individuals.

Yahoo Search

Yahoo bought the bookmarking and tag service del.icio.us back in 2005. It’s hard to know what, if anything, they’ve done with that service. But one intriguing possibility was hinted at in this TechCrunch post. The del.icio.us activity associated with a given web page is integrated into the search results. Yahoo search results would be ranked not just on links and previous clicks, but also on the number of times the web page had been bookmarked on del.icio.us. And the tags associated with the website would be displayed, giving additional context to the site and enabling a user to click on the tags to see what other sites share similar characteristics.

This takes an above-the-flow activity performed by a relative few – bookmarking and tagging on del.icio.us – and turns it into implicit activity that helps a larger number of users. But with the Microsoft bid, who knows whether something like this could happen.
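As a sketch of how such a blend could work, here’s a toy scoring function that folds log-scaled bookmark counts in alongside traditional link and click signals; the weights and formula are illustrative, not anything Yahoo published:

```python
import math

def blended_score(link_score, click_score, bookmark_count,
                  weights=(0.6, 0.3, 0.1)):
    """Blend classic relevance signals with social-bookmark counts.

    Bookmarks are log-scaled so a heavily bookmarked page gets a
    meaningful but bounded boost rather than dominating the ranking."""
    w_link, w_click, w_bm = weights
    return (w_link * link_score
            + w_click * click_score
            + w_bm * math.log1p(bookmark_count))

# same page with and without 120 del.icio.us-style bookmarks
print(blended_score(0.8, 0.5, 0) < blended_score(0.8, 0.5, 120))  # True
```

The log scaling reflects the dynamic described above: bookmarking is done by a relative few, so raw counts are small and skewed, but even a modest count is a strong quality signal for everyone else.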

The use of implicit activity is a powerful basis to help users find content. Just don’t burden your users with too much of the wrong kind of explicit activity to get there. Two factors to consider in the use of implicit activity:

  1. How wide is the pipe of implicit activities?
  2. How much above-the-flow vs. in-the-flow activity is required?
