February 7, 2009
Search, filtering, semantics, and more: that's the next wave of innovation in the real-time web, and that's why Facebook opening up status updates is a big deal.
May 27, 2008
In a recent post, I suggested that the semantic web might hold a solution for managing noise in social media. The semantic web can auto-generate tags for content, and these tags can be used to filter out subjects you don’t want to see.
As a follow-up, I wanted to see how four different services perform in terms of recommending tags for different content.
I looked at four services, each of which provides tag recommendations. Here they are, along with some information about how each approaches its recommendations:
Twine and Diigo take the initiative, applying tags based on an analysis of the content itself. del.icio.us and Faviki follow a crowdsourced approach, leveraging the previous tagging work of members to provide recommendations.
Note that Faviki just opened its public beta, so it suffers from a lack of activity around content thus far. That will be apparent in the analysis below.
I ran six articles through the four tagging services:
The tag recommendations are below. Headline on the results? Recommendations appear to be a work in progress.
First, the New York Times iPhone article. Twine wins, handily. Diigo gave it a shot, but the nytimes tags really miss the mark. del.icio.us and Faviki weren't even in the game.
Next, Robert Seidman’s post about Tivo. Twine comes up with several good tags. Diigo has something relevant. And again, del.icio.us and Faviki weren’t even in the game.
Now we get to the trick article: Michael Arrington's no-text blog entry, "Twitter!" The tables turn here. Twine comes up empty for the post. Based on the post's presence on Techmeme and the 400+ comments on the blog post, a lot of people apparently bookmarked it. This gives del.icio.us and Faviki something to work with, as seen below. And Diigo offers the single tag of…twitter!
Switching gears, this is a running-related article covering one of the top athletes in the world, Paula Radcliffe. Twine comes out the best here. Diigo manages "bombshell"…nice. del.icio.us and Faviki come up empty, presumably because no users bookmarked this article. And none of them could come up with tags of "running" or "marathon".
I figured I’d run one of my own blog posts through this test. The post has been saved to del.icio.us a few times, so I figured there’d be something to work with there. Strangely, Twine comes up empty. Faviki…nuthin’.
Finally, I threw some science at the services. This article says that antioxidants don't actually deliver what is promised. Twine comes up with a lot of tags, but misses the word "antioxidants". Diigo gets only "antioxidant". And someone must have bookmarked the article on del.icio.us, because it has a tag. Faviki…nada.
Twine clearly has the most advanced tag recommendation engine. It generates a bevy of tags. One thing I noticed between Twine and Diigo:
Obviously my sample size isn't statistically significant, but I see that pattern in the above results.
The other thing to note is how well these services auto-generate tags. For instance, the antioxidant article has 685 words. Both Twine and Diigo were able to pull out only what's relevant from all those words.
With del.icio.us and Faviki, if someone else hasn’t previously tagged the content, they don’t generate tags. Crowdsourced tagging – free form on del.icio.us, structured per Wikipedia on Faviki – still has a lot of value though. Nothing like human eyes assessing what an article is about. Faviki will get better with time and activity.
Note that both Twine and Diigo allow manually entered tags as well, getting the best of both auto-generated and human-generated tagging.
When it comes to using tags as a way to filter noise in social media, both system- and human-generated tags will be needed.
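A minimal sketch of what combining the two could look like. This is purely illustrative (the function and tag names are my own, not from any of these services): merge the machine-suggested tags with the human-entered ones, deduplicating case-insensitively and letting the human spelling win.

```python
def merged_tags(system_tags, human_tags):
    """Combine auto-generated and human-entered tags, deduplicating
    case-insensitively and preferring the human spelling."""
    merged = {t.lower(): t for t in system_tags}
    merged.update({t.lower(): t for t in human_tags})  # human tags win
    return sorted(merged.values(), key=str.lower)

# 'iphone' (machine) and 'iPhone' (human) collapse to the human version.
print(merged_tags(["iphone", "Apple", "NYTimes"], ["Apple Inc", "iPhone"]))
# ['Apple', 'Apple Inc', 'iPhone', 'NYTimes']
```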
The results of this simple test show the promise of tagging, and where the work lies ahead to create a robust semantic tagging system that could be used for noise control.
See this item on FriendFeed: http://friendfeed.com/search?q=%22Tag+Recommendations+for+Content%3A+Ready+to+Filter+Noise%3F%22&public=1
May 21, 2008
On a FriendFeed discussion about the noise on the Web in general, Lindsay Donaghe posted this comment:
Actually I think it’s the same problem we have in general with the firehose of information we’re exposed (or expose ourselves) to on a daily basis. The struggle of where to apply our attention will only be resolved once someone develops intelligent agents to filter the bad stuff and alert us to the good stuff. Wish someone would hurry up and make those. That will be the ultimate killer app.
Louis Gray wrote this recently in his post Content Filters Proving Evasive for RSS, Social Media Sites:
So far, despite many users calling for content-based filters, solutions to block keywords or topics are missing from the vast majority of information spigots.
The recent meme about FriendFeed noise points to the frustration of some people with an inability to manage what content hits their screens. The two comments above underscore this feeling.
Here's my own example. Dave Winer has two passions: technology and politics. For me personally, technology = signal, politics = noise. I went through his FriendFeed stream for the month of May, and here are the 38 different political terms that show up:
So what to do? I’d like to suggest that the semantic web might be a solution for down the road.
The semantic web is still a confusing term. Two quotes from Wikipedia help describe it. This quote tells you generally what it's about, and importantly notes that much development lies in the future:
The Semantic Web is an evolving extension of the World Wide Web in which the semantics of information and services on the web is defined, making it possible for the web to understand and satisfy the requests of people and machines to use the web content. At its core, the semantic web comprises a set of design principles, collaborative working groups, and a variety of enabling technologies. Some elements of the semantic web are expressed as prospective future possibilities that are yet to be implemented or realized.
This quote describes the problem that the semantic web will solve:
With HTML and a tool to render it (perhaps Web browser software, perhaps another user agent), one can create and present a page that lists items for sale. The HTML of this catalog page can make simple, document-level assertions such as “this document’s title is ‘Widget Superstore'”. But there is no capability within the HTML itself to assert unambiguously that, for example, item number X586172 is an Acme Gizmo with a retail price of €199, or that it is a consumer product.
It’s that last sentence there that addresses the noise issue. How does a server know that part X586172 can be categorized as a “consumer product”? That’s where the semantic web comes into play.
And how the noise can be controlled on FriendFeed.
One way to think of the semantic web is as tagging on steroids. In the example above, part X586172 is tagged as “consumer product”. And the tagging occurs without human intervention.
This is what's needed on FriendFeed: the ability to take a wide range of terms that humans understand as related, and express that relationship as a tag.
Here’s what such an algorithm would do for Dave Winer’s political terms:
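As a rough sketch, the mapping could be as simple as a lookup from specific terms to a broader category tag. The terms and the mapping below are illustrative; a real semantic tagger would derive these relationships from a knowledge base like Wikipedia or WordNet rather than a hand-built table.

```python
# Hypothetical mapping from specific terms to a broader category tag.
TERM_TO_TAG = {
    "obama": "politics",
    "hillary clinton": "politics",
    "mitt romney": "politics",
    "congress": "politics",
    "robert reich": "politics",
}

def tags_for(text):
    """Return the set of category tags whose terms appear in the text."""
    lower = text.lower()
    return {tag for term, tag in TERM_TO_TAG.items() if term in lower}

print(tags_for("Obama addresses Congress today"))  # {'politics'}
```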
Now, imagine this in FriendFeed. Semantically derived tags are appended to every item that flows through. Meanwhile, users get a new 'Hide' feature: hide by topic. They could elect to hide terms on a one-by-one basis. For instance, I'll hide "robert reich". I'll hide "republicans". I'll hide "congress". I'll hide "obama". I'll hide "mitt romney". I'll hide…well, you get the picture.
In addition, users could just hide all items with the tag “politics”, and be done with it. Simple.
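The hide-by-tag filter itself is trivial once the tags exist. A minimal sketch, assuming each feed item already carries its semantically derived tags (the item structure here is invented for illustration):

```python
def visible_items(items, hidden_tags):
    """Drop any feed item that carries at least one hidden tag."""
    hidden = set(hidden_tags)
    return [item for item in items if not (set(item["tags"]) & hidden)]

stream = [
    {"title": "Obama wins primary", "tags": ["politics"]},
    {"title": "New iPhone rumors", "tags": ["technology"]},
]
# Hiding "politics" leaves only the technology item.
print(visible_items(stream, {"politics"}))
```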
This could apply for all manner of topics: football, banking, Iraq, etc.
I'm not sure anything quite with this purpose exists yet. Reuters has been a leading player in the semantic web with its Open Calais initiative. However, Open Calais focuses its tagging on people, places, and companies. So if Open Calais were applied to Dave Winer's FriendFeed stream, it would produce a lot of tags related to those entities, but not metadata tags like "politics".
A company called GroupSwim describes its semantic tagging approach this way:
We use natural language processing to analyze the data our customers put into their sites. Our datasets tend to be much smaller but are high quality since someone doesn’t add something to GroupSwim unless they want to share it. Then, we compare the language used in the content to other semantic sources including WordNet, Wikipedia, etc. to do our automatic tagging and analysis.
Interesting, though I'm not sure what tags they produce. But it does give insight into a requirement: a core foundation of data against which all other content can be compared to derive tags. Something that would correctly map Obama and Clinton to a politics tag.
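To make the "compare against a core foundation" idea concrete, here is a toy sketch. The reference vocabularies are stand-ins I made up for what a real system would pull from WordNet or Wikipedia; the overlap threshold is likewise arbitrary.

```python
# Hypothetical reference vocabularies per tag, standing in for a real
# foundation dataset derived from sources like WordNet or Wikipedia.
REFERENCE = {
    "politics": {"obama", "clinton", "senate", "election", "congress"},
    "running": {"marathon", "radcliffe", "race", "mile", "training"},
}

def suggest_tags(text, min_hits=2):
    """Suggest tags whose reference vocabulary overlaps the text enough."""
    words = set(text.lower().split())
    return [tag for tag, vocab in REFERENCE.items()
            if len(words & vocab) >= min_hits]

print(suggest_tags("paula radcliffe wins the london marathon race"))
# ['running']
```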
I'm sure there are other interesting approaches. It'd be great if someone is working on something in this area.
If anyone reading this knows of any semantic approaches that can apply metadata type of tags, feel free to leave a comment.
See this item on FriendFeed: http://friendfeed.com/search?q=who%3Aeveryone+%22friendfeed+noise+control+semantic+web+dave+winer%22
May 19, 2008
We recently went through a Twitter meme about whether it was mainstream yet. There is no debate as to whether FriendFeed is mainstream today – it’s not. The question really is, will FriendFeed ever see mainstream adoption? Robert Scoble played both sides of the coin (here, here).
FriendFeed will go mainstream. My definition of mainstream: 33% of Internet users are on it. It’s just going to take time, and it’ll look different from the way it does now.
Four points to cover in this mainstreaming question:
Harvard professor John Gourville has a great framework for analyzing whether a new technology will succeed. His "9x problem" says a new technology has to be nine times better than what it replaces, for two reasons:
What does this mean in everyday terms? There’s comfort in the status quo, and fear of the unknown.
There's the argument that FriendFeed is a complement, not a replacement, to existing services. There's some truth there, but the bottom line is that we only have 24 hours in a day. Where will we end up spending our time?
Here’s what FriendFeed will replace:
In terms of the “9x problem”, the nice thing is that people do not have to replace what they already do. Visit CNN? You can keep doing that. Like to see what’s on Digg? You can keep doing that.
Searching on FriendFeed will advance. You'll be able to search on a keyword or a semantically derived tag, and specify a minimum number of shares, likes, or comments.
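A sketch of what such a search might look like. This is speculative on my part, not a real FriendFeed API; the item fields and parameter names are invented for illustration.

```python
def search(items, keyword=None, tag=None, min_likes=0, min_comments=0):
    """Hypothetical search: match a keyword or semantic tag, then
    apply minimum engagement thresholds."""
    hits = []
    for item in items:
        if keyword and keyword.lower() not in item["title"].lower():
            continue
        if tag and tag not in item["tags"]:
            continue
        if item["likes"] >= min_likes and item["comments"] >= min_comments:
            hits.append(item)
    return hits

feed = [
    {"title": "iPhone 3G rumors", "tags": ["technology"],
     "likes": 12, "comments": 40},
    {"title": "Obama wins primary", "tags": ["politics"],
     "likes": 3, "comments": 5},
]
# Technology items with at least 10 comments: just the iPhone post.
print(search(feed, tag="technology", min_comments=10))
```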
FriendFeed doesn’t require you to leave your favorite service. It’s the FriendFeed experience that will slowly steal more of your time. That mitigates the issue of people overvaluing what they already have. They won’t lose it, they’ll just spend less time on it. Thomas Hawk continues to be an active participant on Flickr, but more of his time is migrating to FriendFeed. As he says:
One of the best things about FriendFeed is that it gives you much of what you get from your favorite sites on the internet but in better ways.
I think FriendFeed will have the 9x problem beat, but it will take time.
The chart below, courtesy of Visualizing Economics, shows how long several popular technologies took to be adopted in the U.S.
Using my mainstream definition of 33% household penetration, here’s roughly when several technologies went mainstream:
In addition, here are some rough estimates of current levels of adoption for other technologies. Estimates are based on the number of U.S. Internet users, the recent Universal McCann survey of social media usage (warning, PDF opens with this link) and search engine rankings.
Yes, the date of FriendFeed mainstream adoption is pure speculation. But looking at the adoption rates of several other technologies, ten years from now is within reason (i.e. 2018). The RSS adoption is a decent benchmark.
Not surprisingly, Twitter dominates the content sources. Original blog posts are a distant #2 content source, and Google Reader shares are #3. That speaks volumes about the world of early technology adopters.
When FriendFeed becomes mainstream, the sources of content will change pretty dramatically as shown in this table:
The biggest change is in the FriendFeed Direct Post. Relative to blogging or Twittering, putting someone else's content into the FriendFeed stream is the easiest thing for people to do. FriendFeed Direct Posts are similar to Diggs or Stumbles. Since all the content we create, submit, like, or comment on is part of our personal TV broadcast on FriendFeed, Direct Posts can be just as much fun for users as newly created content by someone you know.
Direct Posts will draw from both traditional media sites as well as from other people’s blogs. Expect media sites and blogs to have a “Post to FriendFeed” link on every article.
Twitter drops as a percentage of content here. Why? FriendFeed’s commenting system replaces a lot of what people like about Twitter. Blogs drop a bit as well. More people will blog in 2018, but many of those will be sporadic bloggers. Still, 10% of the content consisting of original author submissions is pretty good.
Google Reader shares hold as a percentage as more people recognize the value of RSS versus regular-old bookmarks inside their browsers. ‘Other’ goes up, because who knows what cool other stuff will be introduced over the next ten years.
Human nature won’t change. The same stuff that animates people today will continue to do so in the future. Politics, sex, technology and sports will be leaders in terms of what the content will be. There will be plenty of other topics as well. I can see the Iowa Chicks Knitting Club sharing and commenting on new patterns via FriendFeed.
One issue that will arise is that people will have multiple interests. They’ll essentially have various types of programming on their FriendFeed “TV channels”. For a good example of that today, see Dave Winer’s FriendFeed stream. Dave has two passions: technology and politics. I like the technology stuff, but I tend to ignore the political streams.
This will become a bigger issue as FriendFeed expands. I personally don't mind the noise from the people I follow today, since my subscriptions generally stick to recurring topics. But as more mainstream users come on board, the divergence of topics for any single person will likely increase.
FriendFeed will employ semantic web technologies to identify the topic of submitted items. These semantically-derived tags will be used to categorize content. Users can then subscribe only to content matching specific categories. How might this work?
A Dave Winer post with “Obama” in it is categorized as Politics. I could choose to hide all Dave Winer updates that are categorized in Politics.
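Sketched out, a hide rule would pair an author with a category, with a wildcard for hiding a category across everyone. Again, this is my speculation about how FriendFeed could work, and all the names and structures below are invented for illustration.

```python
def filtered_feed(items, hide_rules):
    """hide_rules is a set of (author, category) pairs;
    '*' as the author hides a category for all authors."""
    return [
        item for item in items
        if (item["author"], item["category"]) not in hide_rules
        and ("*", item["category"]) not in hide_rules
    ]

feed = [
    {"author": "Dave Winer", "category": "politics", "title": "Obama speech"},
    {"author": "Dave Winer", "category": "technology", "title": "OPML update"},
]
# Hide only Dave Winer's politics items; his technology items survive.
print(filtered_feed(feed, {("Dave Winer", "politics")}))
```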
The constant flow of new content, the rich comments and easy ‘Likes’, and the social aspect of FriendFeed will drive its mainstream adoption. It’s a terrific platform for self-expression and for engaging others who share your interests. It’s also got real potential to be a dominant platform for research. In the future, look for stories in magazines and newspapers asking, “Are we losing productivity because of FriendFeed?”
So what do you think? Will FriendFeed ever be mainstream? In ten years?
See this item on FriendFeed : http://friendfeed.com/search?q=who%3Aeveryone+%22yes.+friendfeed+will+be+mainstream+%28by+2018%29%22