Chlorine for the “Cesspool”: Why Google, Microsoft, and Yahoo Will Battle to Win Twitter

by Steve Broback on April 7, 2009

After several months of pontificating offline about this issue, I have finally decided to commit it to keystrokes. In a nutshell, it lays out the case for why we built a conference about Twitter, and why we feel the service is a key strategic acquisition target. There is much more to the story than what I present here, but this captures the essence. If you want to offer the best Web search engine, you need to buy Twitter.


The bottom line is that whoever acquires Twitter will in essence take possession of an army of millions (soon to be tens of millions) of humans who are actively, accurately, and enthusiastically meta-tagging pages. In the arena of human-augmented search, Mahalo is a useful wheelbarrow, while Twitter is a fleet of 747 cargo planes. The search engine that integrates Twitter data properly will likely become recognized as the “best” search engine out there.

Let’s consider the search landscape and the vast fortune in stock value that Google has acquired (significantly at Yahoo’s expense).

In the mid-1990s, Yahoo was the default destination site for Web surfers. It was the starting point for most people looking to find something on the Web. It used a simple but effective method of determining which sites were relevant to which search terms: it analyzed what words were on the page, and heavily weighted the meta-tag information the author entered to categorize the content. The more closely this tagged text aligned with a search term, the higher the page placed in search results.
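As a rough illustration, that relevance method can be sketched in a few lines of Python. This is a hypothetical simplification, not Yahoo’s actual algorithm, and the `META_WEIGHT` constant is my own assumption:

```python
# Hypothetical sketch of mid-1990s keyword relevance scoring, where
# author-supplied meta tags are weighted more heavily than body text.
# Illustrative only -- not Yahoo's actual algorithm.

META_WEIGHT = 3.0  # assumed: a meta-tag match counts triple a body-text match

def relevance(query: str, page_body: str, meta_keywords: list) -> float:
    terms = set(query.lower().split())
    body_hits = sum(1 for w in page_body.lower().split() if w in terms)
    meta_hits = sum(1 for w in meta_keywords if w.lower() in terms)
    return body_hits + META_WEIGHT * meta_hits

# An honest page and a gamed page can score identically, which is
# exactly why this signal broke down:
honest = relevance("best digital camera",
                   "we review the best digital camera models",
                   ["digital", "camera", "review"])
gamed = relevance("best digital camera",
                  "buy cheap pills online",        # page is really about pills
                  ["best", "digital", "camera"])   # tags stuffed with the query
```

Because the scorer trusts the author’s own tags, stuffing them with popular query terms is enough to rank a wholly unrelated page.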

The problem was that this approach was quickly gamed by those trying to sell things online. Soon, content and tags could no longer serve as the central measure of relevance. One would search for “best digital camera” and find pages dedicated to Viagra. Often the page’s tags and invisible background copy would say one thing, while the big image file in the center of the page said another.

Then Google introduced a scheme in which inbound links (“citations”) determined where a page placed. This worked well for quite some time. No reasonable human is inclined to link to a giant banner ad for Viagra, so search results tended to be clean and correct. People migrated in droves to this new and better provider.
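The citation idea can be sketched as a simple inbound-link counter. This is a deliberate simplification for illustration; the real PageRank algorithm also weights each link by the quality of the page it comes from:

```python
# Illustrative sketch: rank pages by how many pages link to them.
# A simplification of link-based ranking, not the actual PageRank formula.
from collections import Counter

def rank_by_citations(links: list) -> list:
    """links: (from_page, to_page) pairs; returns pages, most-cited first."""
    counts = Counter(to_page for _, to_page in links)
    return [page for page, _ in counts.most_common()]

links = [
    ("blog-a", "camera-review"),
    ("blog-b", "camera-review"),
    ("forum-thread", "camera-review"),
    ("spam-site", "viagra-ad"),  # a lone self-promotional link
]
ranking = rank_by_citations(links)
```

Three independent humans chose to cite the camera review, so it outranks the self-promoted ad; that, in miniature, is why link-based ranking beat tag-based ranking.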

The problem is that Google is now being effectively gamed. Have you performed a Google search lately? Does anyone remember the pristine results we saw back in 1997? If you do, you likely agree with Google’s CEO that the Web has become a “cesspool” populated with splogs and irrelevant marketing drivel, and Google isn’t filtering it out particularly well. It’s not their fault; it’s just that the spammers are innovating big-time.

Consider what happens if Twitter data is incorporated into Google, Yahoo, or Live Search. Any tweet that contains a link effectively serves as both a citation and a meta-tag for a page. This tag can be weighted for relevance by any number of factors (location, time, retweets, followership, etc.). I’m not saying it can’t be gamed, but I am saying it’s difficult to do.
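One hedged sketch of how such a weighting might look. The factor names (`retweets`, `followers`, `age_hours`) and the formula itself are my own assumptions for illustration, not anything Twitter or a search engine has published:

```python
# Hypothetical weighting of a tweeted link as a search-relevance signal.
# The factors and the formula are illustrative assumptions only.
import math

def tweet_citation_weight(retweets: int, followers: int, age_hours: float) -> float:
    engagement = math.log1p(retweets)    # diminishing returns on retweets
    reach = math.log1p(followers)        # diminishing returns on followership
    freshness = 1.0 / (1.0 + age_hours / 24.0)  # decays over a scale of days
    return (1.0 + engagement) * (1.0 + reach) * freshness

def page_score(tweets: list) -> float:
    """tweets: (retweets, followers, age_hours) tuples for links to one page."""
    return sum(tweet_citation_weight(*t) for t in tweets)
```

Gaming a signal like this would require real accounts with real followers retweeting a link over time, which is far harder to fake at scale than stuffing meta tags or spinning up a link farm.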

Many bloggers are already talking about how Twitter is now the first place they go to get the information they want. Many others have read the writing on the wall as well.

It’s true that the data can be integrated now, and a potential acquirer could conceivably just pay Twitter for the information, but in that scenario they’d lack the proprietary edge that is desperately needed.

