O’Reilly on Incestual Linking

Tim O’Reilly has an interesting essay on something that is starting to happen more and more on the web — when media companies include links in a story, the links tend to go to internal rather than external content.

The example O’Reilly cites is journalism professor Jay Rosen complaining about New York Times stories whose links took him to New York Times search pages rather than to external content. So, for example, if a story is about KPMG and the word KPMG is hyperlinked, I would expect the link to go to KPMG’s site. Increasingly, however, it goes to some internal “portal” page or search about KPMG.

There are a number of sites I used to read all the time that I simply stopped reading because they aggressively used this sort of linking to keep you at their site.

Now I don’t mind internal topic pages; in fact, they can be very useful. But this is just like the difference between ads and editorial content: readers need to be able to tell at a glance which is which. Companies need to clearly separate the two or risk losing the user’s trust.

It’s just a gaming site, but WoWInsider.com’s practice of doing this led me to simply stop reading it. Every time I clicked a link that appeared, from context, to lead to an external site with additional information, it turned out to point to an internal portal page I had zero interest in reading. I then had to hunt through the article to figure out which links actually went to external sites. This was especially annoying since each WoWInsider story already had an area where the author linked to the specific internal portal pages he or she deemed relevant to the story.

After the 20th or so time that happened, I concluded that a site behaving this bizarrely had little respect for its readers, and I moved on. It’s not as if WoWInsider is the only provider of WoW-related news on the web.

O’Reilly outlines some ideas on how this practice could be made helpful to users, but I’m skeptical:

When this trend spreads (and I say “when”, not “if”), this will be a tax on the utility of the web that must be counterbalanced by the utility of the intervening pages. If they are really good, with lots of useful, curated data that you wouldn’t easily find elsewhere, this may be an acceptable tax. In fact, they may even be beneficial, and a real way to increase the value of the site to its readers. If they are purely designed to capture additional clicks, they will be a degradation of the web’s fundamental currency, much like the black hat search engine pages that construct link farms out of search engine results.

I’d like to put out two guidelines for anyone adopting this “link to myself” strategy:

  1. Ensure that no more than 50% of the links on any page are to yourself. (Even this number may be too high.)
  2. Ensure that the pages you create at those destinations are truly more valuable to your readers than any other external link you might provide.

Except this is clearly nothing more than the latest version of the old trick of using JavaScript to render the Back button useless. This is the result of some idiot executives sitting around thinking, “this is the way we can squeeze more money out of our web offerings!” Making the portal pages more useful than the external links? Riiight.
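
For anyone who never ran into it, the Back-button trick boils down to the page stuffing the browser history so that pressing Back lands you right where you started. A rough sketch of one modern variant, purely illustrative and not taken from any site discussed here:

    // Illustrative sketch only: one variant of the "disable Back" trick.
    // The page pushes an extra history entry, then re-pushes itself whenever
    // the user presses Back, so Back never actually leaves the page.
    window.history.pushState(null, "", window.location.href);
    window.addEventListener("popstate", () => {
      window.history.pushState(null, "", window.location.href);
    });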

Deep Linking, Road Maps and Bibliographic Citations

TechCentralStation.Com today posted an article defending the Dallas Morning News’ policy of banning deep linking. The paper is currently sending cease and desist notices to people who deep link to its stories. It wants all visitors funneled through its front page and required to register before viewing specific stories (that this is a) stupid and b) already technically achievable without any legal threats is beside the point).
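
As an aside, the “technically achievable” part really is trivial: a site that wants to force readers through its front page can simply check the HTTP Referer header on the server and bounce outside visitors to the home page, no lawyers required. A rough sketch in Node-style TypeScript (hypothetical host name; the paper obviously hasn’t published its actual setup):

    import * as http from "http";

    // Illustrative sketch only (hypothetical host name): redirect any request
    // for an interior page whose Referer comes from outside the site, forcing
    // visitors through the registration/front page instead.
    const server = http.createServer((req, res) => {
      const referer = req.headers.referer ?? "";
      const internal = referer.startsWith("http://www.example-paper.com/");

      if (req.url !== "/" && !internal) {
        res.writeHead(302, { Location: "/" }); // bounce deep-link visitors home
        res.end();
        return;
      }

      res.writeHead(200, { "Content-Type": "text/plain" });
      res.end("story content would be served here");
    });

    server.listen(8080);

The Referer header is optional and trivially stripped, which is exactly why this is a crude hack, but it shows that the paper’s stated goal needs no new legal doctrine.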

James Miller argues that web sites have a property interest in specific URLs. According to Miller,

DallasNews.com has recently tried to limit deep links. Deep links bypass an Internet site’s home page and take a reader directly to the interior content. The normally libertarian web community is outraged over the paper’s assertion of its property rights. Property rights, however, confer the ability to exclude, for without exclusion it’s difficult to profit.

. . .

Deep linking has the potential to do vast damage to content providers. Imagine that I create my own table of contents for the New York Times and provide links to interior articles. Let’s say that my site becomes popular and attracts many viewers who used to use the site’s own, advertising-filled, table of contents. My parasite has now deprived the Times of advertising revenue. Other potential online news providers might now be reluctant to start sites because of the fear that similar parasites would eat their profits.

But a deep link is just a URL, and what is a URL? An address to a publicly available resource on the web.

Now Miller is arguing that even if I make a resource publicly available, it is still my private property and I should be able to control the distribution of its address: people shouldn’t be able to just willy-nilly republish that address if I want exclusive control over it.

Which makes about as much sense as saying that I have a property interest in my physical address and can veto any directories or maps that attempt to list or show the location of my house without first getting my permission.

To get back to information, what Miller is arguing for is akin to Newsweek asserting that nobody has a right to cite a specific article in a bibliography. After all, if I post on my web site today that I’ve read the latest issue of Newsweek and all of the articles sucked except the one on page 32 about deep linking, you can use that knowledge to skip everything but that single article. That deprives Newsweek of having you consult its own, official table of contents or browse through the magazine sequentially (and thus view the ads that Newsweek and its advertisers depend on).

Certainly it would be possible to create some sort of property rights scheme for deep linking, just as it would be possible to create one that prevented me from telling you which article in this week’s Newsweek is worth reading. But it does not seem very efficient to do so in either case (imagine if every annotated bibliography ever published had to seek permission from rights holders before citing and summarizing journal, magazine and newspaper articles).

I don’t think Miller’s article succeeds at all in making the case that there should be any sort of property interest in normal, everyday deep linking. Certain forms of deep linking, such as pervasive linking by a competitor to proprietary business data, may warrant attention, as in the Ticketmaster case, but granting newspapers a proprietary interest in what is essentially a high-tech version of a bibliographic citation doesn’t make a lot of sense.

Source:

“Deep links? No way!” James D. Miller, TechCentralStation.Com, May 13, 2002.