Archive of UserLand's first discussion group, started October 5, 1998.
Re: Automated deep linking
Author: Bruce Wyman | Posted: 9/2/1999; 6:08:30 AM | Topic: Automated deep linking | Msg #: 10444 (In response to 10419)
Dave writes:

> In my opinion, it's clearly OK to link to a story on another site as it relates to a story I'm writing, or a subject I believe my readers are interested in. For example, it would be fair for me to point to the InfoWorld article where they reversed their position on deep linking, or to the Washington Post article about antique software.

> But, again in my opinion, it would *not* be fair for me to write a script that reads the home page of the San Jose Mercury-News, pulls off links to all the articles and the headline text and emails them, mixed in with links scraped from the New York Times home page, to a list, unless the Mercury-News and the Times had given their permission.
But it seems to me that the automatic email being generated doesn't contain the actual content, only a set of pointers to content that still resides on the author's servers. I view that email as simply a table of contents for another site: if someone wants to read the material, they still end up going to the author's site. It seems to me that the scraping simply generates additional awareness of the author's content.
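For concreteness, here's a minimal sketch of the kind of link-and-headline scraper being debated. It's only a stand-in, written in Python; the URL is hypothetical, and it assumes headlines are ordinary <a href> anchors, which a real site would need site-specific rules for. The point is what it produces: a pointer-only digest of titles and URLs, with no article text.

    # Minimal sketch: scrape headline links from a front page and
    # format them as a plain-text "table of contents" digest.
    from html.parser import HTMLParser
    from urllib.request import urlopen
    from urllib.parse import urljoin

    class HeadlineScraper(HTMLParser):
        def __init__(self, base_url):
            super().__init__()
            self.base_url = base_url
            self.links = []          # (headline text, absolute URL) pairs
            self._href = None
            self._text = []

        def handle_starttag(self, tag, attrs):
            if tag == "a":
                self._href = dict(attrs).get("href")
                self._text = []

        def handle_data(self, data):
            if self._href is not None:
                self._text.append(data)

        def handle_endtag(self, tag):
            if tag == "a" and self._href:
                headline = " ".join("".join(self._text).split())
                if headline:  # skip image-only or empty anchors
                    self.links.append((headline, urljoin(self.base_url, self._href)))
                self._href = None

    def digest(url):
        """Return a plain-text table of contents for the page at url."""
        html = urlopen(url).read().decode("utf-8", errors="replace")
        scraper = HeadlineScraper(url)
        scraper.feed(html)
        return "\n".join(f"{title}\n    {link}" for title, link in scraper.links)

    # print(digest("https://example.com/"))  # hypothetical front page

Nothing in the output is the author's prose; it's exactly the set of pointers described above.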
I'd agree that a line begins to be crossed if the scraper starts to summarize beyond a sentence or a short set of keywords, but otherwise this doesn't strike me as different from any number of weblogs or nightly emails that circulate collected links, the major difference being the automation and editing of the scraped material.
As for RSS, it's a great idea, but it seems that the only real advantage it provides is that it becomes substantially easier to create the tools that scavenge material from an RSS file; the hard work of extracting the information from a news source has already been done by the authoring site. If some scraper is able to read through the custom code of a site and gather the same material, it ends up with the same information; it just had to work harder to get it. And do I break that scraping if I so much as change a style sheet rule or retweak my HTML tables? I don't know.
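To make the comparison concrete, here is the RSS side of the same digest, assuming a feed in the classic RSS shape with <item>, <title>, and <link> elements (the feed URL is hypothetical). Note there is no screen-scraping here: the structure is explicit, so a stylesheet change or retweaked table on the site can't break it.

    # Minimal sketch: read (title, link) pairs straight from an RSS feed.
    import xml.etree.ElementTree as ET
    from urllib.request import urlopen

    def rss_digest(feed_url):
        """Return (title, link) pairs from an RSS channel."""
        tree = ET.parse(urlopen(feed_url))
        return [
            (item.findtext("title", "").strip(), item.findtext("link", "").strip())
            for item in tree.iter("item")
        ]

    # for title, link in rss_digest("https://example.com/rss.xml"):
    #     print(f"{title}\n    {link}")

A dozen lines instead of a custom parser per site, which is exactly the "hard work already done by the authoring site" trade-off described above.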
Aren't the scrapers just creating a table of contents for a given site and sending that along to a wider audience? How is that bad, especially if one has already expressed support for deep linking into other sites?
There are responses to this message:
- Attempted clear statement of D.Linking/Scraping controversy, Jeremy Bowers, 9/2/1999; 7:53:27 AM