a program that silently visits Web sites, explores the links in that site, writes the URLs of the linked sites to disk, and continues in a recursive fashion until enough sites have been visited
a program that traverses the Web's hypertext structure by retrieving a document, and then recursively retrieving all documents that it references
a program that visits web pages automatically without requiring user interaction
a program which automatically scans a range of pre-defined web pages and collects the bibliographic information one is looking for
See Spider.
Automated programs that search the web and collect information about websites; also known as crawlers, spiders, and wanderers, and normally associated with search engines. They identify sites and key terms, which are indexed into search-engine databases for later retrieval.
Another term for Web crawler.
Web robot: A software robot that trawls the WWW, generating all-encompassing Web indexes. Also known as a Web crawler or Web spider. Source: UKOLN Metadata Glossary
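The recursive traversal these definitions describe — visit a page, record its URL, follow the links not yet seen, and stop after enough pages — can be sketched in a few lines. This is a minimal illustration, not a production crawler: the link graph and the `.example` URLs below are hypothetical stand-ins for real HTTP fetching and HTML parsing.

```python
from collections import deque

def crawl(start_url, get_links, max_pages=10):
    """Breadth-first traversal: visit a page, record its URL,
    queue every link not yet seen, stop after max_pages."""
    seen = {start_url}
    queue = deque([start_url])
    visited = []
    while queue and len(visited) < max_pages:
        url = queue.popleft()
        visited.append(url)           # a real crawler would fetch the page and write the URL to disk
        for link in get_links(url):   # a real crawler would parse the HTML for <a href> links
            if link not in seen:
                seen.add(link)
                queue.append(link)
    return visited

# Toy in-memory link graph standing in for real pages (hypothetical URLs).
graph = {
    "http://a.example": ["http://b.example", "http://c.example"],
    "http://b.example": ["http://c.example", "http://a.example"],
    "http://c.example": [],
}
print(crawl("http://a.example", lambda u: graph.get(u, [])))
# → ['http://a.example', 'http://b.example', 'http://c.example']
```

The `seen` set is what keeps the recursion from looping forever on cyclic links, and the `max_pages` bound corresponds to "until enough sites have been visited" in the first definition.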