an automated web browser that must interpret your page's HTML code, just like a regular browser does
a program that detects and indexes pages on a search engine
a simple software program that parses the HTML of web pages to determine what each page contains
a software program used by major search engines, such as Google and Yahoo, to find a Web site, determine what it is about, and index its content in a central database
A web crawler (also known as a web spider) is a program which browses the World Wide Web in a methodical, automated manner. A web crawler is one type of bot. Web crawlers not only keep a copy of all the visited pages for later processing (for example, by a search engine) but also index these pages to make searching them faster and more precise.
A program which looks at web pages to decide what they are about and follows the links on each page to find other web pages to examine. The spider will read the pages on your web site and send the information back to the search engine, which then decides how important each page is for a given search.
A program that automatically discovers, downloads, analyzes, and indexes web pages for its associated search engine. Spiders are used to feed pages to search engines. It's called a spider because it crawls over the World Wide Web in the process of spidering and indexing web pages. Since most web pages contain links to other web pages, a spider can start almost anywhere: as soon as it sees a link to another page, it goes off and follows the link. Large search engines, like Google, have many spiders working in parallel. Good server log software, like AwStat, can tell you exactly how many and which spiders visited your site, how many pages they spidered, and the bandwidth used in the process. For more information: Googlebot (Google's web crawler); Yahoo! Slurp (Yahoo!'s web robot); MSNBot frequently asked questions (FAQs). Search Engine Worlds has a good in-depth resource called Spiders Crawlers and Indexers.
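The behavior the definitions above describe (start at a seed URL, keep a copy of each visited page, extract its links, and follow them) can be sketched as a breadth-first traversal. This is a minimal illustration, not any particular engine's implementation: the in-memory `PAGES` dictionary and its URLs are hypothetical stand-ins for real HTTP fetches, and a real spider would also honor robots.txt, rate limits, and run many fetchers in parallel.

```python
from html.parser import HTMLParser
from collections import deque

# Hypothetical in-memory "web": URL -> HTML. A real crawler would
# issue HTTP GET requests instead of reading this dictionary.
PAGES = {
    "http://example.com/":  '<a href="http://example.com/a">A</a>',
    "http://example.com/a": '<a href="http://example.com/">home</a>'
                            '<a href="http://example.com/b">B</a>',
    "http://example.com/b": "<p>no links here</p>",
}

class LinkExtractor(HTMLParser):
    """Collects the href of every <a> tag seen while parsing."""
    def __init__(self):
        super().__init__()
        self.links = []

    def handle_starttag(self, tag, attrs):
        if tag == "a":
            for name, value in attrs:
                if name == "href" and value:
                    self.links.append(value)

def crawl(seed):
    """Breadth-first crawl from seed; returns a {url: html} index."""
    index = {}                     # the "copy of all visited pages"
    frontier = deque([seed])       # URLs waiting to be fetched
    while frontier:
        url = frontier.popleft()
        if url in index or url not in PAGES:
            continue               # skip already-visited or unknown URLs
        html = PAGES[url]          # fetch the page
        index[url] = html          # keep a copy for later processing
        parser = LinkExtractor()
        parser.feed(html)
        frontier.extend(parser.links)  # follow every link on the page
    return index

index = crawl("http://example.com/")
```

Starting from the single seed page, the crawler reaches all three pages because each one is linked, directly or indirectly, from the seed; the `url in index` check is what keeps the loop from revisiting pages when links form a cycle.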