A spider is part of a search engine. It's a bit like electricity: you can't see it, but it does exist. At least it does in cyberspace! Spiders are software programs that roam the internet gathering information. Imagine the internet as a city filled with homes and buildings (websites); a spider is like a person going from building to building, making a report on what's in each one. Most search engines send out several spiders that work as a team: they visit websites, find out what they're about, look for any changes, and follow links from those websites to others. Web spiders are fast critters too, able to visit several million webpages in a single week! Once a spider has found a webpage, it relays the information back to HQ at the search engine, where another piece of software processes what the spider found and decides what to do with it. Whenever a spider visits your website it identifies itself, leaving a telltale entry in your server logs.
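The visit-then-follow-links behaviour described above is essentially a graph traversal. Below is a minimal sketch in Python of that idea: instead of real HTTP fetches, it assumes a hypothetical `fetch_links` function (here backed by an in-memory toy "web" with made-up URLs) that returns the links found on a page, and it reports the order in which the spider would visit pages.

```python
from collections import deque

def crawl(start_url, fetch_links):
    """Breadth-first traversal of pages, following links from each page.

    fetch_links(url) is a stand-in for fetching a page over HTTP and
    parsing its HTML for links; here it just returns a list of URLs.
    """
    seen = {start_url}          # pages already queued, so we never revisit
    queue = deque([start_url])  # pages waiting to be visited
    report = []                 # visit order: the spider's report back to HQ
    while queue:
        url = queue.popleft()
        report.append(url)
        for link in fetch_links(url):
            if link not in seen:
                seen.add(link)
                queue.append(link)
    return report

# Toy "web": each page maps to the links it contains (hypothetical URLs).
site = {
    "/home":    ["/about", "/news"],
    "/about":   ["/home"],
    "/news":    ["/about", "/archive"],
    "/archive": [],
}

print(crawl("/home", lambda url: site.get(url, [])))
# Starting at /home, the spider reaches every linked page exactly once.
```

A real spider layers practical concerns on top of this loop: obeying `robots.txt`, rate-limiting requests per server, and persisting visited pages to a database for indexing.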
An Internet search engine software application that automatically finds and collects information about websites.
A type of keyword search software.
Software used by search engines to locate new Web pages for their document databases.
A program that searches the Web for any of a variety of purposes, sometimes to catalog information or URLs, sometimes to count the number of URLs or servers.
Software that browses the Web in an automated manner and keeps a copy of visited pages in its database. Also known as a crawler.
A computer program that searches the internet for web pages. A well-known example is the spider Google uses to index the web.
Another term for Web crawler.
See Main Definition: spider. Usage: a spider specifically designed to traverse the World Wide Web, as opposed to one used within an intranet.