A name for a program to explore the Web on its own, collecting information.
(bot) A program that runs automatically without human intervention; e.g. search engines commonly use a spider bot to scour the Web for websites. See also: The Web Robots Pages
A program that visits websites for search engines. This program reads a website and determines whether it belongs in the search index. Also known as a spider, as it crawls all over the web.
Short for robot, a program that runs automatically.
Automated programs that work without human supervision, used by search engines to check web pages.
Web Robots are programs that traverse the Web automatically. Some people call them Web Wanderers, Crawlers, or Spiders.
When talking Net talk, this is the guy who accesses your website to gather information. Search engines use these (except for Yahoo -- they go for the human input).
A robot is a process that travels over the Web performing automated tasks like data collection.
Any browser program which follows hypertext links and accesses web pages but is not directly under human control. Examples are the search engine spiders, the "harvesting" programs which extract e-mail addresses and other data from web pages and various intelligent web searching programs. A database of web robots is maintained by Webcrawler.
A special program that obtains HTML documents from Web sites and follows the hyperlinks in those documents to obtain more documents.
Automated devices for removing parts upon ejection from an open mold rather than letting the parts drop. Also see parts picker. Robots also can perform secondary functions, such as inspection, degating, precise placement of parts on a conveyor, etc.
A computer program that runs automatically. Two types of robots are agents and spiders.
See Search Engine Spider.
A robot (or 'bot for short) in the computer sense is a program designed to automate some task, often just sending messages or collecting information. A spider is a type of robot designed to traverse the web performing some task (usually collecting data).
see "spider"
A robot is a piece of software that browses the web, retrieving and analysing web documents. This data can be used to index documents for search engines.
Programs used by search engines to comb through web pages. Robots traverse web pages and help determine the relevance of the page.
An autonomous Web-traversing program that seeks out and records information about Web documents that it encounters and examines during its travels. Spider and wanderer are synonyms for robot; these programs collect data that is used in Web search engines.
Automated or dynamic programs that may have specific or limited targets or functions. They may scour the web looking for new material, verifying links, or performing other tasks.
A small program that finds all the resources located in a specific portion of a network.
A computer program (also known as a spider, bot or web crawler) that scans the Internet to index web pages. Each search engine uses a spider to build its own database.
A program that runs on a computer 24 hours a day, 7 days a week; it automates mundane tasks for the owner, even if the owner is not logged in. Bots are used on the Internet in a variety of ways, the most popular being IRC and search engines. IRC bots are programs that connect to an IRC network and interact with IRC in very much the same way a normal user does (in fact, IRC servers treat bots as regular users). Most IRC bots are used for channel control. In the world of Web searching, bots are also called spiders and crawlers, as they traverse the WWW by retrieving a document and following all the hyperlinks in it, generating catalogs that can be accessed by search engines. Popular search sites like Alta Vista, Excite, and Lycos use this method.
a programme used by a search engine to roam the web, finding, ranking, and indexing web pages. (spider, webcrawler, crawler, web-bot, bot)
Generally the name given to a spider / crawler. Anything that is not a human visitor.
Also known as spider, bot or crawler. The part of a search engine that locates and indexes every page on the Web. Successful search engine optimisation depends on robots finding many or all a Web site's pages.
Any program that follows links on the web to access information; search engines and email harvesters, for example, use robots to do so.
This term is practically synonymous with spider.
Increasingly the Web is being searched by "robots". These are pieces of software that read a web page, process it in some way (usually by analysing the content to see if it is of interest), and then follow the links from that page on to the next web page. This process is fully automated. Each site can set up a policy file (usually robots.txt in the top directory) indicating how the site wishes to restrict such access. Restrictions can be complete, or on a per-directory basis.
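The robots.txt policy file described above can be checked programmatically. A minimal sketch using Python's standard-library `urllib.robotparser`; the site, user-agent name, and policy text here are hypothetical, and the policy is parsed offline for illustration:

```python
from urllib.robotparser import RobotFileParser

# A sample robots.txt policy for a hypothetical site.
policy = """\
User-agent: *
Disallow: /private/
"""

rp = RobotFileParser()
rp.parse(policy.splitlines())  # in practice: rp.set_url(...); rp.read()

# A well-behaved robot asks before fetching each URL.
print(rp.can_fetch("MyBot/1.0", "https://example.com/private/page.html"))  # False
print(rp.can_fetch("MyBot/1.0", "https://example.com/public/page.html"))   # True
```

In practice a crawler would fetch the site's `/robots.txt` with `set_url()` and `read()` before requesting any other page on that site.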
a computer programme that can follow the hyperlinks on a web page, a process known as "crawling"
a computer program operated by a search engine, a research organization, a University or an individual
a computer program that automatically reads web pages and goes through every link that it finds
a computer program written to periodically explore as many web sites as it can find
a machine that works automatically and performs a specific job, saving time and energy without getting bored or tired, and also can be used for messy and dangerous jobs
an agent that operates as an interface to a Web-enabled application
an automated tool for exploring and retrieving files from the web
an image of the automatic self
a piece of software that automatically follows hyperlinks from one document to the next around the Web
a piece of software used by search engines to index web sites
a practical example of the applications of science and math, making it another element of engineering activity
a program on the Boss or Uplink system which responds automatically to netmail messages
a program, run by a BOTS engine, in which each instruction has a cycle-count execution time
a program that automatically traverses the Web, retrieves documents, and uses the links on the documents to continue its search through the Web
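The retrieve-a-document-then-follow-its-links process that these definitions describe can be sketched in a few lines. Below is a minimal breadth-first crawler using only the Python standard library; the page fetcher is injected as a function so the sketch stays offline, and all URLs and page contents are hypothetical:

```python
from collections import deque
from html.parser import HTMLParser

class LinkExtractor(HTMLParser):
    """Collects the href targets of <a> tags in an HTML document."""
    def __init__(self):
        super().__init__()
        self.links = []

    def handle_starttag(self, tag, attrs):
        if tag == "a":
            for name, value in attrs:
                if name == "href" and value:
                    self.links.append(value)

def crawl(start_url, fetch, max_pages=100):
    """Retrieve a document, queue the links it contains, and repeat
    (breadth-first), skipping URLs that have already been seen."""
    seen = {start_url}
    queue = deque([start_url])
    visited = []
    while queue and len(visited) < max_pages:
        url = queue.popleft()
        visited.append(url)
        extractor = LinkExtractor()
        extractor.feed(fetch(url))
        for link in extractor.links:
            if link not in seen:
                seen.add(link)
                queue.append(link)
    return visited

# Usage with an in-memory "web" of three pages:
pages = {"/a": '<a href="/b">b</a>', "/b": '<a href="/c">c</a>', "/c": ""}
print(crawl("/a", pages.get))  # ['/a', '/b', '/c']
```

A real robot would replace `fetch` with an HTTP request, resolve relative URLs, honour robots.txt, and throttle its request rate.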
a program that autonomously explores the Internet for a specific purpose
a program that goes out on the Web and looks around and gathers up words from Web pages
a program that is designed to automatically go out and explore the Internet for a specific purpose
a program that search engines use to follow links, read web pages, and create indexes of the information they find on the web pages
a program that traverses these links automatically
a program that traverses the World Wide Web, gathering some sort of information from each site it visits
a program which is started when e-mail arrives at the account
a program which will visit a web page, index it somewhere, and then visit all the hyperlinks in that page, indexing them all
a software application that automatically searches for information
a software program that automatically visits sites on the Net collecting whatever information it is looking for
a system that can be programmed to perform a range of mechanical functions and that responds to sensory input under automatic control
This is basically the name given to a search engine spider or crawler.
A robot (or spider) is a software program that search engines use to visit every website in its database. It follows the links on those websites to other websites and catalogs and updates all the information provided in those websites.
Computer robot programs, referred to sometimes as "crawlers" or "knowledge-bots" or "knowbots" that are used by search engines to roam the World Wide Web via the Internet, visit sites and databases, and keep the search engine database of web pages up to date. They obtain new pages, update known pages, and delete obsolete ones. Their findings are then integrated into the "home" database. Most large search engines operate several robots all the time. Even so, the Web is so enormous that it can take six months for robots to cover it, resulting in a certain degree of "out-of-datedness" in all the search engines.
A software program that crawls the internet, by following links and indexing web pages.
A program that is written to perform a specific task time after time; for instance a robot might index pages on the web for a search engine.
This is an automated program that visits web sites on behalf of search engines in order to fill and update their databases.
Same as Crawler.
Similar to a robot. It's an automated computer program that navigates the internet, usually looking for some specific type of content, such as links or email addresses.
In this context, a computer program that automatically "browses" web pages and downloads the content to the search engine's database for later use in producing search results.
Also referred to as a spider or crawler. A robot is a program that follows links to web pages.
Also called a spider, wanderer, or agent. These are programs that retrieve documents from the Web.
Program utilized by search engines to crawl the web to index and rank new web pages.
A program such as InfoSeek or Aliweb that searches huge numbers of files automatically when given search criteria (also called a worm).
A software that automatically explores the web.
An automated program that indexes websites and saves certain content.
an automated machine that can be programmed to perform a variety of specific mechanical functions
An automated process that performs mundane, repeatable tasks to provide information. Search engine robots or bots provide such functions, cataloging the internet for searchers to find information.
The software used to collect information for the search engine's database.
Often used to refer to a search engine spider.
(n) A computer-controlled device used in manufacturing for many purposes, such as assembly, painting, and material movement. Robotics is an important component of CAD/CAM and in the automation of production facilities.
Browser programs not under human control that access web pages and follow hypertext links, such as search engine spiders.
Another name for spider.
A robot is an automated program that accesses a web site and traverses through the site by following the links present on the pages. Known as a bot, robot, spider or Web Crawler.
Any automated process that interacts with the internet in a way that was originally intended for humans. Robots can serve several purposes: Spidering your website on behalf of a search engine. (This is good) Checking your website's availability and uptime. (Also good) Submitting your website's URL to the various search engines on an occasional basis. (Not good) Gathering email addresses from websites to use in spamming. (Evil) Logging in to chat rooms or IM to try to put commercial messages in front of others. (See "spim" - also evil)
(see also: SPIDER) a special computer program that Search Engines use to find information on the Internet
The generic name given to a spider/crawler program that travels the web collecting data from websites.
AKA a spider, used to search html documents and web sites.
A name given to a spider/bot/crawler program that travels the web collecting data from websites.
An automated program such as a search engine, indexing program, or cataloging software, that requests Web pages much faster than human beings can. Other commonly used terms for robot include crawler and spider.
This is an automated program that traverses the Web's hypertext structure by retrieving a document and then recursively retrieving all the documents it references.
A program that reads web pages and follows links to other pages and sites. Robots are used to build the indexes used by meta search engines.
A program that automatically traverses the WWW looking for URLs.
A fast, automated program, such as a search engine, indexing program, or cataloging software, that requests Web pages much faster than humans can.
See spider; see also metacrawler.
A browser-like program that automatically requests web pages in order to index the page content (in the case of spiders) or to retrieve specific information (in the case of programs like e-mail harvesters).
A program that constantly and automatically searches websites, selected on various criteria, to gather data to include in indexes (such as for search engines) or for other purposes (see Spider).
Sophisticated programs, also known as "bots," "crawlers," or "spiders," that search the Internet for Web page content and index it for use by the search engine. These programs continue their search through all the links contained within a site, constantly updating the search engine's database with new pages of content.
A program that automatically surfs the Web. Search engines use robots to surf the Web and catalog different Web sites in their databases. This allows the Web pages to be found when someone performs a search. Robots are commonly referred to as bots and spiders.
see Spider
A Web agent that visits sites to extract data from the site. It automatically traverses the Web's hypertext structure by retrieving a document, and recursively retrieving all documents that are referenced. Most commonly, data from robots is used to build the catalogs for search engines. Web robots are sometimes referred to as Web Crawlers or Spiders.
Automated program that follows links to visit web sites on behalf of search engines to fill and update their database.
Any browser, not directly under human control, that follows hypertext links to access web pages.
A robot, also known as a spider or crawler, is a program that travels the Web to index websites and put them into a search engine. Major search engines all rely on spiders to visit and catalog new sites. WebCrawler and Lycos are examples of robots. Source: TechSoup.org
A robot is a program that runs automatically without human intervention. Typically, a robot is endowed with some very basic logic so that it can react to different situations it may encounter. One common type of robot is a content-indexing spider, or webcrawler.
Any browser program, such as a search engine spider, which follows links and accesses web pages.
a computer program whose job is to crawl webpages; some robots are simply looking for content, like those employed by search engines, while other, less noble robots look for "harvestable" information like email addresses
A program that automatically searches the World Wide Web for files.
A tool, also known as a spider, that is employed by search engines to regularly index the web pages of registered sites.
A program that automatically does "some action" without user intervention. In the context of search engines, it usually refers to a program that mimics a browser to download web pages automatically. A spider is a type of robot. Sometimes referred to as Webbots.
A program used by a search engine to crawl the web in order to find, rank, and index new web pages.
A robot is essentially a computer program that visits web pages. These robots are created by the search indexes and help to build and maintain those indexes.
an automatic text-indexing system that visits servers and indexes their contents. Helps create vast searchable directories.
Also spider, crawler. Programs that explore the web automatically, generally creating an index for use by search engines. There are techniques for excluding such crawls from part or all of a web site, but they are not legally enforceable. See the Robotstxt site for more.
In the context of search engine ranking, it implies the same thing as Spider . In a different context, it is also used to indicate a software which visits web sites and collects email addresses to be used for sending unsolicited bulk email.
A computer program (also known as a spider, bot or crawler) that travels the Internet to locate web pages. It indexes the documents in a database, which is then searched using a search engine. Each search engine uses a spider to build its database.
Used to refer to a piece of software that performs a function in place of a human being. Specifically, the search engine tools that surf the internet looking for pages to add to the search index are sometimes called robots. The abbreviations bot and web bot are also used.
A software program that search engines use that visits every URL on the web, follows all the links, and catalogs all the text of every web page that (a) contains text, and (b) that can be visited or crawled. Also known as a spider or crawler, but the term "robots" is more and more commonly associated with automated agents.
An automated Web program. Frequently, this refers to a crawler.
A program that automatically searches the WWW for files and catalogues the results. See Also: WWW
(Or "crawler", "spider"). A program that automatically explores the Worldwide Web by retrieving a document and then retrieving some or all the documents to which it links, and then repeating the process on each new page it finds.
In Internet terms, a robot is a program designed to automatically go out and explore the Internet for different purposes. Also called a worm, crawler/webcrawler, or spider, it can be used both to index content on the Internet, as Google's crawlers do, and to gather other kinds of information, such as email addresses.
The name given to a web spider or web crawler. Any visitor to a website that is not a human exploring the site.
A term for software programs that automatically explore the Web for a variety of purposes. Robots that collect resources for later database queries by users are sometimes called spiders.
mechanical device which can be programmed by the user to follow a sequence of commands.
A software application that automatically finds and retrieves information from the Web. Also called a "spider" or "crawler."
An automated programme which follows hypertext links and accesses web pages, e.g. search engine spiders or e-mail harvesting programmes.
A software program that collects information on the Web automatically.
Any browser program that follows hypertext links and accesses Web pages but is not directly under human control. Example: search engine spiders, the harvesting software programs that extract e-mail addresses or other data from Web pages.
An autonomous program set to perform a task over a network, like indexing file contents on servers or checking hypertext links in Web files. See spider.