This is either a continuous loop, in which a bot requests a page and the server in turn requests information from the bot in order to deliver the page, or an intentional construct designed to sniff out bots that ignore robots.txt.
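The intentional variety described above can be sketched as a robots.txt honeypot: robots.txt disallows a trap path, so any client that requests it must be ignoring robots.txt and can be banned. This is a minimal sketch; the path name, handler, and ban mechanism are illustrative assumptions, not taken from any particular server.

```python
# Sketch of a robots.txt honeypot. robots.txt disallows /trap/, so any
# client that fetches a /trap/ URL is ignoring robots.txt and gets banned.

ROBOTS_TXT = """User-agent: *
Disallow: /trap/
"""

banned_ips = set()  # in practice this would be a persistent store

def handle_request(ip, path):
    """Return a response for one request, banning clients that enter the trap."""
    if ip in banned_ips:
        return "403 Forbidden"
    if path.startswith("/trap/"):
        banned_ips.add(ip)          # this client ignored robots.txt
        return "403 Forbidden"
    if path == "/robots.txt":
        return ROBOTS_TXT
    return "200 OK"
```

In practice the trap is reached via a link that is hidden from human visitors, so only crawlers that disobey robots.txt ever follow it.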
An infinite loop that a spider may get caught in if it explores a dynamic site where the URLs of pages keep changing. For example, a home page may appear under a different URL, and the search engine may be unable to ascertain that it has already indexed that same home page under another URL. If search engines were to completely index dynamic web sites, they would inevitably accumulate large amounts of redundant content and download millions of pages.
A spider trap is anything that would prevent a spider from crawling your page or site. If a spider can't crawl through your site, the site won't get indexed.
Sites that are capable of producing a near-infinite number of dynamically generated pages. See also: DYNAMIC PAGES
A spider trap is a loop that a spider may get caught in when it explores a dynamic site. This happens when the URLs of a site keep changing and the spider cannot determine that a page, for example the home page, has already been indexed.
A spider trap refers to either a continuous loop where spiders are requesting pages and the server is requesting data to render the page or an intentional scheme designed to identify (and "ban") spiders that do not respect robots.txt.
An infinite loop that a spider may get caught in if it explores a dynamic site where the URLs of pages keep changing. For example, the URL of a dynamic site's home page may change depending on the referring page. Because of this, the search engine spider may be unable to recognize that the home page has already been indexed. If search engines were to completely index dynamic web sites, they would inevitably have large amounts of redundant content and download millions of pages. See also: Database-Driven, Database-Generated, Googlebot, Invisible Web, The, Spider, Static, Stop Character
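One common crawler-side defense against the shifting-URL problem above is URL canonicalization: normalizing each URL before checking it against the index, so the same page reached through different session IDs or referrer parameters maps to one canonical form. The parameter names stripped here are illustrative assumptions about what varies per visit, not a standard list.

```python
from urllib.parse import urlsplit, urlunsplit, parse_qsl, urlencode

# Assumed set of query parameters that vary per visit (illustrative only).
TRACKING_PARAMS = {"sessionid", "sid", "ref", "utm_source"}

def canonicalize(url):
    """Normalize a URL so the same page reached via different query
    strings maps to one canonical form: lowercase scheme and host,
    drop per-visit parameters, and sort the rest."""
    scheme, netloc, path, query, _ = urlsplit(url)
    params = [(k, v) for k, v in parse_qsl(query)
              if k.lower() not in TRACKING_PARAMS]
    params.sort()
    return urlunsplit((scheme.lower(), netloc.lower(),
                       path or "/", urlencode(params), ""))
```

A crawler that keys its "already indexed" set on the canonical form will recognize the home page even when the raw URL differs on every visit.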
A spider trap (or crawler trap) is a set of web pages that may intentionally or unintentionally be used to cause a web crawler or search bot to make an infinite number of requests or cause a poorly constructed crawler to crash. Web crawlers are also called web spiders from which the name is derived. Spider traps may be created to "catch" spambots or other crawlers that waste a website's bandwidth.
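A crawler can protect itself from the infinite-request traps described above with simple frontier limits, such as a maximum link depth and a cap on pages fetched per host. This is a minimal sketch of that idea; the limits, function names, and the caller-supplied `fetch_links` helper are assumptions for illustration, not part of any real crawler's API.

```python
from collections import deque
from urllib.parse import urlsplit

# Illustrative guard values: cap depth and pages per host so an infinite
# chain of generated links cannot keep the crawler busy forever.
MAX_DEPTH = 5
MAX_PAGES_PER_HOST = 100

def crawl(start_url, fetch_links):
    """Breadth-first crawl. fetch_links(url) -> list of URLs on that page
    (supplied by the caller). Returns the list of URLs actually visited."""
    seen = set()
    pages_per_host = {}
    queue = deque([(start_url, 0)])
    visited = []
    while queue:
        url, depth = queue.popleft()
        if url in seen or depth > MAX_DEPTH:
            continue
        host = urlsplit(url).netloc
        if pages_per_host.get(host, 0) >= MAX_PAGES_PER_HOST:
            continue  # likely a trap: too many distinct pages on one host
        seen.add(url)
        pages_per_host[host] = pages_per_host.get(host, 0) + 1
        visited.append(url)
        for link in fetch_links(url):
            queue.append((link, depth + 1))
    return visited
```

Against a trap that generates an endless chain of fresh URLs, the depth limit bounds the crawl even though every URL is new, so a poorly behaved site cannot exhaust the crawler's bandwidth.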