This is either a continuous loop in which a bot requests a page and the server requests information from the bot in order to deliver that page, or an intentional construct designed to catch bots that ignore robots.txt.
An infinite loop that a spider may get caught in when it explores a dynamic site whose page URLs keep changing. For example, a home page may appear under a different URL on each visit, and the search engine may be unable to tell that it has already indexed that same home page under another URL. If search engines were to completely index such dynamic web sites, they would inevitably accumulate large amounts of redundant content and download millions of pages.
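One common way crawlers mitigate the dynamic-URL problem described above is to canonicalize each URL before checking it against the set of pages already seen, so that variants differing only in session IDs or tracking parameters collapse to one entry. The sketch below illustrates the idea; the `IGNORED_PARAMS` list is an illustrative assumption, not a standard.

```python
from urllib.parse import urlsplit, urlunsplit, parse_qsl, urlencode

# Query parameters that often vary between visits without changing the
# page content (an assumed, illustrative list).
IGNORED_PARAMS = {"sessionid", "sid", "utm_source", "utm_medium"}

def canonicalize(url: str) -> str:
    """Normalize a URL so dynamic variants of the same page collapse to
    one key: lowercase scheme and host, drop the fragment, strip ignored
    query parameters, and sort the remaining ones."""
    scheme, netloc, path, query, _fragment = urlsplit(url)
    params = sorted((k, v) for k, v in parse_qsl(query)
                    if k.lower() not in IGNORED_PARAMS)
    return urlunsplit((scheme.lower(), netloc.lower(), path,
                       urlencode(params), ""))

visited = set()

def should_crawl(url: str) -> bool:
    """Return True only the first time a canonical form of this URL is
    seen, so the crawler does not loop over endlessly varying URLs."""
    key = canonicalize(url)
    if key in visited:
        return False
    visited.add(key)
    return True
```

With this in place, `http://Example.com/home?sessionid=abc` and `http://example.com/home?sessionid=xyz` map to the same canonical key, so the second request is skipped rather than re-crawled.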
A spider trap is anything that prevents a spider from crawling your page or site. If a spider cannot crawl through your site, the site will not get indexed.