A spider is another name for a web crawler.
Search engines use web crawlers to index web sites. A web crawler browses the web in a methodical way, collecting information about each site it visits. It saves every link it finds to a list and works through the linked sites in turn. Crawlers use algorithms to select only unique content from websites, and because their bandwidth is not unlimited, they must operate as economically as possible.
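The crawling loop described above can be sketched as a breadth-first traversal: a queue of links to visit, and a visited set so no URL is fetched twice and bandwidth is not wasted. This is a minimal illustration, not a production crawler; the `fetch` function is a hypothetical, caller-supplied page downloader, and `max_pages` is an assumed bandwidth cap.

```python
from collections import deque
from html.parser import HTMLParser
from urllib.parse import urljoin

class LinkParser(HTMLParser):
    """Collects href values from anchor tags on a page."""
    def __init__(self):
        super().__init__()
        self.links = []

    def handle_starttag(self, tag, attrs):
        if tag == "a":
            for name, value in attrs:
                if name == "href" and value:
                    self.links.append(value)

def crawl(start_url, fetch, max_pages=100):
    """Breadth-first crawl starting from start_url.

    fetch is a caller-supplied function (hypothetical here) that
    returns a page's HTML as a string. The visited set ensures each
    URL is fetched only once, and max_pages caps total fetches so the
    crawler stays within its bandwidth budget.
    """
    visited = set()
    queue = deque([start_url])
    while queue and len(visited) < max_pages:
        url = queue.popleft()
        if url in visited:
            continue
        visited.add(url)
        parser = LinkParser()
        parser.feed(fetch(url))
        for href in parser.links:
            # Resolve relative links against the current page's URL.
            absolute = urljoin(url, href)
            if absolute not in visited:
                queue.append(absolute)
    return visited
```

In practice the same loop would be extended with politeness delays, robots.txt checks, and content de-duplication, but the queue-plus-visited-set core is the part the text above describes.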
Crawlers must also cope with the sheer volume of the World Wide Web and its fast rate of change.
Web crawlers also face hazards such as spider traps, which feed an infinite number of URLs to the crawler and can cause it to hang.
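A common defence against spider traps is to reject URLs before fetching them using simple heuristics. The sketch below shows one such filter; the thresholds (`MAX_DEPTH`, `MAX_URL_LEN`) and the repeated-segment rule are illustrative assumptions, since real crawlers tune these limits empirically.

```python
from urllib.parse import urlparse

MAX_DEPTH = 10      # assumed path-depth cutoff; real crawlers tune this
MAX_URL_LEN = 256   # assumed length cutoff

def looks_like_trap(url):
    """Heuristic guard against spider traps.

    Traps often generate endlessly long or endlessly nested URLs
    (e.g. infinite calendar pages), so unusually long URLs, very deep
    paths, and repeating path segments are treated as suspect.
    """
    if len(url) > MAX_URL_LEN:
        return True
    segments = [s for s in urlparse(url).path.split("/") if s]
    if len(segments) > MAX_DEPTH:
        return True
    # Repeated path segments (e.g. /a/a/a/a) are a classic trap sign.
    if len(segments) >= 4 and len(segments) != len(set(segments)):
        return True
    return False
```

A crawler would call this check before adding any discovered link to its queue, discarding suspect URLs so the traversal always terminates.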