robots.txt : Java Glossary


robots.txt is a file you can place in the root directory of your website to tell web crawlers (search-engine robots) which parts of the site they may visit and which to ignore. A typical robots.txt file might look like this:
# parts of the website not indexed
user-agent: *
disallow: /template.html
disallow: /include/
disallow: /jgloss/include/

It means, for all robots (the * wildcard), don’t look at the file template.html or anything in the two directories mentioned. The original standard provides no way to exclude files by extension, though some crawlers accept * wildcards in Disallow paths. Note that the Sitemap directive takes a full URL (Uniform Resource Locator), unlike the other directives, which take site-relative paths.
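Because the Sitemap directive takes a full URL, a robots.txt that also advertises a sitemap might look like this (the domain here is a hypothetical example):

# parts of the website not indexed
user-agent: *
disallow: /template.html
disallow: /include/
sitemap: https://example.com/sitemap.xml.gz

Crawlers that understand the sitemap directive will fetch that file to discover pages to index.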

You can also control spiders with the robots meta tag, with the X-Robots-Tag field in the HTTP (Hypertext Transfer Protocol) response header, or with a sitemap. A sitemap is not a human-comprehensible HTML (Hypertext Markup Language) page but a gzipped XML (extensible Markup Language) document in a special format.
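As a sketch, the meta-tag and header approaches look like this. The meta tag goes in the head of an individual HTML page; the X-Robots-Tag header (shown here in raw HTTP form) lets you apply the same rules to non-HTML files such as PDFs:

<meta name="robots" content="noindex, nofollow">

X-Robots-Tag: noindex

A minimal sitemap document, before gzipping, looks like this (the URL is a hypothetical example):

<?xml version="1.0" encoding="UTF-8"?>
<urlset xmlns="http://www.sitemaps.org/schemas/sitemap/0.9">
  <url>
    <loc>https://example.com/</loc>
  </url>
</urlset>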

Contact Roedy. Please feel free to link to this page without explicit permission.
