Create a custom robots.txt file to control search engine crawler access to your website
Configuration fields:

- User-agent: specify which crawler this rule applies to
- Disallow: paths to block, one path per line
- Allow: paths to permit, one path per line
- Sitemap: location of your sitemap
A robots.txt file tells search engine crawlers which URLs they can access on your site. This is used mainly to avoid overloading your site with requests.
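For example, a minimal robots.txt combining the fields above might look like the following sketch. The sitemap URL and the paths are placeholders for illustration, not values the tool generates:

    # Applies to all crawlers; use a specific token (e.g. Googlebot) to target one
    User-agent: *
    # Block crawling of a private area (one path per line)
    Disallow: /private/
    # Re-allow a specific subpath inside the blocked area
    Allow: /private/public-page.html
    # Absolute URL of your sitemap
    Sitemap: https://www.example.com/sitemap.xml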
Common paths to block:

- /admin/ - Administration areas
- /cgi-bin/ - Server scripts
- /tmp/ - Temporary files
- /logs/ - Log files
- /private/ - Private content

Note: robots.txt is a request, not an enforcement mechanism. Sensitive content should be properly secured, not just blocked via robots.txt.
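As a sketch, a rule blocking all of the paths listed above for every crawler could look like this; per the note, it only asks well-behaved crawlers to stay out:

    User-agent: *
    Disallow: /admin/
    Disallow: /cgi-bin/
    Disallow: /tmp/
    Disallow: /logs/
    Disallow: /private/
    # Advisory only: protect sensitive paths with authentication as well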