Placing a robots.txt file in the root folder of your website tells a search robot whether it is permitted to index the pages on your site. The robots.txt file can define which robots are allowed to index the site and which folders they may visit. For example, the following content in the robots.txt file indicates that it applies to all search robots and that all folders are disallowed:

User-agent: * # applies to all robots 
Disallow: / # disallow indexing of all pages
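As a quick sanity check, the rules above can be tested with Python's standard-library `urllib.robotparser` module, which implements the same matching logic a well-behaved crawler would use (the `example.com` URL here is just a placeholder):

```python
from urllib import robotparser

# The example robots.txt content shown above.
rules = """\
User-agent: *
Disallow: /
"""

parser = robotparser.RobotFileParser()
parser.parse(rules.splitlines())

# With "Disallow: /", no page may be fetched by any robot.
print(parser.can_fetch("*", "http://example.com/index.html"))  # False
print(parser.can_fetch("SomeBot", "http://example.com/page.html"))  # False
```

In practice a crawler would call `parser.set_url(...)` and `parser.read()` to fetch the live robots.txt rather than parsing a literal string.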
 
This can also be done on a per-page basis using a meta tag: <META name="ROBOTS" content="NOINDEX, NOFOLLOW">
 
I wonder whether most indexing robots actually honor these conventions, since compliance is voluntary. Details at: http://www.w3.org/TR/REC-html40/appendix/notes.html#h-B.4