The robots.txt file (see the Wikipedia page) is read by search engine crawlers to determine which parts of your website they are allowed to crawl.

It’s important to realize that your website can still appear in Google’s search results even if it is not crawled.

This usually happens when other sites link to yours.

If you would like to completely prevent your website from being listed, it’s actually important to let it be crawled, but to include a noindex tag on each page.

More information is available on the Google page about noindex.
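As a sketch of how this is usually done (see Google’s documentation for the authoritative details), a noindex directive is typically added as a meta tag inside each page’s `<head>`:

```
<!-- Tells crawlers not to include this page in search results -->
<meta name="robots" content="noindex">
```

The page must remain crawlable for this to work: if robots.txt blocks the crawler, it never sees the noindex tag.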

Name: anything you like

File contents: your site is set up with two basic robots.txt files, one that allows crawling and one that does not.
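Conventionally, those two variants look something like the following (a sketch of typical robots.txt contents, shown together for comparison; each would be a separate file, and your host’s exact files may differ):

```
# Variant 1 – allow all crawling (empty Disallow means nothing is blocked):
User-agent: *
Disallow:

# Variant 2 – block all crawling ("/" blocks the entire site):
User-agent: *
Disallow: /
```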

If you would like to exclude certain parts of your website from crawling, you can easily create a new robots.txt page to do so. See the Google page for details.
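For example, a partial-exclusion robots.txt might look like this (the `/admin/` and `/drafts/` paths are hypothetical placeholders; substitute the sections of your own site):

```
# Allow crawling everywhere except the listed directories
User-agent: *
Disallow: /admin/
Disallow: /drafts/
```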