Why it's important
A robots.txt file is a plain text file that tells search engine crawlers which pages, folders, or file types they may or may not access. It's one of the first files a crawler requests when it lands on your site.
Robots.txt files are useful for blocking the crawling of duplicate content created during a site redesign or by URL parameters. By disallowing unimportant pages on your site you can help make the most of your search engine crawl budget and ensure your most important content is found. You can also block user-agents from search engines that don't send you any traffic to save some bandwidth.
If you're redesigning or migrating your site, don't forget to unblock the new site so it can be crawled and indexed. Remember that even if a page is disallowed in your robots.txt file, search engines can still discover it via links and may index it without crawling it. To keep a page out of search results, let it be crawled and add a noindex robots meta tag instead, since a crawler can only read that tag on pages it is allowed to fetch.
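As a sketch, the noindex directive goes in the page's head section (the comment describes the standard behavior of this tag):

```html
<!-- Placed in the <head> of a page you want kept out of search results.
     The page must remain crawlable so search engines can read the tag. -->
<meta name="robots" content="noindex">
```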
Getting it done
Robots.txt files are made up of blocks of code that contain two basic parts: user-agent and directive. Every block can have multiple directives, but only one user-agent.
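A minimal sketch of that structure, using hypothetical paths and a hypothetical bot name:

```text
# One block per user-agent: block a hypothetical bot entirely
User-agent: ExampleBot
Disallow: /

# A block for all other crawlers: one user-agent, several directives
User-agent: *
Disallow: /search?      # parameterized duplicate URLs
Disallow: /drafts/      # unimportant pages
Allow: /drafts/public/  # exception within a disallowed folder
```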
You can use a tool to create your robots.txt file, or write one yourself if you only need a few directives. Be careful when using tools: if you use a different one to generate your XML sitemap, the two could disagree and end up blocking a page you want crawled.
Submit and test your file in Google Search Console before adding it to your site's root directory (for example, at example.com/robots.txt). This will help you catch errors that could keep your site out of the search results.