Can we use regex in robots.txt file to block URLs?

I have a few dynamically generated URLs. Can I use regex to block these URLs in a robots.txt file?

Tags: robots.txt, regular-expression
Asked by Idell Wisozk

2 Answers

Shanon Powlowski:

Regular expressions are not valid in robots.txt, but Google, Bing, and some other bots do recognise limited pattern matching. Say you wanted to block all URLs that contain "example" anywhere in the URL; you can use a wildcard entry with *:

    User-agent: *
    Disallow: /*example

You can also use the dollar sign $ to specify that the URL must end with that pattern. So if you wanted to block all URLs that end with "example", but not URLs that merely contain "example" elsewhere in the URL, you could use:

    User-agent: *
    Disallow: /*example$

More in-depth information is available in Google's Robots.txt Specifications, in Bing's "How to Create a Robots.txt File", and in an interactive guide on Moz.

Miss Kailey Kuhn:

If you want to block URLs matched by regular expressions from being indexed by search engines, rather than from being crawled (downloaded), you can use the X-Robots-Tag HTTP header (see https://developers.google.com/search/reference/robots_meta_tag), added dynamically through mod_rewrite rules in Apache (or the equivalent in nginx), both of which fully support regular expressions.
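For illustration, here is a minimal sketch of that approach for Apache, assuming mod_rewrite and mod_headers are both enabled; the "sessionid" query-string pattern is a hypothetical stand-in for whatever your dynamic URLs actually look like:

    RewriteEngine On
    # Flag any request whose query string matches the regex "sessionid"
    RewriteCond %{QUERY_STRING} sessionid [NC]
    RewriteRule ^ - [E=NOINDEX:1]
    # mod_headers then adds the header only when that variable was set
    Header set X-Robots-Tag "noindex, nofollow" env=NOINDEX

Note that after an internal redirect Apache can expose the variable as REDIRECT_NOINDEX, so in .htaccess context you may need env=REDIRECT_NOINDEX instead.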
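A rough nginx equivalent uses a regex location block with add_header; the /search/ path is again only an illustrative pattern:

    # Send "noindex" for any URL whose path starts with /search/ (case-insensitive regex)
    location ~* ^/search/ {
        add_header X-Robots-Tag "noindex, nofollow";
    }

Keep in mind that add_header only applies to 2xx/3xx responses by default, and that nginx location regexes match the path only, not the query string.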