MyWebAuditBot

MyWebAuditBot is the crawler used by MyWebAudit.com, a suite of marketing tools for agencies. It crawls a website and identifies a range of issues that may impact the site's visibility, security, and performance in search results.

Data collected by MyWebAuditBot is used to:

  • Analyze and identify technical on-page SEO issues and improvements
  • Analyze and identify common accessibility issues and improvements
  • Analyze and identify website page performance issues and improvements

How MyWebAuditBot Crawls Your Site

When a user runs an audit for a website URL in our app, MyWebAuditBot crawls the webpage URL(s) and analyzes the pages' HTML to identify issues related to SEO, accessibility, page performance, and other technical problems.

How To Block MyWebAuditBot From Crawling Your Site

Most bots are harmless and are often helpful. However, you may still want to prevent them from crawling your site. The easiest way to do this is with a robots.txt file, which contains instructions telling bots how they may crawl your site.

Important: The robots.txt file must be placed in the top-level (root) directory of the website host to which it applies. Otherwise, it will have no effect on MyWebAuditBot's behavior.
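
For example, assuming your site is served at https://www.example.com, the file must be available at https://www.example.com/robots.txt. A file located at a deeper path, such as https://www.example.com/pages/robots.txt, will be ignored.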

To stop MyWebAuditBot from crawling your site, add the following rules to your robots.txt file:

User-agent: MyWebAuditBot
Disallow: /
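
If you only want to keep MyWebAuditBot out of specific sections of your site rather than blocking it entirely, you can disallow individual paths instead. The paths below are placeholders; substitute the directories you actually want to exclude:

User-agent: MyWebAuditBot
Disallow: /private/
Disallow: /staging/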

Important details:

  • A robots.txt file can restrict MyWebAuditBot from crawling certain pages or entire domains on your website. By default, MyWebAuditBot will crawl all pages on a subdomain, so place a robots.txt file on each subdomain whose access you want to restrict.
  • When creating a robots.txt file for your website, make sure it is always served with an HTTP 200 status code. Returning a 4xx or 5xx status code will prevent MyWebAuditBot from crawling your entire site. Our crawler can, however, handle a 3xx redirect for your robots.txt file.
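
To confirm that the file is being served with a 200 status code, you can inspect the response headers, for example with curl (replace example.com with your own domain):

curl -I https://www.example.com/robots.txt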


For more information about bots, please refer to https://www.robotstxt.org/.

Getting Support

MyWebAuditBot needs some time to discover changes in your robots.txt file. However, if you think our bot is ignoring your robots.txt rules, please send us your website URL and the log entries showing MyWebAuditBot crawling pages it is not supposed to crawl, and we will work quickly to resolve the issue.
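
If you are unsure how to collect those log entries, searching your web server's access log for the bot's name is usually enough. The log path below assumes a typical nginx setup; adjust it to match your server:

grep "MyWebAuditBot" /var/log/nginx/access.log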

If you have any questions or concerns about MyWebAuditBot, please contact us at support@mywebaudit.com, and we will respond as soon as possible.