Crawl Errors
Understanding Webpage Errors with Crawl Errors
What Are Crawl Errors & How To Resolve Them
A crawl error occurs when a search engine tries to reach a page on your site and fails. First, let's define crawling: crawling is the process by which a search engine visits every page of your website with a bot. The bot finds a link to your site and from there discovers all of your public pages. It scans those pages, indexes their content for use by Google, and adds every link on them to the stack of pages that are yet to be crawled. Your main goal as the owner of the website is to ensure that the search engine bot can reach every page of the site. Otherwise, you get crawl errors.
Your main concern is making sure that every link on your website leads to an actual page. A link may pass through a 301 redirect, but the page at the very end of that link should always return a 200 OK server response.
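As a quick self-check, a short script can follow each link's redirects and confirm the final response. The sketch below is a minimal illustration, assuming the third-party requests library; the URLs are placeholders, not real pages. It also reports timeouts and failed requests, which matter for the server errors discussed further down.

```python
# Minimal sketch: verify that each link ends in a 200 OK, even when it
# passes through redirects. Assumes "requests"; the URLs are placeholders.
import requests

urls = [
    "https://www.example.com/",
    "https://www.example.com/old-page",  # may 301 to a new location
]

for url in urls:
    try:
        r = requests.get(url, allow_redirects=True, timeout=10)
        hops = " -> ".join(str(h.status_code) for h in r.history)
        via = f" (via {hops})" if hops else ""
        print(f"{url}: final status {r.status_code}{via}")
    except requests.Timeout:
        print(f"{url}: timed out")
    except requests.RequestException as exc:
        print(f"{url}: request failed ({exc})")
```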
Google categorizes crawl errors into two groups:
1) Site errors. You don’t want these, as they mean your whole website can’t be crawled/reached.
2) URL errors. You don't want these either, but since each one relates to a single specific URL, they are easier to isolate and fix.
Let’s get into the details on that.
Things to Keep In Mind
Site errors
Site errors are crawl errors that block the search engine bot from reaching your website entirely. This can happen for many reasons; the most common are listed below.
- DNS errors: These are usually temporary. A DNS error means the search engine cannot communicate with your server, for instance because it is down for a short period of time, so your website can't be visited at that moment. Google will come back to your website later and crawl it anyway. If you see DNS errors among the crawl errors in Google Search Console, it probably means that Google has tried several times and still cannot reach your site. You can run the same check yourself; see the sketch after this list.
- Server errors: If your Search Console shows server errors, it means the bot was not able to access your website. The request might have timed out: the search engine tried to visit your site, but it took so long to load that the server served an error message instead. Server errors also occur when there are flaws in your code that prevent a page from loading, or when your site has so many visitors that the server simply can't handle all the requests. A lot of these errors are returned as 5xx status codes, such as the 500 and 503 status codes.
- Robots failure: Before crawling your site, Googlebot also tries to fetch your robots.txt file to see whether there are areas of your site you don't want crawled. If the bot cannot access the robots.txt file, Google will postpone the crawl until it can reach that file. Therefore, always make sure it is available. The sketch after this list tests this as well.
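For the DNS and robots checks above, Python's standard library is enough. The sketch below is illustrative only: it resolves a hostname (the DNS check) and then asks whether Googlebot may crawl a given path according to robots.txt. The domain and paths are placeholders.

```python
# Minimal sketch, standard library only: (1) confirm the hostname resolves,
# (2) fetch robots.txt and test whether Googlebot may crawl two sample paths.
# "www.example.com" and the paths are placeholders.
import socket
from urllib.robotparser import RobotFileParser

host = "www.example.com"

# 1) DNS check: a failure here mirrors the DNS errors described above.
try:
    print(f"{host} resolves to {socket.gethostbyname(host)}")
except socket.gaierror as exc:
    print(f"DNS lookup failed: {exc}")

# 2) Robots check: Google postpones crawling if robots.txt is unreachable.
rp = RobotFileParser(f"https://{host}/robots.txt")
rp.read()  # fetches and parses the file
for path in (f"https://{host}/", f"https://{host}/private/"):
    verdict = "may" if rp.can_fetch("Googlebot", path) else "may NOT"
    print(f"Googlebot {verdict} crawl {path}")
```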
That covers crawl errors that concern your whole site. Now let's dig into the crawl errors that can happen on particular pages of your site: URL errors.
URL Errors
A URL error happens when a search engine bot tries, and fails, to crawl a specific page of your website. When we discuss URL errors, we usually start with crawl errors such as (soft) 404 Not Found errors. You should check for errors of this type regularly (use Google Search Console or Bing Webmaster Tools) and fix them. If the page, or the subject of that page, is never coming back to your site, serve a 410 page. If you have similar content on another page, set up a 301 redirect instead. Make sure your sitemap and internal links stay up to date as well.
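To make those two remedies concrete, here is a minimal sketch of what serving a 410 and a 301 could look like in a small Python Flask app. This is an illustration under assumed routes, not anyone's actual setup.

```python
# Minimal sketch: a 410 Gone for a permanently removed page and a
# 301 redirect for content that moved. Assumes Flask; routes are hypothetical.
from flask import Flask, redirect

app = Flask(__name__)

@app.route("/discontinued-product")
def gone():
    # 410 tells crawlers the page was removed on purpose and won't return.
    return "This page has been permanently removed.", 410

@app.route("/old-guide")
def moved():
    # 301 sends visitors (and link value) to the page that replaced it.
    return redirect("/new-guide", code=301)

if __name__ == "__main__":
    app.run()
```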
Many of these URL errors are caused by internal links, which means they are often the website owner's own fault. If you remove a page from your site at some point, adjust or remove any internal links that point to it as well. If such a link stays in place, a bot will find it and follow it, only to hit a dead end (a 404 Not Found error). So revisit your internal links now and then.
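The sketch below is one illustrative way to find such dead ends: it uses requests plus the standard-library HTML parser to collect same-site links from a single page (the start URL is a placeholder) and reports any that return a 404.

```python
# Minimal sketch: find internal links on one page that return 404 Not Found.
# Assumes "requests"; the start URL is a placeholder.
from html.parser import HTMLParser
from urllib.parse import urljoin, urlparse
import requests

START = "https://www.example.com/"

class LinkCollector(HTMLParser):
    def __init__(self):
        super().__init__()
        self.links = []

    def handle_starttag(self, tag, attrs):
        if tag == "a":
            href = dict(attrs).get("href")
            if href:
                self.links.append(urljoin(START, href))

page = requests.get(START, timeout=10)
parser = LinkCollector()
parser.feed(page.text)

site = urlparse(START).netloc
for link in sorted(set(parser.links)):
    if urlparse(link).netloc != site:
        continue  # skip external links
    # HEAD keeps the check lightweight; some servers may prefer GET.
    status = requests.head(link, allow_redirects=True, timeout=10).status_code
    if status == 404:
        print(f"Dead internal link: {link}")
```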
Another common URL error is the one with the words 'submitted URL' in the title. These errors appear when Google detects inconsistent signals. On the one hand, you submitted the URL for indexing, telling Google: "Yes, I want you to index this page." On the other hand, something else tells Google: "No, do not index this page." Perhaps the page is blocked by your robots.txt file, or it is marked 'noindex' by a meta tag or HTTP header. Until you resolve the contradiction, Google will not index your URL.
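Both sources of that "do not index this page" signal are easy to check. The sketch below (assuming requests again; the URL is a placeholder) inspects the X-Robots-Tag response header and does a rough scan of the HTML for a robots meta tag.

```python
# Minimal sketch: check a URL for "noindex" in the X-Robots-Tag header
# or in a robots meta tag. Assumes "requests"; the URL is a placeholder.
import requests

url = "https://www.example.com/some-page"
r = requests.get(url, timeout=10)

header = r.headers.get("X-Robots-Tag", "")
if "noindex" in header.lower():
    print(f"HTTP header blocks indexing: X-Robots-Tag: {header}")

# A crude but serviceable check for <meta name="robots" content="noindex">.
html = r.text.lower()
if 'name="robots"' in html and "noindex" in html:
    print("A robots meta tag appears to block indexing")
```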
Among these common errors, you might also find the occasional DNS error or server error for one specific URL. Check that URL again later to see whether the error persists. If Google Search Console is your main monitoring tool, be sure to use Fetch as Google and mark the error as fixed once it's resolved.
Break it Down
What Are The Different Types Of SEO?
At Syndiket, we believe four types of SEO exist – and we have an acronym to represent those 4 types of SEO. The acronym is T.R.A.P.
“T” stands for Technical, “R” for Relevancy, “A” for Authority, and “P” for Popularity. Search engine optimization has many smaller divisions within the 4 types, but all of them can be placed into one of these 4 buckets.
Generally, technical SEO carries the least importance for ranking local businesses. It has a bare minimum that must be met, which usually includes things like site speed, indexation issues, crawlability, and schema. Once the core technical parts are done, minimal upkeep is required.
Relevancy is one element of SEO's trivium, equal in importance to popularity signals and authority signals. Relevancy signals are based on algorithmic learning principles: for every search a user performs, the URLs that come up for that query are given a relevancy score. The higher the relevancy scores you attain, the greater your aggregated rating becomes in Google's eyes. Digital marketing is a strange thing in 2020, and ranking a website requires the website to be relevant on many fronts.
Google co-founder Larry Page had a unique idea in 1998 which has led to the modern-day Google empire. "PageRank", named after Larry Page himself, was the algorithm that established Google as a search engine giant. The algorithm ranked websites by authority.
Every page of a website has its own authority, and the sum of all pages yields another authority metric. A page's authority is largely determined by how many people link to it (backlinks). The aggregate score of all pages pointing to a domain creates the domain score, which is what Syndiket calls "Domain Rating", per Ahrefs' metrics. The more a site is referenced, the more authority it has. But the real improvement to the algorithm came when Google began to weight authority by its source.
If Tony Hawk endorsed Syndiket for skateboarding, it would carry a lot more authority than 5 random high school kids endorsing Syndiket. This differentiation in authority happened in 2012 with the Penguin update. Authority SEO is complicated but VERY important.
Popularity signals are especially strong for GMB or local SEO, but popularity and engagement are used for all rankings. The goal of this signal is for Google to verify its own algorithm. You can check off all the boxes, but if your content is something real people hate, Google has ways to measure that. Syndiket has proprietary methods of controlling CTR (click-through rate) but we also infuse CRO methods into our work to make sure people actually like the content. Social shares and likes are also included in this bucket.
Very Specific URL Errors
There are some URL errors that apply only to some sites. Therefore, I want to list them separately:
- Malware errors: If you encounter malware errors in your webmaster tools, it means that Bing or Google has detected malware at that URL. This can mean that software was found which is used, for example, “to collect guarded information, or to disrupt their operation in general” (Wikipedia). You must check that page and remove the malware.
- Mobile-specific URL errors: These are crawl errors on a specific page that occur only on modern smartphones. If you have a responsive website, they are unlikely to appear, except perhaps for that piece of Flash content you already wanted to replace anyway. If you run a separate mobile subdomain, such as m.example.com, you can encounter a lot more of them: think of faulty redirects from your desktop site to that mobile site (the sketch after this list shows how to inspect those), or of a line in your robots.txt that blocks the mobile site.
- Google News errors: There are some errors specific to Google News. Google's documentation lists the possible errors, so if your site is in Google News, you may encounter these crawl errors. They range from a missing headline to errors indicating that your page does not appear to contain a news article. If this applies to your site, be sure to check the documentation for yourself.
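For the mobile redirect problem mentioned above, one quick diagnostic is to request a page with a mobile User-Agent and print every redirect hop. The sketch below assumes requests; the URL and User-Agent string are placeholder examples.

```python
# Minimal sketch: follow redirects for a URL while presenting a mobile
# User-Agent, printing each hop. Assumes "requests"; values are placeholders.
import requests

url = "https://www.example.com/some-page"
mobile_ua = "Mozilla/5.0 (iPhone; CPU iPhone OS 13_0 like Mac OS X)"

r = requests.get(url, headers={"User-Agent": mobile_ua}, timeout=10)
for hop in r.history:
    print(f"{hop.status_code} {hop.url} -> {hop.headers.get('Location')}")
print(f"Final: {r.status_code} {r.url}")
```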
Fix your crawl errors now by going through the link below.
Read more: Google Search Console: Crawl » https://www.google.com/webmasters/tools/crawl-stats
source https://www.syndiket.com/services/technical-seo/crawl-errors/