Link Errors

To earn good search engine rankings and give visitors a satisfying experience, you need the links on your website to work correctly. However, a few common link errors can occur when you create or edit links. Below, we explain these errors and how to fix them so your website runs smoothly and is crawled properly.

Page Has No Outgoing Links

If you see a “Page has no outgoing links” error in the Crawl Errors report, it means that Googlebot was unable to follow any links on that page. Here are the main causes and solutions:

  • The page contains links to other pages that are not working correctly. Fix broken links and make sure all links are pointing to valid pages.
  • The page uses JavaScript to create links, which Googlebot may not be able to follow. Either remove the JavaScript or expose the links as plain HTML anchors using a search engine-friendly design.
  • The page contains Flash content that Googlebot can’t access. Either remove the Flash content or use a search engine-friendly design to enable Googlebot to access the content.
  • The page is blocked by robots.txt. Check your robots.txt file to ensure it isn’t blocking Googlebot from crawling your site.
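One way to see what a crawler sees is to inspect the page's raw, unrendered HTML: links created only by JavaScript will not appear there. The sketch below (an illustrative example, not part of any particular crawler's toolkit) uses Python's standard-library `html.parser` to list a page's outgoing links:

```python
from html.parser import HTMLParser

class LinkExtractor(HTMLParser):
    """Collects href values from <a> tags in raw (unrendered) HTML."""
    def __init__(self):
        super().__init__()
        self.links = []

    def handle_starttag(self, tag, attrs):
        if tag == "a":
            href = dict(attrs).get("href")
            if href:
                self.links.append(href)

def outgoing_links(html: str) -> list:
    """Return every href found in the raw HTML of a page."""
    parser = LinkExtractor()
    parser.feed(html)
    return parser.links

# A page whose only navigation is built by JavaScript has no
# outgoing links in its raw HTML -- the situation this error flags.
js_only = '<body><script>location.href="/next";</script></body>'
plain = '<body><a href="/about">About</a></body>'
```

Running `outgoing_links` on the JavaScript-only page returns an empty list, while the plain-HTML page yields its anchor, which is why rendering links as real `<a href>` elements fixes this error.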

Page Has Links to a Broken Page

The “Page has links to a broken page” error in the Crawl Errors report means that Googlebot was able to access the page but not the pages it links to. Here’s what to do depending on the cause:

  • The linked-to pages may be removed or moved. Check to see if the destination pages exist and if they have been moved. If they have, update the links on your page to point to the new location.
  • The linked-to pages may be blocked by robots.txt. Check your robots.txt file to ensure it isn’t blocking Googlebot from crawling the linked-to pages.
  • The linked-to pages may use JavaScript to create links and Googlebot can’t follow them. Remove the JavaScript or make the links visible by using a search engine-friendly design.
  • The linked-to pages may contain Flash content that Googlebot can’t access. Again, remove the Flash content or make the design search engine-friendly.
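To find which outgoing links on a page point at broken destinations, you can check each target's HTTP status. As a minimal sketch (assuming you have already gathered status codes elsewhere, for example with a HEAD request per URL), any 4xx or 5xx response counts as broken:

```python
def broken_link_targets(links, statuses):
    """Return the links whose recorded HTTP status marks the target
    page as broken. `statuses` maps URL -> status code; unknown URLs
    are treated as 404. Codes of 400 and above count as broken."""
    return [url for url in links if statuses.get(url, 404) >= 400]

# Hypothetical crawl data for one page's outgoing links.
page_links = ["/about", "/old-pricing", "/contact"]
crawl_statuses = {"/about": 200, "/old-pricing": 404, "/contact": 200}
```

Here `broken_link_targets(page_links, crawl_statuses)` singles out `/old-pricing`, the link you would update or remove.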

HTTPS Page Has Internal Links to HTTP

The “HTTPS page has internal links to HTTP” error means that your HTTPS page contains links to HTTP pages. This usually happens when you transition your site from HTTP to HTTPS and don’t update your internal links to use the new HTTPS URLs. Here’s how to fix it:

  • Update all links on your HTTPS pages to use HTTPS URLs.
  • Check your robots.txt file to ensure it doesn’t accidentally block Googlebot from crawling your HTTPS pages.
  • Make sure all other resources, such as images, are available over HTTPS.
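Upgrading internal links is mechanical: any link on your own host that still uses the `http` scheme should be rewritten to `https`, while external links are left untouched. A minimal sketch using Python's standard-library `urllib.parse` (the host name here is a placeholder for your own domain):

```python
from urllib.parse import urlsplit, urlunsplit

def upgrade_internal_links(links, site_host):
    """Rewrite internal http:// links to https://; leave external
    links and already-secure links unchanged."""
    fixed = []
    for url in links:
        parts = urlsplit(url)
        if parts.scheme == "http" and parts.netloc == site_host:
            # Keep netloc, path, query, and fragment; swap the scheme.
            fixed.append(urlunsplit(("https",) + tuple(parts)[1:]))
        else:
            fixed.append(url)
    return fixed
```

For example, with `site_host="example.com"`, `http://example.com/a` becomes `https://example.com/a`, while a link to another domain is passed through as-is.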

Page Has Nofollow Incoming Internal Links Only

The “Page has nofollow incoming internal links only” error occurs when a page on your site can only be reached by following a link from another page on your site that uses the nofollow attribute. This is usually not intended, so you’ll need to update your internal linking to fix it.

Here’s how to fix this error:

  • Remove the nofollow attribute from the links pointing to the page.
  • Update the links to use HTTPS URLs if your site is SSL-enabled.
  • Check your robots.txt file to ensure it isn’t accidentally blocking Googlebot from crawling your pages.
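Detecting this condition amounts to checking, for each page, whether every incoming internal link carries `rel="nofollow"`. A small illustrative sketch (the link-graph format here is an assumption, not any particular tool's data model):

```python
def nofollow_only_pages(incoming_links):
    """`incoming_links` maps each page to a list of (source_page, rel)
    pairs, one per internal link pointing at it. Returns the pages
    whose every incoming internal link is marked nofollow."""
    return [
        page for page, links in incoming_links.items()
        if links and all("nofollow" in rel for _, rel in links)
    ]

# Hypothetical internal link graph: /orphan-ish is reachable only
# through a nofollow link, so it triggers this error.
graph = {
    "/orphan-ish": [("/blog", "nofollow")],
    "/about": [("/", ""), ("/blog", "nofollow")],
}
```

Removing the `nofollow` attribute from the `/blog` link to `/orphan-ish` (or adding a followed link from another page) clears the error.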

Page Has Links to Redirects

The “Page has links to redirects” error comes up if a page on your site contains links to redirected pages. This usually happens when you change the URL of a page on your site and don’t update the links pointing to it. Here’s how you can fix this problem:

  • Update the links on your page to point to the new URL.
  • Check your robots.txt file to ensure it isn’t accidentally blocking Googlebot from crawling your pages.
  • Make sure all other resources, such as images, are available at the new URL.
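Fixing links to redirects means pointing each link straight at its final destination. If you keep a map of known redirects (old URL to new URL), you can resolve each link through the map, following chains in case one redirect leads to another. A minimal sketch with hypothetical URLs:

```python
def resolve_redirects(url, redirect_map, max_hops=10):
    """Follow a URL through a map of known redirects (old -> new) so
    a link can be updated to the final destination. `max_hops` guards
    against redirect loops in the map."""
    hops = 0
    while url in redirect_map and hops < max_hops:
        url = redirect_map[url]
        hops += 1
    return url

# A two-step redirect chain: the link should be updated to /pricing.
redirects = {"/pricing-old": "/pricing-2023", "/pricing-2023": "/pricing"}
```

Calling `resolve_redirects("/pricing-old", redirects)` walks the chain to `/pricing`, while URLs absent from the map come back unchanged.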
