Why Are My Pages Discovered But Not Indexed?

“It’s really the quality of your overall website.

And that includes everything from the layout to the design.

Like, how you have things presented on your pages, how you integrate images, how you work with speed, all of those factors kind of come into play there.”

So, review your website with these criteria in mind. How does the quality of your website compare to that of your competitors?

A thorough website audit is a good place to start.

Check For Duplicate Pages

Sometimes, a website might have low-quality or duplicate pages that the website manager has no knowledge of.

For example, a page might be reached via multiple URLs. You might have a “Contact Us” page that exists on both exampledomain.com/contact-us and exampledomain.com/contact-us/.

Googlebot treats the URL with the trailing slash and the URL without it as separate pages if it can reach both and the server returns a 200 status code for each. That is, they are both live pages.

Every page on your site could be duplicated in this same way.
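One way to spot this at scale is to compare your crawled URLs against each other. The sketch below, using only the Python standard library and a placeholder domain, flags pairs of URLs that differ only by a trailing slash:

```python
def trailing_slash_duplicates(urls):
    """Return (without_slash, with_slash) pairs that both appear in `urls`.

    Each pair is a likely duplicate: two live URLs serving the same page.
    The domain is a placeholder, not a real site.
    """
    url_set = set(urls)
    pairs = []
    for url in sorted(url_set):
        if not url.endswith("/") and url + "/" in url_set:
            pairs.append((url, url + "/"))
    return pairs

# URLs as they might appear in a crawl export
crawled = [
    "https://exampledomain.com/contact-us",
    "https://exampledomain.com/contact-us/",
    "https://exampledomain.com/about",
]
print(trailing_slash_duplicates(crawled))
# → [('https://exampledomain.com/contact-us', 'https://exampledomain.com/contact-us/')]
```

You could run a function like this over the URL list exported from any crawling tool to get a quick count of slash-variant duplicates.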

You might also have a lot of URL parameters on your website that you are unaware of. These are URLs that contain “query strings,” such as exampledomain.com/dress?colour=red.

They are usually caused by filtering and sorting options on your website. In an ecommerce website, this might look like a product category page that is filtered down by criteria such as color, and able to be sorted by price.

The main features of the page do not change with this filtering and sorting; only the products listed do. Yet each combination of parameters is technically a separate, crawlable page, and these may be creating a lot of duplicates on your site.
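You can see how quickly parameterized URLs collapse back to a single underlying page. This minimal sketch (placeholder domain and parameter names) strips the query string with Python's standard `urllib.parse`:

```python
from urllib.parse import urlsplit

def canonical_path(url):
    """Drop the query string so filtered/sorted variants collapse
    to the underlying category page. Parameters are illustrative."""
    parts = urlsplit(url)
    return f"{parts.scheme}://{parts.netloc}{parts.path}"

# Three crawlable URLs, one actual page
variants = [
    "https://exampledomain.com/dress?colour=red",
    "https://exampledomain.com/dress?colour=blue&sort=price_asc",
    "https://exampledomain.com/dress",
]
unique_pages = {canonical_path(u) for u in variants}
print(unique_pages)  # a single underlying page
```

Applied to a full crawl export, the ratio of raw URLs to unique paths gives a rough measure of how much parameter-driven duplication Googlebot may be seeing.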

You may think your website only has 100 high-quality pages on it. However, Googlebot may see hundreds of thousands of near-duplicate pages as a result of these technical issues.

Ways To Fix “Discovered – Currently Not Indexed”

Once you have identified the likely causes of your URL not being indexed, you can attempt to fix it.

If your website has duplicate pages, low-quality or scraped content, or other quality issues, that is where to begin.
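For the duplicate-URL cases described above, one common fix is a canonical tag: a hint to Google about which version of a page you want indexed. A minimal example, using a placeholder URL, would sit in the `<head>` of every duplicate variant:

```html
<!-- Placed in the <head> of each duplicate variant (e.g. the versions
     with and without the trailing slash, or with filter parameters),
     all pointing at the one version you want indexed: -->
<link rel="canonical" href="https://exampledomain.com/contact-us/" />
```

Note that Google treats `rel="canonical"` as a strong hint rather than a directive; for true duplicates, a 301 redirect to the preferred URL is an alternative.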

As a side benefit, you are likely to see your rankings improve across your pages as you work to fix these issues.

Signify The Page’s Importance

In the example of our opening question, there is a specific page that Mandeep is struggling to get indexed.

In this scenario, I would suggest trying to bolster the page’s importance in the eyes of the search engines. Give them a reason to crawl it.

Add The Page To The Website’s XML Sitemap

One way of showing Google that it is an important page that deserves to be crawled and indexed is by adding it to your website’s XML sitemap.

This is essentially a signpost to all of the URLs that you believe search bots should crawl.
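A minimal sitemap entry looks like this, with a placeholder URL and date:

```xml
<?xml version="1.0" encoding="UTF-8"?>
<urlset xmlns="http://www.sitemaps.org/schemas/sitemap/0.9">
  <url>
    <loc>https://exampledomain.com/dress</loc>
    <lastmod>2024-01-15</lastmod>
  </url>
</urlset>
```

The sitemap file is usually served at the root of the site (for example, `/sitemap.xml`) and can be submitted to Google via the Sitemaps report in Search Console.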

Remember, Googlebot already knows that the page exists; it just doesn’t believe it is beneficial to crawl and index it.

If it is already in the XML sitemap, do not stop there. Consider these next steps.

Add Internal Links To The Page

Another way to show a page’s importance is by linking to it from internal pages on the site.

For example, you could add the page to your primary navigation, such as the main menu.

Or add contextual links to it from within the copy on other pages on your website. These will signify to Googlebot that it is a significant page on your website.
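A contextual link is simply a normal anchor placed within relevant body copy. In this sketch, the URL and anchor text are placeholders:

```html
<!-- Within the copy of a related page, linking to the page you
     want indexed with descriptive anchor text: -->
<p>
  Pair it with something from our
  <a href="/dress">red dress collection</a> for the full look.
</p>
```

Descriptive anchor text like this also helps Googlebot understand what the linked page is about, not just that it matters.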

Add External Links To The Page

Backlinks are a fundamental part of SEO. We’ve known for a while that Google uses links from other websites to determine a page’s relevance and authority on a subject.

If you struggle to show Google that your page is of enough quality to index, then having external links from reputable, relevant websites pointing to it can give additional reassurance of the page’s value.

For example, if the page you are struggling to get indexed is a specific red dress’s product detail page, then having that dress’s page featured in some fashion blogs may give Google the signal that it is a high-quality page.

Submit It To Be Crawled

Once you have made changes to your website, try resubmitting the page to be crawled via Google Search Console.

If you notice in the Google Search Console “Indexing” report that the URL is still within the “Discovered – currently not indexed” bucket after some time (it can take anywhere from a few days to a few weeks for Google to crawl a submitted page), then you know that you potentially still have some issues with the page.

In Summary

Optimize your website for crawling and indexing. If you do this, you are likely to see those pages move from “Discovered – currently not indexed” to “Indexed.”

Optimizing your particular website will require an in-depth analysis of the overall quality of the site and identifying how to convey the importance of the “Discovered – currently not indexed” pages to Googlebot.

Featured Image: Paulo Bobita/Search Engine Journal
