Google Causes Global SEO Tool Outages

Google cracked down on web scrapers that harvest search results data, triggering global outages at popular rank-tracking tools like SEMRush that depend on fresh data scraped from search results pages.

What happens if Google’s SERPs are completely blocked? Some of the data provided by tracking services has long been extrapolated by algorithms from a variety of data sources, so one possible way around the current block is to extrapolate the data from those other sources.

Blocked SEO Tools

Google’s crackdown is still having a major effect on the freshness of data that SEO tools are able to deliver. Many SEO tools are experiencing outages of data that is normally obtained by scraping Google’s search engine results pages (SERPs).

@RyanJones, who operates several tools, tweeted an update today:

“Definitely affecting my tools as well – as we use a 3rd party data supplier and ALL the major ones were blocked yesterday. Many still are”

@seovision tweeted observations in Spanish (translated below), using a Spanish colloquialism that describes Google as a dog guarding vegetables it won’t eat: blocking the gardener who wants them and leaving the resource inaccessible to everyone.

“Since yesterday it seems that they have put in place a new anti-scraping system also in SERPs, which is stricter. They are getting very tough on scraping. …Like the gardener’s dog, I won’t sell you the data or let you get it.”

SEMRush is likely the most widely used tool whose data has not been refreshed. SE Ranking, another popular SEO tool, is also experiencing a loss of fresh data.

@LauraChiocciora posted a screenshot of a message received from the SE Ranking tool indicating that position tracking is back online but that SERP Features tracking is still unavailable because of “technical issues.”

The full message in the screenshot is:

“Position tracking is back online. SERP Features tracking is still not available due to technical issues. Our team is already working on resolving the problem and providing you with the data as soon as possible.”

SERP Scraping Prohibited By Google

Google’s guidelines have long prohibited automated rank checking in the search results, yet Google has apparently allowed many companies to scrape its search results and charge for access to the ranking data for keyword and rank tracking.

According to Google’s guidelines:

“Machine-generated traffic (also called automated traffic) refers to the practice of sending automated queries to Google. This includes scraping results for rank-checking purposes or other types of automated access to Google Search conducted without express permission. Machine-generated traffic consumes resources and interferes with our ability to best serve users. Such activities violate our spam policies and the Google Terms of Service.”

Related: 13 Black Hat Techniques That Can Harm An SEO Campaign

Blocking Scrapers Is Complex

Blocking scrapers is highly resource intensive, especially because scrapers can respond to blocks by changing their IP address and user agent to get past them. Another approach is to target specific behaviors, such as how many pages a user requests: excessive page requests can trigger a block. The problem with that approach is that keeping track of all the blocked IP addresses, which can quickly number in the millions, itself becomes resource intensive.
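The behavior-based approach described above can be sketched in a few lines. This is a hypothetical illustration, not Google’s actual system: a sliding-window rate limiter that tracks request timestamps per IP and blocks any IP that exceeds a threshold. The class and parameter names are invented for the example; note how the per-IP queues and the blocked set grow with each scraper, which is exactly the memory cost the paragraph above describes.

```python
import time
from collections import defaultdict, deque

class RateLimiter:
    """Toy sliding-window rate limiter keyed by IP address.

    Blocks an IP once it exceeds max_requests within window_seconds.
    Real anti-scraping systems are far more elaborate (fingerprinting,
    behavioral scoring, CAPTCHAs), but the bookkeeping cost is similar.
    """

    def __init__(self, max_requests=100, window_seconds=60):
        self.max_requests = max_requests
        self.window = window_seconds
        self.requests = defaultdict(deque)  # ip -> recent request timestamps
        self.blocked = set()                # grows with every blocked scraper

    def allow(self, ip, now=None):
        if now is None:
            now = time.time()
        if ip in self.blocked:
            return False
        q = self.requests[ip]
        # Drop timestamps that have aged out of the sliding window.
        while q and now - q[0] > self.window:
            q.popleft()
        q.append(now)
        if len(q) > self.max_requests:
            # Excessive page requests: block this IP from now on.
            self.blocked.add(ip)
            return False
        return True
```

A scraper that rotates IPs defeats this entirely, since each new address starts with an empty history, which is why IP-based blocking alone is considered a weak defense.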
