Google explains:
“Individual Google crawlers and fetchers may or may not make use of caching, depending on the needs of the product they’re associated with. For example, Googlebot supports caching when re-crawling URLs for Google Search, and Storebot-Google only supports caching in certain conditions.”
Guidance On Implementation
Google’s new documentation recommends contacting hosting or CMS providers for assistance. It also suggests (but doesn’t require) that publishers set the max-age field of the Cache-Control response header, which signals how long a response can be considered fresh and helps crawlers decide when to re-crawl specific URLs.
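As an illustration, a server that wants to signal a response is fresh for one day could send a header like the one below. The one-day value (86400 seconds) is an arbitrary example for this sketch, not a figure from Google’s documentation:

```http
HTTP/1.1 200 OK
Content-Type: text/html; charset=utf-8
Cache-Control: max-age=86400
```

How the max-age is actually set depends on the server or CMS, which is why Google points publishers to their hosting or CMS providers for implementation help.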
Entirely New Blog Post
Google has also published a brand new blog post:
Crawling December: HTTP caching
Read the updated documentation:
HTTP Caching
Featured Image by Shutterstock/Asier Romero