First, I found some resources online here and here saying roughly the same thing:

For a normal/soft reload, the browser will re-validate the cache, checking to see if the files have been modified.

I tested this in Chrome. I have a webpage index.html which loads a few JavaScript files at the end of body. When hitting the refresh button (soft/normal), I saw in the network panel that index.html was 304 Not Modified, which was good. However, all the JavaScript files were loaded from memory cache with status code 200. No revalidation!

Then I tried modifying one of the JavaScript files and did a soft reload. And guess what? That file was still loaded from memory cache!

Why does Chrome do this? Doesn't that defeat the purpose of the refresh button?

Here is more information about Chrome's memory cache.
- How are you serving the files? If your server is adding cache control headers that may be the cause. I would check the Network tab for the cached assets and review their headers. – Rob M. Commented Aug 23, 2017 at 1:01
- "Doesn't that defeat the purpose of the refresh button?" Not really, or that would defeat the purpose of a hard refresh. – Keith Commented Aug 23, 2017 at 1:07
- @Rob The resources I linked in the beginning say that a soft reload will "re-validate" the cache, even if the cache is not expired. If you open the page through the address bar, it will not re-validate the cache if it's not expired. See here. So cache control shouldn't be the cause, right? – Shawn Commented Aug 23, 2017 at 1:08
- I personally never let the browser control my website caching needs. For javascript files it's easy to version them; in the simplest form you can just put a query param on them, e.g.
<script src="/js/boot.js?ver=1"/>
But better than this is to make it automatic from your build tool. I do this using webpack, and my URLs have the webpack hashes on them (see the sketch below). – Keith Commented Aug 23, 2017 at 1:13
- If the html doesn't get reloaded/updated, then the users still get the old javascript file, don't they? – Shawn Commented Aug 23, 2017 at 1:15
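A minimal sketch of the webpack approach Keith describes, assuming webpack 5's [contenthash] placeholder (file names and paths here are illustrative):

// webpack.config.js — emit content-hashed bundle names so the URL
// changes whenever the file contents change
const path = require('path');

module.exports = {
  entry: './src/boot.js',
  output: {
    path: path.resolve(__dirname, 'dist'),
    filename: '[name].[contenthash].js', // e.g. main.3f4a9b2c.js
  },
};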
2 Answers
This is a relatively new behaviour, introduced by the Chrome browser in 2017.
The well-known behaviour of browsers is to revalidate a cached resource when the user refreshes the page (either with the CTRL+R combination or the dedicated refresh button) by sending an If-Modified-Since or If-None-Match header. It works for all resources obtained by GET requests: stylesheets, scripts, HTML documents, etc. This leads to tons of HTTP requests that in the majority of cases end with 304 Not Modified responses.
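As an illustration, such a revalidation round trip looks roughly like this on the wire (the URL and ETag value are made up):

GET /js/app.js HTTP/1.1
Host: example.com
If-None-Match: "5e1f-abc123"

HTTP/1.1 304 Not Modified
ETag: "5e1f-abc123"

No response body is sent with the 304; the browser keeps serving its cached copy.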
The most popular websites are the ones with constantly changing content, so their users tend to refresh them habitually to get the latest news, tweets, videos and posts. It's not hard to imagine how many unnecessary requests were made every second, and since the best request is the one never made, Facebook decided to address this problem and asked Chrome and Firefox to find a solution together.
Chrome came up with the described solution.
Instead of revalidating each subresource, it only checks whether the HTML document changed. If it didn't, it's very likely that everything else wasn't modified either, so it's returned from the browser's cache. This works best when each resource has a content-addressed URL; for example, the URL contains a hash of the file's contents. Users can always overcome this behaviour by performing a hard refresh.
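For example, a content-addressed script URL might look like this (the hash value is hypothetical); when the file changes, the hash — and therefore the URL — changes, so the stale cache entry is simply never requested again:

<script src="/static/app.3f4a9b2c.js"></script>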
Firefox's solution gives more control to developers, and it's well on its way to being implemented by all browser vendors. That is the new Cache-control directive: immutable.
You can find more information about it here: https://developer.mozilla.org/pl/docs/Web/HTTP/Headers/Cache-Control#Revalidation_and_reloading
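In practice, immutable is combined with a long lifetime, telling the browser to skip revalidation for that resource entirely, even on a soft reload:

Cache-Control: public, max-age=31536000, immutable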
Resources:
- Facebook's article about the motivation behind the proposed change, with numbers and comparisons: https://code.fb.com/web/this-browser-tweak-saved-60-of-requests-to-facebook/
- The Chromium team introducing the new behaviour: https://blog.chromium.org/2017/01/reload-reloaded-faster-and-leaner-page_26.html
Browser caches are a little more complex than the simple 200s and 304s they once were, and they pay attention to server-side directives in headers that tell them how to handle caching for each specific site.
We can adjust the browser caching profiles using various headers (such as Cache-Control). By specifically setting the time before expiry, you can tell a browser to use the local copy instead of requesting a fresh one; these settings can be quite aggressive for content you really don't want changed (e.g. a company's logo), by doing something like Cache-Control: public, max-age=31536000.
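Such a response might look like this in full (the headers are illustrative):

HTTP/1.1 200 OK
Content-Type: image/png
Cache-Control: public, max-age=31536000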
Additionally, you can set the Expires header, which allows you to do almost the same as Cache-Control but with a little less control. It just sets the amount of time to pass before the browser considers an asset stale and re-requests it, although with a re-request we could still get a cached result if the 304 Not Modified response code is sent back from the server.
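Unlike max-age, Expires carries an absolute HTTP date rather than a relative lifetime, for example (the date is arbitrary):

Expires: Wed, 01 Jan 2025 00:00:00 GMT

Note that when both are present, Cache-Control's max-age takes precedence over Expires.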
A lot of web servers have settings enabled that allow more aggressive caching of certain asset files (JS, images, CSS) but less aggressive caching of content files, as sketched below.
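A rough sketch of that idea in Node (a hypothetical server, not production code): long-lived caching for static assets, forced revalidation for everything else:

// Illustrative only: pick a caching policy based on the file type
const http = require('http');

http.createServer((req, res) => {
  if (/\.(js|css|png|jpg|gif)$/.test(req.url)) {
    // versioned assets: cache for up to a year
    res.setHeader('Cache-Control', 'public, max-age=31536000');
  } else {
    // HTML and other content: revalidate on every use
    res.setHeader('Cache-Control', 'no-cache');
  }
  res.end('...'); // actual file serving omitted
}).listen(8080);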