  • I updated my question to note that I tried no-cache, but I'm still seeing max-age=3600. Does the old file need to expire before the new no-cache file takes over? Commented Dec 21, 2012 at 20:25
  • @mike it would be nice to have a feature to invalidate/flush the cache, like on a CDN.
    – themihai
    Commented Dec 19, 2013 at 20:10
  • @mihai - it would be difficult to provide a cache invalidation feature because, once an object is served with a non-zero cache TTL, any cache on the Internet (not just those under Google's control) is allowed (per the HTTP spec) to cache the data. Commented Jun 20, 2014 at 22:11
  • After discussing with @aqquadro I figured out the issue: cloud.google.com/storage/docs/gsutil/addlhelp/… incorrectly stated that uploading a non-public object and then setting the ACL to public-read would result in a non-cacheable object. In fact, the HTTP spec allows public objects to be cached by default, so to inhibit caching you need to set a Cache-Control header explicitly, for example with: gsutil -h "Cache-Control:private" cp -a public-read file.png gs://your-bucket. (I'll also fix the incorrect documentation.) Commented Jun 16, 2015 at 16:13
  • @Pier - not in one request. You'd have to run something like: gsutil -m setmeta -h "Cache-Control:no-cache, no-store, must-revalidate" gs://your-bucket/**, which makes a request for every object in the bucket. Commented Jun 10, 2016 at 22:32