[Solved] Google Cloud Storage file stuck in time after multiple updates/deletions

28👍

✅

GCS provides strong read-after-write consistency. If you upload a new version of an object and then download it back, you will always get the new version of the object.

However, by default, when a publicly readable object is fetched, GCS responds with a “Cache-Control” header allowing caching for up to 1 hour. Google itself, your web browser, or some firewall between you and Google may decide based on that header to cache your object. You can check to see if this is the problem by fetching the object again and tacking some bogus URL parameter onto the end of the request, like “?avoidTheCaches=1”.
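For example, here is a minimal sketch using Python's requests library (the bucket and object names are placeholders) that fetches a public object with a throwaway query parameter and shows the Cache-Control header it comes back with:

```python
import requests

# Placeholder public object URL; substitute your own bucket and object.
url = "https://storage.googleapis.com/my-bucket/my-object.txt"

# The meaningless extra query parameter makes the URL look new to any
# intermediate cache, so the response should come from GCS itself.
fresh = requests.get(url, params={"avoidTheCaches": "1"})

print(fresh.headers.get("Cache-Control"))
print(fresh.text[:200])
```

If the content fetched this way is current but the plain URL still returns the old version, a cache somewhere along the path is the culprit.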

You can prevent this from happening by setting the object’s “cacheControl” property to something like “private” or “max-age=0”. You can set this property as part of an upload. You can also modify the property for an existing object, but if the resource has already been cached, you’ll have to wait out the hour before you’ll see the new version.
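A minimal sketch with the google-cloud-storage Python client, assuming placeholder bucket, object, and file names, that sets the property at upload time:

```python
from google.cloud import storage

client = storage.Client()
bucket = client.bucket("my-bucket")      # placeholder bucket name
blob = bucket.blob("my-object.txt")      # placeholder object name

# Setting cache_control before the upload writes the Cache-Control
# metadata together with the object, so nothing holds on to it for
# the default hour.
blob.cache_control = "private, max-age=0"
blob.upload_from_filename("local-file.txt")
```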

0👍

You can go to the object in the Cloud Console, click "Edit metadata", and enter the value you want in the "Cache-Control" field. If you want no caching at all, use the value no-store.

Documentation on the allowed values is in the Cloud Storage object metadata documentation.
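If you prefer to script the same metadata change rather than use the console, here is a rough sketch with the google-cloud-storage Python client (bucket and object names are placeholders) that patches an existing object's Cache-Control to no-store:

```python
from google.cloud import storage

client = storage.Client()
bucket = client.bucket("my-bucket")          # placeholder bucket name

# Load the existing object's metadata, change only Cache-Control,
# and send the update with patch().
blob = bucket.get_blob("my-object.txt")      # placeholder object name
blob.cache_control = "no-store"
blob.patch()
```

As noted in the accepted answer, anything that has already cached the object may keep serving the stale copy until the original max-age expires.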

👤Max Bileschi
