[Solved] Collectstatic and S3: not finding updated files

8👍

I think I solved this problem. Like you, I spent many hours on it. I am also on the bug report you found on Bitbucket. Here is what I just worked out.

I had

django-storages==1.1.8
Collectfast==0.1.11

This combination does not work at all. Deleting everything and starting over does not work either: after that, it cannot pick up modifications and refuses to update anything.

The problem is the time zone. S3 reports that the files it holds were last modified later than the ones we want to upload, so Django's collectstatic will not try to copy the new ones over at all; it calls the files “unmodified”. For example, here is what I saw before my fix:

Collected static files in 0:00:45.292022.
Skipped 407 already synced files.
0 static files copied, 1 unmodified.
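To see why the skew makes everything look “unmodified”, here is a minimal, self-contained sketch of the failing comparison. The timestamps are made up for illustration; this is not django-storages code:

from datetime import datetime, timezone

# The same kind of situation, seen two ways: the local mtime is read off a
# naive local clock (EST, UTC-5), while S3's Last-Modified is UTC.
local_mtime = datetime(2015, 6, 1, 12, 0, 0)            # naive, EST local clock
s3_last_modified = datetime(2015, 6, 1, 15, 30, 0,      # an *earlier* upload, in UTC
                            tzinfo=timezone.utc)

# A comparison that ignores the 5-hour offset sees 12:00 < 15:30 and concludes
# the local file is older, so it gets skipped as "unmodified" -- even though
# 12:00 EST is really 17:00 UTC, newer than the S3 copy.
if local_mtime < s3_last_modified.replace(tzinfo=None):
    print('Skipped: "unmodified"')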

My solution is “to hell with modified time!” Besides the time-zone problem we are solving here, what if I make a mistake and need to roll back? A modified-time check will refuse to deploy the older static files and leave my website broken.

Here is my pull request to Collectfast: https://github.com/FundedByMe/collectfast/pull/11. I left a flag in, so if you really want to check modified time you still can. Until it gets merged, just use my code at https://github.com/sunshineo/collectfast
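The idea behind the flag is to decide “changed or not” by content rather than by clock. Here is a rough sketch of checksum comparison with boto3; it is not the actual Collectfast code, and it assumes the object was uploaded in a single part without server-side encryption, so that its ETag is a plain MD5:

import hashlib

import boto3
from botocore.exceptions import ClientError

def needs_upload(bucket_name, key, local_path):
    # Sketch only, not Collectfast's code. Compares the local file's MD5
    # against the S3 ETag; valid when the ETag is a plain MD5 (single-part,
    # unencrypted upload).
    s3 = boto3.client('s3')
    with open(local_path, 'rb') as f:
        local_md5 = hashlib.md5(f.read()).hexdigest()
    try:
        head = s3.head_object(Bucket=bucket_name, Key=key)
    except ClientError:
        return True                     # not on S3 yet, so upload it
    return head['ETag'].strip('"') != local_md5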

Have a nice day!

–Gordon
PS: Stayed up till 4:40am for this. My day is ruined for sure.

6👍

After hours of digging around, I found this bug report.

I changed my requirements to revert to a previous version of django-storages:

django-storages==1.1.5
👤Joker

3👍

You might want to consider using this plugin written by antonagestam on GitHub:
https://github.com/FundedByMe/collectfast

It compares file checksums, which is a reliable way of determining when a file has changed. It’s the accepted answer at this other Stack Overflow question: Faster alternative to manage.py collectstatic (w/ s3boto storage backend) to sync static files to s3?
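For the Collectfast versions of that era, wiring it up was roughly the following settings change. Setting names vary between versions, so treat this as a sketch and check the project’s README for what your version expects:

# settings.py -- sketch for an older Collectfast; check the README for the
# exact settings your version expects.
INSTALLED_APPS = (
    'collectfast',                    # must come before staticfiles so it can
    'django.contrib.staticfiles',     # override the collectstatic command
    # ... your apps ...
)

STATICFILES_STORAGE = 'storages.backends.s3boto.S3BotoStorage'
AWS_PRELOAD_METADATA = True           # Collectfast relies on cached metadata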

2👍

There are some good answers here, but I spent some time on this today, so I figured I’d contribute one more in case it helps someone in the future. Following advice found in other threads, I confirmed that, for me, this was indeed caused by a time-zone difference: my Django time wasn’t wrong, but it was set to EST while S3 uses GMT. In testing, reverting to django-storages 1.1.5 did seem to get collectstatic working. Partly out of personal preference, though, I was unwilling to a) roll back three versions of django-storages and lose any potential bug fixes, or b) alter time zones for components of my project over what essentially boils down to a convenience function (albeit an important one).

I wrote a short script to do the same job as collectstatic without the aforementioned alterations. It will need a little modification for your app, but it should work for standard cases if it is placed at the project level (the directory containing your apps) and ‘static_dirs’ is replaced with the names of your project’s apps. It is run from the terminal with ‘python whatever_you_call_it.py -e environment_name’ (the environment name selects the AWS bucket).

import os
import argparse
from datetime import datetime, timezone

from boto3.session import Session

DEV_BUCKET_NAME = 'dev-homfield-media-root'
PROD_BUCKET_NAME = 'homfield-media-root'
static_dirs = ['accounts', 'messaging', 'payments', 'search', 'sitewide']

def main():
    try:
        parser = argparse.ArgumentParser(description='Homfield Collectstatic. Our version of collectstatic to work around the django-storages bug.\n')
        parser.add_argument('-e', '--environment', type=str, required=True, help='Name of environment (dev/prod)')
        args = parser.parse_args()
        if args.environment == 'dev':
            selected_bucket = DEV_BUCKET_NAME
            print("\nAre you sure? You're about to push to the DEV bucket. (Y/n)")
        elif args.environment == 'prod':
            selected_bucket = PROD_BUCKET_NAME
            print("Are you sure? You're about to push to the PROD bucket. (Y/n)")
        else:
            raise ValueError("environment must be 'dev' or 'prod'")

        acceptable = ['Y', 'y', 'N', 'n']
        confirmation = input().strip()
        while confirmation not in acceptable:
            print("That's an invalid response. (Y/n)")
            confirmation = input().strip()

        if confirmation in ('Y', 'y'):
            run(selected_bucket)
        else:
            print("Collectstatic aborted.")
    except Exception as e:
        print(type(e))
        print("An error occurred. S3 staticfiles may not have been updated.")


def run(bucket_name):

    # open a session with S3 (fill in your own credentials)
    session = Session(aws_access_key_id='{aws_access_key_id}',
        aws_secret_access_key='{aws_secret_access_key}',
        region_name='us-east-1')
    s3 = session.resource('s3')
    bucket = s3.Bucket(bucket_name)

    # list the bucket once up front instead of re-listing it for every file
    existing = {obj.key: obj for obj in bucket.objects.all()}

    # loop through static directories
    for directory in static_dirs:
        root_dir = './' + directory + '/static'
        print('Checking directory: %s' % root_dir)

        # walk every file under the static directory
        for dir_name, subdir_list, file_list in os.walk(root_dir):
            for fname in file_list:
                full_path = dir_name + '/' + fname
                try:
                    if fname == '.DS_Store':
                        continue

                    # file mtime as an aware UTC datetime, so it compares
                    # correctly against S3's UTC Last-Modified timestamp
                    # (no hardcoded hour offset, and DST is handled for free)
                    file_last_mod = datetime.fromtimestamp(
                        os.path.getmtime(full_path), tz=timezone.utc)

                    # the S3 key is the path from 'static' onwards
                    s3_path = full_path[full_path.find('static'):]
                    key = existing.get(s3_path)
                    if key is None:
                        # not on S3 yet, so it is new: send it up
                        print("\tFound a new file. Uploading : " + full_path)
                        upload(s3, bucket_name, s3_path, full_path)
                    elif key.last_modified < file_last_mod:
                        # put() overwrites in place; no separate delete needed
                        upload(s3, bucket_name, s3_path, full_path)
                        print("\tUpdated : " + full_path)
                except Exception as e:
                    print("ALERT: problem with " + full_path + ": " + repr(e))


def upload(s3, bucket_name, s3_path, full_path):
    # read in binary mode so images and fonts survive the trip
    with open(full_path, 'rb') as f:
        s3.Object(bucket_name, s3_path).put(
            Body=f, ContentType=get_mime_type(full_path))


def get_mime_type(full_path):
    # map the file extension to a MIME type, defaulting to octet-stream
    _, extension = os.path.splitext(full_path)
    mime_type = {
        '.js' : 'application/javascript',
        '.css' : 'text/css',
        '.txt' : 'text/plain',
        '.png' : 'image/png',
        '.jpg' : 'image/jpeg',
        '.jpeg' : 'image/jpeg',
        '.eot' : 'application/vnd.ms-fontobject',
        '.svg' : 'image/svg+xml',
        '.ttf' : 'application/octet-stream',
        '.woff' : 'application/x-font-woff',
        '.woff2' : 'application/octet-stream'
    }.get(extension)
    if mime_type is None:
        print("ALERT: Couldn't match MIME type for " + full_path + ". Sending to S3 as application/octet-stream.")
        return 'application/octet-stream'
    return mime_type

if __name__ == '__main__':
    main()
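As a design note, the hand-rolled extension table above could be replaced with Python’s standard library, which covers far more extensions. A minimal sketch of that alternative:

import mimetypes

def get_mime_type(full_path):
    # guess_type returns (type, encoding); fall back to octet-stream
    mime_type, _ = mimetypes.guess_type(full_path)
    return mime_type or 'application/octet-stream'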

0👍

I had a similar problem pushing new files to an S3 bucket (it had previously worked well), but it turned out not to be a Django or Python problem at all. On my end, I fixed the issue by deleting my local repository and cloning it again.
