[Fixed] AWS Elastic Beanstalk Container Commands Failing


Finally got to the bottom of it all, after deep-diving through the AWS docs and forums…

Essentially, a lot changed when Beanstalk moved from Amazon Linux to Amazon Linux 2, and many of these changes are only vaguely mentioned here.

One major difference for the Python platform, as mentioned in the link above, is that "the path to the application’s directory on Amazon EC2 instances of your environment is /var/app/current. It was /opt/python/current/app on Amazon Linux AMI platforms." This is crucial when you’re writing the Django migrate scripts, as I’ll explain in detail below, or when you `eb ssh` into the Beanstalk instance and navigate it yourself.

Another major difference is the introduction of platform hooks, which are covered in this wonderful article here. According to that article, "Platform hooks are a set of directories inside the application bundle that you can populate with scripts." Essentially, these scripts now handle what `container_commands` previously handled in the `.ebextensions` config files. Here is the directory structure of these platform hooks:
*(Image: platform hooks directory structure)*
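Concretely, the layout looks like this (a sketch of the standard hook directories from the Beanstalk docs — only the `predeploy` stage is used in this answer, and scripts in each directory run in alphabetical order):

```
.platform/
└── hooks/
    ├── prebuild/
    ├── predeploy/
    │   └── 01_migrations.sh
    └── postdeploy/
```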

Knowing this, and walking through this forum here, where wonderful community members went through the trouble of filling in the gaps in Amazon’s docs, I was able to successfully deploy with the following file set up:

(Please note that "MDGOnline" is the name of my Django app)


These go in a config file under `.ebextensions/` (the nesting below is the standard Elastic Beanstalk config layout):

```yaml
packages:
  yum:
    git: []
    postgresql-devel: []
    libjpeg-turbo-devel: []

container_commands:
  01_sh_executable:
    command: find .platform/hooks/ -type f -iname "*.sh" -exec chmod +x {} \;

option_settings:
  aws:elasticbeanstalk:environment:proxy:staticfiles:
    /static: static
    /static_files: static_files
  aws:elasticbeanstalk:container:python:
    WSGIPath: MDGOnline.wsgi:application
```



This is `.platform/hooks/predeploy/01_migrations.sh`:

```bash
#!/bin/bash

source /var/app/venv/*/bin/activate
cd /var/app/staging

python manage.py makemigrations
python manage.py migrate
python manage.py createfirstsuperuser
python manage.py collectstatic --noinput
```

(The `*` glob matches the generated virtualenv directory name, so the script doesn’t need to hard-code it. `createfirstsuperuser` is a custom management command in my app.)

Please note that the `.sh` scripts need Unix (LF) line endings. I ran into an error for a while where the deployment would fail with this message in the logs: `.platform\hooks\predeploy\01_migrations.sh failed with error fork/exec .platform\hooks\predeploy\01_migrations.sh: no such file or directory`.
It turned out this was because I had created the script in my Windows dev environment, which saves files with CRLF line endings. My solution was to create it in a Linux environment and copy it over to my dev directory on Windows, but there are tools that convert DOS line endings to Unix. This one looks promising: dos2unix!
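If you can’t run dos2unix, stripping the CRLF endings yourself is simple; a minimal sketch in Python (the hook path in the example is just an illustration):

```python
from pathlib import Path

def to_unix_line_endings(path):
    """Rewrite a file in place, replacing Windows CRLF line endings with LF."""
    p = Path(path)
    p.write_bytes(p.read_bytes().replace(b"\r\n", b"\n"))

# Example:
# to_unix_line_endings(".platform/hooks/predeploy/01_migrations.sh")
```

Working on the raw bytes (rather than text mode) avoids any re-encoding of the file’s contents.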

I really wish AWS could document this migration better, but I hope this answer can save someone the countless hours I spent getting this deployment to succeed.

Please feel free to ask me for clarification on any of the above!

EDIT: I’ve added a `container_commands` entry to my config file above, as it was brought to my attention that another user also encountered a "permission denied" error for the platform hook when deploying. The `01_sh_executable` command chmods all of the `.sh` scripts within the app’s hooks directory, so that Elastic Beanstalk has the proper permission to execute them during the deployment process. I found this container command solution in this forum here:
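The effect of that find/chmod command can be checked locally; a small sketch (the `/tmp` scratch path and stand-in hook script are just for the demo):

```shell
# Scratch copy of the hooks layout, to show what 01_sh_executable does.
demo=/tmp/eb-hooks-demo
mkdir -p "$demo/.platform/hooks/predeploy"
printf '#!/bin/bash\necho migrated\n' > "$demo/.platform/hooks/predeploy/01_migrations.sh"

# The same command as in the config: mark every hook script executable.
find "$demo/.platform/hooks/" -type f -iname "*.sh" -exec chmod +x {} \;

test -x "$demo/.platform/hooks/predeploy/01_migrations.sh" && echo "hooks are executable"
```

If you’re committing from Windows, you can also store the executable bit in git itself with `git update-index --chmod=+x <script>`; the container command makes the deploy robust either way.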


This might work

```yaml
option_settings:
  aws:elasticbeanstalk:container:python:
    WSGIPath: mysite.wsgi:application
  aws:elasticbeanstalk:environment:proxy:staticfiles:
    /static: static

packages:
  yum:
    python3-devel: []
    mariadb-devel: []

container_commands:
  01_collectstatic:
    command: "source /var/app/venv/staging-LQM1lest/bin/activate && python manage.py collectstatic --noinput"
  02_migrate:
    command: "source /var/app/venv/staging-LQM1lest/bin/activate && python manage.py migrate --noinput"
    leader_only: true
```

(The command names `01_collectstatic` and `02_migrate` are placeholders; any names work, and container commands run in alphabetical order.)


This works for me.

```yaml
container_commands:
  01_migrate:
    command: "source /var/app/venv/*/bin/activate && python /var/app/staging/manage.py migrate --noinput"
    leader_only: true
```

(`01_migrate` is a placeholder name; the `*` glob avoids hard-coding the generated virtualenv directory.)
