How to run a celery worker on AWS Elastic Beanstalk?

Versions:

  • Django 1.9.8
  • celery 3.1.23
  • django-celery 3.1.17
  • Python 2.7

I'm trying to run my celery worker on AWS Elastic Beanstalk. I use Amazon SQS as a celery broker.

Here is my settings.py:

INSTALLED_APPS += ('djcelery',)
import djcelery
djcelery.setup_loader()
BROKER_URL = "sqs://%s:%s@" % (AWS_ACCESS_KEY_ID, AWS_SECRET_ACCESS_KEY.replace('/', '%2F'))
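Note that `.replace('/', '%2F')` only escapes slashes, but an AWS secret key can also contain other reserved characters such as `+`. A safer sketch (the credential values below are placeholders, not real keys) percent-encodes both parts with `urllib.quote`:

```python
# Percent-encode AWS credentials before embedding them in the broker URL.
# The key values below are placeholders, not real credentials.
try:
    from urllib import quote           # Python 2, as used in the question
except ImportError:
    from urllib.parse import quote     # Python 3 fallback

AWS_ACCESS_KEY_ID = 'AKIAEXAMPLEKEY'
AWS_SECRET_ACCESS_KEY = 'abc/def+ghi'  # contains reserved characters

# safe='' forces '/' and '+' to be escaped as %2F and %2B
BROKER_URL = "sqs://%s:%s@" % (quote(AWS_ACCESS_KEY_ID, safe=''),
                               quote(AWS_SECRET_ACCESS_KEY, safe=''))
```

This keeps the URL parseable no matter which characters AWS puts in the generated secret.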

When I type the line below in a terminal, it starts the worker locally. I've also created a few tasks, and they execute correctly. How can I do this on AWS EB?

python manage.py celery worker --loglevel=INFO

I've found this question on StackOverflow. It says I should add a celery config to the .ebextensions folder, which executes a script after deployment. But it doesn't work. I'd appreciate any help. After installing supervisor, I didn't do anything with it; maybe that's what I'm missing. Here is the script.

files:
  "/opt/elasticbeanstalk/hooks/appdeploy/post/run_supervised_celeryd.sh":
    mode: "000755"
    owner: root
    group: root
    content: |
      #!/usr/bin/env bash

      # Get django environment variables
      celeryenv=`cat /opt/python/current/env | tr '\n' ',' | sed 's/export //g' | sed 's/$PATH/%(ENV_PATH)s/g' | sed 's/$PYTHONPATH//g' | sed 's/$LD_LIBRARY_PATH//g'`
      celeryenv=${celeryenv%?}

      # Create celery configuration script
      celeryconf="[program:celeryd]
      command=/opt/python/run/venv/bin/celery worker --loglevel=INFO

      directory=/opt/python/current/app
      user=nobody
      numprocs=1
      stdout_logfile=/var/log/celery-worker.log
      stderr_logfile=/var/log/celery-worker.log
      autostart=true
      autorestart=true
      startsecs=10

      ; Need to wait for currently executing tasks to finish at shutdown.
      ; Increase this if you have very long running tasks.
      stopwaitsecs = 600

      ; When resorting to send SIGKILL to the program to terminate it
      ; send SIGKILL to its whole process group instead,
      ; taking care of its children as well.
      killasgroup=true

      ; if rabbitmq is supervised, set its priority higher
      ; so it starts first
      ; priority=998

      environment=$celeryenv"

      # Create the celery supervisord conf script
      echo "$celeryconf" | tee /opt/python/etc/celery.conf

      # Add configuration script to supervisord conf (if not there already)
      if ! grep -Fxq "[include]" /opt/python/etc/supervisord.conf
          then
          echo "[include]" | tee -a /opt/python/etc/supervisord.conf
          echo "files: celery.conf" | tee -a /opt/python/etc/supervisord.conf
      fi

      # Reread the supervisord config
      supervisorctl -c /opt/python/etc/supervisord.conf reread

      # Update supervisord in cache without restarting all services
      supervisorctl -c /opt/python/etc/supervisord.conf update

      # Start/Restart celeryd through supervisord
      supervisorctl -c /opt/python/etc/supervisord.conf restart celeryd
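For reference, the pipeline at the top of that script turns Elastic Beanstalk's `/opt/python/current/env` file into the comma-separated `environment=` value that supervisord expects. Here is a standalone sketch of what it does, run against a fabricated sample file (the variable values are made up):

```shell
# Build a throwaway stand-in for /opt/python/current/env (values are made up)
cat > /tmp/sample_env <<'EOF'
export DJANGO_SETTINGS_MODULE="settings"
export PATH="/opt/python/run/venv/bin:$PATH"
EOF

# Same pipeline as the hook: newlines -> commas, strip "export ",
# rewrite the literal $PATH reference into supervisord's %(ENV_PATH)s syntax
celeryenv=`cat /tmp/sample_env | tr '\n' ',' | sed 's/export //g' | sed 's/$PATH/%(ENV_PATH)s/g'`
celeryenv=${celeryenv%?}   # drop the trailing comma

echo "$celeryenv"
# -> DJANGO_SETTINGS_MODULE="settings",PATH="/opt/python/run/venv/bin:%(ENV_PATH)s"
```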

Logs from EB: It looks like it works, but it still doesn't execute my tasks.

-------------------------------------
/opt/python/log/supervisord.log
-------------------------------------
2016-08-02 10:45:27,713 CRIT Supervisor running as root (no user in config file)
2016-08-02 10:45:27,733 INFO RPC interface 'supervisor' initialized
2016-08-02 10:45:27,733 CRIT Server 'unix_http_server' running without any HTTP authentication checking
2016-08-02 10:45:27,733 INFO supervisord started with pid 2726
2016-08-02 10:45:28,735 INFO spawned: 'httpd' with pid 2812
2016-08-02 10:45:29,737 INFO success: httpd entered RUNNING state, process has stayed up for > than 1 seconds (startsecs)
2016-08-02 10:47:14,684 INFO stopped: httpd (exit status 0)
2016-08-02 10:47:15,689 INFO spawned: 'httpd' with pid 4092
2016-08-02 10:47:16,727 INFO success: httpd entered RUNNING state, process has stayed up for > than 1 seconds (startsecs)
2016-08-02 10:47:23,701 INFO spawned: 'celeryd' with pid 4208
2016-08-02 10:47:23,854 INFO stopped: celeryd (terminated by SIGTERM)
2016-08-02 10:47:24,858 INFO spawned: 'celeryd' with pid 4214
2016-08-02 10:47:35,067 INFO success: celeryd entered RUNNING state, process has stayed up for > than 10 seconds (startsecs)
2016-08-02 10:52:36,240 INFO stopped: httpd (exit status 0)
2016-08-02 10:52:37,245 INFO spawned: 'httpd' with pid 4460
2016-08-02 10:52:38,278 INFO success: httpd entered RUNNING state, process has stayed up for > than 1 seconds (startsecs)
2016-08-02 10:52:45,677 INFO stopped: celeryd (exit status 0)
2016-08-02 10:52:46,682 INFO spawned: 'celeryd' with pid 4514
2016-08-02 10:52:46,860 INFO stopped: celeryd (terminated by SIGTERM)
2016-08-02 10:52:47,865 INFO spawned: 'celeryd' with pid 4521
2016-08-02 10:52:58,054 INFO success: celeryd entered RUNNING state, process has stayed up for > than 10 seconds (startsecs)
2016-08-02 10:55:03,135 INFO stopped: httpd (exit status 0)
2016-08-02 10:55:04,139 INFO spawned: 'httpd' with pid 4745
2016-08-02 10:55:05,173 INFO success: httpd entered RUNNING state, process has stayed up for > than 1 seconds (startsecs)
2016-08-02 10:55:13,143 INFO stopped: celeryd (exit status 0)
2016-08-02 10:55:14,147 INFO spawned: 'celeryd' with pid 4857
2016-08-02 10:55:14,316 INFO stopped: celeryd (terminated by SIGTERM)
2016-08-02 10:55:15,321 INFO spawned: 'celeryd' with pid 4863
2016-08-02 10:55:25,518 INFO success: celeryd entered RUNNING state, process has stayed up for > than 10 seconds (startsecs)

1 Answer

  • I forgot to add an answer after solving this. Here is how I fixed it: I created a new file, "99-celery.config", in my .ebextensions folder and added the code below, and it works perfectly. (Don't forget to change the project name on line 16; mine is molocate_eb.)

    files:
      "/opt/elasticbeanstalk/hooks/appdeploy/post/run_supervised_celeryd.sh":
        mode: "000755"
        owner: root
        group: root
        content: |
          #!/usr/bin/env bash
    
          # Get django environment variables
          celeryenv=`cat /opt/python/current/env | tr '\n' ',' | sed 's/export //g' | sed 's/$PATH/%(ENV_PATH)s/g' | sed 's/$PYTHONPATH//g' | sed 's/$LD_LIBRARY_PATH//g'`
          celeryenv=${celeryenv%?}
    
          # Create celery configuration script
          celeryconf="[program:celeryd]
          ; Set full path to celery program if using virtualenv
          command=/opt/python/current/app/molocate_eb/manage.py celery worker --loglevel=INFO
    
          directory=/opt/python/current/app
          user=nobody
          numprocs=1
          stdout_logfile=/var/log/celery-worker.log
          stderr_logfile=/var/log/celery-worker.log
          autostart=true
          autorestart=true
          startsecs=10
    
          ; Need to wait for currently executing tasks to finish at shutdown.
          ; Increase this if you have very long running tasks.
          stopwaitsecs = 600
    
          ; When resorting to send SIGKILL to the program to terminate it
          ; send SIGKILL to its whole process group instead,
          ; taking care of its children as well.
          killasgroup=true
    
          ; if rabbitmq is supervised, set its priority higher
          ; so it starts first
          priority=998
    
          environment=$celeryenv"
    
          # Create the celery supervisord conf script
          echo "$celeryconf" | tee /opt/python/etc/celery.conf
    
          # Add configuration script to supervisord conf (if not there already)
          if ! grep -Fxq "[include]" /opt/python/etc/supervisord.conf
              then
              echo "[include]" | tee -a /opt/python/etc/supervisord.conf
              echo "files: celery.conf" | tee -a /opt/python/etc/supervisord.conf
          fi
    
          # Reread the supervisord config
          supervisorctl -c /opt/python/etc/supervisord.conf reread
    
          # Update supervisord in cache without restarting all services
          supervisorctl -c /opt/python/etc/supervisord.conf update
    
          # Start/Restart celeryd through supervisord
          supervisorctl -c /opt/python/etc/supervisord.conf restart celeryd
    

    Edit: In case of a supervisor error on AWS, make sure that:

    • You're using Python 2, not Python 3, since supervisor doesn't run on Python 3.
    • You've added supervisor to your requirements.txt.
    • If it still fails (it happened to me once), just 'Rebuild Environment' and it will probably work.
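
    For the second point, a requirements.txt matching the versions above might look like this (the supervisor and boto pins are illustrative, not from the original post; boto is what kombu's SQS transport uses under celery 3.1):

    ```text
    Django==1.9.8
    celery==3.1.23
    django-celery==3.1.17
    supervisor==3.3.0
    boto==2.40.0
    ```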
