

RQ (Redis Queue) is a simple Python library for queueing jobs and processing them in the background with workers. It is backed by Redis and designed to have a low barrier to entry. It can be integrated into your web stack easily.

RQ requires Redis >= 2.7.0.


Full documentation can be found here.

Getting started

First, run a Redis server, of course:

```console
$ redis-server
```

To put jobs on queues, you don't have to do anything special; just define your typically lengthy or blocking function:

```python
import requests

def count_words_at_url(url):
    """Just an example function that's called async."""
    resp = requests.get(url)
    return len(resp.text.split())
```

You do use the excellent requests package, don't you?
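Nothing about the function is RQ-specific, so its core logic is easy to sanity-check locally before it ever touches a queue. For instance, the counting step is just `len(text.split())`, exercised here on a plain string standing in for a fetched page body:

```python
# The same counting step count_words_at_url applies to a response body,
# run on a local string instead of a fetched page.
text = "RQ is a simple Python library for queueing jobs"
print(len(text.split()))  # -> 9
```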

Then, create an RQ queue:

```python
from redis import Redis
from rq import Queue

q = Queue(connection=Redis())
```

And enqueue the function call:

```python
from my_module import count_words_at_url
result = q.enqueue(count_words_at_url, 'http://nvie.com')
```

For a more complete example, refer to the docs. But this is the essence.
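Note that `q.enqueue()` returns immediately with a job handle rather than the function's return value; the result only becomes available once a worker has processed the job. A minimal sketch of checking on it (assumes a Redis server and an RQ worker are running, and that `count_words_at_url` is importable from a hypothetical `my_module`):

```python
import time

from redis import Redis
from rq import Queue

from my_module import count_words_at_url

q = Queue(connection=Redis())
job = q.enqueue(count_words_at_url, 'http://nvie.com')

print(job.result)  # None: the job has not been processed yet

# Poll until a worker has stored the return value in Redis.
while job.result is None:
    time.sleep(0.2)
print(job.result)  # the word count returned by the function
```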

The worker

To start executing enqueued function calls in the background, start a worker from your project's directory:

```console
$ rq worker
*** Listening for work on default
Got count_words_at_url('http://nvie.com') from default
Job result = 818
*** Listening for work on default
```

That's about it.
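The `rq worker` script is a thin wrapper; a worker can also be started from Python, which is handy for embedding it in your own tooling. A minimal sketch (assumes a Redis server is running on localhost):

```python
from redis import Redis
from rq import Queue, Worker

redis_conn = Redis()

# work() blocks and processes jobs as they arrive;
# work(burst=True) would drain the queue once and exit.
worker = Worker([Queue(connection=redis_conn)], connection=redis_conn)
worker.work()
```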

Installation

Simply use the following command to install the latest released version:

```console
pip install rq
```

If you want the cutting edge version (that may well be broken), use this:

```console
pip install -e git+https://github.com/nvie/rq.git@master#egg=rq
```

Project history

This project was inspired by the good parts of Celery, Resque and this snippet, and was created as a lightweight alternative to the heaviness of Celery and other AMQP-based queueing implementations.