RQ (Redis Queue) is a simple Python library for queueing jobs and processing them in the background with workers. It is backed by Redis and designed to have a low barrier to entry, so it can be integrated into your web stack easily.
Getting started
First, run a Redis server, of course:
$ redis-server
To put jobs on queues, you don't have to do anything special; just define your typically lengthy or blocking function:
import requests
def count_words_at_url(url):
"""Just an example function that's called async."""
resp = requests.get(url)
return len(resp.text.split())
You do use the excellent requests package, don't you?
Then, create an RQ queue:
from rq import Queue, use_connection

# Sets up a default Redis connection (localhost:6379) for RQ to use
use_connection()
q = Queue()
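If your Redis server does not run on localhost with the default port, you can pass an explicit connection instead. A minimal sketch (the host and port below are placeholders):

from redis import Redis
from rq import Queue, use_connection

use_connection(Redis('example.com', 6379))  # placeholder host and port
q = Queue()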
And enqueue the function call:
from my_module import count_words_at_url
result = q.enqueue(count_words_at_url, 'http://nvie.com')
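Note that enqueue() returns a job handle rather than the function's return value; the return value only becomes available once a worker (see below) has processed the job. A minimal sketch of retrieving it, assuming a worker is running:

import time
from rq import Queue, use_connection
from my_module import count_words_at_url

use_connection()
q = Queue()

job = q.enqueue(count_words_at_url, 'http://nvie.com')
print(job.result)  # None, the job has not been processed yet

time.sleep(2)      # give a running worker a moment to finish the job
print(job.result)  # the word count, e.g. 818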
For a more complete example, refer to the docs. But this is the essence.
The worker
To start executing enqueued function calls in the background, start a worker from your project's directory:
$ rqworker
*** Listening for work on default
Got count_words_at_url('http://nvie.com') from default
Job result = 818
*** Listening for work on default
That's about it.
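If you prefer, a worker can also be started from Python instead of the command line. A minimal sketch using RQ's Worker and Connection helpers, assuming a local Redis server with default settings:

from rq import Connection, Queue, Worker

with Connection():  # pushes a default local Redis connection
    worker = Worker([Queue('default')])
    worker.work()   # blocks and processes jobs until interrupted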
Installation
Simply use the following command to install the latest released version:
pip install rq
If you want the cutting-edge version (which may well be broken), use this:
pip install -e git+git@github.com:nvie/rq.git@master#egg=rq
Project history
This project was inspired by the good parts of Celery, Resque, and this snippet, and was created as a lightweight alternative to the heaviness of Celery or other AMQP-based queueing implementations.