RQ (Redis Queue) is a simple Python library for queueing jobs and processing them in the background with workers. It is backed by Redis and designed to have a low barrier to entry, so it can be integrated into your web stack easily.
Getting started
First, run a Redis server, of course:
$ redis-server
To put jobs on queues, you don't have to do anything special; just define your typically lengthy or blocking function:
import requests
def count_words_at_url(url):
"""Just an example function that's called async."""
resp = requests.get(url)
return len(resp.text.split())
You do use the excellent requests package, don't you?
Then, create an RQ queue:
from rq import Queue, use_connection
use_connection()
q = Queue()
And enqueue the function call:
from my_module import count_words_at_url
result = q.enqueue(count_words_at_url, 'http://nvie.com')
For a more complete example, refer to the docs. But this is the essence.
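Conceptually, enqueueing boils down to serializing the function call as data and pushing it onto a queue that a worker later pops from. Here is a minimal sketch of that idea in plain Python, using a list and a name registry as stand-ins for Redis and the import system; this illustrates the pattern only, not RQ's actual wire format or API:

```python
import pickle

# A plain list stands in for the Redis list that jobs are pushed onto.
fake_queue = []
functions = {}  # name -> callable; a stand-in for Python's import machinery


def register(func):
    """Make a function findable by name, the way imports do for a real worker."""
    functions[func.__name__] = func
    return func


def enqueue(func, *args, **kwargs):
    """Serialize the call as data and push it onto the queue."""
    fake_queue.append(pickle.dumps((func.__name__, args, kwargs)))


def dequeue_and_run():
    """Pop the oldest job, deserialize it, look up the function, execute it."""
    name, args, kwargs = pickle.loads(fake_queue.pop(0))
    return functions[name](*args, **kwargs)


@register
def add(a, b):
    return a + b


enqueue(add, 2, 3)
print(dequeue_and_run())  # 5
```

The essential design point is that the job travels as data, so producer and worker only need to agree on how to find the function by name.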
The worker
To start executing enqueued function calls in the background, start a worker from your project's directory:
$ rqworker
*** Listening for work on default
Got count_words_at_url('http://nvie.com') from default
Job result = 818
*** Listening for work on default
That's about it.
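A worker typically executes each job in a forked child process, so a misbehaving job cannot bring the main loop down. The following is a minimal sketch of that fork-per-job pattern on Unix (a simplified illustration, not RQ's actual implementation):

```python
import os


def run_in_child(func, *args):
    """Run func in a forked child and return its exit status, so an
    exception in the job cannot crash the parent loop."""
    pid = os.fork()
    if pid == 0:
        # Child process: run the job, exit 0 on success, 1 on failure.
        try:
            func(*args)
            os._exit(0)
        except Exception:
            os._exit(1)
    # Parent process: wait for the child and collect its status.
    _, status = os.waitpid(pid, 0)
    return os.WEXITSTATUS(status)


def ok():
    pass


def boom():
    raise RuntimeError("job failed")


print(run_in_child(ok))    # 0
print(run_in_child(boom))  # 1
```

Either way, the parent survives and can go back to listening for the next job.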
Installation
Simply use the following command to install the latest released version:
pip install rq
If you want the cutting-edge version (which may well be broken), use this:
pip install -e git+git@github.com:nvie/rq.git@master#egg=rq
Project history
This project has been inspired by the good parts of Celery, Resque and this snippet, and has been created as a lightweight alternative to the heaviness of Celery or other AMQP-based queueing implementations.