RQ (Redis Queue) is a simple Python library for queueing jobs and processing them in the background with workers. It is backed by Redis and designed to have a low barrier to entry, so it can easily be integrated into your web stack.
Getting started
First, run a Redis server, of course:
$ redis-server
To put jobs on queues, you don't have to do anything special, just define your typically lengthy or blocking function:
import requests
def count_words_at_url(url):
    resp = requests.get(url)
    return len(resp.text.split())
Then, create an RQ queue:
from redis import Redis
from rq import *
use_connection(Redis())
q = Queue()
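If you need to talk to a specific Redis server (or to more than one), connections can also be managed explicitly. Below is a minimal sketch using the Connection context manager that rq exposes for this purpose (push_connection()/pop_connection() are the lower-level equivalents); treat the exact names as version-dependent:
from redis import Redis
from rq import Queue, Connection

# example.org:6379 is just a placeholder for a second Redis server
other_redis = Redis('example.org', 6379)
with Connection(other_redis):
    remote_q = Queue()   # bound to other_redis for its lifetime
Queues created outside such a block keep whatever connection was current at the moment they were created.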
And enqueue the function call:
from my_module import count_words_at_url
result = q.enqueue(count_words_at_url, 'http://nvie.com')
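The enqueue call returns right away; the actual work is done later by a worker. A rough sketch of how you might check on the outcome, assuming the returned handle exposes the job's return value (the attribute has gone by names such as return_value or result in different RQ versions):
import time

# 'result' is the handle returned by q.enqueue() above
print(result.return_value)   # None: no worker has picked the job up yet
time.sleep(2)                # give a running worker a moment
print(result.return_value)   # e.g. 818, once the job has finished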
For a more complete example, refer to the docs. But this is the essence.
The worker
To start executing enqueued function calls in the background, start a worker from your project's directory:
$ rqworker
*** Listening for work on default
Got count_words_at_url('http://nvie.com') from default
Job result = 818
*** Listening for work on default
That's about it.
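A worker can also be constructed and run from Python instead of via the rqworker script. The sketch below assumes the Worker class that rq exports and a single shared connection; details may vary slightly between versions:
from redis import Redis
from rq import Queue, Worker, use_connection

use_connection(Redis())

# Listen on two queues; queues earlier in the list are checked first
worker = Worker([Queue('high'), Queue('default')])
worker.work()   # blocks, popping and executing jobs until interrupted
The rqworker script takes queue names as arguments in much the same way, e.g. $ rqworker high default.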
Installation
Simply use the following command to install the latest released version:
pip install rq
If you want the cutting edge version (that may well be broken), use this:
pip install -e git+git@github.com:nvie/rq.git@master#egg=rq
Project history
This project has been inspired by the good parts of Celery, Resque and this snippet, and has been created as a lightweight alternative to the heaviness of Celery or other AMQP-based queueing implementations.