From 4eb8425acc01dd6784648e3461c757d52aeb098e Mon Sep 17 00:00:00 2001
From: Vincent Driessen
Date: Sat, 26 Nov 2011 09:31:59 +0100
Subject: [PATCH] Copy-paste README doc from gh-pages.

---
 README.md | 119 ++++++++++++++++++++++++------------------------------
 1 file changed, 53 insertions(+), 66 deletions(-)

diff --git a/README.md b/README.md
index fabe6b8..39623ac 100644
--- a/README.md
+++ b/README.md
@@ -1,80 +1,76 @@
 # WARNING: DON'T USE THIS IN PRODUCTION (yet)
-# RQ: Simple job queues for Python
+RQ (_Redis Queue_) is a lightweight* Python library for queueing
+jobs and processing them in the background with workers. It is backed by Redis
+and it is extremely simple to use.
-**RQ** is a lightweight Python library for queueing work and processing them in
-workers. It is backed by Redis.
+

+* It is under 20 kB in size and under 500 lines of code.

-# Putting jobs on queues
-To put jobs on queues, first declare a Python function to be called on
-a background process:
+## Getting started
-    def slow_fib(n):
-        if n <= 1:
-            return 1
-        else:
-            return slow_fib(n-1) + slow_fib(n-2)
+First, run a Redis server, of course:
-Notice anything? There's nothing special about a job! Any Python function can
-be put on an RQ queue, as long as the function is in a module that is
-importable from the worker process.
+{% highlight console %}
+$ redis-server
+{% endhighlight %}
-To calculate the 36th Fibonacci number in the background, simply do this:
+To put jobs on queues, you don't have to do anything special, just define
+your typically lengthy or blocking function:
-    from rq import Queue
-    from fib import slow_fib
-
-    # Calculate the 36th Fibonacci number in the background
-    q = Queue()
-    q.enqueue(slow_fib, 36)
+{% highlight python %}
+import urllib2
-If you want to put the work on a specific queue, simply specify its name:
+def count_words_at_url(url):
+    f = urllib2.urlopen(url)
+    count = 0
+    while True:
+        line = f.readline()
+        if not line:
+            break
+        count += len(line.split())
+    return count
+{% endhighlight %}
-    q = Queue('math')
-    q.enqueue(slow_fib, 36)
+Then, create an RQ queue:
-You can use any queue name, so you can quite flexibly distribute work to your
-own desire. Common patterns are to name your queues after priorities (e.g.
-`high`, `medium`, `low`).
+{% highlight python %}
+from rq import *
+use_redis()
+q = Queue()
+{% endhighlight %}
+And enqueue the function call:
-# The worker
+{% highlight python %}
+from my_module import count_words_at_url
+result = q.enqueue(
+    count_words_at_url, 'http://nvie.com')
+{% endhighlight %}
-**NOTE: You currently need to create the worker yourself, which is extremely
-easy, but RQ will include a custom script soon that can be used to start
-arbitrary workers without writing any code.**
+For a more complete example, refer to the [docs][d]. But this is the essence.
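The `count_words_at_url` job above is Python 2 (`urllib2`). Its counting loop can be exercised on its own, without the network, by passing it any file-like object. A Python 3 sketch; the `count_words` helper is illustrative, not part of RQ:

```python
import io

def count_words(f):
    # Same readline loop as count_words_at_url, minus the HTTP fetch.
    count = 0
    while True:
        line = f.readline()
        if not line:
            break
        count += len(line.split())
    return count

page = io.BytesIO(b"hello world\nthis is a test\n")
print(count_words(page))  # 6
```

Any plain function like this qualifies as a job, as long as the worker process can import it.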
-Creating a worker daemon is also extremely easy. Create a file `worker.py`
-with the following content:
+[d]: {{site.baseurl}}docs/
-    from rq import Queue, Worker
-    q = Queue()
-    Worker(q).work()
+### The worker
-After that, start a worker instance:
+To start executing enqueued function calls in the background, start a worker
+from your project's directory:
-    python worker.py
+{% highlight console %}
+$ rqworker
+*** Listening for work on default
+Got count_words_at_url('http://nvie.com') from default
+Job result = 818
+*** Listening for work on default
+{% endhighlight %}
-This will wait for work on the default queue and start processing it as soon as
-messages arrive.
+That's about it.
-You can even watch several queues at the same time and start processing from
-them:
-    from rq import Queue, Worker
-
-    queues = map(Queue, ['high', 'normal', 'low'])
-    Worker(queues).work_burst()
-
-Which will keep popping jobs from the given queues, giving precedence to the
-`high` queue, then `normal`, etc. It will return when there are no more jobs
-left (contrast this to the previous example using `Worker.work()`, which will
-never return since it keeps waiting for new work to arrive).
-
-
-# Installation
+## Installation
 Simply use the following command to install the latest released version:
@@ -85,23 +81,14 @@
 If you want the cutting edge version (that may well be broken), use this:
     pip install -e git+git@github.com:nvie/rq.git@master#egg=rq
-# Project History
+## Project history
 This project has been inspired by the good parts of [Celery][1], [Resque][2]
 and [this snippet][3], and has been created as a lightweight alternative to the
 heaviness of Celery or other AMQP-based queueing implementations.
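The removed `work_burst` passage states the priority rule precisely: keep popping, always preferring the first non-empty queue in the list, and return when everything is drained. A minimal sketch of that burst loop in plain Python, with deques standing in for queues and `(func, args)` tuples for jobs — not RQ's actual implementation:

```python
from collections import deque

def work_burst(queues):
    """Pop jobs from the given queues, always draining the first
    (highest-priority) non-empty queue, until all queues are empty."""
    results = []
    while True:
        for q in queues:
            if q:
                func, args = q.popleft()
                results.append(func(*args))
                break  # restart the scan so the 'high' queue always wins
        else:
            return results  # every queue was empty: burst mode is done

high, low = deque(), deque()
low.append((len, ("abc",)))
high.append((max, ((4, 9, 2),)))
print(work_burst([high, low]))  # [9, 3]
```

A real worker's `work()` differs only in the else-branch: instead of returning, it blocks waiting for new jobs to arrive.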
+[m]: http://pypi.python.org/pypi/mailer
+[p]: http://docs.python.org/library/pickle.html
 [1]: http://www.celeryproject.org/
 [2]: https://github.com/defunkt/resque
 [3]: http://flask.pocoo.org/snippets/73/
-
-Project values:
-
-* Simplicity over completeness
-* Fail-safety over performance
-* Runtime insight over static configuration upfront
-
-This means that, to use RQ, you don't have to set up any queues up front, and
-you don't have to specify any channels, exchanges, or whatnot. You can put
-jobs onto any queue you want, at runtime. As soon as you enqueue a job, it is
-created on the fly.
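The `[p]` pickle link added above hints at the mechanism behind "queues are created on the fly": enqueueing is just serializing a function call and pushing it onto a Redis list. A hedged sketch of that essence, with a plain deque standing in for the Redis list — the names are illustrative, not RQ's API:

```python
import pickle
from collections import deque

# Stand-in for the Redis list that backs a queue; real RQ would
# LPUSH/BRPOP against a running Redis server instead.
fake_redis_list = deque()

def enqueue(func, *args):
    """Serialize the function call and push it onto the list."""
    fake_redis_list.append(pickle.dumps((func, args)))

def work_one():
    """What a worker does per job: pop, deserialize, call."""
    func, args = pickle.loads(fake_redis_list.popleft())
    return func(*args)

enqueue(len, 'http://nvie.com')
print(work_one())  # 15
```

Pickling stores the function by its importable name, which is why a job function must live in a module the worker process can import.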