279 Commits (83f81b351d5a62b016111c1742d0fa7dd2c772c0)

Author SHA1 Message Date
Vincent Driessen 5963ec6d77 Try to make the expiration code a bit more readable. 13 years ago
Vincent Driessen 4b2838943b Rename rv_ttl to default_result_ttl. 13 years ago
Vincent Driessen 78864e2581 PEP8ify. 13 years ago
Selwin Ong a5e6765990 Added "result_ttl" property on jobs that determines how long job results are persisted in Redis. 13 years ago
Vincent Driessen 5f4bb8dfc5 Fix word ordering. 13 years ago
Vincent Driessen 1177caf4bf Set state to busy as early as possible. 13 years ago
Vincent Driessen f2d5ebf2fe Merge branch 'master' into selwin-warm-shutdown-2
Conflicts:
	rq/worker.py
13 years ago
Vincent Driessen 1536546613 Worker/horse distinction in signal handler is obsolete. 13 years ago
Vincent Driessen a6bb526773 Fix so that Ctrl+C doesn't abort the currently running job. 13 years ago
Vincent Driessen 9bfd686be3 Merge hotfix/0.2.2 13 years ago
Vincent Driessen 4d9881eef2 Print version number when running the server. 13 years ago
Vincent Driessen 1bc0c3d223 Fix bug where pickling the return value caused an uncaught exception. 13 years ago
Selwin Ong a4f1de358f Raise a StopException when Control+C is pressed. 13 years ago
Vincent Driessen 2cb058e91a PEP8ify. 13 years ago
Vincent Driessen 84988bdf94 Fix typo.
This fixes #85.
13 years ago
Vincent Driessen 074d42fe54 Remove strict procname dependency.
This fixes #80.
13 years ago
Selwin Ong 50ba003cee Worker's "all" and "find_by_key" methods should accept "connection" as an argument. 13 years ago
mattdennewitz 9f2f9e367c Class methods now use the given "cls". 13 years ago
Vincent Driessen 2982486448 New connection management.
Connections can now be set explicitly on Queues, Workers, and Jobs.
Jobs that are implicitly created by Queue or Worker API calls now
inherit the connection of their creator.

For all RQ object instances created from now on, the "current"
connection is used if none is passed in explicitly.  The "current"
connection is thus held on to at creation time and won't be changed for
the lifetime of the object.

Effectively, this means that if, given a default Redis connection, you
create a queue Q1, then push another Redis connection onto the
connection stack and create Q2, then Q1 refers to a queue on the first
connection and Q2 to a queue on the second.

This is much clearer than it used to be.

Also, I've removed the `use_redis()` call, which had an ugly name.
Instead, some new alternatives for connection management now exist.

You can push/pop connections now:

    >>> my_conn = Redis()
    >>> push_connection(my_conn)
    >>> q = Queue()
    >>> q.connection == my_conn
    True
    >>> pop_connection() == my_conn
    True

Also, you can stack them syntactically:

    >>> conn1 = Redis()
    >>> conn2 = Redis('example.org', 1234)
    >>> with Connection(conn1):
    ...     q = Queue()
    ...     with Connection(conn2):
    ...         q2 = Queue()
    ...     q3 = Queue()
    >>> q.connection == conn1
    True
    >>> q2.connection == conn2
    True
    >>> q3.connection == conn1
    True

Or, if you only require a single connection to Redis (for most uses):

    >>> use_connection(Redis())
13 years ago
Vincent Driessen 15342f14d3 Store pickled function calls as strings.
This aids unpacking in the case of a function that isn't importable from
the worker's runtime. The unpickling will now (almost) always succeed,
and throw an ImportError later on, when the function is actually
accessed (thus imported implicitly).

The end result is a job on the failed queue, with exc_info describing
the import error, which is tremendously useful.
13 years ago
Vincent Driessen b8305a818f Safer, and shorter, version of the death penalty.
This change protects against JobTimeoutExceptions being raised immediately
after the job body has been (successfully) executed.  Still,
JobTimeoutExceptions pass through naturally, like any other exception,
to be handled by the default exception handler that writes failed jobs
to the failed queue.

Timeouts therefore are reported like any other exception.
13 years ago
Vincent Driessen 8a856e79ea Initial attempt at job timeouts. 13 years ago
Vincent Driessen 7ef3b5ade8 Cleanup job hashes for jobs without result, too. 13 years ago
Vincent Driessen 240d2d941d Extracted method.
This makes the act of moving failed jobs to the failed queue the
responsibility of the FailedQueue itself, not of the Worker.

This fixes #32.
13 years ago
Vincent Driessen d64ad225eb Make FailedQueue a full subclass of Queue.
We will add special methods to it in the future.

This fixes #33.
13 years ago
Vincent Driessen bd08f24f15 Cosmetic changes to the command line output. 13 years ago
Vincent Driessen 0a0d9d1ceb Flake8 style fixes. 13 years ago
Vincent Driessen 1a8b80604d Minor refactoring to make the to-failed queue code a bit more readable. 13 years ago
Vincent Driessen 11c7dbb376 Consistently renamed "failure" -> "failed" queue.
Fixes #28.
13 years ago
Vincent Driessen 9f5b1545b6 Fix: store the job result in the correct key.
And expire the job hash in Redis after 500 seconds (by default).

Fixes #27.
13 years ago
Vincent Driessen 8da204f74a Always use cPickle, never 'regular' pickle.
This fixes #18.
13 years ago
Vincent Driessen 9318825429 Abstract away from the concrete pickle implementation.
Choose cPickle, if available, for best performance.
13 years ago
Vincent Driessen 90a458ca8e Add more colorful terminal output.
For better visual parsability.
13 years ago
Vincent Driessen e05acfedce Fix putting jobs on the failure queue when they fail. 13 years ago
Vincent Driessen bffe6cbbde Encapsulate internal function call representation.
This means it's no longer allowed to directly set func, args, and
kwargs.  Instead, use the for_call() constructor.
13 years ago
Vincent Driessen 370399f8f7 CHECKPOINT: dequeue_any now returns the queue that was popped from. 13 years ago
Vincent Driessen f516f8df2e CHECKPOINT: Handle failing and unreadable jobs.
Failing (or unreadable) jobs are correctly put on the failure queue by
the worker now.
13 years ago
Vincent Driessen b1650cb9b9 CHECKPOINT: Second part of the big refactoring.
Jobs are now stored in separate keys, and only job IDs are put on Redis
queues.  Much of the code has been hit by this change, but it is for the
good.

No really.
13 years ago
Vincent Driessen fdce187c27 Putting failed jobs on the failure queue. 13 years ago
Vincent Driessen 7eb8d92605 Put unreadable tasks on the failure queue. 13 years ago
Vincent Driessen 5c6f002878 Silently pass when trying to kill a child that is already dead.
This fixes #16.
13 years ago
Vincent Driessen 039a132374 Add an ellipsis to indicate we're waiting here. 13 years ago
Vincent Driessen aecb0a1bf0 Simplify calling .work() or .work(burst=True). 13 years ago
Vincent Driessen 636b6690d6 Add the signal name to the debug message. 13 years ago
Vincent Driessen a154ef0bd9 Remove comment.
This ain't the right way to terminate when blocking on a pop.
13 years ago
Vincent Driessen 62949c9adb Extra debug output. 13 years ago
Vincent Driessen dde3ea8ef7 Take down horse process when the worker is terminated. 13 years ago
Vincent Driessen 4ac243b3e8 Print what signal was received in a debug statement. 13 years ago
Vincent Driessen 7769d9875f Perform a warm shutdown on SIGTERM, too.
Just like with Ctrl+C (SIGINT), shut down warmly at first when killed
(SIGTERM).
13 years ago
Vincent Driessen 88cbaa1df9 Slight code reshuffle + added some comments on the construction. 13 years ago
Vincent Driessen 7cba8449d9 Add comments. 13 years ago
Vincent Driessen 1cbf92c166 Workaround for os.waitpid() throwing an OSError on SIGINT.
When SIGINT (``Ctrl+C``) is received while inside a blocking
os.waitpid(), OSError is thrown, effectively cancelling the wait.

However, to facilitate a "warm shutdown", as we intend, Ctrl+C is
perfectly allowed and we want to keep waiting for the child.  Therefore,
we perform a trick here: catch the OSError, check whether its cause was
SIGINT (errno == EINTR), and only in that case loop back into
os.waitpid() again.
13 years ago
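The workaround described above can be sketched roughly as follows. This is an illustrative example only, assuming a hypothetical wait_for_horse() helper; it is not RQ's actual worker code:

    import errno
    import os

    def wait_for_horse(horse_pid):
        # Hypothetical sketch: keep waiting for the forked work horse,
        # even if a SIGINT interrupts the blocking waitpid() call.
        while True:
            try:
                os.waitpid(horse_pid, 0)
                break
            except OSError as e:
                # SIGINT interrupts waitpid() with errno == EINTR; in that
                # case, resume waiting so the child can finish gracefully.
                # Any other OSError is re-raised.
                if e.errno != errno.EINTR:
                    raise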
Vincent Driessen e278bd2967 Exit gracefully when the user hits Ctrl+C in a worker.
The currently running task will be waited for, so it can be finished
gracefully.  Further execution will then be stopped.

If, during this waiting phase, Ctrl+C is hit again, the worker and the
horse will be terminated forcefully (this means work could be lost or
partially finished).
13 years ago
Vincent Driessen ba965a1dd9 Minor text change. 13 years ago
Vincent Driessen 2d2b351f7c Change logging format. 13 years ago
Vincent Driessen 8678f26df0 Factor out call string. 13 years ago
Vincent Driessen 1358246238 Better logging output. 13 years ago
Vincent Driessen 55fd393626 Worker.find_by_key method now returns None for nonexistent workers. 13 years ago
Vincent Driessen 507558f6bc Avoid having the forked work horse register the death of its parent worker. 13 years ago
Vincent Driessen bd1778c610 Fix comment typos. 13 years ago
Vincent Driessen c9ba66bd59 Register workers in a central set ("rq:workers"). 13 years ago
Vincent Driessen 1f12678468 Get Worker fetch methods. 13 years ago
Vincent Driessen 9e8a4d15be Document methods. 13 years ago
Vincent Driessen 6013227f4c Remove unused property. 13 years ago
Vincent Driessen a029e5437b Add beginnings of a rqworker script. 13 years ago
Vincent Driessen d780c929c0 Change semantics of work(). Add work_burst().
work() will now start the worker run loop, and work_burst() now leads to
the burst-then-quit behaviour.
13 years ago
Vincent Driessen 56c4445bb2 Shut up pyflakes. 13 years ago
Vincent Driessen a5ea45af57 Make the dequeue methods return values consistent.
I merely refactored the internal calls. No external API changes have been made in this commit. In order to make the dequeueing methods consistent, each dequeue method now returns a Job instance, which is just a nice lightweight wrapper around the job tuple.

The Job class makes it easier to pass the method call info around, along with some possible meta information, like the queue the job originated from.

This fixes #7.
13 years ago
Vincent Driessen f492a5ae2b Restructure some code.
No functional change, but leave the BLPOP'ing to the Queue, as the
queues know how to pop themselves.
13 years ago
Vincent Driessen 1a893e60cf Have work() return whether or not work has been done.
And promote Worker to the rq namespace, so you can
    from rq import Worker
13 years ago
Vincent Driessen 1c9fa66bc1 Greatly simplify the setup.
Jobs don't even need to be tagged.  Any function can be put on queues.
13 years ago
Vincent Driessen 8dfdd452ef Bugfix.
Yeah, it's getting late.

It's my own fault.

I know.
13 years ago
Vincent Driessen 04c88577ed Bugfix: LPOP does not support multiple queue arguments.
Redis' BLPOP command takes multiple queue arguments, but LPOP can only
take a single queue.  Therefore, we need to loop over all queues
manually, in order, and raise an exception if no more work is available.
13 years ago
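That manual loop can be sketched with redis-py roughly like this. This is an illustrative example with a made-up NoMoreWorkError and example queue keys, not RQ's actual dequeue code:

    from redis import Redis

    class NoMoreWorkError(Exception):
        pass

    def lpop_any(conn, queue_keys):
        # LPOP accepts only a single key, so try each queue in order.
        for key in queue_keys:
            blob = conn.lpop(key)
            if blob is not None:
                return key, blob
        raise NoMoreWorkError('All queues are empty.')

    # Example usage (queue key names are illustrative):
    # lpop_any(Redis(), ['rq:queue:high', 'rq:queue:default'])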
Vincent Driessen a77c3d9104 Support quitting when all work is done (i.e. queue is empty). 13 years ago
Vincent Driessen 98ffcd8e05 Create soft dependency on logbook. 13 years ago
Vincent Driessen 227e107a82 Oops, fix some old references to current_connection. 13 years ago
Vincent Driessen 518db8c24b Add better connection management.
To start using RQ, push a Redis connection onto its connection stack, like so:

    from rq import push_connection
    push_connection(Redis())
13 years ago
Vincent Driessen d8d388c841 Log the results of jobs. 13 years ago
Vincent Driessen f21b2af2b6 Make it an actual PyPI-manageable Python package. 13 years ago