0.5.6
- Job results are now logged on `DEBUG` level. Thanks @tbaugis!
- Modified `patch_connection` so the Redis connection can be easily mocked
- Custom exception handlers are now called if the Redis connection is lost. Thanks @jlopex!
- Jobs can now depend on jobs in a different queue. Thanks @jlopex!
0.5.5
(August 25th, 2015)
- Add support for the `--exception-handler` command line flag
- Fix compatibility with click>=5.0
- Fix maximum recursion depth problem for very large queues that contain jobs that all fail
0.5.4
(July 8th, 2015)
- Fix compatibility with raven>=5.4.0
0.5.3
(June 3rd, 2015)
- Better API for instantiating Workers. Thanks @RyanMTB!
- Better support for unicode kwargs. Thanks @nealtodd and @brownstein!
- Workers now automatically clean up job registries every hour
- Jobs in `FailedQueue` now have their statuses set properly
- `enqueue_call()` no longer ignores `ttl`. Thanks @mbodock!
- Improved logging. Thanks @trevorprater!
0.5.2
(April 14th, 2015)
- Support SSL connection to Redis (requires redis-py>=2.10)
- Fix to prevent deep call stacks with large queues
0.5.1
(March 9th, 2015)
- Resolve performance issue when queues contain many jobs
- Restore the ability to specify connection params in config
- Record `birth_date` and `death_date` on Worker
- Add support for SSL URLs in Redis (and a `REDIS_SSL` config option)
- Fix encoding issues with non-ASCII characters in function arguments
- Fix Redis transaction management issue with job dependencies
0.5.0
(Jan 30th, 2015)
- RQ workers can now be paused and resumed using the `rq suspend` and `rq resume` commands. Thanks Jonathan Tushman!
- Jobs that are being performed are now stored in `StartedJobRegistry` for monitoring purposes. This also prevents currently active jobs from being orphaned/lost in the case of hard shutdowns.
- You can now monitor finished jobs by checking `FinishedJobRegistry`. Thanks Nic Cope for helping!
- Jobs with unmet dependencies are now created with `deferred` as their status. You can monitor deferred jobs by checking `DeferredJobRegistry`.
- It is now possible to enqueue a job at the beginning of a queue using `queue.enqueue(func, at_front=True)`. Thanks Travis Johnson!
- Command line scripts have all been refactored to use `click`. Thanks Lyon Zhang!
- Added a new `SimpleWorker` that does not fork when executing jobs. Useful for testing purposes. Thanks Cal Leeming!
- Added `--queue-class` and `--job-class` arguments to the `rqworker` script. Thanks David Bonner!
- Many other minor bug fixes and enhancements.
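The `at_front=True` behavior can be pictured with a plain standard-library deque standing in for the queue's job-ID list (a sketch of the semantics only, not RQ's Redis-backed implementation):

```python
from collections import deque

# A plain deque standing in for a queue's list of job IDs.
queue = deque(['job-1', 'job-2'])

queue.append('job-3')      # default enqueue: job goes to the back
queue.appendleft('job-0')  # at_front=True: job goes to the front

print(list(queue))  # ['job-0', 'job-1', 'job-2', 'job-3']
```

A worker popping from the front would therefore pick up the `at_front` job before any previously queued work.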
0.4.6
(May 21st, 2014)
- Raise a warning when RQ workers are used with Sentry DSNs using asynchronous transports. Thanks Wei, Selwin & Toms!
0.4.5
(May 8th, 2014)
- Fix a bug where `rqworker` broke on Python 2.6. Thanks, Marko!
0.4.4
(May 7th, 2014)
- Properly declare redis dependency.
- Fix a NameError regression that was introduced in 0.4.3.
0.4.3
(May 6th, 2014)
- Make job and queue classes overridable. Thanks, Marko!
- Don't require connection for @job decorator at definition time. Thanks, Sasha!
- Syntactic code cleanup.
0.4.2
(April 28th, 2014)
- Add missing depends_on kwarg to @job decorator. Thanks, Sasha!
0.4.1
(April 22nd, 2014)
- Fix bug where RQ 0.4 workers could not unpickle/process jobs from RQ < 0.4.
0.4.0
(April 22nd, 2014)
- Emptying the failed queue from the command line is now as simple as running `rqinfo -X` or `rqinfo --empty-failed-queue`.
- Job data is unpickled lazily. Thanks, Malthe!
- Removed dependency on the `times` library. Thanks, Malthe!
- Job dependencies! Thanks, Selwin.
- Custom worker classes, via the `--worker-class=path.to.MyClass` command line argument. Thanks, Selwin.
- `Queue.all()` and `rqinfo` now report empty queues, too. Thanks, Rob!
- Fixed a performance issue in `Queue.all()` when issued in large Redis DBs. Thanks, Rob!
- Birth and death dates are now stored as proper datetimes, not timestamps.
- Ability to provide a custom job description (instead of using the default function invocation hint). Thanks, İbrahim.
- Fix: temporary key for the compact queue is now randomly generated, which should avoid name clashes for concurrent compact actions.
- Fix: `Queue.empty()` now correctly deletes job hashes from Redis.
0.3.13
(December 17th, 2013)
- Bug fix where the worker crashed on jobs that had their timeout explicitly removed. Thanks for reporting, @algrs.
0.3.12
(December 16th, 2013)
- Bug fix where a worker could time out before the job was done, removing it from any monitor overviews (#288).
0.3.11
(August 23rd, 2013)
- Some more fixes in command line scripts for Python 3
0.3.10
(August 20th, 2013)
- Bug fix in setup.py
0.3.9
(August 20th, 2013)
- Python 3 compatibility (Thanks, Alex!)
- Minor bug fix where Sentry would break when `func` cannot be imported
0.3.8
(June 17th, 2013)
- `rqworker` and `rqinfo` have a `--url` argument to connect to a Redis URL.
- `rqworker` and `rqinfo` have a `--socket` option to connect to a Redis server through a Unix socket.
- `rqworker` reads `SENTRY_DSN` from the environment, unless specifically provided on the command line.
- `Queue` has a new API that supports paging: `get_jobs(3, 7)` will return at most 7 jobs, starting from the 3rd.
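The paging semantics can be illustrated with a stdlib-only sketch (the real `Queue.get_jobs()` reads job IDs from Redis; this hypothetical helper just shows the offset/length behavior):

```python
def get_jobs(job_ids, offset=0, length=-1):
    """Sketch of Queue.get_jobs(offset, length) paging semantics:
    return at most `length` jobs starting from index `offset`;
    a negative length means "through the end of the queue"."""
    if length >= 0:
        return job_ids[offset:offset + length]
    return job_ids[offset:]

queue = [f"job-{i}" for i in range(12)]
page = get_jobs(queue, 3, 7)  # at most 7 jobs, starting from the 3rd
print(page)  # ['job-3', 'job-4', ..., 'job-9']
```

If fewer jobs remain past the offset, the page is simply shorter, mirroring Python's forgiving slice behavior.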
0.3.7
(February 26th, 2013)
- Fixed bug where workers would not execute builtin functions properly.
0.3.6
(February 18th, 2013)
- Worker registrations now expire. This should prevent `rqinfo` from reporting about ghosted workers. (Thanks, @yaniv-aknin!)
- `rqworker` will automatically clean up ghosted worker registrations from pre-0.3.6 runs.
- `rqworker` grew a `-q` flag, to be more silent (only warnings/errors are shown)
0.3.5
(February 6th, 2013)
- `ended_at` is now recorded for normally finished jobs, too. (Previously only for failed jobs.)
- Adds support for both `Redis` and `StrictRedis` connection types
- Makes `StrictRedis` the default connection type if none is explicitly provided
0.3.4
(January 23rd, 2013)
- Restore compatibility with Python 2.6.
0.3.3
(January 18th, 2013)
- Fix bug where work was lost due to silently ignored unpickle errors.
- Jobs can now access the current `Job` instance from within. Relevant documentation here.
- Custom properties can be set by modifying the `job.meta` dict. Relevant documentation here.
- `rqworker` now has an optional `--password` flag.
- Remove `logbook` dependency (in favor of `logging`)
0.3.2
(September 3rd, 2012)
- Fixes broken `rqinfo` command.
- Improve compatibility with Python < 2.7.
0.3.1
(August 30th, 2012)
- `.enqueue()` now takes a `result_ttl` keyword argument that can be used to change the expiration time of results.
- Queue constructor now takes an optional `async=False` argument to bypass the worker (for testing purposes).
- Jobs now carry status information. To get job status information, like whether a job is queued, finished, or failed, use the property `status`, or one of the new boolean accessor properties `is_queued`, `is_finished` or `is_failed`.
- Jobs' return values are always stored explicitly, even if they have no explicit return value or return `None` (with given TTL of course). This makes it possible to distinguish between a job that explicitly returned `None` and a job that isn't finished yet (see the `status` property).
- Custom exception handlers can now be configured in addition to, or to fully replace, moving failed jobs to the failed queue. Relevant documentation here and here.
- `rqworker` now supports passing in configuration files instead of the many command line options: `rqworker -c settings` will source `settings.py`.
- `rqworker` now supports one-flag setup to enable Sentry as its exception handler: `rqworker --sentry-dsn="http://public:secret@example.com/1"`. Alternatively, you can use a settings file and configure `SENTRY_DSN = 'http://public:secret@example.com/1'` instead.
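The relation between the `status` property and the boolean accessors can be sketched with a minimal stand-in class (hypothetical; RQ's real `Job` fetches its status from Redis, and the status strings here are assumptions):

```python
class JobStatusSketch:
    """Minimal sketch of the boolean status accessors described above."""

    def __init__(self, status='queued'):
        self.status = status

    @property
    def is_queued(self):
        return self.status == 'queued'

    @property
    def is_finished(self):
        return self.status == 'finished'

    @property
    def is_failed(self):
        return self.status == 'failed'

job = JobStatusSketch('finished')
print(job.is_finished, job.is_queued)  # True False
```

Each accessor is just a convenience comparison against the single underlying `status` value, so exactly one of them is true at a time.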
0.3.0
(August 5th, 2012)
- Reliability improvements
  - Warm shutdown now exits immediately when Ctrl+C is pressed and the worker is idle
  - Worker does not leak worker registrations anymore when stopped gracefully
- `.enqueue()` does not consume the `timeout` kwarg anymore. Instead, to pass RQ a timeout value while enqueueing a function, use the explicit invocation:

  ```python
  q.enqueue(do_something, args=(1, 2), kwargs={'a': 1}, timeout=30)
  ```

- Add a `@job` decorator, which can be used to do Celery-style delayed invocations:

  ```python
  from redis import StrictRedis
  from rq.decorators import job

  # Connect to Redis
  redis = StrictRedis()

  @job('high', timeout=10, connection=redis)
  def some_work(x, y):
      return x + y
  ```

  Then, in another module, you can call `some_work`:

  ```python
  from foo.bar import some_work

  some_work.delay(2, 3)
  ```
0.2.2
(August 1st, 2012)
- Fix bug where return values that couldn't be pickled crashed the worker
0.2.1
(July 20th, 2012)
- Fix important bug where result data wasn't restored from Redis correctly (affected non-string results only).
0.2.0
(July 18th, 2012)
- `q.enqueue()` accepts instance methods now, too. Objects will be pickle'd along with the instance method, so beware.
- `q.enqueue()` accepts string specification of functions now, too. Example: `q.enqueue("my.math.lib.fibonacci", 5)`. Useful if the worker and the submitter of work don't share code bases.
- Job can be assigned custom attrs and they will be pickle'd along with the rest of the job's attrs. Can be used when writing RQ extensions.
- Workers can now accept explicit connections, like Queues.
- Various bug fixes.
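The string specification implies the worker resolves a dotted path to a callable at execution time. A minimal stdlib-only sketch of such resolution (hypothetical helper, not RQ's internal function):

```python
import importlib

def resolve_callable(path):
    # Split "my.math.lib.fibonacci" into module path and attribute name,
    # import the module, and return the named attribute.
    module_path, _, attr = path.rpartition('.')
    return getattr(importlib.import_module(module_path), attr)

func = resolve_callable('math.sqrt')  # resolves to math.sqrt
```

This is why the submitter and the worker don't need to share code: only the worker has to be able to import the module named in the string.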
0.1.2
(May 15, 2012)
- Fix broken PyPI deployment.
0.1.1
(May 14, 2012)
- Thread-safety by using context locals
- Register scripts as console_scripts, for better portability
- Various bugfixes.
0.1.0
(March 28, 2012)
- Initial release.