* Added FailedJobRegistry.
* Added job.failure_ttl.
* queue.enqueue() now supports failure_ttl
* Added registry.get_queue().
* FailedJobRegistry.add() now assigns DEFAULT_FAILURE_TTL.
* StartedJobRegistry.cleanup() now moves expired jobs to FailedJobRegistry.
* Failed jobs are now added to FailedJobRegistry.
* Added FailedJobRegistry.requeue()
* Document the new `FailedJobRegistry` and changes in custom exception handler behavior.
* Added worker.disable_default_exception_handler.
* Document --disable-default-exception-handler option.
* Deleted worker.failed_queue.
* Deleted "move_to_failed_queue" exception handler.
* StartedJobRegistry should no longer move jobs to FailedQueue.
* Deleted requeue_job
* Fixed test error.
* Make requeue cli command work with FailedJobRegistry
* Added .pytest_cache to gitignore.
* Custom exception handlers are no longer run in reverse
* Restored requeue_job function
* Removed get_failed_queue
* Deleted FailedQueue
* Updated changelog.
* Document `failure_ttl`
* Updated docs.
* Remove job.status
* Fixed typo in test_registry.py
* Replaced _pipeline() with pipeline()
* FailedJobRegistry no longer fails on redis-py>=3
* Fixes test_clean_registries
* Worker names are now randomized
* Added a note about random worker names in CHANGES.md
* Worker will now stop working when encountering an unhandled exception.
* Worker should reraise SystemExit on cold shutdowns
* Added anchor.js to docs
* Support for Sentry-SDK (#1045)
* Updated RQ to support sentry-sdk
* Document Sentry integration
* Install sentry-sdk before running tests
* Improved rq info CLI command to be more efficient when displaying large number of workers (#1046)
* Improved rq info CLI command to be more efficient when displaying large number of workers
* Fixed an rq info --by-queue bug
* Fixed worker.total_working_time bug (#1047)
* queue.enqueue() no longer accepts `timeout` argument (#1055)
* Clean worker registry (#1056)
* queue.enqueue() no longer accepts `timeout` argument
* Added clean_worker_registry()
* Show worker hostname and PID on cli (#1058)
* Show worker hostname and PID on cli
* Improve test coverage
* Remove Redis version check when SSL is used
* Bump version to 1.0
* Removed pytest_cache/README.md
* Changed worker logging to use exc_info=True
* Removed unused queue.dequeue()
* Fixed typo in CHANGES.md
* setup_loghandlers() should always call logger.setLevel() if specified
- `job.status` has been removed. Use `job.get_status()` and `job.set_status()` instead. Thanks @selwin!
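  A minimal sketch of the accessor-based replacement, assuming a local Redis; the dotted-path job is purely illustrative:

  ```python
  from redis import Redis
  from rq import Queue

  queue = Queue(connection=Redis())
  job = queue.enqueue('time.sleep', 1)  # illustrative job, enqueued by dotted path

  print(job.get_status())    # e.g. 'queued' -- replaces reading job.status
  job.set_status('started')  # replaces assigning to job.status
  ```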
- `FailedQueue` has been replaced with `FailedJobRegistry` (see the sketch at the end of this item):
* `get_failed_queue()` function has been removed. Please use `FailedJobRegistry(queue=queue)` instead.
* `move_to_failed_queue()` has been removed.
* RQ now provides a mechanism to automatically cleanup failed jobs. By default, failed jobs are kept for 1 year.
* Thanks @selwin!
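  A minimal sketch of the replacement API, assuming a local Redis and a queue that already holds failed jobs:

  ```python
  from redis import Redis
  from rq import Queue
  from rq.registry import FailedJobRegistry

  queue = Queue(connection=Redis())
  registry = FailedJobRegistry(queue=queue)  # replaces get_failed_queue()

  for job_id in registry.get_job_ids():      # inspect jobs that have failed
      registry.requeue(job_id)               # send each one back to its original queue
  ```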
- RQ's custom job exception handling mechanism has also changed slightly (sketched below):
* RQ's default exception handling mechanism (moving jobs to `FailedJobRegistry`) can be disabled by doing `Worker(disable_default_exception_handler=True)`.
* Custom exception handlers are no longer executed in reverse order.
* Thanks @selwin!
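  A hedged sketch of the new knobs, assuming the usual `(job, exc_type, exc_value, traceback)` handler signature and the `exception_handlers` keyword; the handler body is illustrative only:

  ```python
  from redis import Redis
  from rq import Queue, Worker

  def report_failure(job, exc_type, exc_value, traceback):
      # Illustrative handler: log the failure; a truthy return lets later handlers run too.
      print('Job %s failed with %s' % (job.id, exc_type.__name__))
      return True

  redis_conn = Redis()
  queue = Queue(connection=redis_conn)
  worker = Worker(
      [queue],
      connection=redis_conn,
      exception_handlers=[report_failure],
      disable_default_exception_handler=True,  # don't move failed jobs to FailedJobRegistry
  )
  # worker.work()  # handlers now run in the order they were registered, not reversed
  ```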
- `Worker` names are now randomized. Thanks @selwin!
- `timeout` argument on `queue.enqueue()` has been deprecated in favor of `job_timeout`. Thanks @selwin!
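  A one-line sketch of the rename (the dotted-path job is purely illustrative):

  ```python
  from redis import Redis
  from rq import Queue

  queue = Queue(connection=Redis())
  job = queue.enqueue('time.sleep', 1, job_timeout=30)  # formerly timeout=30
  ```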
- Sentry integration has been reworked (a setup sketch follows):
* RQ now uses the new [sentry-sdk](https://pypi.org/project/sentry-sdk/) in place of the deprecated [Raven](https://pypi.org/project/raven/) library
* RQ will look for the more explicit `RQ_SENTRY_DSN` environment variable instead of `SENTRY_DSN` before instantiating Sentry integration
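  One possible programmatic setup using sentry-sdk's RQ integration; reading `RQ_SENTRY_DSN` here is only to mirror the variable mentioned above (the `rq worker` CLI picks it up by itself):

  ```python
  import os

  import sentry_sdk
  from sentry_sdk.integrations.rq import RqIntegration

  # Initialize sentry-sdk with its RQ integration before starting a worker process.
  sentry_sdk.init(dsn=os.environ['RQ_SENTRY_DSN'], integrations=[RqIntegration()])
  ```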
````diff
@@ -61,7 +61,7 @@ In addition, you can add a few options to modify the behaviour of the queued
 job. By default, these are popped out of the kwargs that will be passed to the
 job function.
 
-* `timeout` specifies the maximum runtime of the job before it's interrupted
+* `job_timeout` specifies the maximum runtime of the job before it's interrupted
   and marked as `failed`. Its default unit is second and it can be an integer or a string representing an integer(e.g. `2`, `'2'`). Furthermore, it can be a string with specify unit including hour, minute, second(e.g. `'1h'`, `'3m'`, `'5s'`).
 * `result_ttl` specifies the expiry time of the key where the job result will
   be stored
@@ -72,8 +72,9 @@ job function.
 * `job_id` allows you to manually specify this job's `job_id`
 * `at_front` will place the job at the *front* of the queue, instead of the
   back
+* `description` to add additional description to enqueued jobs.
 * `kwargs` and `args` lets you bypass the auto-pop of these arguments, ie:
-  specify a `timeout` argument for the underlying job function.
+  specify a `description` argument for the underlying job function.
 
 In the last case, it may be advantageous to instead use the explicit version of
 `.enqueue()`, `.enqueue_call()`:
@@ -82,7 +83,7 @@ In the last case, it may be advantageous to instead use the explicit version of
 q = Queue('low', connection=redis_conn)
 q.enqueue_call(func=count_words_at_url,
                args=('http://nvie.com',),
-               timeout=30)
+               job_timeout=30)
 ```
 
 For cases where the web process doesn't have access to the source code running
````
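For orientation (not part of the diff above), a hedged sketch of those keyword options on plain `.enqueue()`; `my_module` and its `count_words_at_url` import are assumptions standing in for your own task module:

```python
from redis import Redis
from rq import Queue
from my_module import count_words_at_url  # hypothetical module providing the example task

q = Queue('low', connection=Redis())
job = q.enqueue(
    count_words_at_url,
    'http://nvie.com',
    job_timeout=30,                         # formerly `timeout`
    result_ttl=600,                         # keep the result for 10 minutes
    description='count words on nvie.com',  # human-readable description stored with the job
    at_front=False,                         # True would place the job at the front of the queue
)
```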