130 Commits (master)

Author SHA1 Message Date
Selwin Ong 37ddcb51cd
Reliable queue (#1911)
* Use lmove() when working on a single queue

* Skip reliable queue tests if Redis server doesn't support LMOVE

* Better test coverage

* job.origin should be string

* Added test for job that gets orphaned if worker.execute_job() fails

* Fix job tests

* worker.run_maintenance_tasks() now cleans intermediate queues

* Fixed import ordering

* No need to run slow tests and flake8 on SSL tests

* Minor typing fixes

* Fixed linting
2 years ago
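
A minimal sketch of the LMOVE-based reliable dequeue pattern this PR describes, using redis-py directly; the key names and helper are illustrative, not RQ's actual internals, and LMOVE requires Redis 6.2+.

```python
from redis import Redis

redis = Redis()

def reliable_dequeue(queue_key: str, intermediate_key: str):
    # Atomically pop a job id from the queue list and stash it in an
    # intermediate list, so the id is not lost if the worker dies before
    # the job is finished and acknowledged.
    return redis.lmove(queue_key, intermediate_key, src="LEFT", dest="RIGHT")

# A maintenance task (per this PR, worker.run_maintenance_tasks()) can later
# clean up ids left behind in the intermediate list by crashed workers.
```
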
Rob Hudson ea063edf0a
Update linting configuration (#1915)
* Update linting configuration

This removes flake8 in favor of ruff, which also provides isort support, and
updates all files to be black, isort, and ruff compliant. This also adds black
and ruff checks to the tox and GitHub linting workflow.

* Tweak the code coverage config and calls
2 years ago
Selwin Ong 64cb1a27b9
Worker pool (#1874)
* First stab at implementing worker pool

* Use process.is_alive() to check whether a process is still alive

* Handle shutdown signal

* Check worker loop done

* First working version of `WorkerPool`.

* Added test for check_workers()

* Added test for pool.start()

* Better shutdown process

* Comment out test_start() to see if it fixes CI

* Make tests pass

* Make CI pass

* Comment out some tests

* Comment out more tests

* Re-enable a test

* Re-enable another test

* Uncomment check_workers test

* Added run_worker test

* Minor modification to dead worker detection

* More test cases

* Better process name for workers

* Added back pool.stop_workers() when signal is received

* Cleaned up cli.py

* WIP on worker-pool command

* Fix test

* Test that worker pool ignores consecutive shutdown signals

* Added test for worker-pool CLI command.

* Added timeout to CI jobs

* Fix worker pool test

* Comment out test_scheduler.py

* Fixed worker-pool in burst mode

* Increase test coverage

* Exclude tests directory from coverage.py

* Improve test coverage

* Renamed `Pool(num_workers=2)` to `Pool(size=2)`

* Revert "Renamed `Pool(num_workers=2) to `Pool(size=2)`"

This reverts commit a1306f89ad0d8686c6bde447bff75e2f71f0733b.

* Renamed Pool to WorkerPool

* Added a new TestCase that doesn't use LocalStack

* Added job_class, worker_class and serializer arguments to WorkerPool

* Use parse_connection() in WorkerPool.__init__

* Added CLI arguments for worker-pool

* Minor WorkerPool and test fixes

* Fixed failing CLI test

* Document WorkerPool
2 years ago
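
A minimal `WorkerPool` sketch, assuming RQ >= 1.14 where the pool shipped; the `num_workers` argument and `rq worker-pool` command are as documented for recent releases.

```python
from redis import Redis
from rq import Queue
from rq.worker_pool import WorkerPool

redis = Redis()
queue = Queue('default', connection=redis)

# Spawn two worker processes, drain the queue, then exit (burst mode).
pool = WorkerPool([queue], connection=redis, num_workers=2)
pool.start(burst=True)
```

The equivalent CLI invocation is roughly `rq worker-pool default -n 2`.
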
Yang Yang 9db728921d
Improve the lint situation (#1688)
* Move common flake8 options into config file

Currently --max-line-length is specified in two places. Just use the
existing value in the config file as the source of truth.

Move --count and --statistics to config file as well.

* Fix some lints
2 years ago
Hugo d5175c38da
Drop python2-specific syntax (#1674)
* Drop syntax required only for Python 2

* Drop python2-style super() calls

Co-authored-by: Selwin Ong <selwin.ong@gmail.com>
2 years ago
Hugo 61a4a1720b
Use unittest.mock instead of mock (#1673)
This module has been included in Python itself since 3.3.

Fixes: https://github.com/rq/rq/issues/1646
2 years ago
Josh Cohen a3fba1ca1f
Add missing functionality for CanceledJobRegistry (#1560) 3 years ago
Josh Cohen b80045d615
Respect serializer (#1538)
* Add serializer where missing in code

* Fix cli

* Pass option to command

* Add tests for serializer option

* Merge branch 'master' into respect-serializer
- Update enqueue CLI to respect serializer

* Address @selwin's review
3 years ago
Bo Bayles adc03e8119
Use result_ttl for synchronous queues (#1510) 3 years ago
Josh Cohen 591f11bcc3
Ensure pipeline in multi mode after dep setup (#1498) 4 years ago
Josh Cohen 456743b225
Make `Queue.enqueue`, `Queue.enqueue_call`, `Queue.enqueue_at`, `Queue.parse_args` accept `pipeline` arg, add `Queue.enqueue_many` method (#1485)
* Make `enqueue_*` and `Queue.parse_args` accept `pipeline` arg

* undo bad docs

* Fix lints in new code

* Implement enqueue_many, refactor dependency setup out to method

* Make name consistent

* Refactor enqueue_many, update tests, add docs

* Refactor out enqueueing from dependency setup

* Move new docs to queue page

* Fix section header

* Fix section header again

* Update rq version to 1.9.0
4 years ago
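
A sketch of the `pipeline` argument and `Queue.enqueue_many` added here, assuming RQ >= 1.9; `count_words` is a made-up task.

```python
from redis import Redis
from rq import Queue

def count_words(text):  # illustrative task
    return len(text.split())

redis = Redis()
queue = Queue('default', connection=redis)

with redis.pipeline() as pipe:
    jobs = queue.enqueue_many(
        [
            Queue.prepare_data(count_words, ('hello world',)),
            Queue.prepare_data(count_words, ('foo bar baz',)),
        ],
        pipeline=pipe,
    )
    pipe.execute()  # both jobs hit Redis in a single round trip
```
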
Selwin Ong f3e924cdd1
Added job.worker_name (#1375)
* Added job.worker_name

* Fix compatibility with Redis server 3.x

* Document job.worker_name

* Removed some Python 2 compatibility stuff.

* Remove unused code
4 years ago
Selwin Ong 39fb709c10
get_redis_server_version() should handle 4 digit version numbers (#1322) 4 years ago
Selwin Ong 49b156ecc7
Job retry feature. Docs WIP (#1299)
* Initial implementation of Retry class

* Fixes job.refresh() under Python 3.5

* Remove the use of text_type in job.py

* Retry can be scheduled

* monitor_work_horse() should call handle_job_failure() with queue argument.

* Flake8 fixes

* Added docs for job retries
4 years ago
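
The retry API this PR introduces, as a small hedged example; note that retries with an interval rely on a worker running with the scheduler.

```python
from redis import Redis
from rq import Queue, Retry

def flaky_task():  # illustrative task
    ...

queue = Queue(connection=Redis())

# Retry up to 3 times, waiting 10s, 30s and 60s between attempts.
queue.enqueue(flaky_task, retry=Retry(max=3, interval=[10, 30, 60]))
```
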
Paul Spooren 73506b26fc
Add get_job_position and get_position feature (#1271)
Fix #1197

Signed-off-by: Paul Spooren <mail@aparcar.org>
5 years ago
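
A short example of the two accessors added in #1271; `print` stands in for a real task.

```python
from redis import Redis
from rq import Queue

queue = Queue(connection=Redis())
job = queue.enqueue(print, 'hello')

print(queue.get_job_position(job))  # position looked up via the queue
print(job.get_position())           # same information from the job side
```
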
thomas 01ebe25f56 Address Deleted Dependencies
1) Check whether `created_at` is set when checking if dependencies are met.

   If `created_at` is `None`, then the job has been deleted. This is sort of a hack - we just need one of the fields on the job's hash that is ALWAYS populated. You can persist a job to Redis without setting status...

2) Job#fetch_dependencies no longer raises NoSuchJob.

   If one of a job's dependencies has been deleted from Redis, it is not returned from `fetch_dependencies` and no exception is raised.
5 years ago
thomas 4440669f3c Fix patches for python2 5 years ago
thomas 7ea5a32a55 Always set status 'FINISHED' when job is successful
Method Queue#enqueue_dependents checks the status of all dependencies of all dependents, and enqueues those dependents for which all dependencies are FINISHED.

The enqueue_dependents method WAS called from Worker#handle_job_success BEFORE the status of the successful job was set in Redis, so enqueue_dependents explicitly excluded the _successful_ job from the interrogation of dependency statuses, since its status would never read FINISHED in the existing code path; it was assumed that this would be the final status once the current pipeline executed.

This commit changes Worker#handle_job_success so that it persists the status of the successful job to Redis every time a job completes (not only if it has a TTL), and does so before enqueue_dependents is called. This allows enqueue_dependents to be less reliant on the out-of-band state of the current _successful job being handled_.
5 years ago
thomas 4a64244f40 Only enqueue dependents for which all dependencies are FINISHED 5 years ago
Babatunde Olusola e1cbc3736c
Implement Customizable Serializer Support (#1219)
* Implement Customizable Serializer Support

* Refactor serializer instance methods

* Update tests with other serializers

* Edit function description

* Edit function description

* Raise appropriate exception

* Update tests for better code coverage

* Remove unused imports and unnecessary code

* Refactor resolve_serializer

* Remove unnecessary alias from imports

* Add documentation

* Refactor tests, improve documentation
5 years ago
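
A sketch of the customizable serializer support added here, using the bundled `JSONSerializer`; Queue, Worker and Job must all be given the same serializer.

```python
from redis import Redis
from rq import Queue, Worker
from rq.serializers import JSONSerializer

redis = Redis()
queue = Queue('default', connection=redis, serializer=JSONSerializer)
queue.enqueue(print, 'hello')

worker = Worker([queue], connection=redis, serializer=JSONSerializer)
worker.work(burst=True)  # picks the job up with the matching serializer
```
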
Selwin Ong d8bd455c12
enqueue_at should support explicit args and kwargs (#1211) 5 years ago
Selwin Ong baa0cc268a
Job scheduling (#1163)
* First RQScheduler prototype

* WIP job scheduling

* Fixed Python 2.7 tests

* Added ScheduledJobRegistry.get_scheduled_time(job)

* WIP on scheduler's threading mechanism

* Fixed test errors

* Changed scheduler.acquire_locks() to instance method

* Added scheduler.prepare_registries()

* Somewhat working implementation of RQ scheduler

* Only call stop_scheduler if there's a scheduler present

* Use OSError rather than ProcessLookupError for PyPy compatibility

* Added `auto_start` argument to scheduler.acquire_locks()

* Make RQScheduler play better with timezones

* Fixed test error

* Added --with-scheduler flag to rq worker CLI

* Fix tests on Python 2.x

* More Python 2 fixes

* Only call `scheduler.start` if worker is run in non-burst mode

* Fixed an issue where running worker with scheduler would fail sometimes

* Make `worker.stop_scheduler()` more resilient to errors

* worker.dequeue_job_and_maintain_ttl() should also periodically run maintenance tasks

* Scheduler can now work with worker in both burst and non-burst mode

* Fixed scheduler logging message

* Always log scheduler errors when running

* Improve scheduler error logging message

* Removed testing code

* Scheduler should periodically try to acquire locks for other queues it doesn't have

* Added tests for scheduler.should_reacquire_locks

* Added queue.enqueue_in()

* Fixes queue.enqueue_in() in Python 2.7

* First stab at documenting job scheduling

* Remove unused methods

* Remove Python 2.6 logging compatibility code

* Remove more unused imports

* Added convenience methods to access job registries from queue

* Added test for worker.run_maintenance_tasks()

* Simplify worker.queue_names() and worker.queue_keys()

* Updated changelog to mention RQ's new job scheduling mechanism.
5 years ago
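
A small example of the scheduling API introduced in #1163; the worker must be started with `--with-scheduler` for the jobs to be enqueued when due.

```python
from datetime import datetime, timedelta, timezone
from redis import Redis
from rq import Queue

queue = Queue(connection=Redis())

queue.enqueue_at(datetime(2030, 1, 1, tzinfo=timezone.utc), print, 'happy new year')
queue.enqueue_in(timedelta(minutes=10), print, 'ten minutes from now')

# Started with: rq worker --with-scheduler
```
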
Thomas Matecki 80c82f731f Multi Dependency Support - Registration & Enqueue Call (#1155)
* Multi Dependency Support - Registration & Enqueue Call

Internal API changes to support multiple dependencies.
* Store all of a job's _dependencies_ in a Redis set. Delete that set when a job is deleted.
* Add Job#fetch_dependencies method - which returns all jobs a job is dependent upon and optionally _WATCHES_ all dependency ids.
* Use Job#fetch_dependencies in Queue#enqueue_call. `fetch_dependencies` now sets WATCH and raises InvalidJobDependency, rather than enqueue_call.

`Queue` and `Job` public APIs still expect single ids of jobs for `depends_on` but internally register them in a way that could support multiple jobs being passed as dependencies.

Next up: need to update Queue#enqueue_dependents

* Use existing fetch_many method to get dependencies.

Modify fetch_dependencies to use fetch_many.

* Remove default value for fetch_many's connection parameter

* PR review housekeeping

* Remove a duplicate test
* Oneline something
* Fix missing colon in dependencies key
* Delete job key, dependents and dependencies at once

* More Fixes From Code Review

Updates to Job, Queue and associated tests.

* When checking dependencies, avoid a trip to Redis

* When checking the status of a job, we have a 'clean' status of all dependencies (returned from Job#fetch_dependencies) and the job keys are WATCHed, so there's no reason to go back to Redis to get the status _again_.
* It looks as though the `_status` set in `Job#restore` was bytes while it was converted to text (`as_text`) in `Job#get_status` - for consistency (and tests), convert to text in `restore` as well.
* In `Queue#enqueue_call`, moved WATCH of dependencies_key to before fetching dependencies. This doesn't really matter but seems more _correct_ - one can imagine some rogue API adding a dependency after they've been fetched but before they've been WATCHed.

* Update Job#get_status to get _local_ status

* If refresh=False is passed, don't get status from Redis; return the value of _status. This is to avoid a trip to Redis if the caller can guarantee that the value of `_status` is _clean_.

* More Fixups

* Expire dependency keys in Job#cleanup
* Consistency in Job#fetch_dependencies
5 years ago
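
This PR is the internal groundwork; on RQ releases that build on it, multiple dependencies end up usable roughly as sketched below (`fetch` and `merge` are made-up tasks).

```python
from redis import Redis
from rq import Queue

def fetch(url): ...   # illustrative tasks
def merge(): ...

queue = Queue(connection=Redis())
a = queue.enqueue(fetch, 'https://example.com/a')
b = queue.enqueue(fetch, 'https://example.com/b')

# `merge` only runs once both `a` and `b` have finished.
queue.enqueue(merge, depends_on=[a, b])
```
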
Selwin Ong f9d42e8a17
Added logging statements to handle_job_success and handle_job_failure (#1112) 6 years ago
Selwin Ong b14c4e288d
Added checks for 0 ttl (#1110) 6 years ago
Selwin Ong c4cbb3af2f
RQ v1.0! (#1059)
* Added FailedJobRegistry.

* Added job.failure_ttl.

* queue.enqueue() now supports failure_ttl

* Added registry.get_queue().

* FailedJobRegistry.add() now assigns DEFAULT_FAILURE_TTL.

* StartedJobRegistry.cleanup() now moves expired jobs to FailedJobRegistry.

* Failed jobs are now added to FailedJobRegistry.

* Added FailedJobRegistry.requeue()

* Document the new `FailedJobRegistry` and changes in custom exception handler behavior.

* Added worker.disable_default_exception_handler.

* Document --disable-default-exception-handler option.

* Deleted worker.failed_queue.

* Deleted "move_to_failed_queue" exception handler.

* StartedJobRegistry should no longer move jobs to FailedQueue.

* Deleted requeue_job

* Fixed test error.

* Make requeue cli command work with FailedJobRegistry

* Added .pytest_cache to gitignore.

* Custom exception handlers are no longer run in reverse

* Restored requeue_job function

* Removed get_failed_queue

* Deleted FailedQueue

* Updated changelog.

* Document `failure_ttl`

* Updated docs.

* Remove job.status

* Fixed typo in test_registry.py

* Replaced _pipeline() with pipeline()

* FailedJobRegistry no longer fails on redis-py>=3

* Fixes test_clean_registries

* Worker names are now randomized

* Added a note about random worker names in CHANGES.md

* Worker will now stop working when encountering an unhandled exception.

* Worker should reraise SystemExit on cold shutdowns

* Added anchor.js to docs

* Support for Sentry-SDK (#1045)

* Updated RQ to support sentry-sdk

* Document Sentry integration

* Install sentry-sdk before running tests

* Improved rq info CLI command to be more efficient when displaying lar… (#1046)

* Improved rq info CLI command to be more efficient when displaying a large number of workers

* Fixed an rq info --by-queue bug

* Fixed worker.total_working_time bug (#1047)

* queue.enqueue() no longer accepts `timeout` argument (#1055)

* Clean worker registry (#1056)

* queue.enqueue() no longer accepts `timeout` argument

* Added clean_worker_registry()

* Show worker hostname and PID on cli (#1058)

* Show worker hostname and PID on cli

* Improve test coverage

* Remove Redis version check when SSL is used

* Bump version to 1.0

* Removed pytest_cache/README.md

* Changed worker logging to use exc_info=True

* Removed unused queue.dequeue()

* Fixed typo in CHANGES.md

* setup_loghandlers() should always call logger.setLevel() if specified
6 years ago
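
A hedged sketch of the `FailedJobRegistry` workflow that replaced the old FailedQueue in 1.0: inspect failed jobs and requeue them.

```python
from redis import Redis
from rq import Queue
from rq.registry import FailedJobRegistry

queue = Queue(connection=Redis())
registry = FailedJobRegistry(queue=queue)

for job_id in registry.get_job_ids():
    registry.requeue(job_id)  # push the failed job back onto its origin queue
```
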
Wolfgang Langner abf6881114 Fix #1040 queue default timeout bug. (#1042)
Add test for queue default_timeout.
6 years ago
Selwin Ong 6559b0ffd7
Replace "timeout" argument in queue.enqueue() with "job_timeout" (#1010) 6 years ago
Paul Robertson e86fb57366 add is_async property to queue (#982) 6 years ago
chevell c2b939d2df Replace 'async' keyword with 'is_async' for Queue objects (#977)
* Replaced async keyword with is_async in the Queue class to fix reserved keyword syntax errors in Python 3.7

* Updated tests to use is_async keyword when instantiating Queue objects

* Updated docs to reference is_async keyword for Queue objects

* Updated tox.ini, setup.py and .travis.yml with references to Python 3.7
7 years ago
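
What the rename means in practice: `is_async=False` runs jobs synchronously in the calling process, which is handy in tests; a minimal example.

```python
from redis import Redis
from rq import Queue

queue = Queue(is_async=False, connection=Redis())
job = queue.enqueue(sum, [1, 2, 3])
print(job.result)  # 6 -- executed immediately, no worker needed
```
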
Selwin Ong 92c88d3f4d Merge pull request #878 from theodesp/Issue-731
Fixed #731 - Support for deleting Queues
7 years ago
Theo c095fe1825 Fixed #731 - Code review issues. Added delete_jobs parameter and pipelining. 7 years ago
Theo 160fe99323 Fixed #731 - Support for deleting Queues 7 years ago
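
The queue-deletion support from #731, in brief; `delete_jobs=True` also removes every job still sitting on the queue. The queue name here is just an example.

```python
from redis import Redis
from rq import Queue

queue = Queue('reports', connection=Redis())
queue.delete(delete_jobs=True)
```
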
Theo 096c5ad3c2 Fixed #866 - Flake8 errors 7 years ago
Alexey Katichev 09697e567f revert back job.cleanup changes 8 years ago
Alexey Katichev 3596449cc0 remove implicit cleanup call from job.save 8 years ago
Selwin Ong f760fcb20f job.delete() should clean itself from FailedQueue and various registries. 8 years ago
Jannis Leidel c019662430
Allow passing backend classes (job, queue, worker, connection) from CLI and other APIs
This includes:

- a partial refactor of the CLI to organize the shared options
- extends the tests in areas where passing custom backend classes makes sense
- allow setting the core CLI options as env vars
- minor cosmetic changes here and there
8 years ago
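
A sketch of the Python side of this change: custom job and queue classes handed straight to the worker. `MyJob` and `MyQueue` are hypothetical subclasses; the same classes can be named on the CLI via options such as `--job-class` and `--worker-class`.

```python
from redis import Redis
from rq import Queue, Worker
from rq.job import Job

class MyJob(Job):        # hypothetical custom job class
    pass

class MyQueue(Queue):    # hypothetical custom queue class
    job_class = MyJob

redis = Redis()
queue = MyQueue('default', connection=redis)
worker = Worker([queue], connection=redis, queue_class=MyQueue, job_class=MyJob)
worker.work(burst=True)
```
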
Julien Surloppe dc3bba9362 Another check on failed status and test 8 years ago
Samuel Colvin afc7469c27 fetch_job - check correct queue, fix #728 8 years ago
Stefan Hammer 301e5c927b Raise an exception if a given dependency does not exist
Adapted some tests to the change: the dependency has to be saved first.
8 years ago
Arnold Krille df22f127eb Test the worker in its own subprocess
- run with an empty queue
- schedule one job (which uses get_current_connection and get_current_job) and
run `rqworker`
- schedule a job that itself schedules `access_self` and run `rqworker`
- Make sure the job didn't fail by ensuring the failed queue is still empty
  afterwards.
- Install this package locally when running in travis.
  This actually unifies the behaviour of tox and travis as tox also builds the
  package and then installs it into each test environment.
- fix flake8 (as run by tox)
9 years ago
Arnold Krille 9df0a853d8 Fix indentation and newlines according to flake8 9 years ago
Antoine Leclair 2f36cedd50 Typo in test docstring 9 years ago
Selwin Ong 2485334100 Merge pull request #609 from tornstrom/master
Allow meta when enqueueing
9 years ago
Tornstrom 50a114a0a8 Allow meta when enqueueing 9 years ago
Selwin Ong 8bbd833855 Merge pull request #600 from glaslos/cancel_remove
Cancel and Delete differences
9 years ago
ahxxm b06f112cb0 fix tests
syntax: assertEquals -> assertEqual, assertNotEquals -> assertNotEqual
usage: status of worker and job will now use get/set methods instead of property methods
9 years ago
glaslos 0a6df13d9d delete dependents and delete in cleanup. Fixed tests. 9 years ago
Javier Lopez a2d0e4f933 Clarify test_enqueue_dependents_on_multiple_queues 9 years ago