* Use lmove() when working on a single queue
* Skip reliable queue tests if Redis server doesn't support LMOVE
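For context on the two items above: `LMOVE` (Redis >= 6.2) atomically pops from one list and pushes onto another, which is what makes the single-queue intermediate pattern reliable. A minimal redis-py sketch with illustrative key names:

```python
import redis

r = redis.Redis()

# Atomically move one job id from the queue onto its intermediate queue.
# LMOVE requires Redis >= 6.2; the key names here are illustrative only.
job_id = r.lmove('queue:default', 'queue:default:intermediate',
                 src='LEFT', dest='RIGHT')
if job_id is None:
    print('queue was empty')
```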
* Better test coverage
* job.origin should be string
* Added test for job that gets orphaned if worker.execute_job() fails
* Fix job tests
* worker.run_maintenance_tasks() now cleans intermediate queues
* Fixed import ordering
* No need to run slow tests and flake8 on SSL tests
* Minor typing fixes
* Fixed linting
* Update linting configuration
This removes flake8 in favor of ruff, which also provides isort support, and
updates all files to be black, isort, and ruff compliant. This also adds black
and ruff checks to the tox and Github linting workflow.
* Tweak the code coverage config and calls
* First stab at implementing worker pool
* Use process.is_alive() to check whether a process is still live
* Handle shutdown signal
* Check worker loop done
* First working version of `WorkerPool`.
* Added test for check_workers()
* Added test for pool.start()
* Better shutdown process
* Comment out test_start() to see if it fixes CI
* Make tests pass
* Make CI pass
* Comment out some tests
* Comment out more tests
* Re-enable a test
* Re-enable another test
* Uncomment check_workers test
* Added run_worker test
* Minor modification to dead worker detection
* More test cases
* Better process name for workers
* Added back pool.stop_workers() when signal is received
* Cleaned up cli.py
* WIP on worker-pool command
* Fix test
* Test that worker pool ignores consecutive shutdown signals
* Added test for worker-pool CLI command.
* Added timeout to CI jobs
* Fix worker pool test
* Comment out test_scheduler.py
* Fixed worker-pool in burst mode
* Increase test coverage
* Exclude tests directory from coverage.py
* Improve test coverage
* Renamed `Pool(num_workers=2)` to `Pool(size=2)`
* Revert "Renamed `Pool(num_workers=2)` to `Pool(size=2)`"
This reverts commit a1306f89ad0d8686c6bde447bff75e2f71f0733b.
* Renamed Pool to WorkerPool
* Added a new TestCase that doesn't use LocalStack
* Added job_class, worker_class and serializer arguments to WorkerPool
* Use parse_connection() in WorkerPool.__init__
* Added CLI arguments for worker-pool
* Minor WorkerPool and test fixes
* Fixed failing CLI test
* Document WorkerPool
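The shape of the resulting API, as a sketch based on the documented `WorkerPool` (check the docs for the exact import path and arguments):

```python
from redis import Redis
from rq import Queue, WorkerPool

connection = Redis()
queues = [Queue('default', connection=connection)]

# Two worker processes sharing the listed queues; burst=True makes the
# pool exit once the queues are drained.
pool = WorkerPool(queues, connection=connection, num_workers=2)
pool.start(burst=True)
```

The matching CLI entry point is `rq worker-pool`, with `-n`/`--num-workers` controlling the pool size.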
* Move common flake8 options into config file
Currently --max-line-length is specified in two places. Just use the existing value in the config file as the source of truth.
Move --count and --statistics to the config file as well.
* Fix some lints
* Make `enqueue_*` and `Queue.parse_args` accept `pipeline` arg
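A sketch of what the `pipeline` argument enables, assuming `enqueue` forwards it as the commit describes (the task name is hypothetical): batching several enqueues into one Redis round trip.

```python
from redis import Redis
from rq import Queue

def send_email(address):
    """Hypothetical task used for illustration."""

redis = Redis()
q = Queue(connection=redis)

# The caller owns the pipeline: enqueue only buffers commands on it,
# and nothing hits Redis until execute() runs.
with redis.pipeline() as pipe:
    for address in ['a@example.com', 'b@example.com']:
        q.enqueue(send_email, address, pipeline=pipe)
    pipe.execute()
```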
* undo bad docs
* Fix lints in new code
* Implement enqueue_many, refactor dependency setup out to method
* Make name consistent
* Refactor enqueue_many, update tests, add docs
* Refactor out enqueueing from dependency setup
* Move new docs to queue page
* Fix section header
* Fix section header again
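The `enqueue_many` API from this work ended up roughly as follows in the docs; `Queue.prepare_data` and the task name are shown as in current RQ, which may differ from the original PR:

```python
from redis import Redis
from rq import Queue

def count_words(url):
    """Hypothetical task used for illustration."""

q = Queue(connection=Redis())

# prepare_data bundles per-job (func, args, options); enqueue_many then
# submits every job on a single pipeline.
jobs = q.enqueue_many([
    Queue.prepare_data(count_words, ('http://example.com',), job_id='job-1'),
    Queue.prepare_data(count_words, ('http://example.org',), job_id='job-2'),
])
```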
* Update rq version to 1.9.0
* Initial implementation of Retry class
* Fixes job.refresh() under Python 3.5
* Remove the use of text_type in job.py
* Retry can be scheduled
* monitor_work_horse() should call handle_job_failure() with queue argument.
* Flake8 fixes
* Added docs for job retries
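Per the job-retry docs, the `Retry` class is used like this: `max` caps the number of attempts and `interval` spaces them out (intervals require a scheduler-enabled worker):

```python
from redis import Redis
from rq import Queue, Retry

def flaky_task():
    """Hypothetical task used for illustration."""

q = Queue(connection=Redis())

# Retry up to 3 times, waiting 10s, 30s, then 60s between attempts.
# Intervals rely on the scheduler: run `rq worker --with-scheduler`.
q.enqueue(flaky_task, retry=Retry(max=3, interval=[10, 30, 60]))
```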
1) Check `created_at` when checking if dependencies are met.
If `created_at` is `None`, then the job has been deleted. This is sort of a hack: we just need one of the fields on the job's hash that is ALWAYS populated. You can persist a job to Redis without setting status...
2) Job#fetch_dependencies no longer raises NoSuchJob.
If one of a job's dependencies has been deleted from Redis, it is not returned from `fetch_dependencies` and no exception is raised.
Method Queue#enqueue_dependents checks the status of all dependencies of all dependents, and enqueues those dependents for which all dependencies are FINISHED.
The enqueue_dependents method WAS called from Worker#handle_job_success BEFORE the status of the successful job was set in Redis, so enqueue_dependents explicitly excluded the _successful_ job when interrogating dependency statuses, since its status would never be FINISHED in the existing code path; it was assumed that this would be its final status once the current pipeline executed.
This commit changes Worker#handle_job_success so that it persists the status of the successful job to Redis every time a job completes (not only if it has a ttl), and does so before enqueue_dependents is called. This allows enqueue_dependents to be less reliant on the out-of-band state of the current _successful job being handled_.
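For orientation, the user-facing behavior these internals serve is RQ's standard `depends_on` flow; a minimal sketch with hypothetical task names:

```python
from redis import Redis
from rq import Queue

def extract():
    """Hypothetical task used for illustration."""

def transform():
    """Hypothetical task used for illustration."""

q = Queue(connection=Redis())

first = q.enqueue(extract)
# `transform` stays deferred until `extract` finishes; per the change
# above, the parent's status is persisted before dependents are enqueued.
second = q.enqueue(transform, depends_on=first)
```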
* First RQScheduler prototype
* WIP job scheduling
* Fixed Python 2.7 tests
* Added ScheduledJobRegistry.get_scheduled_time(job)
* WIP on scheduler's threading mechanism
* Fixed test errors
* Changed scheduler.acquire_locks() to instance method
* Added scheduler.prepare_registries()
* Somewhat working implementation of RQ scheduler
* Only call stop_scheduler if there's a scheduler present
* Use OSError rather than ProcessLookupError for PyPy compatibility
* Added `auto_start` argument to scheduler.acquire_locks()
* Make RQScheduler play better with timezone
* Fixed test error
* Added --with-scheduler flag to rq worker CLI
* Fix tests on Python 2.x
* More Python 2 fixes
* Only call `scheduler.start` if worker is run in non-burst mode
* Fixed an issue where running worker with scheduler would fail sometimes
* Make `worker.stop_scheduler()` more resilient to errors
* worker.dequeue_job_and_maintain_ttl() should also periodically run maintenance tasks
* Scheduler can now work with worker in both burst and non-burst mode
* Fixed scheduler logging message
* Always log scheduler errors when running
* Improve scheduler error logging message
* Removed testing code
* Scheduler should periodically try to acquire locks for other queues it doesn't yet hold
* Added tests for scheduler.should_reacquire_locks
* Added queue.enqueue_in()
* Fixes queue.enqueue_in() in Python 2.7
* First stab at documenting job scheduling
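The scheduling API documented here, per the RQ docs; a worker must run with `rq worker --with-scheduler` for scheduled jobs to be moved onto the queue:

```python
from datetime import datetime, timedelta, timezone

from redis import Redis
from rq import Queue

def nightly_report():
    """Hypothetical task used for illustration."""

q = Queue(connection=Redis())

# Run at an absolute time...
q.enqueue_at(datetime(2030, 1, 1, 3, 0, tzinfo=timezone.utc), nightly_report)
# ...or relative to now.
q.enqueue_in(timedelta(minutes=10), nightly_report)
```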
* Remove unused methods
* Remove Python 2.6 logging compatibility code
* Remove more unused imports
* Added convenience methods to access job registries from queue
* Added test for worker.run_maintenance_tasks()
* Simplify worker.queue_names() and worker.queue_keys()
* Updated changelog to mention RQ's new job scheduling mechanism.
* Multi Dependency Support - Registration & Enqueue Call
Internal API changes to support multiple dependencies.
* Store all of a job's _dependencies_ in a Redis set. Delete that set when a job is deleted.
* Add Job#fetch_dependencies method, which returns all jobs a job is dependent upon and optionally _WATCHES_ all dependency ids.
* Use Job#fetch_dependencies in Queue#enqueue_call. `fetch_dependencies` now sets WATCH and raises InvalidJobDependency, rather than enqueue_call.
`Queue` and `Job` public APIs still expect single ids of jobs for `depends_on` but internally register them in a way that could support multiple jobs being passed as dependencies.
Next up: need to update Queue#enqueue_dependents
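A rough illustration of the registration layout described above, using redis-py directly; the key format and ids are hypothetical (note the later commit fixing a missing colon in the dependencies key):

```python
from redis import Redis

r = Redis()
job_id = 'a1b2c3'                         # hypothetical job id
dependency_ids = ['parent-1', 'parent-2']
# Illustrative key; see the missing-colon fix in a later commit.
dependencies_key = 'rq:job:{}:dependencies'.format(job_id)

with r.pipeline() as pipe:
    # WATCH first so a concurrent change aborts the transaction, then
    # register every dependency id in the job's set.
    pipe.watch(dependencies_key)
    pipe.multi()
    pipe.sadd(dependencies_key, *dependency_ids)
    pipe.execute()
```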
* Use existing fetch_many method to get dependencies.
Modify fetch_dependencies to use fetch_many.
* Remove default value for fetch_many's connection parameter
* PR review housekeeping
* Remove a duplicate test
* Oneline something
* Fix missing colon in dependencies key
* Delete job key, dependents and dependencies at once
* More Fixes From Code Review
Updates to Job, Queue and associated tests.
* When checking dependencies, avoid a trip to Redis
* When checking the status of a job, we have a 'clean' status of all dependencies (returned from Job#fetch_dependencies) and the job keys are WATCHed, so there's no reason to go back to Redis to get the status _again_.
* It looks as though the `_status` set in `Job#restore` was bytes, while it was converted to text (`as_text`) in `Job#get_status`; for consistency (and tests), convert to text in `restore` as well.
* In `Queue#enqueue_call`, moved WATCH of dependencies_key to before fetching dependencies. This doesn't really matter but seems more _correct_: one can imagine some rogue API adding a dependency after they've been fetched but before they've been WATCHed.
* Update Job#get_status to get _local_ status
* If refresh=False is passed, don't get status from Redis; return the value of _status. This is to avoid a trip to Redis if the caller can guarantee that the value of `_status` is _clean_.
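A sketch of the two call styles (the job id is hypothetical):

```python
from redis import Redis
from rq.job import Job

job = Job.fetch('a1b2c3', connection=Redis())  # hypothetical job id

# Authoritative: round trip to Redis.
status = job.get_status()

# Cached: return the local _status without touching Redis. Safe only
# when the caller knows the local copy is clean (e.g. keys are WATCHed).
status = job.get_status(refresh=False)
```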
* More Fixups
* Expire dependency keys in Job#cleanup
* Consistency in Job#fetch_dependencies
* Added FailedJobRegistry.
* Added job.failure_ttl.
* queue.enqueue() now supports failure_ttl
* Added registry.get_queue().
* FailedJobRegistry.add() now assigns DEFAULT_FAILURE_TTL.
* StartedJobRegistry.cleanup() now moves expired jobs to FailedJobRegistry.
* Failed jobs are now added to FailedJobRegistry.
* Added FailedJobRegistry.requeue()
* Document the new `FailedJobRegistry` and changes in custom exception handler behavior.
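A sketch of the resulting registry workflow, assuming the documented `FailedJobRegistry` API:

```python
from redis import Redis
from rq import Queue
from rq.registry import FailedJobRegistry

q = Queue(connection=Redis())
registry = FailedJobRegistry(queue=q)

# Failed jobs now land here (kept for failure_ttl seconds) instead of
# on the old FailedQueue, and can be requeued individually.
for job_id in registry.get_job_ids():
    registry.requeue(job_id)
```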
* Added worker.disable_default_exception_handler.
* Document --disable-default-exception-handler option.
* Deleted worker.failed_queue.
* Deleted "move_to_failed_queue" exception handler.
* StartedJobRegistry should no longer move jobs to FailedQueue.
* Deleted requeue_job
* Fixed test error.
* Make requeue cli command work with FailedJobRegistry
* Added .pytest_cache to gitignore.
* Custom exception handlers are no longer run in reverse
* Restored requeue_job function
* Removed get_failed_queue
* Deleted FailedQueue
* Updated changelog.
* Document `failure_ttl`
* Updated docs.
* Remove job.status
* Fixed typo in test_registry.py
* Replaced _pipeline() with pipeline()
* FailedJobRegistry no longer fails on redis-py>=3
* Fixes test_clean_registries
* Worker names are now randomized
* Added a note about random worker names in CHANGES.md
* Worker will now stop working when encountering an unhandled exception.
* Worker should reraise SystemExit on cold shutdowns
* Added anchor.js to docs
* Support for Sentry-SDK (#1045)
* Updated RQ to support sentry-sdk
* Document Sentry integration
* Install sentry-sdk before running tests
* Improved rq info CLI command to be more efficient when displaying large number of workers (#1046)
* Improved rq info CLI command to be more efficient when displaying large number of workers
* Fixed an rq info --by-queue bug
* Fixed worker.total_working_time bug (#1047)
* queue.enqueue() no longer accepts `timeout` argument (#1055)
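The replacement keyword in RQ 1.0 is `job_timeout`; a minimal sketch (the task name is hypothetical):

```python
from redis import Redis
from rq import Queue

def long_task():
    """Hypothetical task used for illustration."""

q = Queue(connection=Redis())

# q.enqueue(long_task, timeout=600)    # no longer accepted
q.enqueue(long_task, job_timeout=600)  # seconds before the job is killed
```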
* Clean worker registry (#1056)
* queue.enqueue() no longer accepts `timeout` argument
* Added clean_worker_registry()
* Show worker hostname and PID on cli (#1058)
* Show worker hostname and PID on cli
* Improve test coverage
* Remove Redis version check when SSL is used
* Bump version to 1.0
* Removed pytest_cache/README.md
* Changed worker logging to use exc_info=True
* Removed unused queue.dequeue()
* Fixed typo in CHANGES.md
* setup_loghandlers() should always call logger.setLevel() if specified
* Replaced async keyword with is_async in the Queue class to fix reserved keyword syntax errors in Python 3.7
* Updated tests to use is_async keyword when instantiating Queue objects
* Updated docs to reference is_async keyword for Queue objects
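A sketch of the rename's effect on calling code:

```python
from redis import Redis
from rq import Queue

def add(a, b):
    """Hypothetical task used for illustration."""
    return a + b

# `Queue(async=False)` is a SyntaxError on Python 3.7, where `async`
# became a reserved keyword; hence the rename to is_async.
q = Queue(is_async=False, connection=Redis())

# With is_async=False the job executes synchronously in-process,
# which is convenient for tests.
job = q.enqueue(add, 2, 2)
```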
* Updated tox.ini, setup.py and .travis.yml with references to Python 3.7
This includes:
- a partial refactor of the CLI to organize the shared options
- extends the tests in areas where passing custom backend classes makes sense
- allow setting the core CLI options as env vars
- minor cosmetic changes here and there
- run with an empty queue
- schedule one job (which uses get_current_connection and get_current_job) and
run `rqworker`
- schedule a job that itself schedules `access_self` and run `rqworker`
- Make sure the job didn't fail by ensuring the failed queue is still empty afterwards.
- Install this package locally when running in travis.
This actually unifies the behaviour of tox and travis as tox also builds the
package and then installs it into each test environment.
- fix flake8 (as run by tox)
syntax: assertEquals -> assertEqual, assertNotEquals -> assertNotEqual
usage: the status of worker and job now uses get/set methods instead of properties