* Add support for a callback on stopped jobs
This callback runs when an active job is stopped using the
send_stop_job_command() function (see the sketch at the end of this list).
* Remove testing async job with stopped callback
* Remove stopped job test from the SimpleWorker case.
I can't stop the job from the test until the work() method returns, at which point the
job can no longer be stopped.
* Improve coverage
* Add test for stopped callback execution
* Move stopped callback check out of execution func
* Use SimpleWorker for stopped callback test
* Call stopped callback directly in main proc
* Remove unused imports
* Fix import order
* Fix import order
* Fix death penalty class arg
* Fix worker instance init
Sorry these commits are so lazy
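A minimal sketch of the stopped-job callback described at the top of this list, assuming an `on_stopped` enqueue argument and the usual callback signature (`long_task` and `report_stopped` are placeholder names):

```python
import time

from redis import Redis
from rq import Queue
from rq.command import send_stop_job_command


def long_task():
    time.sleep(60)


def report_stopped(job, connection):
    # Runs when the active job is stopped via send_stop_job_command().
    print(f"Job {job.id} was stopped")


redis = Redis()
queue = Queue(connection=redis)
job = queue.enqueue(long_task, on_stopped=report_stopped)

# From another process, stop the running job; this triggers the callback:
send_stop_job_command(redis, job.id)
```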
* Use lmove() when working on a single queue
* Skip reliable queue tests if Redis server doesn't support LMOVE
* Better test coverage
* job.origin should be a string
* Added test for job that gets orphaned if worker.execute_job() fails
* Fix job tests
* worker.run_maintenance_tasks() now cleans intermediate queues
* Fixed import ordering
* No need to run slow tests and flake8 on SSL tests
* Minor typing fixes
* Fixed linting
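A sketch of the Redis operation behind the reliable-queue change, with illustrative key names (`LMOVE` requires Redis >= 6.2, hence the test skips above):

```python
from redis import Redis

redis = Redis()
# Atomically pop a job id from the queue and park it in an intermediate
# queue, so a crash between dequeue and execution cannot lose the job;
# worker.run_maintenance_tasks() later cleans these intermediate queues.
job_id = redis.lmove('rq:queue:default', 'rq:intermediate:default', 'LEFT', 'RIGHT')
```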
* Update linting configuration
This removes flake8 in favor of ruff, which also provides isort support, and
updates all files to be black, isort, and ruff compliant. This also adds black
and ruff checks to the tox and GitHub linting workflow.
* Tweak the code coverage config and calls
* First stab at implementing worker pool
* Use process.is_alive() to check whether a process is still alive
* Handle shutdown signal
* Check worker loop done
* First working version of `WorkerPool`.
* Added test for check_workers()
* Added test for pool.start()
* Better shutdown process
* Comment out test_start() to see if it fixes CI
* Make tests pass
* Make CI pass
* Comment out some tests
* Comment out more tests
* Re-enable a test
* Re-enable another test
* Uncomment check_workers test
* Added run_worker test
* Minor modification to dead worker detection
* More test cases
* Better process name for workers
* Added back pool.stop_workers() when signal is received
* Cleaned up cli.py
* WIP on worker-pool command
* Fix test
* Test that worker pool ignores consecutive shutdown signals
* Added test for worker-pool CLI command.
* Added timeout to CI jobs
* Fix worker pool test
* Comment out test_scheduler.py
* Fixed worker-pool in burst mode
* Increase test coverage
* Exclude tests directory from coverage.py
* Improve test coverage
* Renamed `Pool(num_workers=2)` to `Pool(size=2)`
* Revert "Renamed `Pool(num_workers=2)` to `Pool(size=2)`"
This reverts commit a1306f89ad0d8686c6bde447bff75e2f71f0733b.
* Renamed Pool to WorkerPool
* Added a new TestCase that doesn't use LocalStack
* Added job_class, worker_class and serializer arguments to WorkerPool
* Use parse_connection() in WorkerPool.__init__
* Added CLI arguments for worker-pool
* Minor WorkerPool and test fixes
* Fixed failing CLI test
* Document WorkerPool
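A minimal sketch of the resulting `WorkerPool` API; `num_workers` is the final argument name, since the `size=` rename above was reverted:

```python
from redis import Redis
from rq import Queue
from rq.worker_pool import WorkerPool

redis = Redis()
queue = Queue(connection=redis)

# Spawn two worker processes; burst=True exits once the queues are drained.
pool = WorkerPool([queue], connection=redis, num_workers=2)
pool.start(burst=True)
```

The same thing is available from the command line via the new `rq worker-pool` command.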
* consolidate job failure callback execution
* fix
* success callback as well
* merge fix
* create Callback class and change how callbacks are serialized/deserialized (see the sketch at the end of this list)
* deprecation
* add tests
* string format
* pr fix
* revert serialization changes
* fix timeout typing
* add documentation
* add test
* fix a bug
* fix tests
* move job heartbeat call to worker and make sure there is always a callback timeout
* fix test
* fix test
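A minimal sketch of the new `Callback` class, which lets a callback carry its own timeout (`do_work` and `report_success` are placeholder functions):

```python
from redis import Redis
from rq import Callback, Queue


def do_work():
    return 42


def report_success(job, connection, result, *args, **kwargs):
    print(f"{job.id} returned {result}")


queue = Queue(connection=Redis())
# Callback bundles the function with its own timeout:
queue.enqueue(do_work, on_success=Callback(report_success, timeout=10))
```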
* Remove unused code from compat module
* Remove unused dictconfig
* Remove total_ordering compat layer
* Remove compatibility layer
This completely removes the compat module. It moves the utility
functions (`as_text` and `decode_redis_hash`) to the `utils`
module and eliminates the use of the proxies `text_type` and
`string_types`, using the `str` construct directly.
* Remove compat module
Finishes cleaning up the compatibility module.
The last function removed is `is_python_version`,
which was only used internally.
* Fix old import
* Fix Imports
* Remove Dummy (Force GH Actions)
* Fix Imports
* Organize Imports
* WIP job results
* Result can now be saved
* Successfully saved and restored result
* result.save() should accept pipeline
* Successful results are saved
* Failures are now saved properly too.
* Added test for Result.get_latest()
* Checkpoint
* Got Result.all() to work
* Added Result.count(), Result.delete()
* Backward compatibility for job.result and job.exc_info
* Added some typing
* More typing stuff
* Fixed typing in job.py
* More typing updates
* Only keep the last 10 results
* Documented job.results()
* Got results test to pass
* Don't run test_results.py on Redis server < 5.0
* Fixed mock import on some Python versions
* Remove Redis 3 from test matrix
* Jobs should never use the new Result implementation if server is < 5.0
* Results should only be created if Redis streams are supported.
* Added back Redis 3 to test matrix
* Fixed job.supports_redis_streams
* Fixed worker test
* Updated docs.
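A minimal sketch of the resulting results API (requires a Redis server with stream support, i.e. >= 5.0, per the guards above; `do_work` is a placeholder task):

```python
from redis import Redis
from rq import Queue

queue = Queue(connection=Redis())
job = queue.enqueue(do_work)

# ... after a worker has executed the job (possibly several times):
latest = job.latest_result()             # most recent Result object
print(latest.type, latest.return_value)  # e.g. Result.Type.SUCCESSFUL, 42
for result in job.results():             # only the last 10 are kept
    print(result.created_at, result.type)
```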
* Fix job.dependencies_are_met() if dependency is canceled
* Slightly better test coverage on dependencies_are_met()
* Fixed job.cancel(enqueue_dependent=True)
* added Dependency class with allow_failures
* Requested changes
* Check type before setting `job.dependency_allow_fail` within `Job.create`
* Set `job.dependency_allow_fail` within `Job.create`
* Added test to ensure persistence of `dependency_allow_fail`
* Removed typing and allow mixed list of ints and Job objects
* Convert dependency_allow_fail boolean to integer during serialization to avoid redis DataError
* Updated `test_multiple_dependencies_are_accepted_and_persisted` test to include `Dependency` cases
* Adding placeholder test to test actual behavior of new `Dependency` usage in `depends_on`
* Updated `test_job_dependency` to include cases using `Dependency`
* Added dependency_allow_fail logic to `Job.restore`
* Renamed `dependency_allow_fail` to a simpler `allow_failure`
* Update docs to add section about the new `Dependency` class and use-case
* Updated `Job.dependencies_are_met` logic to take `FAILED` and `STOPPED` jobs into account when `allow_failure=True`
* Updated `test_job_dependency` test. Still failing with `Dependency` case.
* Fix `allow_failure` type coercion in `Job.restore`
* Re-arrange tests, so default `Dependency.allow_failure` is before explicit `allow_failure=True`
* Fixed Dependency, so it works correctly when allow_failure=True
* Attempt to execute pipeline prior to queueing a failed job's dependents. test_create_and_cancel_job_enqueue_dependents_in_registry test now passes.
* Added `Dependency` test utilizing multiple dependencies
* Removed irrelevant on_success and on_failure keyword arguments in example
* Replaced use of long_running_job
* Add test to verify `Dependency.jobs` constraints
* Suppress connection error in handle_job_failure
* test_dependencies have passed
* All tests pass if enqueue_dependents called without pipeline.watch()
* All tests now pass
* Removed print statements
* Cleanup Dependency implementation
* Renamed job.allow_failure to job.allow_dependency_failures
Co-authored-by: mattchan <mattchan@tencent.com>
Co-authored-by: Mike Hill <mhilluniversal@gmail.com>
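A minimal sketch of the resulting `Dependency` usage (`may_fail` and `cleanup` are placeholder tasks):

```python
from redis import Redis
from rq import Queue
from rq.job import Dependency

queue = Queue(connection=Redis())
first = queue.enqueue(may_fail)

# cleanup runs even if `first` fails or is stopped, thanks to allow_failure;
# Dependency.jobs accepts a mixed list of Job objects and job ids.
dep = Dependency(jobs=[first], allow_failure=True)
second = queue.enqueue(cleanup, depends_on=dep)
```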
There are small typos in:
- docs/docs/exceptions.md
- docs/docs/jobs.md
- rq/queue.py
- tests/fixtures.py
- tests/test_job.py
Fixes:
- Should read `slightly` rather than `slighty`.
- Should read `requeuing` rather than `requeueing`.
- Should read `implementers` rather than `implementors`.
- Should read `definition` rather than `defition`.
- Should read `canceled` rather than `canceld`.
Signed-off-by: Tim Gates <tim.gates@iress.com>
This bug opened up a lot of possible race conditions, since Redis's
watch logic no longer failed when dependencies were
changed in parallel.
* Fix `job.cancel` to remove job from registries if not in queue
* Remove old queue remove call
* Block the ability to cancel jobs that are already canceled
* Fix py35 compat
* Rename helper method
* job: add get_meta() function
The newly introduced function returns the metadata stored for the job. This
is required since job.meta stays an empty dict until the job is
finished or has failed.
With the new function it's possible to store arbitrary states/stages of
the job and allow the user to track progress. A long-running job may
report custom stages like `downloading_data`, `unpacking_data`,
`processing_data`, etc.
This allows for better interfaces, since users can track progress.
Signed-off-by: Paul Spooren <mail@aparcar.org>
* docs: add missing `refresh` arg to get_status()
This was previously missing.
Signed-off-by: Paul Spooren <mail@aparcar.org>
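A minimal sketch of `get_meta()` in use; the stage name and job id are illustrative:

```python
from redis import Redis
from rq import get_current_job
from rq.job import Job


def process():
    job = get_current_job()
    job.meta['stage'] = 'downloading_data'
    job.save_meta()  # persist progress while the job is still running


# In a monitoring process:
job = Job.fetch('some-job-id', connection=Redis())
meta = job.get_meta(refresh=True)  # re-reads the stored meta from Redis
print(meta.get('stage'))
```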
* Support enqueueing with on_success_callback
* Got success callback to execute properly
* Added on_failure callback
* Document success and failure callbacks
* More Flake8 fixes
* Mention that options can also be passed in via environment variables
* Increase coverage on test_callbacks.py
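A minimal sketch of the documented callback signatures (`do_work`, `report_success` and `report_failure` are placeholders):

```python
from redis import Redis
from rq import Queue


def report_success(job, connection, result, *args, **kwargs):
    print(f"{job.id} succeeded with {result}")


def report_failure(job, connection, type, value, traceback):
    print(f"{job.id} failed: {value}")


queue = Queue(connection=Redis())
queue.enqueue(do_work, on_success=report_success, on_failure=report_failure)
```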
* adds unit test for a deserialization error
This tests that deserialization exceptions are properly logged, and fails in
the manner described in #1422.
* Catch deserializing errors in Worker.handle_exception()
This fixes #1422, and makes
tests/test_worker.py::TestWorker::test_deserializing_failure_is_handled
pass.
* made unit test less specific
This is required to get the test to pass under other serializers / other
python versions.
* Added generic DeserializationError
* switched ValueError to DeserializationError in a test
The changed test is creating an invalid job, which now raises
DeserializationError when data is accessed, as opposed to ValueError.
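A minimal sketch of the new exception in use:

```python
from redis import Redis
from rq.exceptions import DeserializationError
from rq.job import Job

job = Job.fetch('job-with-corrupt-payload', connection=Redis())
try:
    job.func_name  # accessing job data forces deserialization
except DeserializationError:
    # previously surfaced as a serializer-specific error such as ValueError
    ...
```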
* clean up jobs that are not really running due to zombie workers
* remove registry entries for zombie jobs
* return only the job ids on cleanup
* test zombie job cleanup
* format code
* rename variable to explain that second element in tuple is expiry, not score
* remove worker_key
* detect zombie jobs using old heartbeats
* reuse get_expired_job_ids
* set score using current_timestamp
* test idle jobs using stale heartbeats
* extract timeout into variable
* move heartbeats into StartedJobRegistry
* use registry.heartbeat in tests
* remove heartbeats when job removed from StartedJobRegistry
* remove idle and expired jobs from both wip and heartbeats set
* send heartbeat_ttl to registry.add
* typo
* revert everything 😶
* only keep job heartbeats as scores (and get rid of job timeouts as scores)
* calculate heartbeat_ttl in an overrideable function + override it in SimpleWorker + move storing StartedJobRegistry scores to job.heartbeat()
* set heartbeat to monitoring interval for infinite timeouts
* track elapsed_execution_time as part of worker
* reset current job working time when work on a job is done
* persisting the job working time as part of monitoring
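A sketch of the final design, with illustrative key names: the `StartedJobRegistry` score is now the heartbeat expiry, refreshed by `job.heartbeat()`, rather than a score derived from the job timeout:

```python
import time

from redis import Redis

redis = Redis()
heartbeat_ttl = 60  # illustrative; SimpleWorker overrides how this is derived

# job.heartbeat() effectively refreshes the job's score in the registry:
redis.zadd('rq:wip:default', {'some-job-id': time.time() + heartbeat_ttl})

# Maintenance can then treat any job whose score is in the past as abandoned:
stale_job_ids = redis.zrangebyscore('rq:wip:default', 0, time.time())
```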
* Also accept lists and tuples as the value of `depends_on`.
* The elements of the lists/tuples may be either Jobs or Job IDs.
* A single Job / Job ID is still accepted as well.
* Represent _all_ job dependencies in `Job.to_dict()`.
We now represent the entire list, instead of just the first element.
* Fix some doctext regarding plurality of dependencies.
* Add unit tests for job dependencies.
* One unit test establishes a pattern for checking execution order as affected by dependencies.
* Another unit test applies this pattern to the new capability to name multiple dependencies.
* Add unit test for new `depends_on` input formats.
Also test that these are properly persisted.
* Repair `Job.restore()`.
Need to convert bytes back to strings when reloading `dependency_ids`.
* Maintain backwards compat. in `Job.to_dict()`.
Keep the old `dependency_id` (singular) key.
* Provide coverage for new test fixture.
* Simplify some code.
Cut some superfluous `as_text()` calls left over from an earlier commit.
* Check for `dependency_id` in `Job.restore()` for backwd. compat.
Also eliminate use of `as_text()` here, in favor of `.decode()`.
* Switch to snake case instead of camel case.
* Eliminate some usages of `as_text()`.
Also cut some `print` statements.
* Cleanup.
* Accept arbitrary iterables for `Job`'s `depends_on` kwarg.
Instead of requiring a list or tuple, we now make use of `ensure_list()`.
* Add test fixtures.
* Provide a system to get two workers working simultaneously, using `multiprocessing`.
* Define a simple job that just says whether its dependencies are met.
* In `rpush`, make an option to record the name of the worker.
* Improve unit tests on execution order with dependencies.
These now actually have two workers going, which makes a more thorough test.
* Add unit test examining `Job.dependencies_are_met()` at execution time.
* Redesign dependency execution order unit tests.
* Simplify assertions.
* Improve doctext and formatting.
* Move fixture tests to new, dedicated module `test_fixtures.py`.
* Use `enqueue` instead of `enqueue_call` in new unit tests.
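A minimal sketch of the widened `depends_on` input formats (`step_a`, `step_b` and `step_c` are placeholder tasks):

```python
from redis import Redis
from rq import Queue

queue = Queue(connection=Redis())
a = queue.enqueue(step_a)
b = queue.enqueue(step_b)

queue.enqueue(step_c, depends_on=a)          # a single Job (unchanged)
queue.enqueue(step_c, depends_on=a.id)       # a single job id (unchanged)
queue.enqueue(step_c, depends_on=[a, b.id])  # any iterable, mixing Jobs and ids
```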
* feat: added job heartbeat to track whether job is actually executing
A heartbeat may be needed in cases where the worker was hard-killed or the whole VM/Docker container was forcibly rebooted.
* fixed tests
* fixed test coverage issue
* chore: renamed job.heartbeat stuff according to review feedback
* chore: pipelined worker heartbeat and job heartbeat
* docs: documented job.heartbeat property
* fixes after review
* docs: updated last_heartbeat description
* chore: review
Co-authored-by: Ruslan Mullakhmetov <ruslan@twentythree.net>
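A minimal sketch of the new tracking, assuming the `last_heartbeat` attribute this change exposes (`do_work` is a placeholder task):

```python
from redis import Redis
from rq import Queue

queue = Queue(connection=Redis())
job = queue.enqueue(do_work)

# While a worker executes the job, it refreshes the job heartbeat in the
# same pipeline as its own worker heartbeat:
job.refresh()
print(job.last_heartbeat)  # timestamp of the most recent heartbeat
```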
* Initial implementation of Retry class
* Fixes job.refresh() under Python 3.5
* Remove the use of text_type in job.py
* Retry can be scheduled
* monitor_work_horse() should call handle_job_failure() with queue argument.
* Flake8 fixes
* Added docs for job retries
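A minimal sketch of the resulting API (`flaky_task` and the intervals are illustrative):

```python
from redis import Redis
from rq import Queue, Retry

queue = Queue(connection=Redis())
# Retry up to 3 times, waiting 10s, 30s, then 60s between attempts;
# per the commits above, retries with an interval are scheduled rather
# than re-enqueued immediately.
queue.enqueue(flaky_task, retry=Retry(max=3, interval=[10, 30, 60]))
```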
1) Check `created_at` when checking if dependencies are met.
If `created_at` is `None` then the job has been deleted. This is sort of a hack - we just need one of the fields on the job's hash that is ALWAYS populated. You can persist a job to Redis without setting status...
2) Job#fetch_dependencies no longer raises NoSuchJob.
If one of a job's dependencies has been deleted from Redis, it is not returned from `fetch_dependencies` and no exception is raised.
* Add job status setting in enqueue_at (and in enqueue_in) methods
Update tests for this change
Closes: #1179
* Add status param to create_job func, rework enqueue_at status setting
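A minimal sketch of the behavior change (`do_work` is a placeholder task):

```python
from datetime import timedelta

from redis import Redis
from rq import Queue

queue = Queue(connection=Redis())
job = queue.enqueue_in(timedelta(minutes=5), do_work)
assert job.get_status() == 'scheduled'  # previously no status was set
```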
* Multi Dependency Support - Registration & Enqueue Call
Internal API changes to support multiple dependencies.
* Store all of a job's _dependencies_ in a redis set. Delete that set when a job is deleted.
* Add Job#fetch_dependencies method - which returns all jobs a job is dependent upon and optionally _WATCHES_ all dependency ids.
* Use Job#fetch_dependencies in Queue#enqueue_call. `fetch_dependencies` now sets WATCH and raises InvalidJobDependency, rather than enqueue_call.
`Queue` and `Job` public APIs still expect single ids of jobs for `depends_on` but internally register them in a way that could support multiple jobs being passed as dependencies.
Next up: need to update Queue#enqueue_dependents
* Use existing fetch_many method to get dependencies.
Modify fetch_dependencies to use fetch_many.
* Remove default value for fetch_many's connection parameter
* PR review housekeeping
* Remove a duplicate test
* Oneline something
* Fix missing colon in dependencies key
* Delete job key, dependents and dependencies at once
* More Fixes From Code Review
Updates to Job, Queue and associated tests.
* When checking dependencies, avoid a trip to Redis
* When checking the status of a job, we have a 'clean' status of all dependencies (returned from Job#fetch_dependencies) and the job keys are WATCHed, so there's no reason to go back to Redis to get the status _again_.
* It looks as though the `_status` set in `Job#restore` was bytes while it was converted to text (`as_text`) in `Job#get_status` - for consistency (and tests), it is now converted to text in `restore` as well.
* In `Queue#enqueue_call`, moved WATCH of dependencies_key to before fetching dependencies. This doesn't really matter but seems more _correct_ - one can imagine some rogue API adding a dependency after they've been fetched but before they've been WATCHed.
* Update Job#get_status to get _local_ status
* If refresh=False is passed, don't get status from Redis; return the value of _status. This is to avoid a trip to Redis if the caller can guarantee that the value of `_status` is _clean_.
* More Fixups
* Expire dependency keys in Job#cleanup
* Consistency in Job#fetch_dependencies
* Convert `_dependency_id` to `_dependency_ids`
Change `Job`'s tracking of its dependencies from a single _id_ to a list of _id_s. This change should be private to `Job` - especially leaving `Job#to_dict` and `Job#restore`'s treatment of a single 'dependency_id' intact.
This change modifies existing tests.
* Remove reliance upon dependency property in tests
... use dependency.id not `_dependency_id`
* Re-add assertions for Falsey Values
* Add _dependency_id property
For backwards compatibility with other libs such as django-rq and rq-scheduler
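A simplified sketch of the internal flow described above (`queue` and `job` are assumed to exist):

```python
with queue.connection.pipeline() as pipe:
    # WATCH the dependency keys first, then fetch every dependency in one
    # round trip; dependencies deleted from Redis are skipped, not raised.
    dependencies = job.fetch_dependencies(watch=True, pipeline=pipe)

    # The statuses are 'clean' and the keys are WATCHed, so the check can
    # use local values without another trip to Redis:
    if all(dep.get_status(refresh=False) == 'finished' for dep in dependencies):
        ...
```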
* Added FailedJobRegistry.
* Added job.failure_ttl.
* queue.enqueue() now supports failure_ttl
* Added registry.get_queue().
* FailedJobRegistry.add() now assigns DEFAULT_FAILURE_TTL.
* StartedJobRegistry.cleanup() now moves expired jobs to FailedJobRegistry.
* Failed jobs are now added to FailedJobRegistry.
* Added FailedJobRegistry.requeue()
* Document the new `FailedJobRegistry` and changes in custom exception handler behavior.
* Added worker.disable_default_exception_handler.
* Document --disable-default-exception-handler option.
* Deleted worker.failed_queue.
* Deleted "move_to_failed_queue" exception handler.
* StartedJobRegistry should no longer move jobs to FailedQueue.
* Deleted requeue_job
* Fixed test error.
* Make requeue cli command work with FailedJobRegistry
* Added .pytest_cache to gitignore.
* Custom exception handlers are no longer run in reverse
* Restored requeue_job function
* Removed get_failed_queue
* Deleted FailedQueue
* Updated changelog.
* Document `failure_ttl`
* Updated docs.
* Remove job.status
* Fixed typo in test_registry.py
* Replaced _pipeline() with pipeline()
* FailedJobRegistry no longer fails on redis-py>=3
* Fixes test_clean_registries
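A minimal sketch of the new failure handling (`do_work` is a placeholder task):

```python
from redis import Redis
from rq import Queue
from rq.registry import FailedJobRegistry

queue = Queue(connection=Redis())
# Failed jobs now land in FailedJobRegistry instead of the deleted
# FailedQueue, and are kept for failure_ttl seconds (DEFAULT_FAILURE_TTL
# when unspecified):
job = queue.enqueue(do_work, failure_ttl=3600)

registry = FailedJobRegistry(queue=queue)
for job_id in registry.get_job_ids():
    registry.requeue(job_id)  # send a failed job back to its queue
```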
* Worker names are now randomized
* Added a note about random worker names in CHANGES.md
* Worker will now stop working when encountering an unhandled exception.
* Worker should reraise SystemExit on cold shutdowns
* Added anchor.js to docs
* Support for Sentry-SDK (#1045)
* Updated RQ to support sentry-sdk
* Document Sentry integration
* Install sentry-sdk before running tests
* Improved rq info CLI command to be more efficient when displaying lar… (#1046)
* Improved rq info CLI command to be more efficient when displaying a large number of workers
* Fixed an rq info --by-queue bug
* Fixed worker.total_working_time bug (#1047)
* queue.enqueue() no longer accepts `timeout` argument (#1055)
* Clean worker registry (#1056)
* queue.enqueue() no longer accepts `timeout` argument
* Added clean_worker_registry()
* Show worker hostname and PID on cli (#1058)
* Show worker hostname and PID on cli
* Improve test coverage
* Remove Redis version check when SSL is used
* Bump version to 1.0
* Removed pytest_cache/README.md
* Changed worker logging to use exc_info=True
* Removed unused queue.dequeue()
* Fixed typo in CHANGES.md
* setup_loghandlers() should always call logger.setLevel() if specified
* Replaced async keyword with is_async in the Queue class to fix reserved keyword syntax errors in Python 3.7
* Updated tests to use is_async keyword when instantiating Queue objects
* Updated docs to reference is_async keyword for Queue objects
* Updated tox.ini, setup.py and .travis.yml with references to Python 3.7
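A minimal sketch of the rename (`async` became a reserved word in Python 3.7; `do_work` is a placeholder task):

```python
from redis import Redis
from rq import Queue

# Formerly Queue(async=False), which is now a SyntaxError:
q = Queue(is_async=False, connection=Redis())
job = q.enqueue(do_work)  # executed immediately, in the current process
```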
* Define redis key prefix as class variable
Some prefixes were hardcoded in several places. This made it hard to
use custom prefixes via subclasses.
Resolves #920
* fixup! Define redis key prefix as class variable
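A minimal sketch of the subclassing this enables; the attribute name follows `Queue`'s existing `redis_queue_namespace_prefix` and should be treated as illustrative:

```python
from rq import Queue


class MyQueue(Queue):
    # Previously hardcoded as 'rq:queue:' in several places:
    redis_queue_namespace_prefix = 'myapp:queue:'
```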
* Initial take on delete_dependents
* Add tests including corner cases
* No need to cancel dependents since they are not in a queue yet anyway
* The dependents keys can be deleted in all cases
* Update tests to include saved jobs in the deletion tests
* Correctly use pipeline in cancel method
* Unused connection
* Include dependents into dict format of job
* Add TODO
* Address comments from selwin
* Delete dependents key in redis if delete_dependents is called on its own
* Address recent comments from selwin
* Small change to trigger travis
* Remove TODO referring to canceled job state
* Remove dependent_ids from to_dict
* Address recent comments from selwin
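A minimal sketch of the resulting behavior (`job` is assumed to be an existing Job):

```python
# Deleting a job can now take its dependents with it; the dependents key
# is deleted in all cases, and canceling the dependents is unnecessary
# because they were never enqueued:
job.delete(delete_dependents=True)
```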