* Add feature to enqueue dependents at the front of queues
* Add documentation for the Dependency(enqueue_at_front=...) parameter
* docs: Add `enqueue_at_front` to list of parameters for Dependency
* test: Update dependency test to not rely on Redis ordering
* refactor: Save enqueue_at_front boolean in job.meta instead of separate instance attr
* fix: Made enqueue_at_front an instance attribute instead of putting it inside meta
* Fix job.dependencies_are_met() if dependency is canceled
* Slightly better test coverage on dependencies_are_met()
* Fixed job.cancel(enqueue_dependent=True)
* Move common flake8 options into config file
Currently --max-line-length is specified in two places. Just use the
existing value in the config file as the source of truth.
Move --count and --statistics to the config file as well.
* Fix some lints
* added Dependency class with allow_failures
* Requested changes
* Check type before setting `job.dependency_allow_fail` within `Job.create`
* Set `job.dependency_allow_fail` within `Job.create`
* Added test to ensure persistence of `dependency_allow_fail`
* Removed typing and allow mixed list of ints and Job objects
* Convert dependency_allow_fail boolean to integer during serialization to avoid redis DataError
* Updated `test_multiple_dependencies_are_accepted_and_persisted` test to include `Dependency` cases
* Adding placeholder test to test actual behavior of new `Dependency` usage in `depends_on`
* Updated `test_job_dependency` to include cases using `Dependency`
* Added dependency_allow_fail logic to `Job.restore`
* Renamed `dependency_allow_fail` to a simpler `allow_failure`
* Update docs to add section about the new `Dependency` class and use case (a usage sketch follows this list)
* Updated `Job.dependencies_are_met` logic to take `FAILED` and `STOPPED` jobs into account when `allow_failure=True`
* Updated `test_job_dependency` test. Still failing with `Dependency` case.
* Fix `allow_failure` type coercion in `Job.restore`
* Re-arrange tests, so default `Dependency.allow_failure` is before explicit `allow_failure=True`
* Fixed Dependency, so it works correctly when allow_failure=True
* Attempt to execute pipeline prior to queueing a failed job's dependents. test_create_and_cancel_job_enqueue_dependents_in_registry test now passes.
* Added `Dependency` test utilizing multiple dependencies
* Removed irrelevant on_success and on_failure keyword arguments in example
* Replaced use of long_running_job
* Add test to verify `Dependency.jobs` constraints
* Suppress connection error in handle_job_failure
* test_dependencies have passed
* All tests pass if enqueue_dependents called without pipeline.watch()
* All tests now pass
* Removed print statements
* Cleanup Dependency implementation
* Renamed job.allow_failure to job.allow_dependency_failures
Co-authored-by: mattchan <mattchan@tencent.com>
Co-authored-by: Mike Hill <mhilluniversal@gmail.com>
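Taken together, the `Dependency` changes above let a dependent job run even when a parent fails and be placed at the front of the queue. A minimal sketch, assuming a release that ships both `allow_failure` and `enqueue_at_front` (the enqueued functions are illustrative):

```python
from redis import Redis
from rq import Queue
from rq.job import Dependency

def fetch_data(): ...
def send_report(): ...

queue = Queue(connection=Redis())
parent = queue.enqueue(fetch_data)

# Enqueue send_report even if fetch_data fails or is stopped, and put it at
# the front of the queue once the dependency is resolved.
dependency = Dependency(jobs=[parent], allow_failure=True, enqueue_at_front=True)
queue.enqueue(send_report, depends_on=dependency)
```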
There are small typos in:
- docs/docs/exceptions.md
- docs/docs/jobs.md
- rq/queue.py
- tests/fixtures.py
- tests/test_job.py
Fixes:
- Should read `slightly` rather than `slighty`.
- Should read `requeuing` rather than `requeueing`.
- Should read `implementers` rather than `implementors`.
- Should read `definition` rather than `defition`.
- Should read `canceled` rather than `canceld`.
Signed-off-by: Tim Gates <tim.gates@iress.com>
* Added CrossPlatformDeathPenalty that doesn't rely on signals
* Updated `SimpleWorker`'s `death_penalty_class` to utilize `CrossPlatformDeathPenalty` to allow use on Windows machines
* Changed `CrossPlatformDeathPenalty` to `TimerDeathPenalty`
* Removed overridden `death_penalty_class` in `SimpleWorker` until feature matures
* Added section in testing.md explaining how to utilize `SimpleWorker` on Windows OS
* Replaced usage of f-strings with `.format` for Python 3.5 compatibility
* Add tests for new timeout feature
* Explicitly set defaults.CALLBACK_TIMEOUT
* Implemented cross-thread method of raising errors by using ctypes
* Finished writing tests for new TimerDeathPenalty
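A sketch of the Windows-friendly setup described in this entry (the subclass name is illustrative); `TimerDeathPenalty` enforces job timeouts with a timer thread instead of signals:

```python
from redis import Redis
from rq import Queue, SimpleWorker
from rq.timeouts import TimerDeathPenalty

class WindowsSimpleWorker(SimpleWorker):
    # Timer-based timeouts work without SIGALRM, which Windows lacks
    death_penalty_class = TimerDeathPenalty

queue = Queue(connection=Redis())
WindowsSimpleWorker([queue], connection=queue.connection).work(burst=True)
```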
* rq.worker: remove useless set_state call in horse
The state should already have been set properly by the worker in
`execute_job`
`prepare_job_execution` is only called by `perform_job` which should only be
called by `main_work_horse`/`fork_work_horse` (themselves only called by `execute_job`).
Let `execute_job` do the bookkeeping.
* worker: update SimpleWorker's state in execute_job
This bug opened up many possible race conditions, since Redis's
watch logic no longer failed when dependencies were changed
in parallel.
* Fixes a bug that causes leftover job keys when result_ttl=0
* Fixed a buggy worker.maintain_heartbeats() behavior
* Fixed a bug in worker.maintain_heartbeats().
* use shutil.get_terminal_size instead of click.get_terminal_size()
resolves warning:
rq/cli/helpers.py:107: DeprecationWarning: 'click.get_terminal_size()' is deprecated and will be removed in Click 8.1. Use 'shutil.get_terminal_size()' instead.
termwidth, _ = click.get_terminal_size()
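The replacement is a one-line swap to the standard library:

```python
import shutil

# shutil.get_terminal_size() returns (columns, lines), so the unpacking is unchanged
termwidth, _ = shutil.get_terminal_size()
```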
* remove StrictVersion from rq
* asyncio.get_event_loop -> asyncio.new_event_loop()
resolves warning:
tests/test_job.py::TestJob::test_create_job_with_async
rq/rq/job.py:839: DeprecationWarning: There is no current event loop
loop = asyncio.get_event_loop()
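A sketch of the pattern after the change (the coroutine is illustrative); creating a fresh loop avoids the "There is no current event loop" deprecation warning:

```python
import asyncio

async def run_async_job():
    return 42

# asyncio.new_event_loop() replaces the deprecated asyncio.get_event_loop()
loop = asyncio.new_event_loop()
try:
    result = loop.run_until_complete(run_async_job())
finally:
    loop.close()
```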
* Add python 3.10 to matrix
Co-authored-by: rpkak <rpkak@users.noreply.github.com>
* Fix `job.cancel` to remove job from registries if not in queue
* Remove old queue remove call
* Block the ability to cancel jobs that are already canceled
* Fix py35 compat
* Rename helper method
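A small sketch of the behavior described above; `InvalidJobOperation` is assumed to be the exception recent rq versions raise for a repeated cancel:

```python
from redis import Redis
from rq import Queue
from rq.exceptions import InvalidJobOperation

queue = Queue(connection=Redis())
job = queue.enqueue(print, 'hello')

job.cancel()               # removes the job from its queue and registries
try:
    job.cancel()           # canceling an already-canceled job is rejected
except InvalidJobOperation:
    pass
```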
* job: add get_meta() function
The newly introduced function returns the metadata stored for the job. This
is required since job.meta stays an empty dict until the job is
finished or failed.
With the new function it's possible to store arbitrary states/stages of
the job and allow the user to track progress. A long running job may
return custom stages like `downloading_data`, `unpacking_data`,
`processing_data`, etc.
This may allow better interfaces since users can track progress.
Signed-off-by: Paul Spooren <mail@aparcar.org>
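A sketch of the progress-tracking use case for `get_meta()` described in this entry (the task function is illustrative):

```python
from redis import Redis
from rq import Queue, get_current_job

def process_data():
    job = get_current_job()
    job.meta['stage'] = 'downloading_data'
    job.save_meta()
    # ... do the work, updating job.meta as each stage completes ...

queue = Queue(connection=Redis())
job = queue.enqueue(process_data)

# The enqueuing side's handle does not refresh job.meta by itself;
# get_meta() re-reads it from Redis so progress can be tracked.
print(job.get_meta().get('stage'))
```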
* docs: add missing `refresh` arg to get_status()
This was previously missing.
Signed-off-by: Paul Spooren <mail@aparcar.org>
* Extract `Job.get_call_string` logic to `utils.get_call_string`
* Remove an outdated comment
* Move `truncate_long_string` to `utils`
* Remove `truncate` parameter in `get_call_string`
* Add a test for `get_call_string`
* Move `truncate_long_string` to module's top level
* Add a test case for `truncate_long_string` suite
* Support enqueueing with on_success_callback
* Got success callback to execute properly
* Added on_failure callback
* Document success and failure callbacks (a usage sketch follows this list)
* More Flake8 fixes
* Mention that options can also be passed in via environment variables
* Increase coverage on test_callbacks.py
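A sketch of the callback feature documented above; the callback signatures follow the rq docs, and the task function is illustrative:

```python
from redis import Redis
from rq import Queue

def some_task():
    ...

def report_success(job, connection, result, *args, **kwargs):
    print('finished', job.id, result)

def report_failure(job, connection, type, value, traceback):
    print('failed', job.id)

queue = Queue(connection=Redis())
queue.enqueue(some_task, on_success=report_success, on_failure=report_failure)
```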
* Make `enqueue_*` and `Queue.parse_args` accept `pipeline` arg
* undo bad docs
* Fix lints in new code
* Implement enqueue_many, refactor dependency setup out to method
* Make name consistent
* Refactor enqueue_many, update tests, add docs (a usage sketch follows this list)
* Refactor out enqueueing from dependency setup
* Move new docs to queue page
* Fix section header
* Fix section header again
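A sketch of `enqueue_many` as introduced here (the task function and job IDs are illustrative; exact `prepare_data` keyword support may vary by version):

```python
from redis import Redis
from rq import Queue

def count_words(url):
    ...

queue = Queue('default', connection=Redis())

# prepare_data builds a lightweight per-job payload; enqueue_many pushes the
# whole batch in one pipelined round trip (an explicit pipeline can be passed too).
jobs = queue.enqueue_many([
    Queue.prepare_data(count_words, ('http://nvie.com',), job_id='job_one'),
    Queue.prepare_data(count_words, ('http://nvie.com',), job_id='job_two'),
])
```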
* Update rq version to 1.9.0
* adds unit test for a deserialization error
This tests that deserialization exceptions are properly logged, and fails in
the manner described in #1422.
* Catch deserializing errors in Worker.handle_exception()
This fixes #1422, and makes
tests/test_worker.py::TestWorker::test_deserializing_failure_is_handled
pass.
* made unit test less specific
This is required to get the test to pass under other serializers / other
python versions.
* Added generic DeserializationError
* switched ValueError to DeserializationError in a test
The changed test is creating an invalid job, which now raises
DeserializationError when data is accessed, as opposed to ValueError.
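A sketch of how the new exception surfaces to callers, based on the description above (the job ID is illustrative):

```python
from redis import Redis
from rq.exceptions import DeserializationError
from rq.job import Job

job = Job.fetch('some-job-id', connection=Redis())
try:
    func_name = job.func_name      # the payload is deserialized lazily on access
except DeserializationError:
    # the stored job data could not be deserialized (e.g. corrupt pickle)
    ...
```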
* Removed deprecated (object) inheritance
Add new py38, py39 versions to tox, removed deprecated py27, py34
Replace enum internal function with Enum class
* fix
* cleanup jobs that are not really running due to zombie workers
* remove registry entries for zombie jobs
* return only the job ids on cleanup
* test zombie job cleanup
* format code
* rename variable to explain that second element in tuple is expiry, not score
* remove worker_key
* detect zombie jobs using old heartbeats
* reuse get_expired_job_ids
* set score using current_timestamp
* test idle jobs using stale heartbeats
* extract timeout into variable
* move heartbeats into StartedJobRegistry
* use registry.heartbeat in tests
* remove heartbeats when job removed from StartedJobRegistry
* remove idle and expired jobs from both wip and heartbeats set
* send heartbeat_ttl to registry.add
* typo
* revert everything 😶
* only keep job heartbeats as score (and get rid of job timeouts as scores)
* calculate heartbeat_ttl in an overridable function + override it in SimpleWorker + move storing StartedJobRegistry scores to job.heartbeat()
* set heartbeat to monitoring interval for infinite timeouts
* track elapsed_execution_time as part of worker
* reset current job working time when work on a job is done
* persisting the job working time as part of monitoring
* implemented round-robin and random access to queues
* added tests for RoundRobinQueue
* reverted change in gitignore
* removed linebreak
* added tests for random queues
* added documentation for round robin and random queues
* moved round robin strategy to worker
* reverted changes to queue.py
* reverted changes to workers.md
* reverted changes to test_queue
* added tests for RoundRobinWorker and RandomWorker
* added doc for round robin and random workers (a usage sketch follows this list)
* removed f-strings for backward compatibility
* corrected a mistake
* minor changes (code style)
* now using _ordered_queues instead of queues for reordering queues
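A sketch of the new worker classes, assuming they are importable from `rq.worker` as described in the docs entry above:

```python
from redis import Redis
from rq import Queue
from rq.worker import RandomWorker, RoundRobinWorker

redis = Redis()
queues = [Queue('high', connection=redis), Queue('low', connection=redis)]

# Dequeue from the queues in round-robin order instead of strict priority order.
RoundRobinWorker(queues, connection=redis).work(burst=True)

# RandomWorker(queues, connection=redis) picks the next queue at random instead.
```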
* Also accept lists and tuples as value of `depends_on`.
* The elements of the lists/tuples may be either Jobs or Job IDs.
* A single Job / Job ID is still accepted as well (see the sketch at the end of this entry).
* Represent _all_ job dependencies in `Job.to_dict()`.
We now represent the entire list, instead of just the first element.
* Fix some doctext regarding plurality of dependencies.
* Add unit tests for job dependencies.
* One unit test establishes a pattern for checking execution order as affected by dependencies.
* Another unit test applies this pattern to the new capability to name multiple dependencies.
* Add unit test for new `depends_on` input formats.
Also test that these are properly persisted.
* Repair `Job.restore()`.
Need to convert bytes back to strings when reloading `dependency_ids`.
* Maintain backwards compat. in `Job.to_dict()`.
Keep the old `dependency_id` (singular) key.
* Provide coverage for new test fixture.
* Simplify some code.
Cut some superfluous `as_text()` calls left over from an earlier commit.
* Check for `dependency_id` in `Job.restore()` for backwd. compat.
Also eliminate use of `as_text()` here, in favor of `.decode()`.
* Switch to snake case instead of camel case.
* Eliminate some usages of `as_text()`.
Also cut some `print` statements.
* Cleanup.
* Accept arbitrary iterables for `Job`'s `depends_on` kwarg.
Instead of requiring a list or tuple, we now make use of `ensure_list()`.
* Add test fixtures.
* Provide a system to get two workers working simultaneously, using `multiprocessing`.
* Define a simple job that just says whether its dependencies are met.
* In `rpush`, make an option to record the name of the worker.
* Improve unit tests on execution order with dependencies.
These now actually have two workers going, which makes a more thorough test.
* Add unit test examining `Job.dependencies_are_met()` at execution time.
* Redesign dependency execution order unit tests.
* Simplify assertions.
* Improve doctext and formatting.
* Move fixture tests to new, dedicated module `test_fixtures.py`.
* Use `enqueue` instead of `enqueue_call` in new unit tests.
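A sketch of the expanded `depends_on` input formats described in this entry (the task functions are illustrative):

```python
from redis import Redis
from rq import Queue

def extract(): ...
def transform(): ...
def load(): ...

queue = Queue(connection=Redis())
job_a = queue.enqueue(extract)
job_b = queue.enqueue(transform)

# depends_on now accepts a single job/ID or an iterable mixing Jobs and IDs;
# load only runs once both of its parents have finished.
queue.enqueue(load, depends_on=[job_a, job_b.id])
```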
* clean_worker_registry now cleans in batches, to avoid submitting too much data to Redis at once when there is a large number of invalid keys (a sketch of the batching idea follows at the end of this entry)
* Address code review comments
Rename MAX_REMOVABLE_KEYS to MAX_KEYS
* Fix tests
Co-authored-by: Joel Harris <combolations@gmail.com>
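A hypothetical sketch of the batching idea above; the function name, registry key, and `MAX_KEYS` value are illustrative, not rq's actual implementation:

```python
from redis import Redis

MAX_KEYS = 1000  # assumed batch size; the real constant lives inside rq

def remove_invalid_worker_keys(connection: Redis, registry_key: str, invalid_keys: list):
    """Remove dead workers' keys from a registry set in chunks, so a single
    command never carries an unbounded number of members."""
    with connection.pipeline() as pipeline:
        for offset in range(0, len(invalid_keys), MAX_KEYS):
            batch = invalid_keys[offset:offset + MAX_KEYS]
            pipeline.srem(registry_key, *batch)
        pipeline.execute()
```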