* Added `Dependency` class with `allow_failures`
* Addressed requested review changes
* Check type before setting `job.dependency_allow_fail` within `Job.create`
* Set `job.dependency_allow_fail` within `Job.create`
* Added test to ensure persistence of `dependency_allow_fail`
* Removed typing and allowed a mixed list of ints and `Job` objects
* Converted `dependency_allow_fail` boolean to integer during serialization to avoid Redis `DataError`
* Updated `test_multiple_dependencies_are_accepted_and_persisted` test to include `Dependency` cases
* Added placeholder test for the actual behavior of the new `Dependency` usage in `depends_on`
* Updated `test_job_dependency` to include cases using `Dependency`
* Added dependency_allow_fail logic to `Job.restore`
* Renamed `dependency_allow_fail` to a simpler `allow_failure`
* Updated docs to add a section about the new `Dependency` class and its use-case (see the usage sketch after this list)
* Updated `Job.dependencies_are_met` logic to take `FAILED` and `STOPPED` jobs into account when `allow_failure=True`
* Updated `test_job_dependency` test. Still failing with `Dependency` case.
* Fix `allow_failure` type coercion in `Job.restore`
* Re-arrange tests, so default `Dependency.allow_failure` is before explicit `allow_failure=True`
* Fixed `Dependency` so it works correctly when `allow_failure=True`
* Attempted to execute the pipeline prior to enqueuing a failed job's dependents; `test_create_and_cancel_job_enqueue_dependents_in_registry` now passes
* Added `Dependency` test utilizing multiple dependencies
* Removed irrelevant on_success and on_failure keyword arguments in example
* Replaced use of `long_running_job`
* Added test to verify `Dependency.jobs` constraints
* Suppressed connection error in `handle_job_failure`
* `test_dependencies` tests now pass
* All tests pass if `enqueue_dependents` is called without `pipeline.watch()`
* All tests now pass
* Removed print statements
* Cleaned up `Dependency` implementation
* Renamed `job.allow_failure` to `job.allow_dependency_failures`
Co-authored-by: mattchan <mattchan@tencent.com>
Co-authored-by: Mike Hill <mhilluniversal@gmail.com>
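A minimal usage sketch of the feature described above (the task functions are hypothetical; `Dependency` is importable from `rq.job`):

```python
from redis import Redis
from rq import Queue
from rq.job import Dependency

def fetch_data():
    ...  # hypothetical task function

def postprocess():
    ...  # hypothetical task function

queue = Queue(connection=Redis())
first = queue.enqueue(fetch_data)

# With allow_failure=True, `postprocess` is enqueued even if `fetch_data`
# ends up FAILED or STOPPED, instead of staying deferred forever.
depends = Dependency(jobs=[first], allow_failure=True)
second = queue.enqueue(postprocess, depends_on=depends)
```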
There are small typos in:
- docs/docs/exceptions.md
- docs/docs/jobs.md
- rq/queue.py
- tests/fixtures.py
- tests/test_job.py
Fixes:
- Should read `slightly` rather than `slighty`.
- Should read `requeuing` rather than `requeueing`.
- Should read `implementers` rather than `implementors`.
- Should read `definition` rather than `defition`.
- Should read `canceled` rather than `canceld`.
Signed-off-by: Tim Gates <tim.gates@iress.com>
* Added `CrossPlatformDeathPenalty` that doesn't rely on signals
* Updated `SimpleWorker`'s `death_penalty_class` to utilize `CrossPlatformDeathPenalty` to allow use on Windows machines
* Changed `CrossPlatformDeathPenalty` to `TimerDeathPenalty`
* Removed overridden `death_penalty_class` in `SimpleWorker` until feature matures
* Added section in testing.md explaining how to utilize `SimpleWorker` on Windows OS
* Replaced usage of f-strings with `.format()` for Python 3.5 compatibility
* Add tests for new timeout feature
* Explicitly set defaults.CALLBACK_TIMEOUT
* Implemented cross-thread method of raising errors by using `ctypes` (see the sketch after this list)
* Finished writing tests for new TimerDeathPenalty
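The `ctypes` technique referenced above is the well-known CPython recipe for raising an exception in another thread; a standalone sketch (the helper name is hypothetical, and this is not RQ's exact implementation):

```python
import ctypes
import threading

def raise_in_thread(thread: threading.Thread, exc_type: type) -> None:
    """Asynchronously raise exc_type in the target thread (CPython only)."""
    affected = ctypes.pythonapi.PyThreadState_SetAsyncExc(
        ctypes.c_long(thread.ident), ctypes.py_object(exc_type)
    )
    if affected == 0:
        raise ValueError('invalid thread id')
    if affected > 1:
        # More than one thread affected: undo and report failure.
        ctypes.pythonapi.PyThreadState_SetAsyncExc(ctypes.c_long(thread.ident), None)
        raise SystemError('PyThreadState_SetAsyncExc failed')
```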
* rq.worker: remove useless set_state call in horse
The state should already have been set properly by the worker in
`execute_job`. `prepare_job_execution` is only called by `perform_job`, which
should only be called by `main_work_horse`/`fork_work_horse` (themselves only called by `execute_job`).
Let `execute_job` do the bookkeeping, as sketched below.
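A rough sketch of that ownership (method names match the commit message; the body is simplified, not RQ's exact code):

```python
from rq.worker import Worker, WorkerStatus

class SketchWorker(Worker):
    def execute_job(self, job, queue):
        # All state bookkeeping happens here, once, in the parent process;
        # the work horse itself no longer calls set_state.
        self.set_state(WorkerStatus.BUSY)
        self.fork_work_horse(job, queue)
        self.monitor_work_horse(job, queue)
        self.set_state(WorkerStatus.IDLE)
```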
* worker: update SimpleWorker's state in execute_job
The root logger was used in the utils, which can cause noise when
inspecting logs. Properly defining a logger and using it, as is done
everywhere else, increases consistency across the codebase; the standard pattern is sketched below.
Co-authored-by: olaure <o.s.c.l.a.u.r.e.nt@gmail.com>
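For reference (the function here is a hypothetical stand-in for the utils code):

```python
import logging

# A named, module-level logger instead of the root logger: records carry
# the module name and can be filtered or configured per module.
logger = logging.getLogger(__name__)

def some_utility():
    logger.debug('Doing utility work...')
```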
The handling of `WatchError` in `setup_dependencies()` did not reset
the job's local status. So if, due to parallel processing, all
dependencies finished while the job was being enqueued (causing
`WatchError` to be raised), the job returned to `enqueue_call()`
remained in `DEFERRED` state, which resulted in no execution at all.
This bug opened up many possible race conditions, since the
watch logic from Redis no longer surfaced a failure when dependencies
were changed in parallel.
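A simplified sketch of the corrected pattern (the job/queue method names mirror RQ's internals, but this loop is illustrative, not the actual implementation):

```python
from redis.exceptions import WatchError
from rq.job import JobStatus

def enqueue_with_dependencies(connection, queue, job):
    with connection.pipeline() as pipe:
        while True:
            try:
                pipe.watch(job.dependencies_key)
                # Read dependency state while still under WATCH.
                dependencies_met = job.dependencies_are_met(pipeline=pipe)
                pipe.multi()
                if dependencies_met:
                    # The crucial fix: refresh the *local* status too, so the
                    # job is actually enqueued instead of staying DEFERRED.
                    job.set_status(JobStatus.QUEUED, pipeline=pipe)
                    queue.enqueue_job(job, pipeline=pipe)
                else:
                    job.set_status(JobStatus.DEFERRED, pipeline=pipe)
                pipe.execute()
                break
            except WatchError:
                # Dependencies changed in parallel; re-check from scratch.
                continue
```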
* Fixed a bug that caused leftover job keys when `result_ttl=0`
* Fixed buggy `worker.maintain_heartbeats()` behavior
* Use `shutil.get_terminal_size()` instead of `click.get_terminal_size()`
resolves warning:
rq/cli/helpers.py:107: DeprecationWarning: 'click.get_terminal_size()' is deprecated and will be removed in Click 8.1. Use 'shutil.get_terminal_size()' instead.
termwidth, _ = click.get_terminal_size()
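The drop-in replacement (both calls return a `(columns, lines)` pair):

```python
import shutil

# Stdlib equivalent of the deprecated click.get_terminal_size().
termwidth, _ = shutil.get_terminal_size()
```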
* Removed `StrictVersion` from rq
* `asyncio.get_event_loop()` -> `asyncio.new_event_loop()`
resolves warning:
tests/test_job.py::TestJob::test_create_job_with_async
rq/rq/job.py:839: DeprecationWarning: There is no current event loop
loop = asyncio.get_event_loop()
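A sketch of the fix; the coroutine is a hypothetical stand-in, and the key change is creating and closing the loop explicitly:

```python
import asyncio

async def job_coroutine():
    # Stand-in for the job's async function.
    return 42

# asyncio.get_event_loop() warns when no loop is running; create and
# manage a fresh loop explicitly instead.
loop = asyncio.new_event_loop()
try:
    result = loop.run_until_complete(job_coroutine())
finally:
    loop.close()
```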
* Add python 3.10 to matrix
Co-authored-by: rpkak <rpkak@users.noreply.github.com>
* Fix `job.cancel` to remove job from registries if not in queue
* Remove old queue remove call
* Block the ability to cancel jobs that are already canceled (see the sketch after this list)
* Fix py35 compat
* Rename helper method
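A rough usage sketch of the resulting behavior (assuming the second cancel raises RQ's `InvalidJobOperation`; treat the exact exception type as an assumption):

```python
from redis import Redis
from rq import Queue
from rq.exceptions import InvalidJobOperation

queue = Queue(connection=Redis())
job = queue.enqueue(print, 'hello')  # trivial placeholder task

job.cancel()  # also removed from registries if no longer in the queue
try:
    job.cancel()  # canceling an already-canceled job is now blocked
except InvalidJobOperation:
    pass
```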
In #1496, we observed a situation where the `work()` method crashes after
`.subscribe()`, but prior to the `try/except` block which normally cleans up
after `.subscribe()`.
Specifically, `.subscribe()` launches a thread in non-daemon mode. Because of
that setting, Python will keep the calling worker process active, even if the
main thread has crashed. This resulted in a syndrome where a worker process was
running, but doing no work.
The change launches this thread in daemon mode, i.e. prevents a "zombie" pubsub
thread from keeping the process up.
(An additional change we could make, discussed in #1496 but deferred, would be
to improve the error handling/trapping scope in `.work()` such that all
failures trigger resource cleanup.)