* Make new workflow
Described in https://github.com/rq/rq/pull/1465#issuecomment-842464560
* For testing
* failing tests
For testing
* log file
For testing
* log
For testing
* no newlines
For testing
* no "
For testing
* not only one issue
For testing
* as job
For testing
* use artifacts
For testing
* use artifacts2
For testing
* use artifacts3
For testing
* use artifacts4
For testing
* finish
* name
* only if "normal" workflow doesn't fail
https://github.com/rq/rq/pull/1470#discussion_r641343532
* Extract `Job.get_call_string` logic to `utils.get_call_string`
* Remove an outdated comment
* Move `truncate_long_string` to `utils`
* Remove `truncate` parameter in `get_call_string`
* Add a test for `get_call_string`
* Move `truncate_long_string` to module's top level
* Add a test case to the `truncate_long_string` suite
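A minimal sketch of the relocated helper, assuming `rq.utils.get_call_string` as the import path implied by the entries above:

```python
# Illustrative only: render a human-readable call string for a function
# name, positional args, and keyword args.
from rq.utils import get_call_string

# Prints something like: process(12, 'report', verbose=True)
print(get_call_string('process', (12, 'report'), {'verbose': True}))
```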
* Support enqueueing with on_success_callback
* Got success callback to execute properly
* Added on_failure callback
* Document success and failure callbacks
* More Flake8 fixes
* Mention that options can also be passed in via environment variables
* Increase coverage on test_callbacks.py
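A hedged sketch of enqueueing with the new callbacks described above; `my_tasks` and its functions are hypothetical placeholders, and the `on_success` / `on_failure` keyword names follow the documentation added in this batch:

```python
from redis import Redis
from rq import Queue

# Hypothetical task module: generate_report is the job, and the report_*
# functions are module-level callbacks importable by the worker.
from my_tasks import generate_report, report_success, report_failure

queue = Queue(connection=Redis())
queue.enqueue(
    generate_report,
    on_success=report_success,   # called with (job, connection, result, ...)
    on_failure=report_failure,   # called with (job, connection, type, value, traceback)
)
```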
* Make `enqueue_*` and `Queue.parse_args` accept `pipeline` arg
* undo bad docs
* Fix lints in new code
* Implement enqueue_many, refactor dependency setup out to method
* Make name consistent
* Refactor enqueue_many, update tests, add docs
* Refactor out enqueueing from dependency setup
* Move new docs to queue page
* Fix section header
* Fix section header again
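A sketch of the new bulk-enqueue API, assuming the `Queue.prepare_data` / `Queue.enqueue_many` names from the entries above (`my_tasks.send_email` is a hypothetical task):

```python
from redis import Redis
from rq import Queue

from my_tasks import send_email  # hypothetical task module

queue = Queue(connection=Redis())

# Prepare several jobs and submit them in one call; enqueue_many also
# accepts a pipeline argument per the entry above.
jobs = queue.enqueue_many([
    Queue.prepare_data(send_email, ('alice@example.com',)),
    Queue.prepare_data(send_email, ('bob@example.com',)),
])
```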
* Update rq version to 1.9.0
* adds unit test for a deserialization error
This tests that deserialization exceptions are properly logged, and fails in
the manner described in #1422.
* Catch deserialization errors in Worker.handle_exception()
This fixes #1422, and makes
tests/test_worker.py::TestWorker::test_deserializing_failure_is_handled
pass.
* made unit test less specific
This is required to get the test to pass under other serializers and other
Python versions.
* Added generic DeserializationError
* switched ValueError to DeserializationError in a test
The changed test is creating an invalid job, which now raises
DeserializationError when data is accessed, as opposed to ValueError.
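A hedged sketch of how the new exception surfaces, assuming `rq.exceptions.DeserializationError` and a placeholder job id:

```python
from redis import Redis
from rq.exceptions import DeserializationError
from rq.job import Job

job = Job.fetch('some-job-id', connection=Redis())  # placeholder id
try:
    # Touching the payload triggers deserialization
    print(job.func_name, job.args, job.kwargs)
except DeserializationError:
    print('job %s has an unreadable payload' % job.id)
```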
* Removed deprecated (object) inheritance
Added new py38, py39 versions to tox, removed deprecated py27, py34
Replace enum internal function with Enum class
* fix
* clean up jobs that are not really running due to zombie workers
* remove registry entries for zombie jobs
* return only the job ids on cleanup
* test zombie job cleanup
* format code
* rename variable to explain that second element in tuple is expiry, not score
* remove worker_key
* detect zombie jobs using old heartbeats
* reuse get_expired_job_ids
* set score using current_timestamp
* test idle jobs using stale heartbeats
* extract timeout into variable
* move heartbeats into StartedJobRegistry
* use registry.heartbeat in tests
* remove heartbeats when job removed from StartedJobRegistry
* remove idle and expired jobs from both wip and heartbeats set
* send heartbeat_ttl to registry.add
* typo
* revert everything 😶
* only keep job heartbeats as scores (and get rid of job timeouts as scores)
* calculate heartbeat_ttl in an overridable function + override it in SimpleWorker + move storing StartedJobRegistry scores to job.heartbeat()
* set heartbeat to monitoring interval for infinite timeouts
* track elapsed_execution_time as part of worker
* reset current job working time when work on a job is done
* persisting the job working time as part of monitoring
* Newer pip install command from git
Updated the cutting-edge pip install command to use HTTPS rather than the insecure git+git protocol, as recommended by the pip documentation. This allows the command to work with pip >= 21.0.1.
* Updated pip command in README
* implemented round-robin and random access to queues
* added tests for RoundRobinQueue
* reverted change in gitignore
* removed linebreak
* added tests for random queues
* added documentation for round robin and random queues
* moved round robin strategy to worker
* reverted changes to queue.py
* reverted changes to workers.md
* reverted changes to test_queue
* added tests for RoundRobinWorker and RandomWorker
* added doc for round robin and random workers
* removed f-strings for backward compatibility
* corrected a mistake
* minor changes (code style)
* now using _ordered_queues instead of queues for reordering queues
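A sketch of the new worker classes, assuming they are importable from `rq.worker` as `RoundRobinWorker` and `RandomWorker`:

```python
from redis import Redis
from rq import Queue
from rq.worker import RoundRobinWorker, RandomWorker

redis = Redis()
queues = [Queue('high', connection=redis), Queue('low', connection=redis)]

# Dequeue from the queues in round-robin order instead of strict priority;
# RandomWorker picks the next queue at random instead.
RoundRobinWorker(queues, connection=redis).work(burst=True)
```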
* Also accept lists and tuples as value of `depends_on`.
* The elements of the lists/tuples may be either Jobs or Job IDs.
* A single Job / Job ID is still accepted as well.
* Represent _all_ job dependencies in `Job.to_dict()`.
We now represent the entire list, instead of just the first element.
* Fix some doctext regarding plurality of dependencies.
* Add unit tests for job dependencies.
* One unit test establishes a pattern for checking execution order as affected by dependencies.
* Another unit test applies this pattern to the new capability to name multiple dependencies.
* Add unit test for new `depends_on` input formats.
Also test that these are properly persisted.
* Repair `Job.restore()`.
Need to convert bytes back to strings when reloading `dependency_ids`.
* Maintain backwards compat. in `Job.to_dict()`.
Keep the old `dependency_id` (singular) key.
* Provide coverage for new test fixture.
* Simplify some code.
Cut some superfluous `as_text()` calls left over from an earlier commit.
* Check for `dependency_id` in `Job.restore()` for backwd. compat.
Also eliminate use of `as_text()` here, in favor of `.decode()`.
* Switch to snake case instead of camel case.
* Eliminate some usages of `as_text()`.
Also cut some `print` statements.
* Cleanup.
* Accept arbitrary iterables for `Job`'s `depends_on` kwarg.
Instead of requiring a list or tuple, we now make use of `ensure_list()`.
* Add test fixtures.
* Provide a system to get two workers working simultaneously, using `multiprocessing`.
* Define a simple job that just says whether its dependencies are met.
* In `rpush`, add an option to record the name of the worker.
* Improve unit tests on execution order with dependencies.
These now actually have two workers going, which makes a more thorough test.
* Add unit test examining `Job.dependencies_are_met()` at execution time.
* Redesign dependency execution order unit tests.
* Simplify assertions.
* Improve doctext and formatting.
* Move fixture tests to new, dedicated module `test_fixtures.py`.
* Use `enqueue` instead of `enqueue_call` in new unit tests.
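A sketch of the broadened `depends_on` input (a single job, a job id, or any iterable mixing the two); the `my_tasks` functions are hypothetical:

```python
from redis import Redis
from rq import Queue

from my_tasks import fetch_data, merge_results  # hypothetical tasks

queue = Queue(connection=Redis())
first = queue.enqueue(fetch_data, 'part-1')
second = queue.enqueue(fetch_data, 'part-2')

# merge_results runs only after both parents finish; Jobs and job ids can be mixed
queue.enqueue(merge_results, depends_on=[first, second.id])
```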
If a UNIX socket path is passed to the constructor of the Redis client,
`redis.client.Redis`, the value of keyword argument `unix_socket_path`
is passed to the constructor of `UnixDomainSocketConnection` with the
key `path`.
When RQ's scheduler creates its own Redis connection, it instantiates
class `redis.client.Redis` with keyword arguments obtained from the
connection pool. If the pooled connection is a
`UnixDomainSocketConnection`, its keyword arguments contain `path`, as
given on instantiation. This results in a `TypeError: __init__() got an
unexpected keyword argument 'path'`.
This change renames the key back to `unix_socket_path` before the
keyword arguments dictionary is used to instantiate
`redis.client.Redis`.
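A minimal sketch of the rename described above (illustrative, not RQ's exact code):

```python
from redis import Redis
from redis.connection import ConnectionPool, UnixDomainSocketConnection

pool = ConnectionPool(connection_class=UnixDomainSocketConnection,
                      path='/tmp/redis.sock')

kwargs = pool.connection_kwargs.copy()
if issubclass(pool.connection_class, UnixDomainSocketConnection):
    # Redis.__init__ expects unix_socket_path, not the pooled connection's `path`
    kwargs['unix_socket_path'] = kwargs.pop('path')

connection = Redis(**kwargs)
```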
* `clean_worker_registry` cleans in batches to prevent submitting too much data to Redis at once when there is a large number of invalid keys
* Address code review comments
Rename MAX_REMOVABLE_KEYS to MAX_KEYS
* Fix tests
Co-authored-by: Joel Harris <combolations@gmail.com>
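An illustrative sketch of the batching idea (not RQ's actual registry code): remove invalid keys from a Redis set in chunks of `MAX_KEYS` per pipeline, instead of one oversized command:

```python
from redis import Redis

MAX_KEYS = 1000  # batch size; the name follows the review comment above

def remove_in_batches(connection, registry_key, invalid_keys):
    # Issue SREM in bounded chunks so a huge backlog of dead worker keys
    # never gets submitted to Redis in a single call.
    for start in range(0, len(invalid_keys), MAX_KEYS):
        with connection.pipeline() as pipeline:
            pipeline.srem(registry_key, *invalid_keys[start:start + MAX_KEYS])
            pipeline.execute()

# 'rq:workers' and the member names below are placeholders for illustration
remove_in_batches(Redis(), 'rq:workers', ['rq:worker:dead-1', 'rq:worker:dead-2'])
```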
* Ensure that the custom serializer defined is passed into the job fetch calls
* add serializer as argument to fetch_many and dequeue_any methods
* add worker test for custom serializer
* move json serializer to serializers.py
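A hedged sketch of using a custom serializer end to end; `JSONSerializer` now lives in `rq.serializers` per the entry above, and the `serializer` argument on the fetch calls is assumed from these changes:

```python
from redis import Redis
from rq import Queue, Worker
from rq.job import Job
from rq.serializers import JSONSerializer

redis = Redis()
queue = Queue(connection=redis, serializer=JSONSerializer)
worker = Worker([queue], connection=redis, serializer=JSONSerializer)

# fetch_many now honours the same serializer instead of the default pickling
# ('id-1' and 'id-2' are placeholder job ids)
jobs = Job.fetch_many(['id-1', 'id-2'], connection=redis, serializer=JSONSerializer)
```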
* Added send_stop_job_command().
* send_stop_job_command now accepts just connection and job_id
* Document send_stop_job_command
* Updated test coverage
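A sketch of stopping a running job with the new command, assuming `rq.command` as the module path:

```python
from redis import Redis
from rq.command import send_stop_job_command

# Takes just a connection and a job id, per the entry above
# ('some-job-id' is a placeholder)
send_stop_job_command(Redis(), 'some-job-id')
```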