* Update linting configuration
This removes flake8 in favor of ruff, which also provides isort support, and
updates all files to be black, isort, and ruff compliant. It also adds black
and ruff checks to tox and to the GitHub linting workflow.
* Tweak the code coverage config and calls
* First stab at implementing worker pool
* Use process.is_alive() to check whether a process is still alive
* Handle shutdown signal
* Check worker loop done
* First working version of `WorkerPool`.
* Added test for check_workers()
* Added test for pool.start()
* Better shutdown process
* Comment out test_start() to see if it fixes CI
* Make tests pass
* Make CI pass
* Comment out some tests
* Comment out more tests
* Re-enable a test
* Re-enable another test
* Uncomment check_workers test
* Added run_worker test
* Minor modification to dead worker detection
* More test cases
* Better process name for workers
* Added back pool.stop_workers() when signal is received
* Cleaned up cli.py
* WIP on worker-pool command
* Fix test
* Test that worker pool ignores consecutive shutdown signals
* Added test for worker-pool CLI command.
* Added timeout to CI jobs
* Fix worker pool test
* Comment out test_scheduler.py
* Fixed worker-pool in burst mode
* Increase test coverage
* Exclude tests directory from coverage.py
* Improve test coverage
* Renamed `Pool(num_workers=2)` to `Pool(size=2)`
* Revert "Renamed `Pool(num_workers=2)` to `Pool(size=2)`"
This reverts commit a1306f89ad0d8686c6bde447bff75e2f71f0733b.
* Renamed Pool to WorkerPool
* Added a new TestCase that doesn't use LocalStack
* Added job_class, worker_class and serializer arguments to WorkerPool
* Use parse_connection() in WorkerPool.__init__
* Added CLI arguments for worker-pool
* Minor WorkerPool and test fixes
* Fixed failing CLI test
* Document WorkerPool
* WIP job results
* Result can now be saved
* Successfully saved and restored result
* result.save() should accept pipeline
* Successful results are saved
* Failures are now saved properly too.
* Added test for Result.get_latest()
* Checkpoint
* Got Result.all() to work
* Added Result.count(), Result.delete()
* Backward compatibility for job.result and job.exc_info
* Added some typing
* More typing stuff
* Fixed typing in job.py
* More typing updates
* Only keep the last 10 results
* Documented job.results()
* Got results test to pass
* Don't run test_results.py on Redis server < 5.0
* Fixed mock import on some Python versions
* Remove Redis 3 from test matrix
* Jobs should never use the new Result implementation if server is < 5.0
* Results should only be created if Redis streams are supported.
* Added back Redis 3 to test matrix
* Fixed job.supports_redis_streams
* Fixed worker test
* Updated docs.
* Support enqueueing with on_success_callback
* Got success callback to execute properly
* Added on_failure callback
* Document success and failure callbacks
* More Flake8 fixes
* Mention that options can also be passed in via environment variables
* Increase coverage on test_callbacks.py
* Modify zadd calls for redis-py 3.0
redis-py 3.0 changes the zadd interface to accept a single
mapping argument that is expected to be a dict.
https://github.com/andymccurdy/redis-py#mset-msetnx-and-zadd
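A minimal sketch of the call-site change, assuming a plain redis-py connection; the key and score values are illustrative:

    from redis import Redis

    conn = Redis()
    # redis-py 2.x accepted score and member as separate arguments:
    #   conn.zadd('some-sorted-set', 1234, 'some-job-id')
    # redis-py 3.0 expects a single dict mapping members to scores:
    conn.zadd('some-sorted-set', {'some-job-id': 1234})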
* Change FailedQueue.push_job_id to always push a str
redis-py 3.0 does not attempt to cast values to str; that is left
to the user.
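A sketch of the idea; the function body and names here are illustrative, not the exact FailedQueue code:

    # redis-py 3.0 no longer coerces values, so cast explicitly before pushing.
    def push_job_id(connection, queue_key, job_id):
        connection.rpush(queue_key, str(job_id))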
* Remove Redis connection patching
Since in redis-py 3.0 the Redis class is identical to StrictRedis,
we no longer need to patch _zadd and other methods.
Ref: https://github.com/rq/rq/pull/1016#issuecomment-441010847
This patches the connection object (which is either a StrictRedis
instance or a Redis instance) to have alternative class methods that
behave exactly like their StrictRedis counterparts, no matter which
type the object is. Only the ambiguous methods are patched. The
exhaustive list:
- _zadd (fixes argument order)
- _lrem (fixes argument order)
- _setex (fixes argument order)
- _pipeline (always returns a StrictPipeline)
- _ttl (fixes return value)
- _pttl (fixes return value)
This makes it possible to call the methods reliably without polluting
the RQ code any further.
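For illustration, a minimal sketch of what one such patch might look like under redis-py 2.x semantics; `patch_connection` and the isinstance check are assumptions, not the exact removed code:

    from redis import Redis, StrictRedis

    def patch_connection(connection):
        # StrictRedis.zadd takes (name, score, value); the legacy Redis class
        # (a subclass of StrictRedis in redis-py 2.x) took (name, value, score).
        # Expose a uniform _zadd with the StrictRedis argument order.
        if isinstance(connection, Redis):
            connection._zadd = lambda name, score, value: connection.zadd(name, value, score)
        else:
            connection._zadd = connection.zadd
        return connection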
This reverts commit 1ab8c19696 and
reintroduces all changes made by @dstufft.
Still, it needs more patches to re-enable the default log-to-console
behaviour. See #121.
Connections can now be set explicitly on Queues, Workers, and Jobs.
Jobs that are implicitly created by Queue or Worker API calls now
inherit their creator's connection.
For all RQ object instances created from now on, the "current"
connection is used if none is passed in explicitly. The "current"
connection is thus held on to at creation time and won't change for
the lifetime of the object.
Effectively, this means that, given a default Redis connection, if you
create a queue Q1, then push another Redis connection onto the
connection stack and create Q2, then Q1 refers to a queue on the first
connection and Q2 to a queue on the second.
This is much clearer than it used to be.
Also, I've removed the `use_redis()` call, which was poorly named.
Instead, some new alternatives for connection management now exist.
You can push/pop connections now:
>>> my_conn = Redis()
>>> push_connection(my_conn)
>>> q = Queue()
>>> q.connection == my_conn
True
>>> pop_connection() == my_conn
True
Also, you can stack them syntactically:
>>> conn1 = Redis()
>>> conn2 = Redis('example.org', 1234)
>>> with Connection(conn1):
... q = Queue()
... with Connection(conn2):
... q2 = Queue()
... q3 = Queue()
>>> q.connection == conn1
True
>>> q2.connection == conn2
True
>>> q3.connection == conn1
True
Or, if you only require a single connection to Redis (for most uses):
>>> use_connection(Redis())
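A minimal sketch of how such a connection stack can work; the module-level list is a simplification of RQ's actual implementation:

    from contextlib import contextmanager
    from redis import Redis

    _connection_stack = []

    def push_connection(connection):
        _connection_stack.append(connection)

    def pop_connection():
        return _connection_stack.pop()

    def use_connection(connection=None):
        # Collapse the stack to a single connection for simple setups.
        del _connection_stack[:]
        push_connection(connection or Redis())

    def get_current_connection():
        # Objects resolve their connection at creation time via this lookup.
        return _connection_stack[-1] if _connection_stack else None

    @contextmanager
    def Connection(connection=None):
        push_connection(connection or Redis())
        try:
            yield
        finally:
            pop_connection()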
Since the test suite calls `flushdb()` after running each test, we should
make sure the database is empty before we even start running tests.
This patch makes sure we never destroy any local production data
inside the running Redis instance.
This fixes #25.
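A sketch of how such a guard can look; the helper name and database range are assumptions:

    from redis import Redis

    def find_empty_redis_database(max_db=16):
        # Probe database numbers until one without keys is found, so the test
        # suite never flushes a database that holds real data.
        for dbnum in range(4, max_db + 1):
            conn = Redis(db=dbnum)
            if len(conn.keys('*')) == 0:
                return conn
        raise RuntimeError('No empty Redis database found to run tests in.')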
There is no job tuple anymore; instead, Jobs are natively picklable by
themselves. Furthermore, I've added a way to annotate Jobs with
created_at and enqueued_at timestamps, to drive any future Job
performance stats (and to enable requeueing, while keeping hold of the
queue that the Job originated from).
This fixes #17.
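A hedged sketch of what natively picklable Jobs with those timestamps enable; apart from created_at, enqueued_at, and the originating queue, the attributes are illustrative:

    import pickle
    from datetime import datetime, timezone

    class Job:
        def __init__(self, func_name, args=(), origin=None):
            self.func_name = func_name
            self.args = args
            self.origin = origin  # the queue this Job originated from
            self.created_at = datetime.now(timezone.utc)
            self.enqueued_at = None  # set when the Job is pushed onto a queue

    # Plain picklable attributes mean the instance serializes directly,
    # with no intermediate job tuple:
    job = Job('myapp.tasks.send_email', args=('alice@example.com',), origin='default')
    assert pickle.loads(pickle.dumps(job)).func_name == 'myapp.tasks.send_email'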