* WIP job results
* Result can now be saved
* Successfully saved and restored result
* result.save() should accept pipeline
* Successful results are saved
* Failures are now saved properly too.
* Added test for Result.get_latest()
* Checkpoint
* Got Result.all() to work
* Added Result.count(), Result.delete()
* Backward compatibility for job.result and job.exc_info
* Added some typing
* More typing stuff
* Fixed typing in job.py
* More typing updates
* Only keep the last 10 results
* Documented job.results()
* Got results test to pass
* Don't run test_results.py on Redis server < 5.0
* Fixed mock import on some Python versions
* Remove Redis 3 from test matrix
* Jobs should never use the new Result implementation if server is < 5.0
* Results should only be created if Redis streams are supported.
* Added back Redis 3 to test matrix
* Fixed job.supports_redis_streams
* Fixed worker test
* Updated docs.
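An illustrative sketch of the Result API these commits build up; the method names follow the bullets above (`job.results()`, `Result.get_latest()`, `Result.count()`), but the exact signatures are assumptions, not verbatim RQ code:

```python
from redis import Redis
from rq import Queue
from rq.results import Result

queue = Queue(connection=Redis())
job = queue.enqueue(sum, [1, 2, 3])

# Once a worker has processed the job (only the last 10 results are kept):
latest = Result.get_latest(job)   # most recent Result, or None
history = job.results()           # all retained results for this job
print(Result.count(job), 'results stored')
if latest is not None:
    print(latest.return_value)    # job.result stays as a compatible alias
```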
* Support enqueueing with on_success_callback
* Got success callback to execute properly
* Added on_failure callback
* Document success and failure callbacks
* More Flake8 fixes
* Mention that options can also be passed in via environment variables
* Increase coverage on test_callbacks.py
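A minimal sketch of the callbacks from the commits above, using the `on_success`/`on_failure` keyword arguments to `enqueue()` and the callback signatures RQ documents:

```python
from redis import Redis
from rq import Queue

def report_success(job, connection, result, *args, **kwargs):
    # Runs on the worker after the job returns successfully.
    print('job %s returned %r' % (job.id, result))

def report_failure(job, connection, type, value, traceback):
    # Runs on the worker when the job raises an exception.
    print('job %s failed: %s' % (job.id, value))

queue = Queue(connection=Redis())
queue.enqueue(sum, [1, 2, 3],
              on_success=report_success,
              on_failure=report_failure)
```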
* modify zadd calls for redis-py 3.0
redis-py 3.0 changes the zadd interface: it now accepts a single
mapping argument that is expected to be a dict (see the sketch below).
https://github.com/andymccurdy/redis-py#mset-msetnx-and-zadd
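A before/after sketch of the call-site change (the key and score here are illustrative):

```python
from redis import Redis

conn = Redis()
job_id = 'some-job-id'

# redis-py 2.x accepted flat score/member pairs:
#   conn.zadd('rq:scheduled', 1609459200, job_id)

# redis-py 3.0 takes a single {member: score} mapping instead:
conn.zadd('rq:scheduled', {job_id: 1609459200})
```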
* change FailedQueue.push_job_id to always push a str
redis-py 3.0 no longer attempts to cast values to str; that is now
left to the user (see the sketch below).
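A minimal sketch of the fix, assuming push_job_id boils down to an RPUSH onto the queue key (the names here are illustrative):

```python
def push_job_id(connection, queue_key, job_id):
    # redis-py 3.0 raises DataError for values it cannot encode, so
    # cast explicitly instead of relying on implicit str coercion.
    connection.rpush(queue_key, str(job_id))
```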
* remove Redis connection patching
Since in redis-py 3.0 the Redis class is identical to StrictRedis, we
no longer need to patch _zadd and other methods.
Ref: https://github.com/rq/rq/pull/1016#issuecomment-441010847
This patches the connection object (which is either a StrictRedis
instance or a Redis instance), to have alternative class methods that
behave exactly like their StrictRedis counterparts, no matter which
type the object is. Only the ambiguous methods are patched. The
exhaustive list:
- _zadd (fixes argument order)
- _lrem (fixes argument order)
- _setex (fixes argument order)
- _pipeline (always returns a StrictPipeline)
- _ttl (fixes return value)
- _pttl (fixes return value)
This makes it possible to call the methods reliably without polluting
the RQ code any further.
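A sketch of that approach for _zadd alone, assuming the redis-py 2.x classes, where the legacy Redis subclass flipped zadd's member/score order relative to StrictRedis:

```python
from redis import Redis  # redis-py 2.x: Redis is a subclass of StrictRedis

def patch_connection(connection):
    # Attach a StrictRedis-shaped _zadd regardless of the concrete class.
    if isinstance(connection, Redis):
        def _zadd(name, score, value):
            # The legacy Redis class expects (name, member, score);
            # flip back so callers can rely on the StrictRedis order.
            return connection.zadd(name, value, score)
        connection._zadd = _zadd
    else:
        connection._zadd = connection.zadd
    # _lrem, _setex, _pipeline, _ttl and _pttl get similar wrappers.
    return connection
```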
This reverts commit 1ab8c19696 and
reintroduces all changes made by @dstufft.
Still, it needs more patches to reenable the default log-to-console
behaviour. See #121.
Connections can now be set explicitly on Queues, Workers, and Jobs.
Jobs that are implicitly created by Queue or Worker API calls now
inherit their creator's connection.
For all RQ object instances created from now on, the "current"
connection is used if none is passed in explicitly. The "current"
connection is thus captured at creation time and won't change for the
lifetime of the object.
Effectively, this means that, given a default Redis connection, if you
create a queue Q1, then push another Redis connection onto the
connection stack and create Q2, Q1 operates on the first connection
and Q2 on the second.
This is way more clear than it used to be.
Also, I've removed the `use_redis()` call, which was named ugly.
Instead, some new alternatives for connection management now exist.
You can push/pop connections now:
>>> my_conn = Redis()
>>> push_connection(my_conn)
>>> q = Queue()
>>> q.connection == my_conn
True
>>> pop_connection() == my_conn
True
Also, you can stack them syntactically:
>>> conn1 = Redis()
>>> conn2 = Redis('example.org', 1234)
>>> with Connection(conn1):
... q = Queue()
... with Connection(conn2):
... q2 = Queue()
... q3 = Queue()
>>> q.connection == conn1
True
>>> q2.connection == conn2
True
>>> q3.connection == conn1
True
Or, if you only require a single connection to Redis (for most uses):
>>> use_connection(Redis())
Since the test suite calls `flushdb()` after running each test, we
should make sure the database is empty before we even start running
tests.
This patch will make sure to never destroy any local production data
inside the running Redis instance.
This fixes #25.
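A minimal sketch of such a safety check, assuming the suite picks its own database; the helper name and db range are illustrative, not RQ's actual test code:

```python
from redis import Redis

def find_safe_test_database(max_db=16):
    # Probe numbered databases for one that is already empty, so the
    # flushdb() in test teardown can never destroy real data.
    for dbnum in range(4, max_db):
        conn = Redis(db=dbnum)
        if conn.dbsize() == 0:
            return conn
    raise RuntimeError('No empty Redis database found for the test suite.')
```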
There is no job tuple anymore; instead, Jobs are natively picklable
by themselves. Furthermore, I've added a way to annotate Jobs
with created_at and enqueued_at timestamps, to drive any future Job
performance stats. (And to enable requeueing, while keeping hold of the
queue that the Job originated from.)
This fixes #17.
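A minimal sketch of the idea, not RQ's actual Job class; everything beyond the created_at/enqueued_at annotations is illustrative:

```python
import pickle
from datetime import datetime, timezone

class Job:
    # The job carries its own callable and metadata and pickles natively,
    # instead of being stored as a bare (func, args) tuple.
    def __init__(self, func, args=None, kwargs=None):
        self.func = func
        self.args = args or ()
        self.kwargs = kwargs or {}
        self.created_at = datetime.now(timezone.utc)
        self.enqueued_at = None   # stamped when the job is enqueued
        self.origin = None        # originating queue name, for requeueing

# Round-trips through pickle without any wrapping tuple:
job = pickle.loads(pickle.dumps(Job(sum, ([1, 2, 3],))))
```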