490 Commits (375ace1747a25bb26c080c5ad5f51da394bb8059)

Author SHA1 Message Date
JackBoreczky 016da14723
Fix custom serializer in job fetches (#1381)
* Ensure that the custom serializer defined is passed into the job fetch calls

* add serializer as argument to fetch_many and dequeue_any methods

* add worker test for custom serializer

* move json serializer to serializers.py
4 years ago
BobReid 75a610bd4d
Fix RQScheduler when run with SSL connection (#1383)
* Quick and dirty set up of SSL

* copy connection kwargs in scheduler

* fix

* chmod the cert

* Skip SSL tests in CI
4 years ago
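For context, a minimal sketch of pointing a worker (and its scheduler) at an SSL-enabled Redis, which is the setup this fix exercises; the `rediss://` URL and CA path are placeholders:

```python
from redis import Redis
from rq import Queue, Worker

# rediss:// tells redis-py to wrap the connection in TLS; host, port and
# CA path below are placeholder values.
conn = Redis.from_url('rediss://my-redis-host:6380/0', ssl_ca_certs='/path/to/ca.pem')

queue = Queue('default', connection=conn)
worker = Worker([queue], connection=conn)
# With this fix, the scheduler copies the same (SSL) connection kwargs.
worker.work(with_scheduler=True)
```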
Selwin Ong 492e77d86d
send_stop_job_command (#1376)
* Added send_stop_job_command().

* send_stop_job_command now accepts just connection and job_id

* Document send_stop_job_command

* Updated test coverage
4 years ago
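A short sketch of the command added above; the job id is a placeholder:

```python
from redis import Redis
from rq.command import send_stop_job_command

redis = Redis()

# Asks the worker currently executing this job to kill its work horse.
send_stop_job_command(redis, job_id='my-job-id')
```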
Selwin Ong f3e924cdd1
Added job.worker_name (#1375)
* Added job.worker_name

* Fix compatibility with Redis server 3.x

* Document job.worker_name

* Removed some Python 2 compatibility stuff.

* Remove unused code
4 years ago
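A hedged example of reading the new property; the job id is a placeholder:

```python
from redis import Redis
from rq.job import Job

redis = Redis()
job = Job.fetch('my-job-id', connection=redis)  # placeholder job id

# worker_name is populated while a worker is executing the job,
# and is None for jobs that have not been picked up yet.
print(job.worker_name)
```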
Selwin Ong d1528d776d
Release scheduler lock when running in burst mode (#1374)
* Fixed an issue where the scheduler lock is not released when running the worker in burst mode

* Remove unused import
4 years ago
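For reference, the burst-mode invocation this fix affects, sketched with a placeholder queue:

```python
from redis import Redis
from rq import Queue, Worker

redis = Redis()
worker = Worker([Queue('default', connection=redis)], connection=redis)

# In burst mode the worker exits once the queues are empty; with this fix
# the scheduler lock it acquired is released on the way out.
worker.work(burst=True, with_scheduler=True)
```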
Ruslan Mullakhmetov ed264f08bb
feat: added job heartbeat to track whether job is actually executing (#1349)
* feat: added job heartbeat to track whether job is actually executing

A heartbeat might be needed in cases where the worker was hard-killed or the whole VM/Docker container was forcibly rebooted.

* fixed tests

* fixed test coverage issue

* chore: renamed job.heartbeat stuff according to review feedback

* chore: pipelined worker heartbeat and job heartbeat

* docs: documented job.heartbeat property

* fixes after review

* docs: updated last_heartbeat description

* chore: review

Co-authored-by: Ruslan Mullakhmetov <ruslan@twentythree.net>
4 years ago
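A small sketch of inspecting the new heartbeat timestamp; the job id is a placeholder:

```python
from redis import Redis
from rq.job import Job

redis = Redis()
job = Job.fetch('my-job-id', connection=redis)  # placeholder job id

# last_heartbeat is refreshed by the worker while the job runs; a stale
# value suggests the work horse (or its VM/container) died mid-execution.
print(job.last_heartbeat)
```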
Selwin Ong a721db34b1
Workers can listen to external commands via pubsub (#1363)
* Added a way to send shutdown command via pubsub

* Added kill-horse command

* Added kill horse command

* Added send_kill_horse_command() and send_shutdown_command()

* Document worker commands
4 years ago
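A hedged sketch of the new worker commands documented above:

```python
from redis import Redis
from rq.command import send_kill_horse_command, send_shutdown_command
from rq.worker import Worker

redis = Redis()

for worker in Worker.all(connection=redis):
    # Ask the worker to kill its current work horse (if any)...
    send_kill_horse_command(redis, worker.name)
    # ...and then to shut itself down gracefully.
    send_shutdown_command(redis, worker.name)
```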
Nikita Romaniuk 2da957a68d
scheduler: now operates with chunks of jobs (#1355)
* scheduler: now operates with chunks of jobs

* scheduler: set default chunk_size for ScheduledJobRegistry.get_jobs_to_schedule

* scheduler: fixed missing indent

* scheduler: added test for get_jobs_to_schedule() with chunk_size parameter

* scheduler: fixed test for passing python 3.5 (no f-strings)

* scheduler: fixed chunk_size in test to make it lighter to run
4 years ago
Ruslan Mullakhmetov 9adcd7e50c
feat: avoided "zombie" processes after killing work horse (#1348)
* feat: avoided "zombie" processes after killing the work horse by setting the work horse's process group and killing this group

* fixed tests

* tests: added test to check that all workhorse subprocesses are killed

* tests: updated GitHub test run dependencies since they are not using (dev-)requirements.txt

Co-authored-by: Ruslan Mullakhmetov <ruslan@twentythree.net>
4 years ago
Vladimir Ulupov 237e69123a
pass retry param to enqueue_at func (#1343) 4 years ago
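A minimal sketch of passing a retry policy to `enqueue_at()`; the datetime and callable are placeholders:

```python
from datetime import datetime, timezone
from redis import Redis
from rq import Queue, Retry

queue = Queue('default', connection=Redis())

# The scheduled job now carries its retry policy with it.
queue.enqueue_at(
    datetime(2030, 1, 1, 9, 0, tzinfo=timezone.utc),  # placeholder time
    print,                                            # placeholder callable
    'hello',
    retry=Retry(max=3, interval=60),
)
```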
Selwin Ong 01d71c8984
Fixes an issue where retried jobs should not be put in FailedJobRegistry (#1336) 4 years ago
Selwin Ong 39fb709c10
get_redis_server_version() should handle 4 digit version numbers (#1322) 4 years ago
nerok 7bf100ebe7
Allow retries to be set through decorator (#1319)
Co-authored-by: Didrik Koren <didrik.koren@uninett.no>
4 years ago
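A short sketch of setting retries on the decorator; the queue name and arguments are placeholders:

```python
from redis import Redis
from rq import Retry
from rq.decorators import job

# The retry policy set here is attached to every job created via .delay().
@job('default', connection=Redis(), retry=Retry(max=2))
def send_report(report_id):
    ...

send_report.delay(42)  # placeholder argument
```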
Ruslan Mullakhmetov c2931b45b6
handled unhandled exceptions in horse (#1303)
* handled unhandled exceptions in horse to prevent a job from being silently dropped without going into FailedJobRegistry

* changes after review

* made sure that work_horse always terminates in a proper way with tests

* minor refactoring

* fix for failing test

* fixes for the other tests

- removed exception handling (done in monitor_work_horse)
- adjusted some tests for the checks that are not relevant anymore

* review suggested changes

* cleanup

Co-authored-by: Ruslan Mullakhmetov <ruslan@twentythree.net>
4 years ago
Selwin Ong 49b156ecc7
Job retry feature. Docs WIP (#1299)
* Initial implementation of Retry class

* Fixes job.refresh() under Python 3.5

* Remove the use of text_type in job.py

* Retry can be scheduled

* monitor_work_horse() should call handle_job_failure() with queue argument.

* Flake8 fixes

* Added docs for job retries
4 years ago
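A minimal sketch of the `Retry` class introduced here; the callable is a placeholder:

```python
from redis import Redis
from rq import Queue, Retry

queue = Queue('default', connection=Redis())

# Retry up to 3 times, waiting 10s, 30s and 60s between attempts.
queue.enqueue(print, 'hello', retry=Retry(max=3, interval=[10, 30, 60]))
```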
wevsty 4e1eb97056
Split kill_horse() to fix issue #1234 (#1300)
* Split kill_horse() to fix issue #1234

Details: see issue #1234

* Removing the catch finally

* rename wait_horse() to wait_for_horse()

* rename wait_horse() to wait_for_horse()

* update test_handle_shutdown_request()

Change test_handle_shutdown_request() exitcode assert

* Restore kill_horse() output

* optimize wait_for_horse()
5 years ago
Paul Spooren 73506b26fc
Add get_job_position and get_position feature (#1271)
Fix #1197

Signed-off-by: Paul Spooren <mail@aparcar.org>
5 years ago
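A hedged sketch of the two position helpers; the enqueued callable is a placeholder:

```python
from redis import Redis
from rq import Queue

queue = Queue('default', connection=Redis())
job = queue.enqueue(print, 'hello')  # placeholder job

print(queue.get_job_position(job))  # position of the job within its queue
print(job.get_position())           # same information, from the job's side
```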
Selwin Ong 1d8ea8e7a3
Worker key TTLs are set to be a bit longer to account for system hiccups (#1279)
* Worker key TTLs are set to be a bit longer to account for system hiccups

* Fix test_work_horse_force_death
5 years ago
Evan Ackmann bd0fcc1a07
Took into account DST when computing localtime zones. This wasn't accounted for when using non-UTC datetimes with queue.enqueue_at() (#1258)
* Took into account DST when computing localtime zones.  This wasn't accounted for when using non-UTC datetimes with queue.enqueue_at()

* Updates tests with mocked timezones to test both scenarios
5 years ago
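For illustration, a sketch of scheduling with a non-UTC, DST-observing timezone; `zoneinfo` (Python 3.9+) and the zone name are assumptions here, `pytz` works similarly:

```python
from datetime import datetime
from zoneinfo import ZoneInfo
from redis import Redis
from rq import Queue

queue = Queue('default', connection=Redis())

# A timezone-aware, non-UTC datetime; with this fix the DST offset is
# respected when it is converted to the scheduler's internal UTC timestamp.
run_at = datetime(2030, 7, 1, 9, 0, tzinfo=ZoneInfo('America/Chicago'))
queue.enqueue_at(run_at, print, 'hello')  # placeholder callable
```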
Selwin Ong 21bf5890c0 Merge remote-tracking branch 'origin/master' into multi-dependencies 5 years ago
Bo Bayles f0846a7645
Use pickle.HIGHEST_PROTOCOL by default (#1254) 5 years ago
thomas 0b528dae4b Update Job#dependencies_are_met ...
... such that it fetches all dependency statuses using SMEMBERS and HGET rather than SORT.
5 years ago
thomas 01ebe25f56 Address Deleted Dependencies
1) Check `created_at` when checking if dependencies are met.

   If `created_at` is `None` then the job has been deleted. This is sort of a hack - we just need one of the fields on the job's hash that is ALWAYS populated. You can persist a job to redis without setting status...

2) Job#fetch_dependencies no longer raises NoSuchJob.

   If one of a job's dependencies has been deleted from Redis, it is not returned from `fetch_dependencies` and no exception is raised.
5 years ago
thomas 9f15df2d55 rename dependencies_finished to dependencies_are_met 5 years ago
Thomas Matecki d5921814e4 Change get_dependency_statuses to dependencies_finished
Convert method on Job to return a boolean and rename. Also use
fetch_many in Queue#enqueue_dependents.
5 years ago
Thomas Matecki a69d91d2b2 Do not watch dependency key set 5 years ago
thomas 4440669f3c Fix patches for python2 5 years ago
thomas 7ea5a32a55 Always set status 'FINISHED' when job is successful
Method Queue#enqueue_dependents checks the status of all dependencies of all dependents, and enqueues those dependents for which all dependencies are FINISHED.

The enqueue_dependents method WAS called from Worker#handle_job_success BEFORE the status of the successful job was set in Redis, so enqueue_dependents explicitly excluded the _successful_ job from interrogation of dependency statuses, as it would never be true in the existing code path, but it was assumed that this would be the final status after the current pipeline was executed.

This commit changes Worker#handle_job_success so that it persists the status of the successful job to Redis every time a job completes (not only if it has a TTL), and does so before enqueue_dependents is called. This allows enqueue_dependents to be less reliant on the out-of-band state of the current _successful job being handled_.
5 years ago
thomas 4a64244f40 Only enqueue dependents for which all dependencies are FINISHED 5 years ago
Thomas Matecki ee215a1853 Create get_dependencies_statuses method on Job
This method shall be used in Queue#enqueue_dependents to determine if all of a dependent's dependencies have been _FINISHED_.
5 years ago
Babatunde Olusola e1cbc3736c
Implement Customizable Serializer Support (#1219)
* Implement Customizable Serializer Support

* Refactor serializer instance methods

* Update tests with other serializers

* Edit function description

* Edit function description

* Raise appropriate exception

* Update tests for better code coverage

* Remove unused imports and unnecessary code

* Refactor resolve_serializer

* Remove unnecessary alias from imports

* Add documentation

* Refactor tests, improve documentation
5 years ago
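A minimal sketch of the serializer hook introduced here; `JSONSerializer` (which later landed in `rq.serializers`) stands in for any custom serializer with `dumps`/`loads` methods:

```python
from redis import Redis
from rq import Queue
from rq.serializers import JSONSerializer

# Jobs enqueued on this queue are (de)serialized with JSON instead of
# pickle; workers consuming the queue must use the same serializer.
queue = Queue('default', connection=Redis(), serializer=JSONSerializer)
queue.enqueue(print, 'hello')  # placeholder callable
```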
Selwin Ong cfe389bd65
FailedJobRegistry.requeue() resets job.started_at and job.ended_at (#1227) 5 years ago
Selwin Ong 636d6d2f54
registry.cleanup() now writes information to job.exc_info (#1226) 5 years ago
Samuel Colvin 4036471203
fixing HerokuWorkerShutdownTestCase after #1194 (#1213) 5 years ago
Selwin Ong d8bd455c12
enqueue_at should support explicit args and kwargs (#1211) 5 years ago
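A short sketch of the explicit `args`/`kwargs` form; the datetime and callable are placeholders:

```python
from datetime import datetime, timezone
from redis import Redis
from rq import Queue

queue = Queue('default', connection=Redis())

# args/kwargs are passed explicitly, which avoids clashes between the
# target function's parameters and enqueue_at()'s own keyword arguments.
queue.enqueue_at(
    datetime(2030, 1, 1, tzinfo=timezone.utc),
    print,
    args=('hello',),
    kwargs={'flush': True},
)
```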
Selwin Ong fda4b35f46
Fixes Job.fetch when return value is unpickleable (#1184)
* Fixes Job.fetch when return value is unpickleable

* Fixed connection test in newer versions of Redis
5 years ago
Ivan Kiryanov ed67de22c6 Add job status setting in enqueue_at (and in enqueue_in) methods (#1181)
* Add job status setting in enqueue_at (and in enqueue_in) methods

Update tests for this change
Closes: #1179

* Add status param to create_job func, rework enqueue_at status setting
5 years ago
Selwin Ong ccfd4a02cb
Failed jobs will now auto expire (#1182) 5 years ago
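A hedged example of bounding how long a failed job lingers; the TTL value is arbitrary:

```python
from redis import Redis
from rq import Queue

queue = Queue('default', connection=Redis())

# If this job fails, its entry in FailedJobRegistry expires after
# one hour instead of staying around forever.
queue.enqueue(print, 'hello', failure_ttl=3600)
```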
mr-trouble 5f949f4cef Add a hard kill from the parent process with a 10% increased timeout in case the forked process gets stuck and cannot stop itself (#1169)
* Add a hard kill from the parent process with a 10% increased timeout in case the forked process gets stuck and cannot stop itself.

* Added test for the force kill of the parent process.

* Changed 10% to +1 second, and other misc changes based on review comments.
5 years ago
Selwin Ong baa0cc268a
Job scheduling (#1163)
* First RQScheduler prototype

* WIP job scheduling

* Fixed Python 2.7 tests

* Added ScheduledJobRegistry.get_scheduled_time(job)

* WIP on scheduler's threading mechanism

* Fixed test errors

* Changed scheduler.acquire_locks() to instance method

* Added scheduler.prepare_registries()

* Somewhat working implementation of RQ scheduler

* Only call stop_scheduler if there's a scheduler present

* Use OSError rather than ProcessLookupError for PyPy compatibility

* Added `auto_start` argument to scheduler.acquire_locks()

* Make RQScheduler play better with timezone

* Fixed test error

* Added --with-scheduler flag to rq worker CLI

* Fix tests on Python 2.x

* More Python 2 fixes

* Only call `scheduler.start` if worker is run in non-burst mode

* Fixed an issue where running worker with scheduler would fail sometimes

* Make `worker.stop_scheduler()` more resilient to errors

* worker.dequeue_job_and_maintain_ttl() should also periodically run maintenance tasks

* Scheduler can now work with worker in both burst and non burst mode

* Fixed scheduler logging message

* Always log scheduler errors when running

* Improve scheduler error logging message

* Removed testing code

* Scheduler should periodically try to acquire locks for other queues it doesn't have

* Added tests for scheduler.should_reacquire_locks

* Added queue.enqueue_in()

* Fixes queue.enqueue_in() in Python 2.7

* First stab at documenting job scheduling

* Remove unused methods

* Remove Python 2.6 logging compatibility code

* Remove more unused imports

* Added convenience methods to access job registries from queue

* Added test for worker.run_maintenance_tasks()

* Simplify worker.queue_names() and worker.queue_keys()

* Updated changelog to mention RQ's new job scheduling mechanism.
5 years ago
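A minimal sketch of the scheduling API introduced in this PR; the delay and callable are placeholders:

```python
from datetime import timedelta
from redis import Redis
from rq import Queue

queue = Queue('default', connection=Redis())

# Run roughly 10 minutes from now; the job sits in ScheduledJobRegistry
# until a scheduler-enabled worker moves it onto the queue.
queue.enqueue_in(timedelta(minutes=10), print, 'hello')
```

The scheduled job is picked up by a worker started with the new `rq worker --with-scheduler` flag.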
Thomas Matecki 80c82f731f Multi Dependency Support - Registration & Enqueue Call (#1155)
* Multi Dependency Support - Registration & Enqueue Call

Internal API changes to support multiple dependencies.
* Store all of a job's _dependencies_ in a redis set. Delete that set when a job is deleted.
* Add Job#fetch_dependencies method - which returns all jobs a job is dependent upon and optionally _WATCHES_ all dependency ids.
* Use Job#fetch_dependencies in Queue#enqueue_call. `fetch_dependencies` now sets WATCH and raises InvalidJobDependency, rather than enqueue_call.

`Queue` and `Job` public APIs still expect single ids of jobs for `depends_on` but internally register them in a way that could support multiple jobs being passed as dependencies.

Next up: need to update Queue#enqueue_dependents

* Use existing fetch_many method to get dependencies.

Modify fetch_dependencies to use fetch_many.

* Remove default value for fetch_many's connection parameter

* PR review housekeeping

* Remove a duplicate test
* Oneline something
* Fix missing colon in dependencies key
* Delete job key, dependents and dependencies at once

* More Fixes From Code Review

Updates to Job, Queue and associated tests.

* When checking dependencies, avoid a trip to Redis

* When checking the status of a job, we have a 'clean' status of all dependencies (returned from Job#fetch_dependencies) and the job keys are WATCHed, so there's no reason to go back to Redis to get the status _again_.
* Looks as though the `_status` set in `Job#restore` was bytes while it was converted to text (`as_text`) in `Job#get_status`; for consistency (and tests), convert to text in `restore` as well.
* In `Queue#enqueue_call`, moved WATCH of dependencies_key to before fetching dependencies. This doesn't really matter but seems more _correct_ - one can imagine some rogue API adding a dependency after they've been fetched but before they've been WATCHed.

* Update Job#get_status to get _local_ status

* If refresh=False is passed, don't get status from Redis; return the value of _status. This is to avoid a trip to Redis if the caller can guarantee that the value of `_status` is _clean_.

* More Fixups

* Expire dependency keys in Job#cleanup
* Consistency in Job#fetch_dependencies
5 years ago
Selwin Ong af678243e1
Added `delete_job` argument to registry.remove()` (#1161) 5 years ago
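A hedged sketch of the new argument; iterating FailedJobRegistry here is just one possible use:

```python
from redis import Redis
from rq import Queue
from rq.job import Job
from rq.registry import FailedJobRegistry

redis = Redis()
queue = Queue('default', connection=redis)
registry = FailedJobRegistry(queue=queue)

for job_id in registry.get_job_ids():
    job = Job.fetch(job_id, connection=redis)
    # delete_job=True removes the job's Redis hash as well,
    # not just its registry entry.
    registry.remove(job, delete_job=True)
```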
Yongtao Zhang 5bb03b9c2c Fix `rq info` not finding worker information, Fixes #1139 (#1149)
* Fix `rq info` not finding worker information, Fixes #1139

* Add test(#1149)
5 years ago
Thomas Matecki 75644ba948 Multi Dependency Support [Internal API Changes] (#1147)
* Convert `_dependency_id` to `_dependency_ids`

Change `Job`'s tracking of its dependencies from a single _id_ to a list of _id_s. This change should be private to `Job` - especially leaving `Job#to_dict`'s and `Job#restore`'s treatment of a single 'dependency_id' intact.

This change modifies existing tests.

* Remove reliance upon dependency property in tests

... use dependency.id, not `_dependency_id`

* Re-add assertions for Falsey Values

* Add _dependency_id property

For backwards compatibility with other libs such as django-rq and rq-scheduler
5 years ago
Vladimir Protasov 8c34e2b353 Store worker's RQ and Python versions (#1125)
* Store worker version to Redis

* Store worker's Python version to Redis

* Store worker version in __init__ body as suggested in review
5 years ago
Vladimir Protasov b62b9b0727 Fix unreliable test (#1126)
Also make error message more useful in case of future failures.
5 years ago
Bartłomiej Biernacki 51efc20371 Add failure_ttl on job decorator (#1130)
Without it you cannot specify a custom TTL for failed jobs created using the decorator
5 years ago
Selwin Ong d1813cdff9 Fixed test errors caused by _sentry_trace_headers 6 years ago
Selwin Ong f9d42e8a17
Added logging statements to handle_job_success and handle_job_failure (#1112) 6 years ago
Selwin Ong b14c4e288d
Added checks for 0 ttl (#1110) 6 years ago
Selwin Ong 549648bd1b
rq info management command now cleans up registries when first run (#1107)
* rq info management command now cleans up registries when first run

* Deleted print statement

* Improve CLI test coverage

* Fixed CLI test on Linux
6 years ago
Paul Robertson e1c135d4de add the ability to have the worker stop executing after a max number of jobs (#1094)
* add the ability to have the worker stop executing after a max number of jobs

* rename to max-jobs

* updated logging messages
6 years ago
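A short sketch of the max-jobs limit; the CLI equivalent is `rq worker --max-jobs 100`:

```python
from redis import Redis
from rq import Queue, Worker

redis = Redis()
worker = Worker([Queue('default', connection=redis)], connection=redis)

# Process at most 100 jobs, then exit.
worker.work(max_jobs=100)
```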
Ted Summer 79a6fd7999 Fix timeout adding job to StartedJobRegistry (#1086)
* Fix timeout adding job to StartedJobRegistry

* Fix prepare_job_execution handling neg timeout

* Add test for inf job timeout in StartedJobRegistry

* refactor(worker): simplify checking neg timeout
6 years ago
Selwin Ong 7021cedaf9
Implemented Job.fetch_many (#1072) 6 years ago
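A minimal sketch of `Job.fetch_many()`; the job ids are placeholders:

```python
from redis import Redis
from rq.job import Job

redis = Redis()

# Fetches all job hashes in one round trip; ids that don't exist come
# back as None in the returned list.
jobs = Job.fetch_many(['job-id-1', 'job-id-2'], connection=redis)
```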
Selwin Ong c4cbb3af2f
RQ v1.0! (#1059)
* Added FailedJobRegistry.

* Added job.failure_ttl.

* queue.enqueue() now supports failure_ttl

* Added registry.get_queue().

* FailedJobRegistry.add() now assigns DEFAULT_FAILURE_TTL.

* StartedJobRegistry.cleanup() now moves expired jobs to FailedJobRegistry.

* Failed jobs are now added to FailedJobRegistry.

* Added FailedJobRegistry.requeue()

* Document the new `FailedJobRegistry` and changes in custom exception handler behavior.

* Added worker.disable_default_exception_handler.

* Document --disable-default-exception-handler option.

* Deleted worker.failed_queue.

* Deleted "move_to_failed_queue" exception handler.

* StartedJobRegistry should no longer move jobs to FailedQueue.

* Deleted requeue_job

* Fixed test error.

* Make requeue cli command work with FailedJobRegistry

* Added .pytest_cache to gitignore.

* Custom exception handlers are no longer run in reverse

* Restored requeue_job function

* Removed get_failed_queue

* Deleted FailedQueue

* Updated changelog.

* Document `failure_ttl`

* Updated docs.

* Remove job.status

* Fixed typo in test_registry.py

* Replaced _pipeline() with pipeline()

* FailedJobRegistry no longer fails on redis-py>=3

* Fixes test_clean_registries

* Worker names are now randomized

* Added a note about random worker names in CHANGES.md

* Worker will now stop working when encountering an unhandled exception.

* Worker should reraise SystemExit on cold shutdowns

* Added anchor.js to docs

* Support for Sentry-SDK (#1045)

* Updated RQ to support sentry-sdk

* Document Sentry integration

* Install sentry-sdk before running tests

* Improved rq info CLI command to be more efficient when displaying large number of workers (#1046)

* Improved rq info CLI command to be more efficient when displaying large number of workers

* Fixed an rq info --by-queue bug

* Fixed worker.total_working_time bug (#1047)

* queue.enqueue() no longer accepts `timeout` argument (#1055)

* Clean worker registry (#1056)

* queue.enqueue() no longer accepts `timeout` argument

* Added clean_worker_registry()

* Show worker hostname and PID on cli (#1058)

* Show worker hostname and PID on cli

* Improve test coverage

* Remove Redis version check when SSL is used

* Bump version to 1.0

* Removed pytest_cache/README.md

* Changed worker logging to use exc_info=True

* Removed unused queue.dequeue()

* Fixed typo in CHANGES.md

* setup_loghandlers() should always call logger.setLevel() if specified
6 years ago
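A hedged sketch of the FailedJobRegistry workflow that replaces the old FailedQueue in 1.0; the queue name is a placeholder:

```python
from redis import Redis
from rq import Queue
from rq.registry import FailedJobRegistry

queue = Queue('default', connection=Redis())
registry = FailedJobRegistry(queue=queue)

# Failed jobs now live in a per-queue registry; requeue() puts a job
# back on its original queue.
for job_id in registry.get_job_ids():
    registry.requeue(job_id)
```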
Wolfgang Langner abf6881114 Fix #1040 queue default timeout bug. (#1042)
Add test for queue default_timeout.
6 years ago
Wolfgang Langner 8fc987dc68 Make logging in worker consistent. (#1030)
Switch some messages from warn to info because it is normal, requested behavior.
6 years ago
Chyroc 7eb95bf405 refactor: use try ImportError instead of py-version check (#1034) 6 years ago
Finnci 14db0ecd26 Update/add flag for description logging (#991)
* test workers

* indent

* add docs and add option to the cli

* rename flag for cli

* logging
6 years ago
Samuel Colvin 2f35222ddb skip test_1_sec_shutdown with pypy (#1020)
* skip test_1_sec_shutdown with pypy, fix #1019

* skip all HerokuWorkerShutdownTestCase with pypy
6 years ago
Darshan Rai ada2ad03ca modify zadd calls for redis-py 3.0 (#1016)
* modify zadd calls for redis-py 3.0

redis-py 3.0 changes the zadd interface to accept a single
mapping argument that is expected to be a dict.
https://github.com/andymccurdy/redis-py#mset-msetnx-and-zadd

* change FailedQueue.push_job_id to always push a str

redis-py 3.0 does not attempt to cast values to str; that is
left to the user.

* remove Redis connection patching

Since in redis-py 3.0, Redis == StrictRedis class, we no longer
need to patch _zadd and other methods.
Ref: https://github.com/rq/rq/pull/1016#issuecomment-441010847
6 years ago
Selwin Ong 6559b0ffd7
Replace "timeout" argument in queue.enqueue() with "job_timeout" (#1010) 6 years ago
Selwin Ong ad66d872f0 Fixed a unicode test. 6 years ago
Selwin Ong 47d291771f
SimpleWorker's ttl must always be longer than jobs. (#1002) 6 years ago
Paul Robertson e86fb57366 add is_async property to queue (#982) 6 years ago
chevell c2b939d2df Replace 'async' keyword with 'is_async' for Queue objects (#977)
* Replaced async keyword with is_async in the Queue class to fix reserved keyword syntax errors in Python 3.7

* Updated tests to use is_async keyword when instantiating Queue objects

* Updated docs to reference is_async keyword for Queue objects

* Updated tox.ini, setup.py and .travis.yml with references to Python 3.7
7 years ago
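A short sketch of the renamed keyword; running synchronously like this is mostly useful in tests:

```python
from redis import Redis
from rq import Queue

# is_async=False makes enqueue() run the job immediately in the current
# process instead of handing it to a worker.
queue = Queue(is_async=False, connection=Redis())
job = queue.enqueue(print, 'hello')  # placeholder callable
assert job.is_finished  # the job already ran in this process
```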
Theofanis Despoudis 875cc27c2f #908 Using a timeout string value for job works (#955)
Fixes https://github.com/rq/rq/issues/908
7 years ago
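A minimal sketch of a string timeout value; note that in later releases the argument is called `job_timeout` (see #1010 above):

```python
from redis import Redis
from rq import Queue

queue = Queue('default', connection=Redis())

# Timeout strings such as '1h', '3m' or '25s' are parsed into seconds.
queue.enqueue(print, 'hello', job_timeout='1h')
```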
Theofanis Despoudis d6b12c2402 Issue 872 (#954)
* Fixes #872 - Use -1 to indicate infinite ttl

* Fixes #872 Restored comma

* #872 Code review fix
7 years ago
Selwin Ong 531fde8e3c worker.main_work_horse should always return 0 7 years ago
Thomas Kriechbaumer 3133d94b58 add periodic worker heartbeats (#945)
* add periodic worker heartbeats

fixes #944

* improve worker default option handling
7 years ago
Selwin Ong 63a04d275e Use dbsize() to test for empty Redis database 7 years ago
Selwin Ong c639018fb9 Registry objects can be instantiated by passing a queue object. 7 years ago
stj 487ef72f21 Define redis key prefix as class variable (#939)
* Define redis key prefix as class variable

Some prefixes were hardcoded in several places. This made it hard to
use custom prefixes via subclasses.

Resolves #920

* fixup! Define redis key prefix as class variable
7 years ago
Christophe Olinger a6eb5d37ee Delete dependents of job explicitly (#916)
* Initial take on delete_dependents

* Add tests including corner cases

* No need to cancel dependents since they are not in a queue yet anyway

* The dependents keys can be deleted in all cases

* Update tests to include saved jobs in the deletion tests

* Correctly use pipeline in cancel method

* Unused connection

* Include dependents into dict format of job

* Add TODO

* Address comments from selwin

* Delete dependents key in redis if delete_dependents is called on its own

* Address recent comments from selwin

* Small change to trigger travis

* Remove TODO referring to canceled job state

* Remove dependent_ids from to_dict

* Address recent comments from selwin
7 years ago
Selwin Ong 7a3c85f185
Added the ability to fetch workers by queue (#911)
* job.exc_info is now compressed.

* job.data is now stored in compressed format.

* Added worker_registration.unregister.

* Added worker_registration.get_keys().

* Modified Worker.all(), Worker.all_keys() and Worker.count() to accept "connection" and "queue" arguments.
7 years ago
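A hedged sketch of filtering workers by queue with the new arguments:

```python
from redis import Redis
from rq import Queue, Worker

redis = Redis()
queue = Queue('default', connection=redis)

# Only workers that listen to this queue are returned/counted.
workers = Worker.all(queue=queue)
print(Worker.count(queue=queue))
```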
John Lucas 34c403ec8d Add meta to decorator, move depends_on + at_front to decorator (#892) 7 years ago
Samuel Colvin df571e14fd improve logging in worker.py (#902)
* improve logging in worker

* tests for log_result_lifespan
7 years ago
Selwin Ong f500186f3d
Job compression (#907)
job.exc_info and job.data are now stored in compressed format in Redis.

* job.data is now stored in compressed format.
7 years ago
vanife ff36e0656e Fixed an issue where `birth` not present in Redis (#901)
* Fixed an issue where `birth` not present in Redis

Fixed an issue where worker.refresh() may fail if `birth` is not present in Redis

* added test coverage
7 years ago
Selwin Ong 7b9c3b6b66 Fixed an issue where worker.refresh() may fail if last_heartbeat is not present in Redis. 7 years ago
Selwin Ong 1d7b5e834b
Worker statistics (#897)
* First stab at implementing worker statistics.

* Moved worker data restoration logic to worker.refresh().

* Failed and successful job counts are now properly incremented.

* Worker now keeps track of total_working_time

* Ensure job.ended_at is set in the case of unhandled job failure.

* handle_job_failure shouldn't crash if job.started_at is not present.
7 years ago
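A small sketch of reading the new per-worker counters:

```python
from redis import Redis
from rq import Worker

redis = Redis()

for worker in Worker.all(connection=redis):
    # Counters are persisted in the worker's Redis hash.
    print(worker.name, worker.successful_job_count,
          worker.failed_job_count, worker.total_working_time)
```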
Selwin Ong 92c88d3f4d Merge pull request #878 from theodesp/Issue-731
Fixed #731 - Support for deleting Queues
7 years ago
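A minimal sketch of the new queue deletion API; the queue name is a placeholder:

```python
from redis import Redis
from rq import Queue

queue = Queue('reports', connection=Redis())  # placeholder queue name

# delete_jobs=True also removes every job hash still sitting on the
# queue, using a pipeline under the hood.
queue.delete(delete_jobs=True)
```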
Theo c095fe1825 Fixed #731 - Code review issues. Added delete_jobs parameter and pipelining. 7 years ago
Samuel Colvin 260fd84f51 add milliseconds into timestamps, fix #721 7 years ago
Selwin Ong 19bc288378 Merge pull request #877 from theodesp/Issue-809
Fixed #809 - Added tests for various cli config parameters
7 years ago
Theo 160fe99323 Fixed #731 - Support for deleting Queues 7 years ago
Theo 0fab93d683 Fixed #809 - Added tests for various cli config parameters 7 years ago
Theo 261f4ac3d5 Fixed #866 - Flake8 errors 7 years ago
Theo 096c5ad3c2 Fixed #866 - Flake8 errors 7 years ago
Samuel Colvin 423da3683c remove python 2.6 support 7 years ago
Theo ee64114e6e Fixed #870 Improved test coverage for connections.py and utils.py 7 years ago
Selwin Ong 0efb87a46b Fixed test error in Python 3. 8 years ago
Selwin Ong 54bc04bb45 job.save() shouldn't crash on unpickleable return value. 8 years ago
Alexey Katichev 09697e567f revert back job.cleanup changes 8 years ago
Alexey Katichev 3596449cc0 remove implicit cleanup call from job.save 8 years ago
Alexey Katichev a0113c83cf introduce job.update_meta() to store updated meta to Redis (#823)
* introduce job.update_meta() to store updated meta to Redis

This closes nvie/rq#811

* rename update_meta to save_meta
8 years ago
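A short sketch of the renamed helper; the meta contents are placeholders:

```python
from redis import Redis
from rq import Queue

queue = Queue('default', connection=Redis())
job = queue.enqueue(print, 'hello')  # placeholder job

job.meta['progress'] = 0.5  # arbitrary serializable data
job.save_meta()             # persists only the meta field to Redis
```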
Selwin Ong dc45ab8799 Worker.find_by_key should use hmget instead of repeated hget calls. (#826) 8 years ago
luojiebin cd529d0ce1 Fixed issue#72 (#818)
* Added a custom exception for timeout transfer

* Added a util to transfer timeout to a unified format

* Transfer timeout format when creating a queue or enqueuing jobs

* Fixed typos

* Fixed bug in transfer_timeout function

* Added test for function transfer_timeout

* Updated transfer_timeout to allow uppercase unit

* Renamed function in utils
8 years ago
Peng Liu b7d4b4ec1b Solve the UnicodeDecodeError while decoding literal values. (#817)
* Solve the UnicodeDecodeError while decoding literal values.

* Add test case for when worker result is a unicode or str object containing non-ASCII content.
8 years ago
Felipe Lacerda cab89254b5 Make `Queue.enqueue_job()` execute immediately if `async=False` (#798)
Currently, the job is being performed inside `enqueue_call()`, which
means that `async=False` has no effect if `enqueue_job()` is called
directly. This commit fixes that.
8 years ago