

"""
This file contains all jobs that are used in tests. Each of these test
fixtures has a slightly different characteristics.
"""
import contextlib
import os
import signal
import subprocess
import sys
import time
from multiprocessing import Process

from redis import Redis

from rq import Connection, Queue, get_current_connection, get_current_job
from rq.command import send_kill_horse_command, send_shutdown_command
from rq.decorators import job
from rq.job import Job
from rq.worker import HerokuWorker, Worker


def say_pid():
    return os.getpid()


def say_hello(name=None):
    """A job with a single argument and a return value."""
    if name is None:
        name = 'Stranger'
    return 'Hi there, %s!' % (name,)
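
# Illustrative usage, an assumption about typical test code rather than part
# of this file: a test enqueues the fixture and asserts on its return value
# once a worker has processed it. `queue` is an assumed pre-built Queue.
#
#     job = queue.enqueue(say_hello, 'World')
#     # after a burst worker runs, the job's return value is 'Hi there, World!'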


async def say_hello_async(name=None):
    """An async job with a single argument and a return value."""
    return say_hello(name)


def say_hello_unicode(name=None):
    """A job with a single argument and a return value."""
    return str(say_hello(name))  # noqa


def do_nothing():
    """The best job in the world."""
    pass


def raise_exc():
    raise Exception('raise_exc error')


def raise_exc_mock():
    return raise_exc


def div_by_zero(x):
    """Prepare for a division-by-zero exception."""
    return x / 0


def long_process():
    time.sleep(60)
    return


def some_calculation(x, y, z=1):
    """Some arbitrary calculation with three numbers. Choose z smartly if you
    want a division by zero exception.
    """
    return x * y / z


def rpush(key, value, append_worker_name=False, sleep=0):
    """Push a value into a list in Redis. Useful for detecting the order in
    which jobs were executed."""
    if sleep:
        time.sleep(sleep)
    if append_worker_name:
        value += ':' + get_current_job().worker_name
    redis = get_current_connection()
    redis.rpush(key, value)
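
# Hedged sketch of how `rpush` can be used (names like `queue` and
# `connection` are assumptions, not defined in this file): enqueue several
# jobs, run a worker, then read the list back to assert execution order.
#
#     queue.enqueue(rpush, 'order', 'first')
#     queue.enqueue(rpush, 'order', 'second')
#     Worker([queue], connection=connection).work(burst=True)
#     assert connection.lrange('order', 0, -1) == [b'first', b'second']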


def check_dependencies_are_met():
    return get_current_job().dependencies_are_met()


def create_file(path):
    """Creates a file at the given path. Actually, leaves evidence that the
    job ran."""
    with open(path, 'w') as f:
        f.write('Just a sentinel.')


def create_file_after_timeout(path, timeout):
    time.sleep(timeout)
    create_file(path)


def create_file_after_timeout_and_setsid(path, timeout):
    os.setsid()
    create_file_after_timeout(path, timeout)


def launch_process_within_worker_and_store_pid(path, timeout):
    p = subprocess.Popen(['sleep', str(timeout)])
    with open(path, 'w') as f:
        f.write('{}'.format(p.pid))
    p.wait()


def access_self():
    assert get_current_connection() is not None
    assert get_current_job() is not None


def modify_self(meta):
    j = get_current_job()
    j.meta.update(meta)
    j.save()


def modify_self_and_error(meta):
    j = get_current_job()
    j.meta.update(meta)
    j.save()
    return 1 / 0


def echo(*args, **kwargs):
    return args, kwargs


class Number:
    def __init__(self, value):
        self.value = value

    @classmethod
    def divide(cls, x, y):
        return x * y

    def div(self, y):
        return self.value / y


class CallableObject:
    def __call__(self):
        return u"I'm callable"


class UnicodeStringObject:
    def __repr__(self):
        return u'é'


class ClassWithAStaticMethod:
    @staticmethod
    def static_method():
        return u"I'm a static method"


with Connection():
    @job(queue='default')
    def decorated_job(x, y):
        return x + y
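
# A brief usage note, given as an assumption: functions wrapped with RQ's
# @job decorator gain a .delay() helper that enqueues a call instead of
# running it inline. Sketch (not executed here):
#
#     job = decorated_job.delay(1, 2)
#     # a worker on the 'default' queue will later compute 1 + 2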


def black_hole(job, *exc_info):
    # Don't fall through to default behaviour (moving to failed queue)
    return False


def add_meta(job, *exc_info):
    job.meta = {'foo': 1}
    job.save()
    return True


def save_key_ttl(key):
    # Stores key ttl in meta
    job = get_current_job()
    ttl = job.connection.ttl(key)
    job.meta = {'ttl': ttl}
    job.save_meta()


def long_running_job(timeout=10):
    time.sleep(timeout)
    return 'Done sleeping...'


def run_dummy_heroku_worker(sandbox, _imminent_shutdown_delay):
    """
    Run the work horse for a simplified heroku worker where perform_job just
    creates two sentinel files 2 seconds apart.

    :param sandbox: directory to create files in
    :param _imminent_shutdown_delay: delay to use for HerokuWorker
    """
    sys.stderr = open(os.path.join(sandbox, 'stderr.log'), 'w')

    class TestHerokuWorker(HerokuWorker):
        imminent_shutdown_delay = _imminent_shutdown_delay

        def perform_job(self, job, queue):
            create_file(os.path.join(sandbox, 'started'))
            # have to loop here rather than one sleep to avoid holding the GIL
            # and preventing signals being received
            for i in range(20):
                time.sleep(0.1)
            create_file(os.path.join(sandbox, 'finished'))

    w = TestHerokuWorker(Queue('dummy'))
    w.main_work_horse(None, None)


class DummyQueue:
    pass


def kill_worker(pid: int, double_kill: bool, interval: float = 1.5):
    # wait for the worker to be started over on the main process
    time.sleep(interval)
    os.kill(pid, signal.SIGTERM)
    if double_kill:
        # give the worker time to switch signal handler
        time.sleep(interval)
        os.kill(pid, signal.SIGTERM)
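
# Sketch of the intended pattern (an assumption based on the signature above):
# a test forks `kill_worker` into a helper process aimed at its own PID, then
# starts a worker in the main process; the SIGTERM triggers a warm shutdown,
# and double_kill=True sends a second signal to force a cold shutdown.
#
#     p = Process(target=kill_worker, args=(os.getpid(), False))
#     p.start()
#     worker.work()  # returns once the shutdown signal is handled
#     p.join(1)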


class Serializer:
    def loads(self):
        pass

    def dumps(self):
        pass


def start_worker(queue_name, conn_kwargs, worker_name, burst):
    """
    Start a worker. We accept only serializable args, so that this can be
    executed via multiprocessing.
    """
    # Silence stdout (thanks to <https://stackoverflow.com/a/28321717/14153673>)
    with open(os.devnull, 'w') as devnull:
        with contextlib.redirect_stdout(devnull):
            w = Worker([queue_name], name=worker_name, connection=Redis(**conn_kwargs))
            w.work(burst=burst)


def start_worker_process(queue_name, connection=None, worker_name=None, burst=False):
    """
    Use multiprocessing to start a new worker in a separate process.
    """
    connection = connection or get_current_connection()
    conn_kwargs = connection.connection_pool.connection_kwargs
    p = Process(target=start_worker, args=(queue_name, conn_kwargs, worker_name, burst))
    p.start()
    return p


def burst_two_workers(queue, timeout=2, tries=5, pause=0.1):
    """
    Get two workers working simultaneously in burst mode, on a given queue.
    Return after both workers have finished handling jobs, up to a fixed
    timeout on the worker that runs in another process.
    """
    w1 = start_worker_process(queue.name, worker_name='w1', burst=True)
    w2 = Worker(queue, name='w2')
    jobs = queue.jobs
    if jobs:
        first_job = jobs[0]
        # Give the first worker process time to get started on the first job.
        # This is helpful in tests where we want to control which worker takes
        # which job.
        n = 0
        while n < tries and not first_job.is_started:
            time.sleep(pause)
            n += 1
    # Now we can start the second worker.
    w2.work(burst=True)
    w1.join(timeout)
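
# Hedged example of combining `burst_two_workers` with `rpush` (the queue and
# connection setup is an assumption for the sketch): appending the worker name
# lets a test assert which worker executed which job.
#
#     q.enqueue(rpush, 'k', 'a', append_worker_name=True)
#     q.enqueue(rpush, 'k', 'b', append_worker_name=True)
#     burst_two_workers(q)
#     # entries in list 'k' now look like b'a:w1' or b'b:w2'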


def save_result(job, connection, result):
    """Store job result in a key"""
    connection.set('success_callback:%s' % job.id, result, ex=60)


def save_exception(job, connection, type, value, traceback):
    """Store job exception in a key"""
    connection.set('failure_callback:%s' % job.id, str(value), ex=60)


def save_result_if_not_stopped(job, connection, result=""):
    connection.set('stopped_callback:%s' % job.id, result, ex=60)


def erroneous_callback(job):
    """A callback that's not written properly"""
    pass
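
# Assumed usage for the callbacks above (a sketch, not from this file): RQ
# accepts success/failure callbacks at enqueue time, so a test can enqueue a
# failing job and later inspect the 'failure_callback:<job id>' key.
#
#     job = queue.enqueue(div_by_zero, 1,
#                         on_success=save_result, on_failure=save_exception)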


def _send_shutdown_command(worker_name, connection_kwargs, delay=0.25):
    time.sleep(delay)
    send_shutdown_command(Redis(**connection_kwargs), worker_name)


def _send_kill_horse_command(worker_name, connection_kwargs, delay=0.25):
    """Waits delay before sending kill-horse command"""
    time.sleep(delay)
    send_kill_horse_command(Redis(**connection_kwargs), worker_name)


class CustomJob(Job):
    """A custom job class just to test it"""