diff --git a/docs/docs/index.md b/docs/docs/index.md
index 1c24c6b..e42539f 100644
--- a/docs/docs/index.md
+++ b/docs/docs/index.md
@@ -111,8 +111,8 @@ You can also enqueue multiple jobs in bulk with `queue.enqueue_many()` and `Queu
 ```python
 jobs = q.enqueue_many(
     [
-        Queue.prepare_data(count_words_at_url, 'http://nvie.com', job_id='my_job_id'),
-        Queue.prepare_data(count_words_at_url, 'http://nvie.com', job_id='my_other_job_id'),
+        Queue.prepare_data(count_words_at_url, ('http://nvie.com',), job_id='my_job_id'),
+        Queue.prepare_data(count_words_at_url, ('http://nvie.com',), job_id='my_other_job_id'),
     ]
 )
 ```
@@ -123,8 +123,8 @@ which will enqueue all the jobs in a single redis `pipeline` which you can optio
 with q.connection.pipeline() as pipe:
     jobs = q.enqueue_many(
         [
-            Queue.prepare_data(count_words_at_url, 'http://nvie.com', job_id='my_job_id'),
-            Queue.prepare_data(count_words_at_url, 'http://nvie.com', job_id='my_other_job_id'),
+            Queue.prepare_data(count_words_at_url, ('http://nvie.com',), job_id='my_job_id'),
+            Queue.prepare_data(count_words_at_url, ('http://nvie.com',), job_id='my_other_job_id'),
         ],
         pipeline=pipe
     )
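
Why the tuple matters: `Queue.prepare_data` takes the job's positional arguments as a sequence, which the worker later star-unpacks into the function call. A bare string is itself a sequence, so it would be unpacked character by character instead of passed as one URL argument. A minimal pure-Python sketch of that unpacking semantics (`apply_stored_args` is a hypothetical stand-in for the worker's call, and `count_words_at_url` is a toy version of the docs' example function, not RQ code):

```python
def count_words_at_url(url):
    # Toy stand-in for the docs' example task: takes exactly one argument.
    return len(url.split('/'))

def apply_stored_args(func, args):
    # Mimics how stored job args are applied: func(*args).
    return func(*args)

# A one-element tuple keeps the URL intact as a single positional argument.
result = apply_stored_args(count_words_at_url, ('http://nvie.com',))

# A bare string star-unpacks into 15 one-character arguments and fails.
unpack_error = False
try:
    apply_stored_args(count_words_at_url, 'http://nvie.com')
except TypeError:
    unpack_error = True
```

This is the same reason `q.enqueue(count_words_at_url, 'http://nvie.com')` works but `prepare_data` needs the explicit tuple: `enqueue` collects loose positional arguments via `*args`, while `prepare_data` accepts the already-assembled sequence.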