Local Cluster
For convenience you can start a local cluster from your Python session.
>>> from distributed import Client, LocalCluster
>>> cluster = LocalCluster()
LocalCluster("127.0.0.1:8786", workers=8, ncores=8)
>>> client = Client(cluster)
<Client: scheduler=127.0.0.1:8786 processes=8 cores=8>
You can dynamically scale this cluster up and down:
>>> worker = cluster.start_worker(ncores=2)
>>> cluster.stop_worker(worker)
Alternatively, a LocalCluster is made for you automatically if you create a Client with no arguments:
>>> from distributed import Client
>>> client = Client()
>>> client
<Client: scheduler=127.0.0.1:8786 processes=8 cores=8>
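Once connected, the client can submit work to the local cluster. The following is a minimal sketch (assuming dask.distributed is installed); the `square` function is illustrative, and `processes=False` keeps the scheduler and workers in the current process, which makes small examples fast and easy to debug:

```python
from distributed import Client

# Start an in-process cluster (threads only, no child processes)
# and connect a client to it in one step.
client = Client(processes=False, n_workers=1, threads_per_worker=2)

def square(x):
    return x ** 2

# submit() returns a Future; result() blocks until the value is ready
result = client.submit(square, 4).result()

# map() applies the function across many inputs; gather() collects them
results = client.gather(client.map(square, range(5)))

client.close()
```

`result` is `16` and `results` is `[0, 1, 4, 9, 16]`; the same `submit`/`map`/`gather` calls work unchanged against a remote scheduler.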
API
class distributed.deploy.local.LocalCluster(n_workers=None, threads_per_worker=None, processes=True, loop=None, start=True, ip=None, scheduler_port=0, silence_logs=50, diagnostics_port=8787, services={}, worker_services={}, nanny=None, **worker_kwargs)

Create local Scheduler and Workers
This creates a “cluster” of a scheduler and workers running on the local machine.
Parameters:
- n_workers (int) – Number of workers to start
- processes (bool) – Whether to use processes (True) or threads (False). Defaults to True
- threads_per_worker (int) – Number of threads per each worker
- scheduler_port (int) – Port of the scheduler. 8786 by default, use 0 to choose a random port
- silence_logs (logging level) – Level of logs to print out to stdout. logging.CRITICAL by default. Use a falsey value like False or None for no change.
- ip (string) – IP address on which the scheduler will listen, defaults to only localhost
- kwargs (dict) – Extra worker arguments, will be passed to the Worker constructor.
Examples

>>> c = LocalCluster()  # Create a local cluster with as many workers as cores
>>> c
LocalCluster("127.0.0.1:8786", workers=8, ncores=8)

>>> client = Client(c)  # connect to local cluster

Add a new worker to the cluster

>>> w = c.start_worker(ncores=2)  # doctest: +SKIP

Shut down the extra worker

>>> c.stop_worker(w)  # doctest: +SKIP
close()

Close the cluster

scale_down(*args, **kwargs)

Remove workers from the cluster

Given a list of worker addresses, this function should remove those workers from the cluster. This may require tracking which jobs are associated with which worker address.

This can be implemented either as a function or as a Tornado coroutine.
scale_up(*args, **kwargs)

Bring the total count of workers up to n

This function/coroutine should bring the total number of workers up to the number n.

This can be implemented either as a function or as a Tornado coroutine.
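The scale_up/scale_down contract can be sketched with a toy cluster. The `ToyCluster` class and its worker names below are purely illustrative, not distributed's implementation; they only show the bookkeeping an adaptive deployment relies on:

```python
class ToyCluster:
    """Illustrative stand-in for a cluster that supports scaling."""

    def __init__(self):
        self.workers = []

    def scale_up(self, n, **kwargs):
        # Add workers until the total count reaches n.
        while len(self.workers) < n:
            self.workers.append("worker-%d" % len(self.workers))

    def scale_down(self, workers):
        # Remove the given workers from the cluster.
        doomed = set(workers)
        self.workers = [w for w in self.workers if w not in doomed]

c = ToyCluster()
c.scale_up(3)
print(c.workers)            # ['worker-0', 'worker-1', 'worker-2']
c.scale_down(["worker-1"])
print(c.workers)            # ['worker-0', 'worker-2']
```

Note that `scale_up` takes a target total, not an increment, while `scale_down` takes the specific workers to retire.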
start_worker(ncores=0, **kwargs)

Add a new worker to the running cluster

Parameters:
- port (int (optional)) – Port on which to serve the worker, defaults to 0 or random
- ncores (int (optional)) – Number of threads to use. Defaults to number of logical cores

Examples

>>> c = LocalCluster()
>>> c.start_worker(ncores=2)

Returns: The created Worker or Nanny object. Can be discarded.
stop_worker(w)

Stop a running worker

Examples

>>> c = LocalCluster()
>>> w = c.start_worker(ncores=2)
>>> c.stop_worker(w)