
Locality scheduling

Locality scheduling is intended for applications for which:

  • Jobs have large input files.
  • Each input file is used by many jobs.

The goal of locality scheduling is to minimize the amount of data transfer to hosts by preferentially sending jobs to hosts that already have some or all of the input files required by those jobs.

To use locality scheduling you must declare the large input files as sticky (so they're not deleted on the client) and no_delete (so they're not deleted on the server).
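
As a sketch of how this is commonly declared, a workunit input template can carry the two flags in its file_info block. The file number and open name below are placeholders, and the exact template layout should be checked against your BOINC server version:

    <file_info>
        <number>0</number>
        <sticky/>      <!-- keep the file on the client after the job finishes -->
        <no_delete/>   <!-- don't let the server-side file deleter remove it -->
    </file_info>
    <workunit>
        <file_ref>
            <file_number>0</file_number>
            <open_name>large_data_file</open_name>
        </file_ref>
    </workunit>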

There are two variants of locality scheduling:

  • Limited: use when the total number of input files at any point is fairly small (on the order of 100).
  • Standard: use in other cases. A highly project-specific version of this is used by Einstein@home. A more general version has been designed but is not yet implemented.

Limited locality scheduling

Limited locality scheduling (LLS) uses BOINC's standard shared-memory job cache scheduling mechanism. It assumes that the ratio of the job cache size to the number of data files is large enough that the cache usually contains at least one job for any given data file. It dispatches jobs that use files already resident on the client in preference to jobs that don't.

The size of the job cache is specified in the project configuration file. It can be set to 1000 or so on machines with plenty of RAM. Thus LLS works best with applications that have no more than a few hundred active input files.
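
As a sketch, the cache size is controlled by the shmem_work_items option in config.xml; the feeder_query_size value shown alongside it is an assumed companion setting, so check both options against your server version:

    <config>
        <!-- number of jobs held in the shared-memory job cache -->
        <shmem_work_items>1000</shmem_work_items>
        <!-- how many jobs the feeder fetches per database query (assumed companion setting) -->
        <feeder_query_size>2000</feeder_query_size>
    </config>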

To use LLS with a particular application, set the locality_scheduling field of its database record to 1 (currently this must be done by editing the database).
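
Assuming the field lives in the app table of the project database and the application is named myapp (a placeholder), the edit might look like:

    -- mark the application 'myapp' for limited locality scheduling
    UPDATE app SET locality_scheduling=1 WHERE name='myapp';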

When using LLS, you should generate jobs in an order that cycles among the input files. If you generate a long sequence of jobs for a given file, they will fill up the job cache and the policy won't work well.
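
As one way to do this (a sketch, not project code), a work-generator script could cycle among the files, submitting one job per file per pass with the create_work tool; the application name, file list, pass count, and workunit naming scheme are all placeholders:

    import subprocess

    # Hypothetical list of large, sticky input files already staged for download.
    data_files = ["data_0001", "data_0002", "data_0003"]

    # Cycle among the files: one job per file per pass, so the job cache
    # holds jobs for many different files rather than a long run of jobs
    # for a single file.
    for pass_num in range(10):
        for f in data_files:
            subprocess.run(
                [
                    "bin/create_work",
                    "--appname", "myapp",                # placeholder app name
                    "--wu_name", f"{f}_pass{pass_num}",  # unique workunit name
                    f,                                   # the job's sticky input file
                ],
                check=True,
            )

Each pass adds at most one job per file, which keeps the feeder's cache spread across many different input files.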

When you have no more jobs for an input file, you can arrange for it to be deleted from clients.
