StarPU Handbook
The behavior of the StarPU library and tools can be tuned using the following environment variables. These variables can also be accessed through the provided functions, which allow you to fine-tune the behavior of StarPU according to your preferences and requirements.
Setting it to non-zero will prevent StarPU from binding its threads to CPUs. This is for instance useful when running the test suite in parallel.
By default StarPU uses the OS-provided CPU binding to determine how many and which CPU cores it should use. This is notably useful when running several StarPU-MPI processes on the same host, to let the MPI launcher set the CPUs to be used. Default value is 1.
If that binding is erroneous (e.g. because the job scheduler binds to just one core of the allocated cores), you can set STARPU_WORKERS_GETBIND to 0 to make StarPU use all cores of the machine.
Passing an array of integers in STARPU_WORKERS_CPUID specifies on which logical CPUs the different workers should be bound. For instance, if STARPU_WORKERS_CPUID="0 1 4 5", the first worker will be bound to logical CPU #0, the second worker will be bound to logical CPU #1, and so on. Note that the logical ordering of the CPUs is either determined by the OS, or provided by the hwloc library when it is available. Ranges can be provided: for instance, STARPU_WORKERS_CPUID="1-3 5" will bind the first three workers on logical CPUs #1, #2, and #3, and the fourth worker on logical CPU #5. Unbound ranges can also be provided: STARPU_WORKERS_CPUID="1-" will bind the workers starting from logical CPU #1 up to the last CPU.
Note that the first workers correspond to the CUDA workers, then come the OpenCL workers, and finally the CPU workers. For example, if we have STARPU_NCUDA=1, STARPU_NOPENCL=1, STARPU_NCPU=2 and STARPU_WORKERS_CPUID="0 2 1 3", the CUDA device will be controlled by logical CPU #0, the OpenCL device will be controlled by logical CPU #2, and the logical CPUs #1 and #3 will be used by the CPU workers.
If the number of workers is larger than the array given in STARPU_WORKERS_CPUID, the workers are bound to the logical CPUs in a round-robin fashion: if STARPU_WORKERS_CPUID="0 1", the first and third workers will be put on CPU #0, and the second and fourth workers on CPU #1.
This variable is ignored if the field starpu_conf::use_explicit_workers_bindid passed to starpu_init() is set.
Setting STARPU_WORKERS_CPUID or STARPU_WORKERS_COREID overrides the binding provided by the job scheduler, as described for STARPU_WORKERS_GETBIND.
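As an alternative to the environment variable, the binding can also be set programmatically through starpu_conf. The following is a minimal sketch (assuming the starpu_conf::workers_bindid array field, a machine with at least 4 logical CPUs, and error checking omitted):

    #include <starpu.h>

    struct starpu_conf conf;
    starpu_conf_init(&conf);
    /* Equivalent of STARPU_WORKERS_CPUID="0 2 1 3": worker i is bound to workers_bindid[i] */
    conf.use_explicit_workers_bindid = 1;
    conf.workers_bindid[0] = 0;
    conf.workers_bindid[1] = 2;
    conf.workers_bindid[2] = 1;
    conf.workers_bindid[3] = 3;
    starpu_init(&conf);

When the field is set this way, STARPU_WORKERS_CPUID is ignored, as noted above.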
Same as STARPU_WORKERS_CPUID, but bind the workers to cores instead of PUs (hyperthreads).
Specify how many threads StarPU should run on each core. The default is 1 because kernels are usually already optimized for using a full core. Setting this to e.g. 2 instead allows exploiting hyperthreading.
Tell StarPU to bind the thread that calls starpu_initialize() to a reserved CPU, subtracted from the CPU workers.
Tell StarPU to bind the thread that calls starpu_initialize() to the given CPU ID (using logical numbering).
Same as STARPU_MAIN_THREAD_CPUID, but bind the thread that calls starpu_initialize() to the given core (using logical numbering), instead of the PU (hyperthread).
Tell StarPU to create several workers which won't be able to work concurrently. It will by default create combined workers, whose sizes range from 1 to the total number of CPU workers in the system. STARPU_MIN_WORKERSIZE and STARPU_MAX_WORKERSIZE can be used to change this default.
Specify the minimum size of the combined workers. Default value is 2.
Specify the maximum size of the combined workers. Default value is the number of CPU workers in the system.
Specify how many elements are allowed between combined workers created from hwloc information. For instance, in the case of sockets with 6 cores without shared L2 caches, if STARPU_SYNTHESIZE_ARITY_COMBINED_WORKER is set to 6, no combined worker will be synthesized beyond one for the socket and one per core. If it is set to 3, 3 intermediate combined workers will be synthesized, to divide the socket cores into 3 chunks of 2 cores. If it is set to 2, 2 intermediate combined workers will be synthesized, to divide the socket cores into 2 chunks of 3 cores, and then 3 additional combined workers will be synthesized, to divide the former synthesized workers into a bunch of 2 cores and the remaining core (for which no combined worker is synthesized since there is already a normal worker for it).
Default value is 2, which makes StarPU tend to build binary trees of combined workers.
Disable asynchronous copies between CPU and GPU devices. The AMD implementation of OpenCL is known to fail when copying data asynchronously. When using this implementation, it is therefore necessary to disable asynchronous data transfers. One can call starpu_asynchronous_copy_disabled() to check whether asynchronous data transfers between CPU and accelerators are disabled.
See also STARPU_DISABLE_ASYNCHRONOUS_CUDA_COPY and STARPU_DISABLE_ASYNCHRONOUS_OPENCL_COPY.
Set to 1 to make task transfer time estimations artificially include the time that will be needed to write back data to the main memory.
Disable (1) or Enable (0) pinning host memory allocated through starpu_malloc(), starpu_memory_pin() and friends. Default value is Enable. This permits testing the performance effect of memory pinning.
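For reference, a minimal sketch of allocating host memory through the StarPU allocator, so that the effect of this variable can be measured (the exact deallocation call may vary between StarPU versions):

    #include <starpu.h>

    float *buffer;
    /* Memory allocated here is pinned (page-locked) unless STARPU_DISABLE_PINNING is set,
       which speeds up asynchronous transfers to and from accelerators. */
    starpu_malloc((void **)&buffer, 1024 * sizeof(float));
    /* ... register the buffer as a data handle, submit tasks ... */
    starpu_free_noflag(buffer, 1024 * sizeof(float));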
Set minimum exponential backoff of number of cycles to pause when spinning. Default value is 1.
Set maximum exponential backoff of number of cycles to pause when spinning. Default value is 32.
Defined internally by StarPU when running in master slave mode.
Disable (0) or Enable (1) support for memory mapping between memory nodes. The default is Disabled. One can call starpu_map_enabled() to check whether memory mapping support between memory nodes is enabled.
Enable (1) or Disable (0) data locality enforcement when picking a worker to execute a task. Default value is Disable.
Specify the number of CPU workers (thus not including workers dedicated to control accelerators). Note that by default, StarPU will not allocate more CPU workers than there are physical CPUs, and that some CPUs are used to control the accelerators.
Specify the number of CPU cores that should not be used by StarPU, so the application can use starpu_get_next_bindid() and starpu_bind_thread_on() to bind its own threads.
This option is ignored if STARPU_NCPU or starpu_conf::ncpus is set.
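A hedged sketch of how an application might use a reserved core for its own thread, assuming the signatures unsigned starpu_get_next_bindid(unsigned flags, unsigned *preferred, unsigned npreferred) and int starpu_bind_thread_on(int cpuid, unsigned flags, const char *name), with the flags arguments left at 0:

    /* run e.g. with STARPU_RESERVE_NCPU=1 so that one core stays free for the application */
    unsigned bindid = starpu_get_next_bindid(0, NULL, 0);        /* pick one of the reserved cores */
    starpu_bind_thread_on(bindid, 0, "application I/O thread");  /* bind the calling thread to it */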
Deprecated. You should use STARPU_NCPU.
Specify the number of CUDA devices that StarPU can use. If STARPU_NCUDA is lower than the number of physical devices, it is possible to select which GPU devices should be used by the means of the environment variable STARPU_WORKERS_CUDAID. By default, StarPU will create as many CUDA workers as there are GPU devices.
Specify the number of workers per CUDA device, and thus the number of kernels which will be concurrently running on the devices, i.e. the number of CUDA streams. Default value is 1.
Specify whether the CUDA driver should use one thread per stream (1), or a single thread to drive all the streams of a device or of all devices (0); in the latter case, STARPU_CUDA_THREAD_PER_DEV determines whether it is one thread per device or one thread for all devices. Default value is 0. Setting it to 1 is contradictory with setting STARPU_CUDA_THREAD_PER_DEV.
Specify whether the CUDA driver should use one thread per device (1) or a single thread to drive all the devices (0). Default value is 1. It does not make sense to set this variable if STARPU_CUDA_THREAD_PER_WORKER is set to 1 (since STARPU_CUDA_THREAD_PER_DEV is then meaningless).
Specify how many asynchronous tasks are submitted in advance on CUDA devices. This for instance permits overlapping task management with the execution of previous tasks, but it also allows concurrent execution on Fermi cards, which otherwise bring spurious synchronizations. Default value is 2. Setting the value to 0 forces a synchronous execution of all tasks.
Select which CUDA devices should be used to run CUDA workers (similarly to the STARPU_WORKERS_CPUID environment variable). On a machine equipped with 4 GPUs, setting STARPU_WORKERS_CUDAID="1 3" and STARPU_NCUDA=2 specifies that 2 CUDA workers should be created, and that they should use CUDA devices #1 and #3 (the logical ordering of the devices is the one reported by CUDA).
This variable is ignored if the field starpu_conf::use_explicit_workers_cuda_gpuid passed to starpu_init() is set.
Disable asynchronous copies between CPU and CUDA devices. One can call starpu_asynchronous_cuda_copy_disabled() to check whether asynchronous data transfers between CPU and CUDA accelerators are disabled.
See also STARPU_DISABLE_ASYNCHRONOUS_COPY and STARPU_DISABLE_ASYNCHRONOUS_OPENCL_COPY.
Enable (1) or Disable (0) direct CUDA transfers from GPU to GPU, without copying through RAM. Default value is Enable. This permits testing the performance effect of GPU-Direct.
Specify if CUDA workers should do only fast allocations when running the datawizard progress of other memory nodes. This will pass the internal value _STARPU_DATAWIZARD_ONLY_FAST_ALLOC to allocation methods. Default value is 0, allowing CUDA workers to do slow allocations.
This can also be specified with starpu_conf::cuda_only_fast_alloc_other_memnodes.
Specify the number of OpenCL devices that StarPU can use. If STARPU_NOPENCL is lower than the number of physical devices, it is possible to select which GPU devices should be used by the means of the environment variable STARPU_WORKERS_OPENCLID. By default, StarPU will create as many OpenCL workers as there are GPU devices.
Note that by default StarPU will launch CUDA workers on GPU devices. You need to disable CUDA to allow the creation of OpenCL workers.
Select which GPU devices should be used to run OpenCL workers (similarly to the STARPU_WORKERS_CPUID environment variable). On a machine equipped with 4 GPUs, setting STARPU_WORKERS_OPENCLID="1 3" and STARPU_NOPENCL=2 specifies that 2 OpenCL workers should be created, and that they should use GPU devices #1 and #3.
This variable is ignored if the field starpu_conf::use_explicit_workers_opencl_gpuid passed to starpu_init() is set.
Specify how many asynchronous tasks are submitted in advance on OpenCL devices. This for instance permits overlapping task management with the execution of previous tasks, but it also allows concurrent execution on Fermi cards, which otherwise bring spurious synchronizations. Default value is 2. Setting the value to 0 forces a synchronous execution of all tasks.
Specify that OpenCL workers can also be run on CPU devices. By default, the OpenCL driver only enables GPU devices.
Specify that OpenCL workers can ONLY be run on CPU devices. By default, the OpenCL driver enables GPU devices.
Disable asynchronous copies between CPU and OpenCL devices. The AMD implementation of OpenCL is known to fail when copying data asynchronously. When using this implementation, it is therefore necessary to disable asynchronous data transfers. One can call starpu_asynchronous_opencl_copy_disabled() to check whether asynchronous data transfers between CPU and OpenCL accelerators are disabled.
See also STARPU_DISABLE_ASYNCHRONOUS_COPY and STARPU_DISABLE_ASYNCHRONOUS_CUDA_COPY.
Specify the number of Maxeler FPGA devices that StarPU can use. If STARPU_NMAX_FPGA is lower than the number of physical devices, it is possible to select which Maxeler FPGA devices should be used by the means of the environment variable STARPU_WORKERS_MAX_FPGAID. By default, StarPU will create as many Maxeler FPGA workers as there are FPGA devices.
Select which Maxeler FPGA devices should be used to run Maxeler FPGA workers (similarly to the STARPU_WORKERS_CPUID environment variable). On a machine equipped with 4 Maxeler FPGAs, setting STARPU_WORKERS_MAX_FPGAID="1 3" and STARPU_NMAX_FPGA=2 specifies that 2 Maxeler FPGA workers should be created, and that they should use Maxeler FPGA devices #1 and #3 (the logical ordering of the devices is the one reported by the Maxeler stack).
Disable asynchronous copies between CPU and Maxeler FPGA devices. One can call starpu_asynchronous_max_fpga_copy_disabled() to check whether asynchronous data transfers between CPU and Maxeler FPGA devices are disabled.
Specify the number of MPI master slave devices that StarPU can use.
Specify the number of threads to use on the MPI Slave devices.
Specify whether the master should use one thread per slave, or one thread to drive all slaves. Default value is 0.
Specify the rank of the MPI process which will be the master. Default value is 0.
Disable asynchronous copies between CPU and MPI Slave devices. One can call starpu_asynchronous_mpi_ms_copy_disabled() to check whether asynchronous data transfers between CPU and MPI Slave devices are disabled.
Specify the number of TCP/IP master slave devices that StarPU can use.
Specify the number of TCP/IP master slave processes that are expected to be run. This should be provided both to the master and to the slaves.
Specify (for slaves) the IP address of the master so they can connect to it. They will then automatically connect to each other.
Specify the port of the master, for connections between slaves and the master. Default value is 1234.
Specify the number of threads to use on the TCP/IP Slave devices.
Specify whether the master should use one thread per slave, or one thread to drive all slaves. Default value is 0.
Disable asynchronous copies between CPU and TCP/IP Slave devices. One can call starpu_asynchronous_tcpip_ms_copy_disabled() to check whether asynchronous data transfers between CPU and TCP/IP Slave devices are disabled.
Specify the number of HIP devices that StarPU can use. If STARPU_NHIP is lower than the number of physical devices, it is possible to select which HIP devices should be used by the means of the environment variable STARPU_WORKERS_HIPID. By default, StarPU will create as many HIP workers as there are HIP devices.
Select which HIP devices should be used to run HIP workers (similarly to the STARPU_WORKERS_CPUID environment variable). On a machine equipped with 4 HIP devices, setting STARPU_WORKERS_HIPID="1 3" and STARPU_NHIP=2 specifies that 2 HIP workers should be created, and that they should use HIP devices #1 and #3.
This variable is ignored if the field starpu_conf::use_explicit_workers_hip_gpuid passed to starpu_init() is set.
Disable asynchronous copies between CPU and HIP devices. One can call starpu_asynchronous_hip_copy_disabled() to check whether asynchronous data transfers between CPU and HIP accelerators are disabled.
Tell StarPU to bind its MPI thread to the given CPU id, subtracted from the CPU workers (unless STARPU_NCPU is defined).
Default value is -1, which lets StarPU allocate a CPU.
Same as STARPU_MPI_THREAD_CPUID, but bind the MPI thread to the given core ID, instead of the PU (hyperthread).
Setting it to non-zero will prevent StarPU from binding the MPI thread to a separate core. This is for instance useful when running the test suite on a single system.
Enable (1) or disable (0) MPI GPUDirect support. Default value (-1) is to enable if available. If STARPU_MPI_GPUDIRECT is explicitly set to 1, StarPU-MPI will warn if MPI does not provide the GPUDirect support.
This variable allows superseding PSM2 detection when asking for MPI GPUDirect support. This is helpful when using old Intel compilers, for which PSM2 detection is always true. The default (1) is to enable it. If PSM2 is detected whereas it should not be, this variable can be set to 0.
The arity of the automatically-detected reduction trees follows the following rule: when the data to be reduced is small, a flat tree is unrolled, i.e. all the contributing nodes send their contribution to the root of the reduction. When the data to be reduced is large, a binary tree is used instead. The default threshold between flat and binary trees is 1024 bytes. If this environment variable is set to a negative value, all the automatically detected reduction trees will use flat trees. If it is set to 0, binary trees will always be selected. Otherwise, the given value replaces the default threshold of 1024.
Select the scheduling policy from those proposed by StarPU: random, work stealing, greedy, with performance models, etc.
Use STARPU_SCHED=help to get the list of available schedulers.
Specify the location of a dynamic library to choose a user-defined scheduling policy. See Using a New Scheduling Policy for more information.
Set the minimum priority used by priorities-aware schedulers. The flag can also be set through the field starpu_conf::global_sched_ctx_min_priority.
Set the maximum priority used by priorities-aware schedulers. The flag can also be set through the field starpu_conf::global_sched_ctx_max_priority.
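For illustration, a minimal sketch of setting the priority range through starpu_conf and using it when submitting a task (assuming a priority-aware scheduler, and the usual STARPU_MIN_PRIO/STARPU_MAX_PRIO variable names for the corresponding environment variables):

    #include <starpu.h>

    struct starpu_conf conf;
    starpu_conf_init(&conf);
    conf.global_sched_ctx_min_priority = -5;  /* same effect as setting the minimum priority variable */
    conf.global_sched_ctx_max_priority = 5;   /* same effect as setting the maximum priority variable */
    starpu_init(&conf);

    struct starpu_task *task = starpu_task_create();
    task->priority = starpu_sched_get_max_priority();  /* ask for this task to be scheduled first */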
Set to 1 to calibrate the performance models during the execution. Set to 2 to drop the previous values and restart the calibration from scratch. Set to 0 to disable calibration, this is the default behaviour.
Note: this currently only applies to the dm and dmda scheduling policies.
Define the minimum number of calibration measurements that will be made before considering that the performance model is calibrated. Default value is 10.
Enable (1) or disable (0) data prefetching. Default value is Enable.
If prefetching is enabled, when a task is scheduled to be executed e.g. on a GPU, StarPU will request an asynchronous transfer in advance, so that data is already present on the GPU when the task starts. As a result, computation and data transfers are overlapped.
To estimate the cost of a task StarPU takes into account the estimated computation time (obtained thanks to performance models). The alpha factor is the coefficient to be applied to it before adding it to the communication part.
To estimate the cost of a task StarPU takes into account the estimated data transfer time (obtained thanks to performance models). The beta factor is the coefficient to be applied to it before adding it to the computation part.
Define the execution time penalty of a joule (Energy-based Scheduling).
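As a rough, non-normative sketch, the performance-model-based schedulers (dm, dmda and friends) combine these factors into a per-worker cost estimate along the lines of (assuming the usual STARPU_SCHED_ALPHA/BETA/GAMMA names for the three factors described above):

    cost(worker) ≈ STARPU_SCHED_ALPHA × T_execution + STARPU_SCHED_BETA × T_transfer + STARPU_SCHED_GAMMA × E_task

where T_execution and T_transfer come from the performance models and E_task is the estimated energy consumption of the task; the idle power of the machine (described below) is additionally accounted for. The exact expression depends on the scheduling policy.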
For a modular scheduler with sorted queues below the decision component, workers pick up a task which has most of its data already available. Setting this to 0 disables this.
For a modular scheduler with queues above the decision component, they are usually sorted by priority. Setting this to 0 disables this.
For a modular scheduler with queues below the decision component, they are usually sorted by priority. Setting this to 0 disables this.
Define the idle power of the machine (Energy-based Scheduling).
Enable on-line performance monitoring (Enabling On-line Performance Monitoring).
Enable on-line performance monitoring of codelets (Per-codelet Feedback). (enabled by default)
Enable on-line energy monitoring of tasks (Per-codelet Feedback). (disabled by default)
Specify which PAPI events should be recorded in the trace (PAPI counters).
Enable the locality aware mode of Heteroprio which guides the distribution of tasks to workers in order to reduce the data transfers between memory nodes.
Choose between the different push strategies for locality aware Heteroprio: WORKER, LcS, LS_SDH, LS_SDH2, LS_SDHB, LC_SMWB, AUTO (by default: AUTO). These are detailed in Using locality aware Heteroprio.
[ARCH] Specify the number of memory nodes contained in an affinity group. An affinity group will be composed of the closest memory nodes to a worker of a given architecture, and this worker will look for tasks available inside these memory nodes, before considering stealing tasks outside this group. ARCH can be CPU, CUDA, OPENCL, SCC, MPI_MS, etc.
[ARCH] Specify the number of buckets in the local memory node in which a worker will look for available tasks, before this worker starts looking for tasks in other memory nodes' buckets. ARCH indicates that this number is specific to a given arch, which can be: CPU, CUDA, OPENCL, SCC, MPI_MS, etc.
Enable the auto calibration mode of Heteroprio, which assigns priorities to tasks automatically.
Specify the path of the directory where Heteroprio stores data about program executions. By default, these are stored in the same directory used by perfmodel.
Specify the filename where Heteroprio will save data about the current program's execution.
Choose how Heteroprio groups similar tasks. It can be 0 to group the tasks by perfmodel (or by codelet name if no perfmodel was assigned), or 1 to group the tasks only by codelet name.
Enable the printing of priorities' data every time they get updated.
Enable the printing of priorities' order for each architecture every time there's a reordering.
Specify the heuristic which will be used to assign priorities automatically. It should be an integer between 0 and 27.
Specify the period (in number of tasks pushed), between priorities reordering operations.
Set the location of the file libOpenCL.so of the OCL ICD implementation. The SOCL test suite is only run when SOCL_OCL_LIB_OPENCL is defined.
Set the directory where ICD files are installed. This is useful when using SOCL with OpenCL ICD (https://forge.imag.fr/projects/ocl-icd/). Default directory is /etc/OpenCL/vendors. StarPU installs ICD files in the directory $prefix/share/starpu/opencl/vendors.
Deprecated. You should use STARPU_MPI_STATS.
Enable (!= 0) or Disable (0) communication statistics for starpumpi (Debugging MPI). Default value is Disable.
Disable (0) or Enable (!= 0) communication cache for starpumpi (MPI Support). Default value is Enable.
Enable (1) communication trace for starpumpi (MPI Support). This also requires StarPU to have been configured with the option --enable-verbose.
Enable (1) statistics for the communication cache (MPI Support). Messages are printed on the standard output when data are added or removed from the received communication cache.
Disable (0) the use of priorities to order MPI communications (MPI Support).
Set the number of send requests that StarPU-MPI will emit concurrently. Default value is 10. Setting it to 0 removes the limit of concurrent send requests.
Set the number of requests that StarPU-MPI will submit to MPI before polling for termination of existing requests. Default value is 10. Setting it to 0 removes the limit: all requests to submit to MPI will be submitted before polling for termination of existing ones.
Setting to a number makes StarPU believe that there are as many MPI nodes, even if it was run on only one MPI node. This allows e.g. to simulate the execution of one of the nodes of a big cluster without actually running the rest. Of course, it does not provide computation results and timing.
Setting to a number makes StarPU believe that it runs the given MPI node, even if it was run on only one MPI node. This allows e.g. to simulate the execution of one of the nodes of a big cluster without actually running the rest. Of course, it does not provide computation results and timing.
Disable (0) dynamic collective operations: grouping identical requests to different nodes until the data becomes available, and then using a broadcast tree to execute the requests.
For now, this is only supported with the NewMadeleine library (see Using the NewMadeleine communication library).
Disable (1) releasing the write acquisition of receiving handles when data is received but the communication library still needs the data. Default value is 0, which unlocks as soon as possible tasks that only require read access to the handle; write access becomes possible for tasks once the communication library no longer needs the data.
For now, this is only supported with the NewMadeleine library (see Using the NewMadeleine communication library).
When mpi_sync_clocks is available, this library will be used to have more precise clock synchronization in traces coming from different nodes. However, the clock synchronization process can take some time (several seconds) and can be disabled by setting this variable to 0. In that case, a less precise but faster synchronization will be used. See Tracing MPI applications for more details.
When set to a positive value, activates the interleaving of the execution of tasks with the progression of MPI communications (MPI Support). The starpu_mpi_init_conf() function must have been called by the application for that environment variable to be used. When set to 0, the MPI progression thread does not use at all the driver given by users, and only focuses on making MPI communications progress.
When set to a positive value, allow the mechanism interleaving task execution with the progression of MPI communications to execute several tasks before checking communication requests again (MPI Support). The starpu_mpi_init_conf() function must have been called by the application for that environment variable to be used, and the STARPU_MPI_DRIVER_CALL_FREQUENCY environment variable must be set to a positive value.
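Both variables only take effect when StarPU-MPI is initialized through starpu_mpi_init_conf(); a minimal sketch (as called from main(), error checking omitted):

    #include <starpu_mpi.h>

    struct starpu_conf conf;
    starpu_conf_init(&conf);
    /* Let StarPU-MPI initialize MPI itself and make communications progress from the drivers,
       so that the driver call/task frequency variables described above are honored. */
    starpu_mpi_init_conf(&argc, &argv, 1, MPI_COMM_WORLD, &conf);
    /* ... register data, submit tasks, exchange handles ... */
    starpu_mpi_shutdown();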
When set to a positive value, this makes the starpu_mpi_*recv* functions block when the memory allocation required for network reception overflows the available main memory (as typically set by STARPU_LIMIT_CPU_MEM)
When set to 1, the MPI driver will immediately allocate the data for early requests instead of issuing a data request and blocking. Default value is 0, issuing a data request. Because it is an early request and we do not know its real priority, the data request will assume STARPU_DEFAULT_PRIO. In cases where there are many data requests with priorities greater than STARPU_DEFAULT_PRIO, the MPI driver could be blocked for long periods.
When set to 1 (default value is 0), this makes StarPU check that it was really built with simulation support. This is convenient in scripts to avoid using a native version, which would try to update performance models.
When set to 1 (which is the default value), data transfers (over PCI bus, typically) are taken into account in SimGrid mode.
When set to 1 (which is the default value), CUDA malloc costs are taken into account in SimGrid mode.
When set to 1 (which is the default value), CUDA task and transfer queueing costs are taken into account in SimGrid mode.
When unset or set to 0, the platform file created for SimGrid will contain PCI bandwidths and routes.
When unset or set to 1, simulate within SimGrid the GPU transfer queueing.
Define the size of the file used for folding virtual allocation, in MiB. Default value is 1, thus allowing 64GiB virtual memory when Linux's sysctl vm.max_map_count value is the default 65535.
When set to 1 (which is the default value), task submission costs are taken into account in SimGrid mode. This provides more accurate SimGrid predictions, especially for the beginning of the execution.
When set to 1 (which is the default value), task push costs are taken into account in SimGrid mode. This provides more accurate SimGrid predictions, especially with large dependency arities.
When set to 1 (which is the default value), fetching input costs are taken into account in SimGrid mode. This provides more accurate SimGrid predictions, especially regarding data transfers.
When set to 1 (0 is the default value), scheduling costs are taken into account in SimGrid mode. This provides more accurate SimGrid predictions, and allows studying scheduling overhead of the runtime system. However, it also makes simulation non-deterministic.
Enable (1) or disable (0) multi interpreters in the StarPU Python interface (Multiple Interpreters). Default value is Disable.
Enable (1) or disable (0) using per-interpreter GIL (Python Parallelism). Default value is Disable for now, until python is fully ready for this.
Specify the main directory in which StarPU stores its configuration files. Default value is $HOME on Unix environments, and $USERPROFILE on Windows environments.
Only used on Windows environments. Specify the main directory in which StarPU is installed (Running a Basic StarPU Application on Microsoft Visual C)
Specify the main directory in which StarPU stores its performance model files. Default value is $STARPU_HOME/.starpu/sampling. See Storing Performance Model Files for more details.
Specify a list of directories separated with ':' in which StarPU stores its performance model files. See Storing Performance Model Files for more details.
When set to 0, StarPU will assume that CPU devices do not have the same performance, and thus use different performance models for them, thus making kernel calibration much longer, since measurements have to be made for each CPU core.
When set to 1, StarPU will assume that all CUDA devices have the same performance, and thus share performance models for them, thus allowing kernel calibration to be much faster, since measurements only have to be made once for all CUDA GPUs.
When set to 1, StarPU will assume that all OpenCL devices have the same performance, and thus share performance models for them, thus allowing kernel calibration to be much faster, since measurements only have to be made once for all OpenCL GPUs.
When set to 1, StarPU will assume that all MPI Slave devices have the same performance, and thus share performance models for them, thus allowing kernel calibration to be much faster, since measurements only have to be made once for all MPI Slaves.
When set, force the hostname to be used when managing performance model files. Models are indexed by machine name. When running for example on a homogeneous cluster, it is possible to share the models between machines by setting export STARPU_HOSTNAME=some_global_name.
Similar to STARPU_HOSTNAME, but to define multiple nodes on a heterogeneous cluster. The variable is a list of hostnames that will be assigned to each StarPU-MPI rank considering their position and the value of starpu_mpi_world_rank() on each rank. When running, for example, on a heterogeneous cluster, it is possible to set individual models for each machine by setting export STARPU_MPI_HOSTNAMES="name0 name1 name2", where rank 0 will receive name0, rank 1 will receive name1, and so on. This variable has precedence over STARPU_HOSTNAME.
Specify the directory where the OpenCL codelet source files are located. The function starpu_opencl_load_program_source() looks for the codelet in the current directory, in the directory specified by the environment variable STARPU_OPENCL_PROGRAM_DIR, in the directory share/starpu/opencl of the installation directory of StarPU, and finally in the source directory of StarPU.
Disable verbose mode at runtime when StarPU has been configured with the option --enable-verbose. Also disable the display of StarPU information and warning messages.
Set the minimum level of debug when StarPU has been configured with the option --enable-mpi-verbose.
Set the maximum level of debug when StarPU has been configured with the option --enable-mpi-verbose.
Specify in which file the debugging output should be saved to.
Specify in which directory to save the generated trace if FxT is enabled.
Specify in which file to save the generated trace if FxT is enabled.
Enable (1) or disable (0) the FxT trace generation in /tmp/prof_file_XXX_YYY (the directory and file name can be changed with STARPU_FXT_PREFIX and STARPU_FXT_SUFFIX). Default value is Disable.
Specify which events will be recorded in traces. By default, all events (but VERBOSE_EXTRA ones) are recorded. One can set this variable to a comma- or pipe-separated list of the following categories, to record only events belonging to the selected categories:
USER
TASK
TASK_VERBOSE
TASK_VERBOSE_EXTRA
DATA
DATA_VERBOSE
WORKER
WORKER_VERBOSE
DSM
DSM_VERBOSE
SCHED
SCHED_VERBOSE
LOCK
LOCK_VERBOSE
EVENT
EVENT_VERBOSE
MPI
MPI_VERBOSE
MPI_VERBOSE_EXTRA
HYP
HYP_VERBOSE
The choice of which categories have to be recorded is a tradeoff between required information for offline analysis and acceptable overhead introduced by tracing. For instance, to inspect with ViTE which tasks workers execute, one has to at least select the TASK category.
Events in VERBOSE_EXTRA categories are very costly to record and can have an important impact on application performance. This is why they are disabled by default, and one has to explicitly select their categories using this variable to record them.
Specify the maximum number of megabytes that should be available to the application on the CUDA device with the identifier devid. This variable is intended to be used for experimental purposes as it emulates devices that have a limited amount of memory. When defined, the variable overwrites the value of the variable STARPU_LIMIT_CUDA_MEM.
Specify the maximum number of megabytes that should be available to the application on each CUDA device. This variable is intended to be used for experimental purposes as it emulates devices that have a limited amount of memory.
Specify the maximum number of megabytes that should be available to the application on the OpenCL device with the identifier devid. This variable is intended to be used for experimental purposes as it emulates devices that have a limited amount of memory. When defined, the variable overwrites the value of the variable STARPU_LIMIT_OPENCL_MEM.
Specify the maximum number of megabytes that should be available to the application on each OpenCL device. This variable is intended to be used for experimental purposes as it emulates devices that have a limited amount of memory.
Specify the maximum number of megabytes that should be available to the application on the HIP device with the identifier devid. This variable is intended to be used for experimental purposes as it emulates devices that have a limited amount of memory. When defined, the variable overwrites the value of the variable STARPU_LIMIT_HIP_MEM.
Specify the maximum number of megabytes that should be available to the application on each HIP device. This variable is intended to be used for experimental purposes as it emulates devices that have a limited amount of memory.
Specify the maximum number of megabytes that should be available to the application in the main CPU memory. Setting it enables allocation cache in main memory. Setting it to zero lets StarPU overflow memory.
Note: for now, not all StarPU allocations get throttled by this parameter. Notably, MPI receptions are not throttled unless STARPU_MPI_MEM_THROTTLE is set to 1.
Specify the maximum number of megabytes that should be available to the application on the NUMA node with the OS identifier devid. Setting it overrides the value of STARPU_LIMIT_CPU_MEM.
Specify the maximum number of megabytes that should be available to the application on each NUMA node. This is the same as specifying that same amount with STARPU_LIMIT_CPU_NUMA_devid_MEM for each NUMA node number. The total memory available to StarPU will thus be this amount multiplied by the number of NUMA nodes used by StarPU. Any STARPU_LIMIT_CPU_NUMA_devid_MEM additionally specified will take precedence over STARPU_LIMIT_CPU_NUMA_MEM.
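When such a limit is set, the application can query the remaining budget at run time; a small sketch (assuming the starpu_memory_get_total() and starpu_memory_get_available() helpers):

    #include <stdio.h>
    #include <starpu.h>

    starpu_ssize_t total = starpu_memory_get_total(STARPU_MAIN_RAM);      /* -1 when no limit is set */
    starpu_ssize_t avail = starpu_memory_get_available(STARPU_MAIN_RAM);
    fprintf(stderr, "main RAM budget: %zd bytes total, %zd bytes available\n",
            (ssize_t) total, (ssize_t) avail);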
Specify the maximum available PCI bandwidth of the system in MB/s. This is only effective with SimGrid simulation. It allows easily overriding the bandwidths stored in the platform file generated from measurements on the native system, and can thus be used to accelerate or slow down the system bandwidth.
Enable (1) or disable (0) the StarPU suballocator. Default value is to enable it, to amortize the cost of GPU and pinned RAM allocations for small allocations: StarPU allocates large chunks of memory at a time, and suballocates the small buffers within them.
Specify the minimum percentage of memory that should be available in GPUs, i.e. not used at all by StarPU (or in main memory, when using out of core), below which an eviction pass is performed. Default value is 0%.
Specify the target percentage of memory that should be available in GPUs, i.e. not used at all by StarPU (or in main memory, when using out of core), when performing a periodic eviction pass. Default value is 0%.
Specify the minimum percentage of number of buffers that should be clean in GPUs (or in main memory, when using out of core), i.e. used by StarPU, but for which a copy is available in memory (or on disk, when using out of core), below which asynchronous writebacks will be issued. Default value is 5%.
Specify the target percentage of number of buffers that should be reached in GPUs (or in main memory, when using out of core), i.e. used by StarPU, but for which a copy is available in memory (or on disk, when using out of core), when performing an asynchronous writeback pass. Default value is 10%.
Specify a path where StarPU can push data when the main memory is getting full.
Specify the backend to be used by StarPU to push data when the main memory is getting full. Default value is unistd (i.e. using read/write functions), other values are stdio (i.e. using fread/fwrite), unistd_o_direct (i.e. using read/write with O_DIRECT), leveldb (i.e. using a leveldb database), and hdf5 (i.e. using the HDF5 library).
Specify the maximum size in MiB to be used by StarPU to push data when the main memory is getting full. Default value is unlimited.
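These three variables are typically set together. A hedged example, setting them from the program itself before starpu_init() and assuming the usual variable names STARPU_DISK_SWAP, STARPU_DISK_SWAP_BACKEND and STARPU_DISK_SWAP_SIZE for the settings described above (the same can of course be done from the shell with export):

    #include <stdlib.h>
    #include <starpu.h>

    /* Push cold data to /tmp/starpu-swap (illustrative path) through plain read/write calls, up to 16384 MiB */
    setenv("STARPU_DISK_SWAP", "/tmp/starpu-swap", 1);
    setenv("STARPU_DISK_SWAP_BACKEND", "unistd", 1);
    setenv("STARPU_DISK_SWAP_SIZE", "16384", 1);
    starpu_init(NULL);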
Allow users to control the task submission flow by specifying to StarPU a maximum number of submitted tasks allowed at a given time, i.e. when this limit is reached task submission becomes blocking until enough tasks have completed, specified by STARPU_LIMIT_MIN_SUBMITTED_TASKS. Setting it enables allocation cache buffer reuse in main memory. See How To Reduce The Memory Footprint Of Internal Data Structures.
Allow users to control the task submission flow by specifying to StarPU a submitted task threshold to wait before unblocking task submission. This variable has to be used in conjunction with STARPU_LIMIT_MAX_SUBMITTED_TASKS which puts the task submission thread to sleep. Setting it enables allocation cache buffer reuse in main memory. See How To Reduce The Memory Footprint Of Internal Data Structures.
Set the buffer size for recording trace events, in MiB. Setting it to a big size allows avoiding pauses in the trace while it is recorded on the disk. This however also consumes memory, of course. Default value is 64.
When set to 1, indicate that StarPU should automatically generate a Paje trace when starpu_shutdown() is called.
When the variable STARPU_GENERATE_TRACE is set to 1 to generate a Paje trace, this variable can be set to specify options (see starpu_fxt_tool --help).
Enable gathering various data statistics (Data Statistics).
When set to 0, disable the display of memory statistics on data which have not been unregistered at the end of the execution (Memory Feedback).
When set to 1, display at the end of the execution the maximum memory used by StarPU for internal data structures during execution.
Enable the display of data transfers statistics when calling starpu_shutdown() (Profiling). By default, statistics are printed on the standard error stream, use the environment variable STARPU_BUS_STATS_FILE to define another filename.
Define the name of the file in which data transfer statistics should be written, see STARPU_BUS_STATS.
Enable the display of workers statistics when calling starpu_shutdown() (Profiling). When combined with the environment variable STARPU_PROFILING, it displays the energy consumption (Energy-based Scheduling). By default, statistics are printed on the standard error stream, use the environment variable STARPU_WORKER_STATS_FILE to define another filename.
Define the name of the file in which workers statistics should be written, see STARPU_WORKER_STATS.
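A hedged sketch of collecting per-worker statistics programmatically instead of waiting for starpu_shutdown(), assuming the starpu_profiling_worker_get_info() interface and its executed_tasks / executing_time fields (names may vary between StarPU versions):

    #include <stdio.h>
    #include <starpu.h>

    starpu_profiling_status_set(STARPU_PROFILING_ENABLE);
    /* ... submit and wait for tasks ... */
    unsigned w;
    for (w = 0; w < starpu_worker_get_count(); w++)
    {
        struct starpu_profiling_worker_info info;
        starpu_profiling_worker_get_info(w, &info);
        fprintf(stderr, "worker %u executed %d tasks in %.2f ms\n",
                w, info.executed_tasks,
                starpu_timing_timespec_to_us(&info.executing_time) / 1000.0);
    }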
When set to 0, data statistics will not be displayed at the end of the execution of an application (Data Statistics).
When set to a value other than 0, makes StarPU print an error message whenever it does not terminate any task for the given time (in µs), but lets the application continue normally. Should be used in combination with STARPU_WATCHDOG_CRASH (see Detecting Stuck Conditions).
When set to a value other than 0, trigger a crash when the watchdog is reached, thus allowing one to catch the situation in gdb, etc. (see Detecting Stuck Conditions).
Delay the activation of the watchdog by the given time (in µs). This can be convenient for letting the application initialize data etc. before starting to look for idle time.
Print the progression of tasks. This is convenient to determine whether a program is making progress in task execution, or is just stuck.
When this variable contains a job id, StarPU will raise SIGTRAP when the task with that job id is being pushed to the scheduler, which will be nicely caught by debuggers (see Debugging Scheduling).
When this variable contains a job id, StarPU will raise SIGTRAP when the task with that job id is being scheduled by the scheduler (at a scheduler-specific point), which will be nicely caught by debuggers. This only works for schedulers which have such a scheduling point defined (see Debugging Scheduling).
When this variable contains a job id, StarPU will raise SIGTRAP when the task with that job id is being popped from the scheduler, which will be nicely caught by debuggers (see Debugging Scheduling).
When this variable contains a job id, StarPU will raise SIGTRAP when the task with that job id is being executed, which will be nicely caught by debuggers (see Debugging Scheduling).
When set to a value other than 1, it disables actually calling the kernel functions, thus allowing one to quickly check that the task scheme is working properly, without performing the actual application-provided computation.
History-based performance models will drop measurements which are really far from the measured average. This specifies the allowed variation. Default value is 50 (%), i.e. the measurement is allowed to be x1.5 faster or /1.5 slower than the average.
The random scheduler and some examples use random numbers for their own working. Depending on the examples, the seed is by default either always 0 or the current time() (unless SimGrid mode is enabled, in which case it is always 0). STARPU_RAND_SEED allows setting the seed to a specific value.
When set to a positive value, StarPU will create an arbiter, which implements an advanced but centralized management of concurrent data accesses (see Concurrent Data Accesses).
When defined to 1, NUMA nodes are taken into account by StarPU, i.e. StarPU will expose one StarPU memory node per NUMA node, and will thus schedule tasks according to data locality, migrate data when appropriate, etc.
STARPU_MAIN_RAM is then associated to the NUMA node associated to the first CPU worker if it exists, or otherwise to the NUMA node associated to the first GPU discovered. If StarPU doesn't find any NUMA node after these steps, STARPU_MAIN_RAM is the first NUMA node discovered by StarPU.
Applications should thus rather pass a NULL pointer and a -1 memory node to starpu_data_*_register functions, so that StarPU can manage memory as it wishes.
If the application wants to control memory allocation on NUMA nodes for some data, it can use starpu_malloc_on_node() and pass the memory node to the starpu_data_*_register functions to tell StarPU where the allocation was made. starpu_memory_nodes_get_count_by_kind() and starpu_memory_node_get_ids_by_type() can be used to get the memory node numbers of the CPU memory nodes.
starpu_memory_nodes_numa_id_to_devid() and starpu_memory_nodes_numa_devid_to_id() are also available to convert between OS NUMA id and StarPU memory node number.
If this variable is unset, or set to 0, CPU memory is considered as only one memory node (STARPU_MAIN_RAM) and it will be up to the OS to manage migration etc. and the StarPU scheduler will not know about it.
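For instance, a minimal sketch of letting StarPU choose where the data lives (rather than registering an application-allocated buffer), using the vector flavor of the registration functions mentioned above:

    #include <starpu.h>

    starpu_data_handle_t handle;
    /* Home node -1 and a NULL pointer: StarPU allocates the vector itself, where and when it
       sees fit, so it can follow NUMA locality when the variable described above is set to 1. */
    starpu_vector_data_register(&handle, -1, (uintptr_t) NULL, 1000, sizeof(float));
    /* ... submit tasks accessing the handle ... */
    starpu_data_unregister(handle);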
When defined, a file named after its contents will be created at the end of the execution. This file will contain the sum of the idle times of all the workers.
When defined to the path of an XML file, hwloc will use this file as input instead of detecting the current platform topology, which can save significant initialization time. To produce this XML file, use lstopo file.xml.
By default, StarPU catches the signals SIGINT, SIGSEGV and SIGTRAP to perform final actions such as dumping FxT trace files even though the application has crashed. Setting this variable to a value other than 1 will disable this behaviour. This should be done on JVM systems which may use these signals for their own needs. The flag can also be set through the field starpu_conf::catch_signals.
Choose between the different resizing policies proposed by StarPU for the hypervisor: idle, app_driven, feft_lp, teft_lp, ispeed_lp, throughput_lp, etc.
Use SC_HYPERVISOR_POLICY=help to get the list of available policies for the hypervisor.
Choose how the hypervisor should be triggered: speed if the resizing algorithm should be called whenever the speed of the context does not correspond to an optimal precomputed value, idle if the resizing algorithm should be called whenever the workers are idle for a period longer than the value indicated when configuring the hypervisor.
Indicate the moment when the resizing should be available. The value corresponds to a percentage of the total execution time of the application. Default value is the resizing frame.
Indicate the ratio of speed difference between contexts that should trigger the hypervisor. This situation may occur only when a theoretical speed could not be computed and the hypervisor has no value to compare the speed to. Otherwise, the resizing of a context is not influenced by the speed of the other contexts, but only by the value that a context should have.
By default, the speed values of the workers are printed during the execution of the application. If this environment variable is set to 1, this printing is disabled.
By default, the hypervisor resizes the contexts in a lazy way: workers are first added to a new context before being removed from the previous one. Once these workers are clearly taken into account in the new context (a task was popped there), they are removed from the previous one. However, if the application wants the change in the distribution of workers to take effect right away, this variable should be set to 0.
By default, the hypervisor uses a sample of flops when computing the speed of the contexts and of the workers. If this variable is set to time, the hypervisor uses a sample of time (10% of an approximation of the total execution time of the application).