1 Getting good performance from mdrun
2 ===================================
The GROMACS build system and the :ref:`gmx mdrun` tool have a lot of built-in
and configurable intelligence to detect your hardware and make pretty
effective use of it. For a lot of casual and serious use of
:ref:`gmx mdrun`, the automatic machinery works well enough. But to get the
most from your hardware to maximize your scientific quality, read on!
9 Hardware background information
10 -------------------------------
11 Modern computer hardware is complex and heterogeneous, so we need to
12 discuss a little bit of background information and set up some
13 definitions. Experienced HPC users can skip this section.
18 A hardware compute unit that actually executes
19 instructions. There is normally more than one core in a
20 processor, often many more.
23 A special kind of memory local to core(s) that is much faster
24 to access than main memory, kind of like the top of a human's
25 desk, compared to their filing cabinet. There are often
26 several layers of caches associated with a core.
29 A group of cores that share some kind of locality, such as a
30 shared cache. This makes it more efficient to spread
31 computational work over cores within a socket than over cores
in different sockets. Modern processors often have more than one socket.
36 A group of sockets that share coarser-level locality, such as
37 shared access to the same memory without requiring any network
38 hardware. A normal laptop or desktop computer is a node. A
39 node is often the smallest amount of a large compute cluster
40 that a user can request to use.
43 A stream of instructions for a core to execute. There are many
44 different programming abstractions that create and manage
45 spreading computation over multiple threads, such as OpenMP,
46 pthreads, winthreads, CUDA, OpenCL, and OpenACC. Some kinds of
47 hardware can map more than one software thread to a core; on
48 Intel x86 processors this is called "hyper-threading", while
49 the more general concept is often called SMT for
50 "simultaneous multi-threading". IBM Power8 can for instance use
51 up to 8 hardware threads per core.
This feature can usually be enabled or disabled either in
the hardware BIOS or through a setting in the Linux operating
system. GROMACS can typically make use of this, for a moderate
free performance boost. In most cases it will be
enabled by default, e.g. on new x86 processors, but in some cases
the system administrators might have disabled it. If that is the
case, ask if they can re-enable it for you. If you are not sure
whether it is enabled, check the CPU information printed in the
log file and compare it with CPU specifications you find online,
or with the output of a tool such as ``lscpu`` (see the example
after these definitions).
62 thread affinity (pinning)
63 By default, most operating systems allow software threads to migrate
64 between cores (or hardware threads) to help automatically balance
workload. However, the performance of :ref:`gmx mdrun` can deteriorate
if this is permitted, and can degrade dramatically when
relying on multi-threading within a rank. To avoid this,
68 :ref:`gmx mdrun` will by default
69 set the affinity of its threads to individual cores/hardware threads,
70 unless the user or software environment has already done so
(or not the entire node is used for the run, i.e. there is potential for node sharing).
73 Setting thread affinity is sometimes called thread "pinning".
The dominant multi-node parallelization scheme, which provides
77 a standardized language in which programs can be written that
78 work across more than one node.
81 In MPI, a rank is the smallest grouping of hardware used in
82 the multi-node parallelization scheme. That grouping can be
83 controlled by the user, and might correspond to a core, a
84 socket, a node, or a group of nodes. The best choice varies
85 with the hardware, software and compute task. Sometimes an MPI
86 rank is called an MPI process.
89 A graphics processing unit, which is often faster and more
90 efficient than conventional processors for particular kinds of
91 compute workloads. A GPU is always associated with a
particular node, and often a particular socket within that node.
96 A standardized technique supported by many compilers to share
97 a compute workload over multiple cores. Often combined with
98 MPI to achieve hybrid MPI/OpenMP parallelism.
101 A proprietary parallel computing framework and API developed by NVIDIA
102 that allows targeting their accelerator hardware.
103 |Gromacs| uses CUDA for GPU acceleration support with NVIDIA hardware.
106 An open standard-based parallel computing framework that consists
107 of a C99-based compiler and a programming API for targeting heterogeneous
108 and accelerator hardware. |Gromacs| uses OpenCL for GPU acceleration
109 on AMD devices (both GPUs and APUs); NVIDIA hardware is also supported.
Modern CPU cores have instructions that can perform multiple
floating-point operations in a single cycle.
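For example, on Linux one quick way to check whether SMT/hyper-threading
is enabled (as discussed under the thread entry above) is to compare the
number of logical CPUs with the core and socket counts; the exact output
format varies between ``lscpu`` versions::

    # compare "CPU(s)" with "Core(s) per socket" x "Socket(s)"
    lscpu | grep -E '^(CPU\(s\)|Thread|Core|Socket)'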
116 GROMACS background information
117 ------------------------------
118 The algorithms in :ref:`gmx mdrun` and their implementations are most relevant
119 when choosing how to make good use of the hardware. For details,
120 see the Reference Manual. The most important of these are
125 The domain decomposition (DD) algorithm decomposes the
126 (short-ranged) component of the non-bonded interactions into
127 domains that share spatial locality, which permits efficient
128 code to be written. Each domain handles all of the
129 particle-particle (PP) interactions for its members, and is
130 mapped to a single rank. Within a PP rank, OpenMP threads can
131 share the workload, or the work can be off-loaded to a
132 GPU. The PP rank also handles any bonded interactions for the
133 members of its domain. A GPU may perform work for more than
134 one PP rank, but it is normally most efficient to use a single
135 PP rank per GPU and for that rank to have thousands of
136 particles. When the work of a PP rank is done on the CPU, mdrun
137 will make extensive use of the SIMD capabilities of the
core. There are various
:ref:`command-line options <controlling-the-domain-decomposition-algorithm>`
to control the behaviour of the DD algorithm.
143 The particle-mesh Ewald (PME) algorithm treats the long-ranged
144 components of the non-bonded interactions (Coulomb and/or
Lennard-Jones). Either all ranks, or just a subset of ranks, may
participate in the work for computing the long-ranged component
(often inaccurately called simply the "PME"
148 component). Because the algorithm uses a 3D FFT that requires
149 global communication, its performance gets worse as more ranks
150 participate, which can mean it is fastest to use just a subset
151 of ranks (e.g. one-quarter to one-half of the ranks). If
152 there are separate PME ranks, then the remaining ranks handle
153 the PP work. Otherwise, all ranks do both PP and PME work.
155 Running mdrun within a single node
156 ----------------------------------
158 :ref:`gmx mdrun` can be configured and compiled in several different ways that
159 are efficient to use within a single :term:`node`. The default configuration
160 using a suitable compiler will deploy a multi-level hybrid parallelism
161 that uses CUDA, OpenMP and the threading platform native to the
162 hardware. For programming convenience, in GROMACS, those native
163 threads are used to implement on a single node the same MPI scheme as
would be used between nodes, but much more efficiently; this is called
165 thread-MPI. From a user's perspective, real MPI and thread-MPI look
166 almost the same, and GROMACS refers to MPI ranks to mean either kind,
167 except where noted. A real external MPI can be used for :ref:`gmx mdrun` within
168 a single node, but runs more slowly than the thread-MPI version.
170 By default, :ref:`gmx mdrun` will inspect the hardware available at run time
171 and do its best to make fairly efficient use of the whole node. The
172 log file, stdout and stderr are used to print diagnostics that
173 inform the user about the choices made and possible consequences.
A number of command-line parameters are available to vary the default behavior.
179 The total number of threads to use. The default, 0, will start as
180 many threads as available cores. Whether the threads are
thread-MPI ranks, or OpenMP threads within such ranks depends on other settings.
185 The total number of thread-MPI ranks to use. The default, 0,
will start one rank per GPU (if present), and otherwise one rank per core.
190 The total number of OpenMP threads per rank to start. The
191 default, 0, will start one thread on each available core.
192 Alternatively, mdrun will honor the appropriate system
193 environment variable (e.g. ``OMP_NUM_THREADS``) if set.
196 The total number of ranks to dedicate to the long-ranged
197 component of PME, if used. The default, -1, will dedicate ranks
198 only if the total number of threads is at least 12, and will use
199 around one-third of the ranks for the long-ranged component.
202 When using PME with separate PME ranks,
203 the total number of OpenMP threads per separate PME ranks.
204 The default, 0, copies the value from ``-ntomp``.
207 A string that specifies the ID numbers of the GPUs to be
208 used by corresponding PP ranks on this node. For example,
209 "0011" specifies that the lowest two PP ranks use GPU 0,
210 and the other two use GPU 1.
213 Can be set to "auto," "on" or "off" to control whether
214 mdrun will attempt to set the affinity of threads to cores.
215 Defaults to "auto," which means that if mdrun detects that all the
216 cores on the node are being used for mdrun, then it should behave
217 like "on," and attempt to set the affinities (unless they are
218 already set by something else).
221 If ``-pin on``, specifies the logical core number to
222 which mdrun should pin the first thread. When running more than
one instance of mdrun on a node, use this option to avoid
224 pinning threads from different mdrun instances to the same core.
227 If ``-pin on``, specifies the stride in logical core
228 numbers for the cores to which mdrun should pin its threads. When
229 running more than one instance of mdrun on a node, use this option
to avoid pinning threads from different mdrun instances to the
231 same core. Use the default, 0, to minimize the number of threads
232 per physical core - this lets mdrun manage the hardware-, OS- and
configuration-specific details of how to map logical cores to physical cores.
237 Can be set to "interleave," "pp_pme" or "cartesian."
238 Defaults to "interleave," which means that any separate PME ranks
239 will be mapped to MPI ranks in an order like PP, PP, PME, PP, PP,
240 PME, ... etc. This generally makes the best use of the available
241 hardware. "pp_pme" maps all PP ranks first, then all PME
242 ranks. "cartesian" is a special-purpose mapping generally useful
243 only on special torus networks with accelerated global
244 communication for Cartesian communicators. Has no effect if there
245 are no separate PME ranks.
Can be set to "auto," "cpu," "gpu," or "cpu_gpu."
249 Defaults to "auto," which uses a compatible GPU if available.
250 Setting "cpu" requires that no GPU is used. Setting "gpu" requires
251 that a compatible GPU be available and will be used. Setting
252 "cpu_gpu" permits the CPU to execute a GPU-like code path, which
253 will run slowly on the CPU and should only be used for debugging.
255 Examples for mdrun on one node
256 ^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^
262 Starts mdrun using all the available resources. mdrun
263 will automatically choose a fairly efficient division
264 into thread-MPI ranks, OpenMP threads and assign work
265 to compatible GPUs. Details will vary with hardware
266 and the kind of simulation being run.
272 Starts mdrun using 8 threads, which might be thread-MPI
273 or OpenMP threads depending on hardware and the kind
274 of simulation being run.
278 gmx mdrun -ntmpi 2 -ntomp 4
Starts mdrun using eight total threads, with two thread-MPI
ranks and four OpenMP threads per rank. You should only use
282 these options when seeking optimal performance, and
283 must take care that the ranks you create can have
284 all of their OpenMP threads run on the same socket.
285 The number of ranks must be a multiple of the number of
286 sockets, and the number of cores per node must be
287 a multiple of the number of threads per rank.
293 Starts mdrun using GPUs with IDs 1 and 2 (e.g. because
294 GPU 0 is dedicated to running a display). This requires
295 two thread-MPI ranks, and will split the available
296 CPU cores between them using OpenMP threads.
300 gmx mdrun -ntmpi 4 -gpu_id "1122"
302 Starts mdrun using four thread-MPI ranks, and maps them
303 to GPUs with IDs 1 and 2. The CPU cores available will
304 be split evenly between the ranks using OpenMP threads.
308 gmx mdrun -nt 6 -pin on -pinoffset 0
309 gmx mdrun -nt 6 -pin on -pinoffset 3
311 Starts two mdrun processes, each with six total threads.
312 Threads will have their affinities set to particular
313 logical cores, beginning from the logical core
with number 0 or 3, respectively. The above would work
315 well on an Intel CPU with six physical cores and
316 hyper-threading enabled. Use this kind of setup only
317 if restricting mdrun to a subset of cores to share a
318 node with other processes.
322 mpirun -np 2 gmx_mpi mdrun
324 When using an :ref:`gmx mdrun` compiled with external MPI,
325 this will start two ranks and as many OpenMP threads
326 as the hardware and MPI setup will permit. If the
327 MPI setup is restricted to one node, then the resulting
328 :ref:`gmx mdrun` will be local to that node.
330 Running mdrun on more than one node
331 -----------------------------------
332 This requires configuring GROMACS to build with an external MPI
333 library. By default, this mdrun executable is run with
334 :ref:`mdrun_mpi`. All of the considerations for running single-node
335 mdrun still apply, except that ``-ntmpi`` and ``-nt`` cause a fatal
error, and instead the number of ranks is controlled by the MPI environment.
338 Settings such as ``-npme`` are much more important when
339 using multiple nodes. Configuring the MPI environment to
340 produce one rank per core is generally good until one
341 approaches the strong-scaling limit. At that point, using
342 OpenMP to spread the work of an MPI rank over more than one
343 core is needed to continue to improve absolute performance.
344 The location of the scaling limit depends on the processor,
345 presence of GPUs, network, and simulation algorithm, but
346 it is worth measuring at around ~200 particles/core if you
347 need maximum throughput.
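For example, near the strong-scaling limit it can pay off to use fewer,
wider ranks; a hybrid run such as the following spreads each rank over
four cores with OpenMP (the numbers are purely illustrative and should be
tuned for your hardware and system)::

    # illustrative rank/thread counts; adjust to your node layout
    mpirun -np 32 gmx_mpi mdrun -ntomp 4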
There are further command-line parameters that are relevant in these cases.
353 Defaults to "on." If "on," will optimize various aspects of the
PME and DD algorithms, shifting load between ranks and/or GPUs to maximize throughput.
358 Can be set to "auto," "no," or "yes."
359 Defaults to "auto." Doing Dynamic Load Balancing between MPI ranks
360 is needed to maximize performance. This is particularly important
361 for molecular systems with heterogeneous particle or interaction
362 density. When a certain threshold for performance loss is
exceeded, DLB activates and shifts particles between ranks to improve performance.
367 During the simulation :ref:`gmx mdrun` must communicate between all ranks to
368 compute quantities such as kinetic energy. By default, this
happens whenever plausible, and is influenced by a number of
:ref:`mdp` options. The period between communication phases
371 must be a multiple of :mdp:`nstlist`, and defaults to
372 the minimum of :mdp:`nstcalcenergy` and :mdp:`nstlist`.
373 ``mdrun -gcom`` sets the number of steps that must elapse between
374 such communication phases, which can improve performance when
running on a lot of ranks. Note that this means that e.g.
376 temperature coupling algorithms will
377 effectively remain at constant energy until the next
378 communication phase. :ref:`gmx mdrun` will always honor the
379 setting of ``mdrun -gcom``, by changing :mdp:`nstcalcenergy`,
380 :mdp:`nstenergy`, :mdp:`nstlog`, :mdp:`nsttcouple` and/or
381 :mdp:`nstpcouple` if necessary.
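For example, to allow up to 100 steps between communication phases (the
value is illustrative; check the log file to see how the related intervals
were adjusted)::

    # 100 is an illustrative value
    mpirun -np 16 gmx_mpi mdrun -gcom 100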
383 Note that ``-tunepme`` has more effect when there is more than one
384 :term:`node`, because the cost of communication for the PP and PME
385 ranks differs. It still shifts load between PP and PME ranks, but does
386 not change the number of separate PME ranks in use.
388 Note also that ``-dlb`` and ``-tunepme`` can interfere with each other, so
389 if you experience performance variation that could result from this,
you may wish to tune PME separately, and run the result with
``mdrun -notunepme -dlb yes``.
393 The :ref:`gmx tune_pme` utility is available to search a wider
394 range of parameter space, including making safe
395 modifications to the :ref:`tpr` file, and varying ``-npme``.
396 It is only aware of the number of ranks created by
397 the MPI environment, and does not explicitly manage
398 any aspect of OpenMP during the optimization.
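A sketch of a typical invocation, where the rank count and the :ref:`tpr`
file name are only illustrative::

    # rank count and input file name are illustrative
    gmx tune_pme -np 64 -s topol.tpr

:ref:`gmx tune_pme` then launches a series of short benchmark runs with
different numbers of separate PME ranks and reports which setting was fastest.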
400 Examples for mdrun on more than one node
401 ^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^
The examples and explanations for single-node mdrun are
403 still relevant, but ``-nt`` is no longer the way
404 to choose the number of MPI ranks.
408 mpirun -np 16 gmx_mpi mdrun
410 Starts :ref:`mdrun_mpi` with 16 ranks, which are mapped to
411 the hardware by the MPI library, e.g. as specified
412 in an MPI hostfile. The available cores will be
413 automatically split among ranks using OpenMP threads,
414 depending on the hardware and any environment settings
415 such as ``OMP_NUM_THREADS``.
419 mpirun -np 16 gmx_mpi mdrun -npme 5
Starts :ref:`mdrun_mpi` with 16 ranks, as above, and
requires that 5 of them are dedicated to the PME component.
427 mpirun -np 11 gmx_mpi mdrun -ntomp 2 -npme 6 -ntomp_pme 1
Starts :ref:`mdrun_mpi` with 11 ranks, as above, and
requires that six of them are dedicated to the PME
component with one OpenMP thread each. The remaining
five do the PP component, with two OpenMP threads each.
mpirun -np 4 gmx_mpi mdrun -ntomp 6 -gpu_id 00
439 Starts :ref:`mdrun_mpi` on a machine with two nodes, using
440 four total ranks, each rank with six OpenMP threads,
441 and both ranks on a node sharing GPU with ID 0.
mpirun -np 8 gmx_mpi mdrun -ntomp 3 -gpu_id 0000
Using the same or similar hardware as above,
448 starts :ref:`mdrun_mpi` on a machine with two nodes, using
449 eight total ranks, each rank with three OpenMP threads,
450 and all four ranks on a node sharing GPU with ID 0.
451 This may or may not be faster than the previous setup
452 on the same hardware.
456 mpirun -np 20 gmx_mpi mdrun -ntomp 4 -gpu_id 0
Starts :ref:`mdrun_mpi` with 20 ranks, each rank using four OpenMP
threads, so that the CPU cores are divided evenly among the ranks.
This setup is likely to be suitable when there are ten nodes, each
with one GPU, and each node has two sockets each of four cores.
465 mpirun -np 20 gmx_mpi mdrun -gpu_id 00
Starts :ref:`mdrun_mpi` with 20 ranks, and assigns the CPU cores evenly
across ranks, with one OpenMP thread per core. This setup is likely to be
suitable when there are ten nodes, each with one GPU, and each node has
two sockets each of four cores.
474 mpirun -np 20 gmx_mpi mdrun -gpu_id 01
476 Starts :ref:`mdrun_mpi` with 20 ranks. This setup is likely
to be suitable when there are ten nodes, each with two GPUs.
482 mpirun -np 40 gmx_mpi mdrun -gpu_id 0011
484 Starts :ref:`mdrun_mpi` with 40 ranks. This setup is likely
485 to be suitable when there are ten nodes, each with two
486 GPUs, and OpenMP performs poorly on the hardware.
488 Controlling the domain decomposition algorithm
489 ----------------------------------------------
490 This section lists all the options that affect how the domain
decomposition algorithm decomposes the workload to the available parallel hardware.
495 Can be used to set the required maximum distance for inter
496 charge-group bonded interactions. Communication for two-body
497 bonded interactions below the non-bonded cut-off distance always
498 comes for free with the non-bonded communication. Particles beyond
499 the non-bonded cut-off are only communicated when they have
500 missing bonded interactions; this means that the extra cost is
501 minor and nearly independent of the value of ``-rdd``. With dynamic
502 load balancing, option ``-rdd`` also sets the lower limit for the
503 domain decomposition cell sizes. By default ``-rdd`` is determined
504 by :ref:`gmx mdrun` based on the initial coordinates. The chosen value will
505 be a balance between interaction range and communication cost.
508 On by default. When inter charge-group bonded interactions are
509 beyond the bonded cut-off distance, :ref:`gmx mdrun` terminates with an
510 error message. For pair interactions and tabulated bonds that do
511 not generate exclusions, this check can be turned off with the
512 option ``-noddcheck``.
515 When constraints are present, option ``-rcon`` influences
516 the cell size limit as well.
517 Particles connected by NC constraints, where NC is the LINCS order
plus 1, should not be beyond the smallest cell size. An error
519 message is generated when this happens, and the user should change
520 the decomposition or decrease the LINCS order and increase the
521 number of LINCS iterations. By default :ref:`gmx mdrun` estimates the
522 minimum cell size required for P-LINCS in a conservative
523 fashion. For high parallelization, it can be useful to set the
524 distance required for P-LINCS with ``-rcon``.
527 Sets the minimum allowed x, y and/or z scaling of the cells with
528 dynamic load balancing. :ref:`gmx mdrun` will ensure that the cells can
529 scale down by at least this factor. This option is used for the
530 automated spatial decomposition (when not using ``-dd``) as well as
531 for determining the number of grid pulses, which in turn sets the
532 minimum allowed cell size. Under certain circumstances the value
533 of ``-dds`` might need to be adjusted to account for high or low
534 spatial inhomogeneity of the system.
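For example, a highly parallel run could override the automatic estimates
as follows (distances in nm; the values shown are purely illustrative)::

    # illustrative values; the defaults are usually a good starting point
    gmx mdrun -rdd 2.0 -rcon 1.3 -dds 0.9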
536 Finding out how to run mdrun better
537 -----------------------------------
The Wallcycle module is used for runtime performance measurement of :ref:`gmx mdrun`.
At the end of the log file of each run, the "Real cycle and time accounting" section
provides a table with runtime statistics for different parts of the :ref:`gmx mdrun` code,
one part per row of the table.
The table contains columns indicating the number of ranks and threads that
executed the respective part of the run, and wall-time and cycle
count aggregates (across all threads and ranks) averaged over the entire run.
The last column also shows what percentage of the total runtime each row represents.
Note that the :ref:`gmx mdrun` timer resetting functionalities (``-resethway`` and
``-resetstep``) reset the performance counters and are therefore useful for avoiding
startup overhead and performance instability (e.g. due to load balancing) at the
beginning of the run.
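For example, either of the following resets the counters halfway through
the run, or at a particular step (the step number is illustrative)::

    gmx mdrun -resethway
    # the step number is illustrative
    gmx mdrun -resetstep 10000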
551 The performance counters are:
553 * Particle-particle during Particle mesh Ewald
554 * Domain decomposition
555 * Domain decomposition communication load
556 * Domain decomposition communication bounds
557 * Virtual site constraints
558 * Send X to Particle mesh Ewald
560 * Launch GPU operations
561 * Communication of coordinates
564 * Waiting + Communication of force
565 * Particle mesh Ewald
569 * PME 3D-FFT Communication
570 * PME solve Lennard-Jones
572 * PME wait for particle-particle
573 * Wait + Receive PME force
576 * Non-bonded position/force buffer operations
577 * Virtual site spread
582 * Communication of energies
584 * Add rotational forces
As performance data is collected for every run, it is essential for assessing
and tuning the performance of :ref:`gmx mdrun`. Therefore, it benefits
both code developers and users of the program.
The counters are an average of the time/cycles different parts of the simulation take,
hence they cannot directly reveal fluctuations during a single run (although comparisons across
multiple runs are still very useful).
Counters will appear in the MD log file only if the related parts of the code were
executed during the :ref:`gmx mdrun` run. There is also a special counter called "Rest" which
indicates the amount of time not accounted for by any of the counters above. Therefore,
a significant amount of "Rest" time (more than a few percent) will often be an indication of
parallelization inefficiency (e.g. serial code), and it is recommended to report this to the
|Gromacs| developers.
602 An additional set of subcounters can offer more fine-grained inspection of performance. They are:
604 * Domain decomposition redistribution
605 * DD neighbor search grid + sort
606 * DD setup communication
608 * DD make constraints
610 * Neighbor search grid local
613 * NS search non-local
617 * Listed buffer operations
619 * Ewald force correction
620 * Non-bonded position buffer operations
621 * Non-bonded force buffer operations
623 Subcounters are geared toward developers and have to be enabled during compilation. See
624 :doc:`/dev-manual/build-system` for more information.
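For example, assuming the ``GMX_CYCLE_SUBCOUNTERS`` CMake option described
in the developer guide, a build with subcounters enabled might be configured
as::

    # assumes the GMX_CYCLE_SUBCOUNTERS option; see the developer guide
    cmake .. -DGMX_CYCLE_SUBCOUNTERS=ON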
626 TODO In future patch:
627 - red flags in log files, how to interpret wallcycle output
628 - hints to devs how to extend wallcycles
630 TODO In future patch: import wiki page stuff on performance checklist; maybe here,
633 Running mdrun with GPUs
634 -----------------------
636 NVIDIA GPUs from the professional line (Tesla or Quadro) starting with
637 the Kepler generation (compute capability 3.5 and later) support changing the
processor and memory clock frequency with the help of the application clocks feature.
639 With many workloads, using higher clock rates than the default provides significant
640 performance improvements.
641 For more information see the `NVIDIA blog article`_ on this topic.
642 For |Gromacs| the highest application clock rates are optimal on all hardware
643 available to date (up to and including Maxwell, compute capability 5.2).
Application clocks can be set using the NVIDIA system management tool
646 ``nvidia-smi``. If the system permissions allow, :ref:`gmx mdrun` has
647 built-in support to set application clocks if built with NVML support. # TODO add ref to relevant section
648 Note that application clocks are a global setting, hence affect the
649 performance of all applications that use the respective GPU(s).
650 For this reason, :ref:`gmx mdrun` sets application clocks at initialization
651 to the values optimal for |Gromacs| and it restores them before exiting
to the values found at startup, unless it detects that they were altered during the run.
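For example, one might first query the supported clocks and then set the
highest supported application clocks; the clock values below (memory and
graphics clock in MHz) are only illustrative and device-specific, and
suitable permissions may be required::

    nvidia-smi -q -d SUPPORTED_CLOCKS
    # illustrative values; pick the highest pair reported above
    nvidia-smi -ac 3004,875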
655 .. _NVIDIA blog article: https://devblogs.nvidia.com/parallelforall/increase-performance-gpu-boost-k80-autoboost/
657 Reducing overheads in GPU accelerated runs
658 ^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^
660 In order for CPU cores and GPU(s) to execute concurrently, tasks are
661 launched and executed asynchronously on the GPU(s) while the CPU cores
662 execute non-offloaded force computation (like long-range PME electrostatics).
Asynchronous task launches are handled by the GPU device driver and
664 require CPU involvement. Therefore, the work of scheduling
665 GPU tasks will incur an overhead that can in some cases significantly
666 delay or interfere with the CPU execution.
668 Delays in CPU execution are caused by the latency of launching GPU tasks,
669 an overhead that can become significant as simulation ns/day increases
670 (i.e. with shorter wall-time per step).
671 The overhead is measured by :ref:`gmx mdrun` and reported in the performance
672 summary section of the log file ("Launch GPU ops" row).
673 A few percent of runtime spent in this category is normal,
674 but in fast-iterating and multi-GPU parallel runs 10% or larger overheads can be observed.
In general, a user can do little to avoid such overheads, but there
676 are a few cases where tweaks can give performance benefits.
677 In single-rank runs timing of GPU tasks is by default enabled and,
678 while in most cases its impact is small, in fast runs performance can be affected.
679 The performance impact will be most significant on NVIDIA GPUs with CUDA,
680 less on AMD with OpenCL.
681 In these cases, when more than a few percent of "Launch GPU ops" time is observed,
it is recommended to turn off timing by setting the ``GMX_DISABLE_GPU_TIMING``
683 environment variable.
In parallel runs with many ranks sharing a GPU,
launch overheads can also be reduced by starting fewer thread-MPI
or MPI ranks per GPU; e.g. most often one rank per thread or core is not optimal.
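For example, using bash syntax::

    GMX_DISABLE_GPU_TIMING=1 gmx mdrun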
688 The second type of overhead, interference of the GPU driver with CPU computation,
689 is caused by the scheduling and coordination of GPU tasks.
690 A separate GPU driver thread can require CPU resources
691 which may clash with the concurrently running non-offloaded tasks,
692 potentially degrading the performance of PME or bonded force computation.
693 This effect is most pronounced when using AMD GPUs with OpenCL with
694 all stable driver releases to date (up to and including fglrx 12.15).
695 To minimize the overhead it is recommended to
696 leave a CPU hardware thread unused when launching :ref:`gmx mdrun`,
697 especially on CPUs with high core count and/or HyperThreading enabled.
698 E.g. on a machine with a 4-core CPU and eight threads (via HyperThreading) and an AMD GPU,
699 try ``gmx mdrun -ntomp 7 -pin on``.
This will leave free CPU resources for the GPU task scheduling,
reducing interference with CPU computation.
702 Note that assigning fewer resources to :ref:`gmx mdrun` CPU computation
703 involves a tradeoff which may outweigh the benefits of reduced GPU driver overhead,
704 in particular without HyperThreading and with few CPU cores.
706 TODO In future patch: any tips not covered above
708 Running the OpenCL version of mdrun
709 -----------------------------------
711 The current version works with GCN-based AMD GPUs, and NVIDIA CUDA
712 GPUs. Make sure that you have the latest drivers installed. The
713 minimum OpenCL version required is |REQUIRED_OPENCL_MIN_VERSION|. See
714 also the :ref:`known limitations <opencl-known-limitations>`.
716 Devices from the AMD GCN architectures (all series) and NVIDIA Fermi
717 and later (compute capability 2.0) are known to work, but before
718 doing production runs always make sure that the |Gromacs| tests
719 pass successfully on the hardware.
721 The OpenCL GPU kernels are compiled at run time. Hence,
building the OpenCL program can take a few seconds, introducing a slight
723 delay in the :ref:`gmx mdrun` startup. This is not normally a
724 problem for long production MD, but you might prefer to do some kinds
725 of work, e.g. that runs very few steps, on just the CPU (e.g. see ``-nb`` above).
727 The same ``-gpu_id`` option (or ``GMX_GPU_ID`` environment variable)
728 used to select CUDA devices, or to define a mapping of GPUs to PP
729 ranks, is used for OpenCL devices.
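For example, either of the following selects the device with ID 1 for a
single-rank OpenCL run (bash syntax for the environment variable)::

    gmx mdrun -gpu_id 1
    GMX_GPU_ID=1 gmx mdrun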
731 Some other :ref:`OpenCL management <opencl-management>` environment
732 variables may be of interest to developers.
734 .. _opencl-known-limitations:
736 Known limitations of the OpenCL support
737 ^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^
739 Limitations in the current OpenCL support of interest to |Gromacs| users:
741 - No Intel devices (CPUs, GPUs or Xeon Phi) are supported
742 - Due to blocking behavior of some asynchronous task enqueuing functions
743 in the NVIDIA OpenCL runtime, with the affected driver versions there is
744 almost no performance gain when using NVIDIA GPUs.
The issue affects NVIDIA driver versions up to 349 series, but it is
known to be fixed in 352 and later driver releases.
747 - On NVIDIA GPUs the OpenCL kernels achieve much lower performance
than the equivalent CUDA kernels due to limitations of the NVIDIA OpenCL compiler.
750 - The AMD APPSDK version 3.0 ships with OpenCL compiler/runtime components,
751 libamdocl12cl64.so and libamdocl64.so (only in earlier releases),
752 that conflict with newer fglrx GPU drivers which provide the same libraries.
753 This conflict manifests in kernel launch failures as, due to the library path
754 setup, the OpenCL runtime loads the APPSDK version of the aforementioned
755 libraries instead of the ones provided by the driver installer.
The recommended workaround is to remove or rename the APPSDK versions of the offending libraries.
759 Limitations of interest to |Gromacs| developers:
761 - The current implementation is not compatible with OpenCL devices that are
not using warp/wavefronts or for which the warp/wavefront size is not a multiple of 32.
764 - Some Ewald tabulated kernels are known to produce incorrect results, so
765 (correct) analytical kernels are used instead.