1 \input texinfo @c -*-texinfo-*-
4 @setfilename libgomp.info
10 Copyright @copyright{} 2006-2020 Free Software Foundation, Inc.
12 Permission is granted to copy, distribute and/or modify this document
13 under the terms of the GNU Free Documentation License, Version 1.3 or
14 any later version published by the Free Software Foundation; with the
15 Invariant Sections being ``Funding Free Software'', the Front-Cover
16 texts being (a) (see below), and with the Back-Cover Texts being (b)
17 (see below). A copy of the license is included in the section entitled
18 ``GNU Free Documentation License''.
20 (a) The FSF's Front-Cover Text is:
24 (b) The FSF's Back-Cover Text is:
26 You have freedom to copy and modify this GNU Manual, like GNU
27 software. Copies published by the Free Software Foundation raise
28 funds for GNU development.
32 @dircategory GNU Libraries
34 * libgomp: (libgomp). GNU Offloading and Multi Processing Runtime Library.
37 This manual documents libgomp, the GNU Offloading and Multi Processing
38 Runtime library. This is the GNU implementation of the OpenMP and
39 OpenACC APIs for parallel and accelerator programming in C/C++ and
42 Published by the Free Software Foundation
43 51 Franklin Street, Fifth Floor
44 Boston, MA 02110-1301 USA
50 @setchapternewpage odd
53 @title GNU Offloading and Multi Processing Runtime Library
54 @subtitle The GNU OpenMP and OpenACC Implementation
56 @vskip 0pt plus 1filll
57 @comment For the @value{version-GCC} Version*
59 Published by the Free Software Foundation @*
60 51 Franklin Street, Fifth Floor@*
61 Boston, MA 02110-1301, USA@*
75 This manual documents the usage of libgomp, the GNU Offloading and
76 Multi Processing Runtime Library. This includes the GNU
77 implementation of the @uref{https://www.openmp.org, OpenMP} Application
78 Programming Interface (API) for multi-platform shared-memory parallel
79 programming in C/C++ and Fortran, and the GNU implementation of the
80 @uref{https://www.openacc.org, OpenACC} Application Programming
81 Interface (API) for offloading of code to accelerator devices in C/C++
84 Originally, libgomp implemented the GNU OpenMP Runtime Library. Based
85 on this, support for OpenACC and offloading (both OpenACC and OpenMP
86 4's target construct) was added later on, and the library's name
87 changed to GNU Offloading and Multi Processing Runtime Library.
92 @comment When you add a new menu item, please keep the right hand
93 @comment aligned to the same column. Do not use tabs. This provides
94 @comment better formatting.
97 * Enabling OpenMP:: How to enable OpenMP for your applications.
98 * OpenMP Runtime Library Routines: Runtime Library Routines.
99 The OpenMP runtime application programming
101 * OpenMP Environment Variables: Environment Variables.
102 Influencing OpenMP runtime behavior with
103 environment variables.
104 * Enabling OpenACC:: How to enable OpenACC for your
106 * OpenACC Runtime Library Routines:: The OpenACC runtime application
107 programming interface.
108 * OpenACC Environment Variables:: Influencing OpenACC runtime behavior with
109 environment variables.
110 * CUDA Streams Usage:: Notes on the implementation of
111 asynchronous operations.
112 * OpenACC Library Interoperability:: OpenACC library interoperability with the
113 NVIDIA CUBLAS library.
114 * OpenACC Profiling Interface::
115 * The libgomp ABI:: Notes on the external ABI presented by libgomp.
116 * Reporting Bugs:: How to report bugs in the GNU Offloading and
117 Multi Processing Runtime Library.
118 * Copying:: GNU general public license says
119 how you can copy and share libgomp.
120 * GNU Free Documentation License::
121 How you can copy and share this manual.
122 * Funding:: How to help assure continued work for free
124 * Library Index:: Index of this documentation.
128 @c ---------------------------------------------------------------------
130 @c ---------------------------------------------------------------------
132 @node Enabling OpenMP
133 @chapter Enabling OpenMP
135 To activate the OpenMP extensions for C/C++ and Fortran, the compile-time
136 flag @command{-fopenmp} must be specified. This enables the OpenMP directive
137 @code{#pragma omp} in C/C++ and @code{!$omp} directives in free form,
138 @code{c$omp}, @code{*$omp} and @code{!$omp} directives in fixed form,
139 @code{!$} conditional compilation sentinels in free form and @code{c$},
140 @code{*$} and @code{!$} sentinels in fixed form, for Fortran. The flag also
141 arranges for automatic linking of the OpenMP runtime library
142 (@ref{Runtime Library Routines}).
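As a minimal illustration (not taken from the specification), the following C program can be built with @command{gcc -fopenmp}; the flag enables the directive and links libgomp automatically.

@smallexample
/* hello.c -- build with: gcc -fopenmp hello.c -o hello */
#include <stdio.h>
#include <omp.h>

int
main (void)
@{
  #pragma omp parallel
  printf ("Hello from thread %d of %d\n",
          omp_get_thread_num (), omp_get_num_threads ());
  return 0;
@}
@end smallexample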
144 A complete description of all OpenMP directives accepted may be found in
145 the @uref{https://www.openmp.org, OpenMP Application Program Interface} manual,
149 @c ---------------------------------------------------------------------
150 @c OpenMP Runtime Library Routines
151 @c ---------------------------------------------------------------------
153 @node Runtime Library Routines
154 @chapter OpenMP Runtime Library Routines
156 The runtime routines described here are defined by Section 3 of the OpenMP
157 specification in version 4.5. The routines are structured in the following
161 Control threads, processors and the parallel environment. They have C
162 linkage, and do not throw exceptions.
164 * omp_get_active_level:: Number of active parallel regions
165 * omp_get_ancestor_thread_num:: Ancestor thread ID
166 * omp_get_cancellation:: Whether cancellation support is enabled
167 * omp_get_default_device:: Get the default device for target regions
168 * omp_get_dynamic:: Dynamic teams setting
169 * omp_get_level:: Number of parallel regions
170 * omp_get_max_active_levels:: Current maximum number of active regions
171 * omp_get_max_task_priority:: Maximum task priority value that can be set
172 * omp_get_max_threads:: Maximum number of threads of parallel region
173 * omp_get_nested:: Nested parallel regions
174 * omp_get_num_devices:: Number of target devices
175 * omp_get_num_procs:: Number of processors online
176 * omp_get_num_teams:: Number of teams
177 * omp_get_num_threads:: Size of the active team
178 * omp_get_proc_bind:: Whether threads may be moved between CPUs
179 * omp_get_schedule:: Obtain the runtime scheduling method
180 * omp_get_supported_active_levels:: Maximum number of active regions supported
181 * omp_get_team_num:: Get team number
182 * omp_get_team_size:: Number of threads in a team
183 * omp_get_thread_limit:: Maximum number of threads
184 * omp_get_thread_num:: Current thread ID
185 * omp_in_parallel:: Whether a parallel region is active
186 * omp_in_final:: Whether in final or included task region
187 * omp_is_initial_device:: Whether executing on the host device
188 * omp_set_default_device:: Set the default device for target regions
189 * omp_set_dynamic:: Enable/disable dynamic teams
190 * omp_set_max_active_levels:: Limits the number of active parallel regions
191 * omp_set_nested:: Enable/disable nested parallel regions
192 * omp_set_num_threads:: Set upper team size limit
193 * omp_set_schedule:: Set the runtime scheduling method
195 Initialize, set, test, unset and destroy simple and nested locks.
197 * omp_init_lock:: Initialize simple lock
198 * omp_set_lock:: Wait for and set simple lock
199 * omp_test_lock:: Test and set simple lock if available
200 * omp_unset_lock:: Unset simple lock
201 * omp_destroy_lock:: Destroy simple lock
202 * omp_init_nest_lock:: Initialize nested lock
203 * omp_set_nest_lock:: Wait for and set nested lock
204 * omp_test_nest_lock:: Test and set nested lock if available
205 * omp_unset_nest_lock:: Unset nested lock
206 * omp_destroy_nest_lock:: Destroy nested lock
208 Portable, thread-based, wall clock timer.
210 * omp_get_wtick:: Get timer precision.
211 * omp_get_wtime:: Elapsed wall clock time.
216 @node omp_get_active_level
217 @section @code{omp_get_active_level} -- Number of parallel regions
219 @item @emph{Description}:
220 This function returns the nesting level of the active parallel blocks,
221 which enclose the call to this routine.
224 @multitable @columnfractions .20 .80
225 @item @emph{Prototype}: @tab @code{int omp_get_active_level(void);}
228 @item @emph{Fortran}:
229 @multitable @columnfractions .20 .80
230 @item @emph{Interface}: @tab @code{integer function omp_get_active_level()}
233 @item @emph{See also}:
234 @ref{omp_get_level}, @ref{omp_get_max_active_levels}, @ref{omp_set_max_active_levels}
236 @item @emph{Reference}:
237 @uref{https://www.openmp.org, OpenMP specification v4.5}, Section 3.2.20.
242 @node omp_get_ancestor_thread_num
243 @section @code{omp_get_ancestor_thread_num} -- Ancestor thread ID
245 @item @emph{Description}:
246 This function returns the thread identification number for the given
247 nesting level of the current thread. For values of @var{level} outside
248 the range zero to @code{omp_get_level}, -1 is returned; if @var{level} is
249 @code{omp_get_level}, the result is identical to @code{omp_get_thread_num}.
252 @multitable @columnfractions .20 .80
253 @item @emph{Prototype}: @tab @code{int omp_get_ancestor_thread_num(int level);}
256 @item @emph{Fortran}:
257 @multitable @columnfractions .20 .80
258 @item @emph{Interface}: @tab @code{integer function omp_get_ancestor_thread_num(level)}
259 @item @tab @code{integer level}
262 @item @emph{See also}:
263 @ref{omp_get_level}, @ref{omp_get_thread_num}, @ref{omp_get_team_size}
265 @item @emph{Reference}:
266 @uref{https://www.openmp.org, OpenMP specification v4.5}, Section 3.2.18.
271 @node omp_get_cancellation
272 @section @code{omp_get_cancellation} -- Whether cancellation support is enabled
274 @item @emph{Description}:
275 This function returns @code{true} if cancellation is activated, @code{false}
276 otherwise. Here, @code{true} and @code{false} represent their language-specific
277 counterparts. Unless @env{OMP_CANCELLATION} is set to @code{TRUE}, cancellations are
281 @multitable @columnfractions .20 .80
282 @item @emph{Prototype}: @tab @code{int omp_get_cancellation(void);}
285 @item @emph{Fortran}:
286 @multitable @columnfractions .20 .80
287 @item @emph{Interface}: @tab @code{logical function omp_get_cancellation()}
290 @item @emph{See also}:
291 @ref{OMP_CANCELLATION}
293 @item @emph{Reference}:
294 @uref{https://www.openmp.org, OpenMP specification v4.5}, Section 3.2.9.
299 @node omp_get_default_device
300 @section @code{omp_get_default_device} -- Get the default device for target regions
302 @item @emph{Description}:
303 Get the default device for target regions without a device clause.
306 @multitable @columnfractions .20 .80
307 @item @emph{Prototype}: @tab @code{int omp_get_default_device(void);}
310 @item @emph{Fortran}:
311 @multitable @columnfractions .20 .80
312 @item @emph{Interface}: @tab @code{integer function omp_get_default_device()}
315 @item @emph{See also}:
316 @ref{OMP_DEFAULT_DEVICE}, @ref{omp_set_default_device}
318 @item @emph{Reference}:
319 @uref{https://www.openmp.org, OpenMP specification v4.5}, Section 3.2.30.
324 @node omp_get_dynamic
325 @section @code{omp_get_dynamic} -- Dynamic teams setting
327 @item @emph{Description}:
328 This function returns @code{true} if dynamic adjustment of team sizes is enabled, @code{false} otherwise.
329 Here, @code{true} and @code{false} represent their language-specific
332 The dynamic team setting may be initialized at startup by the
333 @env{OMP_DYNAMIC} environment variable or at runtime using
334 @code{omp_set_dynamic}. If undefined, dynamic adjustment is
338 @multitable @columnfractions .20 .80
339 @item @emph{Prototype}: @tab @code{int omp_get_dynamic(void);}
342 @item @emph{Fortran}:
343 @multitable @columnfractions .20 .80
344 @item @emph{Interface}: @tab @code{logical function omp_get_dynamic()}
347 @item @emph{See also}:
348 @ref{omp_set_dynamic}, @ref{OMP_DYNAMIC}
350 @item @emph{Reference}:
351 @uref{https://www.openmp.org, OpenMP specification v4.5}, Section 3.2.8.
357 @section @code{omp_get_level} -- Obtain the current nesting level
359 @item @emph{Description}:
360 This function returns the nesting level of the parallel blocks,
361 which enclose the call to this routine.
364 @multitable @columnfractions .20 .80
365 @item @emph{Prototype}: @tab @code{int omp_get_level(void);}
368 @item @emph{Fortran}:
369 @multitable @columnfractions .20 .80
370 @item @emph{Interface}: @tab @code{integer function omp_get_level()}
373 @item @emph{See also}:
374 @ref{omp_get_active_level}
376 @item @emph{Reference}:
377 @uref{https://www.openmp.org, OpenMP specification v4.5}, Section 3.2.17.
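@item @emph{Example}:
An illustrative sketch contrasting @code{omp_get_level} with @code{omp_get_active_level}; with nested parallelism left at its default (disabled), the inner region is inactive.

@smallexample
#include <stdio.h>
#include <omp.h>

int
main (void)
@{
  #pragma omp parallel num_threads(2)
  @{
    /* Nested parallelism is disabled by default, so this inner region
       runs with a team of one thread and is therefore inactive.  */
    #pragma omp parallel
    @{
      #pragma omp master
      printf ("level = %d, active level = %d\n",
              omp_get_level (), omp_get_active_level ());
    @}
  @}
  return 0;
@}
@end smallexample

Each outer thread typically prints @code{level = 2, active level = 1}.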
382 @node omp_get_max_active_levels
383 @section @code{omp_get_max_active_levels} -- Current maximum number of active regions
385 @item @emph{Description}:
386 This function obtains the maximum allowed number of nested, active parallel regions.
389 @multitable @columnfractions .20 .80
390 @item @emph{Prototype}: @tab @code{int omp_get_max_active_levels(void);}
393 @item @emph{Fortran}:
394 @multitable @columnfractions .20 .80
395 @item @emph{Interface}: @tab @code{integer function omp_get_max_active_levels()}
398 @item @emph{See also}:
399 @ref{omp_set_max_active_levels}, @ref{omp_get_active_level}
401 @item @emph{Reference}:
402 @uref{https://www.openmp.org, OpenMP specification v4.5}, Section 3.2.16.
406 @node omp_get_max_task_priority
407 @section @code{omp_get_max_task_priority} -- Maximum priority value that can be set for tasks
410 @item @emph{Description}:
411 This function obtains the maximum allowed priority number for tasks.
414 @multitable @columnfractions .20 .80
415 @item @emph{Prototype}: @tab @code{int omp_get_max_task_priority(void);}
418 @item @emph{Fortran}:
419 @multitable @columnfractions .20 .80
420 @item @emph{Interface}: @tab @code{integer function omp_get_max_task_priority()}
423 @item @emph{Reference}:
424 @uref{https://www.openmp.org, OpenMP specification v4.5}, Section 3.2.29.
428 @node omp_get_max_threads
429 @section @code{omp_get_max_threads} -- Maximum number of threads of parallel region
431 @item @emph{Description}:
432 Return the maximum number of threads used for a parallel region that does
433 not use the clause @code{num_threads}.
436 @multitable @columnfractions .20 .80
437 @item @emph{Prototype}: @tab @code{int omp_get_max_threads(void);}
440 @item @emph{Fortran}:
441 @multitable @columnfractions .20 .80
442 @item @emph{Interface}: @tab @code{integer function omp_get_max_threads()}
445 @item @emph{See also}:
446 @ref{omp_set_num_threads}, @ref{omp_set_dynamic}, @ref{omp_get_thread_limit}
448 @item @emph{Reference}:
449 @uref{https://www.openmp.org, OpenMP specification v4.5}, Section 3.2.3.
455 @section @code{omp_get_nested} -- Nested parallel regions
457 @item @emph{Description}:
458 This function returns @code{true} if nested parallel regions are
459 enabled, @code{false} otherwise. Here, @code{true} and @code{false}
460 represent their language-specific counterparts.
462 Nested parallel regions may be initialized at startup by the
463 @env{OMP_NESTED} environment variable or at runtime using
464 @code{omp_set_nested}. If undefined, nested parallel regions are
468 @multitable @columnfractions .20 .80
469 @item @emph{Prototype}: @tab @code{int omp_get_nested(void);}
472 @item @emph{Fortran}:
473 @multitable @columnfractions .20 .80
474 @item @emph{Interface}: @tab @code{logical function omp_get_nested()}
477 @item @emph{See also}:
478 @ref{omp_set_nested}, @ref{OMP_NESTED}
480 @item @emph{Reference}:
481 @uref{https://www.openmp.org, OpenMP specification v4.5}, Section 3.2.11.
486 @node omp_get_num_devices
487 @section @code{omp_get_num_devices} -- Number of target devices
489 @item @emph{Description}:
490 Returns the number of target devices.
493 @multitable @columnfractions .20 .80
494 @item @emph{Prototype}: @tab @code{int omp_get_num_devices(void);}
497 @item @emph{Fortran}:
498 @multitable @columnfractions .20 .80
499 @item @emph{Interface}: @tab @code{integer function omp_get_num_devices()}
502 @item @emph{Reference}:
503 @uref{https://www.openmp.org, OpenMP specification v4.5}, Section 3.2.31.
508 @node omp_get_num_procs
509 @section @code{omp_get_num_procs} -- Number of processors online
511 @item @emph{Description}:
512 Returns the number of processors online on the device on which it is called.
515 @multitable @columnfractions .20 .80
516 @item @emph{Prototype}: @tab @code{int omp_get_num_procs(void);}
519 @item @emph{Fortran}:
520 @multitable @columnfractions .20 .80
521 @item @emph{Interface}: @tab @code{integer function omp_get_num_procs()}
524 @item @emph{Reference}:
525 @uref{https://www.openmp.org, OpenMP specification v4.5}, Section 3.2.5.
530 @node omp_get_num_teams
531 @section @code{omp_get_num_teams} -- Number of teams
533 @item @emph{Description}:
534 Returns the number of teams in the current teams region.
537 @multitable @columnfractions .20 .80
538 @item @emph{Prototype}: @tab @code{int omp_get_num_teams(void);}
541 @item @emph{Fortran}:
542 @multitable @columnfractions .20 .80
543 @item @emph{Interface}: @tab @code{integer function omp_get_num_teams()}
546 @item @emph{Reference}:
547 @uref{https://www.openmp.org, OpenMP specification v4.5}, Section 3.2.32.
552 @node omp_get_num_threads
553 @section @code{omp_get_num_threads} -- Size of the active team
555 @item @emph{Description}:
556 Returns the number of threads in the current team. In a sequential section of
557 the program @code{omp_get_num_threads} returns 1.
559 The default team size may be initialized at startup by the
560 @env{OMP_NUM_THREADS} environment variable. At runtime, the size
561 of the current team may be set either by the @code{num_threads}
562 clause or by @code{omp_set_num_threads}. If none of the above were
563 used to define a specific value and @env{OMP_DYNAMIC} is disabled,
564 one thread per CPU online is used.
567 @multitable @columnfractions .20 .80
568 @item @emph{Prototype}: @tab @code{int omp_get_num_threads(void);}
571 @item @emph{Fortran}:
572 @multitable @columnfractions .20 .80
573 @item @emph{Interface}: @tab @code{integer function omp_get_num_threads()}
576 @item @emph{See also}:
577 @ref{omp_get_max_threads}, @ref{omp_set_num_threads}, @ref{OMP_NUM_THREADS}
579 @item @emph{Reference}:
580 @uref{https://www.openmp.org, OpenMP specification v4.5}, Section 3.2.2.
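@item @emph{Example}:
A sketch showing the value in a sequential part and inside a parallel region; the requested team size of four may be reduced by the implementation.

@smallexample
#include <stdio.h>
#include <omp.h>

int
main (void)
@{
  /* Outside of a parallel region this always returns 1.  */
  printf ("sequential: %d\n", omp_get_num_threads ());

  #pragma omp parallel num_threads(4)
  @{
    #pragma omp single
    printf ("parallel: %d\n", omp_get_num_threads ());
  @}
  return 0;
@}
@end smallexample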
585 @node omp_get_proc_bind
586 @section @code{omp_get_proc_bind} -- Whether threads may be moved between CPUs
588 @item @emph{Description}:
589 This function returns the currently active thread affinity policy, which is
590 set via @env{OMP_PROC_BIND}. Possible values are @code{omp_proc_bind_false},
591 @code{omp_proc_bind_true}, @code{omp_proc_bind_master},
592 @code{omp_proc_bind_close} and @code{omp_proc_bind_spread}.
595 @multitable @columnfractions .20 .80
596 @item @emph{Prototype}: @tab @code{omp_proc_bind_t omp_get_proc_bind(void);}
599 @item @emph{Fortran}:
600 @multitable @columnfractions .20 .80
601 @item @emph{Interface}: @tab @code{integer(kind=omp_proc_bind_kind) function omp_get_proc_bind()}
604 @item @emph{See also}:
605 @ref{OMP_PROC_BIND}, @ref{OMP_PLACES}, @ref{GOMP_CPU_AFFINITY},
607 @item @emph{Reference}:
608 @uref{https://www.openmp.org, OpenMP specification v4.5}, Section 3.2.22.
613 @node omp_get_schedule
614 @section @code{omp_get_schedule} -- Obtain the runtime scheduling method
616 @item @emph{Description}:
617 Obtain the runtime scheduling method. The @var{kind} argument will be
618 set to the value @code{omp_sched_static}, @code{omp_sched_dynamic},
619 @code{omp_sched_guided} or @code{omp_sched_auto}. The second argument,
620 @var{chunk_size}, is set to the chunk size.
623 @multitable @columnfractions .20 .80
624 @item @emph{Prototype}: @tab @code{void omp_get_schedule(omp_sched_t *kind, int *chunk_size);}
627 @item @emph{Fortran}:
628 @multitable @columnfractions .20 .80
629 @item @emph{Interface}: @tab @code{subroutine omp_get_schedule(kind, chunk_size)}
630 @item @tab @code{integer(kind=omp_sched_kind) kind}
631 @item @tab @code{integer chunk_size}
634 @item @emph{See also}:
635 @ref{omp_set_schedule}, @ref{OMP_SCHEDULE}
637 @item @emph{Reference}:
638 @uref{https://www.openmp.org, OpenMP specification v4.5}, Section 3.2.13.
642 @node omp_get_supported_active_levels
643 @section @code{omp_get_supported_active_levels} -- Maximum number of active regions supported
645 @item @emph{Description}:
646 This function returns the maximum number of nested, active parallel regions
647 supported by this implementation.
650 @multitable @columnfractions .20 .80
651 @item @emph{Prototype}: @tab @code{int omp_get_supported_active_levels(void);}
654 @item @emph{Fortran}:
655 @multitable @columnfractions .20 .80
656 @item @emph{Interface}: @tab @code{integer function omp_get_supported_active_levels()}
659 @item @emph{See also}:
660 @ref{omp_get_max_active_levels}, @ref{omp_set_max_active_levels}
662 @item @emph{Reference}:
663 @uref{https://www.openmp.org, OpenMP specification v5.0}, Section 3.2.15.
668 @node omp_get_team_num
669 @section @code{omp_get_team_num} -- Get team number
671 @item @emph{Description}:
672 Returns the team number of the calling thread.
675 @multitable @columnfractions .20 .80
676 @item @emph{Prototype}: @tab @code{int omp_get_team_num(void);}
679 @item @emph{Fortran}:
680 @multitable @columnfractions .20 .80
681 @item @emph{Interface}: @tab @code{integer function omp_get_team_num()}
684 @item @emph{Reference}:
685 @uref{https://www.openmp.org, OpenMP specification v4.5}, Section 3.2.33.
690 @node omp_get_team_size
691 @section @code{omp_get_team_size} -- Number of threads in a team
693 @item @emph{Description}:
694 This function returns the number of threads in a thread team to which
695 either the current thread or its ancestor belongs. For values of @var{level}
696 outside the range zero to @code{omp_get_level}, -1 is returned; if @var{level}
697 is zero, 1 is returned, and for @code{omp_get_level} the result is identical
698 to @code{omp_get_num_threads}.
701 @multitable @columnfractions .20 .80
702 @item @emph{Prototype}: @tab @code{int omp_get_team_size(int level);}
705 @item @emph{Fortran}:
706 @multitable @columnfractions .20 .80
707 @item @emph{Interface}: @tab @code{integer function omp_get_team_size(level)}
708 @item @tab @code{integer level}
711 @item @emph{See also}:
712 @ref{omp_get_num_threads}, @ref{omp_get_level}, @ref{omp_get_ancestor_thread_num}
714 @item @emph{Reference}:
715 @uref{https://www.openmp.org, OpenMP specification v4.5}, Section 3.2.19.
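@item @emph{Example}:
A sketch querying the team sizes of both nesting levels from inside a nested region; nested parallelism is enabled explicitly so that the inner region can be active, and the requested team sizes may be reduced by the implementation.

@smallexample
#include <stdio.h>
#include <omp.h>

int
main (void)
@{
  omp_set_nested (1);   /* allow the inner region to be active */
  #pragma omp parallel num_threads(2)
  @{
    #pragma omp parallel num_threads(3)
    @{
      #pragma omp master
      printf ("outer team: %d threads (ancestor %d), inner team: %d threads\n",
              omp_get_team_size (1),
              omp_get_ancestor_thread_num (1),
              omp_get_team_size (2));
    @}
  @}
  return 0;
@}
@end smallexample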
720 @node omp_get_thread_limit
721 @section @code{omp_get_thread_limit} -- Maximum number of threads
723 @item @emph{Description}:
724 Return the maximum number of threads available to the program.
727 @multitable @columnfractions .20 .80
728 @item @emph{Prototype}: @tab @code{int omp_get_thread_limit(void);}
731 @item @emph{Fortran}:
732 @multitable @columnfractions .20 .80
733 @item @emph{Interface}: @tab @code{integer function omp_get_thread_limit()}
736 @item @emph{See also}:
737 @ref{omp_get_max_threads}, @ref{OMP_THREAD_LIMIT}
739 @item @emph{Reference}:
740 @uref{https://www.openmp.org, OpenMP specification v4.5}, Section 3.2.14.
745 @node omp_get_thread_num
746 @section @code{omp_get_thread_num} -- Current thread ID
748 @item @emph{Description}:
749 Returns a unique thread identification number within the current team.
750 In sequential parts of the program, @code{omp_get_thread_num}
751 always returns 0. In parallel regions the return value varies
752 from 0 to @code{omp_get_num_threads}-1 inclusive. The return
753 value of the master thread of a team is always 0.
756 @multitable @columnfractions .20 .80
757 @item @emph{Prototype}: @tab @code{int omp_get_thread_num(void);}
760 @item @emph{Fortran}:
761 @multitable @columnfractions .20 .80
762 @item @emph{Interface}: @tab @code{integer function omp_get_thread_num()}
765 @item @emph{See also}:
766 @ref{omp_get_num_threads}, @ref{omp_get_ancestor_thread_num}
768 @item @emph{Reference}:
769 @uref{https://www.openmp.org, OpenMP specification v4.5}, Section 3.2.4.
774 @node omp_in_parallel
775 @section @code{omp_in_parallel} -- Whether a parallel region is active
777 @item @emph{Description}:
778 This function returns @code{true} if currently running in parallel,
779 @code{false} otherwise. Here, @code{true} and @code{false} represent
780 their language-specific counterparts.
783 @multitable @columnfractions .20 .80
784 @item @emph{Prototype}: @tab @code{int omp_in_parallel(void);}
787 @item @emph{Fortran}:
788 @multitable @columnfractions .20 .80
789 @item @emph{Interface}: @tab @code{logical function omp_in_parallel()}
792 @item @emph{Reference}:
793 @uref{https://www.openmp.org, OpenMP specification v4.5}, Section 3.2.6.
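@item @emph{Example}:
A sketch of a routine that only opens a parallel region when it is not already executing inside one, a common pattern for library code:

@smallexample
#include <omp.h>

void
scale (double *a, int n, double factor)
@{
  if (omp_in_parallel ())
    @{
      /* Caller is already parallel: avoid oversubscribing.  */
      for (int i = 0; i < n; i++)
        a[i] *= factor;
    @}
  else
    @{
      #pragma omp parallel for
      for (int i = 0; i < n; i++)
        a[i] *= factor;
    @}
@}
@end smallexample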
798 @section @code{omp_in_final} -- Whether in final or included task region
800 @item @emph{Description}:
801 This function returns @code{true} if currently running in a final
802 or included task region, @code{false} otherwise. Here, @code{true}
803 and @code{false} represent their language-specific counterparts.
806 @multitable @columnfractions .20 .80
807 @item @emph{Prototype}: @tab @code{int omp_in_final(void);}
810 @item @emph{Fortran}:
811 @multitable @columnfractions .20 .80
812 @item @emph{Interface}: @tab @code{logical function omp_in_final()}
815 @item @emph{Reference}:
816 @uref{https://www.openmp.org, OpenMP specification v4.5}, Section 3.2.21.
821 @node omp_is_initial_device
822 @section @code{omp_is_initial_device} -- Whether executing on the host device
824 @item @emph{Description}:
825 This function returns @code{true} if currently running on the host device,
826 @code{false} otherwise. Here, @code{true} and @code{false} represent
827 their language-specific counterparts.
830 @multitable @columnfractions .20 .80
831 @item @emph{Prototype}: @tab @code{int omp_is_initial_device(void);}
834 @item @emph{Fortran}:
835 @multitable @columnfractions .20 .80
836 @item @emph{Interface}: @tab @code{logical function omp_is_initial_device()}
839 @item @emph{Reference}:
840 @uref{https://www.openmp.org, OpenMP specification v4.5}, Section 3.2.34.
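@item @emph{Example}:
A sketch that reports whether a @code{target} region was offloaded or executed on the host as a fallback; the outcome depends on the configured offload devices.

@smallexample
#include <stdio.h>
#include <omp.h>

int
main (void)
@{
  int on_host = 1;
  #pragma omp target map(from: on_host)
  on_host = omp_is_initial_device ();

  printf ("target region ran on the %s\n", on_host ? "host" : "device");
  return 0;
@}
@end smallexample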
845 @node omp_set_default_device
846 @section @code{omp_set_default_device} -- Set the default device for target regions
848 @item @emph{Description}:
849 Set the default device for target regions without a device clause. The argument
850 shall be a nonnegative device number.
853 @multitable @columnfractions .20 .80
854 @item @emph{Prototype}: @tab @code{void omp_set_default_device(int device_num);}
857 @item @emph{Fortran}:
858 @multitable @columnfractions .20 .80
859 @item @emph{Interface}: @tab @code{subroutine omp_set_default_device(device_num)}
860 @item @tab @code{integer device_num}
863 @item @emph{See also}:
864 @ref{OMP_DEFAULT_DEVICE}, @ref{omp_get_default_device}
866 @item @emph{Reference}:
867 @uref{https://www.openmp.org, OpenMP specification v4.5}, Section 3.2.29.
872 @node omp_set_dynamic
873 @section @code{omp_set_dynamic} -- Enable/disable dynamic teams
875 @item @emph{Description}:
876 Enable or disable the dynamic adjustment of the number of threads
877 within a team. The function takes the language-specific equivalent
878 of @code{true} and @code{false}, where @code{true} enables dynamic
879 adjustment of team sizes and @code{false} disables it.
882 @multitable @columnfractions .20 .80
883 @item @emph{Prototype}: @tab @code{void omp_set_dynamic(int dynamic_threads);}
886 @item @emph{Fortran}:
887 @multitable @columnfractions .20 .80
888 @item @emph{Interface}: @tab @code{subroutine omp_set_dynamic(dynamic_threads)}
889 @item @tab @code{logical, intent(in) :: dynamic_threads}
892 @item @emph{See also}:
893 @ref{OMP_DYNAMIC}, @ref{omp_get_dynamic}
895 @item @emph{Reference}:
896 @uref{https://www.openmp.org, OpenMP specification v4.5}, Section 3.2.7.
901 @node omp_set_max_active_levels
902 @section @code{omp_set_max_active_levels} -- Limits the number of active parallel regions
904 @item @emph{Description}:
905 This function limits the maximum allowed number of nested, active
906 parallel regions. @var{max_levels} must be less than or equal to
907 the value returned by @code{omp_get_supported_active_levels}.
910 @multitable @columnfractions .20 .80
911 @item @emph{Prototype}: @tab @code{void omp_set_max_active_levels(int max_levels);}
914 @item @emph{Fortran}:
915 @multitable @columnfractions .20 .80
916 @item @emph{Interface}: @tab @code{subroutine omp_set_max_active_levels(max_levels)}
917 @item @tab @code{integer max_levels}
920 @item @emph{See also}:
921 @ref{omp_get_max_active_levels}, @ref{omp_get_active_level},
922 @ref{omp_get_supported_active_levels}
924 @item @emph{Reference}:
925 @uref{https://www.openmp.org, OpenMP specification v4.5}, Section 3.2.15.
931 @section @code{omp_set_nested} -- Enable/disable nested parallel regions
933 @item @emph{Description}:
934 Enable or disable nested parallel regions, i.e., whether team members
935 are allowed to create new teams. The function takes the language-specific
936 equivalent of @code{true} and @code{false}, where @code{true} enables
937 nested parallel regions and @code{false} disables them.
940 @multitable @columnfractions .20 .80
941 @item @emph{Prototype}: @tab @code{void omp_set_nested(int nested);}
944 @item @emph{Fortran}:
945 @multitable @columnfractions .20 .80
946 @item @emph{Interface}: @tab @code{subroutine omp_set_nested(nested)}
947 @item @tab @code{logical, intent(in) :: nested}
950 @item @emph{See also}:
951 @ref{OMP_NESTED}, @ref{omp_get_nested}
953 @item @emph{Reference}:
954 @uref{https://www.openmp.org, OpenMP specification v4.5}, Section 3.2.10.
959 @node omp_set_num_threads
960 @section @code{omp_set_num_threads} -- Set upper team size limit
962 @item @emph{Description}:
963 Specifies the number of threads used by default in subsequent parallel
964 regions, if those do not specify a @code{num_threads} clause. The
965 argument of @code{omp_set_num_threads} shall be a positive integer.
968 @multitable @columnfractions .20 .80
969 @item @emph{Prototype}: @tab @code{void omp_set_num_threads(int num_threads);}
972 @item @emph{Fortran}:
973 @multitable @columnfractions .20 .80
974 @item @emph{Interface}: @tab @code{subroutine omp_set_num_threads(num_threads)}
975 @item @tab @code{integer, intent(in) :: num_threads}
978 @item @emph{See also}:
979 @ref{OMP_NUM_THREADS}, @ref{omp_get_num_threads}, @ref{omp_get_max_threads}
981 @item @emph{Reference}:
982 @uref{https://www.openmp.org, OpenMP specification v4.5}, Section 3.2.1.
987 @node omp_set_schedule
988 @section @code{omp_set_schedule} -- Set the runtime scheduling method
990 @item @emph{Description}:
991 Sets the runtime scheduling method. The @var{kind} argument can have the
992 value @code{omp_sched_static}, @code{omp_sched_dynamic},
993 @code{omp_sched_guided} or @code{omp_sched_auto}. Except for
994 @code{omp_sched_auto}, the chunk size is set to the value of
995 @var{chunk_size} if positive, or to the default value if zero or negative.
996 For @code{omp_sched_auto} the @var{chunk_size} argument is ignored.
999 @multitable @columnfractions .20 .80
1000 @item @emph{Prototype}: @tab @code{void omp_set_schedule(omp_sched_t kind, int chunk_size);}
1003 @item @emph{Fortran}:
1004 @multitable @columnfractions .20 .80
1005 @item @emph{Interface}: @tab @code{subroutine omp_set_schedule(kind, chunk_size)}
1006 @item @tab @code{integer(kind=omp_sched_kind) kind}
1007 @item @tab @code{integer chunk_size}
1010 @item @emph{See also}:
1011 @ref{omp_get_schedule}
1014 @item @emph{Reference}:
1015 @uref{https://www.openmp.org, OpenMP specification v4.5}, Section 3.2.12.
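@item @emph{Example}:
A sketch selecting dynamic scheduling at run time and reading the setting back; loops using @code{schedule(runtime)} then use this method.

@smallexample
#include <stdio.h>
#include <omp.h>

int
main (void)
@{
  omp_sched_t kind;
  int chunk;

  omp_set_schedule (omp_sched_dynamic, 4);
  omp_get_schedule (&kind, &chunk);
  printf ("dynamic schedule selected: %d, chunk size: %d\n",
          kind == omp_sched_dynamic, chunk);

  #pragma omp parallel for schedule(runtime)
  for (int i = 0; i < 100; i++)
    ;  /* iterations are now distributed dynamically in chunks of 4 */
  return 0;
@}
@end smallexample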
1021 @section @code{omp_init_lock} -- Initialize simple lock
1023 @item @emph{Description}:
1024 Initialize a simple lock. After initialization, the lock is in
1028 @multitable @columnfractions .20 .80
1029 @item @emph{Prototype}: @tab @code{void omp_init_lock(omp_lock_t *lock);}
1032 @item @emph{Fortran}:
1033 @multitable @columnfractions .20 .80
1034 @item @emph{Interface}: @tab @code{subroutine omp_init_lock(svar)}
1035 @item @tab @code{integer(omp_lock_kind), intent(out) :: svar}
1038 @item @emph{See also}:
1039 @ref{omp_destroy_lock}
1041 @item @emph{Reference}:
1042 @uref{https://www.openmp.org, OpenMP specification v4.5}, Section 3.3.1.
1048 @section @code{omp_set_lock} -- Wait for and set simple lock
1050 @item @emph{Description}:
1051 Before setting a simple lock, the lock variable must be initialized by
1052 @code{omp_init_lock}. The calling thread is blocked until the lock
1053 is available. If the lock is already held by the current thread,
1057 @multitable @columnfractions .20 .80
1058 @item @emph{Prototype}: @tab @code{void omp_set_lock(omp_lock_t *lock);}
1061 @item @emph{Fortran}:
1062 @multitable @columnfractions .20 .80
1063 @item @emph{Interface}: @tab @code{subroutine omp_set_lock(svar)}
1064 @item @tab @code{integer(omp_lock_kind), intent(inout) :: svar}
1067 @item @emph{See also}:
1068 @ref{omp_init_lock}, @ref{omp_test_lock}, @ref{omp_unset_lock}
1070 @item @emph{Reference}:
1071 @uref{https://www.openmp.org, OpenMP specification v4.5}, Section 3.3.4.
1077 @section @code{omp_test_lock} -- Test and set simple lock if available
1079 @item @emph{Description}:
1080 Before setting a simple lock, the lock variable must be initialized by
1081 @code{omp_init_lock}. Contrary to @code{omp_set_lock}, @code{omp_test_lock}
1082 does not block if the lock is not available. This function returns
1083 @code{true} upon success, @code{false} otherwise. Here, @code{true} and
1084 @code{false} represent their language-specific counterparts.
1087 @multitable @columnfractions .20 .80
1088 @item @emph{Prototype}: @tab @code{int omp_test_lock(omp_lock_t *lock);}
1091 @item @emph{Fortran}:
1092 @multitable @columnfractions .20 .80
1093 @item @emph{Interface}: @tab @code{logical function omp_test_lock(svar)}
1094 @item @tab @code{integer(omp_lock_kind), intent(inout) :: svar}
1097 @item @emph{See also}:
1098 @ref{omp_init_lock}, @ref{omp_set_lock}, @ref{omp_unset_lock}
1100 @item @emph{Reference}:
1101 @uref{https://www.openmp.org, OpenMP specification v4.5}, Section 3.3.6.
1106 @node omp_unset_lock
1107 @section @code{omp_unset_lock} -- Unset simple lock
1109 @item @emph{Description}:
1110 A simple lock about to be unset must have been locked by @code{omp_set_lock}
1111 or @code{omp_test_lock} before. In addition, the lock must be held by the
1112 thread calling @code{omp_unset_lock}. Then, the lock becomes unlocked. If one
1113 or more threads attempted to set the lock before, one of them is
1114 chosen to acquire it.
1117 @multitable @columnfractions .20 .80
1118 @item @emph{Prototype}: @tab @code{void omp_unset_lock(omp_lock_t *lock);}
1121 @item @emph{Fortran}:
1122 @multitable @columnfractions .20 .80
1123 @item @emph{Interface}: @tab @code{subroutine omp_unset_lock(svar)}
1124 @item @tab @code{integer(omp_lock_kind), intent(inout) :: svar}
1127 @item @emph{See also}:
1128 @ref{omp_set_lock}, @ref{omp_test_lock}
1130 @item @emph{Reference}:
1131 @uref{https://www.openmp.org, OpenMP specification v4.5}, Section 3.3.5.
1136 @node omp_destroy_lock
1137 @section @code{omp_destroy_lock} -- Destroy simple lock
1139 @item @emph{Description}:
1140 Destroy a simple lock. In order to be destroyed, a simple lock must be
1141 in the unlocked state.
1144 @multitable @columnfractions .20 .80
1145 @item @emph{Prototype}: @tab @code{void omp_destroy_lock(omp_lock_t *lock);}
1148 @item @emph{Fortran}:
1149 @multitable @columnfractions .20 .80
1150 @item @emph{Interface}: @tab @code{subroutine omp_destroy_lock(svar)}
1151 @item @tab @code{integer(omp_lock_kind), intent(inout) :: svar}
1154 @item @emph{See also}:
1157 @item @emph{Reference}:
1158 @uref{https://www.openmp.org, OpenMP specification v4.5}, Section 3.3.3.
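@item @emph{Example}:
The simple lock routines are typically used together; a minimal sketch protecting a shared counter:

@smallexample
#include <stdio.h>
#include <omp.h>

int
main (void)
@{
  omp_lock_t lock;
  int counter = 0;

  omp_init_lock (&lock);
  #pragma omp parallel num_threads(4)
  @{
    omp_set_lock (&lock);    /* blocks until the lock is acquired */
    counter++;               /* the update is now protected */
    omp_unset_lock (&lock);
  @}
  omp_destroy_lock (&lock);  /* the lock is unlocked again here */

  printf ("counter = %d\n", counter);  /* number of threads, typically 4 */
  return 0;
@}
@end smallexample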
1163 @node omp_init_nest_lock
1164 @section @code{omp_init_nest_lock} -- Initialize nested lock
1166 @item @emph{Description}:
1167 Initialize a nested lock. After initialization, the lock is in
1168 an unlocked state and the nesting count is set to zero.
1171 @multitable @columnfractions .20 .80
1172 @item @emph{Prototype}: @tab @code{void omp_init_nest_lock(omp_nest_lock_t *lock);}
1175 @item @emph{Fortran}:
1176 @multitable @columnfractions .20 .80
1177 @item @emph{Interface}: @tab @code{subroutine omp_init_nest_lock(nvar)}
1178 @item @tab @code{integer(omp_nest_lock_kind), intent(out) :: nvar}
1181 @item @emph{See also}:
1182 @ref{omp_destroy_nest_lock}
1184 @item @emph{Reference}:
1185 @uref{https://www.openmp.org, OpenMP specification v4.5}, Section 3.3.1.
1189 @node omp_set_nest_lock
1190 @section @code{omp_set_nest_lock} -- Wait for and set nested lock
1192 @item @emph{Description}:
1193 Before setting a nested lock, the lock variable must be initialized by
1194 @code{omp_init_nest_lock}. The calling thread is blocked until the lock
1195 is available. If the lock is already held by the current thread, the
1196 nesting count for the lock is incremented.
1199 @multitable @columnfractions .20 .80
1200 @item @emph{Prototype}: @tab @code{void omp_set_nest_lock(omp_nest_lock_t *lock);}
1203 @item @emph{Fortran}:
1204 @multitable @columnfractions .20 .80
1205 @item @emph{Interface}: @tab @code{subroutine omp_set_nest_lock(nvar)}
1206 @item @tab @code{integer(omp_nest_lock_kind), intent(inout) :: nvar}
1209 @item @emph{See also}:
1210 @ref{omp_init_nest_lock}, @ref{omp_unset_nest_lock}
1212 @item @emph{Reference}:
1213 @uref{https://www.openmp.org, OpenMP specification v4.5}, Section 3.3.4.
1218 @node omp_test_nest_lock
1219 @section @code{omp_test_nest_lock} -- Test and set nested lock if available
1221 @item @emph{Description}:
1222 Before setting a nested lock, the lock variable must be initialized by
1223 @code{omp_init_nest_lock}. Contrary to @code{omp_set_nest_lock},
1224 @code{omp_test_nest_lock} does not block if the lock is not available.
1225 If the lock is already held by the current thread, the new nesting count
1226 is returned. Otherwise, the return value equals zero.
1229 @multitable @columnfractions .20 .80
1230 @item @emph{Prototype}: @tab @code{int omp_test_nest_lock(omp_nest_lock_t *lock);}
1233 @item @emph{Fortran}:
1234 @multitable @columnfractions .20 .80
1235 @item @emph{Interface}: @tab @code{logical function omp_test_nest_lock(nvar)}
1236 @item @tab @code{integer(omp_nest_lock_kind), intent(inout) :: nvar}
1240 @item @emph{See also}:
1241 @ref{omp_init_nest_lock}, @ref{omp_set_nest_lock}, @ref{omp_unset_nest_lock}
1243 @item @emph{Reference}:
1244 @uref{https://www.openmp.org, OpenMP specification v4.5}, Section 3.3.6.
1249 @node omp_unset_nest_lock
1250 @section @code{omp_unset_nest_lock} -- Unset nested lock
1252 @item @emph{Description}:
1253 A nested lock about to be unset must have been locked by @code{omp_set_nest_lock}
1254 or @code{omp_test_nest_lock} before. In addition, the lock must be held by the
1255 thread calling @code{omp_unset_nest_lock}. If the nesting count drops to zero, the
1256 lock becomes unlocked. If one or more threads attempted to set the lock before,
1257 one of them is chosen to acquire it.
1260 @multitable @columnfractions .20 .80
1261 @item @emph{Prototype}: @tab @code{void omp_unset_nest_lock(omp_nest_lock_t *lock);}
1264 @item @emph{Fortran}:
1265 @multitable @columnfractions .20 .80
1266 @item @emph{Interface}: @tab @code{subroutine omp_unset_nest_lock(nvar)}
1267 @item @tab @code{integer(omp_nest_lock_kind), intent(inout) :: nvar}
1270 @item @emph{See also}:
1271 @ref{omp_set_nest_lock}
1273 @item @emph{Reference}:
1274 @uref{https://www.openmp.org, OpenMP specification v4.5}, Section 3.3.5.
1279 @node omp_destroy_nest_lock
1280 @section @code{omp_destroy_nest_lock} -- Destroy nested lock
1282 @item @emph{Description}:
1283 Destroy a nested lock. In order to be destroyed, a nested lock must be
1284 in the unlocked state and its nesting count must equal zero.
1287 @multitable @columnfractions .20 .80
1288 @item @emph{Prototype}: @tab @code{void omp_destroy_nest_lock(omp_nest_lock_t *lock);}
1291 @item @emph{Fortran}:
1292 @multitable @columnfractions .20 .80
1293 @item @emph{Interface}: @tab @code{subroutine omp_destroy_nest_lock(nvar)}
1294 @item @tab @code{integer(omp_nest_lock_kind), intent(inout) :: nvar}
1297 @item @emph{See also}:
1300 @item @emph{Reference}:
1301 @uref{https://www.openmp.org, OpenMP specification v4.5}, Section 3.3.3.
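@item @emph{Example}:
A sketch in which a recursive helper re-acquires a nestable lock it already holds; the nesting count ensures the lock is only released by the outermost @code{omp_unset_nest_lock}.

@smallexample
#include <stdio.h>
#include <omp.h>

static omp_nest_lock_t lock;
static int total = 0;

static void
add (int value)
@{
  omp_set_nest_lock (&lock);   /* increments the nesting count if the
                                  calling thread already owns the lock */
  total += value;
  if (value > 0)
    add (value - 1);
  omp_unset_nest_lock (&lock);
@}

int
main (void)
@{
  omp_init_nest_lock (&lock);
  #pragma omp parallel num_threads(2)
  add (3);
  omp_destroy_nest_lock (&lock);

  printf ("total = %d\n", total);  /* 2 * (3 + 2 + 1 + 0) = 12 */
  return 0;
@}
@end smallexample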
1307 @section @code{omp_get_wtick} -- Get timer precision
1309 @item @emph{Description}:
1310 Gets the timer precision, i.e., the number of seconds between two
1311 successive clock ticks.
1314 @multitable @columnfractions .20 .80
1315 @item @emph{Prototype}: @tab @code{double omp_get_wtick(void);}
1318 @item @emph{Fortran}:
1319 @multitable @columnfractions .20 .80
1320 @item @emph{Interface}: @tab @code{double precision function omp_get_wtick()}
1323 @item @emph{See also}:
1326 @item @emph{Reference}:
1327 @uref{https://www.openmp.org, OpenMP specification v4.5}, Section 3.4.2.
1333 @section @code{omp_get_wtime} -- Elapsed wall clock time
1335 @item @emph{Description}:
1336 Elapsed wall clock time in seconds. The time is measured per thread; no
1337 guarantee can be made that two distinct threads measure the same time.
1338 Time is measured from some ``time in the past'', which is an arbitrary time
1339 guaranteed not to change during the execution of the program.
1342 @multitable @columnfractions .20 .80
1343 @item @emph{Prototype}: @tab @code{double omp_get_wtime(void);}
1346 @item @emph{Fortran}:
1347 @multitable @columnfractions .20 .80
1348 @item @emph{Interface}: @tab @code{double precision function omp_get_wtime()}
1351 @item @emph{See also}:
1354 @item @emph{Reference}:
1355 @uref{https://www.openmp.org, OpenMP specification v4.5}, Section 3.4.1.
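@item @emph{Example}:
A sketch of the usual timing idiom built from @code{omp_get_wtime} and @code{omp_get_wtick}:

@smallexample
#include <stdio.h>
#include <omp.h>

int
main (void)
@{
  double start = omp_get_wtime ();

  #pragma omp parallel for
  for (int i = 0; i < 1000000; i++)
    ;  /* work to be timed goes here */

  double elapsed = omp_get_wtime () - start;
  printf ("elapsed: %g s (timer resolution: %g s)\n",
          elapsed, omp_get_wtick ());
  return 0;
@}
@end smallexample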
1360 @c ---------------------------------------------------------------------
1361 @c OpenMP Environment Variables
1362 @c ---------------------------------------------------------------------
1364 @node Environment Variables
1365 @chapter OpenMP Environment Variables
1367 The environment variables beginning with @env{OMP_} are defined by
1368 Section 4 of the OpenMP specification in version 4.5, while those
1369 beginning with @env{GOMP_} are GNU extensions.
1372 * OMP_CANCELLATION:: Set whether cancellation is activated
1373 * OMP_DISPLAY_ENV:: Show OpenMP version and environment variables
1374 * OMP_DEFAULT_DEVICE:: Set the device used in target regions
1375 * OMP_DYNAMIC:: Dynamic adjustment of threads
1376 * OMP_MAX_ACTIVE_LEVELS:: Set the maximum number of nested parallel regions
1377 * OMP_MAX_TASK_PRIORITY:: Set the maximum task priority value
1378 * OMP_NESTED:: Nested parallel regions
1379 * OMP_NUM_THREADS:: Specifies the number of threads to use
1380 * OMP_PROC_BIND:: Whether threads may be moved between CPUs
1381 * OMP_PLACES:: Specifies on which CPUs the threads should be placed
1382 * OMP_STACKSIZE:: Set default thread stack size
1383 * OMP_SCHEDULE:: How threads are scheduled
1384 * OMP_THREAD_LIMIT:: Set the maximum number of threads
1385 * OMP_WAIT_POLICY:: How waiting threads are handled
1386 * GOMP_CPU_AFFINITY:: Bind threads to specific CPUs
1387 * GOMP_DEBUG:: Enable debugging output
1388 * GOMP_STACKSIZE:: Set default thread stack size
1389 * GOMP_SPINCOUNT:: Set the busy-wait spin count
1390 * GOMP_RTEMS_THREAD_POOLS:: Set the RTEMS specific thread pools
1394 @node OMP_CANCELLATION
1395 @section @env{OMP_CANCELLATION} -- Set whether cancellation is activated
1396 @cindex Environment Variable
1398 @item @emph{Description}:
1399 If set to @code{TRUE}, cancellation is activated. If set to @code{FALSE} or
1400 if unset, cancellation is disabled and the @code{cancel} construct is ignored.
1402 @item @emph{See also}:
1403 @ref{omp_get_cancellation}
1405 @item @emph{Reference}:
1406 @uref{https://www.openmp.org, OpenMP specification v4.5}, Section 4.11
1411 @node OMP_DISPLAY_ENV
1412 @section @env{OMP_DISPLAY_ENV} -- Show OpenMP version and environment variables
1413 @cindex Environment Variable
1415 @item @emph{Description}:
1416 If set to @code{TRUE}, the OpenMP version number and the values
1417 associated with the OpenMP environment variables are printed to @code{stderr}.
1418 If set to @code{VERBOSE}, it additionally shows the value of the environment
1419 variables which are GNU extensions. If undefined or set to @code{FALSE},
1420 this information will not be shown.
1423 @item @emph{Reference}:
1424 @uref{https://www.openmp.org, OpenMP specification v4.5}, Section 4.12
1429 @node OMP_DEFAULT_DEVICE
1430 @section @env{OMP_DEFAULT_DEVICE} -- Set the device used in target regions
1431 @cindex Environment Variable
1433 @item @emph{Description}:
1434 Set to choose the device which is used in a @code{target} region, unless the
1435 value is overridden by @code{omp_set_default_device} or by a @code{device}
1436 clause. The value shall be a nonnegative device number. If no device with
1437 the given device number exists, the code is executed on the host. If unset,
1438 device number 0 will be used.
1441 @item @emph{See also}:
1442 @ref{omp_get_default_device}, @ref{omp_set_default_device},
1444 @item @emph{Reference}:
1445 @uref{https://www.openmp.org, OpenMP specification v4.5}, Section 4.13
1451 @section @env{OMP_DYNAMIC} -- Dynamic adjustment of threads
1452 @cindex Environment Variable
1454 @item @emph{Description}:
1455 Enable or disable the dynamic adjustment of the number of threads
1456 within a team. The value of this environment variable shall be
1457 @code{TRUE} or @code{FALSE}. If undefined, dynamic adjustment is
1458 disabled by default.
1460 @item @emph{See also}:
1461 @ref{omp_set_dynamic}
1463 @item @emph{Reference}:
1464 @uref{https://www.openmp.org, OpenMP specification v4.5}, Section 4.3
1469 @node OMP_MAX_ACTIVE_LEVELS
1470 @section @env{OMP_MAX_ACTIVE_LEVELS} -- Set the maximum number of nested parallel regions
1471 @cindex Environment Variable
1473 @item @emph{Description}:
1474 Specifies the initial value for the maximum number of nested parallel
1475 regions. The value of this variable shall be a positive integer.
1476 If undefined, the number of active levels is unlimited.
1478 @item @emph{See also}:
1479 @ref{omp_set_max_active_levels}
1481 @item @emph{Reference}:
1482 @uref{https://www.openmp.org, OpenMP specification v4.5}, Section 4.9
1487 @node OMP_MAX_TASK_PRIORITY
1488 @section @env{OMP_MAX_TASK_PRIORITY} -- Set the maximum priority number that can be set for a task
1490 @cindex Environment Variable
1492 @item @emph{Description}:
1493 Specifies the initial value for the maximum priority value that can be
1494 set for a task. The value of this variable shall be a non-negative
1495 integer, and zero is allowed. If undefined, the default priority is
1498 @item @emph{See also}:
1499 @ref{omp_get_max_task_priority}
1501 @item @emph{Reference}:
1502 @uref{https://www.openmp.org, OpenMP specification v4.5}, Section 4.14
1508 @section @env{OMP_NESTED} -- Nested parallel regions
1509 @cindex Environment Variable
1510 @cindex Implementation specific setting
1512 @item @emph{Description}:
1513 Enable or disable nested parallel regions, i.e., whether team members
1514 are allowed to create new teams. The value of this environment variable
1515 shall be @code{TRUE} or @code{FALSE}. If undefined, nested parallel
1516 regions are disabled by default.
1518 @item @emph{See also}:
1519 @ref{omp_set_nested}
1521 @item @emph{Reference}:
1522 @uref{https://www.openmp.org, OpenMP specification v4.5}, Section 4.6
1527 @node OMP_NUM_THREADS
1528 @section @env{OMP_NUM_THREADS} -- Specifies the number of threads to use
1529 @cindex Environment Variable
1530 @cindex Implementation specific setting
1532 @item @emph{Description}:
1533 Specifies the default number of threads to use in parallel regions. The
1534 value of this variable shall be a comma-separated list of positive integers;
1535 each value specifies the number of threads to use for the corresponding nesting
1536 level. If undefined, one thread per CPU is used.
1538 @item @emph{See also}:
1539 @ref{omp_set_num_threads}
1541 @item @emph{Reference}:
1542 @uref{https://www.openmp.org, OpenMP specification v4.5}, Section 4.2
1548 @section @env{OMP_PROC_BIND} -- Whether threads may be moved between CPUs
1549 @cindex Environment Variable
1551 @item @emph{Description}:
1552 Specifies whether threads may be moved between processors. If set to
1553 @code{TRUE}, OpenMP threads should not be moved; if set to @code{FALSE}
1554 they may be moved. Alternatively, a comma separated list with the
1555 values @code{MASTER}, @code{CLOSE} and @code{SPREAD} can be used to specify
1556 the thread affinity policy for the corresponding nesting level. With
1557 @code{MASTER} the worker threads are in the same place partition as the
1558 master thread. With @code{CLOSE} those are kept close to the master thread
1559 in contiguous place partitions. And with @code{SPREAD} a sparse distribution
1560 across the place partitions is used.
1562 When undefined, @env{OMP_PROC_BIND} defaults to @code{TRUE} when
1563 @env{OMP_PLACES} or @env{GOMP_CPU_AFFINITY} is set and @code{FALSE} otherwise.
1565 @item @emph{See also}:
1566 @ref{OMP_PLACES}, @ref{GOMP_CPU_AFFINITY}, @ref{omp_get_proc_bind}
1568 @item @emph{Reference}:
1569 @uref{https://www.openmp.org, OpenMP specification v4.5}, Section 4.4
1575 @section @env{OMP_PLACES} -- Specifies on which CPUs the threads should be placed
1576 @cindex Environment Variable
1578 @item @emph{Description}:
1579 The thread placement can be either specified using an abstract name or by an
1580 explicit list of the places. The abstract names @code{threads}, @code{cores}
1581 and @code{sockets} can be optionally followed by a positive number in
1582 parentheses, which denotes how many places shall be created. With
1583 @code{threads} each place corresponds to a single hardware thread; @code{cores}
1584 to a single core with the corresponding number of hardware threads; and with
1585 @code{sockets} the place corresponds to a single socket. The resulting
1586 placement can be shown by setting the @env{OMP_DISPLAY_ENV} environment
1589 Alternatively, the placement can be specified explicitly as comma-separated
1590 list of places. A place is specified by a set of nonnegative numbers in curly
1591 braces, denoting the hardware threads. The hardware threads
1592 belonging to a place can either be specified as a comma-separated list of
1593 nonnegative thread numbers or using an interval. Multiple places can also be
1594 either specified by a comma-separated list of places or by an interval. To
1595 specify an interval, a colon followed by the count is placed after
1596 the hardware thread number or the place. Optionally, the length can be
1597 followed by a colon and the stride number -- otherwise a unit stride is
1598 assumed. For instance, the following specifies the same places list:
1599 @code{"@{0,1,2@}, @{3,4,6@}, @{7,8,9@}, @{10,11,12@}"};
1600 @code{"@{0:3@}, @{3:3@}, @{7:3@}, @{10:3@}"}; and @code{"@{0:2@}:4:3"}.
1602 If @env{OMP_PLACES} and @env{GOMP_CPU_AFFINITY} are unset and
1603 @env{OMP_PROC_BIND} is either unset or @code{false}, threads may be moved
1604 between CPUs following no placement policy.
1606 @item @emph{See also}:
1607 @ref{OMP_PROC_BIND}, @ref{GOMP_CPU_AFFINITY}, @ref{omp_get_proc_bind},
1608 @ref{OMP_DISPLAY_ENV}
1610 @item @emph{Reference}:
1611 @uref{https://www.openmp.org, OpenMP specification v4.5}, Section 4.5
1617 @section @env{OMP_STACKSIZE} -- Set default thread stack size
1618 @cindex Environment Variable
1620 @item @emph{Description}:
1621 Set the default thread stack size in kilobytes, unless the number
1622 is suffixed by @code{B}, @code{K}, @code{M} or @code{G}, in which
1623 case the size is, respectively, in bytes, kilobytes, megabytes
1624 or gigabytes. This is different from @code{pthread_attr_setstacksize}
1625 which gets the number of bytes as an argument. If the stack size cannot
1626 be set due to system constraints, an error is reported and the initial
1627 stack size is left unchanged. If undefined, the stack size is system
1630 @item @emph{Reference}:
1631 @uref{https://www.openmp.org, OpenMP specification v4.5}, Section 4.7
1637 @section @env{OMP_SCHEDULE} -- How threads are scheduled
1638 @cindex Environment Variable
1639 @cindex Implementation specific setting
1641 @item @emph{Description}:
1642 Allows specifying the @code{schedule type} and @code{chunk size}.
1643 The value of the variable shall have the form: @code{type[,chunk]} where
1644 @code{type} is one of @code{static}, @code{dynamic}, @code{guided} or @code{auto}.
1645 The optional @code{chunk} size shall be a positive integer. If undefined,
1646 dynamic scheduling and a chunk size of 1 is used.
1648 @item @emph{See also}:
1649 @ref{omp_set_schedule}
1651 @item @emph{Reference}:
1652 @uref{https://www.openmp.org, OpenMP specification v4.5}, Sections 2.7.1.1 and 4.1
1657 @node OMP_THREAD_LIMIT
1658 @section @env{OMP_THREAD_LIMIT} -- Set the maximum number of threads
1659 @cindex Environment Variable
1661 @item @emph{Description}:
1662 Specifies the number of threads to use for the whole program. The
1663 value of this variable shall be a positive integer. If undefined,
1664 the number of threads is not limited.
1666 @item @emph{See also}:
1667 @ref{OMP_NUM_THREADS}, @ref{omp_get_thread_limit}
1669 @item @emph{Reference}:
1670 @uref{https://www.openmp.org, OpenMP specification v4.5}, Section 4.10
1675 @node OMP_WAIT_POLICY
1676 @section @env{OMP_WAIT_POLICY} -- How waiting threads are handled
1677 @cindex Environment Variable
1679 @item @emph{Description}:
1680 Specifies whether waiting threads should be active or passive. If
1681 the value is @code{PASSIVE}, waiting threads should not consume CPU
1682 power while waiting; the value @code{ACTIVE} specifies that
1683 they should. If undefined, threads wait actively for a short time
1684 before waiting passively.
1686 @item @emph{See also}:
1687 @ref{GOMP_SPINCOUNT}
1689 @item @emph{Reference}:
1690 @uref{https://www.openmp.org, OpenMP specification v4.5}, Section 4.8
1695 @node GOMP_CPU_AFFINITY
1696 @section @env{GOMP_CPU_AFFINITY} -- Bind threads to specific CPUs
1697 @cindex Environment Variable
1699 @item @emph{Description}:
1700 Binds threads to specific CPUs. The variable should contain a space-separated
1701 or comma-separated list of CPUs. This list may contain different kinds of
1702 entries: either single CPU numbers in any order, a range of CPUs (M-N)
1703 or a range with some stride (M-N:S). CPU numbers are zero based. For example,
1704 @code{GOMP_CPU_AFFINITY="0 3 1-2 4-15:2"} will bind the initial thread
1705 to CPU 0, the second to CPU 3, the third to CPU 1, the fourth to
1706 CPU 2, the fifth to CPU 4, the sixth through tenth to CPUs 6, 8, 10, 12,
1707 and 14 respectively and then start assigning back from the beginning of
1708 the list. @code{GOMP_CPU_AFFINITY=0} binds all threads to CPU 0.
1710 There is no libgomp library routine to determine whether a CPU affinity
1711 specification is in effect. As a workaround, language-specific library
1712 functions, e.g., @code{getenv} in C or @code{GET_ENVIRONMENT_VARIABLE} in
1713 Fortran, may be used to query the setting of the @code{GOMP_CPU_AFFINITY}
1714 environment variable. A defined CPU affinity on startup cannot be changed
1715 or disabled during the runtime of the application.
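A sketch of this workaround in C:

@smallexample
#include <stdio.h>
#include <stdlib.h>

int
main (void)
@{
  const char *affinity = getenv ("GOMP_CPU_AFFINITY");
  if (affinity)
    printf ("GOMP_CPU_AFFINITY is set to \"%s\"\n", affinity);
  else
    printf ("GOMP_CPU_AFFINITY is not set\n");
  return 0;
@}
@end smallexample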
1717 If both @env{GOMP_CPU_AFFINITY} and @env{OMP_PROC_BIND} are set,
1718 @env{OMP_PROC_BIND} has a higher precedence. If neither has been set, or
1719 when @env{OMP_PROC_BIND} is set to
1720 @code{FALSE}, the host system will handle the assignment of threads to CPUs.
1722 @item @emph{See also}:
1723 @ref{OMP_PLACES}, @ref{OMP_PROC_BIND}
1729 @section @env{GOMP_DEBUG} -- Enable debugging output
1730 @cindex Environment Variable
1732 @item @emph{Description}:
1733 Enable debugging output. The variable should be set to @code{0}
1734 (disabled, also the default if not set), or @code{1} (enabled).
1736 If enabled, some debugging output will be printed during execution.
1737 This is currently not specified in more detail, and subject to change.
1742 @node GOMP_STACKSIZE
1743 @section @env{GOMP_STACKSIZE} -- Set default thread stack size
1744 @cindex Environment Variable
1745 @cindex Implementation specific setting
1747 @item @emph{Description}:
1748 Set the default thread stack size in kilobytes. This is different from
1749 @code{pthread_attr_setstacksize} which gets the number of bytes as an
1750 argument. If the stack size cannot be set due to system constraints, an
1751 error is reported and the initial stack size is left unchanged. If undefined,
1752 the stack size is system dependent.
1754 @item @emph{See also}:
1757 @item @emph{Reference}:
1758 @uref{https://gcc.gnu.org/ml/gcc-patches/2006-06/msg00493.html,
1759 GCC Patches Mailinglist},
1760 @uref{https://gcc.gnu.org/ml/gcc-patches/2006-06/msg00496.html,
1761 GCC Patches Mailinglist}
1766 @node GOMP_SPINCOUNT
1767 @section @env{GOMP_SPINCOUNT} -- Set the busy-wait spin count
1768 @cindex Environment Variable
1769 @cindex Implementation specific setting
1771 @item @emph{Description}:
1772 Determines how long a thread waits actively, consuming CPU power,
1773 before waiting passively without consuming CPU power. The value may be
1774 either @code{INFINITE} or @code{INFINITY} to always wait actively, or an
1775 integer which gives the number of spins of the busy-wait loop. The
1776 integer may optionally be followed by one of the following suffixes acting
1777 as multiplication factors: @code{k} (kilo, thousand), @code{M} (mega,
1778 million), @code{G} (giga, billion), or @code{T} (tera, trillion).
1779 If undefined, 0 is used when @env{OMP_WAIT_POLICY} is @code{PASSIVE},
1780 300,000 is used when @env{OMP_WAIT_POLICY} is undefined, and
1781 30 billion is used when @env{OMP_WAIT_POLICY} is @code{ACTIVE}.
1782 If there are more OpenMP threads than available CPUs, 1000 and 100
1783 spins are used when @env{OMP_WAIT_POLICY} is @code{ACTIVE} or
1784 undefined, respectively, unless @env{GOMP_SPINCOUNT} is lower
1785 or @env{OMP_WAIT_POLICY} is @code{PASSIVE}.
1787 @item @emph{See also}:
1788 @ref{OMP_WAIT_POLICY}
1793 @node GOMP_RTEMS_THREAD_POOLS
1794 @section @env{GOMP_RTEMS_THREAD_POOLS} -- Set the RTEMS specific thread pools
1795 @cindex Environment Variable
1796 @cindex Implementation specific setting
1798 @item @emph{Description}:
1799 This environment variable is only used on the RTEMS real-time operating system.
1800 It determines the scheduler instance specific thread pools. The format for
1801 @env{GOMP_RTEMS_THREAD_POOLS} is a list of optional
1802 @code{<thread-pool-count>[$<priority>]@@<scheduler-name>} configurations
1803 separated by @code{:} where:
1805 @item @code{<thread-pool-count>} is the thread pool count for this scheduler
1807 @item @code{$<priority>} is an optional priority for the worker threads of a
1808 thread pool according to @code{pthread_setschedparam}. In case a priority
1809 value is omitted, then a worker thread will inherit the priority of the OpenMP
1810 master thread that created it. The priority of the worker thread is not
1811 changed after creation, even if a new OpenMP master thread using the worker has
1812 a different priority.
1813 @item @code{@@<scheduler-name>} is the scheduler instance name according to the
1814 RTEMS application configuration.
1816 In case no thread pool configuration is specified for a scheduler instance,
1817 then each OpenMP master thread of this scheduler instance will use its own
1818 dynamically allocated thread pool. To limit the worker thread count of the
1819 thread pools, each OpenMP master thread must call @code{omp_set_num_threads}.
1820 @item @emph{Example}:
1821 Let's suppose we have three scheduler instances @code{IO}, @code{WRK0}, and
1822 @code{WRK1} with @env{GOMP_RTEMS_THREAD_POOLS} set to
1823 @code{"1@@WRK0:3$4@@WRK1"}. Then there are no thread pool restrictions for
1824 scheduler instance @code{IO}. In the scheduler instance @code{WRK0} there is
1825 one thread pool available. Since no priority is specified for this scheduler
1826 instance, the worker thread inherits the priority of the OpenMP master thread
1827 that created it. In the scheduler instance @code{WRK1} there are three thread
1828 pools available and their worker threads run at priority four.
1833 @c ---------------------------------------------------------------------
1835 @c ---------------------------------------------------------------------
1837 @node Enabling OpenACC
1838 @chapter Enabling OpenACC
1840 To activate the OpenACC extensions for C/C++ and Fortran, the compile-time
1841 flag @option{-fopenacc} must be specified. This enables the OpenACC directive
1842 @code{#pragma acc} in C/C++ and @code{!$acc} directives in free form,
1843 @code{c$acc}, @code{*$acc} and @code{!$acc} directives in fixed form,
1844 @code{!$} conditional compilation sentinels in free form and @code{c$},
1845 @code{*$} and @code{!$} sentinels in fixed form, for Fortran. The flag also
1846 arranges for automatic linking of the OpenACC runtime library
1847 (@ref{OpenACC Runtime Library Routines}).
1849 See @uref{https://gcc.gnu.org/wiki/OpenACC} for more information.
1851 A complete description of all OpenACC directives accepted may be found in
1852 the @uref{https://www.openacc.org, OpenACC} Application Programming
1853 Interface manual, version 2.6.
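As a minimal illustration (a sketch, not part of the specification), the
following C program can be built with @command{gcc -fopenacc}; the loop is
offloaded to an accelerator when one is available, and otherwise falls back
to the host:

@smallexample
#include <stdio.h>

int
main (void)
@{
  float x[1000];

  #pragma acc parallel loop copyout(x)
  for (int i = 0; i < 1000; i++)
    x[i] = 2.0f * i;

  printf ("x[999] = %f\n", x[999]);
  return 0;
@}
@end smallexample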
1857 @c ---------------------------------------------------------------------
1858 @c OpenACC Runtime Library Routines
1859 @c ---------------------------------------------------------------------
1861 @node OpenACC Runtime Library Routines
1862 @chapter OpenACC Runtime Library Routines
1864 The runtime routines described here are defined by section 3 of the OpenACC
1865 specifications in version 2.6.
1866 They have C linkage, and do not throw exceptions.
1867 Generally, they are available only for the host, with the exception of
1868 @code{acc_on_device}, which is available for both the host and the
1869 acceleration device.
1872 * acc_get_num_devices:: Get number of devices for the given device
1874 * acc_set_device_type:: Set type of device accelerator to use.
1875 * acc_get_device_type:: Get type of device accelerator to be used.
1876 * acc_set_device_num:: Set device number to use.
1877 * acc_get_device_num:: Get device number to be used.
1878 * acc_get_property:: Get device property.
1879 * acc_async_test:: Tests for completion of a specific asynchronous
1881 * acc_async_test_all:: Tests for completion of all asynchronous
1883 * acc_wait:: Wait for completion of a specific asynchronous
1885 * acc_wait_all:: Waits for completion of all asynchronous
1887 * acc_wait_all_async:: Wait for completion of all asynchronous
1889 * acc_wait_async:: Wait for completion of asynchronous operations.
1890 * acc_init:: Initialize runtime for a specific device type.
1891 * acc_shutdown:: Shuts down the runtime for a specific device
1893 * acc_on_device:: Whether executing on a particular device
1894 * acc_malloc:: Allocate device memory.
1895 * acc_free:: Free device memory.
1896 * acc_copyin:: Allocate device memory and copy host memory to
1898 * acc_present_or_copyin:: If the data is not present on the device,
1899 allocate device memory and copy from host
1901 * acc_create:: Allocate device memory and map it to host
1903 * acc_present_or_create:: If the data is not present on the device,
1904 allocate device memory and map it to host
1906 * acc_copyout:: Copy device memory to host memory.
1907 * acc_delete:: Free device memory.
1908 * acc_update_device:: Update device memory from mapped host memory.
1909 * acc_update_self:: Update host memory from mapped device memory.
1910 * acc_map_data:: Map previously allocated device memory to host
1912 * acc_unmap_data:: Unmap device memory from host memory.
1913 * acc_deviceptr:: Get device pointer associated with specific
1915 * acc_hostptr:: Get host pointer associated with specific
1917 * acc_is_present:: Indicate whether host variable / array is
1919 * acc_memcpy_to_device:: Copy host memory to device memory.
1920 * acc_memcpy_from_device:: Copy device memory to host memory.
1921 * acc_attach:: Let device pointer point to device-pointer target.
1922 * acc_detach:: Let device pointer point to host-pointer target.
1924 API routines for target platforms.
1926 * acc_get_current_cuda_device:: Get CUDA device handle.
1927 * acc_get_current_cuda_context::Get CUDA context handle.
1928 * acc_get_cuda_stream:: Get CUDA stream handle.
1929 * acc_set_cuda_stream:: Set CUDA stream handle.
1931 API routines for the OpenACC Profiling Interface.
1933 * acc_prof_register:: Register callbacks.
1934 * acc_prof_unregister:: Unregister callbacks.
1935 * acc_prof_lookup:: Obtain inquiry functions.
1936 * acc_register_library:: Library registration.
1941 @node acc_get_num_devices
1942 @section @code{acc_get_num_devices} -- Get number of devices for given device type
1944 @item @emph{Description}
1945 This function returns a value indicating the number of devices available
1946 for the device type specified in @var{devicetype}.
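For example, the number of devices of the NVIDIA type might be queried as
follows (a minimal sketch):

@smallexample
#include <openacc.h>
#include <stdio.h>

int
main (void)
@{
  /* Number of devices of the given type known to the runtime.  */
  int n = acc_get_num_devices (acc_device_nvidia);
  printf ("%d NVIDIA device(s) available\n", n);
  return 0;
@}
@end smallexample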
1949 @multitable @columnfractions .20 .80
1950 @item @emph{Prototype}: @tab @code{int acc_get_num_devices(acc_device_t devicetype);}
1953 @item @emph{Fortran}:
1954 @multitable @columnfractions .20 .80
1955 @item @emph{Interface}: @tab @code{integer function acc_get_num_devices(devicetype)}
1956 @item @tab @code{integer(kind=acc_device_kind) devicetype}
1959 @item @emph{Reference}:
1960 @uref{https://www.openacc.org, OpenACC specification v2.6}, section
1966 @node acc_set_device_type
1967 @section @code{acc_set_device_type} -- Set type of device accelerator to use.
1969 @item @emph{Description}
1970 This function indicates to the runtime library which device type, specified
1971 in @var{devicetype}, to use when executing a parallel or kernels region.
1974 @multitable @columnfractions .20 .80
1975 @item @emph{Prototype}: @tab @code{acc_set_device_type(acc_device_t devicetype);}
1978 @item @emph{Fortran}:
1979 @multitable @columnfractions .20 .80
1980 @item @emph{Interface}: @tab @code{subroutine acc_set_device_type(devicetype)}
1981 @item @tab @code{integer(kind=acc_device_kind) devicetype}
1984 @item @emph{Reference}:
1985 @uref{https://www.openacc.org, OpenACC specification v2.6}, section
1991 @node acc_get_device_type
1992 @section @code{acc_get_device_type} -- Get type of device accelerator to be used.
1994 @item @emph{Description}
1995 This function returns what device type will be used when executing a
1996 parallel or kernels region.
1998 This function returns @code{acc_device_none} if
1999 @code{acc_get_device_type} is called from
2000 @code{acc_ev_device_init_start}, @code{acc_ev_device_init_end}
2001 callbacks of the OpenACC Profiling Interface (@ref{OpenACC Profiling
2002 Interface}), that is, if the device is currently being initialized.
2005 @multitable @columnfractions .20 .80
2006 @item @emph{Prototype}: @tab @code{acc_device_t acc_get_device_type(void);}
2009 @item @emph{Fortran}:
2010 @multitable @columnfractions .20 .80
2011 @item @emph{Interface}: @tab @code{function acc_get_device_type()}
2012 @item @tab @code{integer(kind=acc_device_kind) acc_get_device_type}
2015 @item @emph{Reference}:
2016 @uref{https://www.openacc.org, OpenACC specification v2.6}, section
2022 @node acc_set_device_num
2023 @section @code{acc_set_device_num} -- Set device number to use.
2025 @item @emph{Description}
2026 This function indicates to the runtime which device number,
2027 specified by @var{devicenum} and associated with the specified device
2028 type @var{devicetype}, to use.
2031 @multitable @columnfractions .20 .80
2032 @item @emph{Prototype}: @tab @code{acc_set_device_num(int devicenum, acc_device_t devicetype);}
2035 @item @emph{Fortran}:
2036 @multitable @columnfractions .20 .80
2037 @item @emph{Interface}: @tab @code{subroutine acc_set_device_num(devicenum, devicetype)}
2038 @item @tab @code{integer devicenum}
2039 @item @tab @code{integer(kind=acc_device_kind) devicetype}
2042 @item @emph{Reference}:
2043 @uref{https://www.openacc.org, OpenACC specification v2.6}, section
2049 @node acc_get_device_num
2050 @section @code{acc_get_device_num} -- Get device number to be used.
2052 @item @emph{Description}
2053 This function returns the device number, associated with the specified device
2054 type @var{devicetype}, that will be used when executing a parallel or kernels
2058 @multitable @columnfractions .20 .80
2059 @item @emph{Prototype}: @tab @code{int acc_get_device_num(acc_device_t devicetype);}
2062 @item @emph{Fortran}:
2063 @multitable @columnfractions .20 .80
2064 @item @emph{Interface}: @tab @code{function acc_get_device_num(devicetype)}
2065 @item @tab @code{integer(kind=acc_device_kind) devicetype}
2066 @item @tab @code{integer acc_get_device_num}
2069 @item @emph{Reference}:
2070 @uref{https://www.openacc.org, OpenACC specification v2.6}, section
2076 @node acc_get_property
2077 @section @code{acc_get_property} -- Get device property.
2078 @cindex acc_get_property
2079 @cindex acc_get_property_string
2081 @item @emph{Description}
2082 These routines return the value of the specified @var{property} for the
2083 device being queried according to @var{devicenum} and @var{devicetype}.
2084 Integer-valued and string-valued properties are returned by
2085 @code{acc_get_property} and @code{acc_get_property_string} respectively.
2086 The Fortran @code{acc_get_property_string} subroutine returns the string
2087 retrieved in its fourth argument while the remaining entry points are
2088 functions, which pass the return value as their result.
2090 A note for Fortran only: the OpenACC technical committee corrected and, hence,
2091 modified the interface introduced in OpenACC 2.6. The kind-value parameter
2092 @code{acc_device_property} has been renamed to @code{acc_device_property_kind}
2093 for consistency, and the return type of the @code{acc_get_property} function is
2094 now a @code{c_size_t} integer instead of an @code{acc_device_property} integer.
2095 The parameter @code{acc_device_property} will continue to be provided,
2096 but might be removed in a future version of GCC.
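As an illustration, and assuming the @code{acc_property_name} and
@code{acc_property_memory} property names from the OpenACC 2.6
specification, the current device might be queried as follows (a sketch):

@smallexample
#include <openacc.h>
#include <stdio.h>

int
main (void)
@{
  acc_device_t type = acc_get_device_type ();
  int num = acc_get_device_num (type);

  /* String-valued and integer-valued properties use different entry points.  */
  const char *name = acc_get_property_string (num, type, acc_property_name);
  size_t mem = acc_get_property (num, type, acc_property_memory);

  printf ("device %d: %s, %zu bytes of memory\n",
          num, name ? name : "(unknown)", mem);
  return 0;
@}
@end smallexample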
2099 @multitable @columnfractions .20 .80
2100 @item @emph{Prototype}: @tab @code{size_t acc_get_property(int devicenum, acc_device_t devicetype, acc_device_property_t property);}
2101 @item @emph{Prototype}: @tab @code{const char *acc_get_property_string(int devicenum, acc_device_t devicetype, acc_device_property_t property);}
2104 @item @emph{Fortran}:
2105 @multitable @columnfractions .20 .80
2106 @item @emph{Interface}: @tab @code{function acc_get_property(devicenum, devicetype, property)}
2107 @item @emph{Interface}: @tab @code{subroutine acc_get_property_string(devicenum, devicetype, property, string)}
2108 @item @tab @code{use ISO_C_Binding, only: c_size_t}
2109 @item @tab @code{integer devicenum}
2110 @item @tab @code{integer(kind=acc_device_kind) devicetype}
2111 @item @tab @code{integer(kind=acc_device_property_kind) property}
2112 @item @tab @code{integer(kind=c_size_t) acc_get_property}
2113 @item @tab @code{character(*) string}
2116 @item @emph{Reference}:
2117 @uref{https://www.openacc.org, OpenACC specification v2.6}, section
2123 @node acc_async_test
2124 @section @code{acc_async_test} -- Test for completion of a specific asynchronous operation.
2126 @item @emph{Description}
2127 This function tests for completion of the asynchronous operation specified
2128 in @var{arg}. In C/C++, a non-zero value is returned to indicate that
2129 the specified asynchronous operation has completed, while Fortran returns
2130 @code{true}. If the asynchronous operation has not completed, C/C++ returns
2131 zero and Fortran returns @code{false}.
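A typical use is to poll a queue while the host does other work, as in the
following sketch:

@smallexample
#include <openacc.h>

int
main (void)
@{
  static float a[1000];

  #pragma acc parallel loop async(1) copyout(a)
  for (int i = 0; i < 1000; i++)
    a[i] = i;

  /* Overlap host-side work with the device computation, then poll.  */
  while (!acc_async_test (1))
    ;  /* e.g., perform other host-side work here */

  return 0;
@}
@end smallexample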
2134 @multitable @columnfractions .20 .80
2135 @item @emph{Prototype}: @tab @code{int acc_async_test(int arg);}
2138 @item @emph{Fortran}:
2139 @multitable @columnfractions .20 .80
2140 @item @emph{Interface}: @tab @code{function acc_async_test(arg)}
2141 @item @tab @code{integer(kind=acc_handle_kind) arg}
2142 @item @tab @code{logical acc_async_test}
2145 @item @emph{Reference}:
2146 @uref{https://www.openacc.org, OpenACC specification v2.6}, section
2152 @node acc_async_test_all
2153 @section @code{acc_async_test_all} -- Tests for completion of all asynchronous operations.
2155 @item @emph{Description}
2156 This function tests for completion of all asynchronous operations.
2157 In C/C++, a non-zero value is returned to indicate that all asynchronous
2158 operations have completed, while Fortran returns @code{true}. If
2159 any asynchronous operation has not completed, C/C++ returns zero and
2160 Fortran returns @code{false}.
2163 @multitable @columnfractions .20 .80
2164 @item @emph{Prototype}: @tab @code{int acc_async_test_all(void);}
2167 @item @emph{Fortran}:
2168 @multitable @columnfractions .20 .80
2169 @item @emph{Interface}: @tab @code{function acc_async_test_all()}
2170 @item @tab @code{logical acc_async_test_all}
2173 @item @emph{Reference}:
2174 @uref{https://www.openacc.org, OpenACC specification v2.6}, section
2181 @section @code{acc_wait} -- Wait for completion of a specific asynchronous operation.
2183 @item @emph{Description}
2184 This function waits for completion of the asynchronous operation
2185 specified in @var{arg}.
2188 @multitable @columnfractions .20 .80
2189 @item @emph{Prototype}: @tab @code{acc_wait(arg);}
2190 @item @emph{Prototype (OpenACC 1.0 compatibility)}: @tab @code{acc_async_wait(arg);}
2193 @item @emph{Fortran}:
2194 @multitable @columnfractions .20 .80
2195 @item @emph{Interface}: @tab @code{subroutine acc_wait(arg)}
2196 @item @tab @code{integer(acc_handle_kind) arg}
2197 @item @emph{Interface (OpenACC 1.0 compatibility)}: @tab @code{subroutine acc_async_wait(arg)}
2198 @item @tab @code{integer(acc_handle_kind) arg}
2201 @item @emph{Reference}:
2202 @uref{https://www.openacc.org, OpenACC specification v2.6}, section
2209 @section @code{acc_wait_all} -- Waits for completion of all asynchronous operations.
2211 @item @emph{Description}
2212 This function waits for the completion of all asynchronous operations.
2215 @multitable @columnfractions .20 .80
2216 @item @emph{Prototype}: @tab @code{acc_wait_all(void);}
2217 @item @emph{Prototype (OpenACC 1.0 compatibility)}: @tab @code{acc_async_wait_all(void);}
2220 @item @emph{Fortran}:
2221 @multitable @columnfractions .20 .80
2222 @item @emph{Interface}: @tab @code{subroutine acc_wait_all()}
2223 @item @emph{Interface (OpenACC 1.0 compatibility)}: @tab @code{subroutine acc_async_wait_all()}
2226 @item @emph{Reference}:
2227 @uref{https://www.openacc.org, OpenACC specification v2.6}, section
2233 @node acc_wait_all_async
2234 @section @code{acc_wait_all_async} -- Wait for completion of all asynchronous operations.
2236 @item @emph{Description}
2237 This function enqueues a wait operation on the queue @var{async} for any
2238 and all asynchronous operations that have been previously enqueued on
2242 @multitable @columnfractions .20 .80
2243 @item @emph{Prototype}: @tab @code{acc_wait_all_async(int async);}
2246 @item @emph{Fortran}:
2247 @multitable @columnfractions .20 .80
2248 @item @emph{Interface}: @tab @code{subroutine acc_wait_all_async(async)}
2249 @item @tab @code{integer(acc_handle_kind) async}
2252 @item @emph{Reference}:
2253 @uref{https://www.openacc.org, OpenACC specification v2.6}, section
2259 @node acc_wait_async
2260 @section @code{acc_wait_async} -- Wait for completion of asynchronous operations.
2262 @item @emph{Description}
2263 This function enqueues a wait operation on queue @var{async} for any and all
2264 asynchronous operations enqueued on queue @var{arg}.
2267 @multitable @columnfractions .20 .80
2268 @item @emph{Prototype}: @tab @code{acc_wait_async(int arg, int async);}
2271 @item @emph{Fortran}:
2272 @multitable @columnfractions .20 .80
2273 @item @emph{Interface}: @tab @code{subroutine acc_wait_async(arg, async)}
2274 @item @tab @code{integer(acc_handle_kind) arg, async}
2277 @item @emph{Reference}:
2278 @uref{https://www.openacc.org, OpenACC specification v2.6}, section
2285 @section @code{acc_init} -- Initialize runtime for a specific device type.
2287 @item @emph{Description}
2288 This function initializes the runtime for the device type specified in
2292 @multitable @columnfractions .20 .80
2293 @item @emph{Prototype}: @tab @code{acc_init(acc_device_t devicetype);}
2296 @item @emph{Fortran}:
2297 @multitable @columnfractions .20 .80
2298 @item @emph{Interface}: @tab @code{subroutine acc_init(devicetype)}
2299 @item @tab @code{integer(acc_device_kind) devicetype}
2302 @item @emph{Reference}:
2303 @uref{https://www.openacc.org, OpenACC specification v2.6}, section
2310 @section @code{acc_shutdown} -- Shuts down the runtime for a specific device type.
2312 @item @emph{Description}
2313 This function shuts down the runtime for the device type specified in
2317 @multitable @columnfractions .20 .80
2318 @item @emph{Prototype}: @tab @code{acc_shutdown(acc_device_t devicetype);}
2321 @item @emph{Fortran}:
2322 @multitable @columnfractions .20 .80
2323 @item @emph{Interface}: @tab @code{subroutine acc_shutdown(devicetype)}
2324 @item @tab @code{integer(acc_device_kind) devicetype}
2327 @item @emph{Reference}:
2328 @uref{https://www.openacc.org, OpenACC specification v2.6}, section
2335 @section @code{acc_on_device} -- Whether executing on a particular device
2337 @item @emph{Description}:
2338 This function returns whether the program is executing on a particular
2339 device specified in @var{devicetype}. In C/C++, a non-zero value is
2340 returned to indicate that the program is executing on the specified device
2341 type, and in Fortran @code{true} is returned. If the program is not executing
2342 on the specified device type, C/C++ returns zero, while Fortran
2343 returns @code{false}.
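For example (a sketch), a routine callable from offloaded code can use
@code{acc_on_device} to distinguish host fallback from device execution;
@code{acc_device_not_host} is used here to test for any non-host device:

@smallexample
#include <openacc.h>
#include <stdio.h>

#pragma acc routine seq
static int
on_device (void)
@{
  /* Non-zero when running on a non-host device, zero on the host.  */
  return acc_on_device (acc_device_not_host);
@}

int
main (void)
@{
  int where = 0;
  #pragma acc parallel copyout(where)
  where = on_device ();
  printf ("offloaded: %s\n", where ? "yes" : "no");
  return 0;
@}
@end smallexample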
2346 @multitable @columnfractions .20 .80
2347 @item @emph{Prototype}: @tab @code{acc_on_device(acc_device_t devicetype);}
2350 @item @emph{Fortran}:
2351 @multitable @columnfractions .20 .80
2352 @item @emph{Interface}: @tab @code{function acc_on_device(devicetype)}
2353 @item @tab @code{integer(acc_device_kind) devicetype}
2354 @item @tab @code{logical acc_on_device}
2358 @item @emph{Reference}:
2359 @uref{https://www.openacc.org, OpenACC specification v2.6}, section
2366 @section @code{acc_malloc} -- Allocate device memory.
2368 @item @emph{Description}
2369 This function allocates @var{len} bytes of device memory. It returns
2370 the device address of the allocated memory.
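For illustration, memory obtained from @code{acc_malloc} can be filled and
read back with @code{acc_memcpy_to_device} and @code{acc_memcpy_from_device}
(@ref{acc_memcpy_to_device}, @ref{acc_memcpy_from_device}); a minimal sketch:

@smallexample
#include <openacc.h>
#include <stdlib.h>

int
main (void)
@{
  size_t len = 1024 * sizeof (float);
  float *host = malloc (len);
  float *back = malloc (len);

  for (int i = 0; i < 1024; i++)
    host[i] = i;

  /* Allocate raw device memory and copy the host buffer into it.  */
  void *dev = acc_malloc (len);
  acc_memcpy_to_device (dev, host, len);

  /* ... use 'dev' from device code, e.g. via a deviceptr clause ...  */

  acc_memcpy_from_device (back, dev, len);
  acc_free (dev);
  free (host);
  free (back);
  return 0;
@}
@end smallexample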
2373 @multitable @columnfractions .20 .80
2374 @item @emph{Prototype}: @tab @code{d_void* acc_malloc(size_t len);}
2377 @item @emph{Reference}:
2378 @uref{https://www.openacc.org, OpenACC specification v2.6}, section
2385 @section @code{acc_free} -- Free device memory.
2387 @item @emph{Description}
2388 Free previously allocated device memory at the device address @var{a}.
2391 @multitable @columnfractions .20 .80
2392 @item @emph{Prototype}: @tab @code{acc_free(d_void *a);}
2395 @item @emph{Reference}:
2396 @uref{https://www.openacc.org, OpenACC specification v2.6}, section
2403 @section @code{acc_copyin} -- Allocate device memory and copy host memory to it.
2405 @item @emph{Description}
2406 In C/C++, this function allocates @var{len} bytes of device memory
2407 and maps it to the specified host address in @var{a}. The device
2408 address of the newly allocated device memory is returned.
2410 In Fortran, two forms are supported. In the first form, @var{a} specifies
2411 a contiguous array section. In the second form, @var{a} specifies a
2412 variable or array element and @var{len} specifies the length in bytes.
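For example (a sketch), a host array can be copied to the device and the
mapping later removed again with @code{acc_delete} (@ref{acc_delete}):

@smallexample
#include <openacc.h>

int
main (void)
@{
  static float a[1024];

  /* Map the array and copy its contents to the device.  */
  void *d_a = acc_copyin (a, sizeof a);
  (void) d_a;

  /* ... compute regions may now rely on 'a' being present ...  */

  /* Remove the mapping and free the device copy.  */
  acc_delete (a, sizeof a);
  return 0;
@}
@end smallexample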
2415 @multitable @columnfractions .20 .80
2416 @item @emph{Prototype}: @tab @code{void *acc_copyin(h_void *a, size_t len);}
2417 @item @emph{Prototype}: @tab @code{void *acc_copyin_async(h_void *a, size_t len, int async);}
2420 @item @emph{Fortran}:
2421 @multitable @columnfractions .20 .80
2422 @item @emph{Interface}: @tab @code{subroutine acc_copyin(a)}
2423 @item @tab @code{type, dimension(:[,:]...) :: a}
2424 @item @emph{Interface}: @tab @code{subroutine acc_copyin(a, len)}
2425 @item @tab @code{type, dimension(:[,:]...) :: a}
2426 @item @tab @code{integer len}
2427 @item @emph{Interface}: @tab @code{subroutine acc_copyin_async(a, async)}
2428 @item @tab @code{type, dimension(:[,:]...) :: a}
2429 @item @tab @code{integer(acc_handle_kind) :: async}
2430 @item @emph{Interface}: @tab @code{subroutine acc_copyin_async(a, len, async)}
2431 @item @tab @code{type, dimension(:[,:]...) :: a}
2432 @item @tab @code{integer len}
2433 @item @tab @code{integer(acc_handle_kind) :: async}
2436 @item @emph{Reference}:
2437 @uref{https://www.openacc.org, OpenACC specification v2.6}, section
2443 @node acc_present_or_copyin
2444 @section @code{acc_present_or_copyin} -- If the data is not present on the device, allocate device memory and copy from host memory.
2446 @item @emph{Description}
2447 This function tests if the host data specified by @var{a} and of length
2448 @var{len} is present or not. If it is not present, then device memory
2449 will be allocated and the host memory copied. The device address of
2450 the newly allocated device memory is returned.
2452 In Fortran, two forms are supported. In the first form, @var{a} specifies
2453 a contiguous array section. In the second form, @var{a} specifies a variable or
2454 array element and @var{len} specifies the length in bytes.
2456 Note that @code{acc_present_or_copyin} and @code{acc_pcopyin} exist for
2457 backward compatibility with OpenACC 2.0; use @ref{acc_copyin} instead.
2460 @multitable @columnfractions .20 .80
2461 @item @emph{Prototype}: @tab @code{void *acc_present_or_copyin(h_void *a, size_t len);}
2462 @item @emph{Prototype}: @tab @code{void *acc_pcopyin(h_void *a, size_t len);}
2465 @item @emph{Fortran}:
2466 @multitable @columnfractions .20 .80
2467 @item @emph{Interface}: @tab @code{subroutine acc_present_or_copyin(a)}
2468 @item @tab @code{type, dimension(:[,:]...) :: a}
2469 @item @emph{Interface}: @tab @code{subroutine acc_present_or_copyin(a, len)}
2470 @item @tab @code{type, dimension(:[,:]...) :: a}
2471 @item @tab @code{integer len}
2472 @item @emph{Interface}: @tab @code{subroutine acc_pcopyin(a)}
2473 @item @tab @code{type, dimension(:[,:]...) :: a}
2474 @item @emph{Interface}: @tab @code{subroutine acc_pcopyin(a, len)}
2475 @item @tab @code{type, dimension(:[,:]...) :: a}
2476 @item @tab @code{integer len}
2479 @item @emph{Reference}:
2480 @uref{https://www.openacc.org, OpenACC specification v2.6}, section
2487 @section @code{acc_create} -- Allocate device memory and map it to host memory.
2489 @item @emph{Description}
2490 This function allocates device memory and maps it to host memory specified
2491 by the host address @var{a} with a length of @var{len} bytes. In C/C++,
2492 the function returns the device address of the allocated device memory.
2494 In Fortran, two forms are supported. In the first form, @var{a} specifies
2495 a contiguous array section. In the second form, @var{a} specifies a variable or
2496 array element and @var{len} specifies the length in bytes.
2499 @multitable @columnfractions .20 .80
2500 @item @emph{Prototype}: @tab @code{void *acc_create(h_void *a, size_t len);}
2501 @item @emph{Prototype}: @tab @code{void *acc_create_async(h_void *a, size_t len, int async);}
2504 @item @emph{Fortran}:
2505 @multitable @columnfractions .20 .80
2506 @item @emph{Interface}: @tab @code{subroutine acc_create(a)}
2507 @item @tab @code{type, dimension(:[,:]...) :: a}
2508 @item @emph{Interface}: @tab @code{subroutine acc_create(a, len)}
2509 @item @tab @code{type, dimension(:[,:]...) :: a}
2510 @item @tab @code{integer len}
2511 @item @emph{Interface}: @tab @code{subroutine acc_create_async(a, async)}
2512 @item @tab @code{type, dimension(:[,:]...) :: a}
2513 @item @tab @code{integer(acc_handle_kind) :: async}
2514 @item @emph{Interface}: @tab @code{subroutine acc_create_async(a, len, async)}
2515 @item @tab @code{type, dimension(:[,:]...) :: a}
2516 @item @tab @code{integer len}
2517 @item @tab @code{integer(acc_handle_kind) :: async}
2520 @item @emph{Reference}:
2521 @uref{https://www.openacc.org, OpenACC specification v2.6}, section
2527 @node acc_present_or_create
2528 @section @code{acc_present_or_create} -- If the data is not present on the device, allocate device memory and map it to host memory.
2530 @item @emph{Description}
2531 This function tests if the host data specified by @var{a} and of length
2532 @var{len} is present or not. If it is not present, then device memory
2533 will be allocated and mapped to host memory. In C/C++, the device address
2534 of the newly allocated device memory is returned.
2536 In Fortran, two forms are supported. In the first form, @var{a} specifies
2537 a contiguous array section. In the second form, @var{a} specifies a variable or
2538 array element and @var{len} specifies the length in bytes.
2540 Note that @code{acc_present_or_create} and @code{acc_pcreate} exist for
2541 backward compatibility with OpenACC 2.0; use @ref{acc_create} instead.
2544 @multitable @columnfractions .20 .80
2545 @item @emph{Prototype}: @tab @code{void *acc_present_or_create(h_void *a, size_t len)}
2546 @item @emph{Prototype}: @tab @code{void *acc_pcreate(h_void *a, size_t len)}
2549 @item @emph{Fortran}:
2550 @multitable @columnfractions .20 .80
2551 @item @emph{Interface}: @tab @code{subroutine acc_present_or_create(a)}
2552 @item @tab @code{type, dimension(:[,:]...) :: a}
2553 @item @emph{Interface}: @tab @code{subroutine acc_present_or_create(a, len)}
2554 @item @tab @code{type, dimension(:[,:]...) :: a}
2555 @item @tab @code{integer len}
2556 @item @emph{Interface}: @tab @code{subroutine acc_pcreate(a)}
2557 @item @tab @code{type, dimension(:[,:]...) :: a}
2558 @item @emph{Interface}: @tab @code{subroutine acc_pcreate(a, len)}
2559 @item @tab @code{type, dimension(:[,:]...) :: a}
2560 @item @tab @code{integer len}
2563 @item @emph{Reference}:
2564 @uref{https://www.openacc.org, OpenACC specification v2.6}, section
2571 @section @code{acc_copyout} -- Copy device memory to host memory.
2573 @item @emph{Description}
2574 This function copies mapped device memory to host memory which is specified
2575 by the host address @var{a} for a length of @var{len} bytes in C/C++.
2577 In Fortran, two forms are supported. In the first form, @var{a} specifies
2578 a contiguous array section. In the second form, @var{a} specifies a variable or
2579 array element and @var{len} specifies the length in bytes.
2582 @multitable @columnfractions .20 .80
2583 @item @emph{Prototype}: @tab @code{acc_copyout(h_void *a, size_t len);}
2584 @item @emph{Prototype}: @tab @code{acc_copyout_async(h_void *a, size_t len, int async);}
2585 @item @emph{Prototype}: @tab @code{acc_copyout_finalize(h_void *a, size_t len);}
2586 @item @emph{Prototype}: @tab @code{acc_copyout_finalize_async(h_void *a, size_t len, int async);}
2589 @item @emph{Fortran}:
2590 @multitable @columnfractions .20 .80
2591 @item @emph{Interface}: @tab @code{subroutine acc_copyout(a)}
2592 @item @tab @code{type, dimension(:[,:]...) :: a}
2593 @item @emph{Interface}: @tab @code{subroutine acc_copyout(a, len)}
2594 @item @tab @code{type, dimension(:[,:]...) :: a}
2595 @item @tab @code{integer len}
2596 @item @emph{Interface}: @tab @code{subroutine acc_copyout_async(a, async)}
2597 @item @tab @code{type, dimension(:[,:]...) :: a}
2598 @item @tab @code{integer(acc_handle_kind) :: async}
2599 @item @emph{Interface}: @tab @code{subroutine acc_copyout_async(a, len, async)}
2600 @item @tab @code{type, dimension(:[,:]...) :: a}
2601 @item @tab @code{integer len}
2602 @item @tab @code{integer(acc_handle_kind) :: async}
2603 @item @emph{Interface}: @tab @code{subroutine acc_copyout_finalize(a)}
2604 @item @tab @code{type, dimension(:[,:]...) :: a}
2605 @item @emph{Interface}: @tab @code{subroutine acc_copyout_finalize(a, len)}
2606 @item @tab @code{type, dimension(:[,:]...) :: a}
2607 @item @tab @code{integer len}
2608 @item @emph{Interface}: @tab @code{subroutine acc_copyout_finalize_async(a, async)}
2609 @item @tab @code{type, dimension(:[,:]...) :: a}
2610 @item @tab @code{integer(acc_handle_kind) :: async}
2611 @item @emph{Interface}: @tab @code{subroutine acc_copyout_finalize_async(a, len, async)}
2612 @item @tab @code{type, dimension(:[,:]...) :: a}
2613 @item @tab @code{integer len}
2614 @item @tab @code{integer(acc_handle_kind) :: async}
2617 @item @emph{Reference}:
2618 @uref{https://www.openacc.org, OpenACC specification v2.6}, section
2625 @section @code{acc_delete} -- Free device memory.
2627 @item @emph{Description}
2628 This function frees the device memory previously allocated for the host
2629 address @var{a} and a length of @var{len} bytes.
2631 In Fortran, two forms are supported. In the first form, @var{a} specifies
2632 a contiguous array section. In the second form, @var{a} specifies a variable or
2633 array element and @var{len} specifies the length in bytes.
2636 @multitable @columnfractions .20 .80
2637 @item @emph{Prototype}: @tab @code{acc_delete(h_void *a, size_t len);}
2638 @item @emph{Prototype}: @tab @code{acc_delete_async(h_void *a, size_t len, int async);}
2639 @item @emph{Prototype}: @tab @code{acc_delete_finalize(h_void *a, size_t len);}
2640 @item @emph{Prototype}: @tab @code{acc_delete_finalize_async(h_void *a, size_t len, int async);}
2643 @item @emph{Fortran}:
2644 @multitable @columnfractions .20 .80
2645 @item @emph{Interface}: @tab @code{subroutine acc_delete(a)}
2646 @item @tab @code{type, dimension(:[,:]...) :: a}
2647 @item @emph{Interface}: @tab @code{subroutine acc_delete(a, len)}
2648 @item @tab @code{type, dimension(:[,:]...) :: a}
2649 @item @tab @code{integer len}
2650 @item @emph{Interface}: @tab @code{subroutine acc_delete_async(a, async)}
2651 @item @tab @code{type, dimension(:[,:]...) :: a}
2652 @item @tab @code{integer(acc_handle_kind) :: async}
2653 @item @emph{Interface}: @tab @code{subroutine acc_delete_async(a, len, async)}
2654 @item @tab @code{type, dimension(:[,:]...) :: a}
2655 @item @tab @code{integer len}
2656 @item @tab @code{integer(acc_handle_kind) :: async}
2657 @item @emph{Interface}: @tab @code{subroutine acc_delete_finalize(a)}
2658 @item @tab @code{type, dimension(:[,:]...) :: a}
2659 @item @emph{Interface}: @tab @code{subroutine acc_delete_finalize(a, len)}
2660 @item @tab @code{type, dimension(:[,:]...) :: a}
2661 @item @tab @code{integer len}
2662 @item @emph{Interface}: @tab @code{subroutine acc_delete_finalize_async(a, async)}
2663 @item @tab @code{type, dimension(:[,:]...) :: a}
2664 @item @tab @code{integer(acc_handle_kind) :: async}
2665 @item @emph{Interface}: @tab @code{subroutine acc_delete_finalize_async(a, len, async)}
2666 @item @tab @code{type, dimension(:[,:]...) :: a}
2667 @item @tab @code{integer len}
2668 @item @tab @code{integer(acc_handle_kind) :: async}
2671 @item @emph{Reference}:
2672 @uref{https://www.openacc.org, OpenACC specification v2.6}, section
2678 @node acc_update_device
2679 @section @code{acc_update_device} -- Update device memory from mapped host memory.
2681 @item @emph{Description}
2682 This function updates the device copy from the previously mapped host memory.
2683 The host memory is specified with the host address @var{a} and a length of
2686 In Fortran, two forms are supported. In the first form, @var{a} specifies
2687 a contiguous array section. In the second form, @var{a} specifies a variable or
2688 array element and @var{len} specifies the length in bytes.
2691 @multitable @columnfractions .20 .80
2692 @item @emph{Prototype}: @tab @code{acc_update_device(h_void *a, size_t len);}
2693 @item @emph{Prototype}: @tab @code{acc_update_device_async(h_void *a, size_t len, int async);}
2696 @item @emph{Fortran}:
2697 @multitable @columnfractions .20 .80
2698 @item @emph{Interface}: @tab @code{subroutine acc_update_device(a)}
2699 @item @tab @code{type, dimension(:[,:]...) :: a}
2700 @item @emph{Interface}: @tab @code{subroutine acc_update_device(a, len)}
2701 @item @tab @code{type, dimension(:[,:]...) :: a}
2702 @item @tab @code{integer len}
2703 @item @emph{Interface}: @tab @code{subroutine acc_update_device_async(a, async)}
2704 @item @tab @code{type, dimension(:[,:]...) :: a}
2705 @item @tab @code{integer(acc_handle_kind) :: async}
2706 @item @emph{Interface}: @tab @code{subroutine acc_update_device_async(a, len, async)}
2707 @item @tab @code{type, dimension(:[,:]...) :: a}
2708 @item @tab @code{integer len}
2709 @item @tab @code{integer(acc_handle_kind) :: async}
2712 @item @emph{Reference}:
2713 @uref{https://www.openacc.org, OpenACC specification v2.6}, section
2719 @node acc_update_self
2720 @section @code{acc_update_self} -- Update host memory from mapped device memory.
2722 @item @emph{Description}
2723 This function updates the host copy from the previously mapped device memory.
2724 The host memory is specified with the host address @var{a} and a length of
2727 In Fortran, two forms are supported. In the first form, @var{a} specifies
2728 a contiguous array section. In the second form, @var{a} specifies a variable or
2729 array element and @var{len} specifies the length in bytes.
2732 @multitable @columnfractions .20 .80
2733 @item @emph{Prototype}: @tab @code{acc_update_self(h_void *a, size_t len);}
2734 @item @emph{Prototype}: @tab @code{acc_update_self_async(h_void *a, size_t len, int async);}
2737 @item @emph{Fortran}:
2738 @multitable @columnfractions .20 .80
2739 @item @emph{Interface}: @tab @code{subroutine acc_update_self(a)}
2740 @item @tab @code{type, dimension(:[,:]...) :: a}
2741 @item @emph{Interface}: @tab @code{subroutine acc_update_self(a, len)}
2742 @item @tab @code{type, dimension(:[,:]...) :: a}
2743 @item @tab @code{integer len}
2744 @item @emph{Interface}: @tab @code{subroutine acc_update_self_async(a, async)}
2745 @item @tab @code{type, dimension(:[,:]...) :: a}
2746 @item @tab @code{integer(acc_handle_kind) :: async}
2747 @item @emph{Interface}: @tab @code{subroutine acc_update_self_async(a, len, async)}
2748 @item @tab @code{type, dimension(:[,:]...) :: a}
2749 @item @tab @code{integer len}
2750 @item @tab @code{integer(acc_handle_kind) :: async}
2753 @item @emph{Reference}:
2754 @uref{https://www.openacc.org, OpenACC specification v2.6}, section
2761 @section @code{acc_map_data} -- Map previously allocated device memory to host memory.
2763 @item @emph{Description}
2764 This function maps previously allocated device and host memory. The device
2765 memory is specified with the device address @var{d}. The host memory is
2766 specified with the host address @var{h} and a length of @var{len}.
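For example (a sketch), memory obtained from @code{acc_malloc}
(@ref{acc_malloc}) can be exposed as the device copy of an existing host
buffer, and the mapping removed again with @code{acc_unmap_data}
(@ref{acc_unmap_data}):

@smallexample
#include <openacc.h>
#include <stdlib.h>

int
main (void)
@{
  size_t len = 1024 * sizeof (float);
  float *host = malloc (len);

  /* Allocate device memory and establish the host/device mapping.  */
  void *dev = acc_malloc (len);
  acc_map_data (host, dev, len);

  /* ... 'host' is now present on the device ...  */

  acc_unmap_data (host);
  acc_free (dev);
  free (host);
  return 0;
@}
@end smallexample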
2769 @multitable @columnfractions .20 .80
2770 @item @emph{Prototype}: @tab @code{acc_map_data(h_void *h, d_void *d, size_t len);}
2773 @item @emph{Reference}:
2774 @uref{https://www.openacc.org, OpenACC specification v2.6}, section
2780 @node acc_unmap_data
2781 @section @code{acc_unmap_data} -- Unmap device memory from host memory.
2783 @item @emph{Description}
2784 This function unmaps previously mapped device and host memory. The latter
2785 is specified by @var{h}.
2788 @multitable @columnfractions .20 .80
2789 @item @emph{Prototype}: @tab @code{acc_unmap_data(h_void *h);}
2792 @item @emph{Reference}:
2793 @uref{https://www.openacc.org, OpenACC specification v2.6}, section
2800 @section @code{acc_deviceptr} -- Get device pointer associated with specific host address.
2802 @item @emph{Description}
2803 This function returns the device address that has been mapped to the
2804 host address specified by @var{h}.
2807 @multitable @columnfractions .20 .80
2808 @item @emph{Prototype}: @tab @code{void *acc_deviceptr(h_void *h);}
2811 @item @emph{Reference}:
2812 @uref{https://www.openacc.org, OpenACC specification v2.6}, section
2819 @section @code{acc_hostptr} -- Get host pointer associated with specific device address.
2821 @item @emph{Description}
2822 This function returns the host address that has been mapped to the
2823 device address specified by @var{d}.
2826 @multitable @columnfractions .20 .80
2827 @item @emph{Prototype}: @tab @code{void *acc_hostptr(d_void *d);}
2830 @item @emph{Reference}:
2831 @uref{https://www.openacc.org, OpenACC specification v2.6}, section
2837 @node acc_is_present
2838 @section @code{acc_is_present} -- Indicate whether host variable / array is present on device.
2840 @item @emph{Description}
2841 This function indicates whether the specified host address in @var{a} and a
2842 length of @var{len} bytes is present on the device. In C/C++, a non-zero
2843 value is returned to indicate the presence of the mapped memory on the
2844 device. A zero is returned to indicate the memory is not mapped on the
2847 In Fortran, two forms are supported. In the first form, @var{a} specifies
2848 a contiguous array section. In the second form, @var{a} specifies a variable or
2849 array element and @var{len} specifies the length in bytes. If the host
2850 memory is mapped to device memory, then @code{true} is returned. Otherwise,
2851 @code{false} is returned to indicate that the mapped memory is not present.
2854 @multitable @columnfractions .20 .80
2855 @item @emph{Prototype}: @tab @code{int acc_is_present(h_void *a, size_t len);}
2858 @item @emph{Fortran}:
2859 @multitable @columnfractions .20 .80
2860 @item @emph{Interface}: @tab @code{function acc_is_present(a)}
2861 @item @tab @code{type, dimension(:[,:]...) :: a}
2862 @item @tab @code{logical acc_is_present}
2863 @item @emph{Interface}: @tab @code{function acc_is_present(a, len)}
2864 @item @tab @code{type, dimension(:[,:]...) :: a}
2865 @item @tab @code{integer len}
2866 @item @tab @code{logical acc_is_present}
2869 @item @emph{Reference}:
2870 @uref{https://www.openacc.org, OpenACC specification v2.6}, section
2876 @node acc_memcpy_to_device
2877 @section @code{acc_memcpy_to_device} -- Copy host memory to device memory.
2879 @item @emph{Description}
2880 This function copies host memory specified by the host address @var{src} to
2881 device memory specified by the device address @var{dest} for a length of
2885 @multitable @columnfractions .20 .80
2886 @item @emph{Prototype}: @tab @code{acc_memcpy_to_device(d_void *dest, h_void *src, size_t bytes);}
2889 @item @emph{Reference}:
2890 @uref{https://www.openacc.org, OpenACC specification v2.6}, section
2896 @node acc_memcpy_from_device
2897 @section @code{acc_memcpy_from_device} -- Copy device memory to host memory.
2899 @item @emph{Description}
2900 This function copies device memory specified by the device address @var{src} to
2901 host memory specified by the host address @var{dest} for a length of
2905 @multitable @columnfractions .20 .80
2906 @item @emph{Prototype}: @tab @code{acc_memcpy_from_device(h_void *dest, d_void *src, size_t bytes);}
2909 @item @emph{Reference}:
2910 @uref{https://www.openacc.org, OpenACC specification v2.6}, section
2917 @section @code{acc_attach} -- Let device pointer point to device-pointer target.
2919 @item @emph{Description}
2920 This function updates a pointer on the device from pointing to a host-pointer
2921 address to pointing to the corresponding device data.
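For example (a sketch), after mapping a structure and the array its pointer
member refers to, @code{acc_attach} makes the device copy of the pointer
refer to the device copy of the array; @code{acc_detach} (@ref{acc_detach})
undoes this:

@smallexample
#include <openacc.h>
#include <stdlib.h>

struct vec @{ float *data; size_t n; @};

int
main (void)
@{
  struct vec v;
  v.n = 1024;
  v.data = malloc (v.n * sizeof (float));

  /* Map the structure and the array it points to.  */
  acc_copyin (&v, sizeof v);
  acc_copyin (v.data, v.n * sizeof (float));

  /* Let the device copy of v.data point to the device copy of the array.  */
  acc_attach ((void **) &v.data);

  /* ... device code may now follow v.data ...  */

  acc_detach ((void **) &v.data);
  acc_delete (v.data, v.n * sizeof (float));
  acc_delete (&v, sizeof v);
  free (v.data);
  return 0;
@}
@end smallexample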
2924 @multitable @columnfractions .20 .80
2925 @item @emph{Prototype}: @tab @code{acc_attach(h_void **ptr);}
2926 @item @emph{Prototype}: @tab @code{acc_attach_async(h_void **ptr, int async);}
2929 @item @emph{Reference}:
2930 @uref{https://www.openacc.org, OpenACC specification v2.6}, section
2937 @section @code{acc_detach} -- Let device pointer point to host-pointer target.
2939 @item @emph{Description}
2940 This function updates a pointer on the device from pointing to a device-pointer
2941 address to pointing to the corresponding host data.
2944 @multitable @columnfractions .20 .80
2945 @item @emph{Prototype}: @tab @code{acc_detach(h_void **ptr);}
2946 @item @emph{Prototype}: @tab @code{acc_detach_async(h_void **ptr, int async);}
2947 @item @emph{Prototype}: @tab @code{acc_detach_finalize(h_void **ptr);}
2948 @item @emph{Prototype}: @tab @code{acc_detach_finalize_async(h_void **ptr, int async);}
2951 @item @emph{Reference}:
2952 @uref{https://www.openacc.org, OpenACC specification v2.6}, section
2958 @node acc_get_current_cuda_device
2959 @section @code{acc_get_current_cuda_device} -- Get CUDA device handle.
2961 @item @emph{Description}
2962 This function returns the CUDA device handle. This handle is the same
2963 as used by the CUDA Runtime or Driver APIs.
2966 @multitable @columnfractions .20 .80
2967 @item @emph{Prototype}: @tab @code{void *acc_get_current_cuda_device(void);}
2970 @item @emph{Reference}:
2971 @uref{https://www.openacc.org, OpenACC specification v2.6}, section
2977 @node acc_get_current_cuda_context
2978 @section @code{acc_get_current_cuda_context} -- Get CUDA context handle.
2980 @item @emph{Description}
2981 This function returns the CUDA context handle. This handle is the same
2982 as used by the CUDA Runtime or Driver APIs.
2985 @multitable @columnfractions .20 .80
2986 @item @emph{Prototype}: @tab @code{void *acc_get_current_cuda_context(void);}
2989 @item @emph{Reference}:
2990 @uref{https://www.openacc.org, OpenACC specification v2.6}, section
2996 @node acc_get_cuda_stream
2997 @section @code{acc_get_cuda_stream} -- Get CUDA stream handle.
2999 @item @emph{Description}
3000 This function returns the CUDA stream handle for the queue @var{async}.
3001 This handle is the same as used by the CUDA Runtime or Driver APIs.
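For example (a sketch; the cast to the CUDA stream type and the actual CUDA
library calls are omitted), the stream backing an OpenACC queue can be
obtained and handed to CUDA routines that should be ordered with respect to
that queue:

@smallexample
#include <openacc.h>

void
scale (float *a, int n)
@{
  #pragma acc parallel loop async(1) copy(a[0:n])
  for (int i = 0; i < n; i++)
    a[i] *= 2.0f;

  /* CUDA stream underlying OpenACC queue 1; may be cast to
     cudaStream_t (Runtime API) or CUstream (Driver API).  */
  void *stream = acc_get_cuda_stream (1);
  (void) stream;

  #pragma acc wait(1)
@}
@end smallexample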
3004 @multitable @columnfractions .20 .80
3005 @item @emph{Prototype}: @tab @code{void *acc_get_cuda_stream(int async);}
3008 @item @emph{Reference}:
3009 @uref{https://www.openacc.org, OpenACC specification v2.6}, section
3015 @node acc_set_cuda_stream
3016 @section @code{acc_set_cuda_stream} -- Set CUDA stream handle.
3018 @item @emph{Description}
3019 This function associates the stream handle specified by @var{stream} with
3020 the queue @var{async}.
3022 This cannot be used to change the stream handle associated with
3023 @code{acc_async_sync}.
3025 The return value is not specified.
3028 @multitable @columnfractions .20 .80
3029 @item @emph{Prototype}: @tab @code{int acc_set_cuda_stream(int async, void *stream);}
3032 @item @emph{Reference}:
3033 @uref{https://www.openacc.org, OpenACC specification v2.6}, section
3039 @node acc_prof_register
3040 @section @code{acc_prof_register} -- Register callbacks.
3042 @item @emph{Description}:
3043 This function registers callbacks.
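As an illustration (a sketch only; it assumes the @code{acc_prof.h} header
and the callback and event type names from the OpenACC 2.6 Profiling
Interface), an application can register and later unregister a callback for
kernel-launch events:

@smallexample
#include <acc_prof.h>
#include <stdio.h>

static void
launch_cb (acc_prof_info *pi, acc_event_info *ei, acc_api_info *ai)
@{
  (void) ei; (void) ai;
  fprintf (stderr, "kernel launch on device %d\n", pi->device_number);
@}

int
main (void)
@{
  /* Invoke launch_cb for every kernel launch enqueued by the runtime.  */
  acc_prof_register (acc_ev_enqueue_launch_start, launch_cb, acc_reg);

  /* ... OpenACC compute regions here trigger the callback ...  */

  acc_prof_unregister (acc_ev_enqueue_launch_start, launch_cb, acc_reg);
  return 0;
@}
@end smallexample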
3046 @multitable @columnfractions .20 .80
3047 @item @emph{Prototype}: @tab @code{void acc_prof_register (acc_event_t, acc_prof_callback, acc_register_t);}
3050 @item @emph{See also}:
3051 @ref{OpenACC Profiling Interface}
3053 @item @emph{Reference}:
3054 @uref{https://www.openacc.org, OpenACC specification v2.6}, section
3060 @node acc_prof_unregister
3061 @section @code{acc_prof_unregister} -- Unregister callbacks.
3063 @item @emph{Description}:
3064 This function unregisters callbacks.
3067 @multitable @columnfractions .20 .80
3068 @item @emph{Prototype}: @tab @code{void acc_prof_unregister (acc_event_t, acc_prof_callback, acc_register_t);}
3071 @item @emph{See also}:
3072 @ref{OpenACC Profiling Interface}
3074 @item @emph{Reference}:
3075 @uref{https://www.openacc.org, OpenACC specification v2.6}, section
3081 @node acc_prof_lookup
3082 @section @code{acc_prof_lookup} -- Obtain inquiry functions.
3084 @item @emph{Description}:
3085 Function to obtain inquiry functions.
3088 @multitable @columnfractions .20 .80
3089 @item @emph{Prototype}: @tab @code{acc_query_fn acc_prof_lookup (const char *);}
3092 @item @emph{See also}:
3093 @ref{OpenACC Profiling Interface}
3095 @item @emph{Reference}:
3096 @uref{https://www.openacc.org, OpenACC specification v2.6}, section
3102 @node acc_register_library
3103 @section @code{acc_register_library} -- Library registration.
3105 @item @emph{Description}:
3106 Function for library registration.
3109 @multitable @columnfractions .20 .80
3110 @item @emph{Prototype}: @tab @code{void acc_register_library (acc_prof_reg, acc_prof_reg, acc_prof_lookup_func);}
3113 @item @emph{See also}:
3114 @ref{OpenACC Profiling Interface}, @ref{ACC_PROFLIB}
3116 @item @emph{Reference}:
3117 @uref{https://www.openacc.org, OpenACC specification v2.6}, section
3123 @c ---------------------------------------------------------------------
3124 @c OpenACC Environment Variables
3125 @c ---------------------------------------------------------------------
3127 @node OpenACC Environment Variables
3128 @chapter OpenACC Environment Variables
3130 The variables @env{ACC_DEVICE_TYPE} and @env{ACC_DEVICE_NUM}
3131 are defined by section 4 of the OpenACC specification in version 2.0.
3132 The variable @env{ACC_PROFLIB}
3133 is defined by section 4 of the OpenACC specification in version 2.6.
3134 The variable @env{GCC_ACC_NOTIFY} is used for diagnostic purposes.
3145 @node ACC_DEVICE_TYPE
3146 @section @code{ACC_DEVICE_TYPE}
3148 @item @emph{Reference}:
3149 @uref{https://www.openacc.org, OpenACC specification v2.6}, section
3155 @node ACC_DEVICE_NUM
3156 @section @code{ACC_DEVICE_NUM}
3158 @item @emph{Reference}:
3159 @uref{https://www.openacc.org, OpenACC specification v2.6}, section
3166 @section @code{ACC_PROFLIB}
3168 @item @emph{See also}:
3169 @ref{acc_register_library}, @ref{OpenACC Profiling Interface}
3171 @item @emph{Reference}:
3172 @uref{https://www.openacc.org, OpenACC specification v2.6}, section
3178 @node GCC_ACC_NOTIFY
3179 @section @code{GCC_ACC_NOTIFY}
3181 @item @emph{Description}:
3182 Print debug information pertaining to the accelerator.
3187 @c ---------------------------------------------------------------------
3188 @c CUDA Streams Usage
3189 @c ---------------------------------------------------------------------
3191 @node CUDA Streams Usage
3192 @chapter CUDA Streams Usage
3194 This applies to the @code{nvptx} plugin only.
3196 The library provides elements that perform asynchronous movement of
3197 data and asynchronous operation of computing constructs. This
3198 asynchronous functionality is implemented by making use of CUDA
3199 streams@footnote{See "Stream Management" in "CUDA Driver API",
3200 TRM-06703-001, Version 5.5, for additional information}.
3202 The primary means by which the asynchronous functionality is accessed
3203 is through the use of those OpenACC directives which make use of the
3204 @code{async} and @code{wait} clauses. When the @code{async} clause is
3205 first used with a directive, it creates a CUDA stream. If an
3206 @code{async-argument} is used with the @code{async} clause, then the
3207 stream is associated with the specified @code{async-argument}.
3209 Following the creation of an association between a CUDA stream and the
3210 @code{async-argument} of an @code{async} clause, both the @code{wait}
3211 clause and the @code{wait} directive can be used. When either the
3212 clause or directive is used after stream creation, it creates a
3213 rendezvous point whereby execution waits until all operations
3214 associated with the @code{async-argument}, that is, stream, have
3217 Normally, the management of the streams that are created as a result of
3218 using the @code{async} clause is done without any intervention by the
3219 caller. This implies the association between the @code{async-argument}
3220 and the CUDA stream will be maintained for the lifetime of the program.
3221 However, this association can be changed through the use of the library
3222 function @code{acc_set_cuda_stream}. When the function
3223 @code{acc_set_cuda_stream} is called, the CUDA stream that was
3224 originally associated with the @code{async} clause will be destroyed.
3225 Caution should be taken when changing the association as subsequent
3226 references to the @code{async-argument} refer to a different
3231 @c ---------------------------------------------------------------------
3232 @c OpenACC Library Interoperability
3233 @c ---------------------------------------------------------------------
3235 @node OpenACC Library Interoperability
3236 @chapter OpenACC Library Interoperability
3238 @section Introduction
3240 The OpenACC library uses the CUDA Driver API, and may interact with
3241 programs that use the Runtime library directly, or another library
3242 based on the Runtime library, e.g., CUBLAS@footnote{See section 2.26,
3243 "Interactions with the CUDA Driver API" in
3244 "CUDA Runtime API", Version 5.5, and section 2.27, "VDPAU
3245 Interoperability", in "CUDA Driver API", TRM-06703-001, Version 5.5,
3246 for additional information on library interoperability.}.
3247 This chapter describes the use cases and what changes are
3248 required in order to use both the OpenACC library and the CUBLAS and Runtime
3249 libraries within a program.
3251 @section First invocation: NVIDIA CUBLAS library API
3253 In this first use case (see below), a function in the CUBLAS library is called
3254 prior to any of the functions in the OpenACC library. More specifically, the
3255 function @code{cublasCreate()}.
3257 When invoked, the function initializes the library and allocates the
3258 hardware resources on the host and the device on behalf of the caller. Once
3259 the initialization and allocation has completed, a handle is returned to the
3260 caller. The OpenACC library also requires initialization and allocation of
3261 hardware resources. Since the CUBLAS library has already allocated the
3262 hardware resources for the device, all that is left to do is to initialize
3263 the OpenACC library and acquire the hardware resources on the host.
3265 Prior to calling the OpenACC function that initializes the library and
3266 allocates the host hardware resources, you need to acquire the device number
3267 that was allocated during the call to @code{cublasCreate()}. Invoking the
3268 runtime library function @code{cudaGetDevice()} accomplishes this. Once
3269 acquired, the device number is passed along with the device type as
3270 parameters to the OpenACC library function @code{acc_set_device_num()}.
3272 Once the call to @code{acc_set_device_num()} has completed, the OpenACC
3273 library uses the context that was created during the call to
3274 @code{cublasCreate()}. In other words, both libraries will be sharing the
3278 /* Create the handle */
3279 s = cublasCreate(&h);
3280 if (s != CUBLAS_STATUS_SUCCESS)
3282 fprintf(stderr, "cublasCreate failed %d\n", s);
3286 /* Get the device number */
3287 e = cudaGetDevice(&dev);
3288 if (e != cudaSuccess)
3290 fprintf(stderr, "cudaGetDevice failed %d\n", e);
3294 /* Initialize OpenACC library and use device 'dev' */
3295 acc_set_device_num(dev, acc_device_nvidia);
3300 @section First invocation: OpenACC library API
3302 In this second use case (see below), a function in the OpenACC library is
3303 called prior to any of the functions in the CUBLAS library. More specifically,
3304 the function @code{acc_set_device_num()}.
3306 In the use case presented here, the function @code{acc_set_device_num()}
3307 is used to both initialize the OpenACC library and allocate the hardware
3308 resources on the host and the device. In the call to the function, the
3309 call parameters specify which device to use and what device
3310 type to use, i.e., @code{acc_device_nvidia}. It should be noted that this
3311 is but one method to initialize the OpenACC library and allocate the
3312 appropriate hardware resources. Other methods are available through the
3313 use of environment variables and these will be discussed in the next section.
3315 Once the call to @code{acc_set_device_num()} has completed, other OpenACC
3316 functions can be called as seen with multiple calls being made to
3317 @code{acc_copyin()}. In addition, calls can be made to functions in the
3318 CUBLAS library. In the use case a call to @code{cublasCreate()} is made
3319 subsequent to the calls to @code{acc_copyin()}.
3320 As seen in the previous use case, a call to @code{cublasCreate()}
3321 initializes the CUBLAS library and allocates the hardware resources on the
3322 host and the device. However, since the device has already been allocated,
3323 @code{cublasCreate()} will only initialize the CUBLAS library and allocate
3324 the appropriate hardware resources on the host. The context that was created
3325 as part of the OpenACC initialization is shared with the CUBLAS library,
3326 similarly to the first use case.
3331 acc_set_device_num(dev, acc_device_nvidia);
3333 /* Copy the first set to the device */
3334 d_X = acc_copyin(&h_X[0], N * sizeof (float));
3337 fprintf(stderr, "copyin error h_X\n");
3341 /* Copy the second set to the device */
3342 d_Y = acc_copyin(&h_Y1[0], N * sizeof (float));
3345 fprintf(stderr, "copyin error h_Y1\n");
3349 /* Create the handle */
3350 s = cublasCreate(&h);
3351 if (s != CUBLAS_STATUS_SUCCESS)
3353 fprintf(stderr, "cublasCreate failed %d\n", s);
3357 /* Perform saxpy using CUBLAS library function */
3358 s = cublasSaxpy(h, N, &alpha, d_X, 1, d_Y, 1);
3359 if (s != CUBLAS_STATUS_SUCCESS)
3361 fprintf(stderr, "cublasSaxpy failed %d\n", s);
3365 /* Copy the results from the device */
3366 acc_memcpy_from_device(&h_Y1[0], d_Y, N * sizeof (float));
3371 @section OpenACC library and environment variables
3373 There are two environment variables associated with the OpenACC library
3374 that may be used to control the device type and device number:
3375 @env{ACC_DEVICE_TYPE} and @env{ACC_DEVICE_NUM}, respectively. These two
3376 environment variables can be used as an alternative to calling
3377 @code{acc_set_device_num()}. As seen in the second use case, the device
3378 type and device number were specified using @code{acc_set_device_num()}.
3379 If, however, the aforementioned environment variables were set, then the
3380 call to @code{acc_set_device_num()} would not be required.

The use of the environment variables is only relevant when an OpenACC function
is called prior to a call to @code{cublasCreate()}.  If @code{cublasCreate()}
is called prior to a call to an OpenACC function, then you must call
@code{acc_set_device_num()}.@footnote{More complete information
about @env{ACC_DEVICE_TYPE} and @env{ACC_DEVICE_NUM} can be found in
sections 4.1 and 4.2 of the @uref{https://www.openacc.org, OpenACC}
Application Programming Interface, Version 2.6.}
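
For illustration, a hypothetical variant of the second use case that relies
on the environment variables instead of calling @code{acc_set_device_num()}
might look as follows.  This is only a sketch: it assumes
@env{ACC_DEVICE_TYPE}=@code{nvidia} and @env{ACC_DEVICE_NUM}=@code{0} are set
in the environment, the function name is made up, and error checking is
omitted for brevity.

@smallexample
#include <openacc.h>
#include <cublas_v2.h>

void
saxpy_env (int n, float alpha, float *h_X, float *h_Y)
@{
  /* The first OpenACC call initializes the library and selects the
     device named by ACC_DEVICE_TYPE / ACC_DEVICE_NUM.  */
  float *d_X = acc_copyin (h_X, n * sizeof (float));
  float *d_Y = acc_copyin (h_Y, n * sizeof (float));
  cublasHandle_t handle;

  /* cublasCreate() shares the context created by the OpenACC library.  */
  cublasCreate (&handle);
  cublasSaxpy (handle, n, &alpha, d_X, 1, d_Y, 1);
  cublasDestroy (handle);

  acc_memcpy_from_device (h_Y, d_Y, n * sizeof (float));
  acc_delete (h_Y, n * sizeof (float));
  acc_delete (h_X, n * sizeof (float));
@}
@end smallexample
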
3393 @c ---------------------------------------------------------------------
3394 @c OpenACC Profiling Interface
3395 @c ---------------------------------------------------------------------
3397 @node OpenACC Profiling Interface
3398 @chapter OpenACC Profiling Interface
3400 @section Implementation Status and Implementation-Defined Behavior
3402 We're implementing the OpenACC Profiling Interface as defined by the
3403 OpenACC 2.6 specification. We're clarifying some aspects here as
3404 @emph{implementation-defined behavior}, while they're still under
3405 discussion within the OpenACC Technical Committee.
3407 This implementation is tuned to keep the performance impact as low as
3408 possible for the (very common) case that the Profiling Interface is
3409 not enabled. This is relevant, as the Profiling Interface affects all
3410 the @emph{hot} code paths (in the target code, not in the offloaded
code).  Users of the OpenACC Profiling Interface can be expected to
understand that performance will be impacted to some degree once the
Profiling Interface has been enabled: for example, because the
@emph{runtime} (libgomp) then calls into a third-party @emph{library} for
every event that has been registered.
3417 We're not yet accounting for the fact that @cite{OpenACC events may
3418 occur during event processing}.
We just handle one case specially, as required by CUDA 9.0
@command{nvprof}: @code{acc_get_device_type}
(@ref{acc_get_device_type}) may be called from
@code{acc_ev_device_init_start} and @code{acc_ev_device_init_end}
callbacks.

We're not yet implementing initialization via an
@code{acc_register_library} function that is either statically linked
in or dynamically loaded via @env{LD_PRELOAD}.
3428 Initialization via @code{acc_register_library} functions dynamically
3429 loaded via the @env{ACC_PROFLIB} environment variable does work, as
3430 does directly calling @code{acc_prof_register},
3431 @code{acc_prof_unregister}, @code{acc_prof_lookup}.
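
As an illustration, a minimal sketch of direct registration from application
code follows.  It assumes the @file{acc_prof.h} header installed by the
library and the callback prototype from the OpenACC 2.6 specification; the
function names @code{launch_cb} and @code{setup_profiling} are made up for
this example.

@smallexample
#include <stdio.h>
#include <openacc.h>
#include <acc_prof.h>

/* Invoked for every registered event; this one only logs the event type
   and device type taken from the first callback argument.  */
static void
launch_cb (acc_prof_info *prof_info, acc_event_info *event_info,
           acc_api_info *api_info)
@{
  (void) event_info;
  (void) api_info;
  fprintf (stderr, "launch event %d on device type %d\n",
           (int) prof_info->event_type, (int) prof_info->device_type);
@}

/* Call this early, for example from main(), to register the callback.  */
void
setup_profiling (void)
@{
  acc_prof_register (acc_ev_enqueue_launch_start, launch_cb, acc_reg);
@}
@end smallexample
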
3433 As currently there are no inquiry functions defined, calls to
3434 @code{acc_prof_lookup} will always return @code{NULL}.
3436 There aren't separate @emph{start}, @emph{stop} events defined for the
3437 event types @code{acc_ev_create}, @code{acc_ev_delete},
3438 @code{acc_ev_alloc}, @code{acc_ev_free}. It's not clear if these
3439 should be triggered before or after the actual device-specific call is
3440 made. We trigger them after.
3442 Remarks about data provided to callbacks:
3446 @item @code{acc_prof_info.event_type}
3447 It's not clear if for @emph{nested} event callbacks (for example,
3448 @code{acc_ev_enqueue_launch_start} as part of a parent compute
3449 construct), this should be set for the nested event
3450 (@code{acc_ev_enqueue_launch_start}), or if the value of the parent
3451 construct should remain (@code{acc_ev_compute_construct_start}). In
3452 this implementation, the value will generally correspond to the
3453 innermost nested event type.
3455 @item @code{acc_prof_info.device_type}
For @code{acc_ev_compute_construct_start}, and in the presence of an
@code{if} clause with a @emph{false} argument, this will still refer to
the offloading device type.
It's not clear if that's the expected behavior.

Complementary to the item before, for
@code{acc_ev_compute_construct_end}, this is set to
@code{acc_device_host} in the presence of an @code{if} clause with a
@emph{false} argument.
It's not clear if that's the expected behavior.
3473 @item @code{acc_prof_info.thread_id}
3474 Always @code{-1}; not yet implemented.
3476 @item @code{acc_prof_info.async}
3480 Not yet implemented correctly for
3481 @code{acc_ev_compute_construct_start}.
3484 In a compute construct, for host-fallback
3485 execution/@code{acc_device_host} it will always be
3486 @code{acc_async_sync}.
3487 It's not clear if that's the expected behavior.
3490 For @code{acc_ev_device_init_start} and @code{acc_ev_device_init_end},
3491 it will always be @code{acc_async_sync}.
3492 It's not clear if that's the expected behavior.
3496 @item @code{acc_prof_info.async_queue}
3497 There is no @cite{limited number of asynchronous queues} in libgomp.
3498 This will always have the same value as @code{acc_prof_info.async}.
3500 @item @code{acc_prof_info.src_file}
3501 Always @code{NULL}; not yet implemented.
3503 @item @code{acc_prof_info.func_name}
3504 Always @code{NULL}; not yet implemented.
3506 @item @code{acc_prof_info.line_no}
3507 Always @code{-1}; not yet implemented.
3509 @item @code{acc_prof_info.end_line_no}
3510 Always @code{-1}; not yet implemented.
3512 @item @code{acc_prof_info.func_line_no}
3513 Always @code{-1}; not yet implemented.
3515 @item @code{acc_prof_info.func_end_line_no}
3516 Always @code{-1}; not yet implemented.
3518 @item @code{acc_event_info.event_type}, @code{acc_event_info.*.event_type}
3519 Relating to @code{acc_prof_info.event_type} discussed above, in this
3520 implementation, this will always be the same value as
3521 @code{acc_prof_info.event_type}.
3523 @item @code{acc_event_info.*.parent_construct}
3527 Will be @code{acc_construct_parallel} for all OpenACC compute
3528 constructs as well as many OpenACC Runtime API calls; should be the
3529 one matching the actual construct, or
3530 @code{acc_construct_runtime_api}, respectively.
3533 Will be @code{acc_construct_enter_data} or
3534 @code{acc_construct_exit_data} when processing variable mappings
3535 specified in OpenACC @emph{declare} directives; should be
3536 @code{acc_construct_declare}.
3539 For implicit @code{acc_ev_device_init_start},
3540 @code{acc_ev_device_init_end}, and explicit as well as implicit
3541 @code{acc_ev_alloc}, @code{acc_ev_free},
3542 @code{acc_ev_enqueue_upload_start}, @code{acc_ev_enqueue_upload_end},
3543 @code{acc_ev_enqueue_download_start}, and
3544 @code{acc_ev_enqueue_download_end}, will be
@code{acc_construct_parallel}; should reflect the real parent construct.
3550 @item @code{acc_event_info.*.implicit}
3551 For @code{acc_ev_alloc}, @code{acc_ev_free},
3552 @code{acc_ev_enqueue_upload_start}, @code{acc_ev_enqueue_upload_end},
3553 @code{acc_ev_enqueue_download_start}, and
@code{acc_ev_enqueue_download_end}, this will currently be @code{1}
even for explicit usage.
3557 @item @code{acc_event_info.data_event.var_name}
3558 Always @code{NULL}; not yet implemented.
3560 @item @code{acc_event_info.data_event.host_ptr}
For @code{acc_ev_alloc} and @code{acc_ev_free}, this is always @code{NULL}.
3564 @item @code{typedef union acc_api_info}
@dots{} as printed in @cite{5.2.3. Third Argument: API-Specific
Information}.  This should obviously be @code{typedef @emph{struct}
acc_api_info}.
3569 @item @code{acc_api_info.device_api}
3570 Possibly not yet implemented correctly for
3571 @code{acc_ev_compute_construct_start},
3572 @code{acc_ev_device_init_start}, @code{acc_ev_device_init_end}:
3573 will always be @code{acc_device_api_none} for these event types.
3574 For @code{acc_ev_enter_data_start}, it will be
3575 @code{acc_device_api_none} in some cases.
3577 @item @code{acc_api_info.device_type}
3578 Always the same as @code{acc_prof_info.device_type}.
3580 @item @code{acc_api_info.vendor}
3581 Always @code{-1}; not yet implemented.
3583 @item @code{acc_api_info.device_handle}
3584 Always @code{NULL}; not yet implemented.
3586 @item @code{acc_api_info.context_handle}
3587 Always @code{NULL}; not yet implemented.
3589 @item @code{acc_api_info.async_handle}
3590 Always @code{NULL}; not yet implemented.
3594 Remarks about certain event types:
3598 @item @code{acc_ev_device_init_start}, @code{acc_ev_device_init_end}
3602 @c See 'DEVICE_INIT_INSIDE_COMPUTE_CONSTRUCT' in
3603 @c 'libgomp.oacc-c-c++-common/acc_prof-kernels-1.c',
3604 @c 'libgomp.oacc-c-c++-common/acc_prof-parallel-1.c'.
When a compute construct triggers implicit
@code{acc_ev_device_init_start} and @code{acc_ev_device_init_end}
events, these currently aren't @emph{nested within} the corresponding
@code{acc_ev_compute_construct_start} and
@code{acc_ev_compute_construct_end}, but are instead observed
@emph{before} @code{acc_ev_compute_construct_start}.
It's not clear what to do here: the standard asks us to provide a lot of
details to the @code{acc_ev_compute_construct_start} callback, but how can
we do so without (implicitly) initializing a device first?

3616 Callbacks for these event types will not be invoked for calls to the
3617 @code{acc_set_device_type} and @code{acc_set_device_num} functions.
3618 It's not clear if they should be.
3622 @item @code{acc_ev_enter_data_start}, @code{acc_ev_enter_data_end}, @code{acc_ev_exit_data_start}, @code{acc_ev_exit_data_end}
3626 Callbacks for these event types will also be invoked for OpenACC
3627 @emph{host_data} constructs.
3628 It's not clear if they should be.
3631 Callbacks for these event types will also be invoked when processing
3632 variable mappings specified in OpenACC @emph{declare} directives.
3633 It's not clear if they should be.
Callbacks for the following event types will be invoked, but the dispatch
and the information provided therein have not yet been thoroughly reviewed:
3643 @item @code{acc_ev_alloc}
3644 @item @code{acc_ev_free}
3645 @item @code{acc_ev_update_start}, @code{acc_ev_update_end}
3646 @item @code{acc_ev_enqueue_upload_start}, @code{acc_ev_enqueue_upload_end}
3647 @item @code{acc_ev_enqueue_download_start}, @code{acc_ev_enqueue_download_end}
During device initialization and finalization, respectively,
callbacks for the following event types will not yet be invoked:
3654 @item @code{acc_ev_alloc}
3655 @item @code{acc_ev_free}
3658 Callbacks for the following event types have not yet been implemented,
3659 so currently won't be invoked:
3662 @item @code{acc_ev_device_shutdown_start}, @code{acc_ev_device_shutdown_end}
3663 @item @code{acc_ev_runtime_shutdown}
3664 @item @code{acc_ev_create}, @code{acc_ev_delete}
3665 @item @code{acc_ev_wait_start}, @code{acc_ev_wait_end}
For the following runtime library functions, not all expected
callbacks will be invoked (mostly concerning implicit device
initialization):
3673 @item @code{acc_get_num_devices}
3674 @item @code{acc_set_device_type}
3675 @item @code{acc_get_device_type}
3676 @item @code{acc_set_device_num}
3677 @item @code{acc_get_device_num}
3678 @item @code{acc_init}
3679 @item @code{acc_shutdown}
3682 Aside from implicit device initialization, for the following runtime
3683 library functions, no callbacks will be invoked for shared-memory
3684 offloading devices (it's not clear if they should be):
3687 @item @code{acc_malloc}
3688 @item @code{acc_free}
3689 @item @code{acc_copyin}, @code{acc_present_or_copyin}, @code{acc_copyin_async}
3690 @item @code{acc_create}, @code{acc_present_or_create}, @code{acc_create_async}
3691 @item @code{acc_copyout}, @code{acc_copyout_async}, @code{acc_copyout_finalize}, @code{acc_copyout_finalize_async}
3692 @item @code{acc_delete}, @code{acc_delete_async}, @code{acc_delete_finalize}, @code{acc_delete_finalize_async}
3693 @item @code{acc_update_device}, @code{acc_update_device_async}
3694 @item @code{acc_update_self}, @code{acc_update_self_async}
3695 @item @code{acc_map_data}, @code{acc_unmap_data}
3696 @item @code{acc_memcpy_to_device}, @code{acc_memcpy_to_device_async}
3697 @item @code{acc_memcpy_from_device}, @code{acc_memcpy_from_device_async}
3702 @c ---------------------------------------------------------------------
3704 @c ---------------------------------------------------------------------
3706 @node The libgomp ABI
3707 @chapter The libgomp ABI
3709 The following sections present notes on the external ABI as
3710 presented by libgomp. Only maintainers should need them.
3713 * Implementing MASTER construct::
3714 * Implementing CRITICAL construct::
3715 * Implementing ATOMIC construct::
3716 * Implementing FLUSH construct::
3717 * Implementing BARRIER construct::
3718 * Implementing THREADPRIVATE construct::
3719 * Implementing PRIVATE clause::
3720 * Implementing FIRSTPRIVATE LASTPRIVATE COPYIN and COPYPRIVATE clauses::
3721 * Implementing REDUCTION clause::
3722 * Implementing PARALLEL construct::
3723 * Implementing FOR construct::
3724 * Implementing ORDERED construct::
3725 * Implementing SECTIONS construct::
3726 * Implementing SINGLE construct::
3727 * Implementing OpenACC's PARALLEL construct::
3731 @node Implementing MASTER construct
3732 @section Implementing MASTER construct
if (omp_get_thread_num () == 0)
  block
Alternatively, we generate two copies of the parallel subfunction
3740 and only include this in the version run by the master thread.
3741 Surely this is not worthwhile though...
3745 @node Implementing CRITICAL construct
3746 @section Implementing CRITICAL construct
3748 Without a specified name,
3751 void GOMP_critical_start (void);
3752 void GOMP_critical_end (void);
so that we don't get COPY relocations from libgomp to the main
application.
3758 With a specified name, use omp_set_lock and omp_unset_lock with
3759 name being transformed into a variable declared like
3762 omp_lock_t gomp_critical_user_<name> __attribute__((common))
Ideally the ABI would specify that all-zero is a valid unlocked
state, and so we wouldn't need to initialize this at startup.
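
For illustration, a sketch of this lowering for an unnamed critical region
(the increment is just an example statement):

@smallexample
  #pragma omp critical
  x = x + 1;
@end smallexample

@noindent
becomes

@smallexample
  GOMP_critical_start ();
  x = x + 1;
  GOMP_critical_end ();
@end smallexample
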
3771 @node Implementing ATOMIC construct
3772 @section Implementing ATOMIC construct
3774 The target should implement the @code{__sync} builtins.
Failing that, we could add
3779 void GOMP_atomic_enter (void)
3780 void GOMP_atomic_exit (void)
3783 which reuses the regular lock code, but with yet another lock
3784 object private to the library.
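
For illustration, a sketch using the fallback entry points named above:

@smallexample
  #pragma omp atomic
  x += 1;
@end smallexample

@noindent
maps either to a builtin such as

@smallexample
  __sync_fetch_and_add (&x, 1);
@end smallexample

@noindent
or, in the fallback case, to

@smallexample
  GOMP_atomic_enter ();
  x += 1;
  GOMP_atomic_exit ();
@end smallexample
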
3788 @node Implementing FLUSH construct
3789 @section Implementing FLUSH construct
3791 Expands to the @code{__sync_synchronize} builtin.
3795 @node Implementing BARRIER construct
3796 @section Implementing BARRIER construct
3799 void GOMP_barrier (void)
3803 @node Implementing THREADPRIVATE construct
3804 @section Implementing THREADPRIVATE construct
In @emph{most} cases we can map this directly to @code{__thread}, except
that OMP allows constructors for C++ objects.  We can either
3808 refuse to support this (how often is it used?) or we can
3809 implement something akin to .ctors.
Ideally, this ctor feature would be handled by extensions
3812 to the main pthreads library. Failing that, we can have a set
3813 of entry points to register ctor functions to be called.
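
For the simple case without constructors, a sketch of the mapping:

@smallexample
  int tp;
  #pragma omp threadprivate (tp)
@end smallexample

@noindent
becomes, in effect,

@smallexample
  __thread int tp;
@end smallexample
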
3817 @node Implementing PRIVATE clause
3818 @section Implementing PRIVATE clause
3820 In association with a PARALLEL, or within the lexical extent
3821 of a PARALLEL block, the variable becomes a local variable in
3822 the parallel subfunction.
3824 In association with FOR or SECTIONS blocks, create a new
automatic variable within the current function.  This preserves
the semantics of new variable creation.
3830 @node Implementing FIRSTPRIVATE LASTPRIVATE COPYIN and COPYPRIVATE clauses
3831 @section Implementing FIRSTPRIVATE LASTPRIVATE COPYIN and COPYPRIVATE clauses
3833 This seems simple enough for PARALLEL blocks. Create a private
3834 struct for communicating between the parent and subfunction.
3835 In the parent, copy in values for scalar and "small" structs;
copy in addresses for other TREE_ADDRESSABLE types. In the
3837 subfunction, copy the value into the local variable.
3839 It is not clear what to do with bare FOR or SECTION blocks.
3840 The only thing I can figure is that we do something like:
@smallexample
  #pragma omp for firstprivate(x) lastprivate(y)
  for (int i = 0; i < n; ++i)
    body;
@end smallexample
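
@noindent
which might become something like the following sketch (the exact shape of
the emitted code differs; the comparison against @code{n} stands in for
``this thread ran the last iteration''):

@smallexample
  @{
    int x = x, y;

    /* the usual work-shared loop, as for a plain omp for */

    if (i == n)
      y = y;
  @}
@end smallexample
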
@noindent
where the @code{x = x} and @code{y = y} assignments actually have different
uids for the two variables, i.e., not something you could write directly
in C.  Presumably this only makes sense if the ``outer'' @code{x} and
@code{y} are global variables.
3866 COPYPRIVATE would work the same way, except the structure
3867 broadcast would have to happen via SINGLE machinery instead.
3871 @node Implementing REDUCTION clause
3872 @section Implementing REDUCTION clause
3874 The private struct mentioned in the previous section should have
3875 a pointer to an array of the type of the variable, indexed by the
3876 thread's @var{team_id}. The thread stores its final value into the
3877 array, and after the barrier, the master thread iterates over the
3878 array to collect the values.
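
A sketch of that arrangement follows; the structure and function names are
made up for illustration and the code GCC actually emits differs in detail:

@smallexample
#include <omp.h>

extern void GOMP_barrier (void);

struct omp_data_s
@{
  long *partial;   /* one element per team member, indexed by team id */
  long result;
@};

static void
reduction_tail (struct omp_data_s *data, long local_sum)
@{
  data->partial[omp_get_thread_num ()] = local_sum;
  GOMP_barrier ();
  if (omp_get_thread_num () == 0)
    @{
      int i;
      for (i = 0; i < omp_get_num_threads (); i++)
        data->result += data->partial[i];
    @}
@}
@end smallexample
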
3881 @node Implementing PARALLEL construct
3882 @section Implementing PARALLEL construct
3885 #pragma omp parallel
3894 void subfunction (void *data)
3901 GOMP_parallel_start (subfunction, &data, num_threads);
3902 subfunction (&data);
3903 GOMP_parallel_end ();
3907 void GOMP_parallel_start (void (*fn)(void *), void *data, unsigned num_threads)
3910 The @var{FN} argument is the subfunction to be run in parallel.
3912 The @var{DATA} argument is a pointer to a structure used to
3913 communicate data in and out of the subfunction, as discussed
3914 above with respect to FIRSTPRIVATE et al.
The @var{NUM_THREADS} argument is 1 if an IF clause is present
and false, or the value of the NUM_THREADS clause, if present, or 0.

The function needs to create the appropriate number of
threads and/or launch them from the dock (the pool of idle threads
waiting for work).  It needs to create the team structure and assign
team ids.
3925 void GOMP_parallel_end (void)
3928 Tears down the team and returns us to the previous @code{omp_in_parallel()} state.
3932 @node Implementing FOR construct
3933 @section Implementing FOR construct
3936 #pragma omp parallel for
3937 for (i = lb; i <= ub; i++)
3944 void subfunction (void *data)
3947 while (GOMP_loop_static_next (&_s0, &_e0))
3950 for (i = _s0; i < _e1; i++)
3953 GOMP_loop_end_nowait ();
3956 GOMP_parallel_loop_static (subfunction, NULL, 0, lb, ub+1, 1, 0);
3958 GOMP_parallel_end ();
3962 #pragma omp for schedule(runtime)
3963 for (i = 0; i < n; i++)
3972 if (GOMP_loop_runtime_start (0, n, 1, &_s0, &_e0))
for (i = _s0; i < _e0; i++)
@} while (GOMP_loop_runtime_next (&_s0, &_e0));
3982 Note that while it looks like there is trickiness to propagating
3983 a non-constant STEP, there isn't really. We're explicitly allowed
3984 to evaluate it as many times as we want, and any variables involved
3985 should automatically be handled as PRIVATE or SHARED like any other
3986 variables. So the expression should remain evaluable in the
subfunction.  We can also pull it into a local variable if we like,
but since it's supposed to remain unchanged, we don't have to.
3990 If we have SCHEDULE(STATIC), and no ORDERED, then we ought to be
3991 able to get away with no work-sharing context at all, since we can
simply perform the arithmetic directly in each thread to divide up
the iterations, which would mean that we wouldn't need to call any
of these routines.
3996 There are separate routines for handling loops with an ORDERED
3997 clause. Bookkeeping for that is non-trivial...
4001 @node Implementing ORDERED construct
4002 @section Implementing ORDERED construct
4005 void GOMP_ordered_start (void)
4006 void GOMP_ordered_end (void)
4011 @node Implementing SECTIONS construct
4012 @section Implementing SECTIONS construct
4017 #pragma omp sections
4031 for (i = GOMP_sections_start (3); i != 0; i = GOMP_sections_next ())
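
@noindent
where the loop body dispatches on the value returned.  A sketch for three
sections (illustrative only; the emitted code differs in detail):

@smallexample
  for (i = GOMP_sections_start (3); i != 0; i = GOMP_sections_next ())
    switch (i)
      @{
      case 1:
        /* body of first section */
        break;
      case 2:
        /* body of second section */
        break;
      case 3:
        /* body of third section */
        break;
      @}
  GOMP_sections_end ();
@end smallexample
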
4048 @node Implementing SINGLE construct
4049 @section Implementing SINGLE construct
4063 if (GOMP_single_start ())
4071 #pragma omp single copyprivate(x)
4078 datap = GOMP_single_copy_start ();
4083 GOMP_single_copy_end (&data);
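
Putting the @code{copyprivate} pieces together, a sketch of the whole
lowering (illustrative only; the emitted code differs in detail):

@smallexample
  struct @{ int x; @} data, *datap;

  datap = GOMP_single_copy_start ();
  if (datap == NULL)
    @{
      /* body of the single region, executed by exactly one thread */
      data.x = x;
      GOMP_single_copy_end (&data);
    @}
  else
    x = datap->x;
@end smallexample
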
4092 @node Implementing OpenACC's PARALLEL construct
4093 @section Implementing OpenACC's PARALLEL construct
4096 void GOACC_parallel ()
4101 @c ---------------------------------------------------------------------
4103 @c ---------------------------------------------------------------------
4105 @node Reporting Bugs
4106 @chapter Reporting Bugs
4108 Bugs in the GNU Offloading and Multi Processing Runtime Library should
be reported via @uref{https://gcc.gnu.org/bugzilla/, Bugzilla}.  Please add
``openacc'', ``openmp'', or both to the keywords field in the bug
report, as appropriate.
4115 @c ---------------------------------------------------------------------
4116 @c GNU General Public License
4117 @c ---------------------------------------------------------------------
4119 @include gpl_v3.texi
4123 @c ---------------------------------------------------------------------
4124 @c GNU Free Documentation License
4125 @c ---------------------------------------------------------------------
4131 @c ---------------------------------------------------------------------
4132 @c Funding Free Software
4133 @c ---------------------------------------------------------------------
4135 @include funding.texi
4137 @c ---------------------------------------------------------------------
4139 @c ---------------------------------------------------------------------
4142 @unnumbered Library Index