1 \input texinfo @c -*-texinfo-*-
4 @setfilename libgomp.info
10 Copyright @copyright{} 2006-2024 Free Software Foundation, Inc.
12 Permission is granted to copy, distribute and/or modify this document
13 under the terms of the GNU Free Documentation License, Version 1.3 or
14 any later version published by the Free Software Foundation; with the
15 Invariant Sections being ``Funding Free Software'', the Front-Cover
16 texts being (a) (see below), and with the Back-Cover Texts being (b)
17 (see below). A copy of the license is included in the section entitled
18 ``GNU Free Documentation License''.
20 (a) The FSF's Front-Cover Text is:
24 (b) The FSF's Back-Cover Text is:
26 You have freedom to copy and modify this GNU Manual, like GNU
27 software. Copies published by the Free Software Foundation raise
28 funds for GNU development.
32 @dircategory GNU Libraries
34 * libgomp: (libgomp). GNU Offloading and Multi Processing Runtime Library.
37 This manual documents libgomp, the GNU Offloading and Multi Processing
38 Runtime library. This is the GNU implementation of the OpenMP and
39 OpenACC APIs for parallel and accelerator programming in C/C++ and
42 Published by the Free Software Foundation
43 51 Franklin Street, Fifth Floor
44 Boston, MA 02110-1301 USA
50 @setchapternewpage odd
53 @title GNU Offloading and Multi Processing Runtime Library
54 @subtitle The GNU OpenMP and OpenACC Implementation
56 @vskip 0pt plus 1filll
57 @comment For the @value{version-GCC} Version*
59 Published by the Free Software Foundation @*
60 51 Franklin Street, Fifth Floor@*
61 Boston, MA 02110-1301, USA@*
71 @node Top, Enabling OpenMP
75 This manual documents the usage of libgomp, the GNU Offloading and
76 Multi Processing Runtime Library. This includes the GNU
77 implementation of the @uref{https://www.openmp.org, OpenMP} Application
78 Programming Interface (API) for multi-platform shared-memory parallel
79 programming in C/C++ and Fortran, and the GNU implementation of the
80 @uref{https://www.openacc.org, OpenACC} Application Programming
81 Interface (API) for offloading of code to accelerator devices in C/C++
84 Originally, libgomp implemented the GNU OpenMP Runtime Library. Based
85 on this, support for OpenACC and offloading (both OpenACC and OpenMP
86 4's target construct) has been added later on, and the library's name
87 changed to GNU Offloading and Multi Processing Runtime Library.
92 @comment When you add a new menu item, please keep the right hand
93 @comment aligned to the same column. Do not use tabs. This provides
94 @comment better formatting.
97 * Enabling OpenMP:: How to enable OpenMP for your applications.
98 * OpenMP Implementation Status:: List of implemented features by OpenMP version
99 * OpenMP Runtime Library Routines: Runtime Library Routines.
100 The OpenMP runtime application programming
102 * OpenMP Environment Variables: Environment Variables.
103 Influencing OpenMP runtime behavior with
104 environment variables.
105 * Enabling OpenACC:: How to enable OpenACC for your
107 * OpenACC Runtime Library Routines:: The OpenACC runtime application
108 programming interface.
109 * OpenACC Environment Variables:: Influencing OpenACC runtime behavior with
110 environment variables.
111 * CUDA Streams Usage:: Notes on the implementation of
112 asynchronous operations.
113 * OpenACC Library Interoperability:: OpenACC library interoperability with the
114 NVIDIA CUBLAS library.
115 * OpenACC Profiling Interface::
116 * OpenMP-Implementation Specifics:: Notes specifics of this OpenMP
118 * Offload-Target Specifics:: Notes on offload-target specific internals
119 * The libgomp ABI:: Notes on the external ABI presented by libgomp.
120 * Reporting Bugs:: How to report bugs in the GNU Offloading and
121 Multi Processing Runtime Library.
122 * Copying:: GNU general public license says
123 how you can copy and share libgomp.
124 * GNU Free Documentation License::
125 How you can copy and share this manual.
126 * Funding:: How to help assure continued work for free
128 * Library Index:: Index of this documentation.
132 @c ---------------------------------------------------------------------
134 @c ---------------------------------------------------------------------
136 @node Enabling OpenMP
137 @chapter Enabling OpenMP
139 To activate the OpenMP extensions for C/C++ and Fortran, the compile-time
140 flag @option{-fopenmp} must be specified. For C and C++, this enables
141 the handling of the OpenMP directives using @code{#pragma omp} and the
142 @code{[[omp::directive(...)]]}, @code{[[omp::sequence(...)]]} and
143 @code{[[omp::decl(...)]]} attributes. For Fortran, it enables for
144 free source form the @code{!$omp} sentinel for directives and the
145 @code{!$} conditional compilation sentinel and for fixed source form the
146 @code{c$omp}, @code{*$omp} and @code{!$omp} sentinels for directives and
147 the @code{c$}, @code{*$} and @code{!$} conditional compilation sentinels.
148 The flag also arranges for automatic linking of the OpenMP runtime library
149 (@ref{Runtime Library Routines}).
151 The @option{-fopenmp-simd} flag can be used to enable a subset of
152 OpenMP directives that do not require the linking of either the
153 OpenMP runtime library or the POSIX threads library.
155 A complete description of all OpenMP directives may be found in the
156 @uref{https://www.openmp.org, OpenMP Application Program Interface} manuals.
157 See also @ref{OpenMP Implementation Status}.
160 @c ---------------------------------------------------------------------
161 @c OpenMP Implementation Status
162 @c ---------------------------------------------------------------------
164 @node OpenMP Implementation Status
165 @chapter OpenMP Implementation Status
168 * OpenMP 4.5:: Feature completion status to 4.5 specification
169 * OpenMP 5.0:: Feature completion status to 5.0 specification
170 * OpenMP 5.1:: Feature completion status to 5.1 specification
171 * OpenMP 5.2:: Feature completion status to 5.2 specification
172 * OpenMP Technical Report 12:: Feature completion status to second 6.0 preview
175 The @code{_OPENMP} preprocessor macro and Fortran's @code{openmp_version}
176 parameter, provided by @code{omp_lib.h} and the @code{omp_lib} module, have
177 the value @code{201511} (i.e. OpenMP 4.5).
182 The OpenMP 4.5 specification is fully supported.
187 @unnumberedsubsec New features listed in Appendix B of the OpenMP specification
188 @c This list is sorted as in OpenMP 5.1's B.3 not as in OpenMP 5.0's B.2
190 @multitable @columnfractions .60 .10 .25
191 @headitem Description @tab Status @tab Comments
192 @item Array shaping @tab N @tab
193 @item Array sections with non-unit strides in C and C++ @tab N @tab
194 @item Iterators @tab Y @tab
195 @item @code{metadirective} directive @tab N @tab
196 @item @code{declare variant} directive
197 @tab P @tab @emph{simd} traits not handled correctly
198 @item @var{target-offload-var} ICV and @code{OMP_TARGET_OFFLOAD}
199 env variable @tab Y @tab
200 @item Nested-parallel changes to @var{max-active-levels-var} ICV @tab Y @tab
201 @item @code{requires} directive @tab P
202 @tab complete but no non-host device provides @code{unified_shared_memory}
203 @item @code{teams} construct outside an enclosing target region @tab Y @tab
204 @item Non-rectangular loop nests @tab P
205 @tab Full support for C/C++, partial for Fortran
206 (@uref{https://gcc.gnu.org/PR110735,PR110735})
207 @item @code{!=} as relational-op in canonical loop form for C/C++ @tab Y @tab
208 @item @code{nonmonotonic} as default loop schedule modifier for worksharing-loop
209 constructs @tab Y @tab
210 @item Collapse of associated loops that are imperfectly nested loops @tab Y @tab
211 @item Clauses @code{if}, @code{nontemporal} and @code{order(concurrent)} in
212 @code{simd} construct @tab Y @tab
213 @item @code{atomic} constructs in @code{simd} @tab Y @tab
214 @item @code{loop} construct @tab Y @tab
215 @item @code{order(concurrent)} clause @tab Y @tab
216 @item @code{scan} directive and @code{in_scan} modifier for the
217 @code{reduction} clause @tab Y @tab
218 @item @code{in_reduction} clause on @code{task} constructs @tab Y @tab
219 @item @code{in_reduction} clause on @code{target} constructs @tab P
220 @tab @code{nowait} only stub
221 @item @code{task_reduction} clause with @code{taskgroup} @tab Y @tab
222 @item @code{task} modifier to @code{reduction} clause @tab Y @tab
223 @item @code{affinity} clause to @code{task} construct @tab Y @tab Stub only
224 @item @code{detach} clause to @code{task} construct @tab Y @tab
225 @item @code{omp_fulfill_event} runtime routine @tab Y @tab
226 @item @code{reduction} and @code{in_reduction} clauses on @code{taskloop}
227 and @code{taskloop simd} constructs @tab Y @tab
228 @item @code{taskloop} construct cancelable by @code{cancel} construct
230 @item @code{mutexinoutset} @emph{dependence-type} for @code{depend} clause
232 @item Predefined memory spaces, memory allocators, allocator traits
233 @tab Y @tab See also @ref{Memory allocation}
234 @item Memory management routines @tab Y @tab
235 @item @code{allocate} directive @tab P
236 @tab Only C for stack/automatic and Fortran for stack/automatic
237 and allocatable/pointer variables
238 @item @code{allocate} clause @tab P @tab Initial support
239 @item @code{use_device_addr} clause on @code{target data} @tab Y @tab
240 @item @code{ancestor} modifier on @code{device} clause @tab Y @tab
241 @item Implicit declare target directive @tab Y @tab
242 @item Discontiguous array section with @code{target update} construct
244 @item C/C++'s lvalue expressions in @code{to}, @code{from}
245 and @code{map} clauses @tab Y @tab
246 @item C/C++'s lvalue expressions in @code{depend} clauses @tab Y @tab
247 @item Nested @code{declare target} directive @tab Y @tab
248 @item Combined @code{master} constructs @tab Y @tab
249 @item @code{depend} clause on @code{taskwait} @tab Y @tab
250 @item Weak memory ordering clauses on @code{atomic} and @code{flush} construct
252 @item @code{hint} clause on the @code{atomic} construct @tab Y @tab Stub only
253 @item @code{depobj} construct and depend objects @tab Y @tab
254 @item Lock hints were renamed to synchronization hints @tab Y @tab
255 @item @code{conditional} modifier to @code{lastprivate} clause @tab Y @tab
256 @item Map-order clarifications @tab P @tab
257 @item @code{close} @emph{map-type-modifier} @tab Y @tab
258 @item Mapping C/C++ pointer variables and to assign the address of
259 device memory mapped by an array section @tab P @tab
260 @item Mapping of Fortran pointer and allocatable variables, including pointer
261 and allocatable components of variables
262 @tab P @tab Mapping of vars with allocatable components unsupported
263 @item @code{defaultmap} extensions @tab Y @tab
264 @item @code{declare mapper} directive @tab N @tab
265 @item @code{omp_get_supported_active_levels} routine @tab Y @tab
266 @item Runtime routines and environment variables to display runtime thread
267 affinity information @tab Y @tab
268 @item @code{omp_pause_resource} and @code{omp_pause_resource_all} runtime
270 @item @code{omp_get_device_num} runtime routine @tab Y @tab
271 @item OMPT interface @tab N @tab
272 @item OMPD interface @tab N @tab
275 @unnumberedsubsec Other new OpenMP 5.0 features
277 @multitable @columnfractions .60 .10 .25
278 @headitem Description @tab Status @tab Comments
279 @item Supporting C++'s range-based for loop @tab Y @tab
286 @unnumberedsubsec New features listed in Appendix B of the OpenMP specification
288 @multitable @columnfractions .60 .10 .25
289 @headitem Description @tab Status @tab Comments
290 @item OpenMP directive as C++ attribute specifiers @tab Y @tab
291 @item @code{omp_all_memory} reserved locator @tab Y @tab
292 @item @emph{target_device trait} in OpenMP Context @tab N @tab
293 @item @code{target_device} selector set in context selectors @tab N @tab
294 @item C/C++'s @code{declare variant} directive: elision support of
295 preprocessed code @tab N @tab
296 @item @code{declare variant}: new clauses @code{adjust_args} and
297 @code{append_args} @tab N @tab
298 @item @code{dispatch} construct @tab N @tab
299 @item device-specific ICV settings with environment variables @tab Y @tab
300 @item @code{assume} and @code{assumes} directives @tab Y @tab
301 @item @code{nothing} directive @tab Y @tab
302 @item @code{error} directive @tab Y @tab
303 @item @code{masked} construct @tab Y @tab
304 @item @code{scope} directive @tab Y @tab
305 @item Loop transformation constructs @tab N @tab
306 @item @code{strict} modifier in the @code{grainsize} and @code{num_tasks}
307 clauses of the @code{taskloop} construct @tab Y @tab
308 @item @code{align} clause in @code{allocate} directive @tab P
309 @tab Only C and Fortran (and not for static variables)
310 @item @code{align} modifier in @code{allocate} clause @tab Y @tab
311 @item @code{thread_limit} clause to @code{target} construct @tab Y @tab
312 @item @code{has_device_addr} clause to @code{target} construct @tab Y @tab
313 @item Iterators in @code{target update} motion clauses and @code{map}
315 @item Indirect calls to the device version of a procedure or function in
316 @code{target} regions @tab P @tab Only C and C++
317 @item @code{interop} directive @tab N @tab
318 @item @code{omp_interop_t} object support in runtime routines @tab N @tab
319 @item @code{nowait} clause in @code{taskwait} directive @tab Y @tab
320 @item Extensions to the @code{atomic} directive @tab Y @tab
321 @item @code{seq_cst} clause on a @code{flush} construct @tab Y @tab
322 @item @code{inoutset} argument to the @code{depend} clause @tab Y @tab
323 @item @code{private} and @code{firstprivate} argument to @code{default}
324 clause in C and C++ @tab Y @tab
325 @item @code{present} argument to @code{defaultmap} clause @tab Y @tab
326 @item @code{omp_set_num_teams}, @code{omp_set_teams_thread_limit},
327 @code{omp_get_max_teams}, @code{omp_get_teams_thread_limit} runtime
329 @item @code{omp_target_is_accessible} runtime routine @tab Y @tab
330 @item @code{omp_target_memcpy_async} and @code{omp_target_memcpy_rect_async}
331 runtime routines @tab Y @tab
332 @item @code{omp_get_mapped_ptr} runtime routine @tab Y @tab
333 @item @code{omp_calloc}, @code{omp_realloc}, @code{omp_aligned_alloc} and
334 @code{omp_aligned_calloc} runtime routines @tab Y @tab
335 @item @code{omp_alloctrait_key_t} enum: @code{omp_atv_serialized} added,
336 @code{omp_atv_default} changed @tab Y @tab
337 @item @code{omp_display_env} runtime routine @tab Y @tab
338 @item @code{ompt_scope_endpoint_t} enum: @code{ompt_scope_beginend} @tab N @tab
339 @item @code{ompt_sync_region_t} enum additions @tab N @tab
340 @item @code{ompt_state_t} enum: @code{ompt_state_wait_barrier_implementation}
341 and @code{ompt_state_wait_barrier_teams} @tab N @tab
342 @item @code{ompt_callback_target_data_op_emi_t},
343 @code{ompt_callback_target_emi_t}, @code{ompt_callback_target_map_emi_t}
344 and @code{ompt_callback_target_submit_emi_t} @tab N @tab
345 @item @code{ompt_callback_error_t} type @tab N @tab
346 @item @code{OMP_PLACES} syntax extensions @tab Y @tab
347 @item @code{OMP_NUM_TEAMS} and @code{OMP_TEAMS_THREAD_LIMIT} environment
348 variables @tab Y @tab
351 @unnumberedsubsec Other new OpenMP 5.1 features
353 @multitable @columnfractions .60 .10 .25
354 @headitem Description @tab Status @tab Comments
355 @item Support of strictly structured blocks in Fortran @tab Y @tab
356 @item Support of structured block sequences in C/C++ @tab Y @tab
357 @item @code{unconstrained} and @code{reproducible} modifiers on @code{order}
359 @item Support @code{begin/end declare target} syntax in C/C++ @tab Y @tab
360 @item Pointer predetermined firstprivate getting initialized
361 to address of matching mapped list item per 5.1, Sect. 2.21.7.2 @tab N @tab
362 @item For Fortran, diagnose placing declarative before/between @code{USE},
363 @code{IMPORT}, and @code{IMPLICIT} as invalid @tab N @tab
364 @item Optional comma between directive and clause in the @code{#pragma} form @tab Y @tab
365 @item @code{indirect} clause in @code{declare target} @tab P @tab Only C and C++
366 @item @code{device_type(nohost)}/@code{device_type(host)} for variables @tab N @tab
367 @item @code{present} modifier to the @code{map}, @code{to} and @code{from}
375 @unnumberedsubsec New features listed in Appendix B of the OpenMP specification
377 @multitable @columnfractions .60 .10 .25
378 @headitem Description @tab Status @tab Comments
379 @item @code{omp_in_explicit_task} routine and @var{explicit-task-var} ICV
381 @item @code{omp}/@code{ompx}/@code{omx} sentinels and @code{omp_}/@code{ompx_}
383 @tab warning for @code{ompx/omx} sentinels@footnote{The @code{ompx}
384 sentinel as C/C++ pragma and C++ attributes are warned for with
385 @code{-Wunknown-pragmas} (implied by @code{-Wall}) and @code{-Wattributes}
386 (enabled by default), respectively; for Fortran free-source code, there is
387 a warning enabled by default and, for fixed-source code, the @code{omx}
388 sentinel is warned for with with @code{-Wsurprising} (enabled by
389 @code{-Wall}). Unknown clauses are always rejected with an error.}
390 @item Clauses on @code{end} directive can be on directive @tab Y @tab
391 @item @code{destroy} clause with destroy-var argument on @code{depobj}
393 @item Deprecation of no-argument @code{destroy} clause on @code{depobj}
395 @item @code{linear} clause syntax changes and @code{step} modifier @tab Y @tab
396 @item Deprecation of minus operator for reductions @tab N @tab
397 @item Deprecation of separating @code{map} modifiers without comma @tab N @tab
398 @item @code{declare mapper} with iterator and @code{present} modifiers
400 @item If a matching mapped list item is not found in the data environment, the
401 pointer retains its original value @tab Y @tab
402 @item New @code{enter} clause as alias for @code{to} on declare target directive
404 @item Deprecation of @code{to} clause on declare target directive @tab N @tab
405 @item Extended list of directives permitted in Fortran pure procedures
407 @item New @code{allocators} directive for Fortran @tab Y @tab
408 @item Deprecation of @code{allocate} directive for Fortran
409 allocatables/pointers @tab N @tab
410 @item Optional paired @code{end} directive with @code{dispatch} @tab N @tab
411 @item New @code{memspace} and @code{traits} modifiers for @code{uses_allocators}
413 @item Deprecation of traits array following the allocator_handle expression in
414 @code{uses_allocators} @tab N @tab
415 @item New @code{otherwise} clause as alias for @code{default} on metadirectives
417 @item Deprecation of @code{default} clause on metadirectives @tab N @tab
418 @item Deprecation of delimited form of @code{declare target} @tab N @tab
419 @item Reproducible semantics changed for @code{order(concurrent)} @tab N @tab
420 @item @code{allocate} and @code{firstprivate} clauses on @code{scope}
422 @item @code{ompt_callback_work} @tab N @tab
423 @item Default map-type for the @code{map} clause in @code{target enter/exit data}
425 @item New @code{doacross} clause as alias for @code{depend} with
426 @code{source}/@code{sink} modifier @tab Y @tab
427 @item Deprecation of @code{depend} with @code{source}/@code{sink} modifier
429 @item @code{omp_cur_iteration} keyword @tab Y @tab
432 @unnumberedsubsec Other new OpenMP 5.2 features
434 @multitable @columnfractions .60 .10 .25
435 @headitem Description @tab Status @tab Comments
436 @item For Fortran, optional comma between directive and clause @tab N @tab
437 @item Conforming device numbers and @code{omp_initial_device} and
438 @code{omp_invalid_device} enum/PARAMETER @tab Y @tab
439 @item Initial value of @var{default-device-var} ICV with
440 @code{OMP_TARGET_OFFLOAD=mandatory} @tab Y @tab
441 @item @code{all} as @emph{implicit-behavior} for @code{defaultmap} @tab Y @tab
442 @item @emph{interop_types} in any position of the modifier list for the @code{init} clause
443 of the @code{interop} construct @tab N @tab
444 @item Invoke virtual member functions of C++ objects created on the host device
445 on other devices @tab N @tab
449 @node OpenMP Technical Report 12
450 @section OpenMP Technical Report 12
452 Technical Report (TR) 12 is the second preview for OpenMP 6.0.
454 @unnumberedsubsec New features listed in Appendix B of the OpenMP specification
455 @multitable @columnfractions .60 .10 .25
456 @item Features deprecated in versions 5.2, 5.1 and 5.0 were removed
457 @tab N/A @tab Backward compatibility
458 @item Full support for C23 was added @tab P @tab
459 @item Full support for C++23 was added @tab P @tab
460 @item @code{_ALL} suffix to the device-scope environment variables
461 @tab P @tab Host device number wrongly accepted
462 @item @code{num_threads} now accepts a list @tab N @tab
463 @item Supporting increments with abstract names in @code{OMP_PLACES} @tab N @tab
464 @item Extension of @code{OMP_DEFAULT_DEVICE} and new
465 @code{OMP_AVAILABLE_DEVICES} environment vars @tab N @tab
466 @item New @code{OMP_THREADS_RESERVE} environment variable @tab N @tab
467 @item The @code{decl} attribute was added to the C++ attribute syntax
469 @item The OpenMP directive syntax was extended to include C 23 attribute
470 specifiers @tab Y @tab
471 @item All inarguable clauses take now an optional Boolean argument @tab N @tab
472 @item For Fortran, @emph{locator list} can be also function reference with
473 data pointer result @tab N @tab
474 @item Concept of @emph{assumed-size arrays} in C and C++
476 @item @emph{directive-name-modifier} accepted in all clauses @tab N @tab
477 @item For Fortran, atomic with BLOCK construct and, for C/C++, with
478 unlimited curly braces supported @tab N @tab
479 @item For Fortran, atomic compare with storing the comparison result
481 @item New @code{looprange} clause @tab N @tab
482 @item Ref-count change for @code{use_device_ptr}/@code{use_device_addr}
484 @item Support for inductions @tab N @tab
485 @item Implicit reduction identifiers of C++ classes
487 @item Change of the @emph{map-type} property from @emph{ultimate} to
488 @emph{default} @tab N @tab
489 @item @code{self} modifier to @code{map} and @code{self} as
490 @code{defaultmap} argument @tab N @tab
491 @item Mapping of @emph{assumed-size arrays} in C, C++ and Fortran
493 @item @code{groupprivate} directive @tab N @tab
494 @item @code{local} clause to @code{declare target} directive @tab N @tab
495 @item @code{part_size} allocator trait @tab N @tab
496 @item @code{pin_device}, @code{preferred_device} and @code{target_access}
499 @item @code{access} allocator trait changes @tab N @tab
500 @item Extension of @code{interop} operation of @code{append_args}, allowing all
501 modifiers of the @code{init} clause
503 @item @code{interop} clause to @code{dispatch} @tab N @tab
504 @item @code{message} and @code{severity} clauses to @code{parallel} directive
506 @item @code{self} clause to @code{requires} directive @tab N @tab
507 @item @code{no_openmp_constructs} assumptions clause @tab N @tab
508 @item @code{reverse} loop-transformation construct @tab N @tab
509 @item @code{interchange} loop-transformation construct @tab N @tab
510 @item @code{fuse} loop-transformation construct @tab N @tab
511 @item @code{apply} code to loop-transforming constructs @tab N @tab
512 @item @code{omp_curr_progress_width} identifier @tab N @tab
513 @item @code{safesync} clause to the @code{parallel} construct @tab N @tab
514 @item @code{omp_get_max_progress_width} runtime routine @tab N @tab
515 @item @code{strict} modifier keyword to @code{num_threads} @tab N @tab
516 @item @code{atomic} permitted in a construct with @code{order(concurrent)}
518 @item @code{coexecute} directive for Fortran @tab N @tab
519 @item Fortran DO CONCURRENT as associated loop in a @code{loop} construct
521 @item @code{threadset} clause in task-generating constructs @tab N @tab
522 @item @code{nowait} clause with reverse-offload @code{target} directives
524 @item Boolean argument to @code{nowait} and @code{nogroup} may be non constant
526 @item @code{memscope} clause to @code{atomic} and @code{flush} @tab N @tab
527 @item @code{omp_is_free_agent} and @code{omp_ancestor_is_free_agent} routines
529 @item @code{omp_target_memset} and @code{omp_target_memset_rect_async} routines
531 @item Routines for obtaining memory spaces/allocators for shared/device memory
533 @item @code{omp_get_memspace_num_resources} routine @tab N @tab
534 @item @code{omp_get_submemspace} routine @tab N @tab
535 @item @code{ompt_target_data_transfer} and @code{ompt_target_data_transfer_async}
536 values in @code{ompt_target_data_op_t} enum @tab N @tab
537 @item @code{ompt_get_buffer_limits} OMPT routine @tab N @tab
540 @unnumberedsubsec Other new TR 12 features
541 @multitable @columnfractions .60 .10 .25
542 @item Relaxed Fortran restrictions to the @code{aligned} clause @tab N @tab
543 @item Mapping lambda captures @tab N @tab
544 @item New @code{omp_pause_stop_tool} constant for omp_pause_resource @tab N @tab
549 @c ---------------------------------------------------------------------
550 @c OpenMP Runtime Library Routines
551 @c ---------------------------------------------------------------------
553 @node Runtime Library Routines
554 @chapter OpenMP Runtime Library Routines
556 The runtime routines described here are defined by Section 18 of the OpenMP
557 specification in version 5.2.
560 * Thread Team Routines::
561 * Thread Affinity Routines::
562 * Teams Region Routines::
564 @c * Resource Relinquishing Routines::
565 * Device Information Routines::
566 * Device Memory Routines::
570 @c * Interoperability Routines::
571 * Memory Management Routines::
572 @c * Tool Control Routine::
573 * Environment Display Routine::
578 @node Thread Team Routines
579 @section Thread Team Routines
581 Routines controlling threads in the current contention group.
582 They have C linkage and do not throw exceptions.
585 * omp_set_num_threads:: Set upper team size limit
586 * omp_get_num_threads:: Size of the active team
587 * omp_get_max_threads:: Maximum number of threads of parallel region
588 * omp_get_thread_num:: Current thread ID
589 * omp_in_parallel:: Whether a parallel region is active
590 * omp_set_dynamic:: Enable/disable dynamic teams
591 * omp_get_dynamic:: Dynamic teams setting
592 * omp_get_cancellation:: Whether cancellation support is enabled
593 * omp_set_nested:: Enable/disable nested parallel regions
594 * omp_get_nested:: Nested parallel regions
595 * omp_set_schedule:: Set the runtime scheduling method
596 * omp_get_schedule:: Obtain the runtime scheduling method
597 * omp_get_teams_thread_limit:: Maximum number of threads imposed by teams
598 * omp_get_supported_active_levels:: Maximum number of active regions supported
599 * omp_set_max_active_levels:: Limits the number of active parallel regions
600 * omp_get_max_active_levels:: Current maximum number of active regions
601 * omp_get_level:: Number of parallel regions
602 * omp_get_ancestor_thread_num:: Ancestor thread ID
603 * omp_get_team_size:: Number of threads in a team
604 * omp_get_active_level:: Number of active parallel regions
609 @node omp_set_num_threads
610 @subsection @code{omp_set_num_threads} -- Set upper team size limit
612 @item @emph{Description}:
613 Specifies the number of threads used by default in subsequent parallel
614 sections, if those do not specify a @code{num_threads} clause. The
615 argument of @code{omp_set_num_threads} shall be a positive integer.
618 @multitable @columnfractions .20 .80
619 @item @emph{Prototype}: @tab @code{void omp_set_num_threads(int num_threads);}
622 @item @emph{Fortran}:
623 @multitable @columnfractions .20 .80
624 @item @emph{Interface}: @tab @code{subroutine omp_set_num_threads(num_threads)}
625 @item @tab @code{integer, intent(in) :: num_threads}
628 @item @emph{See also}:
629 @ref{OMP_NUM_THREADS}, @ref{omp_get_num_threads}, @ref{omp_get_max_threads}
631 @item @emph{Reference}:
632 @uref{https://www.openmp.org, OpenMP specification v4.5}, Section 3.2.1.
637 @node omp_get_num_threads
638 @subsection @code{omp_get_num_threads} -- Size of the active team
640 @item @emph{Description}:
641 Returns the number of threads in the current team. In a sequential section of
642 the program @code{omp_get_num_threads} returns 1.
644 The default team size may be initialized at startup by the
645 @env{OMP_NUM_THREADS} environment variable. At runtime, the size
646 of the current team may be set either by the @code{NUM_THREADS}
647 clause or by @code{omp_set_num_threads}. If none of the above were
648 used to define a specific value and @env{OMP_DYNAMIC} is disabled,
649 one thread per CPU online is used.
652 @multitable @columnfractions .20 .80
653 @item @emph{Prototype}: @tab @code{int omp_get_num_threads(void);}
656 @item @emph{Fortran}:
657 @multitable @columnfractions .20 .80
658 @item @emph{Interface}: @tab @code{integer function omp_get_num_threads()}
661 @item @emph{See also}:
662 @ref{omp_get_max_threads}, @ref{omp_set_num_threads}, @ref{OMP_NUM_THREADS}
664 @item @emph{Reference}:
665 @uref{https://www.openmp.org, OpenMP specification v4.5}, Section 3.2.2.
670 @node omp_get_max_threads
671 @subsection @code{omp_get_max_threads} -- Maximum number of threads of parallel region
673 @item @emph{Description}:
674 Return the maximum number of threads used for the current parallel region
675 that does not use the clause @code{num_threads}.
678 @multitable @columnfractions .20 .80
679 @item @emph{Prototype}: @tab @code{int omp_get_max_threads(void);}
682 @item @emph{Fortran}:
683 @multitable @columnfractions .20 .80
684 @item @emph{Interface}: @tab @code{integer function omp_get_max_threads()}
687 @item @emph{See also}:
688 @ref{omp_set_num_threads}, @ref{omp_set_dynamic}, @ref{omp_get_thread_limit}
690 @item @emph{Reference}:
691 @uref{https://www.openmp.org, OpenMP specification v4.5}, Section 3.2.3.
696 @node omp_get_thread_num
697 @subsection @code{omp_get_thread_num} -- Current thread ID
699 @item @emph{Description}:
700 Returns a unique thread identification number within the current team.
701 In a sequential parts of the program, @code{omp_get_thread_num}
702 always returns 0. In parallel regions the return value varies
703 from 0 to @code{omp_get_num_threads}-1 inclusive. The return
704 value of the primary thread of a team is always 0.
707 @multitable @columnfractions .20 .80
708 @item @emph{Prototype}: @tab @code{int omp_get_thread_num(void);}
711 @item @emph{Fortran}:
712 @multitable @columnfractions .20 .80
713 @item @emph{Interface}: @tab @code{integer function omp_get_thread_num()}
716 @item @emph{See also}:
717 @ref{omp_get_num_threads}, @ref{omp_get_ancestor_thread_num}
719 @item @emph{Reference}:
720 @uref{https://www.openmp.org, OpenMP specification v4.5}, Section 3.2.4.
725 @node omp_in_parallel
726 @subsection @code{omp_in_parallel} -- Whether a parallel region is active
728 @item @emph{Description}:
729 This function returns @code{true} if currently running in parallel,
730 @code{false} otherwise. Here, @code{true} and @code{false} represent
731 their language-specific counterparts.
734 @multitable @columnfractions .20 .80
735 @item @emph{Prototype}: @tab @code{int omp_in_parallel(void);}
738 @item @emph{Fortran}:
739 @multitable @columnfractions .20 .80
740 @item @emph{Interface}: @tab @code{logical function omp_in_parallel()}
743 @item @emph{Reference}:
744 @uref{https://www.openmp.org, OpenMP specification v4.5}, Section 3.2.6.
748 @node omp_set_dynamic
749 @subsection @code{omp_set_dynamic} -- Enable/disable dynamic teams
751 @item @emph{Description}:
752 Enable or disable the dynamic adjustment of the number of threads
753 within a team. The function takes the language-specific equivalent
754 of @code{true} and @code{false}, where @code{true} enables dynamic
755 adjustment of team sizes and @code{false} disables it.
758 @multitable @columnfractions .20 .80
759 @item @emph{Prototype}: @tab @code{void omp_set_dynamic(int dynamic_threads);}
762 @item @emph{Fortran}:
763 @multitable @columnfractions .20 .80
764 @item @emph{Interface}: @tab @code{subroutine omp_set_dynamic(dynamic_threads)}
765 @item @tab @code{logical, intent(in) :: dynamic_threads}
768 @item @emph{See also}:
769 @ref{OMP_DYNAMIC}, @ref{omp_get_dynamic}
771 @item @emph{Reference}:
772 @uref{https://www.openmp.org, OpenMP specification v4.5}, Section 3.2.7.
777 @node omp_get_dynamic
778 @subsection @code{omp_get_dynamic} -- Dynamic teams setting
780 @item @emph{Description}:
781 This function returns @code{true} if enabled, @code{false} otherwise.
782 Here, @code{true} and @code{false} represent their language-specific
785 The dynamic team setting may be initialized at startup by the
786 @env{OMP_DYNAMIC} environment variable or at runtime using
787 @code{omp_set_dynamic}. If undefined, dynamic adjustment is
791 @multitable @columnfractions .20 .80
792 @item @emph{Prototype}: @tab @code{int omp_get_dynamic(void);}
795 @item @emph{Fortran}:
796 @multitable @columnfractions .20 .80
797 @item @emph{Interface}: @tab @code{logical function omp_get_dynamic()}
800 @item @emph{See also}:
801 @ref{omp_set_dynamic}, @ref{OMP_DYNAMIC}
803 @item @emph{Reference}:
804 @uref{https://www.openmp.org, OpenMP specification v4.5}, Section 3.2.8.
809 @node omp_get_cancellation
810 @subsection @code{omp_get_cancellation} -- Whether cancellation support is enabled
812 @item @emph{Description}:
813 This function returns @code{true} if cancellation is activated, @code{false}
814 otherwise. Here, @code{true} and @code{false} represent their language-specific
815 counterparts. Unless @env{OMP_CANCELLATION} is set true, cancellations are
819 @multitable @columnfractions .20 .80
820 @item @emph{Prototype}: @tab @code{int omp_get_cancellation(void);}
823 @item @emph{Fortran}:
824 @multitable @columnfractions .20 .80
825 @item @emph{Interface}: @tab @code{logical function omp_get_cancellation()}
828 @item @emph{See also}:
829 @ref{OMP_CANCELLATION}
831 @item @emph{Reference}:
832 @uref{https://www.openmp.org, OpenMP specification v4.5}, Section 3.2.9.
838 @subsection @code{omp_set_nested} -- Enable/disable nested parallel regions
840 @item @emph{Description}:
841 Enable or disable nested parallel regions, i.e., whether team members
842 are allowed to create new teams. The function takes the language-specific
843 equivalent of @code{true} and @code{false}, where @code{true} enables
844 dynamic adjustment of team sizes and @code{false} disables it.
846 Enabling nested parallel regions also sets the maximum number of
847 active nested regions to the maximum supported. Disabling nested parallel
848 regions sets the maximum number of active nested regions to one.
850 Note that the @code{omp_set_nested} API routine was deprecated
851 in the OpenMP specification 5.2 in favor of @code{omp_set_max_active_levels}.
854 @multitable @columnfractions .20 .80
855 @item @emph{Prototype}: @tab @code{void omp_set_nested(int nested);}
858 @item @emph{Fortran}:
859 @multitable @columnfractions .20 .80
860 @item @emph{Interface}: @tab @code{subroutine omp_set_nested(nested)}
861 @item @tab @code{logical, intent(in) :: nested}
864 @item @emph{See also}:
865 @ref{omp_get_nested}, @ref{omp_set_max_active_levels},
866 @ref{OMP_MAX_ACTIVE_LEVELS}, @ref{OMP_NESTED}
868 @item @emph{Reference}:
869 @uref{https://www.openmp.org, OpenMP specification v4.5}, Section 3.2.10.
875 @subsection @code{omp_get_nested} -- Nested parallel regions
877 @item @emph{Description}:
878 This function returns @code{true} if nested parallel regions are
879 enabled, @code{false} otherwise. Here, @code{true} and @code{false}
880 represent their language-specific counterparts.
882 The state of nested parallel regions at startup depends on several
883 environment variables. If @env{OMP_MAX_ACTIVE_LEVELS} is defined
884 and is set to greater than one, then nested parallel regions will be
885 enabled. If not defined, then the value of the @env{OMP_NESTED}
886 environment variable will be followed if defined. If neither are
887 defined, then if either @env{OMP_NUM_THREADS} or @env{OMP_PROC_BIND}
888 are defined with a list of more than one value, then nested parallel
889 regions are enabled. If none of these are defined, then nested parallel
890 regions are disabled by default.
892 Nested parallel regions can be enabled or disabled at runtime using
893 @code{omp_set_nested}, or by setting the maximum number of nested
894 regions with @code{omp_set_max_active_levels} to one to disable, or
897 Note that the @code{omp_get_nested} API routine was deprecated
898 in the OpenMP specification 5.2 in favor of @code{omp_get_max_active_levels}.
901 @multitable @columnfractions .20 .80
902 @item @emph{Prototype}: @tab @code{int omp_get_nested(void);}
905 @item @emph{Fortran}:
906 @multitable @columnfractions .20 .80
907 @item @emph{Interface}: @tab @code{logical function omp_get_nested()}
910 @item @emph{See also}:
911 @ref{omp_get_max_active_levels}, @ref{omp_set_nested},
912 @ref{OMP_MAX_ACTIVE_LEVELS}, @ref{OMP_NESTED}
914 @item @emph{Reference}:
915 @uref{https://www.openmp.org, OpenMP specification v4.5}, Section 3.2.11.
920 @node omp_set_schedule
921 @subsection @code{omp_set_schedule} -- Set the runtime scheduling method
923 @item @emph{Description}:
924 Sets the runtime scheduling method. The @var{kind} argument can have the
925 value @code{omp_sched_static}, @code{omp_sched_dynamic},
926 @code{omp_sched_guided} or @code{omp_sched_auto}. Except for
927 @code{omp_sched_auto}, the chunk size is set to the value of
928 @var{chunk_size} if positive, or to the default value if zero or negative.
929 For @code{omp_sched_auto} the @var{chunk_size} argument is ignored.
932 @multitable @columnfractions .20 .80
933 @item @emph{Prototype}: @tab @code{void omp_set_schedule(omp_sched_t kind, int chunk_size);}
936 @item @emph{Fortran}:
937 @multitable @columnfractions .20 .80
938 @item @emph{Interface}: @tab @code{subroutine omp_set_schedule(kind, chunk_size)}
939 @item @tab @code{integer(kind=omp_sched_kind) kind}
940 @item @tab @code{integer chunk_size}
943 @item @emph{See also}:
944 @ref{omp_get_schedule}
947 @item @emph{Reference}:
948 @uref{https://www.openmp.org, OpenMP specification v4.5}, Section 3.2.12.
953 @node omp_get_schedule
954 @subsection @code{omp_get_schedule} -- Obtain the runtime scheduling method
956 @item @emph{Description}:
957 Obtain the runtime scheduling method. The @var{kind} argument is set to
958 @code{omp_sched_static}, @code{omp_sched_dynamic},
959 @code{omp_sched_guided} or @code{omp_sched_auto}. The second argument,
960 @var{chunk_size}, is set to the chunk size.
963 @multitable @columnfractions .20 .80
964 @item @emph{Prototype}: @tab @code{void omp_get_schedule(omp_sched_t *kind, int *chunk_size);}
967 @item @emph{Fortran}:
968 @multitable @columnfractions .20 .80
969 @item @emph{Interface}: @tab @code{subroutine omp_get_schedule(kind, chunk_size)}
970 @item @tab @code{integer(kind=omp_sched_kind) kind}
971 @item @tab @code{integer chunk_size}
974 @item @emph{See also}:
975 @ref{omp_set_schedule}, @ref{OMP_SCHEDULE}
977 @item @emph{Reference}:
978 @uref{https://www.openmp.org, OpenMP specification v4.5}, Section 3.2.13.
982 @node omp_get_teams_thread_limit
983 @subsection @code{omp_get_teams_thread_limit} -- Maximum number of threads imposed by teams
985 @item @emph{Description}:
986 Return the maximum number of threads that are able to participate in
987 each team created by a teams construct.
990 @multitable @columnfractions .20 .80
991 @item @emph{Prototype}: @tab @code{int omp_get_teams_thread_limit(void);}
994 @item @emph{Fortran}:
995 @multitable @columnfractions .20 .80
996 @item @emph{Interface}: @tab @code{integer function omp_get_teams_thread_limit()}
999 @item @emph{See also}:
1000 @ref{omp_set_teams_thread_limit}, @ref{OMP_TEAMS_THREAD_LIMIT}
1002 @item @emph{Reference}:
1003 @uref{https://www.openmp.org, OpenMP specification v5.1}, Section 3.4.6.
1008 @node omp_get_supported_active_levels
1009 @subsection @code{omp_get_supported_active_levels} -- Maximum number of active regions supported
1011 @item @emph{Description}:
1012 This function returns the maximum number of nested, active parallel regions
1013 supported by this implementation.
1016 @multitable @columnfractions .20 .80
1017 @item @emph{Prototype}: @tab @code{int omp_get_supported_active_levels(void);}
1020 @item @emph{Fortran}:
1021 @multitable @columnfractions .20 .80
1022 @item @emph{Interface}: @tab @code{integer function omp_get_supported_active_levels()}
1025 @item @emph{See also}:
1026 @ref{omp_get_max_active_levels}, @ref{omp_set_max_active_levels}
1028 @item @emph{Reference}:
1029 @uref{https://www.openmp.org, OpenMP specification v5.0}, Section 3.2.15.
1034 @node omp_set_max_active_levels
1035 @subsection @code{omp_set_max_active_levels} -- Limits the number of active parallel regions
1037 @item @emph{Description}:
1038 This function limits the maximum allowed number of nested, active
1039 parallel regions. @var{max_levels} must be less or equal to
1040 the value returned by @code{omp_get_supported_active_levels}.
1043 @multitable @columnfractions .20 .80
1044 @item @emph{Prototype}: @tab @code{void omp_set_max_active_levels(int max_levels);}
1047 @item @emph{Fortran}:
1048 @multitable @columnfractions .20 .80
1049 @item @emph{Interface}: @tab @code{subroutine omp_set_max_active_levels(max_levels)}
1050 @item @tab @code{integer max_levels}
1053 @item @emph{See also}:
1054 @ref{omp_get_max_active_levels}, @ref{omp_get_active_level},
1055 @ref{omp_get_supported_active_levels}
1057 @item @emph{Reference}:
1058 @uref{https://www.openmp.org, OpenMP specification v4.5}, Section 3.2.15.
1063 @node omp_get_max_active_levels
1064 @subsection @code{omp_get_max_active_levels} -- Current maximum number of active regions
1066 @item @emph{Description}:
1067 This function obtains the maximum allowed number of nested, active parallel regions.
1070 @multitable @columnfractions .20 .80
1071 @item @emph{Prototype}: @tab @code{int omp_get_max_active_levels(void);}
1074 @item @emph{Fortran}:
1075 @multitable @columnfractions .20 .80
1076 @item @emph{Interface}: @tab @code{integer function omp_get_max_active_levels()}
1079 @item @emph{See also}:
1080 @ref{omp_set_max_active_levels}, @ref{omp_get_active_level}
1082 @item @emph{Reference}:
1083 @uref{https://www.openmp.org, OpenMP specification v4.5}, Section 3.2.16.
1088 @subsection @code{omp_get_level} -- Obtain the current nesting level
1090 @item @emph{Description}:
1091 This function returns the nesting level for the parallel blocks,
1092 which enclose the calling call.
1095 @multitable @columnfractions .20 .80
1096 @item @emph{Prototype}: @tab @code{int omp_get_level(void);}
1099 @item @emph{Fortran}:
1100 @multitable @columnfractions .20 .80
1101 @item @emph{Interface}: @tab @code{integer function omp_level()}
1104 @item @emph{See also}:
1105 @ref{omp_get_active_level}
1107 @item @emph{Reference}:
1108 @uref{https://www.openmp.org, OpenMP specification v4.5}, Section 3.2.17.
1113 @node omp_get_ancestor_thread_num
1114 @subsection @code{omp_get_ancestor_thread_num} -- Ancestor thread ID
1116 @item @emph{Description}:
1117 This function returns the thread identification number for the given
1118 nesting level of the current thread. For values of @var{level} outside
1119 zero to @code{omp_get_level} -1 is returned; if @var{level} is
1120 @code{omp_get_level} the result is identical to @code{omp_get_thread_num}.
1123 @multitable @columnfractions .20 .80
1124 @item @emph{Prototype}: @tab @code{int omp_get_ancestor_thread_num(int level);}
1127 @item @emph{Fortran}:
1128 @multitable @columnfractions .20 .80
1129 @item @emph{Interface}: @tab @code{integer function omp_get_ancestor_thread_num(level)}
1130 @item @tab @code{integer level}
1133 @item @emph{See also}:
1134 @ref{omp_get_level}, @ref{omp_get_thread_num}, @ref{omp_get_team_size}
1136 @item @emph{Reference}:
1137 @uref{https://www.openmp.org, OpenMP specification v4.5}, Section 3.2.18.
1142 @node omp_get_team_size
1143 @subsection @code{omp_get_team_size} -- Number of threads in a team
1145 @item @emph{Description}:
1146 This function returns the number of threads in a thread team to which
1147 either the current thread or its ancestor belongs. For values of @var{level}
1148 outside zero to @code{omp_get_level}, -1 is returned; if @var{level} is zero,
1149 1 is returned, and for @code{omp_get_level}, the result is identical
1150 to @code{omp_get_num_threads}.
1153 @multitable @columnfractions .20 .80
1154 @item @emph{Prototype}: @tab @code{int omp_get_team_size(int level);}
1157 @item @emph{Fortran}:
1158 @multitable @columnfractions .20 .80
1159 @item @emph{Interface}: @tab @code{integer function omp_get_team_size(level)}
1160 @item @tab @code{integer level}
1163 @item @emph{See also}:
1164 @ref{omp_get_num_threads}, @ref{omp_get_level}, @ref{omp_get_ancestor_thread_num}
1166 @item @emph{Reference}:
1167 @uref{https://www.openmp.org, OpenMP specification v4.5}, Section 3.2.19.
1172 @node omp_get_active_level
1173 @subsection @code{omp_get_active_level} -- Number of parallel regions
1175 @item @emph{Description}:
1176 This function returns the nesting level for the active parallel blocks,
1177 which enclose the calling call.
1180 @multitable @columnfractions .20 .80
1181 @item @emph{Prototype}: @tab @code{int omp_get_active_level(void);}
1184 @item @emph{Fortran}:
1185 @multitable @columnfractions .20 .80
1186 @item @emph{Interface}: @tab @code{integer function omp_get_active_level()}
1189 @item @emph{See also}:
1190 @ref{omp_get_level}, @ref{omp_get_max_active_levels}, @ref{omp_set_max_active_levels}
1192 @item @emph{Reference}:
1193 @uref{https://www.openmp.org, OpenMP specification v4.5}, Section 3.2.20.
1198 @node Thread Affinity Routines
1199 @section Thread Affinity Routines
1201 Routines controlling and accessing thread-affinity policies.
1202 They have C linkage and do not throw exceptions.
1205 * omp_get_proc_bind:: Whether threads may be moved between CPUs
1206 @c * omp_get_num_places:: <fixme>
1207 @c * omp_get_place_num_procs:: <fixme>
1208 @c * omp_get_place_proc_ids:: <fixme>
1209 @c * omp_get_place_num:: <fixme>
1210 @c * omp_get_partition_num_places:: <fixme>
1211 @c * omp_get_partition_place_nums:: <fixme>
1212 @c * omp_set_affinity_format:: <fixme>
1213 @c * omp_get_affinity_format:: <fixme>
1214 @c * omp_display_affinity:: <fixme>
1215 @c * omp_capture_affinity:: <fixme>
1220 @node omp_get_proc_bind
1221 @subsection @code{omp_get_proc_bind} -- Whether threads may be moved between CPUs
1223 @item @emph{Description}:
1224 This functions returns the currently active thread affinity policy, which is
1225 set via @env{OMP_PROC_BIND}. Possible values are @code{omp_proc_bind_false},
1226 @code{omp_proc_bind_true}, @code{omp_proc_bind_primary},
1227 @code{omp_proc_bind_master}, @code{omp_proc_bind_close} and @code{omp_proc_bind_spread},
1228 where @code{omp_proc_bind_master} is an alias for @code{omp_proc_bind_primary}.
1231 @multitable @columnfractions .20 .80
1232 @item @emph{Prototype}: @tab @code{omp_proc_bind_t omp_get_proc_bind(void);}
1235 @item @emph{Fortran}:
1236 @multitable @columnfractions .20 .80
1237 @item @emph{Interface}: @tab @code{integer(kind=omp_proc_bind_kind) function omp_get_proc_bind()}
1240 @item @emph{See also}:
1241 @ref{OMP_PROC_BIND}, @ref{OMP_PLACES}, @ref{GOMP_CPU_AFFINITY},
1243 @item @emph{Reference}:
1244 @uref{https://www.openmp.org, OpenMP specification v4.5}, Section 3.2.22.
1249 @node Teams Region Routines
1250 @section Teams Region Routines
1252 Routines controlling the league of teams that are executed in a @code{teams}
1253 region. They have C linkage and do not throw exceptions.
1256 * omp_get_num_teams:: Number of teams
1257 * omp_get_team_num:: Get team number
1258 * omp_set_num_teams:: Set upper teams limit for teams region
1259 * omp_get_max_teams:: Maximum number of teams for teams region
1260 * omp_set_teams_thread_limit:: Set upper thread limit for teams construct
1261 * omp_get_thread_limit:: Maximum number of threads
1266 @node omp_get_num_teams
1267 @subsection @code{omp_get_num_teams} -- Number of teams
1269 @item @emph{Description}:
1270 Returns the number of teams in the current team region.
1273 @multitable @columnfractions .20 .80
1274 @item @emph{Prototype}: @tab @code{int omp_get_num_teams(void);}
1277 @item @emph{Fortran}:
1278 @multitable @columnfractions .20 .80
1279 @item @emph{Interface}: @tab @code{integer function omp_get_num_teams()}
1282 @item @emph{Reference}:
1283 @uref{https://www.openmp.org, OpenMP specification v4.5}, Section 3.2.32.
1288 @node omp_get_team_num
1289 @subsection @code{omp_get_team_num} -- Get team number
1291 @item @emph{Description}:
1292 Returns the team number of the calling thread.
1295 @multitable @columnfractions .20 .80
1296 @item @emph{Prototype}: @tab @code{int omp_get_team_num(void);}
1299 @item @emph{Fortran}:
1300 @multitable @columnfractions .20 .80
1301 @item @emph{Interface}: @tab @code{integer function omp_get_team_num()}
1304 @item @emph{Reference}:
1305 @uref{https://www.openmp.org, OpenMP specification v4.5}, Section 3.2.33.
1310 @node omp_set_num_teams
1311 @subsection @code{omp_set_num_teams} -- Set upper teams limit for teams construct
1313 @item @emph{Description}:
1314 Specifies the upper bound for number of teams created by the teams construct
1315 which does not specify a @code{num_teams} clause. The
1316 argument of @code{omp_set_num_teams} shall be a positive integer.
1319 @multitable @columnfractions .20 .80
1320 @item @emph{Prototype}: @tab @code{void omp_set_num_teams(int num_teams);}
1323 @item @emph{Fortran}:
1324 @multitable @columnfractions .20 .80
1325 @item @emph{Interface}: @tab @code{subroutine omp_set_num_teams(num_teams)}
1326 @item @tab @code{integer, intent(in) :: num_teams}
1329 @item @emph{See also}:
1330 @ref{OMP_NUM_TEAMS}, @ref{omp_get_num_teams}, @ref{omp_get_max_teams}
1332 @item @emph{Reference}:
1333 @uref{https://www.openmp.org, OpenMP specification v5.1}, Section 3.4.3.
1338 @node omp_get_max_teams
1339 @subsection @code{omp_get_max_teams} -- Maximum number of teams of teams region
1341 @item @emph{Description}:
1342 Return the maximum number of teams used for the teams region
1343 that does not use the clause @code{num_teams}.
1346 @multitable @columnfractions .20 .80
1347 @item @emph{Prototype}: @tab @code{int omp_get_max_teams(void);}
1350 @item @emph{Fortran}:
1351 @multitable @columnfractions .20 .80
1352 @item @emph{Interface}: @tab @code{integer function omp_get_max_teams()}
1355 @item @emph{See also}:
1356 @ref{omp_set_num_teams}, @ref{omp_get_num_teams}
1358 @item @emph{Reference}:
1359 @uref{https://www.openmp.org, OpenMP specification v5.1}, Section 3.4.4.
1364 @node omp_set_teams_thread_limit
1365 @subsection @code{omp_set_teams_thread_limit} -- Set upper thread limit for teams construct
1367 @item @emph{Description}:
1368 Specifies the upper bound for number of threads that are available
1369 for each team created by the teams construct which does not specify a
1370 @code{thread_limit} clause. The argument of
1371 @code{omp_set_teams_thread_limit} shall be a positive integer.
1374 @multitable @columnfractions .20 .80
1375 @item @emph{Prototype}: @tab @code{void omp_set_teams_thread_limit(int thread_limit);}
1378 @item @emph{Fortran}:
1379 @multitable @columnfractions .20 .80
1380 @item @emph{Interface}: @tab @code{subroutine omp_set_teams_thread_limit(thread_limit)}
1381 @item @tab @code{integer, intent(in) :: thread_limit}
1384 @item @emph{See also}:
1385 @ref{OMP_TEAMS_THREAD_LIMIT}, @ref{omp_get_teams_thread_limit}, @ref{omp_get_thread_limit}
1387 @item @emph{Reference}:
1388 @uref{https://www.openmp.org, OpenMP specification v5.1}, Section 3.4.5.
1393 @node omp_get_thread_limit
1394 @subsection @code{omp_get_thread_limit} -- Maximum number of threads
1396 @item @emph{Description}:
1397 Return the maximum number of threads of the program.
1400 @multitable @columnfractions .20 .80
1401 @item @emph{Prototype}: @tab @code{int omp_get_thread_limit(void);}
1404 @item @emph{Fortran}:
1405 @multitable @columnfractions .20 .80
1406 @item @emph{Interface}: @tab @code{integer function omp_get_thread_limit()}
1409 @item @emph{See also}:
1410 @ref{omp_get_max_threads}, @ref{OMP_THREAD_LIMIT}
1412 @item @emph{Reference}:
1413 @uref{https://www.openmp.org, OpenMP specification v4.5}, Section 3.2.14.
1418 @node Tasking Routines
1419 @section Tasking Routines
1421 Routines relating to explicit tasks.
1422 They have C linkage and do not throw exceptions.
1425 * omp_get_max_task_priority:: Maximum task priority value that can be set
1426 * omp_in_explicit_task:: Whether a given task is an explicit task
1427 * omp_in_final:: Whether in final or included task region
1428 @c * omp_is_free_agent:: <fixme>/TR12
1429 @c * omp_ancestor_is_free_agent:: <fixme>/TR12
1434 @node omp_get_max_task_priority
1435 @subsection @code{omp_get_max_task_priority} -- Maximum priority value
1436 that can be set for tasks.
1438 @item @emph{Description}:
1439 This function obtains the maximum allowed priority number for tasks.
1442 @multitable @columnfractions .20 .80
1443 @item @emph{Prototype}: @tab @code{int omp_get_max_task_priority(void);}
1446 @item @emph{Fortran}:
1447 @multitable @columnfractions .20 .80
1448 @item @emph{Interface}: @tab @code{integer function omp_get_max_task_priority()}
1451 @item @emph{Reference}:
1452 @uref{https://www.openmp.org, OpenMP specification v4.5}, Section 3.2.29.
1457 @node omp_in_explicit_task
1458 @subsection @code{omp_in_explicit_task} -- Whether a given task is an explicit task
1460 @item @emph{Description}:
1461 The function returns the @var{explicit-task-var} ICV; it returns true when the
1462 encountering task was generated by a task-generating construct such as
1463 @code{target}, @code{task} or @code{taskloop}. Otherwise, the encountering task
1464 is in an implicit task region such as generated by the implicit or explicit
1465 @code{parallel} region and @code{omp_in_explicit_task} returns false.
1468 @multitable @columnfractions .20 .80
1469 @item @emph{Prototype}: @tab @code{int omp_in_explicit_task(void);}
1472 @item @emph{Fortran}:
1473 @multitable @columnfractions .20 .80
1474 @item @emph{Interface}: @tab @code{logical function omp_in_explicit_task()}
1477 @item @emph{Reference}:
1478 @uref{https://www.openmp.org, OpenMP specification v5.2}, Section 18.5.2.
1484 @subsection @code{omp_in_final} -- Whether in final or included task region
1486 @item @emph{Description}:
1487 This function returns @code{true} if currently running in a final
1488 or included task region, @code{false} otherwise. Here, @code{true}
1489 and @code{false} represent their language-specific counterparts.
1492 @multitable @columnfractions .20 .80
1493 @item @emph{Prototype}: @tab @code{int omp_in_final(void);}
1496 @item @emph{Fortran}:
1497 @multitable @columnfractions .20 .80
1498 @item @emph{Interface}: @tab @code{logical function omp_in_final()}
1501 @item @emph{Reference}:
1502 @uref{https://www.openmp.org, OpenMP specification v4.5}, Section 3.2.21.
1507 @c @node Resource Relinquishing Routines
1508 @c @section Resource Relinquishing Routines
1510 @c Routines releasing resources used by the OpenMP runtime.
1511 @c They have C linkage and do not throw exceptions.
1514 @c * omp_pause_resource:: <fixme>
1515 @c * omp_pause_resource_all:: <fixme>
1518 @node Device Information Routines
1519 @section Device Information Routines
1521 Routines related to devices available to an OpenMP program.
1522 They have C linkage and do not throw exceptions.
1525 * omp_get_num_procs:: Number of processors online
1526 @c * omp_get_max_progress_width:: <fixme>/TR11
1527 * omp_set_default_device:: Set the default device for target regions
1528 * omp_get_default_device:: Get the default device for target regions
1529 * omp_get_num_devices:: Number of target devices
1530 * omp_get_device_num:: Get device that current thread is running on
1531 * omp_is_initial_device:: Whether executing on the host device
1532 * omp_get_initial_device:: Device number of host device
1537 @node omp_get_num_procs
1538 @subsection @code{omp_get_num_procs} -- Number of processors online
1540 @item @emph{Description}:
1541 Returns the number of processors online on that device.
1544 @multitable @columnfractions .20 .80
1545 @item @emph{Prototype}: @tab @code{int omp_get_num_procs(void);}
1548 @item @emph{Fortran}:
1549 @multitable @columnfractions .20 .80
1550 @item @emph{Interface}: @tab @code{integer function omp_get_num_procs()}
1553 @item @emph{Reference}:
1554 @uref{https://www.openmp.org, OpenMP specification v4.5}, Section 3.2.5.
1559 @node omp_set_default_device
1560 @subsection @code{omp_set_default_device} -- Set the default device for target regions
1562 @item @emph{Description}:
1563 Set the default device for target regions without device clause. The argument
1564 shall be a nonnegative device number.
1567 @multitable @columnfractions .20 .80
1568 @item @emph{Prototype}: @tab @code{void omp_set_default_device(int device_num);}
1571 @item @emph{Fortran}:
1572 @multitable @columnfractions .20 .80
1573 @item @emph{Interface}: @tab @code{subroutine omp_set_default_device(device_num)}
1574 @item @tab @code{integer device_num}
1577 @item @emph{See also}:
1578 @ref{OMP_DEFAULT_DEVICE}, @ref{omp_get_default_device}
1580 @item @emph{Reference}:
1581 @uref{https://www.openmp.org, OpenMP specification v4.5}, Section 3.2.29.
1586 @node omp_get_default_device
1587 @subsection @code{omp_get_default_device} -- Get the default device for target regions
1589 @item @emph{Description}:
1590 Get the default device for target regions without device clause.
1593 @multitable @columnfractions .20 .80
1594 @item @emph{Prototype}: @tab @code{int omp_get_default_device(void);}
1597 @item @emph{Fortran}:
1598 @multitable @columnfractions .20 .80
1599 @item @emph{Interface}: @tab @code{integer function omp_get_default_device()}
1602 @item @emph{See also}:
1603 @ref{OMP_DEFAULT_DEVICE}, @ref{omp_set_default_device}
1605 @item @emph{Reference}:
1606 @uref{https://www.openmp.org, OpenMP specification v4.5}, Section 3.2.30.
1611 @node omp_get_num_devices
1612 @subsection @code{omp_get_num_devices} -- Number of target devices
1614 @item @emph{Description}:
1615 Returns the number of target devices.
1618 @multitable @columnfractions .20 .80
1619 @item @emph{Prototype}: @tab @code{int omp_get_num_devices(void);}
1622 @item @emph{Fortran}:
1623 @multitable @columnfractions .20 .80
1624 @item @emph{Interface}: @tab @code{integer function omp_get_num_devices()}
1627 @item @emph{Reference}:
1628 @uref{https://www.openmp.org, OpenMP specification v4.5}, Section 3.2.31.
1633 @node omp_get_device_num
1634 @subsection @code{omp_get_device_num} -- Return device number of current device
1636 @item @emph{Description}:
1637 This function returns a device number that represents the device that the
1638 current thread is executing on. For OpenMP 5.0, this must be equal to the
1639 value returned by the @code{omp_get_initial_device} function when called
1643 @multitable @columnfractions .20 .80
1644 @item @emph{Prototype}: @tab @code{int omp_get_device_num(void);}
1647 @item @emph{Fortran}:
1648 @multitable @columnfractions .20 .80
1649 @item @emph{Interface}: @tab @code{integer function omp_get_device_num()}
1652 @item @emph{See also}:
1653 @ref{omp_get_initial_device}
1655 @item @emph{Reference}:
1656 @uref{https://www.openmp.org, OpenMP specification v5.0}, Section 3.2.37.
1661 @node omp_is_initial_device
1662 @subsection @code{omp_is_initial_device} -- Whether executing on the host device
1664 @item @emph{Description}:
1665 This function returns @code{true} if currently running on the host device,
1666 @code{false} otherwise. Here, @code{true} and @code{false} represent
1667 their language-specific counterparts.
1670 @multitable @columnfractions .20 .80
1671 @item @emph{Prototype}: @tab @code{int omp_is_initial_device(void);}
1674 @item @emph{Fortran}:
1675 @multitable @columnfractions .20 .80
1676 @item @emph{Interface}: @tab @code{logical function omp_is_initial_device()}
1679 @item @emph{Reference}:
1680 @uref{https://www.openmp.org, OpenMP specification v4.5}, Section 3.2.34.
1685 @node omp_get_initial_device
1686 @subsection @code{omp_get_initial_device} -- Return device number of initial device
1688 @item @emph{Description}:
1689 This function returns a device number that represents the host device.
1690 For OpenMP 5.1, this must be equal to the value returned by the
1691 @code{omp_get_num_devices} function.
1694 @multitable @columnfractions .20 .80
1695 @item @emph{Prototype}: @tab @code{int omp_get_initial_device(void);}
1698 @item @emph{Fortran}:
1699 @multitable @columnfractions .20 .80
1700 @item @emph{Interface}: @tab @code{integer function omp_get_initial_device()}
1703 @item @emph{See also}:
1704 @ref{omp_get_num_devices}
1706 @item @emph{Reference}:
1707 @uref{https://www.openmp.org, OpenMP specification v4.5}, Section 3.2.35.
1712 @node Device Memory Routines
1713 @section Device Memory Routines
1715 Routines related to memory allocation and managing corresponding
1716 pointers on devices. They have C linkage and do not throw exceptions.
1719 * omp_target_alloc:: Allocate device memory
1720 * omp_target_free:: Free device memory
1721 * omp_target_is_present:: Check whether storage is mapped
1722 * omp_target_is_accessible:: Check whether memory is device accessible
1723 @c * omp_target_memcpy:: <fixme>
1724 @c * omp_target_memcpy_rect:: <fixme>
1725 @c * omp_target_memcpy_async:: <fixme>
1726 @c * omp_target_memcpy_rect_async:: <fixme>
1727 @c * omp_target_memset:: <fixme>/TR12
1728 @c * omp_target_memset_async:: <fixme>/TR12
1729 * omp_target_associate_ptr:: Associate a device pointer with a host pointer
1730 * omp_target_disassociate_ptr:: Remove device--host pointer association
1731 * omp_get_mapped_ptr:: Return device pointer to a host pointer
1736 @node omp_target_alloc
1737 @subsection @code{omp_target_alloc} -- Allocate device memory
1739 @item @emph{Description}:
1740 This routine allocates @var{size} bytes of memory in the device environment
1741 associated with the device number @var{device_num}. If successful, a device
1742 pointer is returned, otherwise a null pointer.
1744 In GCC, when the device is the host or the device shares memory with the host,
1745 the memory is allocated on the host; in that case, when @var{size} is zero,
1746 either NULL or a unique pointer value that can later be successfully passed to
1747 @code{omp_target_free} is returned. When the allocation is not performed on
1748 the host, a null pointer is returned when @var{size} is zero; in that case,
1749 additionally a diagnostic might be printed to standard error (stderr).
1751 Running this routine in a @code{target} region except on the initial device
1755 @multitable @columnfractions .20 .80
1756 @item @emph{Prototype}: @tab @code{void *omp_target_alloc(size_t size, int device_num)}
1759 @item @emph{Fortran}:
1760 @multitable @columnfractions .20 .80
1761 @item @emph{Interface}: @tab @code{type(c_ptr) function omp_target_alloc(size, device_num) bind(C)}
1762 @item @tab @code{use, intrinsic :: iso_c_binding, only: c_ptr, c_int, c_size_t}
1763 @item @tab @code{integer(c_size_t), value :: size}
1764 @item @tab @code{integer(c_int), value :: device_num}
1767 @item @emph{See also}:
1768 @ref{omp_target_free}, @ref{omp_target_associate_ptr}
1770 @item @emph{Reference}:
1771 @uref{https://www.openmp.org, OpenMP specification v5.1}, Section 3.8.1
1776 @node omp_target_free
1777 @subsection @code{omp_target_free} -- Free device memory
1779 @item @emph{Description}:
1780 This routine frees memory allocated by the @code{omp_target_alloc} routine.
1781 The @var{device_ptr} argument must be either a null pointer or a device pointer
1782 returned by @code{omp_target_alloc} for the specified @code{device_num}. The
1783 device number @var{device_num} must be a conforming device number.
1785 Running this routine in a @code{target} region except on the initial device
1789 @multitable @columnfractions .20 .80
1790 @item @emph{Prototype}: @tab @code{void omp_target_free(void *device_ptr, int device_num)}
1793 @item @emph{Fortran}:
1794 @multitable @columnfractions .20 .80
1795 @item @emph{Interface}: @tab @code{subroutine omp_target_free(device_ptr, device_num) bind(C)}
1796 @item @tab @code{use, intrinsic :: iso_c_binding, only: c_ptr, c_int}
1797 @item @tab @code{type(c_ptr), value :: device_ptr}
1798 @item @tab @code{integer(c_int), value :: device_num}
1801 @item @emph{See also}:
1802 @ref{omp_target_alloc}, @ref{omp_target_disassociate_ptr}
1804 @item @emph{Reference}:
1805 @uref{https://www.openmp.org, OpenMP specification v5.1}, Section 3.8.2
1810 @node omp_target_is_present
1811 @subsection @code{omp_target_is_present} -- Check whether storage is mapped
1813 @item @emph{Description}:
1814 This routine tests whether storage, identified by the host pointer @var{ptr}
1815 is mapped to the device specified by @var{device_num}. If so, it returns
1816 a nonzero value and otherwise zero.
1818 In GCC, this includes self mapping such that @code{omp_target_is_present}
1819 returns @emph{true} when @var{device_num} specifies the host or when the host
1820 and the device share memory. If @var{ptr} is a null pointer, @var{true} is
1821 returned and if @var{device_num} is an invalid device number, @var{false} is
1824 If those conditions do not apply, @emph{true} is returned if the association has
1825 been established by an explicit or implicit @code{map} clause, the
1826 @code{declare target} directive or a call to the @code{omp_target_associate_ptr}
1829 Running this routine in a @code{target} region except on the initial device
1833 @multitable @columnfractions .20 .80
1834 @item @emph{Prototype}: @tab @code{int omp_target_is_present(const void *ptr,}
1835 @item @tab @code{ int device_num)}
1838 @item @emph{Fortran}:
1839 @multitable @columnfractions .20 .80
1840 @item @emph{Interface}: @tab @code{integer(c_int) function omp_target_is_present(ptr, &}
1841 @item @tab @code{ device_num) bind(C)}
1842 @item @tab @code{use, intrinsic :: iso_c_binding, only: c_ptr, c_int}
1843 @item @tab @code{type(c_ptr), value :: ptr}
1844 @item @tab @code{integer(c_int), value :: device_num}
1847 @item @emph{See also}:
1848 @ref{omp_target_associate_ptr}
1850 @item @emph{Reference}:
1851 @uref{https://www.openmp.org, OpenMP specification v5.1}, Section 3.8.3
1856 @node omp_target_is_accessible
1857 @subsection @code{omp_target_is_accessible} -- Check whether memory is device accessible
1859 @item @emph{Description}:
1860 This routine tests whether memory, starting at the address given by @var{ptr}
1861 and extending @var{size} bytes, is accessibly on the device specified by
1862 @var{device_num}. If so, it returns a nonzero value and otherwise zero.
1864 The address given by @var{ptr} is interpreted to be in the address space of
1865 the device and @var{size} must be positive.
1867 Note that GCC's current implementation assumes that @var{ptr} is a valid host
1868 pointer. Therefore, all addresses given by @var{ptr} are assumed to be
1869 accessible on the initial device. And, to err on the safe side, this memory
1870 is only available on a non-host device that can access all host memory
1871 ([uniform] shared memory access).
1873 Running this routine in a @code{target} region except on the initial device
1877 @multitable @columnfractions .20 .80
1878 @item @emph{Prototype}: @tab @code{int omp_target_is_accessible(const void *ptr,}
1879 @item @tab @code{ size_t size,}
1880 @item @tab @code{ int device_num)}
1883 @item @emph{Fortran}:
1884 @multitable @columnfractions .20 .80
1885 @item @emph{Interface}: @tab @code{integer(c_int) function omp_target_is_accessible(ptr, &}
1886 @item @tab @code{ size, device_num) bind(C)}
1887 @item @tab @code{use, intrinsic :: iso_c_binding, only: c_ptr, c_size_t, c_int}
1888 @item @tab @code{type(c_ptr), value :: ptr}
1889 @item @tab @code{integer(c_size_t), value :: size}
1890 @item @tab @code{integer(c_int), value :: device_num}
1893 @item @emph{See also}:
1894 @ref{omp_target_associate_ptr}
1896 @item @emph{Reference}:
1897 @uref{https://www.openmp.org, OpenMP specification v5.1}, Section 3.8.4
1902 @node omp_target_associate_ptr
1903 @subsection @code{omp_target_associate_ptr} -- Associate a device pointer with a host pointer
1905 @item @emph{Description}:
1906 This routine associates storage on the host with storage on a device identified
1907 by @var{device_num}. The device pointer is usually obtained by calling
1908 @code{omp_target_alloc} or by other means (but not by using the @code{map}
1909 clauses or the @code{declare target} directive). The host pointer should point
1910 to memory that has a storage size of at least @var{size}.
1912 The @var{device_offset} parameter specifies the offset into @var{device_ptr}
1913 that is used as the base address for the device side of the mapping; the
1914 storage size should be at least @var{device_offset} plus @var{size}.
1916 After the association, the host pointer can be used in a @code{map} clause and
1917 in the @code{to} and @code{from} clauses of the @code{target update} directive
1918 to transfer data between the associated pointers. The reference count of such
1919 associated storage is infinite. The association can be removed by calling
1920 @code{omp_target_disassociate_ptr} which should be done before the lifetime
1921 of either either storage ends.
1923 The routine returns nonzero (@code{EINVAL}) when the @var{device_num} invalid,
1924 for when the initial device or the associated device shares memory with the
1925 host. @code{omp_target_associate_ptr} returns zero if @var{host_ptr} points
1926 into already associated storage that is fully inside of a previously associated
1927 memory. Otherwise, if the association was successful zero is returned; if none
1928 of the cases above apply, nonzero (@code{EINVAL}) is returned.
1930 The @code{omp_target_is_present} routine can be used to test whether
1931 associated storage for a device pointer exists.
1933 Running this routine in a @code{target} region except on the initial device
1937 @multitable @columnfractions .20 .80
1938 @item @emph{Prototype}: @tab @code{int omp_target_associate_ptr(const void *host_ptr,}
1939 @item @tab @code{ const void *device_ptr,}
1940 @item @tab @code{ size_t size,}
1941 @item @tab @code{ size_t device_offset,}
1942 @item @tab @code{ int device_num)}
1945 @item @emph{Fortran}:
1946 @multitable @columnfractions .20 .80
1947 @item @emph{Interface}: @tab @code{integer(c_int) function omp_target_associate_ptr(host_ptr, &}
1948 @item @tab @code{ device_ptr, size, device_offset, device_num) bind(C)}
1949 @item @tab @code{use, intrinsic :: iso_c_binding, only: c_ptr, c_int, c_size_t}
1950 @item @tab @code{type(c_ptr), value :: host_ptr, device_ptr}
1951 @item @tab @code{integer(c_size_t), value :: size, device_offset}
1952 @item @tab @code{integer(c_int), value :: device_num}
1955 @item @emph{See also}:
1956 @ref{omp_target_disassociate_ptr}, @ref{omp_target_is_present},
1957 @ref{omp_target_alloc}
1959 @item @emph{Reference}:
1960 @uref{https://www.openmp.org, OpenMP specification v5.1}, Section 3.8.9
1965 @node omp_target_disassociate_ptr
1966 @subsection @code{omp_target_disassociate_ptr} -- Remove device--host pointer association
1968 @item @emph{Description}:
1969 This routine removes the storage association established by calling
1970 @code{omp_target_associate_ptr} and sets the reference count to zero,
1971 even if @code{omp_target_associate_ptr} was invoked multiple times for
1972 for host pointer @code{ptr}. If applicable, the device memory needs
1973 to be freed by the user.
1975 If an associated device storage location for the @var{device_num} was
1976 found and has infinite reference count, the association is removed and
1977 zero is returned. In all other cases, nonzero (@code{EINVAL}) is returned
1978 and no other action is taken.
1980 Note that passing a host pointer where the association to the device pointer
1981 was established with the @code{declare target} directive yields undefined
1984 Running this routine in a @code{target} region except on the initial device
1988 @multitable @columnfractions .20 .80
1989 @item @emph{Prototype}: @tab @code{int omp_target_disassociate_ptr(const void *ptr,}
1990 @item @tab @code{ int device_num)}
1993 @item @emph{Fortran}:
1994 @multitable @columnfractions .20 .80
1995 @item @emph{Interface}: @tab @code{integer(c_int) function omp_target_disassociate_ptr(ptr, &}
1996 @item @tab @code{ device_num) bind(C)}
1997 @item @tab @code{use, intrinsic :: iso_c_binding, only: c_ptr, c_int}
1998 @item @tab @code{type(c_ptr), value :: ptr}
1999 @item @tab @code{integer(c_int), value :: device_num}
2002 @item @emph{See also}:
2003 @ref{omp_target_associate_ptr}
2005 @item @emph{Reference}:
2006 @uref{https://www.openmp.org, OpenMP specification v5.1}, Section 3.8.10
2011 @node omp_get_mapped_ptr
2012 @subsection @code{omp_get_mapped_ptr} -- Return device pointer to a host pointer
2014 @item @emph{Description}:
2015 If the device number is refers to the initial device or to a device with
2016 memory accessible from the host (shared memory), the @code{omp_get_mapped_ptr}
2017 routines returns the value of the passed @var{ptr}. Otherwise, if associated
2018 storage to the passed host pointer @var{ptr} exists on device associated with
2019 @var{device_num}, it returns that pointer. In all other cases and in cases of
2020 an error, a null pointer is returned.
2022 The association of storage location is established either via an explicit or
2023 implicit @code{map} clause, the @code{declare target} directive or the
2024 @code{omp_target_associate_ptr} routine.
2026 Running this routine in a @code{target} region except on the initial device
2030 @multitable @columnfractions .20 .80
2031 @item @emph{Prototype}: @tab @code{void *omp_get_mapped_ptr(const void *ptr, int device_num);}
2034 @item @emph{Fortran}:
2035 @multitable @columnfractions .20 .80
2036 @item @emph{Interface}: @tab @code{type(c_ptr) function omp_get_mapped_ptr(ptr, device_num) bind(C)}
2037 @item @tab @code{use, intrinsic :: iso_c_binding, only: c_ptr, c_int}
2038 @item @tab @code{type(c_ptr), value :: ptr}
2039 @item @tab @code{integer(c_int), value :: device_num}
2042 @item @emph{See also}:
2043 @ref{omp_target_associate_ptr}
2045 @item @emph{Reference}:
2046 @uref{https://www.openmp.org, OpenMP specification v5.1}, Section 3.8.11
2052 @section Lock Routines
2054 Initialize, set, test, unset and destroy simple and nested locks.
2055 The routines have C linkage and do not throw exceptions.
2058 * omp_init_lock:: Initialize simple lock
2059 * omp_init_nest_lock:: Initialize nested lock
2060 @c * omp_init_lock_with_hint:: <fixme>
2061 @c * omp_init_nest_lock_with_hint:: <fixme>
2062 * omp_destroy_lock:: Destroy simple lock
2063 * omp_destroy_nest_lock:: Destroy nested lock
2064 * omp_set_lock:: Wait for and set simple lock
2065 * omp_set_nest_lock:: Wait for and set simple lock
2066 * omp_unset_lock:: Unset simple lock
2067 * omp_unset_nest_lock:: Unset nested lock
2068 * omp_test_lock:: Test and set simple lock if available
2069 * omp_test_nest_lock:: Test and set nested lock if available
2075 @subsection @code{omp_init_lock} -- Initialize simple lock
2077 @item @emph{Description}:
2078 Initialize a simple lock. After initialization, the lock is in
2082 @multitable @columnfractions .20 .80
2083 @item @emph{Prototype}: @tab @code{void omp_init_lock(omp_lock_t *lock);}
2086 @item @emph{Fortran}:
2087 @multitable @columnfractions .20 .80
2088 @item @emph{Interface}: @tab @code{subroutine omp_init_lock(svar)}
2089 @item @tab @code{integer(omp_lock_kind), intent(out) :: svar}
2092 @item @emph{See also}:
2093 @ref{omp_destroy_lock}
2095 @item @emph{Reference}:
2096 @uref{https://www.openmp.org, OpenMP specification v4.5}, Section 3.3.1.
2101 @node omp_init_nest_lock
2102 @subsection @code{omp_init_nest_lock} -- Initialize nested lock
2104 @item @emph{Description}:
2105 Initialize a nested lock. After initialization, the lock is in
2106 an unlocked state and the nesting count is set to zero.
2109 @multitable @columnfractions .20 .80
2110 @item @emph{Prototype}: @tab @code{void omp_init_nest_lock(omp_nest_lock_t *lock);}
2113 @item @emph{Fortran}:
2114 @multitable @columnfractions .20 .80
2115 @item @emph{Interface}: @tab @code{subroutine omp_init_nest_lock(nvar)}
2116 @item @tab @code{integer(omp_nest_lock_kind), intent(out) :: nvar}
2119 @item @emph{See also}:
2120 @ref{omp_destroy_nest_lock}
2122 @item @emph{Reference}:
2123 @uref{https://www.openmp.org, OpenMP specification v4.5}, Section 3.3.1.
2128 @node omp_destroy_lock
2129 @subsection @code{omp_destroy_lock} -- Destroy simple lock
2131 @item @emph{Description}:
2132 Destroy a simple lock. In order to be destroyed, a simple lock must be
2133 in the unlocked state.
2136 @multitable @columnfractions .20 .80
2137 @item @emph{Prototype}: @tab @code{void omp_destroy_lock(omp_lock_t *lock);}
2140 @item @emph{Fortran}:
2141 @multitable @columnfractions .20 .80
2142 @item @emph{Interface}: @tab @code{subroutine omp_destroy_lock(svar)}
2143 @item @tab @code{integer(omp_lock_kind), intent(inout) :: svar}
2146 @item @emph{See also}:
2149 @item @emph{Reference}:
2150 @uref{https://www.openmp.org, OpenMP specification v4.5}, Section 3.3.3.
2155 @node omp_destroy_nest_lock
2156 @subsection @code{omp_destroy_nest_lock} -- Destroy nested lock
2158 @item @emph{Description}:
2159 Destroy a nested lock. In order to be destroyed, a nested lock must be
2160 in the unlocked state and its nesting count must equal zero.
2163 @multitable @columnfractions .20 .80
2164 @item @emph{Prototype}: @tab @code{void omp_destroy_nest_lock(omp_nest_lock_t *);}
2167 @item @emph{Fortran}:
2168 @multitable @columnfractions .20 .80
2169 @item @emph{Interface}: @tab @code{subroutine omp_destroy_nest_lock(nvar)}
2170 @item @tab @code{integer(omp_nest_lock_kind), intent(inout) :: nvar}
2173 @item @emph{See also}:
2176 @item @emph{Reference}:
2177 @uref{https://www.openmp.org, OpenMP specification v4.5}, Section 3.3.3.
2183 @subsection @code{omp_set_lock} -- Wait for and set simple lock
2185 @item @emph{Description}:
2186 Before setting a simple lock, the lock variable must be initialized by
2187 @code{omp_init_lock}. The calling thread is blocked until the lock
2188 is available. If the lock is already held by the current thread,
2192 @multitable @columnfractions .20 .80
2193 @item @emph{Prototype}: @tab @code{void omp_set_lock(omp_lock_t *lock);}
2196 @item @emph{Fortran}:
2197 @multitable @columnfractions .20 .80
2198 @item @emph{Interface}: @tab @code{subroutine omp_set_lock(svar)}
2199 @item @tab @code{integer(omp_lock_kind), intent(inout) :: svar}
2202 @item @emph{See also}:
2203 @ref{omp_init_lock}, @ref{omp_test_lock}, @ref{omp_unset_lock}
2205 @item @emph{Reference}:
2206 @uref{https://www.openmp.org, OpenMP specification v4.5}, Section 3.3.4.
2211 @node omp_set_nest_lock
2212 @subsection @code{omp_set_nest_lock} -- Wait for and set nested lock
2214 @item @emph{Description}:
2215 Before setting a nested lock, the lock variable must be initialized by
2216 @code{omp_init_nest_lock}. The calling thread is blocked until the lock
2217 is available. If the lock is already held by the current thread, the
2218 nesting count for the lock is incremented.
2221 @multitable @columnfractions .20 .80
2222 @item @emph{Prototype}: @tab @code{void omp_set_nest_lock(omp_nest_lock_t *lock);}
2225 @item @emph{Fortran}:
2226 @multitable @columnfractions .20 .80
2227 @item @emph{Interface}: @tab @code{subroutine omp_set_nest_lock(nvar)}
2228 @item @tab @code{integer(omp_nest_lock_kind), intent(inout) :: nvar}
2231 @item @emph{See also}:
2232 @ref{omp_init_nest_lock}, @ref{omp_unset_nest_lock}
2234 @item @emph{Reference}:
2235 @uref{https://www.openmp.org, OpenMP specification v4.5}, Section 3.3.4.
2240 @node omp_unset_lock
2241 @subsection @code{omp_unset_lock} -- Unset simple lock
2243 @item @emph{Description}:
2244 A simple lock about to be unset must have been locked by @code{omp_set_lock}
2245 or @code{omp_test_lock} before. In addition, the lock must be held by the
2246 thread calling @code{omp_unset_lock}. Then, the lock becomes unlocked. If one
2247 or more threads attempted to set the lock before, one of them is chosen to,
2248 again, set the lock to itself.
2251 @multitable @columnfractions .20 .80
2252 @item @emph{Prototype}: @tab @code{void omp_unset_lock(omp_lock_t *lock);}
2255 @item @emph{Fortran}:
2256 @multitable @columnfractions .20 .80
2257 @item @emph{Interface}: @tab @code{subroutine omp_unset_lock(svar)}
2258 @item @tab @code{integer(omp_lock_kind), intent(inout) :: svar}
2261 @item @emph{See also}:
2262 @ref{omp_set_lock}, @ref{omp_test_lock}
2264 @item @emph{Reference}:
2265 @uref{https://www.openmp.org, OpenMP specification v4.5}, Section 3.3.5.
2270 @node omp_unset_nest_lock
2271 @subsection @code{omp_unset_nest_lock} -- Unset nested lock
2273 @item @emph{Description}:
2274 A nested lock about to be unset must have been locked by @code{omp_set_nested_lock}
2275 or @code{omp_test_nested_lock} before. In addition, the lock must be held by the
2276 thread calling @code{omp_unset_nested_lock}. If the nesting count drops to zero, the
2277 lock becomes unlocked. If one ore more threads attempted to set the lock before,
2278 one of them is chosen to, again, set the lock to itself.
2281 @multitable @columnfractions .20 .80
2282 @item @emph{Prototype}: @tab @code{void omp_unset_nest_lock(omp_nest_lock_t *lock);}
2285 @item @emph{Fortran}:
2286 @multitable @columnfractions .20 .80
2287 @item @emph{Interface}: @tab @code{subroutine omp_unset_nest_lock(nvar)}
2288 @item @tab @code{integer(omp_nest_lock_kind), intent(inout) :: nvar}
2291 @item @emph{See also}:
2292 @ref{omp_set_nest_lock}
2294 @item @emph{Reference}:
2295 @uref{https://www.openmp.org, OpenMP specification v4.5}, Section 3.3.5.
2301 @subsection @code{omp_test_lock} -- Test and set simple lock if available
2303 @item @emph{Description}:
2304 Before setting a simple lock, the lock variable must be initialized by
2305 @code{omp_init_lock}. Contrary to @code{omp_set_lock}, @code{omp_test_lock}
2306 does not block if the lock is not available. This function returns
2307 @code{true} upon success, @code{false} otherwise. Here, @code{true} and
2308 @code{false} represent their language-specific counterparts.
2311 @multitable @columnfractions .20 .80
2312 @item @emph{Prototype}: @tab @code{int omp_test_lock(omp_lock_t *lock);}
2315 @item @emph{Fortran}:
2316 @multitable @columnfractions .20 .80
2317 @item @emph{Interface}: @tab @code{logical function omp_test_lock(svar)}
2318 @item @tab @code{integer(omp_lock_kind), intent(inout) :: svar}
2321 @item @emph{See also}:
2322 @ref{omp_init_lock}, @ref{omp_set_lock}, @ref{omp_set_lock}
2324 @item @emph{Reference}:
2325 @uref{https://www.openmp.org, OpenMP specification v4.5}, Section 3.3.6.
2330 @node omp_test_nest_lock
2331 @subsection @code{omp_test_nest_lock} -- Test and set nested lock if available
2333 @item @emph{Description}:
2334 Before setting a nested lock, the lock variable must be initialized by
2335 @code{omp_init_nest_lock}. Contrary to @code{omp_set_nest_lock},
2336 @code{omp_test_nest_lock} does not block if the lock is not available.
2337 If the lock is already held by the current thread, the new nesting count
2338 is returned. Otherwise, the return value equals zero.
2341 @multitable @columnfractions .20 .80
2342 @item @emph{Prototype}: @tab @code{int omp_test_nest_lock(omp_nest_lock_t *lock);}
2345 @item @emph{Fortran}:
2346 @multitable @columnfractions .20 .80
2347 @item @emph{Interface}: @tab @code{logical function omp_test_nest_lock(nvar)}
2348 @item @tab @code{integer(omp_nest_lock_kind), intent(inout) :: nvar}
2352 @item @emph{See also}:
2353 @ref{omp_init_lock}, @ref{omp_set_lock}, @ref{omp_set_lock}
2355 @item @emph{Reference}:
2356 @uref{https://www.openmp.org, OpenMP specification v4.5}, Section 3.3.6.
2361 @node Timing Routines
2362 @section Timing Routines
2364 Portable, thread-based, wall clock timer.
2365 The routines have C linkage and do not throw exceptions.
2368 * omp_get_wtick:: Get timer precision.
2369 * omp_get_wtime:: Elapsed wall clock time.
2375 @subsection @code{omp_get_wtick} -- Get timer precision
2377 @item @emph{Description}:
2378 Gets the timer precision, i.e., the number of seconds between two
2379 successive clock ticks.
2382 @multitable @columnfractions .20 .80
2383 @item @emph{Prototype}: @tab @code{double omp_get_wtick(void);}
2386 @item @emph{Fortran}:
2387 @multitable @columnfractions .20 .80
2388 @item @emph{Interface}: @tab @code{double precision function omp_get_wtick()}
2391 @item @emph{See also}:
2394 @item @emph{Reference}:
2395 @uref{https://www.openmp.org, OpenMP specification v4.5}, Section 3.4.2.
2401 @subsection @code{omp_get_wtime} -- Elapsed wall clock time
2403 @item @emph{Description}:
2404 Elapsed wall clock time in seconds. The time is measured per thread, no
2405 guarantee can be made that two distinct threads measure the same time.
2406 Time is measured from some "time in the past", which is an arbitrary time
2407 guaranteed not to change during the execution of the program.
2410 @multitable @columnfractions .20 .80
2411 @item @emph{Prototype}: @tab @code{double omp_get_wtime(void);}
2414 @item @emph{Fortran}:
2415 @multitable @columnfractions .20 .80
2416 @item @emph{Interface}: @tab @code{double precision function omp_get_wtime()}
2419 @item @emph{See also}:
2422 @item @emph{Reference}:
2423 @uref{https://www.openmp.org, OpenMP specification v4.5}, Section 3.4.1.
2429 @section Event Routine
2431 Support for event objects.
2432 The routine has C linkage and do not throw exceptions.
2435 * omp_fulfill_event:: Fulfill and destroy an OpenMP event.
2440 @node omp_fulfill_event
2441 @subsection @code{omp_fulfill_event} -- Fulfill and destroy an OpenMP event
2443 @item @emph{Description}:
2444 Fulfill the event associated with the event handle argument. Currently, it
2445 is only used to fulfill events generated by detach clauses on task
2446 constructs - the effect of fulfilling the event is to allow the task to
2449 The result of calling @code{omp_fulfill_event} with an event handle other
2450 than that generated by a detach clause is undefined. Calling it with an
2451 event handle that has already been fulfilled is also undefined.
2454 @multitable @columnfractions .20 .80
2455 @item @emph{Prototype}: @tab @code{void omp_fulfill_event(omp_event_handle_t event);}
2458 @item @emph{Fortran}:
2459 @multitable @columnfractions .20 .80
2460 @item @emph{Interface}: @tab @code{subroutine omp_fulfill_event(event)}
2461 @item @tab @code{integer (kind=omp_event_handle_kind) :: event}
2464 @item @emph{Reference}:
2465 @uref{https://www.openmp.org, OpenMP specification v5.0}, Section 3.5.1.
2470 @c @node Interoperability Routines
2471 @c @section Interoperability Routines
2473 @c Routines to obtain properties from an @code{omp_interop_t} object.
2474 @c They have C linkage and do not throw exceptions.
2477 @c * omp_get_num_interop_properties:: <fixme>
2478 @c * omp_get_interop_int:: <fixme>
2479 @c * omp_get_interop_ptr:: <fixme>
2480 @c * omp_get_interop_str:: <fixme>
2481 @c * omp_get_interop_name:: <fixme>
2482 @c * omp_get_interop_type_desc:: <fixme>
2483 @c * omp_get_interop_rc_desc:: <fixme>
2486 @node Memory Management Routines
2487 @section Memory Management Routines
2489 Routines to manage and allocate memory on the current device.
2490 They have C linkage and do not throw exceptions.
2493 * omp_init_allocator:: Create an allocator
2494 * omp_destroy_allocator:: Destroy an allocator
2495 * omp_set_default_allocator:: Set the default allocator
2496 * omp_get_default_allocator:: Get the default allocator
2497 * omp_alloc:: Memory allocation with an allocator
2498 * omp_aligned_alloc:: Memory allocation with an allocator and alignment
2499 * omp_free:: Freeing memory allocated with OpenMP routines
2500 * omp_calloc:: Allocate nullified memory with an allocator
2501 * omp_aligned_calloc:: Allocate nullified aligned memory with an allocator
2502 * omp_realloc:: Reallocate memory allocated with OpenMP routines
2503 @c * omp_get_memspace_num_resources:: <fixme>/TR11
2504 @c * omp_get_submemspace:: <fixme>/TR11
2509 @node omp_init_allocator
2510 @subsection @code{omp_init_allocator} -- Create an allocator
2512 @item @emph{Description}:
2513 Create an allocator that uses the specified memory space and has the specified
2514 traits; if an allocator that fulfills the requirements cannot be created,
2515 @code{omp_null_allocator} is returned.
2517 The predefined memory spaces and available traits can be found at
2518 @ref{OMP_ALLOCATOR}, where the trait names have to be be prefixed by
2519 @code{omp_atk_} (e.g. @code{omp_atk_pinned}) and the named trait values by
2520 @code{omp_atv_} (e.g. @code{omp_atv_true}); additionally, @code{omp_atv_default}
2521 may be used as trait value to specify that the default value should be used.
2524 @multitable @columnfractions .20 .80
2525 @item @emph{Prototype}: @tab @code{omp_allocator_handle_t omp_init_allocator(}
2526 @item @tab @code{ omp_memspace_handle_t memspace,}
2527 @item @tab @code{ int ntraits,}
2528 @item @tab @code{ const omp_alloctrait_t traits[]);}
2531 @item @emph{Fortran}:
2532 @multitable @columnfractions .20 .80
2533 @item @emph{Interface}: @tab @code{function omp_init_allocator(memspace, ntraits, traits)}
2534 @item @tab @code{integer (omp_allocator_handle_kind) :: omp_init_allocator}
2535 @item @tab @code{integer (omp_memspace_handle_kind), intent(in) :: memspace}
2536 @item @tab @code{integer, intent(in) :: ntraits}
2537 @item @tab @code{type (omp_alloctrait), intent(in) :: traits(*)}
2540 @item @emph{See also}:
2541 @ref{OMP_ALLOCATOR}, @ref{Memory allocation}, @ref{omp_destroy_allocator}
2543 @item @emph{Reference}:
2544 @uref{https://www.openmp.org, OpenMP specification v5.0}, Section 3.7.2
2549 @node omp_destroy_allocator
2550 @subsection @code{omp_destroy_allocator} -- Destroy an allocator
2552 @item @emph{Description}:
2553 Releases all resources used by a memory allocator, which must not represent
2554 a predefined memory allocator. Accessing memory after its allocator has been
2555 destroyed has unspecified behavior. Passing @code{omp_null_allocator} to the
2556 routine is permitted but has no effect.
2560 @multitable @columnfractions .20 .80
2561 @item @emph{Prototype}: @tab @code{void omp_destroy_allocator (omp_allocator_handle_t allocator);}
2564 @item @emph{Fortran}:
2565 @multitable @columnfractions .20 .80
2566 @item @emph{Interface}: @tab @code{subroutine omp_destroy_allocator(allocator)}
2567 @item @tab @code{integer (omp_allocator_handle_kind), intent(in) :: allocator}
2570 @item @emph{See also}:
2571 @ref{omp_init_allocator}
2573 @item @emph{Reference}:
2574 @uref{https://www.openmp.org, OpenMP specification v5.0}, Section 3.7.3
2579 @node omp_set_default_allocator
2580 @subsection @code{omp_set_default_allocator} -- Set the default allocator
2582 @item @emph{Description}:
2583 Sets the default allocator that is used when no allocator has been specified
2584 in the @code{allocate} or @code{allocator} clause or if an OpenMP memory
2585 routine is invoked with the @code{omp_null_allocator} allocator.
2588 @multitable @columnfractions .20 .80
2589 @item @emph{Prototype}: @tab @code{void omp_set_default_allocator(omp_allocator_handle_t allocator);}
2592 @item @emph{Fortran}:
2593 @multitable @columnfractions .20 .80
2594 @item @emph{Interface}: @tab @code{subroutine omp_set_default_allocator(allocator)}
2595 @item @tab @code{integer (omp_allocator_handle_kind), intent(in) :: allocator}
2598 @item @emph{See also}:
2599 @ref{omp_get_default_allocator}, @ref{omp_init_allocator}, @ref{OMP_ALLOCATOR},
2600 @ref{Memory allocation}
2602 @item @emph{Reference}:
2603 @uref{https://www.openmp.org, OpenMP specification v5.0}, Section 3.7.4
2608 @node omp_get_default_allocator
2609 @subsection @code{omp_get_default_allocator} -- Get the default allocator
2611 @item @emph{Description}:
2612 The routine returns the default allocator that is used when no allocator has
2613 been specified in the @code{allocate} or @code{allocator} clause or if an
2614 OpenMP memory routine is invoked with the @code{omp_null_allocator} allocator.
2617 @multitable @columnfractions .20 .80
2618 @item @emph{Prototype}: @tab @code{omp_allocator_handle_t omp_get_default_allocator();}
2621 @item @emph{Fortran}:
2622 @multitable @columnfractions .20 .80
2623 @item @emph{Interface}: @tab @code{function omp_get_default_allocator()}
2624 @item @tab @code{integer (omp_allocator_handle_kind) :: omp_get_default_allocator}
2627 @item @emph{See also}:
2628 @ref{omp_set_default_allocator}, @ref{OMP_ALLOCATOR}
2630 @item @emph{Reference}:
2631 @uref{https://www.openmp.org, OpenMP specification v5.0}, Section 3.7.5
2637 @subsection @code{omp_alloc} -- Memory allocation with an allocator
2639 @item @emph{Description}:
2640 Allocate memory with the specified allocator, which can either be a predefined
2641 allocator, an allocator handle or @code{omp_null_allocator}. If the allocators
2642 is @code{omp_null_allocator}, the allocator specified by the
2643 @var{def-allocator-var} ICV is used. @var{size} must be a nonnegative number
2644 denoting the number of bytes to be allocated; if @var{size} is zero,
2645 @code{omp_alloc} will return a null pointer. If successful, a pointer to the
2646 allocated memory is returned, otherwise the @code{fallback} trait of the
2647 allocator determines the behavior. The content of the allocated memory is
2650 In @code{target} regions, either the @code{dynamic_allocators} clause must
2651 appear on a @code{requires} directive in the same compilation unit -- or the
2652 @var{allocator} argument may only be a constant expression with the value of
2653 one of the predefined allocators and may not be @code{omp_null_allocator}.
2655 Memory allocated by @code{omp_alloc} must be freed using @code{omp_free}.
2658 @multitable @columnfractions .20 .80
2659 @item @emph{Prototype}: @tab @code{void* omp_alloc(size_t size,}
2660 @item @tab @code{ omp_allocator_handle_t allocator)}
2664 @multitable @columnfractions .20 .80
2665 @item @emph{Prototype}: @tab @code{void* omp_alloc(size_t size,}
2666 @item @tab @code{ omp_allocator_handle_t allocator=omp_null_allocator)}
2669 @item @emph{Fortran}:
2670 @multitable @columnfractions .20 .80
2671 @item @emph{Interface}: @tab @code{type(c_ptr) function omp_alloc(size, allocator) bind(C)}
2672 @item @tab @code{use, intrinsic :: iso_c_binding, only : c_ptr, c_size_t}
2673 @item @tab @code{integer (c_size_t), value :: size}
2674 @item @tab @code{integer (omp_allocator_handle_kind), value :: allocator}
2677 @item @emph{See also}:
2678 @ref{OMP_ALLOCATOR}, @ref{Memory allocation}, @ref{omp_set_default_allocator},
2679 @ref{omp_free}, @ref{omp_init_allocator}
2681 @item @emph{Reference}:
2682 @uref{https://www.openmp.org, OpenMP specification v5.0}, Section 3.7.6
2687 @node omp_aligned_alloc
2688 @subsection @code{omp_aligned_alloc} -- Memory allocation with an allocator and alignment
2690 @item @emph{Description}:
2691 Allocate memory with the specified allocator, which can either be a predefined
2692 allocator, an allocator handle or @code{omp_null_allocator}. If the allocators
2693 is @code{omp_null_allocator}, the allocator specified by the
2694 @var{def-allocator-var} ICV is used. @var{alignment} must be a positive power
2695 of two and @var{size} must be a nonnegative number that is a multiple of the
2696 alignment and denotes the number of bytes to be allocated; if @var{size} is
2697 zero, @code{omp_aligned_alloc} will return a null pointer. The alignment will
2698 be at least the maximal value required by @code{alignment} trait of the
2699 allocator and the value of the passed @var{alignment} argument. If successful,
2700 a pointer to the allocated memory is returned, otherwise the @code{fallback}
2701 trait of the allocator determines the behavior. The content of the allocated
2702 memory is unspecified.
2704 In @code{target} regions, either the @code{dynamic_allocators} clause must
2705 appear on a @code{requires} directive in the same compilation unit -- or the
2706 @var{allocator} argument may only be a constant expression with the value of
2707 one of the predefined allocators and may not be @code{omp_null_allocator}.
2709 Memory allocated by @code{omp_aligned_alloc} must be freed using
2713 @multitable @columnfractions .20 .80
2714 @item @emph{Prototype}: @tab @code{void* omp_aligned_alloc(size_t alignment,}
2715 @item @tab @code{ size_t size,}
2716 @item @tab @code{ omp_allocator_handle_t allocator)}
2720 @multitable @columnfractions .20 .80
2721 @item @emph{Prototype}: @tab @code{void* omp_aligned_alloc(size_t alignment,}
2722 @item @tab @code{ size_t size,}
2723 @item @tab @code{ omp_allocator_handle_t allocator=omp_null_allocator)}
2726 @item @emph{Fortran}:
2727 @multitable @columnfractions .20 .80
2728 @item @emph{Interface}: @tab @code{type(c_ptr) function omp_aligned_alloc(alignment, size, allocator) bind(C)}
2729 @item @tab @code{use, intrinsic :: iso_c_binding, only : c_ptr, c_size_t}
2730 @item @tab @code{integer (c_size_t), value :: alignment, size}
2731 @item @tab @code{integer (omp_allocator_handle_kind), value :: allocator}
2734 @item @emph{See also}:
2735 @ref{OMP_ALLOCATOR}, @ref{Memory allocation}, @ref{omp_set_default_allocator},
2736 @ref{omp_free}, @ref{omp_init_allocator}
2738 @item @emph{Reference}:
2739 @uref{https://www.openmp.org, OpenMP specification v5.1}, Section 3.13.6
2745 @subsection @code{omp_free} -- Freeing memory allocated with OpenMP routines
2747 @item @emph{Description}:
2748 The @code{omp_free} routine deallocates memory previously allocated by an
2749 OpenMP memory-management routine. The @var{ptr} argument must point to such
2750 memory or be a null pointer; if it is a null pointer, no operation is
2751 performed. If specified, the @var{allocator} argument must be either the
2752 memory allocator that was used for the allocation or @code{omp_null_allocator};
2753 if it is @code{omp_null_allocator}, the implementation will determine the value
2756 Calling @code{omp_free} invokes undefined behavior if the memory
2757 was already deallocated or when the used allocator has already been destroyed.
2760 @multitable @columnfractions .20 .80
2761 @item @emph{Prototype}: @tab @code{void omp_free(void *ptr,}
2762 @item @tab @code{ omp_allocator_handle_t allocator)}
2766 @multitable @columnfractions .20 .80
2767 @item @emph{Prototype}: @tab @code{void omp_free(void *ptr,}
2768 @item @tab @code{ omp_allocator_handle_t allocator=omp_null_allocator)}
2771 @item @emph{Fortran}:
2772 @multitable @columnfractions .20 .80
2773 @item @emph{Interface}: @tab @code{subroutine omp_free(ptr, allocator) bind(C)}
2774 @item @tab @code{use, intrinsic :: iso_c_binding, only : c_ptr}
2775 @item @tab @code{type (c_ptr), value :: ptr}
2776 @item @tab @code{integer (omp_allocator_handle_kind), value :: allocator}
2779 @item @emph{See also}:
2780 @ref{omp_alloc}, @ref{omp_aligned_alloc}, @ref{omp_calloc},
2781 @ref{omp_aligned_calloc}, @ref{omp_realloc}
2783 @item @emph{Reference}:
2784 @uref{https://www.openmp.org, OpenMP specification v5.0}, Section 3.7.7
2790 @subsection @code{omp_calloc} -- Allocate nullified memory with an allocator
2792 @item @emph{Description}:
2793 Allocate zero-initialized memory with the specified allocator, which can either
2794 be a predefined allocator, an allocator handle or @code{omp_null_allocator}. If
2795 the allocators is @code{omp_null_allocator}, the allocator specified by the
2796 @var{def-allocator-var} ICV is used. The to-be allocated memory is for an
2797 array with @var{nmemb} elements, each having a size of @var{size} bytes. Both
2798 @var{nmemb} and @var{size} must be nonnegative numbers; if either of them is
2799 zero, @code{omp_calloc} will return a null pointer. If successful, a pointer to
2800 the zero-initialized allocated memory is returned, otherwise the @code{fallback}
2801 trait of the allocator determines the behavior.
2803 In @code{target} regions, either the @code{dynamic_allocators} clause must
2804 appear on a @code{requires} directive in the same compilation unit -- or the
2805 @var{allocator} argument may only be a constant expression with the value of
2806 one of the predefined allocators and may not be @code{omp_null_allocator}.
2808 Memory allocated by @code{omp_calloc} must be freed using @code{omp_free}.
2811 @multitable @columnfractions .20 .80
2812 @item @emph{Prototype}: @tab @code{void* omp_calloc(size_t nmemb, size_t size,}
2813 @item @tab @code{ omp_allocator_handle_t allocator)}
2817 @multitable @columnfractions .20 .80
2818 @item @emph{Prototype}: @tab @code{void* omp_calloc(size_t nmemb, size_t size,}
2819 @item @tab @code{ omp_allocator_handle_t allocator=omp_null_allocator)}
2822 @item @emph{Fortran}:
2823 @multitable @columnfractions .20 .80
2824 @item @emph{Interface}: @tab @code{type(c_ptr) function omp_calloc(nmemb, size, allocator) bind(C)}
2825 @item @tab @code{use, intrinsic :: iso_c_binding, only : c_ptr, c_size_t}
2826 @item @tab @code{integer (c_size_t), value :: nmemb, size}
2827 @item @tab @code{integer (omp_allocator_handle_kind), value :: allocator}
2830 @item @emph{See also}:
2831 @ref{OMP_ALLOCATOR}, @ref{Memory allocation}, @ref{omp_set_default_allocator},
2832 @ref{omp_free}, @ref{omp_init_allocator}
2834 @item @emph{Reference}:
2835 @uref{https://www.openmp.org, OpenMP specification v5.1}, Section 3.13.8
2840 @node omp_aligned_calloc
2841 @subsection @code{omp_aligned_calloc} -- Allocate aligned nullified memory with an allocator
2843 @item @emph{Description}:
2844 Allocate zero-initialized memory with the specified allocator, which can either
2845 be a predefined allocator, an allocator handle or @code{omp_null_allocator}. If
2846 the allocators is @code{omp_null_allocator}, the allocator specified by the
2847 @var{def-allocator-var} ICV is used. The to-be allocated memory is for an
2848 array with @var{nmemb} elements, each having a size of @var{size} bytes. Both
2849 @var{nmemb} and @var{size} must be nonnegative numbers; if either of them is
2850 zero, @code{omp_aligned_calloc} will return a null pointer. @var{alignment}
2851 must be a positive power of two and @var{size} must be a multiple of the
2852 alignment; the alignment will be at least the maximal value required by
2853 @code{alignment} trait of the allocator and the value of the passed
2854 @var{alignment} argument. If successful, a pointer to the zero-initialized
2855 allocated memory is returned, otherwise the @code{fallback} trait of the
2856 allocator determines the behavior.
2858 In @code{target} regions, either the @code{dynamic_allocators} clause must
2859 appear on a @code{requires} directive in the same compilation unit -- or the
2860 @var{allocator} argument may only be a constant expression with the value of
2861 one of the predefined allocators and may not be @code{omp_null_allocator}.
2863 Memory allocated by @code{omp_aligned_calloc} must be freed using
2867 @multitable @columnfractions .20 .80
2868 @item @emph{Prototype}: @tab @code{void* omp_aligned_calloc(size_t nmemb, size_t size,}
2869 @item @tab @code{ omp_allocator_handle_t allocator)}
2873 @multitable @columnfractions .20 .80
2874 @item @emph{Prototype}: @tab @code{void* omp_aligned_calloc(size_t nmemb, size_t size,}
2875 @item @tab @code{ omp_allocator_handle_t allocator=omp_null_allocator)}
2878 @item @emph{Fortran}:
2879 @multitable @columnfractions .20 .80
2880 @item @emph{Interface}: @tab @code{type(c_ptr) function omp_aligned_calloc(nmemb, size, allocator) bind(C)}
2881 @item @tab @code{use, intrinsic :: iso_c_binding, only : c_ptr, c_size_t}
2882 @item @tab @code{integer (c_size_t), value :: nmemb, size}
2883 @item @tab @code{integer (omp_allocator_handle_kind), value :: allocator}
2886 @item @emph{See also}:
2887 @ref{OMP_ALLOCATOR}, @ref{Memory allocation}, @ref{omp_set_default_allocator},
2888 @ref{omp_free}, @ref{omp_init_allocator}
2890 @item @emph{Reference}:
2891 @uref{https://www.openmp.org, OpenMP specification v5.1}, Section 3.13.8
2897 @subsection @code{omp_realloc} -- Reallocate memory allocated with OpenMP routines
2899 @item @emph{Description}:
2900 The @code{omp_realloc} routine deallocates memory to which @var{ptr} points to
2901 and allocates new memory with the specified @var{allocator} argument; the
2902 new memory will have the content of the old memory up to the minimum of the
2903 old size and the new @var{size}, otherwise the content of the returned memory
2904 is unspecified. If the new allocator is the same as the old one, the routine
2905 tries to resize the existing memory allocation, returning the same address as
2906 @var{ptr} if successful. @var{ptr} must point to memory allocated by an OpenMP
2907 memory-management routine.
2909 The @var{allocator} and @var{free_allocator} arguments must be a predefined
2910 allocator, an allocator handle or @code{omp_null_allocator}. If
2911 @var{free_allocator} is @code{omp_null_allocator}, the implementation
2912 automatically determines the allocator used for the allocation of @var{ptr}.
2913 If @var{allocator} is @code{omp_null_allocator} and @var{ptr} is is not a
2914 null pointer, the same allocator as @code{free_allocator} is used and
2915 when @var{ptr} is a null pointer the allocator specified by the
2916 @var{def-allocator-var} ICV is used.
2918 The @var{size} must be a nonnegative number denoting the number of bytes to be
2919 allocated; if @var{size} is zero, @code{omp_realloc} will return free the
2920 memory and return a null pointer. When @var{size} is nonzero: if successful,
2921 a pointer to the allocated memory is returned, otherwise the @code{fallback}
2922 trait of the allocator determines the behavior.
2924 In @code{target} regions, either the @code{dynamic_allocators} clause must
2925 appear on a @code{requires} directive in the same compilation unit -- or the
2926 @var{free_allocator} and @var{allocator} arguments may only be a constant
2927 expression with the value of one of the predefined allocators and may not be
2928 @code{omp_null_allocator}.
2930 Memory allocated by @code{omp_realloc} must be freed using @code{omp_free}.
2931 Calling @code{omp_free} invokes undefined behavior if the memory
2932 was already deallocated or when the used allocator has already been destroyed.
2935 @multitable @columnfractions .20 .80
2936 @item @emph{Prototype}: @tab @code{void* omp_realloc(void *ptr, size_t size,}
2937 @item @tab @code{ omp_allocator_handle_t allocator,}
2938 @item @tab @code{ omp_allocator_handle_t free_allocator)}
2942 @multitable @columnfractions .20 .80
2943 @item @emph{Prototype}: @tab @code{void* omp_realloc(void *ptr, size_t size,}
2944 @item @tab @code{ omp_allocator_handle_t allocator=omp_null_allocator,}
2945 @item @tab @code{ omp_allocator_handle_t free_allocator=omp_null_allocator)}
2948 @item @emph{Fortran}:
2949 @multitable @columnfractions .20 .80
2950 @item @emph{Interface}: @tab @code{type(c_ptr) function omp_realloc(ptr, size, allocator, free_allocator) bind(C)}
2951 @item @tab @code{use, intrinsic :: iso_c_binding, only : c_ptr, c_size_t}
2952 @item @tab @code{type(C_ptr), value :: ptr}
2953 @item @tab @code{integer (c_size_t), value :: size}
2954 @item @tab @code{integer (omp_allocator_handle_kind), value :: allocator, free_allocator}
2957 @item @emph{See also}:
2958 @ref{OMP_ALLOCATOR}, @ref{Memory allocation}, @ref{omp_set_default_allocator},
2959 @ref{omp_free}, @ref{omp_init_allocator}
2961 @item @emph{Reference}:
2962 @uref{https://www.openmp.org, OpenMP specification v5.0}, Section 3.7.9
2967 @c @node Tool Control Routine
2968 @c @section Tool Control Routine
2972 @node Environment Display Routine
2973 @section Environment Display Routine
2975 Routine to display the OpenMP version number and the initial value of ICVs.
2976 It has C linkage and does not throw exceptions.
2979 * omp_display_env:: print the initial ICV values
2982 @node omp_display_env
2983 @subsection @code{omp_display_env} -- print the initial ICV values
2985 @item @emph{Description}:
2986 Each time this routine is invoked, the OpenMP version number and initial value
2987 of internal control variables (ICVs) is printed on @code{stderr}. The displayed
2988 values are those at startup after evaluating the environment variables; later
2989 calls to API routines or clauses used in enclosing constructs do not affect
2992 If the @var{verbose} argument is @code{false}, only the OpenMP version and
2993 standard OpenMP ICVs are shown; if it is @code{true}, additionally, the
2994 GCC-specific ICVs are shown.
2996 The output consists of multiple lines and starts with
2997 @samp{OPENMP DISPLAY ENVIRONMENT BEGIN} followed by the name-value lines and
2998 ends with @samp{OPENMP DISPLAY ENVIRONMENT END}. The @var{name} is followed by
2999 an equal sign and the @var{value} is enclosed in single quotes.
3001 The first line has as @var{name} either @samp{_OPENMP} or @samp{openmp_version}
3002 and shows as value the supported OpenMP version number (4-digit year, 2-digit
3003 month) of the implementation, matching the value of the @code{_OPENMP} macro
3004 and, in Fortran, the named constant @code{openmp_version}.
3006 In each of the succeeding lines, the @var{name} matches the environment-variable
3007 name of an ICV and shows its value. Those line are might be prefixed by pair of
3008 brackets and a space, where the brackets enclose a comma-separated list of
3009 devices to which the ICV-value combination applies to; the value can either be a
3010 numeric device number or an abstract name denoting all devices (@code{all}), the
3011 initial host device (@code{host}) or all devices but the host (@code{device}).
3012 Note that the same ICV might be printed multiple times for multiple devices,
3013 even if all have the same value.
3015 The effect when invoked from within a @code{target} region is unspecified.
3018 @multitable @columnfractions .20 .80
3019 @item @emph{Prototype}: @tab @code{void omp_display_env(int verbose)}
3022 @item @emph{Fortran}:
3023 @multitable @columnfractions .20 .80
3024 @item @emph{Interface}: @tab @code{subroutine omp_display_env(vebose)}
3025 @item @tab @code{logical, intent(in) :: verbose}
3028 @item @emph{Example}:
3029 Note that the GCC-specific ICVs, such as the shown @code{GOMP_SPINCOUNT},
3030 are only printed when @var{varbose} set to @code{true}.
3033 OPENMP DISPLAY ENVIRONMENT BEGIN
3035 [host] OMP_DYNAMIC = 'FALSE'
3036 [host] OMP_NESTED = 'FALSE'
3037 [all] OMP_CANCELLATION = 'FALSE'
3039 [host] GOMP_SPINCOUNT = '300000'
3040 OPENMP DISPLAY ENVIRONMENT END
3044 @item @emph{See also}:
3045 @ref{OMP_DISPLAY_ENV}, @ref{Environment Variables},
3046 @ref{Implementation-defined ICV Initialization}
3048 @item @emph{Reference}:
3049 @uref{https://www.openmp.org, OpenMP specification v5.1}, Section 3.15
3053 @c ---------------------------------------------------------------------
3054 @c OpenMP Environment Variables
3055 @c ---------------------------------------------------------------------
3057 @node Environment Variables
3058 @chapter OpenMP Environment Variables
3060 The environment variables which beginning with @env{OMP_} are defined by
3061 section 4 of the OpenMP specification in version 4.5 or in a later version
3062 of the specification, while those beginning with @env{GOMP_} are GNU extensions.
3063 Most @env{OMP_} environment variables have an associated internal control
3066 For any OpenMP environment variable that sets an ICV and is neither
3067 @code{OMP_DEFAULT_DEVICE} nor has global ICV scope, associated
3068 device-specific environment variables exist. For them, the environment
3069 variable without suffix affects the host. The suffix @code{_DEV_} followed
3070 by a non-negative device number less that the number of available devices sets
3071 the ICV for the corresponding device. The suffix @code{_DEV} sets the ICV
3072 of all non-host devices for which a device-specific corresponding environment
3073 variable has not been set while the @code{_ALL} suffix sets the ICV of all
3074 host and non-host devices for which a more specific corresponding environment
3075 variable is not set.
3078 * OMP_ALLOCATOR:: Set the default allocator
3079 * OMP_AFFINITY_FORMAT:: Set the format string used for affinity display
3080 * OMP_CANCELLATION:: Set whether cancellation is activated
3081 * OMP_DISPLAY_AFFINITY:: Display thread affinity information
3082 * OMP_DISPLAY_ENV:: Show OpenMP version and environment variables
3083 * OMP_DEFAULT_DEVICE:: Set the device used in target regions
3084 * OMP_DYNAMIC:: Dynamic adjustment of threads
3085 * OMP_MAX_ACTIVE_LEVELS:: Set the maximum number of nested parallel regions
3086 * OMP_MAX_TASK_PRIORITY:: Set the maximum task priority value
3087 * OMP_NESTED:: Nested parallel regions
3088 * OMP_NUM_TEAMS:: Specifies the number of teams to use by teams region
3089 * OMP_NUM_THREADS:: Specifies the number of threads to use
3090 * OMP_PROC_BIND:: Whether threads may be moved between CPUs
3091 * OMP_PLACES:: Specifies on which CPUs the threads should be placed
3092 * OMP_STACKSIZE:: Set default thread stack size
3093 * OMP_SCHEDULE:: How threads are scheduled
3094 * OMP_TARGET_OFFLOAD:: Controls offloading behavior
3095 * OMP_TEAMS_THREAD_LIMIT:: Set the maximum number of threads imposed by teams
3096 * OMP_THREAD_LIMIT:: Set the maximum number of threads
3097 * OMP_WAIT_POLICY:: How waiting threads are handled
3098 * GOMP_CPU_AFFINITY:: Bind threads to specific CPUs
3099 * GOMP_DEBUG:: Enable debugging output
3100 * GOMP_STACKSIZE:: Set default thread stack size
3101 * GOMP_SPINCOUNT:: Set the busy-wait spin count
3102 * GOMP_RTEMS_THREAD_POOLS:: Set the RTEMS specific thread pools
3107 @section @env{OMP_ALLOCATOR} -- Set the default allocator
3108 @cindex Environment Variable
3110 @item @emph{ICV:} @var{def-allocator-var}
3111 @item @emph{Scope:} data environment
3112 @item @emph{Description}:
3113 Sets the default allocator that is used when no allocator has been specified
3114 in the @code{allocate} or @code{allocator} clause or if an OpenMP memory
3115 routine is invoked with the @code{omp_null_allocator} allocator.
3116 If unset, @code{omp_default_mem_alloc} is used.
3118 The value can either be a predefined allocator or a predefined memory space
3119 or a predefined memory space followed by a colon and a comma-separated list
3120 of memory trait and value pairs, separated by @code{=}.
3122 Note: The corresponding device environment variables are currently not
3123 supported. Therefore, the non-host @var{def-allocator-var} ICVs are always
3124 initialized to @code{omp_default_mem_alloc}. However, on all devices,
3125 the @code{omp_set_default_allocator} API routine can be used to change
3128 @multitable @columnfractions .45 .45
3129 @headitem Predefined allocators @tab Associated predefined memory spaces
3130 @item omp_default_mem_alloc @tab omp_default_mem_space
3131 @item omp_large_cap_mem_alloc @tab omp_large_cap_mem_space
3132 @item omp_const_mem_alloc @tab omp_const_mem_space
3133 @item omp_high_bw_mem_alloc @tab omp_high_bw_mem_space
3134 @item omp_low_lat_mem_alloc @tab omp_low_lat_mem_space
3135 @item omp_cgroup_mem_alloc @tab omp_low_lat_mem_space (implementation defined)
3136 @item omp_pteam_mem_alloc @tab omp_low_lat_mem_space (implementation defined)
3137 @item omp_thread_mem_alloc @tab omp_low_lat_mem_space (implementation defined)
3140 The predefined allocators use the default values for the traits,
3141 as listed below. Except that the last three allocators have the
3142 @code{access} trait set to @code{cgroup}, @code{pteam}, and
3143 @code{thread}, respectively.
3145 @multitable @columnfractions .25 .40 .25
3146 @headitem Trait @tab Allowed values @tab Default value
3147 @item @code{sync_hint} @tab @code{contended}, @code{uncontended},
3148 @code{serialized}, @code{private}
3149 @tab @code{contended}
3150 @item @code{alignment} @tab Positive integer being a power of two
3152 @item @code{access} @tab @code{all}, @code{cgroup},
3153 @code{pteam}, @code{thread}
3155 @item @code{pool_size} @tab Positive integer
3156 @tab See @ref{Memory allocation}
3157 @item @code{fallback} @tab @code{default_mem_fb}, @code{null_fb},
3158 @code{abort_fb}, @code{allocator_fb}
3160 @item @code{fb_data} @tab @emph{unsupported as it needs an allocator handle}
3162 @item @code{pinned} @tab @code{true}, @code{false}
3164 @item @code{partition} @tab @code{environment}, @code{nearest},
3165 @code{blocked}, @code{interleaved}
3166 @tab @code{environment}
3169 For the @code{fallback} trait, the default value is @code{null_fb} for the
3170 @code{omp_default_mem_alloc} allocator and any allocator that is associated
3171 with device memory; for all other other allocators, it is @code{default_mem_fb}
3176 OMP_ALLOCATOR=omp_high_bw_mem_alloc
3177 OMP_ALLOCATOR=omp_large_cap_mem_space
3178 OMP_ALLOCATOR=omp_low_lat_mem_space:pinned=true,partition=nearest
3181 @item @emph{See also}:
3182 @ref{Memory allocation}, @ref{omp_get_default_allocator},
3183 @ref{omp_set_default_allocator}, @ref{Offload-Target Specifics}
3185 @item @emph{Reference}:
3186 @uref{https://www.openmp.org, OpenMP specification v5.0}, Section 6.21
3191 @node OMP_AFFINITY_FORMAT
3192 @section @env{OMP_AFFINITY_FORMAT} -- Set the format string used for affinity display
3193 @cindex Environment Variable
3195 @item @emph{ICV:} @var{affinity-format-var}
3196 @item @emph{Scope:} device
3197 @item @emph{Description}:
3198 Sets the format string used when displaying OpenMP thread affinity information.
3199 Special values are output using @code{%} followed by an optional size
3200 specification and then either the single-character field type or its long
3201 name enclosed in curly braces; using @code{%%} displays a literal percent.
3202 The size specification consists of an optional @code{0.} or @code{.} followed
3203 by a positive integer, specifying the minimal width of the output. With
3204 @code{0.} and numerical values, the output is padded with zeros on the left;
3205 with @code{.}, the output is padded by spaces on the left; otherwise, the
3206 output is padded by spaces on the right. If unset, the value is
3207 ``@code{level %L thread %i affinity %A}''.
3209 Supported field types are:
3211 @multitable @columnfractions .10 .25 .60
3212 @item t @tab team_num @tab value returned by @code{omp_get_team_num}
3213 @item T @tab num_teams @tab value returned by @code{omp_get_num_teams}
3214 @item L @tab nesting_level @tab value returned by @code{omp_get_level}
3215 @item n @tab thread_num @tab value returned by @code{omp_get_thread_num}
3216 @item N @tab num_threads @tab value returned by @code{omp_get_num_threads}
3217 @item a @tab ancestor_tnum
3218 @tab value returned by
3219 @code{omp_get_ancestor_thread_num(omp_get_level()-1)}
3220 @item H @tab host @tab name of the host that executes the thread
3221 @item P @tab process_id @tab process identifier
3222 @item i @tab native_thread_id @tab native thread identifier
3223 @item A @tab thread_affinity
3224 @tab comma separated list of integer values or ranges, representing the
3225 processors on which a process might execute, subject to affinity
3229 For instance, after setting
3232 OMP_AFFINITY_FORMAT="%0.2a!%n!%.4L!%N;%.2t;%0.2T;%@{team_num@};%@{num_teams@};%A"
3235 with either @code{OMP_DISPLAY_AFFINITY} being set or when calling
3236 @code{omp_display_affinity} with @code{NULL} or an empty string, the program
3237 might display the following:
3240 00!0! 1!4; 0;01;0;1;0-11
3241 00!3! 1!4; 0;01;0;1;0-11
3242 00!2! 1!4; 0;01;0;1;0-11
3243 00!1! 1!4; 0;01;0;1;0-11
3246 @item @emph{See also}:
3247 @ref{OMP_DISPLAY_AFFINITY}
3249 @item @emph{Reference}:
3250 @uref{https://www.openmp.org, OpenMP specification v5.0}, Section 6.14
3255 @node OMP_CANCELLATION
3256 @section @env{OMP_CANCELLATION} -- Set whether cancellation is activated
3257 @cindex Environment Variable
3259 @item @emph{ICV:} @var{cancel-var}
3260 @item @emph{Scope:} global
3261 @item @emph{Description}:
3262 If set to @code{TRUE}, the cancellation is activated. If set to @code{FALSE} or
3263 if unset, cancellation is disabled and the @code{cancel} construct is ignored.
3265 @item @emph{See also}:
3266 @ref{omp_get_cancellation}
3268 @item @emph{Reference}:
3269 @uref{https://www.openmp.org, OpenMP specification v4.5}, Section 4.11
3274 @node OMP_DISPLAY_AFFINITY
3275 @section @env{OMP_DISPLAY_AFFINITY} -- Display thread affinity information
3276 @cindex Environment Variable
3278 @item @emph{ICV:} @var{display-affinity-var}
3279 @item @emph{Scope:} global
3280 @item @emph{Description}:
3281 If set to @code{FALSE} or if unset, affinity displaying is disabled.
3282 If set to @code{TRUE}, the runtime displays affinity information about
3283 OpenMP threads in a parallel region upon entering the region and every time
3286 @item @emph{See also}:
3287 @ref{OMP_AFFINITY_FORMAT}
3289 @item @emph{Reference}:
3290 @uref{https://www.openmp.org, OpenMP specification v5.0}, Section 6.13
3296 @node OMP_DISPLAY_ENV
3297 @section @env{OMP_DISPLAY_ENV} -- Show OpenMP version and environment variables
3298 @cindex Environment Variable
3300 @item @emph{ICV:} none
3301 @item @emph{Scope:} not applicable
3302 @item @emph{Description}:
3303 If set to @code{TRUE}, the runtime displays the same information to
3304 @code{stderr} as shown by the @code{omp_display_env} routine invoked with
3305 @var{verbose} argument set to @code{false}. If set to @code{VERBOSE}, the same
3306 information is shown as invoking the routine with @var{verbose} set to
3307 @code{true}. If unset or set to @code{FALSE}, this information is not shown.
3308 The result for any other value is unspecified.
3310 @item @emph{See also}:
3311 @ref{omp_display_env}
3313 @item @emph{Reference}:
3314 @uref{https://www.openmp.org, OpenMP specification v4.5}, Section 4.12
3319 @node OMP_DEFAULT_DEVICE
3320 @section @env{OMP_DEFAULT_DEVICE} -- Set the device used in target regions
3321 @cindex Environment Variable
3323 @item @emph{ICV:} @var{default-device-var}
3324 @item @emph{Scope:} data environment
3325 @item @emph{Description}:
3326 Set to choose the device which is used in a @code{target} region, unless the
3327 value is overridden by @code{omp_set_default_device} or by a @code{device}
3328 clause. The value shall be the nonnegative device number. If no device with
3329 the given device number exists, the code is executed on the host. If unset,
3330 @env{OMP_TARGET_OFFLOAD} is @code{mandatory} and no non-host devices are
3331 available, it is set to @code{omp_invalid_device}. Otherwise, if unset,
3332 device number 0 is used.
3335 @item @emph{See also}:
3336 @ref{omp_get_default_device}, @ref{omp_set_default_device},
3337 @ref{OMP_TARGET_OFFLOAD}
3339 @item @emph{Reference}:
3340 @uref{https://www.openmp.org, OpenMP specification v5.2}, Section 21.2.7
3346 @section @env{OMP_DYNAMIC} -- Dynamic adjustment of threads
3347 @cindex Environment Variable
3349 @item @emph{ICV:} @var{dyn-var}
3350 @item @emph{Scope:} global
3351 @item @emph{Description}:
3352 Enable or disable the dynamic adjustment of the number of threads
3353 within a team. The value of this environment variable shall be
3354 @code{TRUE} or @code{FALSE}. If undefined, dynamic adjustment is
3355 disabled by default.
3357 @item @emph{See also}:
3358 @ref{omp_set_dynamic}
3360 @item @emph{Reference}:
3361 @uref{https://www.openmp.org, OpenMP specification v4.5}, Section 4.3
3366 @node OMP_MAX_ACTIVE_LEVELS
3367 @section @env{OMP_MAX_ACTIVE_LEVELS} -- Set the maximum number of nested parallel regions
3368 @cindex Environment Variable
3370 @item @emph{ICV:} @var{max-active-levels-var}
3371 @item @emph{Scope:} data environment
3372 @item @emph{Description}:
3373 Specifies the initial value for the maximum number of nested parallel
3374 regions. The value of this variable shall be a positive integer.
3375 If undefined, then if @env{OMP_NESTED} is defined and set to true, or
3376 if @env{OMP_NUM_THREADS} or @env{OMP_PROC_BIND} are defined and set to
3377 a list with more than one item, the maximum number of nested parallel
3378 regions is initialized to the largest number supported, otherwise
3381 @item @emph{See also}:
3382 @ref{omp_set_max_active_levels}, @ref{OMP_NESTED}, @ref{OMP_PROC_BIND},
3383 @ref{OMP_NUM_THREADS}
3386 @item @emph{Reference}:
3387 @uref{https://www.openmp.org, OpenMP specification v4.5}, Section 4.9
3392 @node OMP_MAX_TASK_PRIORITY
3393 @section @env{OMP_MAX_TASK_PRIORITY} -- Set the maximum priority
3394 number that can be set for a task.
3395 @cindex Environment Variable
3397 @item @emph{ICV:} @var{max-task-priority-var}
3398 @item @emph{Scope:} global
3399 @item @emph{Description}:
3400 Specifies the initial value for the maximum priority value that can be
3401 set for a task. The value of this variable shall be a non-negative
3402 integer, and zero is allowed. If undefined, the default priority is
3405 @item @emph{See also}:
3406 @ref{omp_get_max_task_priority}
3408 @item @emph{Reference}:
3409 @uref{https://www.openmp.org, OpenMP specification v4.5}, Section 4.14
3415 @section @env{OMP_NESTED} -- Nested parallel regions
3416 @cindex Environment Variable
3417 @cindex Implementation specific setting
3419 @item @emph{ICV:} @var{max-active-levels-var}
3420 @item @emph{Scope:} data environment
3421 @item @emph{Description}:
3422 Enable or disable nested parallel regions, i.e., whether team members
3423 are allowed to create new teams. The value of this environment variable
3424 shall be @code{TRUE} or @code{FALSE}. If set to @code{TRUE}, the number
3425 of maximum active nested regions supported is by default set to the
3426 maximum supported, otherwise it is set to one. If
3427 @env{OMP_MAX_ACTIVE_LEVELS} is defined, its setting overrides this
3428 setting. If both are undefined, nested parallel regions are enabled if
3429 @env{OMP_NUM_THREADS} or @env{OMP_PROC_BINDS} are defined to a list with
3430 more than one item, otherwise they are disabled by default.
3432 Note that the @code{OMP_NESTED} environment variable was deprecated in
3433 the OpenMP specification 5.2 in favor of @code{OMP_MAX_ACTIVE_LEVELS}.
3435 @item @emph{See also}:
3436 @ref{omp_set_max_active_levels}, @ref{omp_set_nested},
3437 @ref{OMP_MAX_ACTIVE_LEVELS}
3439 @item @emph{Reference}:
3440 @uref{https://www.openmp.org, OpenMP specification v4.5}, Section 4.6
3446 @section @env{OMP_NUM_TEAMS} -- Specifies the number of teams to use by teams region
3447 @cindex Environment Variable
3449 @item @emph{ICV:} @var{nteams-var}
3450 @item @emph{Scope:} device
3451 @item @emph{Description}:
3452 Specifies the upper bound for number of teams to use in teams regions
3453 without explicit @code{num_teams} clause. The value of this variable shall
3454 be a positive integer. If undefined it defaults to 0 which means
3455 implementation defined upper bound.
3457 @item @emph{See also}:
3458 @ref{omp_set_num_teams}
3460 @item @emph{Reference}:
3461 @uref{https://www.openmp.org, OpenMP specification v5.1}, Section 6.23
3466 @node OMP_NUM_THREADS
3467 @section @env{OMP_NUM_THREADS} -- Specifies the number of threads to use
3468 @cindex Environment Variable
3469 @cindex Implementation specific setting
3471 @item @emph{ICV:} @var{nthreads-var}
3472 @item @emph{Scope:} data environment
3473 @item @emph{Description}:
3474 Specifies the default number of threads to use in parallel regions. The
3475 value of this variable shall be a comma-separated list of positive integers;
3476 the value specifies the number of threads to use for the corresponding nested
3477 level. Specifying more than one item in the list automatically enables
3478 nesting by default. If undefined one thread per CPU is used.
3480 When a list with more than value is specified, it also affects the
3481 @var{max-active-levels-var} ICV as described in @ref{OMP_MAX_ACTIVE_LEVELS}.
3483 @item @emph{See also}:
3484 @ref{omp_set_num_threads}, @ref{OMP_MAX_ACTIVE_LEVELS}
3486 @item @emph{Reference}:
3487 @uref{https://www.openmp.org, OpenMP specification v4.5}, Section 4.2
3493 @section @env{OMP_PROC_BIND} -- Whether threads may be moved between CPUs
3494 @cindex Environment Variable
3496 @item @emph{ICV:} @var{bind-var}
3497 @item @emph{Scope:} data environment
3498 @item @emph{Description}:
3499 Specifies whether threads may be moved between processors. If set to
3500 @code{TRUE}, OpenMP threads should not be moved; if set to @code{FALSE}
3501 they may be moved. Alternatively, a comma separated list with the
3502 values @code{PRIMARY}, @code{MASTER}, @code{CLOSE} and @code{SPREAD} can
3503 be used to specify the thread affinity policy for the corresponding nesting
3504 level. With @code{PRIMARY} and @code{MASTER} the worker threads are in the
3505 same place partition as the primary thread. With @code{CLOSE} those are
3506 kept close to the primary thread in contiguous place partitions. And
3507 with @code{SPREAD} a sparse distribution
3508 across the place partitions is used. Specifying more than one item in the
3509 list automatically enables nesting by default.
3511 When a list is specified, it also affects the @var{max-active-levels-var} ICV
3512 as described in @ref{OMP_MAX_ACTIVE_LEVELS}.
3514 When undefined, @env{OMP_PROC_BIND} defaults to @code{TRUE} when
3515 @env{OMP_PLACES} or @env{GOMP_CPU_AFFINITY} is set and @code{FALSE} otherwise.
3517 @item @emph{See also}:
3518 @ref{omp_get_proc_bind}, @ref{GOMP_CPU_AFFINITY}, @ref{OMP_PLACES},
3519 @ref{OMP_MAX_ACTIVE_LEVELS}
3521 @item @emph{Reference}:
3522 @uref{https://www.openmp.org, OpenMP specification v4.5}, Section 4.4
3528 @section @env{OMP_PLACES} -- Specifies on which CPUs the threads should be placed
3529 @cindex Environment Variable
3531 @item @emph{ICV:} @var{place-partition-var}
3532 @item @emph{Scope:} implicit tasks
3533 @item @emph{Description}:
3534 The thread placement can be either specified using an abstract name or by an
3535 explicit list of the places. The abstract names @code{threads}, @code{cores},
3536 @code{sockets}, @code{ll_caches} and @code{numa_domains} can be optionally
3537 followed by a positive number in parentheses, which denotes the how many places
3538 shall be created. With @code{threads} each place corresponds to a single
3539 hardware thread; @code{cores} to a single core with the corresponding number of
3540 hardware threads; with @code{sockets} the place corresponds to a single
3541 socket; with @code{ll_caches} to a set of cores that shares the last level
3542 cache on the device; and @code{numa_domains} to a set of cores for which their
3543 closest memory on the device is the same memory and at a similar distance from
3544 the cores. The resulting placement can be shown by setting the
3545 @env{OMP_DISPLAY_ENV} environment variable.
3547 Alternatively, the placement can be specified explicitly as comma-separated
3548 list of places. A place is specified by set of nonnegative numbers in curly
3549 braces, denoting the hardware threads. The curly braces can be omitted
3550 when only a single number has been specified. The hardware threads
3551 belonging to a place can either be specified as comma-separated list of
3552 nonnegative thread numbers or using an interval. Multiple places can also be
3553 either specified by a comma-separated list of places or by an interval. To
3554 specify an interval, a colon followed by the count is placed after
3555 the hardware thread number or the place. Optionally, the length can be
3556 followed by a colon and the stride number -- otherwise a unit stride is
3557 assumed. Placing an exclamation mark (@code{!}) directly before a curly
3558 brace or numbers inside the curly braces (excluding intervals)
3559 excludes those hardware threads.
3561 For instance, the following specifies the same places list:
3562 @code{"@{0,1,2@}, @{3,4,6@}, @{7,8,9@}, @{10,11,12@}"};
3563 @code{"@{0:3@}, @{3:3@}, @{7:3@}, @{10:3@}"}; and @code{"@{0:2@}:4:3"}.
3565 If @env{OMP_PLACES} and @env{GOMP_CPU_AFFINITY} are unset and
3566 @env{OMP_PROC_BIND} is either unset or @code{false}, threads may be moved
3567 between CPUs following no placement policy.
3569 @item @emph{See also}:
3570 @ref{OMP_PROC_BIND}, @ref{GOMP_CPU_AFFINITY}, @ref{omp_get_proc_bind},
3571 @ref{OMP_DISPLAY_ENV}
3573 @item @emph{Reference}:
3574 @uref{https://www.openmp.org, OpenMP specification v4.5}, Section 4.5
3580 @section @env{OMP_STACKSIZE} -- Set default thread stack size
3581 @cindex Environment Variable
3583 @item @emph{ICV:} @var{stacksize-var}
3584 @item @emph{Scope:} device
3585 @item @emph{Description}:
3586 Set the default thread stack size in kilobytes, unless the number
3587 is suffixed by @code{B}, @code{K}, @code{M} or @code{G}, in which
3588 case the size is, respectively, in bytes, kilobytes, megabytes
3589 or gigabytes. This is different from @code{pthread_attr_setstacksize}
3590 which gets the number of bytes as an argument. If the stack size cannot
3591 be set due to system constraints, an error is reported and the initial
3592 stack size is left unchanged. If undefined, the stack size is system
3595 @item @emph{See also}:
3596 @ref{GOMP_STACKSIZE}
3598 @item @emph{Reference}:
3599 @uref{https://www.openmp.org, OpenMP specification v4.5}, Section 4.7
3605 @section @env{OMP_SCHEDULE} -- How threads are scheduled
3606 @cindex Environment Variable
3607 @cindex Implementation specific setting
3609 @item @emph{ICV:} @var{run-sched-var}
3610 @item @emph{Scope:} data environment
3611 @item @emph{Description}:
3612 Allows to specify @code{schedule type} and @code{chunk size}.
3613 The value of the variable shall have the form: @code{type[,chunk]} where
3614 @code{type} is one of @code{static}, @code{dynamic}, @code{guided} or @code{auto}
3615 The optional @code{chunk} size shall be a positive integer. If undefined,
3616 dynamic scheduling and a chunk size of 1 is used.
3618 @item @emph{See also}:
3619 @ref{omp_set_schedule}
3621 @item @emph{Reference}:
3622 @uref{https://www.openmp.org, OpenMP specification v4.5}, Sections 2.7.1.1 and 4.1
3627 @node OMP_TARGET_OFFLOAD
3628 @section @env{OMP_TARGET_OFFLOAD} -- Controls offloading behavior
3629 @cindex Environment Variable
3630 @cindex Implementation specific setting
3632 @item @emph{ICV:} @var{target-offload-var}
3633 @item @emph{Scope:} global
3634 @item @emph{Description}:
3635 Specifies the behavior with regard to offloading code to a device. This
3636 variable can be set to one of three values - @code{MANDATORY}, @code{DISABLED}
3639 If set to @code{MANDATORY}, the program terminates with an error if
3640 any device construct or device memory routine uses a device that is unavailable
3641 or not supported by the implementation, or uses a non-conforming device number.
3642 If set to @code{DISABLED}, then offloading is disabled and all code runs on
3643 the host. If set to @code{DEFAULT}, the program tries offloading to the
3644 device first, then falls back to running code on the host if it cannot.
3646 If undefined, then the program behaves as if @code{DEFAULT} was set.
3648 Note: Even with @code{MANDATORY}, no run-time termination is performed when
3649 the device number in a @code{device} clause or argument to a device memory
3650 routine is for host, which includes using the device number in the
3651 @var{default-device-var} ICV. However, the initial value of
3652 the @var{default-device-var} ICV is affected by @code{MANDATORY}.
3654 @item @emph{See also}:
3655 @ref{OMP_DEFAULT_DEVICE}
3657 @item @emph{Reference}:
3658 @uref{https://www.openmp.org, OpenMP specification v5.2}, Section 21.2.8
3663 @node OMP_TEAMS_THREAD_LIMIT
3664 @section @env{OMP_TEAMS_THREAD_LIMIT} -- Set the maximum number of threads imposed by teams
3665 @cindex Environment Variable
3667 @item @emph{ICV:} @var{teams-thread-limit-var}
3668 @item @emph{Scope:} device
3669 @item @emph{Description}:
3670 Specifies an upper bound for the number of threads to use by each contention
3671 group created by a teams construct without explicit @code{thread_limit}
3672 clause. The value of this variable shall be a positive integer. If undefined,
3673 the value of 0 is used which stands for an implementation defined upper
3676 @item @emph{See also}:
3677 @ref{OMP_THREAD_LIMIT}, @ref{omp_set_teams_thread_limit}
3679 @item @emph{Reference}:
3680 @uref{https://www.openmp.org, OpenMP specification v5.1}, Section 6.24
3685 @node OMP_THREAD_LIMIT
3686 @section @env{OMP_THREAD_LIMIT} -- Set the maximum number of threads
3687 @cindex Environment Variable
3689 @item @emph{ICV:} @var{thread-limit-var}
3690 @item @emph{Scope:} data environment
3691 @item @emph{Description}:
3692 Specifies the number of threads to use for the whole program. The
3693 value of this variable shall be a positive integer. If undefined,
3694 the number of threads is not limited.
3696 @item @emph{See also}:
3697 @ref{OMP_NUM_THREADS}, @ref{omp_get_thread_limit}
3699 @item @emph{Reference}:
3700 @uref{https://www.openmp.org, OpenMP specification v4.5}, Section 4.10
3705 @node OMP_WAIT_POLICY
3706 @section @env{OMP_WAIT_POLICY} -- How waiting threads are handled
3707 @cindex Environment Variable
3709 @item @emph{Description}:
3710 Specifies whether waiting threads should be active or passive. If
3711 the value is @code{PASSIVE}, waiting threads should not consume CPU
3712 power while waiting; while the value is @code{ACTIVE} specifies that
3713 they should. If undefined, threads wait actively for a short time
3714 before waiting passively.
3716 @item @emph{See also}:
3717 @ref{GOMP_SPINCOUNT}
3719 @item @emph{Reference}:
3720 @uref{https://www.openmp.org, OpenMP specification v4.5}, Section 4.8
3725 @node GOMP_CPU_AFFINITY
3726 @section @env{GOMP_CPU_AFFINITY} -- Bind threads to specific CPUs
3727 @cindex Environment Variable
3729 @item @emph{Description}:
3730 Binds threads to specific CPUs. The variable should contain a space-separated
3731 or comma-separated list of CPUs. This list may contain different kinds of
3732 entries: either single CPU numbers in any order, a range of CPUs (M-N)
3733 or a range with some stride (M-N:S). CPU numbers are zero based. For example,
3734 @code{GOMP_CPU_AFFINITY="0 3 1-2 4-15:2"} binds the initial thread
3735 to CPU 0, the second to CPU 3, the third to CPU 1, the fourth to
3736 CPU 2, the fifth to CPU 4, the sixth through tenth to CPUs 6, 8, 10, 12,
3737 and 14 respectively and then starts assigning back from the beginning of
3738 the list. @code{GOMP_CPU_AFFINITY=0} binds all threads to CPU 0.
3740 There is no libgomp library routine to determine whether a CPU affinity
3741 specification is in effect. As a workaround, language-specific library
3742 functions, e.g., @code{getenv} in C or @code{GET_ENVIRONMENT_VARIABLE} in
3743 Fortran, may be used to query the setting of the @code{GOMP_CPU_AFFINITY}
3744 environment variable. A defined CPU affinity on startup cannot be changed
3745 or disabled during the runtime of the application.
3747 If both @env{GOMP_CPU_AFFINITY} and @env{OMP_PROC_BIND} are set,
3748 @env{OMP_PROC_BIND} has a higher precedence. If neither has been set and
3749 @env{OMP_PROC_BIND} is unset, or when @env{OMP_PROC_BIND} is set to
3750 @code{FALSE}, the host system handles the assignment of threads to CPUs.
3752 @item @emph{See also}:
3753 @ref{OMP_PLACES}, @ref{OMP_PROC_BIND}
3759 @section @env{GOMP_DEBUG} -- Enable debugging output
3760 @cindex Environment Variable
3762 @item @emph{Description}:
3763 Enable debugging output. The variable should be set to @code{0}
3764 (disabled, also the default if not set), or @code{1} (enabled).
3766 If enabled, some debugging output is printed during execution.
3767 This is currently not specified in more detail, and subject to change.
3772 @node GOMP_STACKSIZE
3773 @section @env{GOMP_STACKSIZE} -- Set default thread stack size
3774 @cindex Environment Variable
3775 @cindex Implementation specific setting
3777 @item @emph{Description}:
3778 Set the default thread stack size in kilobytes. This is different from
3779 @code{pthread_attr_setstacksize} which gets the number of bytes as an
3780 argument. If the stack size cannot be set due to system constraints, an
3781 error is reported and the initial stack size is left unchanged. If undefined,
3782 the stack size is system dependent.
3784 @item @emph{See also}:
3787 @item @emph{Reference}:
3788 @uref{https://gcc.gnu.org/ml/gcc-patches/2006-06/msg00493.html,
3789 GCC Patches Mailinglist},
3790 @uref{https://gcc.gnu.org/ml/gcc-patches/2006-06/msg00496.html,
3791 GCC Patches Mailinglist}
3796 @node GOMP_SPINCOUNT
3797 @section @env{GOMP_SPINCOUNT} -- Set the busy-wait spin count
3798 @cindex Environment Variable
3799 @cindex Implementation specific setting
3801 @item @emph{Description}:
3802 Determines how long a threads waits actively with consuming CPU power
3803 before waiting passively without consuming CPU power. The value may be
3804 either @code{INFINITE}, @code{INFINITY} to always wait actively or an
3805 integer which gives the number of spins of the busy-wait loop. The
3806 integer may optionally be followed by the following suffixes acting
3807 as multiplication factors: @code{k} (kilo, thousand), @code{M} (mega,
3808 million), @code{G} (giga, billion), or @code{T} (tera, trillion).
3809 If undefined, 0 is used when @env{OMP_WAIT_POLICY} is @code{PASSIVE},
3810 300,000 is used when @env{OMP_WAIT_POLICY} is undefined and
3811 30 billion is used when @env{OMP_WAIT_POLICY} is @code{ACTIVE}.
3812 If there are more OpenMP threads than available CPUs, 1000 and 100
3813 spins are used for @env{OMP_WAIT_POLICY} being @code{ACTIVE} or
3814 undefined, respectively; unless the @env{GOMP_SPINCOUNT} is lower
3815 or @env{OMP_WAIT_POLICY} is @code{PASSIVE}.
3817 @item @emph{See also}:
3818 @ref{OMP_WAIT_POLICY}
3823 @node GOMP_RTEMS_THREAD_POOLS
3824 @section @env{GOMP_RTEMS_THREAD_POOLS} -- Set the RTEMS specific thread pools
3825 @cindex Environment Variable
3826 @cindex Implementation specific setting
3828 @item @emph{Description}:
3829 This environment variable is only used on the RTEMS real-time operating system.
3830 It determines the scheduler instance specific thread pools. The format for
3831 @env{GOMP_RTEMS_THREAD_POOLS} is a list of optional
3832 @code{<thread-pool-count>[$<priority>]@@<scheduler-name>} configurations
3833 separated by @code{:} where:
3835 @item @code{<thread-pool-count>} is the thread pool count for this scheduler
3837 @item @code{$<priority>} is an optional priority for the worker threads of a
3838 thread pool according to @code{pthread_setschedparam}. In case a priority
3839 value is omitted, then a worker thread inherits the priority of the OpenMP
3840 primary thread that created it. The priority of the worker thread is not
3841 changed after creation, even if a new OpenMP primary thread using the worker has
3842 a different priority.
3843 @item @code{@@<scheduler-name>} is the scheduler instance name according to the
3844 RTEMS application configuration.
3846 In case no thread pool configuration is specified for a scheduler instance,
3847 then each OpenMP primary thread of this scheduler instance uses its own
3848 dynamically allocated thread pool. To limit the worker thread count of the
3849 thread pools, each OpenMP primary thread must call @code{omp_set_num_threads}.
3850 @item @emph{Example}:
3851 Lets suppose we have three scheduler instances @code{IO}, @code{WRK0}, and
3852 @code{WRK1} with @env{GOMP_RTEMS_THREAD_POOLS} set to
3853 @code{"1@@WRK0:3$4@@WRK1"}. Then there are no thread pool restrictions for
3854 scheduler instance @code{IO}. In the scheduler instance @code{WRK0} there is
3855 one thread pool available. Since no priority is specified for this scheduler
3856 instance, the worker thread inherits the priority of the OpenMP primary thread
3857 that created it. In the scheduler instance @code{WRK1} there are three thread
3858 pools available and their worker threads run at priority four.
3863 @c ---------------------------------------------------------------------
3865 @c ---------------------------------------------------------------------
3867 @node Enabling OpenACC
3868 @chapter Enabling OpenACC
3870 To activate the OpenACC extensions for C/C++ and Fortran, the compile-time
3871 flag @option{-fopenacc} must be specified. This enables the OpenACC directive
3872 @samp{#pragma acc} in C/C++ and, in Fortran, the @samp{!$acc} sentinel in free
3873 source form and the @samp{c$acc}, @samp{*$acc} and @samp{!$acc} sentinels in
3874 fixed source form. The flag also arranges for automatic linking of the OpenACC
3875 runtime library (@ref{OpenACC Runtime Library Routines}).
3877 See @uref{https://gcc.gnu.org/wiki/OpenACC} for more information.
3879 A complete description of all OpenACC directives accepted may be found in
3880 the @uref{https://www.openacc.org, OpenACC} Application Programming
3881 Interface manual, version 2.6.
3885 @c ---------------------------------------------------------------------
3886 @c OpenACC Runtime Library Routines
3887 @c ---------------------------------------------------------------------
3889 @node OpenACC Runtime Library Routines
3890 @chapter OpenACC Runtime Library Routines
3892 The runtime routines described here are defined by section 3 of the OpenACC
3893 specifications in version 2.6.
3894 They have C linkage, and do not throw exceptions.
3895 Generally, they are available only for the host, with the exception of
3896 @code{acc_on_device}, which is available for both the host and the
3897 acceleration device.
3900 * acc_get_num_devices:: Get number of devices for the given device
3902 * acc_set_device_type:: Set type of device accelerator to use.
3903 * acc_get_device_type:: Get type of device accelerator to be used.
3904 * acc_set_device_num:: Set device number to use.
3905 * acc_get_device_num:: Get device number to be used.
3906 * acc_get_property:: Get device property.
3907 * acc_async_test:: Tests for completion of a specific asynchronous
3909 * acc_async_test_all:: Tests for completion of all asynchronous
3911 * acc_wait:: Wait for completion of a specific asynchronous
3913 * acc_wait_all:: Waits for completion of all asynchronous
3915 * acc_wait_all_async:: Wait for completion of all asynchronous
3917 * acc_wait_async:: Wait for completion of asynchronous operations.
3918 * acc_init:: Initialize runtime for a specific device type.
3919 * acc_shutdown:: Shuts down the runtime for a specific device
3921 * acc_on_device:: Whether executing on a particular device
3922 * acc_malloc:: Allocate device memory.
3923 * acc_free:: Free device memory.
3924 * acc_copyin:: Allocate device memory and copy host memory to
3926 * acc_present_or_copyin:: If the data is not present on the device,
3927 allocate device memory and copy from host
3929 * acc_create:: Allocate device memory and map it to host
3931 * acc_present_or_create:: If the data is not present on the device,
3932 allocate device memory and map it to host
3934 * acc_copyout:: Copy device memory to host memory.
3935 * acc_delete:: Free device memory.
3936 * acc_update_device:: Update device memory from mapped host memory.
3937 * acc_update_self:: Update host memory from mapped device memory.
3938 * acc_map_data:: Map previously allocated device memory to host
3940 * acc_unmap_data:: Unmap device memory from host memory.
3941 * acc_deviceptr:: Get device pointer associated with specific
3943 * acc_hostptr:: Get host pointer associated with specific
3945 * acc_is_present:: Indicate whether host variable / array is
3947 * acc_memcpy_to_device:: Copy host memory to device memory.
3948 * acc_memcpy_from_device:: Copy device memory to host memory.
3949 * acc_attach:: Let device pointer point to device-pointer target.
3950 * acc_detach:: Let device pointer point to host-pointer target.
3952 API routines for target platforms.
3954 * acc_get_current_cuda_device:: Get CUDA device handle.
3955 * acc_get_current_cuda_context::Get CUDA context handle.
3956 * acc_get_cuda_stream:: Get CUDA stream handle.
3957 * acc_set_cuda_stream:: Set CUDA stream handle.
3959 API routines for the OpenACC Profiling Interface.
3961 * acc_prof_register:: Register callbacks.
3962 * acc_prof_unregister:: Unregister callbacks.
3963 * acc_prof_lookup:: Obtain inquiry functions.
3964 * acc_register_library:: Library registration.
3969 @node acc_get_num_devices
3970 @section @code{acc_get_num_devices} -- Get number of devices for given device type
3972 @item @emph{Description}
3973 This function returns a value indicating the number of devices available
3974 for the device type specified in @var{devicetype}.
3977 @multitable @columnfractions .20 .80
3978 @item @emph{Prototype}: @tab @code{int acc_get_num_devices(acc_device_t devicetype);}
3981 @item @emph{Fortran}:
3982 @multitable @columnfractions .20 .80
3983 @item @emph{Interface}: @tab @code{integer function acc_get_num_devices(devicetype)}
3984 @item @tab @code{integer(kind=acc_device_kind) devicetype}
3987 @item @emph{Reference}:
3988 @uref{https://www.openacc.org, OpenACC specification v2.6}, section
3994 @node acc_set_device_type
3995 @section @code{acc_set_device_type} -- Set type of device accelerator to use.
3997 @item @emph{Description}
3998 This function indicates to the runtime library which device type, specified
3999 in @var{devicetype}, to use when executing a parallel or kernels region.
4002 @multitable @columnfractions .20 .80
4003 @item @emph{Prototype}: @tab @code{acc_set_device_type(acc_device_t devicetype);}
4006 @item @emph{Fortran}:
4007 @multitable @columnfractions .20 .80
4008 @item @emph{Interface}: @tab @code{subroutine acc_set_device_type(devicetype)}
4009 @item @tab @code{integer(kind=acc_device_kind) devicetype}
4012 @item @emph{Reference}:
4013 @uref{https://www.openacc.org, OpenACC specification v2.6}, section
4019 @node acc_get_device_type
4020 @section @code{acc_get_device_type} -- Get type of device accelerator to be used.
4022 @item @emph{Description}
4023 This function returns what device type will be used when executing a
4024 parallel or kernels region.
4026 This function returns @code{acc_device_none} if
4027 @code{acc_get_device_type} is called from
4028 @code{acc_ev_device_init_start}, @code{acc_ev_device_init_end}
4029 callbacks of the OpenACC Profiling Interface (@ref{OpenACC Profiling
4030 Interface}), that is, if the device is currently being initialized.
4033 @multitable @columnfractions .20 .80
4034 @item @emph{Prototype}: @tab @code{acc_device_t acc_get_device_type(void);}
4037 @item @emph{Fortran}:
4038 @multitable @columnfractions .20 .80
4039 @item @emph{Interface}: @tab @code{function acc_get_device_type(void)}
4040 @item @tab @code{integer(kind=acc_device_kind) acc_get_device_type}
4043 @item @emph{Reference}:
4044 @uref{https://www.openacc.org, OpenACC specification v2.6}, section
4050 @node acc_set_device_num
4051 @section @code{acc_set_device_num} -- Set device number to use.
4053 @item @emph{Description}
4054 This function will indicate to the runtime which device number,
4055 specified by @var{devicenum}, associated with the specified device
4056 type @var{devicetype}.
4059 @multitable @columnfractions .20 .80
4060 @item @emph{Prototype}: @tab @code{acc_set_device_num(int devicenum, acc_device_t devicetype);}
4063 @item @emph{Fortran}:
4064 @multitable @columnfractions .20 .80
4065 @item @emph{Interface}: @tab @code{subroutine acc_set_device_num(devicenum, devicetype)}
4066 @item @tab @code{integer devicenum}
4067 @item @tab @code{integer(kind=acc_device_kind) devicetype}
4070 @item @emph{Reference}:
4071 @uref{https://www.openacc.org, OpenACC specification v2.6}, section
4077 @node acc_get_device_num
4078 @section @code{acc_get_device_num} -- Get device number to be used.
4080 @item @emph{Description}
4081 This function returns which device number associated with the specified device
4082 type @var{devicetype}, will be used when executing a parallel or kernels
4086 @multitable @columnfractions .20 .80
4087 @item @emph{Prototype}: @tab @code{int acc_get_device_num(acc_device_t devicetype);}
4090 @item @emph{Fortran}:
4091 @multitable @columnfractions .20 .80
4092 @item @emph{Interface}: @tab @code{function acc_get_device_num(devicetype)}
4093 @item @tab @code{integer(kind=acc_device_kind) devicetype}
4094 @item @tab @code{integer acc_get_device_num}
4097 @item @emph{Reference}:
4098 @uref{https://www.openacc.org, OpenACC specification v2.6}, section
4104 @node acc_get_property
4105 @section @code{acc_get_property} -- Get device property.
4106 @cindex acc_get_property
4107 @cindex acc_get_property_string
4109 @item @emph{Description}
4110 These routines return the value of the specified @var{property} for the
4111 device being queried according to @var{devicenum} and @var{devicetype}.
4112 Integer-valued and string-valued properties are returned by
4113 @code{acc_get_property} and @code{acc_get_property_string} respectively.
4114 The Fortran @code{acc_get_property_string} subroutine returns the string
4115 retrieved in its fourth argument while the remaining entry points are
4116 functions, which pass the return value as their result.
4118 Note for Fortran, only: the OpenACC technical committee corrected and, hence,
4119 modified the interface introduced in OpenACC 2.6. The kind-value parameter
4120 @code{acc_device_property} has been renamed to @code{acc_device_property_kind}
4121 for consistency and the return type of the @code{acc_get_property} function is
4122 now a @code{c_size_t} integer instead of a @code{acc_device_property} integer.
4123 The parameter @code{acc_device_property} is still provided,
4124 but might be removed in a future version of GCC.
4127 @multitable @columnfractions .20 .80
4128 @item @emph{Prototype}: @tab @code{size_t acc_get_property(int devicenum, acc_device_t devicetype, acc_device_property_t property);}
4129 @item @emph{Prototype}: @tab @code{const char *acc_get_property_string(int devicenum, acc_device_t devicetype, acc_device_property_t property);}
4132 @item @emph{Fortran}:
4133 @multitable @columnfractions .20 .80
4134 @item @emph{Interface}: @tab @code{function acc_get_property(devicenum, devicetype, property)}
4135 @item @emph{Interface}: @tab @code{subroutine acc_get_property_string(devicenum, devicetype, property, string)}
4136 @item @tab @code{use ISO_C_Binding, only: c_size_t}
4137 @item @tab @code{integer devicenum}
4138 @item @tab @code{integer(kind=acc_device_kind) devicetype}
4139 @item @tab @code{integer(kind=acc_device_property_kind) property}
4140 @item @tab @code{integer(kind=c_size_t) acc_get_property}
4141 @item @tab @code{character(*) string}
4144 @item @emph{Reference}:
4145 @uref{https://www.openacc.org, OpenACC specification v2.6}, section
4151 @node acc_async_test
4152 @section @code{acc_async_test} -- Test for completion of a specific asynchronous operation.
4154 @item @emph{Description}
4155 This function tests for completion of the asynchronous operation specified
4156 in @var{arg}. In C/C++, a non-zero value is returned to indicate
4157 the specified asynchronous operation has completed while Fortran returns
4158 @code{true}. If the asynchronous operation has not completed, C/C++ returns
4159 zero and Fortran returns @code{false}.
4162 @multitable @columnfractions .20 .80
4163 @item @emph{Prototype}: @tab @code{int acc_async_test(int arg);}
4166 @item @emph{Fortran}:
4167 @multitable @columnfractions .20 .80
4168 @item @emph{Interface}: @tab @code{function acc_async_test(arg)}
4169 @item @tab @code{integer(kind=acc_handle_kind) arg}
4170 @item @tab @code{logical acc_async_test}
4173 @item @emph{Reference}:
4174 @uref{https://www.openacc.org, OpenACC specification v2.6}, section
4180 @node acc_async_test_all
4181 @section @code{acc_async_test_all} -- Tests for completion of all asynchronous operations.
4183 @item @emph{Description}
4184 This function tests for completion of all asynchronous operations.
4185 In C/C++, a non-zero value is returned to indicate all asynchronous
4186 operations have completed while Fortran returns @code{true}. If
4187 any asynchronous operation has not completed, C/C++ returns zero and
4188 Fortran returns @code{false}.
4191 @multitable @columnfractions .20 .80
4192 @item @emph{Prototype}: @tab @code{int acc_async_test_all(void);}
4195 @item @emph{Fortran}:
4196 @multitable @columnfractions .20 .80
4197 @item @emph{Interface}: @tab @code{function acc_async_test()}
4198 @item @tab @code{logical acc_get_device_num}
4201 @item @emph{Reference}:
4202 @uref{https://www.openacc.org, OpenACC specification v2.6}, section
4209 @section @code{acc_wait} -- Wait for completion of a specific asynchronous operation.
4211 @item @emph{Description}
4212 This function waits for completion of the asynchronous operation
4213 specified in @var{arg}.
4216 @multitable @columnfractions .20 .80
4217 @item @emph{Prototype}: @tab @code{acc_wait(arg);}
4218 @item @emph{Prototype (OpenACC 1.0 compatibility)}: @tab @code{acc_async_wait(arg);}
4221 @item @emph{Fortran}:
4222 @multitable @columnfractions .20 .80
4223 @item @emph{Interface}: @tab @code{subroutine acc_wait(arg)}
4224 @item @tab @code{integer(acc_handle_kind) arg}
4225 @item @emph{Interface (OpenACC 1.0 compatibility)}: @tab @code{subroutine acc_async_wait(arg)}
4226 @item @tab @code{integer(acc_handle_kind) arg}
4229 @item @emph{Reference}:
4230 @uref{https://www.openacc.org, OpenACC specification v2.6}, section
4237 @section @code{acc_wait_all} -- Waits for completion of all asynchronous operations.
4239 @item @emph{Description}
4240 This function waits for the completion of all asynchronous operations.
4243 @multitable @columnfractions .20 .80
4244 @item @emph{Prototype}: @tab @code{acc_wait_all(void);}
4245 @item @emph{Prototype (OpenACC 1.0 compatibility)}: @tab @code{acc_async_wait_all(void);}
4248 @item @emph{Fortran}:
4249 @multitable @columnfractions .20 .80
4250 @item @emph{Interface}: @tab @code{subroutine acc_wait_all()}
4251 @item @emph{Interface (OpenACC 1.0 compatibility)}: @tab @code{subroutine acc_async_wait_all()}
4254 @item @emph{Reference}:
4255 @uref{https://www.openacc.org, OpenACC specification v2.6}, section
4261 @node acc_wait_all_async
4262 @section @code{acc_wait_all_async} -- Wait for completion of all asynchronous operations.
4264 @item @emph{Description}
4265 This function enqueues a wait operation on the queue @var{async} for any
4266 and all asynchronous operations that have been previously enqueued on
4270 @multitable @columnfractions .20 .80
4271 @item @emph{Prototype}: @tab @code{acc_wait_all_async(int async);}
4274 @item @emph{Fortran}:
4275 @multitable @columnfractions .20 .80
4276 @item @emph{Interface}: @tab @code{subroutine acc_wait_all_async(async)}
4277 @item @tab @code{integer(acc_handle_kind) async}
4280 @item @emph{Reference}:
4281 @uref{https://www.openacc.org, OpenACC specification v2.6}, section
4287 @node acc_wait_async
4288 @section @code{acc_wait_async} -- Wait for completion of asynchronous operations.
4290 @item @emph{Description}
4291 This function enqueues a wait operation on queue @var{async} for any and all
4292 asynchronous operations enqueued on queue @var{arg}.
4295 @multitable @columnfractions .20 .80
4296 @item @emph{Prototype}: @tab @code{acc_wait_async(int arg, int async);}
4299 @item @emph{Fortran}:
4300 @multitable @columnfractions .20 .80
4301 @item @emph{Interface}: @tab @code{subroutine acc_wait_async(arg, async)}
4302 @item @tab @code{integer(acc_handle_kind) arg, async}
4305 @item @emph{Reference}:
4306 @uref{https://www.openacc.org, OpenACC specification v2.6}, section
4313 @section @code{acc_init} -- Initialize runtime for a specific device type.
4315 @item @emph{Description}
4316 This function initializes the runtime for the device type specified in
4320 @multitable @columnfractions .20 .80
4321 @item @emph{Prototype}: @tab @code{acc_init(acc_device_t devicetype);}
4324 @item @emph{Fortran}:
4325 @multitable @columnfractions .20 .80
4326 @item @emph{Interface}: @tab @code{subroutine acc_init(devicetype)}
4327 @item @tab @code{integer(acc_device_kind) devicetype}
4330 @item @emph{Reference}:
4331 @uref{https://www.openacc.org, OpenACC specification v2.6}, section
4338 @section @code{acc_shutdown} -- Shuts down the runtime for a specific device type.
4340 @item @emph{Description}
4341 This function shuts down the runtime for the device type specified in
4345 @multitable @columnfractions .20 .80
4346 @item @emph{Prototype}: @tab @code{acc_shutdown(acc_device_t devicetype);}
4349 @item @emph{Fortran}:
4350 @multitable @columnfractions .20 .80
4351 @item @emph{Interface}: @tab @code{subroutine acc_shutdown(devicetype)}
4352 @item @tab @code{integer(acc_device_kind) devicetype}
4355 @item @emph{Reference}:
4356 @uref{https://www.openacc.org, OpenACC specification v2.6}, section
4363 @section @code{acc_on_device} -- Whether executing on a particular device
4365 @item @emph{Description}:
4366 This function returns whether the program is executing on a particular
4367 device specified in @var{devicetype}. In C/C++ a non-zero value is
4368 returned to indicate the device is executing on the specified device type.
4369 In Fortran, @code{true} is returned. If the program is not executing
4370 on the specified device type C/C++ returns zero, while Fortran
4371 returns @code{false}.
4374 @multitable @columnfractions .20 .80
4375 @item @emph{Prototype}: @tab @code{acc_on_device(acc_device_t devicetype);}
4378 @item @emph{Fortran}:
4379 @multitable @columnfractions .20 .80
4380 @item @emph{Interface}: @tab @code{function acc_on_device(devicetype)}
4381 @item @tab @code{integer(acc_device_kind) devicetype}
4382 @item @tab @code{logical acc_on_device}
4386 @item @emph{Reference}:
4387 @uref{https://www.openacc.org, OpenACC specification v2.6}, section
4394 @section @code{acc_malloc} -- Allocate device memory.
4396 @item @emph{Description}
4397 This function allocates @var{len} bytes of device memory. It returns
4398 the device address of the allocated memory.
4401 @multitable @columnfractions .20 .80
4402 @item @emph{Prototype}: @tab @code{d_void* acc_malloc(size_t len);}
4405 @item @emph{Reference}:
4406 @uref{https://www.openacc.org, OpenACC specification v2.6}, section
4413 @section @code{acc_free} -- Free device memory.
4415 @item @emph{Description}
4416 Free previously allocated device memory at the device address @code{a}.
4419 @multitable @columnfractions .20 .80
4420 @item @emph{Prototype}: @tab @code{acc_free(d_void *a);}
4423 @item @emph{Reference}:
4424 @uref{https://www.openacc.org, OpenACC specification v2.6}, section
4431 @section @code{acc_copyin} -- Allocate device memory and copy host memory to it.
4433 @item @emph{Description}
4434 In C/C++, this function allocates @var{len} bytes of device memory
4435 and maps it to the specified host address in @var{a}. The device
4436 address of the newly allocated device memory is returned.
4438 In Fortran, two (2) forms are supported. In the first form, @var{a} specifies
4439 a contiguous array section. The second form @var{a} specifies a
4440 variable or array element and @var{len} specifies the length in bytes.
4443 @multitable @columnfractions .20 .80
4444 @item @emph{Prototype}: @tab @code{void *acc_copyin(h_void *a, size_t len);}
4445 @item @emph{Prototype}: @tab @code{void *acc_copyin_async(h_void *a, size_t len, int async);}
4448 @item @emph{Fortran}:
4449 @multitable @columnfractions .20 .80
4450 @item @emph{Interface}: @tab @code{subroutine acc_copyin(a)}
4451 @item @tab @code{type, dimension(:[,:]...) :: a}
4452 @item @emph{Interface}: @tab @code{subroutine acc_copyin(a, len)}
4453 @item @tab @code{type, dimension(:[,:]...) :: a}
4454 @item @tab @code{integer len}
4455 @item @emph{Interface}: @tab @code{subroutine acc_copyin_async(a, async)}
4456 @item @tab @code{type, dimension(:[,:]...) :: a}
4457 @item @tab @code{integer(acc_handle_kind) :: async}
4458 @item @emph{Interface}: @tab @code{subroutine acc_copyin_async(a, len, async)}
4459 @item @tab @code{type, dimension(:[,:]...) :: a}
4460 @item @tab @code{integer len}
4461 @item @tab @code{integer(acc_handle_kind) :: async}
4464 @item @emph{Reference}:
4465 @uref{https://www.openacc.org, OpenACC specification v2.6}, section
4471 @node acc_present_or_copyin
4472 @section @code{acc_present_or_copyin} -- If the data is not present on the device, allocate device memory and copy from host memory.
4474 @item @emph{Description}
4475 This function tests if the host data specified by @var{a} and of length
4476 @var{len} is present or not. If it is not present, device memory
4477 is allocated and the host memory copied. The device address of
4478 the newly allocated device memory is returned.
4480 In Fortran, two (2) forms are supported. In the first form, @var{a} specifies
4481 a contiguous array section. The second form @var{a} specifies a variable or
4482 array element and @var{len} specifies the length in bytes.
4484 Note that @code{acc_present_or_copyin} and @code{acc_pcopyin} exist for
4485 backward compatibility with OpenACC 2.0; use @ref{acc_copyin} instead.
4488 @multitable @columnfractions .20 .80
4489 @item @emph{Prototype}: @tab @code{void *acc_present_or_copyin(h_void *a, size_t len);}
4490 @item @emph{Prototype}: @tab @code{void *acc_pcopyin(h_void *a, size_t len);}
4493 @item @emph{Fortran}:
4494 @multitable @columnfractions .20 .80
4495 @item @emph{Interface}: @tab @code{subroutine acc_present_or_copyin(a)}
4496 @item @tab @code{type, dimension(:[,:]...) :: a}
4497 @item @emph{Interface}: @tab @code{subroutine acc_present_or_copyin(a, len)}
4498 @item @tab @code{type, dimension(:[,:]...) :: a}
4499 @item @tab @code{integer len}
4500 @item @emph{Interface}: @tab @code{subroutine acc_pcopyin(a)}
4501 @item @tab @code{type, dimension(:[,:]...) :: a}
4502 @item @emph{Interface}: @tab @code{subroutine acc_pcopyin(a, len)}
4503 @item @tab @code{type, dimension(:[,:]...) :: a}
4504 @item @tab @code{integer len}
4507 @item @emph{Reference}:
4508 @uref{https://www.openacc.org, OpenACC specification v2.6}, section
4515 @section @code{acc_create} -- Allocate device memory and map it to host memory.
4517 @item @emph{Description}
4518 This function allocates device memory and maps it to host memory specified
4519 by the host address @var{a} with a length of @var{len} bytes. In C/C++,
4520 the function returns the device address of the allocated device memory.
4522 In Fortran, two (2) forms are supported. In the first form, @var{a} specifies
4523 a contiguous array section. The second form @var{a} specifies a variable or
4524 array element and @var{len} specifies the length in bytes.
4527 @multitable @columnfractions .20 .80
4528 @item @emph{Prototype}: @tab @code{void *acc_create(h_void *a, size_t len);}
4529 @item @emph{Prototype}: @tab @code{void *acc_create_async(h_void *a, size_t len, int async);}
4532 @item @emph{Fortran}:
4533 @multitable @columnfractions .20 .80
4534 @item @emph{Interface}: @tab @code{subroutine acc_create(a)}
4535 @item @tab @code{type, dimension(:[,:]...) :: a}
4536 @item @emph{Interface}: @tab @code{subroutine acc_create(a, len)}
4537 @item @tab @code{type, dimension(:[,:]...) :: a}
4538 @item @tab @code{integer len}
4539 @item @emph{Interface}: @tab @code{subroutine acc_create_async(a, async)}
4540 @item @tab @code{type, dimension(:[,:]...) :: a}
4541 @item @tab @code{integer(acc_handle_kind) :: async}
4542 @item @emph{Interface}: @tab @code{subroutine acc_create_async(a, len, async)}
4543 @item @tab @code{type, dimension(:[,:]...) :: a}
4544 @item @tab @code{integer len}
4545 @item @tab @code{integer(acc_handle_kind) :: async}
4548 @item @emph{Reference}:
4549 @uref{https://www.openacc.org, OpenACC specification v2.6}, section
4555 @node acc_present_or_create
4556 @section @code{acc_present_or_create} -- If the data is not present on the device, allocate device memory and map it to host memory.
4558 @item @emph{Description}
4559 This function tests if the host data specified by @var{a} and of length
4560 @var{len} is present or not. If it is not present, device memory
4561 is allocated and mapped to host memory. In C/C++, the device address
4562 of the newly allocated device memory is returned.
4564 In Fortran, two (2) forms are supported. In the first form, @var{a} specifies
4565 a contiguous array section. The second form @var{a} specifies a variable or
4566 array element and @var{len} specifies the length in bytes.
4568 Note that @code{acc_present_or_create} and @code{acc_pcreate} exist for
4569 backward compatibility with OpenACC 2.0; use @ref{acc_create} instead.
4572 @multitable @columnfractions .20 .80
4573 @item @emph{Prototype}: @tab @code{void *acc_present_or_create(h_void *a, size_t len)}
4574 @item @emph{Prototype}: @tab @code{void *acc_pcreate(h_void *a, size_t len)}
4577 @item @emph{Fortran}:
4578 @multitable @columnfractions .20 .80
4579 @item @emph{Interface}: @tab @code{subroutine acc_present_or_create(a)}
4580 @item @tab @code{type, dimension(:[,:]...) :: a}
4581 @item @emph{Interface}: @tab @code{subroutine acc_present_or_create(a, len)}
4582 @item @tab @code{type, dimension(:[,:]...) :: a}
4583 @item @tab @code{integer len}
4584 @item @emph{Interface}: @tab @code{subroutine acc_pcreate(a)}
4585 @item @tab @code{type, dimension(:[,:]...) :: a}
4586 @item @emph{Interface}: @tab @code{subroutine acc_pcreate(a, len)}
4587 @item @tab @code{type, dimension(:[,:]...) :: a}
4588 @item @tab @code{integer len}
4591 @item @emph{Reference}:
4592 @uref{https://www.openacc.org, OpenACC specification v2.6}, section
4599 @section @code{acc_copyout} -- Copy device memory to host memory.
4601 @item @emph{Description}
4602 This function copies mapped device memory to host memory which is specified
4603 by host address @var{a} for a length @var{len} bytes in C/C++.
4605 In Fortran, two (2) forms are supported. In the first form, @var{a} specifies
4606 a contiguous array section. The second form @var{a} specifies a variable or
4607 array element and @var{len} specifies the length in bytes.
4610 @multitable @columnfractions .20 .80
4611 @item @emph{Prototype}: @tab @code{acc_copyout(h_void *a, size_t len);}
4612 @item @emph{Prototype}: @tab @code{acc_copyout_async(h_void *a, size_t len, int async);}
4613 @item @emph{Prototype}: @tab @code{acc_copyout_finalize(h_void *a, size_t len);}
4614 @item @emph{Prototype}: @tab @code{acc_copyout_finalize_async(h_void *a, size_t len, int async);}
4617 @item @emph{Fortran}:
4618 @multitable @columnfractions .20 .80
4619 @item @emph{Interface}: @tab @code{subroutine acc_copyout(a)}
4620 @item @tab @code{type, dimension(:[,:]...) :: a}
4621 @item @emph{Interface}: @tab @code{subroutine acc_copyout(a, len)}
4622 @item @tab @code{type, dimension(:[,:]...) :: a}
4623 @item @tab @code{integer len}
4624 @item @emph{Interface}: @tab @code{subroutine acc_copyout_async(a, async)}
4625 @item @tab @code{type, dimension(:[,:]...) :: a}
4626 @item @tab @code{integer(acc_handle_kind) :: async}
4627 @item @emph{Interface}: @tab @code{subroutine acc_copyout_async(a, len, async)}
4628 @item @tab @code{type, dimension(:[,:]...) :: a}
4629 @item @tab @code{integer len}
4630 @item @tab @code{integer(acc_handle_kind) :: async}
4631 @item @emph{Interface}: @tab @code{subroutine acc_copyout_finalize(a)}
4632 @item @tab @code{type, dimension(:[,:]...) :: a}
4633 @item @emph{Interface}: @tab @code{subroutine acc_copyout_finalize(a, len)}
4634 @item @tab @code{type, dimension(:[,:]...) :: a}
4635 @item @tab @code{integer len}
4636 @item @emph{Interface}: @tab @code{subroutine acc_copyout_finalize_async(a, async)}
4637 @item @tab @code{type, dimension(:[,:]...) :: a}
4638 @item @tab @code{integer(acc_handle_kind) :: async}
4639 @item @emph{Interface}: @tab @code{subroutine acc_copyout_finalize_async(a, len, async)}
4640 @item @tab @code{type, dimension(:[,:]...) :: a}
4641 @item @tab @code{integer len}
4642 @item @tab @code{integer(acc_handle_kind) :: async}
4645 @item @emph{Reference}:
4646 @uref{https://www.openacc.org, OpenACC specification v2.6}, section
4653 @section @code{acc_delete} -- Free device memory.
4655 @item @emph{Description}
4656 This function frees previously allocated device memory specified by
4657 the device address @var{a} and the length of @var{len} bytes.
4659 In Fortran, two (2) forms are supported. In the first form, @var{a} specifies
4660 a contiguous array section. The second form @var{a} specifies a variable or
4661 array element and @var{len} specifies the length in bytes.
4664 @multitable @columnfractions .20 .80
4665 @item @emph{Prototype}: @tab @code{acc_delete(h_void *a, size_t len);}
4666 @item @emph{Prototype}: @tab @code{acc_delete_async(h_void *a, size_t len, int async);}
4667 @item @emph{Prototype}: @tab @code{acc_delete_finalize(h_void *a, size_t len);}
4668 @item @emph{Prototype}: @tab @code{acc_delete_finalize_async(h_void *a, size_t len, int async);}
4671 @item @emph{Fortran}:
4672 @multitable @columnfractions .20 .80
4673 @item @emph{Interface}: @tab @code{subroutine acc_delete(a)}
4674 @item @tab @code{type, dimension(:[,:]...) :: a}
4675 @item @emph{Interface}: @tab @code{subroutine acc_delete(a, len)}
4676 @item @tab @code{type, dimension(:[,:]...) :: a}
4677 @item @tab @code{integer len}
4678 @item @emph{Interface}: @tab @code{subroutine acc_delete_async(a, async)}
4679 @item @tab @code{type, dimension(:[,:]...) :: a}
4680 @item @tab @code{integer(acc_handle_kind) :: async}
4681 @item @emph{Interface}: @tab @code{subroutine acc_delete_async(a, len, async)}
4682 @item @tab @code{type, dimension(:[,:]...) :: a}
4683 @item @tab @code{integer len}
4684 @item @tab @code{integer(acc_handle_kind) :: async}
4685 @item @emph{Interface}: @tab @code{subroutine acc_delete_finalize(a)}
4686 @item @tab @code{type, dimension(:[,:]...) :: a}
4687 @item @emph{Interface}: @tab @code{subroutine acc_delete_finalize(a, len)}
4688 @item @tab @code{type, dimension(:[,:]...) :: a}
4689 @item @tab @code{integer len}
4690 @item @emph{Interface}: @tab @code{subroutine acc_delete_async_finalize(a, async)}
4691 @item @tab @code{type, dimension(:[,:]...) :: a}
4692 @item @tab @code{integer(acc_handle_kind) :: async}
4693 @item @emph{Interface}: @tab @code{subroutine acc_delete_async_finalize(a, len, async)}
4694 @item @tab @code{type, dimension(:[,:]...) :: a}
4695 @item @tab @code{integer len}
4696 @item @tab @code{integer(acc_handle_kind) :: async}
4699 @item @emph{Reference}:
4700 @uref{https://www.openacc.org, OpenACC specification v2.6}, section
4706 @node acc_update_device
4707 @section @code{acc_update_device} -- Update device memory from mapped host memory.
4709 @item @emph{Description}
4710 This function updates the device copy from the previously mapped host memory.
4711 The host memory is specified with the host address @var{a} and a length of
4714 In Fortran, two (2) forms are supported. In the first form, @var{a} specifies
4715 a contiguous array section. The second form @var{a} specifies a variable or
4716 array element and @var{len} specifies the length in bytes.
4719 @multitable @columnfractions .20 .80
4720 @item @emph{Prototype}: @tab @code{acc_update_device(h_void *a, size_t len);}
4721 @item @emph{Prototype}: @tab @code{acc_update_device(h_void *a, size_t len, async);}
4724 @item @emph{Fortran}:
4725 @multitable @columnfractions .20 .80
4726 @item @emph{Interface}: @tab @code{subroutine acc_update_device(a)}
4727 @item @tab @code{type, dimension(:[,:]...) :: a}
4728 @item @emph{Interface}: @tab @code{subroutine acc_update_device(a, len)}
4729 @item @tab @code{type, dimension(:[,:]...) :: a}
4730 @item @tab @code{integer len}
4731 @item @emph{Interface}: @tab @code{subroutine acc_update_device_async(a, async)}
4732 @item @tab @code{type, dimension(:[,:]...) :: a}
4733 @item @tab @code{integer(acc_handle_kind) :: async}
4734 @item @emph{Interface}: @tab @code{subroutine acc_update_device_async(a, len, async)}
4735 @item @tab @code{type, dimension(:[,:]...) :: a}
4736 @item @tab @code{integer len}
4737 @item @tab @code{integer(acc_handle_kind) :: async}
4740 @item @emph{Reference}:
4741 @uref{https://www.openacc.org, OpenACC specification v2.6}, section
4747 @node acc_update_self
4748 @section @code{acc_update_self} -- Update host memory from mapped device memory.
4750 @item @emph{Description}
4751 This function updates the host copy from the previously mapped device memory.
4752 The host memory is specified with the host address @var{a} and a length of
4755 In Fortran, two (2) forms are supported. In the first form, @var{a} specifies
4756 a contiguous array section. The second form @var{a} specifies a variable or
4757 array element and @var{len} specifies the length in bytes.
4760 @multitable @columnfractions .20 .80
4761 @item @emph{Prototype}: @tab @code{acc_update_self(h_void *a, size_t len);}
4762 @item @emph{Prototype}: @tab @code{acc_update_self_async(h_void *a, size_t len, int async);}
4765 @item @emph{Fortran}:
4766 @multitable @columnfractions .20 .80
4767 @item @emph{Interface}: @tab @code{subroutine acc_update_self(a)}
4768 @item @tab @code{type, dimension(:[,:]...) :: a}
4769 @item @emph{Interface}: @tab @code{subroutine acc_update_self(a, len)}
4770 @item @tab @code{type, dimension(:[,:]...) :: a}
4771 @item @tab @code{integer len}
4772 @item @emph{Interface}: @tab @code{subroutine acc_update_self_async(a, async)}
4773 @item @tab @code{type, dimension(:[,:]...) :: a}
4774 @item @tab @code{integer(acc_handle_kind) :: async}
4775 @item @emph{Interface}: @tab @code{subroutine acc_update_self_async(a, len, async)}
4776 @item @tab @code{type, dimension(:[,:]...) :: a}
4777 @item @tab @code{integer len}
4778 @item @tab @code{integer(acc_handle_kind) :: async}
4781 @item @emph{Reference}:
4782 @uref{https://www.openacc.org, OpenACC specification v2.6}, section
4789 @section @code{acc_map_data} -- Map previously allocated device memory to host memory.
4791 @item @emph{Description}
4792 This function maps previously allocated device and host memory. The device
4793 memory is specified with the device address @var{d}. The host memory is
4794 specified with the host address @var{h} and a length of @var{len}.
4797 @multitable @columnfractions .20 .80
4798 @item @emph{Prototype}: @tab @code{acc_map_data(h_void *h, d_void *d, size_t len);}
4801 @item @emph{Reference}:
4802 @uref{https://www.openacc.org, OpenACC specification v2.6}, section
4808 @node acc_unmap_data
4809 @section @code{acc_unmap_data} -- Unmap device memory from host memory.
4811 @item @emph{Description}
4812 This function unmaps previously mapped device and host memory. The latter
4813 specified by @var{h}.
4816 @multitable @columnfractions .20 .80
4817 @item @emph{Prototype}: @tab @code{acc_unmap_data(h_void *h);}
4820 @item @emph{Reference}:
4821 @uref{https://www.openacc.org, OpenACC specification v2.6}, section
4828 @section @code{acc_deviceptr} -- Get device pointer associated with specific host address.
4830 @item @emph{Description}
4831 This function returns the device address that has been mapped to the
4832 host address specified by @var{h}.
4835 @multitable @columnfractions .20 .80
4836 @item @emph{Prototype}: @tab @code{void *acc_deviceptr(h_void *h);}
4839 @item @emph{Reference}:
4840 @uref{https://www.openacc.org, OpenACC specification v2.6}, section
4847 @section @code{acc_hostptr} -- Get host pointer associated with specific device address.
4849 @item @emph{Description}
4850 This function returns the host address that has been mapped to the
4851 device address specified by @var{d}.
4854 @multitable @columnfractions .20 .80
4855 @item @emph{Prototype}: @tab @code{void *acc_hostptr(d_void *d);}
4858 @item @emph{Reference}:
4859 @uref{https://www.openacc.org, OpenACC specification v2.6}, section
4865 @node acc_is_present
4866 @section @code{acc_is_present} -- Indicate whether host variable / array is present on device.
4868 @item @emph{Description}
4869 This function indicates whether the specified host address in @var{a} and a
4870 length of @var{len} bytes is present on the device. In C/C++, a non-zero
4871 value is returned to indicate the presence of the mapped memory on the
4872 device. A zero is returned to indicate the memory is not mapped on the
4875 In Fortran, two (2) forms are supported. In the first form, @var{a} specifies
4876 a contiguous array section. The second form @var{a} specifies a variable or
4877 array element and @var{len} specifies the length in bytes. If the host
4878 memory is mapped to device memory, then a @code{true} is returned. Otherwise,
4879 a @code{false} is return to indicate the mapped memory is not present.
4882 @multitable @columnfractions .20 .80
4883 @item @emph{Prototype}: @tab @code{int acc_is_present(h_void *a, size_t len);}
4886 @item @emph{Fortran}:
4887 @multitable @columnfractions .20 .80
4888 @item @emph{Interface}: @tab @code{function acc_is_present(a)}
4889 @item @tab @code{type, dimension(:[,:]...) :: a}
4890 @item @tab @code{logical acc_is_present}
4891 @item @emph{Interface}: @tab @code{function acc_is_present(a, len)}
4892 @item @tab @code{type, dimension(:[,:]...) :: a}
4893 @item @tab @code{integer len}
4894 @item @tab @code{logical acc_is_present}
4897 @item @emph{Reference}:
4898 @uref{https://www.openacc.org, OpenACC specification v2.6}, section
4904 @node acc_memcpy_to_device
4905 @section @code{acc_memcpy_to_device} -- Copy host memory to device memory.
4907 @item @emph{Description}
4908 This function copies host memory specified by host address of @var{src} to
4909 device memory specified by the device address @var{dest} for a length of
4913 @multitable @columnfractions .20 .80
4914 @item @emph{Prototype}: @tab @code{acc_memcpy_to_device(d_void *dest, h_void *src, size_t bytes);}
4917 @item @emph{Reference}:
4918 @uref{https://www.openacc.org, OpenACC specification v2.6}, section
4924 @node acc_memcpy_from_device
4925 @section @code{acc_memcpy_from_device} -- Copy device memory to host memory.
4927 @item @emph{Description}
4928 This function copies host memory specified by host address of @var{src} from
4929 device memory specified by the device address @var{dest} for a length of
4933 @multitable @columnfractions .20 .80
4934 @item @emph{Prototype}: @tab @code{acc_memcpy_from_device(d_void *dest, h_void *src, size_t bytes);}
4937 @item @emph{Reference}:
4938 @uref{https://www.openacc.org, OpenACC specification v2.6}, section
4945 @section @code{acc_attach} -- Let device pointer point to device-pointer target.
4947 @item @emph{Description}
4948 This function updates a pointer on the device from pointing to a host-pointer
4949 address to pointing to the corresponding device data.
4952 @multitable @columnfractions .20 .80
4953 @item @emph{Prototype}: @tab @code{acc_attach(h_void **ptr);}
4954 @item @emph{Prototype}: @tab @code{acc_attach_async(h_void **ptr, int async);}
4957 @item @emph{Reference}:
4958 @uref{https://www.openacc.org, OpenACC specification v2.6}, section
4965 @section @code{acc_detach} -- Let device pointer point to host-pointer target.
4967 @item @emph{Description}
4968 This function updates a pointer on the device from pointing to a device-pointer
4969 address to pointing to the corresponding host data.
4972 @multitable @columnfractions .20 .80
4973 @item @emph{Prototype}: @tab @code{acc_detach(h_void **ptr);}
4974 @item @emph{Prototype}: @tab @code{acc_detach_async(h_void **ptr, int async);}
4975 @item @emph{Prototype}: @tab @code{acc_detach_finalize(h_void **ptr);}
4976 @item @emph{Prototype}: @tab @code{acc_detach_finalize_async(h_void **ptr, int async);}
4979 @item @emph{Reference}:
4980 @uref{https://www.openacc.org, OpenACC specification v2.6}, section
4986 @node acc_get_current_cuda_device
4987 @section @code{acc_get_current_cuda_device} -- Get CUDA device handle.
4989 @item @emph{Description}
4990 This function returns the CUDA device handle. This handle is the same
4991 as used by the CUDA Runtime or Driver API's.
4994 @multitable @columnfractions .20 .80
4995 @item @emph{Prototype}: @tab @code{void *acc_get_current_cuda_device(void);}
4998 @item @emph{Reference}:
4999 @uref{https://www.openacc.org, OpenACC specification v2.6}, section
5005 @node acc_get_current_cuda_context
5006 @section @code{acc_get_current_cuda_context} -- Get CUDA context handle.
5008 @item @emph{Description}
5009 This function returns the CUDA context handle. This handle is the same
5010 as used by the CUDA Runtime or Driver API's.
5013 @multitable @columnfractions .20 .80
5014 @item @emph{Prototype}: @tab @code{void *acc_get_current_cuda_context(void);}
5017 @item @emph{Reference}:
5018 @uref{https://www.openacc.org, OpenACC specification v2.6}, section
5024 @node acc_get_cuda_stream
5025 @section @code{acc_get_cuda_stream} -- Get CUDA stream handle.
5027 @item @emph{Description}
5028 This function returns the CUDA stream handle for the queue @var{async}.
5029 This handle is the same as used by the CUDA Runtime or Driver API's.
5032 @multitable @columnfractions .20 .80
5033 @item @emph{Prototype}: @tab @code{void *acc_get_cuda_stream(int async);}
5036 @item @emph{Reference}:
5037 @uref{https://www.openacc.org, OpenACC specification v2.6}, section
5043 @node acc_set_cuda_stream
5044 @section @code{acc_set_cuda_stream} -- Set CUDA stream handle.
5046 @item @emph{Description}
5047 This function associates the stream handle specified by @var{stream} with
5048 the queue @var{async}.
5050 This cannot be used to change the stream handle associated with
5051 @code{acc_async_sync}.
5053 The return value is not specified.
5056 @multitable @columnfractions .20 .80
5057 @item @emph{Prototype}: @tab @code{int acc_set_cuda_stream(int async, void *stream);}
5060 @item @emph{Reference}:
5061 @uref{https://www.openacc.org, OpenACC specification v2.6}, section
5067 @node acc_prof_register
5068 @section @code{acc_prof_register} -- Register callbacks.
5070 @item @emph{Description}:
5071 This function registers callbacks.
5074 @multitable @columnfractions .20 .80
5075 @item @emph{Prototype}: @tab @code{void acc_prof_register (acc_event_t, acc_prof_callback, acc_register_t);}
5078 @item @emph{See also}:
5079 @ref{OpenACC Profiling Interface}
5081 @item @emph{Reference}:
5082 @uref{https://www.openacc.org, OpenACC specification v2.6}, section
5088 @node acc_prof_unregister
5089 @section @code{acc_prof_unregister} -- Unregister callbacks.
5091 @item @emph{Description}:
5092 This function unregisters callbacks.
5095 @multitable @columnfractions .20 .80
5096 @item @emph{Prototype}: @tab @code{void acc_prof_unregister (acc_event_t, acc_prof_callback, acc_register_t);}
5099 @item @emph{See also}:
5100 @ref{OpenACC Profiling Interface}
5102 @item @emph{Reference}:
5103 @uref{https://www.openacc.org, OpenACC specification v2.6}, section
5109 @node acc_prof_lookup
5110 @section @code{acc_prof_lookup} -- Obtain inquiry functions.
5112 @item @emph{Description}:
5113 Function to obtain inquiry functions.
5116 @multitable @columnfractions .20 .80
5117 @item @emph{Prototype}: @tab @code{acc_query_fn acc_prof_lookup (const char *);}
5120 @item @emph{See also}:
5121 @ref{OpenACC Profiling Interface}
5123 @item @emph{Reference}:
5124 @uref{https://www.openacc.org, OpenACC specification v2.6}, section
5130 @node acc_register_library
5131 @section @code{acc_register_library} -- Library registration.
5133 @item @emph{Description}:
5134 Function for library registration.
5137 @multitable @columnfractions .20 .80
5138 @item @emph{Prototype}: @tab @code{void acc_register_library (acc_prof_reg, acc_prof_reg, acc_prof_lookup_func);}
5141 @item @emph{See also}:
5142 @ref{OpenACC Profiling Interface}, @ref{ACC_PROFLIB}
5144 @item @emph{Reference}:
5145 @uref{https://www.openacc.org, OpenACC specification v2.6}, section
5151 @c ---------------------------------------------------------------------
5152 @c OpenACC Environment Variables
5153 @c ---------------------------------------------------------------------
5155 @node OpenACC Environment Variables
5156 @chapter OpenACC Environment Variables
5158 The variables @env{ACC_DEVICE_TYPE} and @env{ACC_DEVICE_NUM}
5159 are defined by section 4 of the OpenACC specification in version 2.0.
5160 The variable @env{ACC_PROFLIB}
5161 is defined by section 4 of the OpenACC specification in version 2.6.
5171 @node ACC_DEVICE_TYPE
5172 @section @code{ACC_DEVICE_TYPE}
5174 @item @emph{Description}:
5175 Control the default device type to use when executing compute regions.
5176 If unset, the code can be run on any device type, favoring a non-host
5179 Supported values in GCC (if compiled in) are
5185 @item @emph{Reference}:
5186 @uref{https://www.openacc.org, OpenACC specification v2.6}, section
5192 @node ACC_DEVICE_NUM
5193 @section @code{ACC_DEVICE_NUM}
5195 @item @emph{Description}:
5196 Control which device, identified by device number, is the default device.
5197 The value must be a nonnegative integer less than the number of devices.
5198 If unset, device number zero is used.
5199 @item @emph{Reference}:
5200 @uref{https://www.openacc.org, OpenACC specification v2.6}, section
5207 @section @code{ACC_PROFLIB}
5209 @item @emph{Description}:
5210 Semicolon-separated list of dynamic libraries that are loaded as profiling
5211 libraries. Each library must provide at least the @code{acc_register_library}
5212 routine. Each library file is found as described by the documentation of
5213 @code{dlopen} of your operating system.
5214 @item @emph{See also}:
5215 @ref{acc_register_library}, @ref{OpenACC Profiling Interface}
5217 @item @emph{Reference}:
5218 @uref{https://www.openacc.org, OpenACC specification v2.6}, section
5224 @c ---------------------------------------------------------------------
5225 @c CUDA Streams Usage
5226 @c ---------------------------------------------------------------------
5228 @node CUDA Streams Usage
5229 @chapter CUDA Streams Usage
5231 This applies to the @code{nvptx} plugin only.
5233 The library provides elements that perform asynchronous movement of
5234 data and asynchronous operation of computing constructs. This
5235 asynchronous functionality is implemented by making use of CUDA
5236 streams@footnote{See "Stream Management" in "CUDA Driver API",
5237 TRM-06703-001, Version 5.5, for additional information}.
5239 The primary means by that the asynchronous functionality is accessed
5240 is through the use of those OpenACC directives which make use of the
5241 @code{async} and @code{wait} clauses. When the @code{async} clause is
5242 first used with a directive, it creates a CUDA stream. If an
5243 @code{async-argument} is used with the @code{async} clause, then the
5244 stream is associated with the specified @code{async-argument}.
5246 Following the creation of an association between a CUDA stream and the
5247 @code{async-argument} of an @code{async} clause, both the @code{wait}
5248 clause and the @code{wait} directive can be used. When either the
5249 clause or directive is used after stream creation, it creates a
5250 rendezvous point whereby execution waits until all operations
5251 associated with the @code{async-argument}, that is, stream, have
5254 Normally, the management of the streams that are created as a result of
5255 using the @code{async} clause, is done without any intervention by the
5256 caller. This implies the association between the @code{async-argument}
5257 and the CUDA stream is maintained for the lifetime of the program.
5258 However, this association can be changed through the use of the library
5259 function @code{acc_set_cuda_stream}. When the function
5260 @code{acc_set_cuda_stream} is called, the CUDA stream that was
5261 originally associated with the @code{async} clause is destroyed.
5262 Caution should be taken when changing the association as subsequent
5263 references to the @code{async-argument} refer to a different
5268 @c ---------------------------------------------------------------------
5269 @c OpenACC Library Interoperability
5270 @c ---------------------------------------------------------------------
5272 @node OpenACC Library Interoperability
5273 @chapter OpenACC Library Interoperability
5275 @section Introduction
5277 The OpenACC library uses the CUDA Driver API, and may interact with
5278 programs that use the Runtime library directly, or another library
5279 based on the Runtime library, e.g., CUBLAS@footnote{See section 2.26,
5280 "Interactions with the CUDA Driver API" in
5281 "CUDA Runtime API", Version 5.5, and section 2.27, "VDPAU
5282 Interoperability", in "CUDA Driver API", TRM-06703-001, Version 5.5,
5283 for additional information on library interoperability.}.
5284 This chapter describes the use cases and what changes are
5285 required in order to use both the OpenACC library and the CUBLAS and Runtime
5286 libraries within a program.
5288 @section First invocation: NVIDIA CUBLAS library API
5290 In this first use case (see below), a function in the CUBLAS library is called
5291 prior to any of the functions in the OpenACC library. More specifically, the
5292 function @code{cublasCreate()}.
5294 When invoked, the function initializes the library and allocates the
5295 hardware resources on the host and the device on behalf of the caller. Once
5296 the initialization and allocation has completed, a handle is returned to the
5297 caller. The OpenACC library also requires initialization and allocation of
5298 hardware resources. Since the CUBLAS library has already allocated the
5299 hardware resources for the device, all that is left to do is to initialize
5300 the OpenACC library and acquire the hardware resources on the host.
5302 Prior to calling the OpenACC function that initializes the library and
5303 allocate the host hardware resources, you need to acquire the device number
5304 that was allocated during the call to @code{cublasCreate()}. The invoking of the
5305 runtime library function @code{cudaGetDevice()} accomplishes this. Once
5306 acquired, the device number is passed along with the device type as
5307 parameters to the OpenACC library function @code{acc_set_device_num()}.
5309 Once the call to @code{acc_set_device_num()} has completed, the OpenACC
5310 library uses the context that was created during the call to
5311 @code{cublasCreate()}. In other words, both libraries share the
5315 /* Create the handle */
5316 s = cublasCreate(&h);
5317 if (s != CUBLAS_STATUS_SUCCESS)
5319 fprintf(stderr, "cublasCreate failed %d\n", s);
5323 /* Get the device number */
5324 e = cudaGetDevice(&dev);
5325 if (e != cudaSuccess)
5327 fprintf(stderr, "cudaGetDevice failed %d\n", e);
5331 /* Initialize OpenACC library and use device 'dev' */
5332 acc_set_device_num(dev, acc_device_nvidia);
5337 @section First invocation: OpenACC library API
5339 In this second use case (see below), a function in the OpenACC library is
5340 called prior to any of the functions in the CUBLAS library. More specifically,
5341 the function @code{acc_set_device_num()}.
5343 In the use case presented here, the function @code{acc_set_device_num()}
5344 is used to both initialize the OpenACC library and allocate the hardware
5345 resources on the host and the device. In the call to the function, the
5346 call parameters specify which device to use and what device
5347 type to use, i.e., @code{acc_device_nvidia}. It should be noted that this
5348 is but one method to initialize the OpenACC library and allocate the
5349 appropriate hardware resources. Other methods are available through the
5350 use of environment variables and these is discussed in the next section.
5352 Once the call to @code{acc_set_device_num()} has completed, other OpenACC
5353 functions can be called as seen with multiple calls being made to
5354 @code{acc_copyin()}. In addition, calls can be made to functions in the
5355 CUBLAS library. In the use case a call to @code{cublasCreate()} is made
5356 subsequent to the calls to @code{acc_copyin()}.
5357 As seen in the previous use case, a call to @code{cublasCreate()}
5358 initializes the CUBLAS library and allocates the hardware resources on the
5359 host and the device. However, since the device has already been allocated,
5360 @code{cublasCreate()} only initializes the CUBLAS library and allocates
5361 the appropriate hardware resources on the host. The context that was created
5362 as part of the OpenACC initialization is shared with the CUBLAS library,
5363 similarly to the first use case.
5368 acc_set_device_num(dev, acc_device_nvidia);
5370 /* Copy the first set to the device */
5371 d_X = acc_copyin(&h_X[0], N * sizeof (float));
5374 fprintf(stderr, "copyin error h_X\n");
5378 /* Copy the second set to the device */
5379 d_Y = acc_copyin(&h_Y1[0], N * sizeof (float));
5382 fprintf(stderr, "copyin error h_Y1\n");
5386 /* Create the handle */
5387 s = cublasCreate(&h);
5388 if (s != CUBLAS_STATUS_SUCCESS)
5390 fprintf(stderr, "cublasCreate failed %d\n", s);
5394 /* Perform saxpy using CUBLAS library function */
5395 s = cublasSaxpy(h, N, &alpha, d_X, 1, d_Y, 1);
5396 if (s != CUBLAS_STATUS_SUCCESS)
5398 fprintf(stderr, "cublasSaxpy failed %d\n", s);
5402 /* Copy the results from the device */
5403 acc_memcpy_from_device(&h_Y1[0], d_Y, N * sizeof (float));
5408 @section OpenACC library and environment variables
5410 There are two environment variables associated with the OpenACC library
5411 that may be used to control the device type and device number:
5412 @env{ACC_DEVICE_TYPE} and @env{ACC_DEVICE_NUM}, respectively. These two
5413 environment variables can be used as an alternative to calling
5414 @code{acc_set_device_num()}. As seen in the second use case, the device
5415 type and device number were specified using @code{acc_set_device_num()}.
5416 If however, the aforementioned environment variables were set, then the
5417 call to @code{acc_set_device_num()} would not be required.
5420 The use of the environment variables is only relevant when an OpenACC function
5421 is called prior to a call to @code{cudaCreate()}. If @code{cudaCreate()}
5422 is called prior to a call to an OpenACC function, then you must call
5423 @code{acc_set_device_num()}@footnote{More complete information
5424 about @env{ACC_DEVICE_TYPE} and @env{ACC_DEVICE_NUM} can be found in
5425 sections 4.1 and 4.2 of the @uref{https://www.openacc.org, OpenACC}
5426 Application Programming Interface”, Version 2.6.}
5430 @c ---------------------------------------------------------------------
5431 @c OpenACC Profiling Interface
5432 @c ---------------------------------------------------------------------
5434 @node OpenACC Profiling Interface
5435 @chapter OpenACC Profiling Interface
5437 @section Implementation Status and Implementation-Defined Behavior
5439 We're implementing the OpenACC Profiling Interface as defined by the
5440 OpenACC 2.6 specification. We're clarifying some aspects here as
5441 @emph{implementation-defined behavior}, while they're still under
5442 discussion within the OpenACC Technical Committee.
5444 This implementation is tuned to keep the performance impact as low as
5445 possible for the (very common) case that the Profiling Interface is
5446 not enabled. This is relevant, as the Profiling Interface affects all
5447 the @emph{hot} code paths (in the target code, not in the offloaded
5448 code). Users of the OpenACC Profiling Interface can be expected to
5449 understand that performance is impacted to some degree once the
5450 Profiling Interface is enabled: for example, because of the
5451 @emph{runtime} (libgomp) calling into a third-party @emph{library} for
5452 every event that has been registered.
5454 We're not yet accounting for the fact that @cite{OpenACC events may
5455 occur during event processing}.
5456 We just handle one case specially, as required by CUDA 9.0
5457 @command{nvprof}, that @code{acc_get_device_type}
5458 (@ref{acc_get_device_type})) may be called from
5459 @code{acc_ev_device_init_start}, @code{acc_ev_device_init_end}
5462 We're not yet implementing initialization via a
5463 @code{acc_register_library} function that is either statically linked
5464 in, or dynamically via @env{LD_PRELOAD}.
5465 Initialization via @code{acc_register_library} functions dynamically
5466 loaded via the @env{ACC_PROFLIB} environment variable does work, as
5467 does directly calling @code{acc_prof_register},
5468 @code{acc_prof_unregister}, @code{acc_prof_lookup}.
5470 As currently there are no inquiry functions defined, calls to
5471 @code{acc_prof_lookup} always returns @code{NULL}.
5473 There aren't separate @emph{start}, @emph{stop} events defined for the
5474 event types @code{acc_ev_create}, @code{acc_ev_delete},
5475 @code{acc_ev_alloc}, @code{acc_ev_free}. It's not clear if these
5476 should be triggered before or after the actual device-specific call is
5477 made. We trigger them after.
5479 Remarks about data provided to callbacks:
5483 @item @code{acc_prof_info.event_type}
5484 It's not clear if for @emph{nested} event callbacks (for example,
5485 @code{acc_ev_enqueue_launch_start} as part of a parent compute
5486 construct), this should be set for the nested event
5487 (@code{acc_ev_enqueue_launch_start}), or if the value of the parent
5488 construct should remain (@code{acc_ev_compute_construct_start}). In
5489 this implementation, the value generally corresponds to the
5490 innermost nested event type.
5492 @item @code{acc_prof_info.device_type}
5496 For @code{acc_ev_compute_construct_start}, and in presence of an
5497 @code{if} clause with @emph{false} argument, this still refers to
5498 the offloading device type.
5499 It's not clear if that's the expected behavior.
5502 Complementary to the item before, for
5503 @code{acc_ev_compute_construct_end}, this is set to
5504 @code{acc_device_host} in presence of an @code{if} clause with
5505 @emph{false} argument.
5506 It's not clear if that's the expected behavior.
5510 @item @code{acc_prof_info.thread_id}
5511 Always @code{-1}; not yet implemented.
5513 @item @code{acc_prof_info.async}
5517 Not yet implemented correctly for
5518 @code{acc_ev_compute_construct_start}.
5521 In a compute construct, for host-fallback
5522 execution/@code{acc_device_host} it always is
5523 @code{acc_async_sync}.
5524 It is unclear if that is the expected behavior.
5527 For @code{acc_ev_device_init_start} and @code{acc_ev_device_init_end},
5528 it will always be @code{acc_async_sync}.
5529 It is unclear if that is the expected behavior.
5533 @item @code{acc_prof_info.async_queue}
5534 There is no @cite{limited number of asynchronous queues} in libgomp.
5535 This always has the same value as @code{acc_prof_info.async}.
5537 @item @code{acc_prof_info.src_file}
5538 Always @code{NULL}; not yet implemented.
5540 @item @code{acc_prof_info.func_name}
5541 Always @code{NULL}; not yet implemented.
5543 @item @code{acc_prof_info.line_no}
5544 Always @code{-1}; not yet implemented.
5546 @item @code{acc_prof_info.end_line_no}
5547 Always @code{-1}; not yet implemented.
5549 @item @code{acc_prof_info.func_line_no}
5550 Always @code{-1}; not yet implemented.
5552 @item @code{acc_prof_info.func_end_line_no}
5553 Always @code{-1}; not yet implemented.
5555 @item @code{acc_event_info.event_type}, @code{acc_event_info.*.event_type}
5556 Relating to @code{acc_prof_info.event_type} discussed above, in this
5557 implementation, this will always be the same value as
5558 @code{acc_prof_info.event_type}.
5560 @item @code{acc_event_info.*.parent_construct}
5564 Will be @code{acc_construct_parallel} for all OpenACC compute
5565 constructs as well as many OpenACC Runtime API calls; should be the
5566 one matching the actual construct, or
5567 @code{acc_construct_runtime_api}, respectively.
5570 Will be @code{acc_construct_enter_data} or
5571 @code{acc_construct_exit_data} when processing variable mappings
5572 specified in OpenACC @emph{declare} directives; should be
5573 @code{acc_construct_declare}.
5576 For implicit @code{acc_ev_device_init_start},
5577 @code{acc_ev_device_init_end}, and explicit as well as implicit
5578 @code{acc_ev_alloc}, @code{acc_ev_free},
5579 @code{acc_ev_enqueue_upload_start}, @code{acc_ev_enqueue_upload_end},
5580 @code{acc_ev_enqueue_download_start}, and
5581 @code{acc_ev_enqueue_download_end}, will be
5582 @code{acc_construct_parallel}; should reflect the real parent
5587 @item @code{acc_event_info.*.implicit}
5588 For @code{acc_ev_alloc}, @code{acc_ev_free},
5589 @code{acc_ev_enqueue_upload_start}, @code{acc_ev_enqueue_upload_end},
5590 @code{acc_ev_enqueue_download_start}, and
5591 @code{acc_ev_enqueue_download_end}, this currently will be @code{1}
5592 also for explicit usage.
5594 @item @code{acc_event_info.data_event.var_name}
5595 Always @code{NULL}; not yet implemented.
5597 @item @code{acc_event_info.data_event.host_ptr}
5598 For @code{acc_ev_alloc}, and @code{acc_ev_free}, this is always
5601 @item @code{typedef union acc_api_info}
5602 @dots{} as printed in @cite{5.2.3. Third Argument: API-Specific
5603 Information}. This should obviously be @code{typedef @emph{struct}
5606 @item @code{acc_api_info.device_api}
5607 Possibly not yet implemented correctly for
5608 @code{acc_ev_compute_construct_start},
5609 @code{acc_ev_device_init_start}, @code{acc_ev_device_init_end}:
5610 will always be @code{acc_device_api_none} for these event types.
5611 For @code{acc_ev_enter_data_start}, it will be
5612 @code{acc_device_api_none} in some cases.
5614 @item @code{acc_api_info.device_type}
5615 Always the same as @code{acc_prof_info.device_type}.
5617 @item @code{acc_api_info.vendor}
5618 Always @code{-1}; not yet implemented.
5620 @item @code{acc_api_info.device_handle}
5621 Always @code{NULL}; not yet implemented.
5623 @item @code{acc_api_info.context_handle}
5624 Always @code{NULL}; not yet implemented.
5626 @item @code{acc_api_info.async_handle}
5627 Always @code{NULL}; not yet implemented.
5631 Remarks about certain event types:
5635 @item @code{acc_ev_device_init_start}, @code{acc_ev_device_init_end}
5639 @c See 'DEVICE_INIT_INSIDE_COMPUTE_CONSTRUCT' in
5640 @c 'libgomp.oacc-c-c++-common/acc_prof-kernels-1.c',
5641 @c 'libgomp.oacc-c-c++-common/acc_prof-parallel-1.c'.
5642 When a compute construct triggers implicit
5643 @code{acc_ev_device_init_start} and @code{acc_ev_device_init_end}
5644 events, they currently aren't @emph{nested within} the corresponding
5645 @code{acc_ev_compute_construct_start} and
5646 @code{acc_ev_compute_construct_end}, but they're currently observed
5647 @emph{before} @code{acc_ev_compute_construct_start}.
5648 It's not clear what to do: the standard asks us provide a lot of
5649 details to the @code{acc_ev_compute_construct_start} callback, without
5650 (implicitly) initializing a device before?
5653 Callbacks for these event types will not be invoked for calls to the
5654 @code{acc_set_device_type} and @code{acc_set_device_num} functions.
5655 It's not clear if they should be.
5659 @item @code{acc_ev_enter_data_start}, @code{acc_ev_enter_data_end}, @code{acc_ev_exit_data_start}, @code{acc_ev_exit_data_end}
5663 Callbacks for these event types will also be invoked for OpenACC
5664 @emph{host_data} constructs.
5665 It's not clear if they should be.
5668 Callbacks for these event types will also be invoked when processing
5669 variable mappings specified in OpenACC @emph{declare} directives.
5670 It's not clear if they should be.
5676 Callbacks for the following event types will be invoked, but dispatch
5677 and information provided therein has not yet been thoroughly reviewed:
5680 @item @code{acc_ev_alloc}
5681 @item @code{acc_ev_free}
5682 @item @code{acc_ev_update_start}, @code{acc_ev_update_end}
5683 @item @code{acc_ev_enqueue_upload_start}, @code{acc_ev_enqueue_upload_end}
5684 @item @code{acc_ev_enqueue_download_start}, @code{acc_ev_enqueue_download_end}
5687 During device initialization, and finalization, respectively,
5688 callbacks for the following event types will not yet be invoked:
5691 @item @code{acc_ev_alloc}
5692 @item @code{acc_ev_free}
5695 Callbacks for the following event types have not yet been implemented,
5696 so currently won't be invoked:
5699 @item @code{acc_ev_device_shutdown_start}, @code{acc_ev_device_shutdown_end}
5700 @item @code{acc_ev_runtime_shutdown}
5701 @item @code{acc_ev_create}, @code{acc_ev_delete}
5702 @item @code{acc_ev_wait_start}, @code{acc_ev_wait_end}
5705 For the following runtime library functions, not all expected
5706 callbacks will be invoked (mostly concerning implicit device
5710 @item @code{acc_get_num_devices}
5711 @item @code{acc_set_device_type}
5712 @item @code{acc_get_device_type}
5713 @item @code{acc_set_device_num}
5714 @item @code{acc_get_device_num}
5715 @item @code{acc_init}
5716 @item @code{acc_shutdown}
5719 Aside from implicit device initialization, for the following runtime
5720 library functions, no callbacks will be invoked for shared-memory
5721 offloading devices (it's not clear if they should be):
5724 @item @code{acc_malloc}
5725 @item @code{acc_free}
5726 @item @code{acc_copyin}, @code{acc_present_or_copyin}, @code{acc_copyin_async}
5727 @item @code{acc_create}, @code{acc_present_or_create}, @code{acc_create_async}
5728 @item @code{acc_copyout}, @code{acc_copyout_async}, @code{acc_copyout_finalize}, @code{acc_copyout_finalize_async}
5729 @item @code{acc_delete}, @code{acc_delete_async}, @code{acc_delete_finalize}, @code{acc_delete_finalize_async}
5730 @item @code{acc_update_device}, @code{acc_update_device_async}
5731 @item @code{acc_update_self}, @code{acc_update_self_async}
5732 @item @code{acc_map_data}, @code{acc_unmap_data}
5733 @item @code{acc_memcpy_to_device}, @code{acc_memcpy_to_device_async}
5734 @item @code{acc_memcpy_from_device}, @code{acc_memcpy_from_device_async}
5737 @c ---------------------------------------------------------------------
5738 @c OpenMP-Implementation Specifics
5739 @c ---------------------------------------------------------------------
5741 @node OpenMP-Implementation Specifics
5742 @chapter OpenMP-Implementation Specifics
5745 * Implementation-defined ICV Initialization::
5746 * OpenMP Context Selectors::
5747 * Memory allocation::
5750 @node Implementation-defined ICV Initialization
5751 @section Implementation-defined ICV Initialization
5752 @cindex Implementation specific setting
5754 @multitable @columnfractions .30 .70
5755 @item @var{affinity-format-var} @tab See @ref{OMP_AFFINITY_FORMAT}.
5756 @item @var{def-allocator-var} @tab See @ref{OMP_ALLOCATOR}.
5757 @item @var{max-active-levels-var} @tab See @ref{OMP_MAX_ACTIVE_LEVELS}.
5758 @item @var{dyn-var} @tab See @ref{OMP_DYNAMIC}.
5759 @item @var{nthreads-var} @tab See @ref{OMP_NUM_THREADS}.
5760 @item @var{num-devices-var} @tab Number of non-host devices found
5761 by GCC's run-time library
5762 @item @var{num-procs-var} @tab The number of CPU cores on the
5763 initial device, except that affinity settings might lead to a
5764 smaller number. On non-host devices, the value of the
5765 @var{nthreads-var} ICV.
5766 @item @var{place-partition-var} @tab See @ref{OMP_PLACES}.
5767 @item @var{run-sched-var} @tab See @ref{OMP_SCHEDULE}.
5768 @item @var{stacksize-var} @tab See @ref{OMP_STACKSIZE}.
5769 @item @var{thread-limit-var} @tab See @ref{OMP_TEAMS_THREAD_LIMIT}
5770 @item @var{wait-policy-var} @tab See @ref{OMP_WAIT_POLICY} and
5771 @ref{GOMP_SPINCOUNT}
5774 @node OpenMP Context Selectors
5775 @section OpenMP Context Selectors
5777 @code{vendor} is always @code{gnu}. References are to the GCC manual.
5779 @c NOTE: Only the following selectors have been implemented. To add
5780 @c additional traits for target architecture, TARGET_OMP_DEVICE_KIND_ARCH_ISA
5781 @c has to be implemented; cf. also PR target/105640.
5782 @c For offload devices, add *additionally* gcc/config/*/t-omp-device.
5784 For the host compiler, @code{kind} always matches @code{host}; for the
5785 offloading architectures AMD GCN and Nvidia PTX, @code{kind} always matches
5786 @code{gpu}. For the x86 family of computers, AMD GCN and Nvidia PTX
5787 the following traits are supported in addition; while OpenMP is supported
5788 on more architectures, GCC currently does not match any @code{arch} or
5789 @code{isa} traits for those.
5791 @multitable @columnfractions .65 .30
5792 @headitem @code{arch} @tab @code{isa}
5793 @item @code{x86}, @code{x86_64}, @code{i386}, @code{i486},
5794 @code{i586}, @code{i686}, @code{ia32}
5795 @tab See @code{-m...} flags in ``x86 Options'' (without @code{-m})
5796 @item @code{amdgcn}, @code{gcn}
5797 @tab See @code{-march=} in ``AMD GCN Options''@footnote{Additionally,
5798 @code{gfx803} is supported as an alias for @code{fiji}.}
5800 @tab See @code{-march=} in ``Nvidia PTX Options''
5803 @node Memory allocation
5804 @section Memory allocation
5806 The description below applies to:
5809 @item Explicit use of the OpenMP API routines, see
5810 @ref{Memory Management Routines}.
5811 @item The @code{allocate} clause, except when the @code{allocator} modifier is a
5812 constant expression with value @code{omp_default_mem_alloc} and no
5813 @code{align} modifier has been specified. (In that case, the normal
5814 @code{malloc} allocation is used.)
5815 @item Using the @code{allocate} directive for automatic/stack variables, except
5816 when the @code{allocator} clause is a constant expression with value
5817 @code{omp_default_mem_alloc} and no @code{align} clause has been
5818 specified. (In that case, the normal allocation is used: stack allocation
5819 and, sometimes for Fortran, also @code{malloc} [depending on flags such as
5820 @option{-fstack-arrays}].)
5821 @item Using the @code{allocate} directive for variable in static memory is
5822 currently not supported (compile time error).
5823 @item In Fortran, the @code{allocators} directive and the executable
5824 @code{allocate} directive for Fortran pointers and allocatables is
5825 supported, but requires that files containing those directives has to be
5826 compiled with @option{-fopenmp-allocators}. Additionally, all files that
5827 might explicitly or implicitly deallocate memory allocated that way must
5828 also be compiled with that option.
5831 For the available predefined allocators and, as applicable, their associated
5832 predefined memory spaces and for the available traits and their default values,
5833 see @ref{OMP_ALLOCATOR}. Predefined allocators without an associated memory
5834 space use the @code{omp_default_mem_space} memory space.
5836 For the memory spaces, the following applies:
5838 @item @code{omp_default_mem_space} is supported
5839 @item @code{omp_const_mem_space} maps to @code{omp_default_mem_space}
5840 @item @code{omp_low_lat_mem_space} is only available on supported devices,
5841 and maps to @code{omp_default_mem_space} otherwise.
5842 @item @code{omp_large_cap_mem_space} maps to @code{omp_default_mem_space},
5843 unless the memkind library is available
5844 @item @code{omp_high_bw_mem_space} maps to @code{omp_default_mem_space},
5845 unless the memkind library is available
5848 On Linux systems, where the @uref{https://github.com/memkind/memkind, memkind
5849 library} (@code{libmemkind.so.0}) is available at runtime, it is used when
5850 creating memory allocators requesting
5853 @item the memory space @code{omp_high_bw_mem_space}
5854 @item the memory space @code{omp_large_cap_mem_space}
5855 @item the @code{partition} trait @code{interleaved}; note that for
5856 @code{omp_large_cap_mem_space} the allocation will not be interleaved
5859 On Linux systems, where the @uref{https://github.com/numactl/numactl, numa
5860 library} (@code{libnuma.so.1}) is available at runtime, it used when creating
5861 memory allocators requesting
5864 @item the @code{partition} trait @code{nearest}, except when both the
5865 libmemkind library is available and the memory space is either
5866 @code{omp_large_cap_mem_space} or @code{omp_high_bw_mem_space}
5869 Note that the numa library will round up the allocation size to a multiple of
5870 the system page size; therefore, consider using it only with large data or
5871 by sharing allocations via the @code{pool_size} trait. Furthermore, the Linux
5872 kernel does not guarantee that an allocation will always be on the nearest NUMA
5873 node nor that after reallocation the same node will be used. Note additionally
5874 that, on Linux, the default setting of the memory placement policy is to use the
5875 current node; therefore, unless the memory placement policy has been overridden,
5876 the @code{partition} trait @code{environment} (the default) will be effectively
5877 a @code{nearest} allocation.
5879 Additional notes regarding the traits:
5881 @item The @code{pinned} trait is supported on Linux hosts, but is subject to
5882 the OS @code{ulimit}/@code{rlimit} locked memory settings.
5883 @item The default for the @code{pool_size} trait is no pool and for every
5884 (re)allocation the associated library routine is called, which might
5885 internally use a memory pool.
5886 @item For the @code{partition} trait, the partition part size will be the same
5887 as the requested size (i.e. @code{interleaved} or @code{blocked} has no
5888 effect), except for @code{interleaved} when the memkind library is
5889 available. Furthermore, for @code{nearest} and unless the numa library
5890 is available, the memory might not be on the same NUMA node as thread
5891 that allocated the memory; on Linux, this is in particular the case when
5892 the memory placement policy is set to preferred.
5893 @item The @code{access} trait has no effect such that memory is always
5894 accessible by all threads.
5895 @item The @code{sync_hint} trait has no effect.
5899 @ref{Offload-Target Specifics}
5901 @c ---------------------------------------------------------------------
5902 @c Offload-Target Specifics
5903 @c ---------------------------------------------------------------------
5905 @node Offload-Target Specifics
5906 @chapter Offload-Target Specifics
5908 The following sections present notes on the offload-target specifics
5916 @section AMD Radeon (GCN)
5918 On the hardware side, there is the hierarchy (fine to coarse):
5920 @item work item (thread)
5923 @item compute unit (CU)
5926 All OpenMP and OpenACC levels are used, i.e.
5928 @item OpenMP's simd and OpenACC's vector map to work items (thread)
5929 @item OpenMP's threads (``parallel'') and OpenACC's workers map
5931 @item OpenMP's teams and OpenACC's gang use a threadpool with the
5932 size of the number of teams or gangs, respectively.
5937 @item Number of teams is the specified @code{num_teams} (OpenMP) or
5938 @code{num_gangs} (OpenACC) or otherwise the number of CU. It is limited
5939 by two times the number of CU.
5940 @item Number of wavefronts is 4 for gfx900 and 16 otherwise;
5941 @code{num_threads} (OpenMP) and @code{num_workers} (OpenACC)
5942 overrides this if smaller.
5943 @item The wavefront has 102 scalars and 64 vectors
5944 @item Number of workitems is always 64
5945 @item The hardware permits maximally 40 workgroups/CU and
5946 16 wavefronts/workgroup up to a limit of 40 wavefronts in total per CU.
5947 @item 80 scalars registers and 24 vector registers in non-kernel functions
5948 (the chosen procedure-calling API).
5949 @item For the kernel itself: as many as register pressure demands (number of
5950 teams and number of threads, scaled down if registers are exhausted)
5953 The implementation remark:
5955 @item I/O within OpenMP target regions and OpenACC parallel/kernels is supported
5956 using the C library @code{printf} functions and the Fortran
5957 @code{print}/@code{write} statements.
5958 @item Reverse offload regions (i.e. @code{target} regions with
5959 @code{device(ancestor:1)}) are processed serially per @code{target} region
5960 such that the next reverse offload region is only executed after the previous
5962 @item OpenMP code that has a @code{requires} directive with
5963 @code{unified_shared_memory} will remove any GCN device from the list of
5964 available devices (``host fallback'').
5965 @item The available stack size can be changed using the @code{GCN_STACK_SIZE}
5966 environment variable; the default is 32 kiB per thread.
5967 @item Low-latency memory (@code{omp_low_lat_mem_space}) is supported when the
5968 the @code{access} trait is set to @code{cgroup}. The default pool size
5969 is automatically scaled to share the 64 kiB LDS memory between the number
5970 of teams configured to run on each compute-unit, but may be adjusted at
5971 runtime by setting environment variable
5972 @code{GOMP_GCN_LOWLAT_POOL=@var{bytes}}.
5973 @item @code{omp_low_lat_mem_alloc} cannot be used with true low-latency memory
5974 because the definition implies the @code{omp_atv_all} trait; main
5975 graphics memory is used instead.
5976 @item @code{omp_cgroup_mem_alloc}, @code{omp_pteam_mem_alloc}, and
5977 @code{omp_thread_mem_alloc}, all use low-latency memory as first
5978 preference, and fall back to main graphics memory when the low-latency
5987 On the hardware side, there is the hierarchy (fine to coarse):
5992 @item streaming multiprocessor
5995 All OpenMP and OpenACC levels are used, i.e.
5997 @item OpenMP's simd and OpenACC's vector map to threads
5998 @item OpenMP's threads (``parallel'') and OpenACC's workers map to warps
5999 @item OpenMP's teams and OpenACC's gang use a threadpool with the
6000 size of the number of teams or gangs, respectively.
6005 @item The @code{warp_size} is always 32
6006 @item CUDA kernel launched: @code{dim=@{#teams,1,1@}, blocks=@{#threads,warp_size,1@}}.
6007 @item The number of teams is limited by the number of blocks the device can
6008 host simultaneously.
6011 Additional information can be obtained by setting the environment variable to
6012 @code{GOMP_DEBUG=1} (very verbose; grep for @code{kernel.*launch} for launch
6015 GCC generates generic PTX ISA code, which is just-in-time compiled by CUDA,
6016 which caches the JIT in the user's directory (see CUDA documentation; can be
6017 tuned by the environment variables @code{CUDA_CACHE_@{DISABLE,MAXSIZE,PATH@}}.
6019 Note: While PTX ISA is generic, the @code{-mptx=} and @code{-march=} commandline
6020 options still affect the used PTX ISA code and, thus, the requirements on
6021 CUDA version and hardware.
6023 The implementation remark:
6025 @item I/O within OpenMP target regions and OpenACC parallel/kernels is supported
6026 using the C library @code{printf} functions. Note that the Fortran
6027 @code{print}/@code{write} statements are not supported, yet.
6028 @item Compilation OpenMP code that contains @code{requires reverse_offload}
6029 requires at least @code{-march=sm_35}, compiling for @code{-march=sm_30}
6031 @item For code containing reverse offload (i.e. @code{target} regions with
6032 @code{device(ancestor:1)}), there is a slight performance penalty
6033 for @emph{all} target regions, consisting mostly of shutdown delay
6034 Per device, reverse offload regions are processed serially such that
6035 the next reverse offload region is only executed after the previous
6037 @item OpenMP code that has a @code{requires} directive with
6038 @code{unified_shared_memory} will remove any nvptx device from the
6039 list of available devices (``host fallback'').
6040 @item The default per-warp stack size is 128 kiB; see also @code{-msoft-stack}
6042 @item The OpenMP routines @code{omp_target_memcpy_rect} and
6043 @code{omp_target_memcpy_rect_async} and the @code{target update}
6044 directive for non-contiguous list items will use the 2D and 3D
6045 memory-copy functions of the CUDA library. Higher dimensions will
6046 call those functions in a loop and are therefore supported.
6047 @item Low-latency memory (@code{omp_low_lat_mem_space}) is supported when the
6048 the @code{access} trait is set to @code{cgroup}, the ISA is at least
6049 @code{sm_53}, and the PTX version is at least 4.1. The default pool size
6050 is 8 kiB per team, but may be adjusted at runtime by setting environment
6051 variable @code{GOMP_NVPTX_LOWLAT_POOL=@var{bytes}}. The maximum value is
6052 limited by the available hardware, and care should be taken that the
6053 selected pool size does not unduly limit the number of teams that can
6055 @item @code{omp_low_lat_mem_alloc} cannot be used with true low-latency memory
6056 because the definition implies the @code{omp_atv_all} trait; main
6057 graphics memory is used instead.
6058 @item @code{omp_cgroup_mem_alloc}, @code{omp_pteam_mem_alloc}, and
6059 @code{omp_thread_mem_alloc}, all use low-latency memory as first
6060 preference, and fall back to main graphics memory when the low-latency
6065 @c ---------------------------------------------------------------------
6067 @c ---------------------------------------------------------------------
6069 @node The libgomp ABI
6070 @chapter The libgomp ABI
6072 The following sections present notes on the external ABI as
6073 presented by libgomp. Only maintainers should need them.
6076 * Implementing MASTER construct::
6077 * Implementing CRITICAL construct::
6078 * Implementing ATOMIC construct::
6079 * Implementing FLUSH construct::
6080 * Implementing BARRIER construct::
6081 * Implementing THREADPRIVATE construct::
6082 * Implementing PRIVATE clause::
6083 * Implementing FIRSTPRIVATE LASTPRIVATE COPYIN and COPYPRIVATE clauses::
6084 * Implementing REDUCTION clause::
6085 * Implementing PARALLEL construct::
6086 * Implementing FOR construct::
6087 * Implementing ORDERED construct::
6088 * Implementing SECTIONS construct::
6089 * Implementing SINGLE construct::
6090 * Implementing OpenACC's PARALLEL construct::
6094 @node Implementing MASTER construct
6095 @section Implementing MASTER construct
6098 if (omp_get_thread_num () == 0)
6102 Alternately, we generate two copies of the parallel subfunction
6103 and only include this in the version run by the primary thread.
6104 Surely this is not worthwhile though...
6108 @node Implementing CRITICAL construct
6109 @section Implementing CRITICAL construct
6111 Without a specified name,
6114 void GOMP_critical_start (void);
6115 void GOMP_critical_end (void);
6118 so that we don't get COPY relocations from libgomp to the main
6121 With a specified name, use omp_set_lock and omp_unset_lock with
6122 name being transformed into a variable declared like
6125 omp_lock_t gomp_critical_user_<name> __attribute__((common))
6128 Ideally the ABI would specify that all zero is a valid unlocked
6129 state, and so we wouldn't need to initialize this at
6134 @node Implementing ATOMIC construct
6135 @section Implementing ATOMIC construct
6137 The target should implement the @code{__sync} builtins.
6139 Failing that we could add
6142 void GOMP_atomic_enter (void)
6143 void GOMP_atomic_exit (void)
6146 which reuses the regular lock code, but with yet another lock
6147 object private to the library.
6151 @node Implementing FLUSH construct
6152 @section Implementing FLUSH construct
6154 Expands to the @code{__sync_synchronize} builtin.
6158 @node Implementing BARRIER construct
6159 @section Implementing BARRIER construct
6162 void GOMP_barrier (void)
6166 @node Implementing THREADPRIVATE construct
6167 @section Implementing THREADPRIVATE construct
6169 In _most_ cases we can map this directly to @code{__thread}. Except
6170 that OMP allows constructors for C++ objects. We can either
6171 refuse to support this (how often is it used?) or we can
6172 implement something akin to .ctors.
6174 Even more ideally, this ctor feature is handled by extensions
6175 to the main pthreads library. Failing that, we can have a set
6176 of entry points to register ctor functions to be called.
6180 @node Implementing PRIVATE clause
6181 @section Implementing PRIVATE clause
6183 In association with a PARALLEL, or within the lexical extent
6184 of a PARALLEL block, the variable becomes a local variable in
6185 the parallel subfunction.
6187 In association with FOR or SECTIONS blocks, create a new
6188 automatic variable within the current function. This preserves
6189 the semantic of new variable creation.
6193 @node Implementing FIRSTPRIVATE LASTPRIVATE COPYIN and COPYPRIVATE clauses
6194 @section Implementing FIRSTPRIVATE LASTPRIVATE COPYIN and COPYPRIVATE clauses
6196 This seems simple enough for PARALLEL blocks. Create a private
6197 struct for communicating between the parent and subfunction.
6198 In the parent, copy in values for scalar and "small" structs;
6199 copy in addresses for others TREE_ADDRESSABLE types. In the
6200 subfunction, copy the value into the local variable.
6202 It is not clear what to do with bare FOR or SECTION blocks.
6203 The only thing I can figure is that we do something like:
6206 #pragma omp for firstprivate(x) lastprivate(y)
6207 for (int i = 0; i < n; ++i)
6224 where the "x=x" and "y=y" assignments actually have different
6225 uids for the two variables, i.e. not something you could write
6226 directly in C. Presumably this only makes sense if the "outer"
6227 x and y are global variables.
6229 COPYPRIVATE would work the same way, except the structure
6230 broadcast would have to happen via SINGLE machinery instead.
6234 @node Implementing REDUCTION clause
6235 @section Implementing REDUCTION clause
6237 The private struct mentioned in the previous section should have
6238 a pointer to an array of the type of the variable, indexed by the
6239 thread's @var{team_id}. The thread stores its final value into the
6240 array, and after the barrier, the primary thread iterates over the
6241 array to collect the values.
6244 @node Implementing PARALLEL construct
6245 @section Implementing PARALLEL construct
6248 #pragma omp parallel
6257 void subfunction (void *data)
6264 GOMP_parallel_start (subfunction, &data, num_threads);
6265 subfunction (&data);
6266 GOMP_parallel_end ();
6270 void GOMP_parallel_start (void (*fn)(void *), void *data, unsigned num_threads)
6273 The @var{FN} argument is the subfunction to be run in parallel.
6275 The @var{DATA} argument is a pointer to a structure used to
6276 communicate data in and out of the subfunction, as discussed
6277 above with respect to FIRSTPRIVATE et al.
6279 The @var{NUM_THREADS} argument is 1 if an IF clause is present
6280 and false, or the value of the NUM_THREADS clause, if
6283 The function needs to create the appropriate number of
6284 threads and/or launch them from the dock. It needs to
6285 create the team structure and assign team ids.
6288 void GOMP_parallel_end (void)
6291 Tears down the team and returns us to the previous @code{omp_in_parallel()} state.
6295 @node Implementing FOR construct
6296 @section Implementing FOR construct
6299 #pragma omp parallel for
6300 for (i = lb; i <= ub; i++)
6307 void subfunction (void *data)
6310 while (GOMP_loop_static_next (&_s0, &_e0))
6313 for (i = _s0; i < _e1; i++)
6316 GOMP_loop_end_nowait ();
6319 GOMP_parallel_loop_static (subfunction, NULL, 0, lb, ub+1, 1, 0);
6321 GOMP_parallel_end ();
6325 #pragma omp for schedule(runtime)
6326 for (i = 0; i < n; i++)
6335 if (GOMP_loop_runtime_start (0, n, 1, &_s0, &_e0))
6338 for (i = _s0, i < _e0; i++)
6340 @} while (GOMP_loop_runtime_next (&_s0, _&e0));
6345 Note that while it looks like there is trickiness to propagating
6346 a non-constant STEP, there isn't really. We're explicitly allowed
6347 to evaluate it as many times as we want, and any variables involved
6348 should automatically be handled as PRIVATE or SHARED like any other
6349 variables. So the expression should remain evaluable in the
6350 subfunction. We can also pull it into a local variable if we like,
6351 but since its supposed to remain unchanged, we can also not if we like.
6353 If we have SCHEDULE(STATIC), and no ORDERED, then we ought to be
6354 able to get away with no work-sharing context at all, since we can
6355 simply perform the arithmetic directly in each thread to divide up
6356 the iterations. Which would mean that we wouldn't need to call any
6359 There are separate routines for handling loops with an ORDERED
6360 clause. Bookkeeping for that is non-trivial...
6364 @node Implementing ORDERED construct
6365 @section Implementing ORDERED construct
6368 void GOMP_ordered_start (void)
6369 void GOMP_ordered_end (void)
6374 @node Implementing SECTIONS construct
6375 @section Implementing SECTIONS construct
6380 #pragma omp sections
6394 for (i = GOMP_sections_start (3); i != 0; i = GOMP_sections_next ())
6411 @node Implementing SINGLE construct
6412 @section Implementing SINGLE construct
6426 if (GOMP_single_start ())
6434 #pragma omp single copyprivate(x)
6441 datap = GOMP_single_copy_start ();
6446 GOMP_single_copy_end (&data);
6455 @node Implementing OpenACC's PARALLEL construct
6456 @section Implementing OpenACC's PARALLEL construct
6459 void GOACC_parallel ()
6464 @c ---------------------------------------------------------------------
6466 @c ---------------------------------------------------------------------
6468 @node Reporting Bugs
6469 @chapter Reporting Bugs
6471 Bugs in the GNU Offloading and Multi Processing Runtime Library should
6472 be reported via @uref{https://gcc.gnu.org/bugzilla/, Bugzilla}. Please add
6473 "openacc", or "openmp", or both to the keywords field in the bug
6474 report, as appropriate.
6478 @c ---------------------------------------------------------------------
6479 @c GNU General Public License
6480 @c ---------------------------------------------------------------------
6482 @include gpl_v3.texi
6486 @c ---------------------------------------------------------------------
6487 @c GNU Free Documentation License
6488 @c ---------------------------------------------------------------------
6494 @c ---------------------------------------------------------------------
6495 @c Funding Free Software
6496 @c ---------------------------------------------------------------------
6498 @include funding.texi
6500 @c ---------------------------------------------------------------------
6502 @c ---------------------------------------------------------------------
6505 @unnumbered Library Index