\input texinfo @c -*-texinfo-*-
@setfilename libgomp.info
Copyright @copyright{} 2006-2022 Free Software Foundation, Inc.
Permission is granted to copy, distribute and/or modify this document
under the terms of the GNU Free Documentation License, Version 1.3 or
any later version published by the Free Software Foundation; with the
Invariant Sections being ``Funding Free Software'', the Front-Cover
texts being (a) (see below), and with the Back-Cover Texts being (b)
(see below). A copy of the license is included in the section entitled
``GNU Free Documentation License''.
(a) The FSF's Front-Cover Text is:
(b) The FSF's Back-Cover Text is:
You have freedom to copy and modify this GNU Manual, like GNU
software. Copies published by the Free Software Foundation raise
funds for GNU development.
@dircategory GNU Libraries
* libgomp: (libgomp). GNU Offloading and Multi Processing Runtime Library.
This manual documents libgomp, the GNU Offloading and Multi Processing
Runtime library. This is the GNU implementation of the OpenMP and
OpenACC APIs for parallel and accelerator programming in C/C++ and
Published by the Free Software Foundation
51 Franklin Street, Fifth Floor
Boston, MA 02110-1301 USA
@setchapternewpage odd
@title GNU Offloading and Multi Processing Runtime Library
@subtitle The GNU OpenMP and OpenACC Implementation
@vskip 0pt plus 1filll
@comment For the @value{version-GCC} Version*
Published by the Free Software Foundation @*
51 Franklin Street, Fifth Floor@*
Boston, MA 02110-1301, USA@*
@node Top, Enabling OpenMP
This manual documents the usage of libgomp, the GNU Offloading and
Multi Processing Runtime Library. This includes the GNU
implementation of the @uref{https://www.openmp.org, OpenMP} Application
Programming Interface (API) for multi-platform shared-memory parallel
programming in C/C++ and Fortran, and the GNU implementation of the
@uref{https://www.openacc.org, OpenACC} Application Programming
Interface (API) for offloading of code to accelerator devices in C/C++
Originally, libgomp implemented the GNU OpenMP Runtime Library. Based
on this, support for OpenACC and offloading (both OpenACC and OpenMP
4's target construct) was added later, and the library was renamed
to the GNU Offloading and Multi Processing Runtime Library.
@comment When you add a new menu item, please keep the right hand
@comment aligned to the same column. Do not use tabs. This provides
@comment better formatting.
* Enabling OpenMP:: How to enable OpenMP for your applications.
* OpenMP Implementation Status:: List of implemented features by OpenMP version
* OpenMP Runtime Library Routines: Runtime Library Routines.
                        The OpenMP runtime application programming
* OpenMP Environment Variables: Environment Variables.
                        Influencing OpenMP runtime behavior with
                        environment variables.
* Enabling OpenACC:: How to enable OpenACC for your
* OpenACC Runtime Library Routines:: The OpenACC runtime application
                        programming interface.
* OpenACC Environment Variables:: Influencing OpenACC runtime behavior with
                        environment variables.
* CUDA Streams Usage:: Notes on the implementation of
                        asynchronous operations.
* OpenACC Library Interoperability:: OpenACC library interoperability with the
                        NVIDIA CUBLAS library.
* OpenACC Profiling Interface::
* OpenMP-Implementation Specifics:: Notes on specifics of this OpenMP
* Offload-Target Specifics:: Notes on offload-target specific internals
* The libgomp ABI:: Notes on the external ABI presented by libgomp.
* Reporting Bugs:: How to report bugs in the GNU Offloading and
                        Multi Processing Runtime Library.
* Copying:: GNU General Public License says
                        how you can copy and share libgomp.
* GNU Free Documentation License::
                        How you can copy and share this manual.
* Funding:: How to help assure continued work for free
* Library Index:: Index of this documentation.
@c ---------------------------------------------------------------------
@c ---------------------------------------------------------------------
@node Enabling OpenMP
@chapter Enabling OpenMP
To activate the OpenMP extensions for C/C++ and Fortran, the compile-time
flag @command{-fopenmp} must be specified. This enables the OpenMP directive
@code{#pragma omp} in C/C++ and @code{!$omp} directives in free form,
@code{c$omp}, @code{*$omp} and @code{!$omp} directives in fixed form,
@code{!$} conditional compilation sentinels in free form and @code{c$},
@code{*$} and @code{!$} sentinels in fixed form, for Fortran. The flag also
arranges for automatic linking of the OpenMP runtime library
(@ref{Runtime Library Routines}).
A complete description of all OpenMP directives may be found in the
@uref{https://www.openmp.org, OpenMP Application Program Interface} manuals.
See also @ref{OpenMP Implementation Status}.
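As a small illustration of the above (the function name @code{parallel_width} is ours, not part of any OpenMP API): compiled with @command{gcc -fopenmp}, the region below runs in parallel; compiled without the flag, the @code{#pragma} is ignored and the @code{_OPENMP} guard removes the library calls, so no linking against libgomp is needed.

```c
/* Illustrative sketch only: parallel_width is a made-up helper.
   Build with `gcc -fopenmp` for parallel execution; without the
   flag the pragma is ignored and the function simply returns 1.  */
#ifdef _OPENMP
#include <omp.h>
#endif

int parallel_width (void)
{
  int width = 1;
#ifdef _OPENMP
  #pragma omp parallel
  {
    /* One thread records the team size; an implicit barrier follows.  */
    #pragma omp single
    width = omp_get_num_threads ();
  }
#endif
  return width;
}
```

The same @code{_OPENMP} guard is what the Fortran conditional-compilation sentinels provide on the Fortran side.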
@c ---------------------------------------------------------------------
@c OpenMP Implementation Status
@c ---------------------------------------------------------------------
@node OpenMP Implementation Status
@chapter OpenMP Implementation Status
* OpenMP 4.5:: Feature completion status to 4.5 specification
* OpenMP 5.0:: Feature completion status to 5.0 specification
* OpenMP 5.1:: Feature completion status to 5.1 specification
* OpenMP 5.2:: Feature completion status to 5.2 specification
* OpenMP Technical Report 11:: Feature completion status to first 6.0 preview
The @code{_OPENMP} preprocessor macro and Fortran's @code{openmp_version}
parameter, provided by @code{omp_lib.h} and the @code{omp_lib} module, have
the value @code{201511} (i.e. OpenMP 4.5).
The OpenMP 4.5 specification is fully supported.
@unnumberedsubsec New features listed in Appendix B of the OpenMP specification
@c This list is sorted as in OpenMP 5.1's B.3 not as in OpenMP 5.0's B.2
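The macro value encodes the year and month (@code{yyyymm}) in which the supported specification was published. A C sketch of decoding it; the helper @code{openmp_version_name} is illustrative only, not a libgomp routine:

```c
#include <string.h>   /* for callers that compare the returned name */

/* Illustrative helper: map known _OPENMP yyyymm values to the
   specification version they denote.  Not part of libgomp.  */
const char *openmp_version_name (long v)
{
  switch (v)
    {
    case 201107: return "3.1";
    case 201307: return "4.0";
    case 201511: return "4.5";   /* the value libgomp reports */
    case 201811: return "5.0";
    case 202011: return "5.1";
    default:     return "unknown";
    }
}
```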
@multitable @columnfractions .60 .10 .25
@headitem Description @tab Status @tab Comments
@item Array shaping @tab N @tab
@item Array sections with non-unit strides in C and C++ @tab N @tab
@item Iterators @tab Y @tab
@item @code{metadirective} directive @tab N @tab
@item @code{declare variant} directive
@tab P @tab @emph{simd} traits not handled correctly
@item @emph{target-offload-var} ICV and @code{OMP_TARGET_OFFLOAD}
env variable @tab Y @tab
@item Nested-parallel changes to @emph{max-active-levels-var} ICV @tab Y @tab
@item @code{requires} directive @tab P
@tab complete but no non-host device provides @code{unified_address},
@code{unified_shared_memory} or @code{reverse_offload}
@item @code{teams} construct outside an enclosing target region @tab Y @tab
@item Non-rectangular loop nests @tab Y @tab
@item @code{!=} as relational-op in canonical loop form for C/C++ @tab Y @tab
@item @code{nonmonotonic} as default loop schedule modifier for worksharing-loop
constructs @tab Y @tab
@item Collapse of associated loops that are imperfectly nested loops @tab N @tab
@item Clauses @code{if}, @code{nontemporal} and @code{order(concurrent)} in
@code{simd} construct @tab Y @tab
@item @code{atomic} constructs in @code{simd} @tab Y @tab
@item @code{loop} construct @tab Y @tab
@item @code{order(concurrent)} clause @tab Y @tab
@item @code{scan} directive and @code{in_scan} modifier for the
@code{reduction} clause @tab Y @tab
@item @code{in_reduction} clause on @code{task} constructs @tab Y @tab
@item @code{in_reduction} clause on @code{target} constructs @tab P
@tab @code{nowait} only stub
@item @code{task_reduction} clause with @code{taskgroup} @tab Y @tab
@item @code{task} modifier to @code{reduction} clause @tab Y @tab
@item @code{affinity} clause to @code{task} construct @tab Y @tab Stub only
@item @code{detach} clause to @code{task} construct @tab Y @tab
@item @code{omp_fulfill_event} runtime routine @tab Y @tab
@item @code{reduction} and @code{in_reduction} clauses on @code{taskloop}
and @code{taskloop simd} constructs @tab Y @tab
@item @code{taskloop} construct cancelable by @code{cancel} construct
@item @code{mutexinoutset} @emph{dependence-type} for @code{depend} clause
@item Predefined memory spaces, memory allocators, allocator traits
@tab Y @tab Some are only stubs
@item Memory management routines @tab Y @tab
@item @code{allocate} directive @tab N @tab
@item @code{allocate} clause @tab P @tab Initial support
@item @code{use_device_addr} clause on @code{target data} @tab Y @tab
@item @code{ancestor} modifier on @code{device} clause
@tab Y @tab See comment for @code{requires}
@item Implicit declare target directive @tab Y @tab
@item Discontiguous array section with @code{target update} construct
@item C/C++'s lvalue expressions in @code{to}, @code{from}
and @code{map} clauses @tab N @tab
@item C/C++'s lvalue expressions in @code{depend} clauses @tab Y @tab
@item Nested @code{declare target} directive @tab Y @tab
@item Combined @code{master} constructs @tab Y @tab
@item @code{depend} clause on @code{taskwait} @tab Y @tab
@item Weak memory ordering clauses on @code{atomic} and @code{flush} construct
@item @code{hint} clause on the @code{atomic} construct @tab Y @tab Stub only
@item @code{depobj} construct and depend objects @tab Y @tab
@item Lock hints were renamed to synchronization hints @tab Y @tab
@item @code{conditional} modifier to @code{lastprivate} clause @tab Y @tab
@item Map-order clarifications @tab P @tab
@item @code{close} @emph{map-type-modifier} @tab Y @tab
@item Mapping C/C++ pointer variables and to assign the address of
device memory mapped by an array section @tab P @tab
@item Mapping of Fortran pointer and allocatable variables, including pointer
and allocatable components of variables
@tab P @tab Mapping of vars with allocatable components unsupported
@item @code{defaultmap} extensions @tab Y @tab
@item @code{declare mapper} directive @tab N @tab
@item @code{omp_get_supported_active_levels} routine @tab Y @tab
@item Runtime routines and environment variables to display runtime thread
affinity information @tab Y @tab
@item @code{omp_pause_resource} and @code{omp_pause_resource_all} runtime
@item @code{omp_get_device_num} runtime routine @tab Y @tab
@item OMPT interface @tab N @tab
@item OMPD interface @tab N @tab
@unnumberedsubsec Other new OpenMP 5.0 features
@multitable @columnfractions .60 .10 .25
@headitem Description @tab Status @tab Comments
@item Supporting C++'s range-based for loop @tab Y @tab
@unnumberedsubsec New features listed in Appendix B of the OpenMP specification
@multitable @columnfractions .60 .10 .25
@headitem Description @tab Status @tab Comments
@item OpenMP directive as C++ attribute specifiers @tab Y @tab
@item @code{omp_all_memory} reserved locator @tab Y @tab
@item @emph{target_device trait} in OpenMP Context @tab N @tab
@item @code{target_device} selector set in context selectors @tab N @tab
@item C/C++'s @code{declare variant} directive: elision support of
preprocessed code @tab N @tab
@item @code{declare variant}: new clauses @code{adjust_args} and
@code{append_args} @tab N @tab
@item @code{dispatch} construct @tab N @tab
@item device-specific ICV settings with environment variables @tab Y @tab
@item @code{assume} directive @tab Y @tab
@item @code{nothing} directive @tab Y @tab
@item @code{error} directive @tab Y @tab
@item @code{masked} construct @tab Y @tab
@item @code{scope} directive @tab Y @tab
@item Loop transformation constructs @tab N @tab
@item @code{strict} modifier in the @code{grainsize} and @code{num_tasks}
clauses of the @code{taskloop} construct @tab Y @tab
@item @code{align} clause in @code{allocate} directive @tab N @tab
@item @code{align} modifier in @code{allocate} clause @tab Y @tab
@item @code{thread_limit} clause to @code{target} construct @tab Y @tab
@item @code{has_device_addr} clause to @code{target} construct @tab Y @tab
@item Iterators in @code{target update} motion clauses and @code{map}
@item Indirect calls to the device version of a procedure or function in
@code{target} regions @tab N @tab
@item @code{interop} directive @tab N @tab
@item @code{omp_interop_t} object support in runtime routines @tab N @tab
@item @code{nowait} clause in @code{taskwait} directive @tab Y @tab
@item Extensions to the @code{atomic} directive @tab Y @tab
@item @code{seq_cst} clause on a @code{flush} construct @tab Y @tab
@item @code{inoutset} argument to the @code{depend} clause @tab Y @tab
@item @code{private} and @code{firstprivate} argument to @code{default}
clause in C and C++ @tab Y @tab
@item @code{present} argument to @code{defaultmap} clause @tab N @tab
@item @code{omp_set_num_teams}, @code{omp_set_teams_thread_limit},
@code{omp_get_max_teams}, @code{omp_get_teams_thread_limit} runtime
@item @code{omp_target_is_accessible} runtime routine @tab Y @tab
@item @code{omp_target_memcpy_async} and @code{omp_target_memcpy_rect_async}
runtime routines @tab Y @tab
@item @code{omp_get_mapped_ptr} runtime routine @tab Y @tab
@item @code{omp_calloc}, @code{omp_realloc}, @code{omp_aligned_alloc} and
@code{omp_aligned_calloc} runtime routines @tab Y @tab
@item @code{omp_alloctrait_key_t} enum: @code{omp_atv_serialized} added,
@code{omp_atv_default} changed @tab Y @tab
@item @code{omp_display_env} runtime routine @tab Y @tab
@item @code{ompt_scope_endpoint_t} enum: @code{ompt_scope_beginend} @tab N @tab
@item @code{ompt_sync_region_t} enum additions @tab N @tab
@item @code{ompt_state_t} enum: @code{ompt_state_wait_barrier_implementation}
and @code{ompt_state_wait_barrier_teams} @tab N @tab
@item @code{ompt_callback_target_data_op_emi_t},
@code{ompt_callback_target_emi_t}, @code{ompt_callback_target_map_emi_t}
and @code{ompt_callback_target_submit_emi_t} @tab N @tab
@item @code{ompt_callback_error_t} type @tab N @tab
@item @code{OMP_PLACES} syntax extensions @tab Y @tab
@item @code{OMP_NUM_TEAMS} and @code{OMP_TEAMS_THREAD_LIMIT} environment
variables @tab Y @tab
@unnumberedsubsec Other new OpenMP 5.1 features
@multitable @columnfractions .60 .10 .25
@headitem Description @tab Status @tab Comments
@item Support of strictly structured blocks in Fortran @tab Y @tab
@item Support of structured block sequences in C/C++ @tab Y @tab
@item @code{unconstrained} and @code{reproducible} modifiers on @code{order}
@item Support @code{begin/end declare target} syntax in C/C++ @tab Y @tab
@item Pointer predetermined firstprivate getting initialized
to address of matching mapped list item per 5.1, Sect. 2.21.7.2 @tab N @tab
@item For Fortran, diagnose placing declarative before/between @code{USE},
@code{IMPORT}, and @code{IMPLICIT} as invalid @tab N @tab
@item Optional comma between directive and clause in the @code{#pragma} form @tab Y @tab
@item @code{indirect} clause in @code{declare target} @tab N @tab
@item @code{device_type(nohost)}/@code{device_type(host)} for variables @tab N @tab
@unnumberedsubsec New features listed in Appendix B of the OpenMP specification
@multitable @columnfractions .60 .10 .25
@headitem Description @tab Status @tab Comments
@item @code{omp_in_explicit_task} routine and @emph{explicit-task-var} ICV
@item @code{omp}/@code{ompx}/@code{omx} sentinels and @code{omp_}/@code{ompx_}
@tab warning for @code{ompx/omx} sentinels@footnote{The @code{ompx}
sentinel as C/C++ pragma and C++ attributes are warned for with
@code{-Wunknown-pragmas} (implied by @code{-Wall}) and @code{-Wattributes}
(enabled by default), respectively; for Fortran free-source code, there is
a warning enabled by default and, for fixed-source code, the @code{omx}
sentinel is warned for with @code{-Wsurprising} (enabled by
@code{-Wall}). Unknown clauses are always rejected with an error.}
@item Clauses on @code{end} directive can be on directive @tab Y @tab
@item Deprecation of no-argument @code{destroy} clause on @code{depobj}
@item @code{linear} clause syntax changes and @code{step} modifier @tab Y @tab
@item Deprecation of minus operator for reductions @tab N @tab
@item Deprecation of separating @code{map} modifiers without comma @tab N @tab
@item @code{declare mapper} with iterator and @code{present} modifiers
@item If a matching mapped list item is not found in the data environment, the
pointer retains its original value @tab N @tab
@item New @code{enter} clause as alias for @code{to} on declare target directive
@item Deprecation of @code{to} clause on declare target directive @tab N @tab
@item Extended list of directives permitted in Fortran pure procedures
@item New @code{allocators} directive for Fortran @tab N @tab
@item Deprecation of @code{allocate} directive for Fortran
allocatables/pointers @tab N @tab
@item Optional paired @code{end} directive with @code{dispatch} @tab N @tab
@item New @code{memspace} and @code{traits} modifiers for @code{uses_allocators}
@item Deprecation of traits array following the allocator_handle expression in
@code{uses_allocators} @tab N @tab
@item New @code{otherwise} clause as alias for @code{default} on metadirectives
@item Deprecation of @code{default} clause on metadirectives @tab N @tab
@item Deprecation of delimited form of @code{declare target} @tab N @tab
@item Reproducible semantics changed for @code{order(concurrent)} @tab N @tab
@item @code{allocate} and @code{firstprivate} clauses on @code{scope}
@item @code{ompt_callback_work} @tab N @tab
@item Default map-type for the @code{map} clause in @code{target enter/exit data}
@item New @code{doacross} clause as alias for @code{depend} with
@code{source}/@code{sink} modifier @tab Y @tab
@item Deprecation of @code{depend} with @code{source}/@code{sink} modifier
@item @code{omp_cur_iteration} keyword @tab Y @tab
@unnumberedsubsec Other new OpenMP 5.2 features
@multitable @columnfractions .60 .10 .25
@headitem Description @tab Status @tab Comments
@item For Fortran, optional comma between directive and clause @tab N @tab
@item Conforming device numbers and @code{omp_initial_device} and
@code{omp_invalid_device} enum/PARAMETER @tab Y @tab
@item Initial value of @emph{default-device-var} ICV with
@code{OMP_TARGET_OFFLOAD=mandatory} @tab N @tab
@item @emph{interop_types} in any position of the modifier list for the @code{init} clause
of the @code{interop} construct @tab N @tab
@node OpenMP Technical Report 11
@section OpenMP Technical Report 11
Technical Report (TR) 11 is the first preview for OpenMP 6.0.
@unnumberedsubsec New features listed in Appendix B of the OpenMP specification
@multitable @columnfractions .60 .10 .25
@item Features deprecated in versions 5.2, 5.1 and 5.0 were removed
@tab N/A @tab Backward compatibility
@item The @code{decl} attribute was added to the C++ attribute syntax
@item @code{_ALL} suffix to the device-scope environment variables
@tab P @tab Host device number wrongly accepted
@item For Fortran, @emph{locator list} can also be a function reference with
data pointer result @tab N @tab
@item Ref-count change for @code{use_device_ptr}/@code{use_device_addr}
@item Implicit reduction identifiers of C++ classes
@item Change of the @emph{map-type} property from @emph{ultimate} to
@emph{default} @tab N @tab
@item Concept of @emph{assumed-size arrays} in C and C++
@item Mapping of @emph{assumed-size arrays} in C, C++ and Fortran
@item @code{groupprivate} directive @tab N @tab
@item @code{local} clause to declare target directive @tab N @tab
@item @code{part_size} allocator trait @tab N @tab
@item @code{pin_device}, @code{preferred_device} and @code{target_access}
@item @code{access} allocator trait changes @tab N @tab
@item Extension of @code{interop} operation of @code{append_args}, allowing all
modifiers of the @code{init} clause
@item @code{interop} clause to @code{dispatch} @tab N @tab
@item @code{apply} clause to loop-transforming constructs @tab N @tab
@item @code{omp_curr_progress_width} identifier @tab N @tab
@item @code{safesync} clause to the @code{parallel} construct @tab N @tab
@item @code{omp_get_max_progress_width} runtime routine @tab N @tab
@item @code{strict} modifier keyword to @code{num_threads}, @code{num_tasks}
and @code{grainsize} @tab N @tab
@item @code{memscope} clause to @code{atomic} and @code{flush} @tab N @tab
@item Routines for obtaining memory spaces/allocators for shared/device memory
@item @code{omp_get_memspace_num_resources} routine @tab N @tab
@item @code{omp_get_submemspace} routine @tab N @tab
@item @code{ompt_get_buffer_limits} OMPT routine @tab N @tab
@item Extension of @code{OMP_DEFAULT_DEVICE} and new
@code{OMP_AVAILABLE_DEVICES} environment vars @tab N @tab
@item Supporting increments with abstract names in @code{OMP_PLACES} @tab N @tab
@unnumberedsubsec Other new TR 11 features
@multitable @columnfractions .60 .10 .25
@item Relaxed Fortran restrictions to the @code{aligned} clause @tab N @tab
@item Mapping lambda captures @tab N @tab
@item For Fortran, atomic compare with storing the comparison result
@item @code{aligned} clause changes for @code{simd} and @code{declare simd}
@c ---------------------------------------------------------------------
@c OpenMP Runtime Library Routines
@c ---------------------------------------------------------------------
@node Runtime Library Routines
@chapter OpenMP Runtime Library Routines
The runtime routines described here are defined by Section 3 of the OpenMP
specification in version 4.5. The routines are structured into the following
Control threads, processors and the parallel environment. They have C
linkage and do not throw exceptions.
* omp_get_active_level:: Number of active parallel regions
* omp_get_ancestor_thread_num:: Ancestor thread ID
* omp_get_cancellation:: Whether cancellation support is enabled
* omp_get_default_device:: Get the default device for target regions
* omp_get_device_num:: Get device that current thread is running on
* omp_get_dynamic:: Dynamic teams setting
* omp_get_initial_device:: Device number of host device
* omp_get_level:: Number of parallel regions
* omp_get_max_active_levels:: Current maximum number of active regions
* omp_get_max_task_priority:: Maximum task priority value that can be set
* omp_get_max_teams:: Maximum number of teams for teams region
* omp_get_max_threads:: Maximum number of threads of parallel region
* omp_get_nested:: Nested parallel regions
* omp_get_num_devices:: Number of target devices
* omp_get_num_procs:: Number of processors online
* omp_get_num_teams:: Number of teams
* omp_get_num_threads:: Size of the active team
* omp_get_proc_bind:: Whether threads may be moved between CPUs
* omp_get_schedule:: Obtain the runtime scheduling method
* omp_get_supported_active_levels:: Maximum number of active regions supported
* omp_get_team_num:: Get team number
* omp_get_team_size:: Number of threads in a team
* omp_get_teams_thread_limit:: Maximum number of threads imposed by teams
* omp_get_thread_limit:: Maximum number of threads
* omp_get_thread_num:: Current thread ID
* omp_in_parallel:: Whether a parallel region is active
* omp_in_final:: Whether in final or included task region
* omp_is_initial_device:: Whether executing on the host device
* omp_set_default_device:: Set the default device for target regions
* omp_set_dynamic:: Enable/disable dynamic teams
* omp_set_max_active_levels:: Limits the number of active parallel regions
* omp_set_nested:: Enable/disable nested parallel regions
* omp_set_num_teams:: Set upper teams limit for teams region
* omp_set_num_threads:: Set upper team size limit
* omp_set_schedule:: Set the runtime scheduling method
* omp_set_teams_thread_limit:: Set upper thread limit for teams construct
Initialize, set, test, unset and destroy simple and nested locks.
* omp_init_lock:: Initialize simple lock
* omp_set_lock:: Wait for and set simple lock
* omp_test_lock:: Test and set simple lock if available
* omp_unset_lock:: Unset simple lock
* omp_destroy_lock:: Destroy simple lock
* omp_init_nest_lock:: Initialize nested lock
* omp_set_nest_lock:: Wait for and set nested lock
* omp_test_nest_lock:: Test and set nested lock if available
* omp_unset_nest_lock:: Unset nested lock
* omp_destroy_nest_lock:: Destroy nested lock
Portable, thread-based, wall clock timer.
* omp_get_wtick:: Get timer precision.
* omp_get_wtime:: Elapsed wall clock time.
Support for event objects.
* omp_fulfill_event:: Fulfill and destroy an OpenMP event.
@node omp_get_active_level
@section @code{omp_get_active_level} -- Number of active parallel regions
@item @emph{Description}:
This function returns the nesting level of the active parallel regions
that enclose the call.
@multitable @columnfractions .20 .80
@item @emph{Prototype}: @tab @code{int omp_get_active_level(void);}
@item @emph{Fortran}:
@multitable @columnfractions .20 .80
@item @emph{Interface}: @tab @code{integer function omp_get_active_level()}
@item @emph{See also}:
@ref{omp_get_level}, @ref{omp_get_max_active_levels}, @ref{omp_set_max_active_levels}
@item @emph{Reference}:
@uref{https://www.openmp.org, OpenMP specification v4.5}, Section 3.2.20.
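The difference between @code{omp_get_level} and @code{omp_get_active_level} can be sketched in C; the serial stubs below are ours and only stand in when the code is built without @command{-fopenmp}:

```c
/* Illustrative sketch, not libgomp code: contrast omp_get_level
   (all enclosing parallel regions) with omp_get_active_level (only
   those executing with more than one thread).  */
#ifdef _OPENMP
#include <omp.h>
#else
static int omp_get_level (void)        { return 0; }
static int omp_get_active_level (void) { return 0; }
#endif

/* Outside any parallel region both nesting counters are zero.  */
int levels_outside_parallel (void)
{
  return omp_get_level () + omp_get_active_level ();
}
```

Inside a region serialized by an @code{if(0)} clause, @code{omp_get_level} increases while @code{omp_get_active_level} does not.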
@node omp_get_ancestor_thread_num
@section @code{omp_get_ancestor_thread_num} -- Ancestor thread ID
@item @emph{Description}:
This function returns the thread identification number for the given
nesting level of the current thread. If @var{level} is outside the
range 0 to @code{omp_get_level}, -1 is returned; if @var{level} equals
@code{omp_get_level}, the result is identical to @code{omp_get_thread_num}.
@multitable @columnfractions .20 .80
@item @emph{Prototype}: @tab @code{int omp_get_ancestor_thread_num(int level);}
@item @emph{Fortran}:
@multitable @columnfractions .20 .80
@item @emph{Interface}: @tab @code{integer function omp_get_ancestor_thread_num(level)}
@item @tab @code{integer level}
@item @emph{See also}:
@ref{omp_get_level}, @ref{omp_get_thread_num}, @ref{omp_get_team_size}
@item @emph{Reference}:
@uref{https://www.openmp.org, OpenMP specification v4.5}, Section 3.2.18.
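The boundary behavior can be sketched in C; the helper names are ours, and the serial stubs only stand in when building without @command{-fopenmp}:

```c
/* Illustrative sketch, not libgomp code: the ancestor query at level 0
   names the initial thread, and out-of-range levels yield -1.  */
#ifdef _OPENMP
#include <omp.h>
#else
static int omp_get_level (void) { return 0; }
static int omp_get_ancestor_thread_num (int level)
{ return level == 0 ? 0 : -1; }
#endif

int ancestor_of_initial (void)
{
  /* Level 0 is the initial thread, whose thread number is 0.  */
  return omp_get_ancestor_thread_num (0);
}

int out_of_range_query (void)
{
  /* One level deeper than the current nesting level: -1.  */
  return omp_get_ancestor_thread_num (omp_get_level () + 1);
}
```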
@node omp_get_cancellation
@section @code{omp_get_cancellation} -- Whether cancellation support is enabled
@item @emph{Description}:
This function returns @code{true} if cancellation is activated, @code{false}
otherwise. Here, @code{true} and @code{false} represent their language-specific
counterparts. Unless @env{OMP_CANCELLATION} is set true, cancellations are
@multitable @columnfractions .20 .80
@item @emph{Prototype}: @tab @code{int omp_get_cancellation(void);}
@item @emph{Fortran}:
@multitable @columnfractions .20 .80
@item @emph{Interface}: @tab @code{logical function omp_get_cancellation()}
@item @emph{See also}:
@ref{OMP_CANCELLATION}
@item @emph{Reference}:
@uref{https://www.openmp.org, OpenMP specification v4.5}, Section 3.2.9.
@node omp_get_default_device
@section @code{omp_get_default_device} -- Get the default device for target regions
@item @emph{Description}:
Get the default device for target regions without a device clause.
@multitable @columnfractions .20 .80
@item @emph{Prototype}: @tab @code{int omp_get_default_device(void);}
@item @emph{Fortran}:
@multitable @columnfractions .20 .80
@item @emph{Interface}: @tab @code{integer function omp_get_default_device()}
@item @emph{See also}:
@ref{OMP_DEFAULT_DEVICE}, @ref{omp_set_default_device}
@item @emph{Reference}:
@uref{https://www.openmp.org, OpenMP specification v4.5}, Section 3.2.30.
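As a sketch of the conforming range (helper name ours; host-only stubs stand in when building without @command{-fopenmp}):

```c
/* Illustrative sketch, not libgomp code: the default device number
   lies between 0 and omp_get_num_devices(), the latter denoting
   the host.  */
#ifdef _OPENMP
#include <omp.h>
#else
static int omp_get_default_device (void) { return 0; }
static int omp_get_num_devices (void)    { return 0; }
#endif

int default_device_in_range (void)
{
  int d = omp_get_default_device ();
  return d >= 0 && d <= omp_get_num_devices ();
}
```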
@node omp_get_device_num
@section @code{omp_get_device_num} -- Return device number of current device
@item @emph{Description}:
This function returns a device number that represents the device that the
current thread is executing on. For OpenMP 5.0, this must be equal to the
value returned by the @code{omp_get_initial_device} function when called
@multitable @columnfractions .20 .80
@item @emph{Prototype}: @tab @code{int omp_get_device_num(void);}
@item @emph{Fortran}:
@multitable @columnfractions .20 .80
@item @emph{Interface}: @tab @code{integer function omp_get_device_num()}
@item @emph{See also}:
@ref{omp_get_initial_device}
@item @emph{Reference}:
@uref{https://www.openmp.org, OpenMP specification v5.0}, Section 3.2.37.
@node omp_get_dynamic
@section @code{omp_get_dynamic} -- Dynamic teams setting
@item @emph{Description}:
This function returns @code{true} if dynamic teams are enabled, @code{false} otherwise.
Here, @code{true} and @code{false} represent their language-specific
The dynamic team setting may be initialized at startup by the
@env{OMP_DYNAMIC} environment variable or at runtime using
@code{omp_set_dynamic}. If undefined, dynamic adjustment is
@multitable @columnfractions .20 .80
@item @emph{Prototype}: @tab @code{int omp_get_dynamic(void);}
@item @emph{Fortran}:
@multitable @columnfractions .20 .80
@item @emph{Interface}: @tab @code{logical function omp_get_dynamic()}
@item @emph{See also}:
@ref{omp_set_dynamic}, @ref{OMP_DYNAMIC}
@item @emph{Reference}:
@uref{https://www.openmp.org, OpenMP specification v4.5}, Section 3.2.8.
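A sketch of the set-then-get round trip (helper name ours; the stubs model the disabled default when building without @command{-fopenmp}):

```c
/* Illustrative sketch, not libgomp code: read back the dynamic-teams
   setting after forcing a known state.  */
#ifdef _OPENMP
#include <omp.h>
#else
static int dyn_flag = 0;   /* dynamic adjustment off by default */
static void omp_set_dynamic (int v) { dyn_flag = (v != 0); }
static int  omp_get_dynamic (void)  { return dyn_flag; }
#endif

int dynamic_after_disable (void)
{
  omp_set_dynamic (0);         /* force a known state */
  return omp_get_dynamic ();   /* now reports 0 (disabled) */
}
```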
@node omp_get_initial_device
@section @code{omp_get_initial_device} -- Return device number of initial device
@item @emph{Description}:
This function returns a device number that represents the host device.
For OpenMP 5.1, this must be equal to the value returned by the
@code{omp_get_num_devices} function.
@multitable @columnfractions .20 .80
@item @emph{Prototype}: @tab @code{int omp_get_initial_device(void);}
@item @emph{Fortran}:
@multitable @columnfractions .20 .80
@item @emph{Interface}: @tab @code{integer function omp_get_initial_device()}
@item @emph{See also}:
@ref{omp_get_num_devices}
@item @emph{Reference}:
@uref{https://www.openmp.org, OpenMP specification v4.5}, Section 3.2.35.
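The 5.1 equality stated above can be sketched in C (helper name ours; the stubs model a host-only system when building without @command{-fopenmp}):

```c
/* Illustrative sketch, not libgomp code: per the OpenMP 5.1 rule,
   the host device number equals omp_get_num_devices().  */
#ifdef _OPENMP
#include <omp.h>
#else
static int omp_get_num_devices (void)    { return 0; }
static int omp_get_initial_device (void) { return 0; }
#endif

/* Zero on any implementation following the rule quoted above.  */
int host_is_last_device (void)
{
  return omp_get_initial_device () - omp_get_num_devices ();
}
```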
@section @code{omp_get_level} -- Obtain the current nesting level
@item @emph{Description}:
This function returns the nesting level of the parallel regions
that enclose the call.
@multitable @columnfractions .20 .80
@item @emph{Prototype}: @tab @code{int omp_get_level(void);}
@item @emph{Fortran}:
@multitable @columnfractions .20 .80
@item @emph{Interface}: @tab @code{integer function omp_get_level()}
@item @emph{See also}:
@ref{omp_get_active_level}
@item @emph{Reference}:
@uref{https://www.openmp.org, OpenMP specification v4.5}, Section 3.2.17.
@node omp_get_max_active_levels
@section @code{omp_get_max_active_levels} -- Current maximum number of active regions
@item @emph{Description}:
This function obtains the maximum allowed number of nested, active parallel regions.
@multitable @columnfractions .20 .80
@item @emph{Prototype}: @tab @code{int omp_get_max_active_levels(void);}
@item @emph{Fortran}:
@multitable @columnfractions .20 .80
@item @emph{Interface}: @tab @code{integer function omp_get_max_active_levels()}
@item @emph{See also}:
@ref{omp_set_max_active_levels}, @ref{omp_get_active_level}
@item @emph{Reference}:
@uref{https://www.openmp.org, OpenMP specification v4.5}, Section 3.2.16.
@node omp_get_max_task_priority
@section @code{omp_get_max_task_priority} -- Maximum priority value that can be set for tasks
@table @asis
@item @emph{Description}:
This function obtains the maximum allowed priority number for tasks.

@item @emph{C/C++}:
@multitable @columnfractions .20 .80
@item @emph{Prototype}: @tab @code{int omp_get_max_task_priority(void);}
@end multitable

@item @emph{Fortran}:
@multitable @columnfractions .20 .80
@item @emph{Interface}: @tab @code{integer function omp_get_max_task_priority()}
@end multitable

@item @emph{Reference}:
@uref{https://www.openmp.org, OpenMP specification v4.5}, Section 3.2.29.
@end table
@node omp_get_max_teams
@section @code{omp_get_max_teams} -- Maximum number of teams of teams region
@table @asis
@item @emph{Description}:
Return the maximum number of teams used for the teams region
that does not use the clause @code{num_teams}.

@item @emph{C/C++}:
@multitable @columnfractions .20 .80
@item @emph{Prototype}: @tab @code{int omp_get_max_teams(void);}
@end multitable

@item @emph{Fortran}:
@multitable @columnfractions .20 .80
@item @emph{Interface}: @tab @code{integer function omp_get_max_teams()}
@end multitable

@item @emph{See also}:
@ref{omp_set_num_teams}, @ref{omp_get_num_teams}

@item @emph{Reference}:
@uref{https://www.openmp.org, OpenMP specification v5.1}, Section 3.4.4.
@end table
@node omp_get_max_threads
@section @code{omp_get_max_threads} -- Maximum number of threads of parallel region
@table @asis
@item @emph{Description}:
Return the maximum number of threads used for the current parallel region
that does not use the clause @code{num_threads}.

@item @emph{C/C++}:
@multitable @columnfractions .20 .80
@item @emph{Prototype}: @tab @code{int omp_get_max_threads(void);}
@end multitable

@item @emph{Fortran}:
@multitable @columnfractions .20 .80
@item @emph{Interface}: @tab @code{integer function omp_get_max_threads()}
@end multitable

@item @emph{See also}:
@ref{omp_set_num_threads}, @ref{omp_set_dynamic}, @ref{omp_get_thread_limit}

@item @emph{Reference}:
@uref{https://www.openmp.org, OpenMP specification v4.5}, Section 3.2.3.
@end table
@node omp_get_nested
@section @code{omp_get_nested} -- Nested parallel regions
@table @asis
@item @emph{Description}:
This function returns @code{true} if nested parallel regions are
enabled, @code{false} otherwise. Here, @code{true} and @code{false}
represent their language-specific counterparts.

The state of nested parallel regions at startup depends on several
environment variables. If @env{OMP_MAX_ACTIVE_LEVELS} is defined
and is set to greater than one, then nested parallel regions will be
enabled. If not defined, then the value of the @env{OMP_NESTED}
environment variable will be followed if defined. If neither is
defined, then if either @env{OMP_NUM_THREADS} or @env{OMP_PROC_BIND}
is defined with a list of more than one value, then nested parallel
regions are enabled. If none of these are defined, then nested parallel
regions are disabled by default.

Nested parallel regions can be enabled or disabled at runtime using
@code{omp_set_nested}, or by setting the maximum number of nested
regions with @code{omp_set_max_active_levels} to one to disable, or
above one to enable.

@item @emph{C/C++}:
@multitable @columnfractions .20 .80
@item @emph{Prototype}: @tab @code{int omp_get_nested(void);}
@end multitable

@item @emph{Fortran}:
@multitable @columnfractions .20 .80
@item @emph{Interface}: @tab @code{logical function omp_get_nested()}
@end multitable

@item @emph{See also}:
@ref{omp_set_max_active_levels}, @ref{omp_set_nested},
@ref{OMP_MAX_ACTIVE_LEVELS}, @ref{OMP_NESTED}

@item @emph{Reference}:
@uref{https://www.openmp.org, OpenMP specification v4.5}, Section 3.2.11.
@end table
@node omp_get_num_devices
@section @code{omp_get_num_devices} -- Number of target devices
@table @asis
@item @emph{Description}:
Returns the number of target devices.

@item @emph{C/C++}:
@multitable @columnfractions .20 .80
@item @emph{Prototype}: @tab @code{int omp_get_num_devices(void);}
@end multitable

@item @emph{Fortran}:
@multitable @columnfractions .20 .80
@item @emph{Interface}: @tab @code{integer function omp_get_num_devices()}
@end multitable

@item @emph{Reference}:
@uref{https://www.openmp.org, OpenMP specification v4.5}, Section 3.2.31.
@end table
@node omp_get_num_procs
@section @code{omp_get_num_procs} -- Number of processors online
@table @asis
@item @emph{Description}:
Returns the number of processors online on that device.

@item @emph{C/C++}:
@multitable @columnfractions .20 .80
@item @emph{Prototype}: @tab @code{int omp_get_num_procs(void);}
@end multitable

@item @emph{Fortran}:
@multitable @columnfractions .20 .80
@item @emph{Interface}: @tab @code{integer function omp_get_num_procs()}
@end multitable

@item @emph{Reference}:
@uref{https://www.openmp.org, OpenMP specification v4.5}, Section 3.2.5.
@end table
@node omp_get_num_teams
@section @code{omp_get_num_teams} -- Number of teams
@table @asis
@item @emph{Description}:
Returns the number of teams in the current teams region.

@item @emph{C/C++}:
@multitable @columnfractions .20 .80
@item @emph{Prototype}: @tab @code{int omp_get_num_teams(void);}
@end multitable

@item @emph{Fortran}:
@multitable @columnfractions .20 .80
@item @emph{Interface}: @tab @code{integer function omp_get_num_teams()}
@end multitable

@item @emph{Reference}:
@uref{https://www.openmp.org, OpenMP specification v4.5}, Section 3.2.32.
@end table
@node omp_get_num_threads
@section @code{omp_get_num_threads} -- Size of the active team
@table @asis
@item @emph{Description}:
Returns the number of threads in the current team. In a sequential section of
the program @code{omp_get_num_threads} returns 1.

The default team size may be initialized at startup by the
@env{OMP_NUM_THREADS} environment variable. At runtime, the size
of the current team may be set either by the @code{num_threads}
clause or by @code{omp_set_num_threads}. If none of the above were
used to define a specific value and @env{OMP_DYNAMIC} is disabled,
one thread per CPU online is used.

@item @emph{C/C++}:
@multitable @columnfractions .20 .80
@item @emph{Prototype}: @tab @code{int omp_get_num_threads(void);}
@end multitable

@item @emph{Fortran}:
@multitable @columnfractions .20 .80
@item @emph{Interface}: @tab @code{integer function omp_get_num_threads()}
@end multitable

@item @emph{See also}:
@ref{omp_get_max_threads}, @ref{omp_set_num_threads}, @ref{OMP_NUM_THREADS}

@item @emph{Reference}:
@uref{https://www.openmp.org, OpenMP specification v4.5}, Section 3.2.2.
@end table
@node omp_get_proc_bind
@section @code{omp_get_proc_bind} -- Whether threads may be moved between CPUs
@table @asis
@item @emph{Description}:
This function returns the currently active thread affinity policy, which is
set via @env{OMP_PROC_BIND}. Possible values are @code{omp_proc_bind_false},
@code{omp_proc_bind_true}, @code{omp_proc_bind_primary},
@code{omp_proc_bind_master}, @code{omp_proc_bind_close} and @code{omp_proc_bind_spread},
where @code{omp_proc_bind_master} is an alias for @code{omp_proc_bind_primary}.

@item @emph{C/C++}:
@multitable @columnfractions .20 .80
@item @emph{Prototype}: @tab @code{omp_proc_bind_t omp_get_proc_bind(void);}
@end multitable

@item @emph{Fortran}:
@multitable @columnfractions .20 .80
@item @emph{Interface}: @tab @code{integer(kind=omp_proc_bind_kind) function omp_get_proc_bind()}
@end multitable

@item @emph{See also}:
@ref{OMP_PROC_BIND}, @ref{OMP_PLACES}, @ref{GOMP_CPU_AFFINITY}

@item @emph{Reference}:
@uref{https://www.openmp.org, OpenMP specification v4.5}, Section 3.2.22.
@end table
@node omp_get_schedule
@section @code{omp_get_schedule} -- Obtain the runtime scheduling method
@table @asis
@item @emph{Description}:
Obtain the runtime scheduling method. The @var{kind} argument will be
set to the value @code{omp_sched_static}, @code{omp_sched_dynamic},
@code{omp_sched_guided} or @code{omp_sched_auto}. The second argument,
@var{chunk_size}, is set to the chunk size.

@item @emph{C/C++}:
@multitable @columnfractions .20 .80
@item @emph{Prototype}: @tab @code{void omp_get_schedule(omp_sched_t *kind, int *chunk_size);}
@end multitable

@item @emph{Fortran}:
@multitable @columnfractions .20 .80
@item @emph{Interface}: @tab @code{subroutine omp_get_schedule(kind, chunk_size)}
@item @tab @code{integer(kind=omp_sched_kind) kind}
@item @tab @code{integer chunk_size}
@end multitable

@item @emph{See also}:
@ref{omp_set_schedule}, @ref{OMP_SCHEDULE}

@item @emph{Reference}:
@uref{https://www.openmp.org, OpenMP specification v4.5}, Section 3.2.13.
@end table
@node omp_get_supported_active_levels
@section @code{omp_get_supported_active_levels} -- Maximum number of active regions supported
@table @asis
@item @emph{Description}:
This function returns the maximum number of nested, active parallel regions
supported by this implementation.

@item @emph{C/C++}:
@multitable @columnfractions .20 .80
@item @emph{Prototype}: @tab @code{int omp_get_supported_active_levels(void);}
@end multitable

@item @emph{Fortran}:
@multitable @columnfractions .20 .80
@item @emph{Interface}: @tab @code{integer function omp_get_supported_active_levels()}
@end multitable

@item @emph{See also}:
@ref{omp_get_max_active_levels}, @ref{omp_set_max_active_levels}

@item @emph{Reference}:
@uref{https://www.openmp.org, OpenMP specification v5.0}, Section 3.2.15.
@end table
@node omp_get_team_num
@section @code{omp_get_team_num} -- Get team number
@table @asis
@item @emph{Description}:
Returns the team number of the calling thread.

@item @emph{C/C++}:
@multitable @columnfractions .20 .80
@item @emph{Prototype}: @tab @code{int omp_get_team_num(void);}
@end multitable

@item @emph{Fortran}:
@multitable @columnfractions .20 .80
@item @emph{Interface}: @tab @code{integer function omp_get_team_num()}
@end multitable

@item @emph{Reference}:
@uref{https://www.openmp.org, OpenMP specification v4.5}, Section 3.2.33.
@end table
@node omp_get_team_size
@section @code{omp_get_team_size} -- Number of threads in a team
@table @asis
@item @emph{Description}:
This function returns the number of threads in a thread team to which
either the current thread or its ancestor belongs. For values of @var{level}
outside the range zero to @code{omp_get_level}, -1 is returned; if
@var{level} is zero, 1 is returned; and if @var{level} equals
@code{omp_get_level}, the result is identical to @code{omp_get_num_threads}.

@item @emph{C/C++}:
@multitable @columnfractions .20 .80
@item @emph{Prototype}: @tab @code{int omp_get_team_size(int level);}
@end multitable

@item @emph{Fortran}:
@multitable @columnfractions .20 .80
@item @emph{Interface}: @tab @code{integer function omp_get_team_size(level)}
@item @tab @code{integer level}
@end multitable

@item @emph{See also}:
@ref{omp_get_num_threads}, @ref{omp_get_level}, @ref{omp_get_ancestor_thread_num}

@item @emph{Reference}:
@uref{https://www.openmp.org, OpenMP specification v4.5}, Section 3.2.19.
@end table
@node omp_get_teams_thread_limit
@section @code{omp_get_teams_thread_limit} -- Maximum number of threads imposed by teams
@table @asis
@item @emph{Description}:
Return the maximum number of threads that will be able to participate in
each team created by a teams construct.

@item @emph{C/C++}:
@multitable @columnfractions .20 .80
@item @emph{Prototype}: @tab @code{int omp_get_teams_thread_limit(void);}
@end multitable

@item @emph{Fortran}:
@multitable @columnfractions .20 .80
@item @emph{Interface}: @tab @code{integer function omp_get_teams_thread_limit()}
@end multitable

@item @emph{See also}:
@ref{omp_set_teams_thread_limit}, @ref{OMP_TEAMS_THREAD_LIMIT}

@item @emph{Reference}:
@uref{https://www.openmp.org, OpenMP specification v5.1}, Section 3.4.6.
@end table
@node omp_get_thread_limit
@section @code{omp_get_thread_limit} -- Maximum number of threads
@table @asis
@item @emph{Description}:
Return the maximum number of threads of the program.

@item @emph{C/C++}:
@multitable @columnfractions .20 .80
@item @emph{Prototype}: @tab @code{int omp_get_thread_limit(void);}
@end multitable

@item @emph{Fortran}:
@multitable @columnfractions .20 .80
@item @emph{Interface}: @tab @code{integer function omp_get_thread_limit()}
@end multitable

@item @emph{See also}:
@ref{omp_get_max_threads}, @ref{OMP_THREAD_LIMIT}

@item @emph{Reference}:
@uref{https://www.openmp.org, OpenMP specification v4.5}, Section 3.2.14.
@end table
@node omp_get_thread_num
@section @code{omp_get_thread_num} -- Current thread ID
@table @asis
@item @emph{Description}:
Returns a unique thread identification number within the current team.
In sequential parts of the program, @code{omp_get_thread_num}
always returns 0. In parallel regions the return value varies
from 0 to @code{omp_get_num_threads}-1 inclusive. The return
value of the primary thread of a team is always 0.

@item @emph{C/C++}:
@multitable @columnfractions .20 .80
@item @emph{Prototype}: @tab @code{int omp_get_thread_num(void);}
@end multitable

@item @emph{Fortran}:
@multitable @columnfractions .20 .80
@item @emph{Interface}: @tab @code{integer function omp_get_thread_num()}
@end multitable

@item @emph{See also}:
@ref{omp_get_num_threads}, @ref{omp_get_ancestor_thread_num}

@item @emph{Reference}:
@uref{https://www.openmp.org, OpenMP specification v4.5}, Section 3.2.4.
@end table
@node omp_in_parallel
@section @code{omp_in_parallel} -- Whether a parallel region is active
@table @asis
@item @emph{Description}:
This function returns @code{true} if currently running in parallel,
@code{false} otherwise. Here, @code{true} and @code{false} represent
their language-specific counterparts.

@item @emph{C/C++}:
@multitable @columnfractions .20 .80
@item @emph{Prototype}: @tab @code{int omp_in_parallel(void);}
@end multitable

@item @emph{Fortran}:
@multitable @columnfractions .20 .80
@item @emph{Interface}: @tab @code{logical function omp_in_parallel()}
@end multitable

@item @emph{Reference}:
@uref{https://www.openmp.org, OpenMP specification v4.5}, Section 3.2.6.
@end table
@node omp_in_final
@section @code{omp_in_final} -- Whether in final or included task region
@table @asis
@item @emph{Description}:
This function returns @code{true} if currently running in a final
or included task region, @code{false} otherwise. Here, @code{true}
and @code{false} represent their language-specific counterparts.

@item @emph{C/C++}:
@multitable @columnfractions .20 .80
@item @emph{Prototype}: @tab @code{int omp_in_final(void);}
@end multitable

@item @emph{Fortran}:
@multitable @columnfractions .20 .80
@item @emph{Interface}: @tab @code{logical function omp_in_final()}
@end multitable

@item @emph{Reference}:
@uref{https://www.openmp.org, OpenMP specification v4.5}, Section 3.2.21.
@end table
@node omp_is_initial_device
@section @code{omp_is_initial_device} -- Whether executing on the host device
@table @asis
@item @emph{Description}:
This function returns @code{true} if currently running on the host device,
@code{false} otherwise. Here, @code{true} and @code{false} represent
their language-specific counterparts.

@item @emph{C/C++}:
@multitable @columnfractions .20 .80
@item @emph{Prototype}: @tab @code{int omp_is_initial_device(void);}
@end multitable

@item @emph{Fortran}:
@multitable @columnfractions .20 .80
@item @emph{Interface}: @tab @code{logical function omp_is_initial_device()}
@end multitable

@item @emph{Reference}:
@uref{https://www.openmp.org, OpenMP specification v4.5}, Section 3.2.34.
@end table
@node omp_set_default_device
@section @code{omp_set_default_device} -- Set the default device for target regions
@table @asis
@item @emph{Description}:
Set the default device for target regions without a @code{device} clause.
The argument shall be a nonnegative device number.

@item @emph{C/C++}:
@multitable @columnfractions .20 .80
@item @emph{Prototype}: @tab @code{void omp_set_default_device(int device_num);}
@end multitable

@item @emph{Fortran}:
@multitable @columnfractions .20 .80
@item @emph{Interface}: @tab @code{subroutine omp_set_default_device(device_num)}
@item @tab @code{integer device_num}
@end multitable

@item @emph{See also}:
@ref{OMP_DEFAULT_DEVICE}, @ref{omp_get_default_device}

@item @emph{Reference}:
@uref{https://www.openmp.org, OpenMP specification v4.5}, Section 3.2.29.
@end table
@node omp_set_dynamic
@section @code{omp_set_dynamic} -- Enable/disable dynamic teams
@table @asis
@item @emph{Description}:
Enable or disable the dynamic adjustment of the number of threads
within a team. The function takes the language-specific equivalent
of @code{true} and @code{false}, where @code{true} enables dynamic
adjustment of team sizes and @code{false} disables it.

@item @emph{C/C++}:
@multitable @columnfractions .20 .80
@item @emph{Prototype}: @tab @code{void omp_set_dynamic(int dynamic_threads);}
@end multitable

@item @emph{Fortran}:
@multitable @columnfractions .20 .80
@item @emph{Interface}: @tab @code{subroutine omp_set_dynamic(dynamic_threads)}
@item @tab @code{logical, intent(in) :: dynamic_threads}
@end multitable

@item @emph{See also}:
@ref{OMP_DYNAMIC}, @ref{omp_get_dynamic}

@item @emph{Reference}:
@uref{https://www.openmp.org, OpenMP specification v4.5}, Section 3.2.7.
@end table
@node omp_set_max_active_levels
@section @code{omp_set_max_active_levels} -- Limits the number of active parallel regions
@table @asis
@item @emph{Description}:
This function limits the maximum allowed number of nested, active
parallel regions. @var{max_levels} must be less than or equal to
the value returned by @code{omp_get_supported_active_levels}.

@item @emph{C/C++}:
@multitable @columnfractions .20 .80
@item @emph{Prototype}: @tab @code{void omp_set_max_active_levels(int max_levels);}
@end multitable

@item @emph{Fortran}:
@multitable @columnfractions .20 .80
@item @emph{Interface}: @tab @code{subroutine omp_set_max_active_levels(max_levels)}
@item @tab @code{integer max_levels}
@end multitable

@item @emph{See also}:
@ref{omp_get_max_active_levels}, @ref{omp_get_active_level},
@ref{omp_get_supported_active_levels}

@item @emph{Reference}:
@uref{https://www.openmp.org, OpenMP specification v4.5}, Section 3.2.15.
@end table
@node omp_set_nested
@section @code{omp_set_nested} -- Enable/disable nested parallel regions
@table @asis
@item @emph{Description}:
Enable or disable nested parallel regions, i.e., whether team members
are allowed to create new teams. The function takes the language-specific
equivalent of @code{true} and @code{false}, where @code{true} enables
nested parallel regions and @code{false} disables them.

Enabling nested parallel regions will also set the maximum number of
active nested regions to the maximum supported. Disabling nested parallel
regions will set the maximum number of active nested regions to one.

@item @emph{C/C++}:
@multitable @columnfractions .20 .80
@item @emph{Prototype}: @tab @code{void omp_set_nested(int nested);}
@end multitable

@item @emph{Fortran}:
@multitable @columnfractions .20 .80
@item @emph{Interface}: @tab @code{subroutine omp_set_nested(nested)}
@item @tab @code{logical, intent(in) :: nested}
@end multitable

@item @emph{See also}:
@ref{omp_get_nested}, @ref{omp_set_max_active_levels},
@ref{OMP_MAX_ACTIVE_LEVELS}, @ref{OMP_NESTED}

@item @emph{Reference}:
@uref{https://www.openmp.org, OpenMP specification v4.5}, Section 3.2.10.
@end table
@node omp_set_num_teams
@section @code{omp_set_num_teams} -- Set upper teams limit for teams construct
@table @asis
@item @emph{Description}:
Specifies the upper bound for the number of teams created by a teams
construct which does not specify a @code{num_teams} clause. The
argument of @code{omp_set_num_teams} shall be a positive integer.

@item @emph{C/C++}:
@multitable @columnfractions .20 .80
@item @emph{Prototype}: @tab @code{void omp_set_num_teams(int num_teams);}
@end multitable

@item @emph{Fortran}:
@multitable @columnfractions .20 .80
@item @emph{Interface}: @tab @code{subroutine omp_set_num_teams(num_teams)}
@item @tab @code{integer, intent(in) :: num_teams}
@end multitable

@item @emph{See also}:
@ref{OMP_NUM_TEAMS}, @ref{omp_get_num_teams}, @ref{omp_get_max_teams}

@item @emph{Reference}:
@uref{https://www.openmp.org, OpenMP specification v5.1}, Section 3.4.3.
@end table
@node omp_set_num_threads
@section @code{omp_set_num_threads} -- Set upper team size limit
@table @asis
@item @emph{Description}:
Specifies the number of threads used by default in subsequent parallel
regions, if those do not specify a @code{num_threads} clause. The
argument of @code{omp_set_num_threads} shall be a positive integer.

@item @emph{C/C++}:
@multitable @columnfractions .20 .80
@item @emph{Prototype}: @tab @code{void omp_set_num_threads(int num_threads);}
@end multitable

@item @emph{Fortran}:
@multitable @columnfractions .20 .80
@item @emph{Interface}: @tab @code{subroutine omp_set_num_threads(num_threads)}
@item @tab @code{integer, intent(in) :: num_threads}
@end multitable

@item @emph{See also}:
@ref{OMP_NUM_THREADS}, @ref{omp_get_num_threads}, @ref{omp_get_max_threads}

@item @emph{Reference}:
@uref{https://www.openmp.org, OpenMP specification v4.5}, Section 3.2.1.
@end table
@node omp_set_schedule
@section @code{omp_set_schedule} -- Set the runtime scheduling method
@table @asis
@item @emph{Description}:
Sets the runtime scheduling method. The @var{kind} argument can have the
value @code{omp_sched_static}, @code{omp_sched_dynamic},
@code{omp_sched_guided} or @code{omp_sched_auto}. Except for
@code{omp_sched_auto}, the chunk size is set to the value of
@var{chunk_size} if positive, or to the default value if zero or negative.
For @code{omp_sched_auto} the @var{chunk_size} argument is ignored.

@item @emph{C/C++}:
@multitable @columnfractions .20 .80
@item @emph{Prototype}: @tab @code{void omp_set_schedule(omp_sched_t kind, int chunk_size);}
@end multitable

@item @emph{Fortran}:
@multitable @columnfractions .20 .80
@item @emph{Interface}: @tab @code{subroutine omp_set_schedule(kind, chunk_size)}
@item @tab @code{integer(kind=omp_sched_kind) kind}
@item @tab @code{integer chunk_size}
@end multitable

@item @emph{See also}:
@ref{omp_get_schedule}

@item @emph{Reference}:
@uref{https://www.openmp.org, OpenMP specification v4.5}, Section 3.2.12.
@end table
@node omp_set_teams_thread_limit
@section @code{omp_set_teams_thread_limit} -- Set upper thread limit for teams construct
@table @asis
@item @emph{Description}:
Specifies the upper bound for the number of threads that will be available
for each team created by the teams construct which does not specify a
@code{thread_limit} clause. The argument of
@code{omp_set_teams_thread_limit} shall be a positive integer.

@item @emph{C/C++}:
@multitable @columnfractions .20 .80
@item @emph{Prototype}: @tab @code{void omp_set_teams_thread_limit(int thread_limit);}
@end multitable

@item @emph{Fortran}:
@multitable @columnfractions .20 .80
@item @emph{Interface}: @tab @code{subroutine omp_set_teams_thread_limit(thread_limit)}
@item @tab @code{integer, intent(in) :: thread_limit}
@end multitable

@item @emph{See also}:
@ref{OMP_TEAMS_THREAD_LIMIT}, @ref{omp_get_teams_thread_limit}, @ref{omp_get_thread_limit}

@item @emph{Reference}:
@uref{https://www.openmp.org, OpenMP specification v5.1}, Section 3.4.5.
@end table
@node omp_init_lock
@section @code{omp_init_lock} -- Initialize simple lock
@table @asis
@item @emph{Description}:
Initialize a simple lock. After initialization, the lock is in
an unlocked state.

@item @emph{C/C++}:
@multitable @columnfractions .20 .80
@item @emph{Prototype}: @tab @code{void omp_init_lock(omp_lock_t *lock);}
@end multitable

@item @emph{Fortran}:
@multitable @columnfractions .20 .80
@item @emph{Interface}: @tab @code{subroutine omp_init_lock(svar)}
@item @tab @code{integer(omp_lock_kind), intent(out) :: svar}
@end multitable

@item @emph{See also}:
@ref{omp_destroy_lock}

@item @emph{Reference}:
@uref{https://www.openmp.org, OpenMP specification v4.5}, Section 3.3.1.
@end table
@node omp_set_lock
@section @code{omp_set_lock} -- Wait for and set simple lock
@table @asis
@item @emph{Description}:
Before setting a simple lock, the lock variable must be initialized by
@code{omp_init_lock}. The calling thread is blocked until the lock
is available. If the lock is already held by the current thread,
a deadlock occurs.

@item @emph{C/C++}:
@multitable @columnfractions .20 .80
@item @emph{Prototype}: @tab @code{void omp_set_lock(omp_lock_t *lock);}
@end multitable

@item @emph{Fortran}:
@multitable @columnfractions .20 .80
@item @emph{Interface}: @tab @code{subroutine omp_set_lock(svar)}
@item @tab @code{integer(omp_lock_kind), intent(inout) :: svar}
@end multitable

@item @emph{See also}:
@ref{omp_init_lock}, @ref{omp_test_lock}, @ref{omp_unset_lock}

@item @emph{Reference}:
@uref{https://www.openmp.org, OpenMP specification v4.5}, Section 3.3.4.
@end table
@node omp_test_lock
@section @code{omp_test_lock} -- Test and set simple lock if available
@table @asis
@item @emph{Description}:
Before setting a simple lock, the lock variable must be initialized by
@code{omp_init_lock}. Contrary to @code{omp_set_lock}, @code{omp_test_lock}
does not block if the lock is not available. This function returns
@code{true} upon success, @code{false} otherwise. Here, @code{true} and
@code{false} represent their language-specific counterparts.

@item @emph{C/C++}:
@multitable @columnfractions .20 .80
@item @emph{Prototype}: @tab @code{int omp_test_lock(omp_lock_t *lock);}
@end multitable

@item @emph{Fortran}:
@multitable @columnfractions .20 .80
@item @emph{Interface}: @tab @code{logical function omp_test_lock(svar)}
@item @tab @code{integer(omp_lock_kind), intent(inout) :: svar}
@end multitable

@item @emph{See also}:
@ref{omp_init_lock}, @ref{omp_set_lock}, @ref{omp_unset_lock}

@item @emph{Reference}:
@uref{https://www.openmp.org, OpenMP specification v4.5}, Section 3.3.6.
@end table
@node omp_unset_lock
@section @code{omp_unset_lock} -- Unset simple lock
@table @asis
@item @emph{Description}:
A simple lock about to be unset must have been locked by @code{omp_set_lock}
or @code{omp_test_lock} before. In addition, the lock must be held by the
thread calling @code{omp_unset_lock}. Then, the lock becomes unlocked. If one
or more threads attempted to set the lock before, one of them is chosen to
acquire the lock.

@item @emph{C/C++}:
@multitable @columnfractions .20 .80
@item @emph{Prototype}: @tab @code{void omp_unset_lock(omp_lock_t *lock);}
@end multitable

@item @emph{Fortran}:
@multitable @columnfractions .20 .80
@item @emph{Interface}: @tab @code{subroutine omp_unset_lock(svar)}
@item @tab @code{integer(omp_lock_kind), intent(inout) :: svar}
@end multitable

@item @emph{See also}:
@ref{omp_set_lock}, @ref{omp_test_lock}

@item @emph{Reference}:
@uref{https://www.openmp.org, OpenMP specification v4.5}, Section 3.3.5.
@end table
@node omp_destroy_lock
@section @code{omp_destroy_lock} -- Destroy simple lock
@table @asis
@item @emph{Description}:
Destroy a simple lock. In order to be destroyed, a simple lock must be
in the unlocked state.

@item @emph{C/C++}:
@multitable @columnfractions .20 .80
@item @emph{Prototype}: @tab @code{void omp_destroy_lock(omp_lock_t *lock);}
@end multitable

@item @emph{Fortran}:
@multitable @columnfractions .20 .80
@item @emph{Interface}: @tab @code{subroutine omp_destroy_lock(svar)}
@item @tab @code{integer(omp_lock_kind), intent(inout) :: svar}
@end multitable

@item @emph{See also}:
@ref{omp_init_lock}

@item @emph{Reference}:
@uref{https://www.openmp.org, OpenMP specification v4.5}, Section 3.3.3.
@end table
@node omp_init_nest_lock
@section @code{omp_init_nest_lock} -- Initialize nested lock
@table @asis
@item @emph{Description}:
Initialize a nested lock. After initialization, the lock is in
an unlocked state and the nesting count is set to zero.

@item @emph{C/C++}:
@multitable @columnfractions .20 .80
@item @emph{Prototype}: @tab @code{void omp_init_nest_lock(omp_nest_lock_t *lock);}
@end multitable

@item @emph{Fortran}:
@multitable @columnfractions .20 .80
@item @emph{Interface}: @tab @code{subroutine omp_init_nest_lock(nvar)}
@item @tab @code{integer(omp_nest_lock_kind), intent(out) :: nvar}
@end multitable

@item @emph{See also}:
@ref{omp_destroy_nest_lock}

@item @emph{Reference}:
@uref{https://www.openmp.org, OpenMP specification v4.5}, Section 3.3.1.
@end table
@node omp_set_nest_lock
@section @code{omp_set_nest_lock} -- Wait for and set nested lock
@table @asis
@item @emph{Description}:
Before setting a nested lock, the lock variable must be initialized by
@code{omp_init_nest_lock}. The calling thread is blocked until the lock
is available. If the lock is already held by the current thread, the
nesting count for the lock is incremented.

@item @emph{C/C++}:
@multitable @columnfractions .20 .80
@item @emph{Prototype}: @tab @code{void omp_set_nest_lock(omp_nest_lock_t *lock);}
@end multitable

@item @emph{Fortran}:
@multitable @columnfractions .20 .80
@item @emph{Interface}: @tab @code{subroutine omp_set_nest_lock(nvar)}
@item @tab @code{integer(omp_nest_lock_kind), intent(inout) :: nvar}
@end multitable

@item @emph{See also}:
@ref{omp_init_nest_lock}, @ref{omp_unset_nest_lock}

@item @emph{Reference}:
@uref{https://www.openmp.org, OpenMP specification v4.5}, Section 3.3.4.
@end table
@node omp_test_nest_lock
@section @code{omp_test_nest_lock} -- Test and set nested lock if available
@table @asis
@item @emph{Description}:
Before setting a nested lock, the lock variable must be initialized by
@code{omp_init_nest_lock}. Contrary to @code{omp_set_nest_lock},
@code{omp_test_nest_lock} does not block if the lock is not available.
If the lock is already held by the current thread, the new nesting count
is returned. Otherwise, the return value equals zero.

@item @emph{C/C++}:
@multitable @columnfractions .20 .80
@item @emph{Prototype}: @tab @code{int omp_test_nest_lock(omp_nest_lock_t *lock);}
@end multitable

@item @emph{Fortran}:
@multitable @columnfractions .20 .80
@item @emph{Interface}: @tab @code{integer function omp_test_nest_lock(nvar)}
@item @tab @code{integer(omp_nest_lock_kind), intent(inout) :: nvar}
@end multitable

@item @emph{See also}:
@ref{omp_init_nest_lock}, @ref{omp_set_nest_lock}, @ref{omp_unset_nest_lock}

@item @emph{Reference}:
@uref{https://www.openmp.org, OpenMP specification v4.5}, Section 3.3.6.
@end table
@node omp_unset_nest_lock
@section @code{omp_unset_nest_lock} -- Unset nested lock
@table @asis
@item @emph{Description}:
A nested lock about to be unset must have been locked by @code{omp_set_nest_lock}
or @code{omp_test_nest_lock} before. In addition, the lock must be held by the
thread calling @code{omp_unset_nest_lock}. If the nesting count drops to zero, the
lock becomes unlocked. If one or more threads attempted to set the lock before,
one of them is chosen to acquire the lock.

@item @emph{C/C++}:
@multitable @columnfractions .20 .80
@item @emph{Prototype}: @tab @code{void omp_unset_nest_lock(omp_nest_lock_t *lock);}
@end multitable

@item @emph{Fortran}:
@multitable @columnfractions .20 .80
@item @emph{Interface}: @tab @code{subroutine omp_unset_nest_lock(nvar)}
@item @tab @code{integer(omp_nest_lock_kind), intent(inout) :: nvar}
@end multitable

@item @emph{See also}:
@ref{omp_set_nest_lock}

@item @emph{Reference}:
@uref{https://www.openmp.org, OpenMP specification v4.5}, Section 3.3.5.
@end table
@node omp_destroy_nest_lock
@section @code{omp_destroy_nest_lock} -- Destroy nested lock
@table @asis
@item @emph{Description}:
Destroy a nested lock. In order to be destroyed, a nested lock must be
in the unlocked state and its nesting count must equal zero.

@item @emph{C/C++}:
@multitable @columnfractions .20 .80
@item @emph{Prototype}: @tab @code{void omp_destroy_nest_lock(omp_nest_lock_t *lock);}
@end multitable

@item @emph{Fortran}:
@multitable @columnfractions .20 .80
@item @emph{Interface}: @tab @code{subroutine omp_destroy_nest_lock(nvar)}
@item @tab @code{integer(omp_nest_lock_kind), intent(inout) :: nvar}
@end multitable

@item @emph{See also}:
@ref{omp_init_nest_lock}

@item @emph{Reference}:
@uref{https://www.openmp.org, OpenMP specification v4.5}, Section 3.3.3.
@end table
@node omp_get_wtick
@section @code{omp_get_wtick} -- Get timer precision

@item @emph{Description}:
Gets the timer precision, i.e., the number of seconds between two
successive clock ticks.

@multitable @columnfractions .20 .80
@item @emph{Prototype}: @tab @code{double omp_get_wtick(void);}

@item @emph{Fortran}:
@multitable @columnfractions .20 .80
@item @emph{Interface}: @tab @code{double precision function omp_get_wtick()}

@item @emph{See also}:
@ref{omp_get_wtime}

@item @emph{Reference}:
@uref{https://www.openmp.org, OpenMP specification v4.5}, Section 3.4.2.
@node omp_get_wtime
@section @code{omp_get_wtime} -- Elapsed wall clock time

@item @emph{Description}:
Elapsed wall clock time in seconds. The time is measured per thread; no
guarantee can be made that two distinct threads measure the same time.
Time is measured from some "time in the past", which is an arbitrary time
guaranteed not to change during the execution of the program.

@multitable @columnfractions .20 .80
@item @emph{Prototype}: @tab @code{double omp_get_wtime(void);}

@item @emph{Fortran}:
@multitable @columnfractions .20 .80
@item @emph{Interface}: @tab @code{double precision function omp_get_wtime()}

@item @emph{See also}:
@ref{omp_get_wtick}

@item @emph{Reference}:
@uref{https://www.openmp.org, OpenMP specification v4.5}, Section 3.4.1.
@node omp_fulfill_event
@section @code{omp_fulfill_event} -- Fulfill and destroy an OpenMP event

@item @emph{Description}:
Fulfill the event associated with the event handle argument. Currently, it
is only used to fulfill events generated by detach clauses on task
constructs - the effect of fulfilling the event is to allow the task to
complete.

The result of calling @code{omp_fulfill_event} with an event handle other
than that generated by a detach clause is undefined. Calling it with an
event handle that has already been fulfilled is also undefined.

@multitable @columnfractions .20 .80
@item @emph{Prototype}: @tab @code{void omp_fulfill_event(omp_event_handle_t event);}

@item @emph{Fortran}:
@multitable @columnfractions .20 .80
@item @emph{Interface}: @tab @code{subroutine omp_fulfill_event(event)}
@item @tab @code{integer (kind=omp_event_handle_kind) :: event}

@item @emph{Reference}:
@uref{https://www.openmp.org, OpenMP specification v5.0}, Section 3.5.1.
@c ---------------------------------------------------------------------
@c OpenMP Environment Variables
@c ---------------------------------------------------------------------

@node Environment Variables
@chapter OpenMP Environment Variables

The environment variables which begin with @env{OMP_} are defined by
section 4 of the OpenMP specification in version 4.5, while those
beginning with @env{GOMP_} are GNU extensions.

* OMP_CANCELLATION::        Set whether cancellation is activated
* OMP_DISPLAY_ENV::         Show OpenMP version and environment variables
* OMP_DEFAULT_DEVICE::      Set the device used in target regions
* OMP_DYNAMIC::             Dynamic adjustment of threads
* OMP_MAX_ACTIVE_LEVELS::   Set the maximum number of nested parallel regions
* OMP_MAX_TASK_PRIORITY::   Set the maximum task priority value
* OMP_NESTED::              Nested parallel regions
* OMP_NUM_TEAMS::           Specifies the number of teams to use by teams region
* OMP_NUM_THREADS::         Specifies the number of threads to use
* OMP_PROC_BIND::           Whether threads may be moved between CPUs
* OMP_PLACES::              Specifies on which CPUs the threads should be placed
* OMP_STACKSIZE::           Set default thread stack size
* OMP_SCHEDULE::            How threads are scheduled
* OMP_TARGET_OFFLOAD::      Controls offloading behaviour
* OMP_TEAMS_THREAD_LIMIT::  Set the maximum number of threads imposed by teams
* OMP_THREAD_LIMIT::        Set the maximum number of threads
* OMP_WAIT_POLICY::         How waiting threads are handled
* GOMP_CPU_AFFINITY::       Bind threads to specific CPUs
* GOMP_DEBUG::              Enable debugging output
* GOMP_STACKSIZE::          Set default thread stack size
* GOMP_SPINCOUNT::          Set the busy-wait spin count
* GOMP_RTEMS_THREAD_POOLS:: Set the RTEMS specific thread pools
@node OMP_CANCELLATION
@section @env{OMP_CANCELLATION} -- Set whether cancellation is activated
@cindex Environment Variable

@item @emph{Description}:
If set to @code{TRUE}, the cancellation is activated. If set to @code{FALSE} or
if unset, cancellation is disabled and the @code{cancel} construct is ignored.

@item @emph{See also}:
@ref{omp_get_cancellation}

@item @emph{Reference}:
@uref{https://www.openmp.org, OpenMP specification v4.5}, Section 4.11
@node OMP_DISPLAY_ENV
@section @env{OMP_DISPLAY_ENV} -- Show OpenMP version and environment variables
@cindex Environment Variable

@item @emph{Description}:
If set to @code{TRUE}, the OpenMP version number and the values
associated with the OpenMP environment variables are printed to @code{stderr}.
If set to @code{VERBOSE}, it additionally shows the value of the environment
variables which are GNU extensions. If undefined or set to @code{FALSE},
this information will not be shown.

@item @emph{Reference}:
@uref{https://www.openmp.org, OpenMP specification v4.5}, Section 4.12
@node OMP_DEFAULT_DEVICE
@section @env{OMP_DEFAULT_DEVICE} -- Set the device used in target regions
@cindex Environment Variable

@item @emph{Description}:
Set to choose the device which is used in a @code{target} region, unless the
value is overridden by @code{omp_set_default_device} or by a @code{device}
clause. The value shall be the nonnegative device number. If no device with
the given device number exists, the code is executed on the host. If unset,
device number 0 will be used.

@item @emph{See also}:
@ref{omp_get_default_device}, @ref{omp_set_default_device}

@item @emph{Reference}:
@uref{https://www.openmp.org, OpenMP specification v4.5}, Section 4.13
@node OMP_DYNAMIC
@section @env{OMP_DYNAMIC} -- Dynamic adjustment of threads
@cindex Environment Variable

@item @emph{Description}:
Enable or disable the dynamic adjustment of the number of threads
within a team. The value of this environment variable shall be
@code{TRUE} or @code{FALSE}. If undefined, dynamic adjustment is
disabled by default.

@item @emph{See also}:
@ref{omp_set_dynamic}

@item @emph{Reference}:
@uref{https://www.openmp.org, OpenMP specification v4.5}, Section 4.3
@node OMP_MAX_ACTIVE_LEVELS
@section @env{OMP_MAX_ACTIVE_LEVELS} -- Set the maximum number of nested parallel regions
@cindex Environment Variable

@item @emph{Description}:
Specifies the initial value for the maximum number of nested parallel
regions. The value of this variable shall be a positive integer.
If undefined, then if @env{OMP_NESTED} is defined and set to true, or
if @env{OMP_NUM_THREADS} or @env{OMP_PROC_BIND} are defined and set to
a list with more than one item, the maximum number of nested parallel
regions will be initialized to the largest number supported, otherwise
it will be set to one.

@item @emph{See also}:
@ref{omp_set_max_active_levels}, @ref{OMP_NESTED}

@item @emph{Reference}:
@uref{https://www.openmp.org, OpenMP specification v4.5}, Section 4.9
@node OMP_MAX_TASK_PRIORITY
@section @env{OMP_MAX_TASK_PRIORITY} -- Set the maximum priority number that can be set for a task
@cindex Environment Variable

@item @emph{Description}:
Specifies the initial value for the maximum priority value that can be
set for a task. The value of this variable shall be a non-negative
integer, and zero is allowed. If undefined, the default priority is
0.

@item @emph{See also}:
@ref{omp_get_max_task_priority}

@item @emph{Reference}:
@uref{https://www.openmp.org, OpenMP specification v4.5}, Section 4.14
@node OMP_NESTED
@section @env{OMP_NESTED} -- Nested parallel regions
@cindex Environment Variable
@cindex Implementation specific setting

@item @emph{Description}:
Enable or disable nested parallel regions, i.e., whether team members
are allowed to create new teams. The value of this environment variable
shall be @code{TRUE} or @code{FALSE}. If set to @code{TRUE}, the maximum
number of active nested regions supported will by default be set to the
maximum supported, otherwise it will be set to one. If
@env{OMP_MAX_ACTIVE_LEVELS} is defined, its setting will override this
setting. If both are undefined, nested parallel regions are enabled if
@env{OMP_NUM_THREADS} or @env{OMP_PROC_BIND} are defined to a list with
more than one item, otherwise they are disabled by default.

@item @emph{See also}:
@ref{omp_set_max_active_levels}, @ref{omp_set_nested}

@item @emph{Reference}:
@uref{https://www.openmp.org, OpenMP specification v4.5}, Section 4.6
@node OMP_NUM_TEAMS
@section @env{OMP_NUM_TEAMS} -- Specifies the number of teams to use by teams region
@cindex Environment Variable

@item @emph{Description}:
Specifies the upper bound for the number of teams to use in teams regions
without an explicit @code{num_teams} clause. The value of this variable shall
be a positive integer. If undefined, it defaults to 0, which means an
implementation-defined upper bound.

@item @emph{See also}:
@ref{omp_set_num_teams}

@item @emph{Reference}:
@uref{https://www.openmp.org, OpenMP specification v5.1}, Section 6.23
@node OMP_NUM_THREADS
@section @env{OMP_NUM_THREADS} -- Specifies the number of threads to use
@cindex Environment Variable
@cindex Implementation specific setting

@item @emph{Description}:
Specifies the default number of threads to use in parallel regions. The
value of this variable shall be a comma-separated list of positive integers;
the value specifies the number of threads to use for the corresponding nested
level. Specifying more than one item in the list will automatically enable
nesting by default. If undefined, one thread per CPU is used.

@item @emph{See also}:
@ref{omp_set_num_threads}, @ref{OMP_NESTED}

@item @emph{Reference}:
@uref{https://www.openmp.org, OpenMP specification v4.5}, Section 4.2
@node OMP_PROC_BIND
@section @env{OMP_PROC_BIND} -- Whether threads may be moved between CPUs
@cindex Environment Variable

@item @emph{Description}:
Specifies whether threads may be moved between processors. If set to
@code{TRUE}, OpenMP threads should not be moved; if set to @code{FALSE}
they may be moved. Alternatively, a comma separated list with the
values @code{PRIMARY}, @code{MASTER}, @code{CLOSE} and @code{SPREAD} can
be used to specify the thread affinity policy for the corresponding nesting
level. With @code{PRIMARY} and @code{MASTER} the worker threads are in the
same place partition as the primary thread. With @code{CLOSE} those are
kept close to the primary thread in contiguous place partitions. And
with @code{SPREAD} a sparse distribution
across the place partitions is used. Specifying more than one item in the
list will automatically enable nesting by default.

When undefined, @env{OMP_PROC_BIND} defaults to @code{TRUE} when
@env{OMP_PLACES} or @env{GOMP_CPU_AFFINITY} is set and @code{FALSE} otherwise.

@item @emph{See also}:
@ref{omp_get_proc_bind}, @ref{GOMP_CPU_AFFINITY},
@ref{OMP_NESTED}, @ref{OMP_PLACES}

@item @emph{Reference}:
@uref{https://www.openmp.org, OpenMP specification v4.5}, Section 4.4
@node OMP_PLACES
@section @env{OMP_PLACES} -- Specifies on which CPUs the threads should be placed
@cindex Environment Variable

@item @emph{Description}:
The thread placement can be either specified using an abstract name or by an
explicit list of the places. The abstract names @code{threads}, @code{cores},
@code{sockets}, @code{ll_caches} and @code{numa_domains} can be optionally
followed by a positive number in parentheses, which denotes how many places
shall be created. With @code{threads} each place corresponds to a single
hardware thread; @code{cores} to a single core with the corresponding number of
hardware threads; with @code{sockets} the place corresponds to a single
socket; with @code{ll_caches} to a set of cores that shares the last level
cache on the device; and @code{numa_domains} to a set of cores for which their
closest memory on the device is the same memory and at a similar distance from
the cores. The resulting placement can be shown by setting the
@env{OMP_DISPLAY_ENV} environment variable.

Alternatively, the placement can be specified explicitly as a comma-separated
list of places. A place is specified by a set of nonnegative numbers in curly
braces, denoting the hardware threads. The curly braces can be omitted
when only a single number has been specified. The hardware threads
belonging to a place can either be specified as a comma-separated list of
nonnegative thread numbers or using an interval. Multiple places can also be
specified either by a comma-separated list of places or by an interval. To
specify an interval, a colon followed by the count is placed after
the hardware thread number or the place. Optionally, the length can be
followed by a colon and the stride number -- otherwise a unit stride is
assumed. Placing an exclamation mark (@code{!}) directly before a curly
brace or numbers inside the curly braces (excluding intervals) will
exclude those hardware threads.

For instance, the following three settings specify the same places list:
@code{"@{0,1,2@}, @{3,4,5@}, @{6,7,8@}, @{9,10,11@}"};
@code{"@{0:3@}, @{3:3@}, @{6:3@}, @{9:3@}"}; and @code{"@{0:3@}:4:3"}.

If @env{OMP_PLACES} and @env{GOMP_CPU_AFFINITY} are unset and
@env{OMP_PROC_BIND} is either unset or @code{false}, threads may be moved
between CPUs following no placement policy.

@item @emph{See also}:
@ref{OMP_PROC_BIND}, @ref{GOMP_CPU_AFFINITY}, @ref{omp_get_proc_bind},
@ref{OMP_DISPLAY_ENV}

@item @emph{Reference}:
@uref{https://www.openmp.org, OpenMP specification v4.5}, Section 4.5
@node OMP_STACKSIZE
@section @env{OMP_STACKSIZE} -- Set default thread stack size
@cindex Environment Variable

@item @emph{Description}:
Set the default thread stack size in kilobytes, unless the number
is suffixed by @code{B}, @code{K}, @code{M} or @code{G}, in which
case the size is, respectively, in bytes, kilobytes, megabytes
or gigabytes. This is different from @code{pthread_attr_setstacksize}
which gets the number of bytes as an argument. If the stack size cannot
be set due to system constraints, an error is reported and the initial
stack size is left unchanged. If undefined, the stack size is system
dependent.

@item @emph{Reference}:
@uref{https://www.openmp.org, OpenMP specification v4.5}, Section 4.7
@node OMP_SCHEDULE
@section @env{OMP_SCHEDULE} -- How threads are scheduled
@cindex Environment Variable
@cindex Implementation specific setting

@item @emph{Description}:
Allows specifying the schedule type and chunk size.
The value of the variable shall have the form @code{type[,chunk]}, where
@code{type} is one of @code{static}, @code{dynamic}, @code{guided} or @code{auto}.
The optional @code{chunk} size shall be a positive integer. If undefined,
dynamic scheduling and a chunk size of 1 are used.

@item @emph{See also}:
@ref{omp_set_schedule}

@item @emph{Reference}:
@uref{https://www.openmp.org, OpenMP specification v4.5}, Sections 2.7.1.1 and 4.1
@node OMP_TARGET_OFFLOAD
@section @env{OMP_TARGET_OFFLOAD} -- Controls offloading behaviour
@cindex Environment Variable
@cindex Implementation specific setting

@item @emph{Description}:
Specifies the behaviour with regard to offloading code to a device. This
variable can be set to one of three values: @code{MANDATORY}, @code{DISABLED}
or @code{DEFAULT}.

If set to @code{MANDATORY}, the program will terminate with an error if
the offload device is not present or is not supported. If set to
@code{DISABLED}, then offloading is disabled and all code will run on the
host. If set to @code{DEFAULT}, the program will try offloading to the
device first, then fall back to running code on the host if it cannot.

If undefined, then the program will behave as if @code{DEFAULT} was set.

@item @emph{Reference}:
@uref{https://www.openmp.org, OpenMP specification v5.0}, Section 6.17
@node OMP_TEAMS_THREAD_LIMIT
@section @env{OMP_TEAMS_THREAD_LIMIT} -- Set the maximum number of threads imposed by teams
@cindex Environment Variable

@item @emph{Description}:
Specifies an upper bound for the number of threads to use by each contention
group created by a teams construct without an explicit @code{thread_limit}
clause. The value of this variable shall be a positive integer. If undefined,
the value of 0 is used, which stands for an implementation-defined upper
bound.

@item @emph{See also}:
@ref{OMP_THREAD_LIMIT}, @ref{omp_set_teams_thread_limit}

@item @emph{Reference}:
@uref{https://www.openmp.org, OpenMP specification v5.1}, Section 6.24
@node OMP_THREAD_LIMIT
@section @env{OMP_THREAD_LIMIT} -- Set the maximum number of threads
@cindex Environment Variable

@item @emph{Description}:
Specifies the number of threads to use for the whole program. The
value of this variable shall be a positive integer. If undefined,
the number of threads is not limited.

@item @emph{See also}:
@ref{OMP_NUM_THREADS}, @ref{omp_get_thread_limit}

@item @emph{Reference}:
@uref{https://www.openmp.org, OpenMP specification v4.5}, Section 4.10
@node OMP_WAIT_POLICY
@section @env{OMP_WAIT_POLICY} -- How waiting threads are handled
@cindex Environment Variable

@item @emph{Description}:
Specifies whether waiting threads should be active or passive. If
the value is @code{PASSIVE}, waiting threads should not consume CPU
power while waiting; if the value is @code{ACTIVE}, they should.
If undefined, threads wait actively for a short time before waiting
passively.

@item @emph{See also}:
@ref{GOMP_SPINCOUNT}

@item @emph{Reference}:
@uref{https://www.openmp.org, OpenMP specification v4.5}, Section 4.8
@node GOMP_CPU_AFFINITY
@section @env{GOMP_CPU_AFFINITY} -- Bind threads to specific CPUs
@cindex Environment Variable

@item @emph{Description}:
Binds threads to specific CPUs. The variable should contain a space-separated
or comma-separated list of CPUs. This list may contain different kinds of
entries: either single CPU numbers in any order, a range of CPUs (M-N)
or a range with some stride (M-N:S). CPU numbers are zero based. For example,
@code{GOMP_CPU_AFFINITY="0 3 1-2 4-15:2"} will bind the initial thread
to CPU 0, the second to CPU 3, the third to CPU 1, the fourth to
CPU 2, the fifth to CPU 4, the sixth through tenth to CPUs 6, 8, 10, 12,
and 14 respectively and then start assigning back from the beginning of
the list. @code{GOMP_CPU_AFFINITY=0} binds all threads to CPU 0.

There is no libgomp library routine to determine whether a CPU affinity
specification is in effect. As a workaround, language-specific library
functions, e.g., @code{getenv} in C or @code{GET_ENVIRONMENT_VARIABLE} in
Fortran, may be used to query the setting of the @code{GOMP_CPU_AFFINITY}
environment variable. A defined CPU affinity on startup cannot be changed
or disabled during the runtime of the application.

If both @env{GOMP_CPU_AFFINITY} and @env{OMP_PROC_BIND} are set,
@env{OMP_PROC_BIND} has a higher precedence. If neither is set, or when
@env{OMP_PROC_BIND} is set to @code{FALSE}, the host system will handle
the assignment of threads to CPUs.

@item @emph{See also}:
@ref{OMP_PLACES}, @ref{OMP_PROC_BIND}
@node GOMP_DEBUG
@section @env{GOMP_DEBUG} -- Enable debugging output
@cindex Environment Variable

@item @emph{Description}:
Enable debugging output. The variable should be set to @code{0}
(disabled, also the default if not set), or @code{1} (enabled).

If enabled, some debugging output will be printed during execution.
This is currently not specified in more detail, and subject to change.
@node GOMP_STACKSIZE
@section @env{GOMP_STACKSIZE} -- Set default thread stack size
@cindex Environment Variable
@cindex Implementation specific setting

@item @emph{Description}:
Set the default thread stack size in kilobytes. This is different from
@code{pthread_attr_setstacksize} which gets the number of bytes as an
argument. If the stack size cannot be set due to system constraints, an
error is reported and the initial stack size is left unchanged. If undefined,
the stack size is system dependent.

@item @emph{See also}:
@ref{OMP_STACKSIZE}

@item @emph{Reference}:
@uref{https://gcc.gnu.org/ml/gcc-patches/2006-06/msg00493.html,
GCC Patches Mailinglist},
@uref{https://gcc.gnu.org/ml/gcc-patches/2006-06/msg00496.html,
GCC Patches Mailinglist}
@node GOMP_SPINCOUNT
@section @env{GOMP_SPINCOUNT} -- Set the busy-wait spin count
@cindex Environment Variable
@cindex Implementation specific setting

@item @emph{Description}:
Determines how long a thread waits actively, consuming CPU power, before
waiting passively without consuming CPU power. The value may be either
@code{INFINITE} or @code{INFINITY} to always wait actively, or an
integer which gives the number of spins of the busy-wait loop. The
integer may optionally be followed by the following suffixes acting
as multiplication factors: @code{k} (kilo, thousand), @code{M} (mega,
million), @code{G} (giga, billion), or @code{T} (tera, trillion).
If undefined, 0 is used when @env{OMP_WAIT_POLICY} is @code{PASSIVE},
300,000 is used when @env{OMP_WAIT_POLICY} is undefined and
30 billion is used when @env{OMP_WAIT_POLICY} is @code{ACTIVE}.
If there are more OpenMP threads than available CPUs, 1000 and 100
spins are used for @env{OMP_WAIT_POLICY} being @code{ACTIVE} or
undefined, respectively; unless @env{GOMP_SPINCOUNT} is lower
or @env{OMP_WAIT_POLICY} is @code{PASSIVE}.

@item @emph{See also}:
@ref{OMP_WAIT_POLICY}
@node GOMP_RTEMS_THREAD_POOLS
@section @env{GOMP_RTEMS_THREAD_POOLS} -- Set the RTEMS specific thread pools
@cindex Environment Variable
@cindex Implementation specific setting

@item @emph{Description}:
This environment variable is only used on the RTEMS real-time operating system.
It determines the scheduler instance specific thread pools. The format for
@env{GOMP_RTEMS_THREAD_POOLS} is a list of optional
@code{<thread-pool-count>[$<priority>]@@<scheduler-name>} configurations
separated by @code{:} where:

@item @code{<thread-pool-count>} is the thread pool count for this scheduler
instance.
@item @code{$<priority>} is an optional priority for the worker threads of a
thread pool according to @code{pthread_setschedparam}. In case a priority
value is omitted, then a worker thread will inherit the priority of the OpenMP
primary thread that created it. The priority of the worker thread is not
changed after creation, even if a new OpenMP primary thread using the worker has
a different priority.
@item @code{@@<scheduler-name>} is the scheduler instance name according to the
RTEMS application configuration.

In case no thread pool configuration is specified for a scheduler instance,
then each OpenMP primary thread of this scheduler instance will use its own
dynamically allocated thread pool. To limit the worker thread count of the
thread pools, each OpenMP primary thread must call @code{omp_set_num_threads}.

@item @emph{Example}:
Let's suppose we have three scheduler instances @code{IO}, @code{WRK0}, and
@code{WRK1} with @env{GOMP_RTEMS_THREAD_POOLS} set to
@code{"1@@WRK0:3$4@@WRK1"}. Then there are no thread pool restrictions for
scheduler instance @code{IO}. In the scheduler instance @code{WRK0} there is
one thread pool available. Since no priority is specified for this scheduler
instance, the worker thread inherits the priority of the OpenMP primary thread
that created it. In the scheduler instance @code{WRK1} there are three thread
pools available and their worker threads run at priority four.
@c ---------------------------------------------------------------------
@c Enabling OpenACC
@c ---------------------------------------------------------------------

@node Enabling OpenACC
@chapter Enabling OpenACC

To activate the OpenACC extensions for C/C++ and Fortran, the compile-time
flag @option{-fopenacc} must be specified. This enables the OpenACC directive
@code{#pragma acc} in C/C++ and @code{!$acc} directives in free form,
@code{c$acc}, @code{*$acc} and @code{!$acc} directives in fixed form,
@code{!$} conditional compilation sentinels in free form and @code{c$},
@code{*$} and @code{!$} sentinels in fixed form, for Fortran. The flag also
arranges for automatic linking of the OpenACC runtime library
(@ref{OpenACC Runtime Library Routines}).

See @uref{https://gcc.gnu.org/wiki/OpenACC} for more information.

A complete description of all OpenACC directives accepted may be found in
the @uref{https://www.openacc.org, OpenACC} Application Programming
Interface manual, version 2.6.
@c ---------------------------------------------------------------------
@c OpenACC Runtime Library Routines
@c ---------------------------------------------------------------------

@node OpenACC Runtime Library Routines
@chapter OpenACC Runtime Library Routines

The runtime routines described here are defined by section 3 of the OpenACC
specifications in version 2.6.
They have C linkage, and do not throw exceptions.
Generally, they are available only for the host, with the exception of
@code{acc_on_device}, which is available for both the host and the
acceleration device.

* acc_get_num_devices:: Get number of devices for the given device
                        type.
* acc_set_device_type:: Set type of device accelerator to use.
* acc_get_device_type:: Get type of device accelerator to be used.
* acc_set_device_num:: Set device number to use.
* acc_get_device_num:: Get device number to be used.
* acc_get_property:: Get device property.
* acc_async_test:: Tests for completion of a specific asynchronous
                        operation.
* acc_async_test_all:: Tests for completion of all asynchronous
                        operations.
* acc_wait:: Wait for completion of a specific asynchronous
                        operation.
* acc_wait_all:: Waits for completion of all asynchronous
                        operations.
* acc_wait_all_async:: Wait for completion of all asynchronous
                        operations.
* acc_wait_async:: Wait for completion of asynchronous operations.
* acc_init:: Initialize runtime for a specific device type.
* acc_shutdown:: Shuts down the runtime for a specific device
                        type.
* acc_on_device:: Whether executing on a particular device
* acc_malloc:: Allocate device memory.
* acc_free:: Free device memory.
* acc_copyin:: Allocate device memory and copy host memory to
                        it.
* acc_present_or_copyin:: If the data is not present on the device,
                        allocate device memory and copy from host
                        memory.
* acc_create:: Allocate device memory and map it to host
                        memory.
* acc_present_or_create:: If the data is not present on the device,
                        allocate device memory and map it to host
                        memory.
* acc_copyout:: Copy device memory to host memory.
* acc_delete:: Free device memory.
* acc_update_device:: Update device memory from mapped host memory.
* acc_update_self:: Update host memory from mapped device memory.
* acc_map_data:: Map previously allocated device memory to host
                        memory.
* acc_unmap_data:: Unmap device memory from host memory.
* acc_deviceptr:: Get device pointer associated with specific
                        host address.
* acc_hostptr:: Get host pointer associated with specific
                        device address.
* acc_is_present:: Indicate whether host variable / array is
                        present on device.
* acc_memcpy_to_device:: Copy host memory to device memory.
* acc_memcpy_from_device:: Copy device memory to host memory.
* acc_attach:: Let device pointer point to device-pointer target.
* acc_detach:: Let device pointer point to host-pointer target.

API routines for target platforms.

* acc_get_current_cuda_device:: Get CUDA device handle.
* acc_get_current_cuda_context:: Get CUDA context handle.
* acc_get_cuda_stream:: Get CUDA stream handle.
* acc_set_cuda_stream:: Set CUDA stream handle.

API routines for the OpenACC Profiling Interface.

* acc_prof_register:: Register callbacks.
* acc_prof_unregister:: Unregister callbacks.
* acc_prof_lookup:: Obtain inquiry functions.
* acc_register_library:: Library registration.
@node acc_get_num_devices
@section @code{acc_get_num_devices} -- Get number of devices for given device type

@item @emph{Description}
This function returns a value indicating the number of devices available
for the device type specified in @var{devicetype}.

@multitable @columnfractions .20 .80
@item @emph{Prototype}: @tab @code{int acc_get_num_devices(acc_device_t devicetype);}

@item @emph{Fortran}:
@multitable @columnfractions .20 .80
@item @emph{Interface}: @tab @code{integer function acc_get_num_devices(devicetype)}
@item @tab @code{integer(kind=acc_device_kind) devicetype}

@item @emph{Reference}:
@uref{https://www.openacc.org, OpenACC specification v2.6}, section
@node acc_set_device_type
@section @code{acc_set_device_type} -- Set type of device accelerator to use.

@item @emph{Description}
This function indicates to the runtime library which device type, specified
in @var{devicetype}, to use when executing a parallel or kernels region.

@multitable @columnfractions .20 .80
@item @emph{Prototype}: @tab @code{void acc_set_device_type(acc_device_t devicetype);}

@item @emph{Fortran}:
@multitable @columnfractions .20 .80
@item @emph{Interface}: @tab @code{subroutine acc_set_device_type(devicetype)}
@item @tab @code{integer(kind=acc_device_kind) devicetype}

@item @emph{Reference}:
@uref{https://www.openacc.org, OpenACC specification v2.6}, section
2647 @node acc_get_device_type
2648 @section @code{acc_get_device_type} -- Get type of device accelerator to be used.
2650 @item @emph{Description}
2651 This function returns what device type will be used when executing a
2652 parallel or kernels region.
2654 This function returns @code{acc_device_none} if
2655 @code{acc_get_device_type} is called from
2656 @code{acc_ev_device_init_start}, @code{acc_ev_device_init_end}
2657 callbacks of the OpenACC Profiling Interface (@ref{OpenACC Profiling
2658 Interface}), that is, if the device is currently being initialized.
2661 @multitable @columnfractions .20 .80
2662 @item @emph{Prototype}: @tab @code{acc_device_t acc_get_device_type(void);}
2665 @item @emph{Fortran}:
2666 @multitable @columnfractions .20 .80
2667 @item @emph{Interface}: @tab @code{function acc_get_device_type()}
2668 @item @tab @code{integer(kind=acc_device_kind) acc_get_device_type}
2671 @item @emph{Reference}:
2672 @uref{https://www.openacc.org, OpenACC specification v2.6}, section
2678 @node acc_set_device_num
2679 @section @code{acc_set_device_num} -- Set device number to use.
2681 @item @emph{Description}
2682 This function indicates to the runtime which device number, specified by
2683 @var{devicenum} and associated with the specified device type
2684 @var{devicetype}, to use.
2687 @multitable @columnfractions .20 .80
2688 @item @emph{Prototype}: @tab @code{acc_set_device_num(int devicenum, acc_device_t devicetype);}
2691 @item @emph{Fortran}:
2692 @multitable @columnfractions .20 .80
2693 @item @emph{Interface}: @tab @code{subroutine acc_set_device_num(devicenum, devicetype)}
2694 @item @tab @code{integer devicenum}
2695 @item @tab @code{integer(kind=acc_device_kind) devicetype}
2698 @item @emph{Reference}:
2699 @uref{https://www.openacc.org, OpenACC specification v2.6}, section
2705 @node acc_get_device_num
2706 @section @code{acc_get_device_num} -- Get device number to be used.
2708 @item @emph{Description}
2709 This function returns which device number, associated with the specified
2710 device type @var{devicetype}, will be used when executing a parallel or
kernels region.
2714 @multitable @columnfractions .20 .80
2715 @item @emph{Prototype}: @tab @code{int acc_get_device_num(acc_device_t devicetype);}
2718 @item @emph{Fortran}:
2719 @multitable @columnfractions .20 .80
2720 @item @emph{Interface}: @tab @code{function acc_get_device_num(devicetype)}
2721 @item @tab @code{integer(kind=acc_device_kind) devicetype}
2722 @item @tab @code{integer acc_get_device_num}
2725 @item @emph{Reference}:
2726 @uref{https://www.openacc.org, OpenACC specification v2.6}, section
2732 @node acc_get_property
2733 @section @code{acc_get_property} -- Get device property.
2734 @cindex acc_get_property
2735 @cindex acc_get_property_string
2737 @item @emph{Description}
2738 These routines return the value of the specified @var{property} for the
2739 device being queried according to @var{devicenum} and @var{devicetype}.
2740 Integer-valued and string-valued properties are returned by
2741 @code{acc_get_property} and @code{acc_get_property_string} respectively.
2742 The Fortran @code{acc_get_property_string} subroutine returns the string
2743 retrieved in its fourth argument, while the remaining entry points are
2744 functions that pass the return value as their result.
2746 Note for Fortran only: the OpenACC technical committee corrected and, hence,
2747 modified the interface introduced in OpenACC 2.6. The kind-value parameter
2748 @code{acc_device_property} has been renamed to @code{acc_device_property_kind}
2749 for consistency, and the return type of the @code{acc_get_property} function is
2750 now a @code{c_size_t} integer instead of an @code{acc_device_property} integer.
2751 The parameter @code{acc_device_property} will continue to be provided,
2752 but might be removed in a future version of GCC.
2755 @multitable @columnfractions .20 .80
2756 @item @emph{Prototype}: @tab @code{size_t acc_get_property(int devicenum, acc_device_t devicetype, acc_device_property_t property);}
2757 @item @emph{Prototype}: @tab @code{const char *acc_get_property_string(int devicenum, acc_device_t devicetype, acc_device_property_t property);}
2760 @item @emph{Fortran}:
2761 @multitable @columnfractions .20 .80
2762 @item @emph{Interface}: @tab @code{function acc_get_property(devicenum, devicetype, property)}
2763 @item @emph{Interface}: @tab @code{subroutine acc_get_property_string(devicenum, devicetype, property, string)}
2764 @item @tab @code{use ISO_C_Binding, only: c_size_t}
2765 @item @tab @code{integer devicenum}
2766 @item @tab @code{integer(kind=acc_device_kind) devicetype}
2767 @item @tab @code{integer(kind=acc_device_property_kind) property}
2768 @item @tab @code{integer(kind=c_size_t) acc_get_property}
2769 @item @tab @code{character(*) string}
2772 @item @emph{Reference}:
2773 @uref{https://www.openacc.org, OpenACC specification v2.6}, section
2779 @node acc_async_test
2780 @section @code{acc_async_test} -- Test for completion of a specific asynchronous operation.
2782 @item @emph{Description}
2783 This function tests for completion of the asynchronous operation specified
2784 in @var{arg}. In C/C++, a non-zero value is returned to indicate that the
2785 specified asynchronous operation has completed, while Fortran returns
2786 @code{true}. If the asynchronous operation has not completed, C/C++ returns
2787 zero and Fortran returns @code{false}.
2790 @multitable @columnfractions .20 .80
2791 @item @emph{Prototype}: @tab @code{int acc_async_test(int arg);}
2794 @item @emph{Fortran}:
2795 @multitable @columnfractions .20 .80
2796 @item @emph{Interface}: @tab @code{function acc_async_test(arg)}
2797 @item @tab @code{integer(kind=acc_handle_kind) arg}
2798 @item @tab @code{logical acc_async_test}
2801 @item @emph{Reference}:
2802 @uref{https://www.openacc.org, OpenACC specification v2.6}, section
2808 @node acc_async_test_all
2809 @section @code{acc_async_test_all} -- Test for completion of all asynchronous operations.
2811 @item @emph{Description}
2812 This function tests for completion of all asynchronous operations.
2813 In C/C++, a non-zero value is returned to indicate that all asynchronous
2814 operations have completed, while Fortran returns @code{true}. If
2815 any asynchronous operation has not completed, C/C++ returns zero and
2816 Fortran returns @code{false}.
2819 @multitable @columnfractions .20 .80
2820 @item @emph{Prototype}: @tab @code{int acc_async_test_all(void);}
2823 @item @emph{Fortran}:
2824 @multitable @columnfractions .20 .80
2825 @item @emph{Interface}: @tab @code{function acc_async_test_all()}
2826 @item @tab @code{logical acc_async_test_all}
2829 @item @emph{Reference}:
2830 @uref{https://www.openacc.org, OpenACC specification v2.6}, section
2837 @section @code{acc_wait} -- Wait for completion of a specific asynchronous operation.
2839 @item @emph{Description}
2840 This function waits for completion of the asynchronous operation
2841 specified in @var{arg}.
2844 @multitable @columnfractions .20 .80
2845 @item @emph{Prototype}: @tab @code{acc_wait(arg);}
2846 @item @emph{Prototype (OpenACC 1.0 compatibility)}: @tab @code{acc_async_wait(arg);}
2849 @item @emph{Fortran}:
2850 @multitable @columnfractions .20 .80
2851 @item @emph{Interface}: @tab @code{subroutine acc_wait(arg)}
2852 @item @tab @code{integer(acc_handle_kind) arg}
2853 @item @emph{Interface (OpenACC 1.0 compatibility)}: @tab @code{subroutine acc_async_wait(arg)}
2854 @item @tab @code{integer(acc_handle_kind) arg}
2857 @item @emph{Reference}:
2858 @uref{https://www.openacc.org, OpenACC specification v2.6}, section
2865 @section @code{acc_wait_all} -- Wait for completion of all asynchronous operations.
2867 @item @emph{Description}
2868 This function waits for the completion of all asynchronous operations.
2871 @multitable @columnfractions .20 .80
2872 @item @emph{Prototype}: @tab @code{acc_wait_all(void);}
2873 @item @emph{Prototype (OpenACC 1.0 compatibility)}: @tab @code{acc_async_wait_all(void);}
2876 @item @emph{Fortran}:
2877 @multitable @columnfractions .20 .80
2878 @item @emph{Interface}: @tab @code{subroutine acc_wait_all()}
2879 @item @emph{Interface (OpenACC 1.0 compatibility)}: @tab @code{subroutine acc_async_wait_all()}
2882 @item @emph{Reference}:
2883 @uref{https://www.openacc.org, OpenACC specification v2.6}, section
2889 @node acc_wait_all_async
2890 @section @code{acc_wait_all_async} -- Wait for completion of all asynchronous operations.
2892 @item @emph{Description}
2893 This function enqueues a wait operation on the queue @var{async} for any
2894 and all asynchronous operations that have been previously enqueued on
any queue.
2898 @multitable @columnfractions .20 .80
2899 @item @emph{Prototype}: @tab @code{acc_wait_all_async(int async);}
2902 @item @emph{Fortran}:
2903 @multitable @columnfractions .20 .80
2904 @item @emph{Interface}: @tab @code{subroutine acc_wait_all_async(async)}
2905 @item @tab @code{integer(acc_handle_kind) async}
2908 @item @emph{Reference}:
2909 @uref{https://www.openacc.org, OpenACC specification v2.6}, section
2915 @node acc_wait_async
2916 @section @code{acc_wait_async} -- Wait for completion of asynchronous operations.
2918 @item @emph{Description}
2919 This function enqueues a wait operation on queue @var{async} for any and all
2920 asynchronous operations enqueued on queue @var{arg}.
2923 @multitable @columnfractions .20 .80
2924 @item @emph{Prototype}: @tab @code{acc_wait_async(int arg, int async);}
2927 @item @emph{Fortran}:
2928 @multitable @columnfractions .20 .80
2929 @item @emph{Interface}: @tab @code{subroutine acc_wait_async(arg, async)}
2930 @item @tab @code{integer(acc_handle_kind) arg, async}
2933 @item @emph{Reference}:
2934 @uref{https://www.openacc.org, OpenACC specification v2.6}, section
2941 @section @code{acc_init} -- Initialize runtime for a specific device type.
2943 @item @emph{Description}
2944 This function initializes the runtime for the device type specified in
@var{devicetype}.
2948 @multitable @columnfractions .20 .80
2949 @item @emph{Prototype}: @tab @code{acc_init(acc_device_t devicetype);}
2952 @item @emph{Fortran}:
2953 @multitable @columnfractions .20 .80
2954 @item @emph{Interface}: @tab @code{subroutine acc_init(devicetype)}
2955 @item @tab @code{integer(acc_device_kind) devicetype}
2958 @item @emph{Reference}:
2959 @uref{https://www.openacc.org, OpenACC specification v2.6}, section
2966 @section @code{acc_shutdown} -- Shut down the runtime for a specific device type.
2968 @item @emph{Description}
2969 This function shuts down the runtime for the device type specified in
@var{devicetype}.
2973 @multitable @columnfractions .20 .80
2974 @item @emph{Prototype}: @tab @code{acc_shutdown(acc_device_t devicetype);}
2977 @item @emph{Fortran}:
2978 @multitable @columnfractions .20 .80
2979 @item @emph{Interface}: @tab @code{subroutine acc_shutdown(devicetype)}
2980 @item @tab @code{integer(acc_device_kind) devicetype}
2983 @item @emph{Reference}:
2984 @uref{https://www.openacc.org, OpenACC specification v2.6}, section
2991 @section @code{acc_on_device} -- Whether executing on a particular device
2993 @item @emph{Description}:
2994 This function returns whether the program is executing on a particular
2995 device specified in @var{devicetype}. In C/C++, a non-zero value is
2996 returned to indicate that the program is executing on the specified device
2997 type, while Fortran returns @code{true}. If the program is not executing
2998 on the specified device type, C/C++ returns zero and Fortran returns
2999 @code{false}.
3002 @multitable @columnfractions .20 .80
3003 @item @emph{Prototype}: @tab @code{int acc_on_device(acc_device_t devicetype);}
3006 @item @emph{Fortran}:
3007 @multitable @columnfractions .20 .80
3008 @item @emph{Interface}: @tab @code{function acc_on_device(devicetype)}
3009 @item @tab @code{integer(acc_device_kind) devicetype}
3010 @item @tab @code{logical acc_on_device}
3014 @item @emph{Reference}:
3015 @uref{https://www.openacc.org, OpenACC specification v2.6}, section
3022 @section @code{acc_malloc} -- Allocate device memory.
3024 @item @emph{Description}
3025 This function allocates @var{len} bytes of device memory. It returns
3026 the device address of the allocated memory.
3029 @multitable @columnfractions .20 .80
3030 @item @emph{Prototype}: @tab @code{d_void *acc_malloc(size_t len);}
3033 @item @emph{Reference}:
3034 @uref{https://www.openacc.org, OpenACC specification v2.6}, section
3041 @section @code{acc_free} -- Free device memory.
3043 @item @emph{Description}
3044 Free previously allocated device memory at the device address @var{a}.
3047 @multitable @columnfractions .20 .80
3048 @item @emph{Prototype}: @tab @code{acc_free(d_void *a);}
3051 @item @emph{Reference}:
3052 @uref{https://www.openacc.org, OpenACC specification v2.6}, section
3059 @section @code{acc_copyin} -- Allocate device memory and copy host memory to it.
3061 @item @emph{Description}
3062 In C/C++, this function allocates @var{len} bytes of device memory
3063 and maps it to the specified host address in @var{a}. The device
3064 address of the newly allocated device memory is returned.
3066 In Fortran, two forms are supported. In the first form, @var{a} specifies
3067 a contiguous array section. In the second form, @var{a} specifies a
3068 variable or array element and @var{len} specifies the length in bytes.
3071 @multitable @columnfractions .20 .80
3072 @item @emph{Prototype}: @tab @code{void *acc_copyin(h_void *a, size_t len);}
3073 @item @emph{Prototype}: @tab @code{void *acc_copyin_async(h_void *a, size_t len, int async);}
3076 @item @emph{Fortran}:
3077 @multitable @columnfractions .20 .80
3078 @item @emph{Interface}: @tab @code{subroutine acc_copyin(a)}
3079 @item @tab @code{type, dimension(:[,:]...) :: a}
3080 @item @emph{Interface}: @tab @code{subroutine acc_copyin(a, len)}
3081 @item @tab @code{type, dimension(:[,:]...) :: a}
3082 @item @tab @code{integer len}
3083 @item @emph{Interface}: @tab @code{subroutine acc_copyin_async(a, async)}
3084 @item @tab @code{type, dimension(:[,:]...) :: a}
3085 @item @tab @code{integer(acc_handle_kind) :: async}
3086 @item @emph{Interface}: @tab @code{subroutine acc_copyin_async(a, len, async)}
3087 @item @tab @code{type, dimension(:[,:]...) :: a}
3088 @item @tab @code{integer len}
3089 @item @tab @code{integer(acc_handle_kind) :: async}
3092 @item @emph{Reference}:
3093 @uref{https://www.openacc.org, OpenACC specification v2.6}, section
3099 @node acc_present_or_copyin
3100 @section @code{acc_present_or_copyin} -- If the data is not present on the device, allocate device memory and copy from host memory.
3102 @item @emph{Description}
3103 This function tests whether the host data specified by @var{a} and of length
3104 @var{len} is present on the device. If it is not present, then device memory
3105 will be allocated and the host memory copied. The device address of
3106 the newly allocated device memory is returned.
3108 In Fortran, two forms are supported. In the first form, @var{a} specifies
3109 a contiguous array section. In the second form, @var{a} specifies a variable
3110 or array element and @var{len} specifies the length in bytes.
3112 Note that @code{acc_present_or_copyin} and @code{acc_pcopyin} exist for
3113 backward compatibility with OpenACC 2.0; use @ref{acc_copyin} instead.
3116 @multitable @columnfractions .20 .80
3117 @item @emph{Prototype}: @tab @code{void *acc_present_or_copyin(h_void *a, size_t len);}
3118 @item @emph{Prototype}: @tab @code{void *acc_pcopyin(h_void *a, size_t len);}
3121 @item @emph{Fortran}:
3122 @multitable @columnfractions .20 .80
3123 @item @emph{Interface}: @tab @code{subroutine acc_present_or_copyin(a)}
3124 @item @tab @code{type, dimension(:[,:]...) :: a}
3125 @item @emph{Interface}: @tab @code{subroutine acc_present_or_copyin(a, len)}
3126 @item @tab @code{type, dimension(:[,:]...) :: a}
3127 @item @tab @code{integer len}
3128 @item @emph{Interface}: @tab @code{subroutine acc_pcopyin(a)}
3129 @item @tab @code{type, dimension(:[,:]...) :: a}
3130 @item @emph{Interface}: @tab @code{subroutine acc_pcopyin(a, len)}
3131 @item @tab @code{type, dimension(:[,:]...) :: a}
3132 @item @tab @code{integer len}
3135 @item @emph{Reference}:
3136 @uref{https://www.openacc.org, OpenACC specification v2.6}, section
3143 @section @code{acc_create} -- Allocate device memory and map it to host memory.
3145 @item @emph{Description}
3146 This function allocates device memory and maps it to host memory specified
3147 by the host address @var{a} with a length of @var{len} bytes. In C/C++,
3148 the function returns the device address of the allocated device memory.
3150 In Fortran, two forms are supported. In the first form, @var{a} specifies
3151 a contiguous array section. In the second form, @var{a} specifies a variable
3152 or array element and @var{len} specifies the length in bytes.
3155 @multitable @columnfractions .20 .80
3156 @item @emph{Prototype}: @tab @code{void *acc_create(h_void *a, size_t len);}
3157 @item @emph{Prototype}: @tab @code{void *acc_create_async(h_void *a, size_t len, int async);}
3160 @item @emph{Fortran}:
3161 @multitable @columnfractions .20 .80
3162 @item @emph{Interface}: @tab @code{subroutine acc_create(a)}
3163 @item @tab @code{type, dimension(:[,:]...) :: a}
3164 @item @emph{Interface}: @tab @code{subroutine acc_create(a, len)}
3165 @item @tab @code{type, dimension(:[,:]...) :: a}
3166 @item @tab @code{integer len}
3167 @item @emph{Interface}: @tab @code{subroutine acc_create_async(a, async)}
3168 @item @tab @code{type, dimension(:[,:]...) :: a}
3169 @item @tab @code{integer(acc_handle_kind) :: async}
3170 @item @emph{Interface}: @tab @code{subroutine acc_create_async(a, len, async)}
3171 @item @tab @code{type, dimension(:[,:]...) :: a}
3172 @item @tab @code{integer len}
3173 @item @tab @code{integer(acc_handle_kind) :: async}
3176 @item @emph{Reference}:
3177 @uref{https://www.openacc.org, OpenACC specification v2.6}, section
3183 @node acc_present_or_create
3184 @section @code{acc_present_or_create} -- If the data is not present on the device, allocate device memory and map it to host memory.
3186 @item @emph{Description}
3187 This function tests whether the host data specified by @var{a} and of length
3188 @var{len} is present on the device. If it is not present, then device memory
3189 will be allocated and mapped to host memory. In C/C++, the device address
3190 of the newly allocated device memory is returned.
3192 In Fortran, two forms are supported. In the first form, @var{a} specifies
3193 a contiguous array section. In the second form, @var{a} specifies a variable
3194 or array element and @var{len} specifies the length in bytes.
3196 Note that @code{acc_present_or_create} and @code{acc_pcreate} exist for
3197 backward compatibility with OpenACC 2.0; use @ref{acc_create} instead.
3200 @multitable @columnfractions .20 .80
3201 @item @emph{Prototype}: @tab @code{void *acc_present_or_create(h_void *a, size_t len)}
3202 @item @emph{Prototype}: @tab @code{void *acc_pcreate(h_void *a, size_t len)}
3205 @item @emph{Fortran}:
3206 @multitable @columnfractions .20 .80
3207 @item @emph{Interface}: @tab @code{subroutine acc_present_or_create(a)}
3208 @item @tab @code{type, dimension(:[,:]...) :: a}
3209 @item @emph{Interface}: @tab @code{subroutine acc_present_or_create(a, len)}
3210 @item @tab @code{type, dimension(:[,:]...) :: a}
3211 @item @tab @code{integer len}
3212 @item @emph{Interface}: @tab @code{subroutine acc_pcreate(a)}
3213 @item @tab @code{type, dimension(:[,:]...) :: a}
3214 @item @emph{Interface}: @tab @code{subroutine acc_pcreate(a, len)}
3215 @item @tab @code{type, dimension(:[,:]...) :: a}
3216 @item @tab @code{integer len}
3219 @item @emph{Reference}:
3220 @uref{https://www.openacc.org, OpenACC specification v2.6}, section
3227 @section @code{acc_copyout} -- Copy device memory to host memory.
3229 @item @emph{Description}
3230 In C/C++, this function copies mapped device memory to the host memory
3231 specified by the host address @var{a} for a length of @var{len} bytes.
3233 In Fortran, two forms are supported. In the first form, @var{a} specifies
3234 a contiguous array section. In the second form, @var{a} specifies a variable
3235 or array element and @var{len} specifies the length in bytes.
3238 @multitable @columnfractions .20 .80
3239 @item @emph{Prototype}: @tab @code{acc_copyout(h_void *a, size_t len);}
3240 @item @emph{Prototype}: @tab @code{acc_copyout_async(h_void *a, size_t len, int async);}
3241 @item @emph{Prototype}: @tab @code{acc_copyout_finalize(h_void *a, size_t len);}
3242 @item @emph{Prototype}: @tab @code{acc_copyout_finalize_async(h_void *a, size_t len, int async);}
3245 @item @emph{Fortran}:
3246 @multitable @columnfractions .20 .80
3247 @item @emph{Interface}: @tab @code{subroutine acc_copyout(a)}
3248 @item @tab @code{type, dimension(:[,:]...) :: a}
3249 @item @emph{Interface}: @tab @code{subroutine acc_copyout(a, len)}
3250 @item @tab @code{type, dimension(:[,:]...) :: a}
3251 @item @tab @code{integer len}
3252 @item @emph{Interface}: @tab @code{subroutine acc_copyout_async(a, async)}
3253 @item @tab @code{type, dimension(:[,:]...) :: a}
3254 @item @tab @code{integer(acc_handle_kind) :: async}
3255 @item @emph{Interface}: @tab @code{subroutine acc_copyout_async(a, len, async)}
3256 @item @tab @code{type, dimension(:[,:]...) :: a}
3257 @item @tab @code{integer len}
3258 @item @tab @code{integer(acc_handle_kind) :: async}
3259 @item @emph{Interface}: @tab @code{subroutine acc_copyout_finalize(a)}
3260 @item @tab @code{type, dimension(:[,:]...) :: a}
3261 @item @emph{Interface}: @tab @code{subroutine acc_copyout_finalize(a, len)}
3262 @item @tab @code{type, dimension(:[,:]...) :: a}
3263 @item @tab @code{integer len}
3264 @item @emph{Interface}: @tab @code{subroutine acc_copyout_finalize_async(a, async)}
3265 @item @tab @code{type, dimension(:[,:]...) :: a}
3266 @item @tab @code{integer(acc_handle_kind) :: async}
3267 @item @emph{Interface}: @tab @code{subroutine acc_copyout_finalize_async(a, len, async)}
3268 @item @tab @code{type, dimension(:[,:]...) :: a}
3269 @item @tab @code{integer len}
3270 @item @tab @code{integer(acc_handle_kind) :: async}
3273 @item @emph{Reference}:
3274 @uref{https://www.openacc.org, OpenACC specification v2.6}, section
3281 @section @code{acc_delete} -- Free device memory.
3283 @item @emph{Description}
3284 This function frees previously allocated device memory associated with
3285 the host memory specified by the host address @var{a} and a length of
@var{len} bytes.
3287 In Fortran, two forms are supported. In the first form, @var{a} specifies
3288 a contiguous array section. In the second form, @var{a} specifies a variable
3289 or array element and @var{len} specifies the length in bytes.
3292 @multitable @columnfractions .20 .80
3293 @item @emph{Prototype}: @tab @code{acc_delete(h_void *a, size_t len);}
3294 @item @emph{Prototype}: @tab @code{acc_delete_async(h_void *a, size_t len, int async);}
3295 @item @emph{Prototype}: @tab @code{acc_delete_finalize(h_void *a, size_t len);}
3296 @item @emph{Prototype}: @tab @code{acc_delete_finalize_async(h_void *a, size_t len, int async);}
3299 @item @emph{Fortran}:
3300 @multitable @columnfractions .20 .80
3301 @item @emph{Interface}: @tab @code{subroutine acc_delete(a)}
3302 @item @tab @code{type, dimension(:[,:]...) :: a}
3303 @item @emph{Interface}: @tab @code{subroutine acc_delete(a, len)}
3304 @item @tab @code{type, dimension(:[,:]...) :: a}
3305 @item @tab @code{integer len}
3306 @item @emph{Interface}: @tab @code{subroutine acc_delete_async(a, async)}
3307 @item @tab @code{type, dimension(:[,:]...) :: a}
3308 @item @tab @code{integer(acc_handle_kind) :: async}
3309 @item @emph{Interface}: @tab @code{subroutine acc_delete_async(a, len, async)}
3310 @item @tab @code{type, dimension(:[,:]...) :: a}
3311 @item @tab @code{integer len}
3312 @item @tab @code{integer(acc_handle_kind) :: async}
3313 @item @emph{Interface}: @tab @code{subroutine acc_delete_finalize(a)}
3314 @item @tab @code{type, dimension(:[,:]...) :: a}
3315 @item @emph{Interface}: @tab @code{subroutine acc_delete_finalize(a, len)}
3316 @item @tab @code{type, dimension(:[,:]...) :: a}
3317 @item @tab @code{integer len}
3318 @item @emph{Interface}: @tab @code{subroutine acc_delete_finalize_async(a, async)}
3319 @item @tab @code{type, dimension(:[,:]...) :: a}
3320 @item @tab @code{integer(acc_handle_kind) :: async}
3321 @item @emph{Interface}: @tab @code{subroutine acc_delete_finalize_async(a, len, async)}
3322 @item @tab @code{type, dimension(:[,:]...) :: a}
3323 @item @tab @code{integer len}
3324 @item @tab @code{integer(acc_handle_kind) :: async}
3327 @item @emph{Reference}:
3328 @uref{https://www.openacc.org, OpenACC specification v2.6}, section
3334 @node acc_update_device
3335 @section @code{acc_update_device} -- Update device memory from mapped host memory.
3337 @item @emph{Description}
3338 This function updates the device copy from the previously mapped host memory.
3339 The host memory is specified with the host address @var{a} and a length of
@var{len} bytes.
3342 In Fortran, two forms are supported. In the first form, @var{a} specifies
3343 a contiguous array section. In the second form, @var{a} specifies a variable
3344 or array element and @var{len} specifies the length in bytes.
3347 @multitable @columnfractions .20 .80
3348 @item @emph{Prototype}: @tab @code{acc_update_device(h_void *a, size_t len);}
3349 @item @emph{Prototype}: @tab @code{acc_update_device_async(h_void *a, size_t len, int async);}
3352 @item @emph{Fortran}:
3353 @multitable @columnfractions .20 .80
3354 @item @emph{Interface}: @tab @code{subroutine acc_update_device(a)}
3355 @item @tab @code{type, dimension(:[,:]...) :: a}
3356 @item @emph{Interface}: @tab @code{subroutine acc_update_device(a, len)}
3357 @item @tab @code{type, dimension(:[,:]...) :: a}
3358 @item @tab @code{integer len}
3359 @item @emph{Interface}: @tab @code{subroutine acc_update_device_async(a, async)}
3360 @item @tab @code{type, dimension(:[,:]...) :: a}
3361 @item @tab @code{integer(acc_handle_kind) :: async}
3362 @item @emph{Interface}: @tab @code{subroutine acc_update_device_async(a, len, async)}
3363 @item @tab @code{type, dimension(:[,:]...) :: a}
3364 @item @tab @code{integer len}
3365 @item @tab @code{integer(acc_handle_kind) :: async}
3368 @item @emph{Reference}:
3369 @uref{https://www.openacc.org, OpenACC specification v2.6}, section
3375 @node acc_update_self
3376 @section @code{acc_update_self} -- Update host memory from mapped device memory.
3378 @item @emph{Description}
3379 This function updates the host copy from the previously mapped device memory.
3380 The host memory is specified with the host address @var{a} and a length of
@var{len} bytes.
3383 In Fortran, two forms are supported. In the first form, @var{a} specifies
3384 a contiguous array section. In the second form, @var{a} specifies a variable
3385 or array element and @var{len} specifies the length in bytes.
3388 @multitable @columnfractions .20 .80
3389 @item @emph{Prototype}: @tab @code{acc_update_self(h_void *a, size_t len);}
3390 @item @emph{Prototype}: @tab @code{acc_update_self_async(h_void *a, size_t len, int async);}
3393 @item @emph{Fortran}:
3394 @multitable @columnfractions .20 .80
3395 @item @emph{Interface}: @tab @code{subroutine acc_update_self(a)}
3396 @item @tab @code{type, dimension(:[,:]...) :: a}
3397 @item @emph{Interface}: @tab @code{subroutine acc_update_self(a, len)}
3398 @item @tab @code{type, dimension(:[,:]...) :: a}
3399 @item @tab @code{integer len}
3400 @item @emph{Interface}: @tab @code{subroutine acc_update_self_async(a, async)}
3401 @item @tab @code{type, dimension(:[,:]...) :: a}
3402 @item @tab @code{integer(acc_handle_kind) :: async}
3403 @item @emph{Interface}: @tab @code{subroutine acc_update_self_async(a, len, async)}
3404 @item @tab @code{type, dimension(:[,:]...) :: a}
3405 @item @tab @code{integer len}
3406 @item @tab @code{integer(acc_handle_kind) :: async}
3409 @item @emph{Reference}:
3410 @uref{https://www.openacc.org, OpenACC specification v2.6}, section
3417 @section @code{acc_map_data} -- Map previously allocated device memory to host memory.
3419 @item @emph{Description}
3420 This function maps previously allocated device and host memory. The device
3421 memory is specified with the device address @var{d}. The host memory is
3422 specified with the host address @var{h} and a length of @var{len} bytes.
3425 @multitable @columnfractions .20 .80
3426 @item @emph{Prototype}: @tab @code{acc_map_data(h_void *h, d_void *d, size_t len);}
3429 @item @emph{Reference}:
3430 @uref{https://www.openacc.org, OpenACC specification v2.6}, section
3436 @node acc_unmap_data
3437 @section @code{acc_unmap_data} -- Unmap device memory from host memory.
3439 @item @emph{Description}
3440 This function unmaps previously mapped device and host memory. The host
3441 memory is specified by the host address @var{h}.
3444 @multitable @columnfractions .20 .80
3445 @item @emph{Prototype}: @tab @code{acc_unmap_data(h_void *h);}
3448 @item @emph{Reference}:
3449 @uref{https://www.openacc.org, OpenACC specification v2.6}, section
3456 @section @code{acc_deviceptr} -- Get device pointer associated with specific host address.
3458 @item @emph{Description}
3459 This function returns the device address that has been mapped to the
3460 host address specified by @var{h}.
3463 @multitable @columnfractions .20 .80
3464 @item @emph{Prototype}: @tab @code{void *acc_deviceptr(h_void *h);}
3467 @item @emph{Reference}:
3468 @uref{https://www.openacc.org, OpenACC specification v2.6}, section
3475 @section @code{acc_hostptr} -- Get host pointer associated with specific device address.
3477 @item @emph{Description}
3478 This function returns the host address that has been mapped to the
3479 device address specified by @var{d}.
3482 @multitable @columnfractions .20 .80
3483 @item @emph{Prototype}: @tab @code{void *acc_hostptr(d_void *d);}
3486 @item @emph{Reference}:
3487 @uref{https://www.openacc.org, OpenACC specification v2.6}, section
3493 @node acc_is_present
3494 @section @code{acc_is_present} -- Indicate whether host variable / array is present on device.
3496 @item @emph{Description}
3497 This function indicates whether the host memory specified by the host
3498 address @var{a} and a length of @var{len} bytes is present on the device.
3499 In C/C++, a non-zero value is returned to indicate the presence of the
3500 mapped memory on the device; zero is returned to indicate the memory is
not mapped on the device.
3503 In Fortran, two forms are supported. In the first form, @var{a} specifies
3504 a contiguous array section. In the second form, @var{a} specifies a variable
3505 or array element and @var{len} specifies the length in bytes. If the host
3506 memory is mapped to device memory, @code{true} is returned. Otherwise,
3507 @code{false} is returned to indicate the mapped memory is not present.
3510 @multitable @columnfractions .20 .80
3511 @item @emph{Prototype}: @tab @code{int acc_is_present(h_void *a, size_t len);}
3514 @item @emph{Fortran}:
3515 @multitable @columnfractions .20 .80
3516 @item @emph{Interface}: @tab @code{function acc_is_present(a)}
3517 @item @tab @code{type, dimension(:[,:]...) :: a}
3518 @item @tab @code{logical acc_is_present}
3519 @item @emph{Interface}: @tab @code{function acc_is_present(a, len)}
3520 @item @tab @code{type, dimension(:[,:]...) :: a}
3521 @item @tab @code{integer len}
3522 @item @tab @code{logical acc_is_present}
3525 @item @emph{Reference}:
3526 @uref{https://www.openacc.org, OpenACC specification v2.6}, section
3532 @node acc_memcpy_to_device
3533 @section @code{acc_memcpy_to_device} -- Copy host memory to device memory.
3535 @item @emph{Description}
3536 This function copies host memory specified by the host address @var{src} to
3537 device memory specified by the device address @var{dest} for a length of
@var{bytes} bytes.
3541 @multitable @columnfractions .20 .80
3542 @item @emph{Prototype}: @tab @code{acc_memcpy_to_device(d_void *dest, h_void *src, size_t bytes);}
3545 @item @emph{Reference}:
3546 @uref{https://www.openacc.org, OpenACC specification v2.6}, section
3552 @node acc_memcpy_from_device
3553 @section @code{acc_memcpy_from_device} -- Copy device memory to host memory.
3555 @item @emph{Description}
3556 This function copies device memory specified by the device address @var{src}
3557 to host memory specified by the host address @var{dest} for a length of
@var{bytes} bytes.
3561 @multitable @columnfractions .20 .80
3562 @item @emph{Prototype}: @tab @code{acc_memcpy_from_device(h_void *dest, d_void *src, size_t bytes);}
3565 @item @emph{Reference}:
3566 @uref{https://www.openacc.org, OpenACC specification v2.6}, section
3573 @section @code{acc_attach} -- Let device pointer point to device-pointer target.
3575 @item @emph{Description}
3576 This function updates a pointer on the device from pointing to a host-pointer
3577 address to pointing to the corresponding device data.
3580 @multitable @columnfractions .20 .80
3581 @item @emph{Prototype}: @tab @code{acc_attach(h_void **ptr);}
3582 @item @emph{Prototype}: @tab @code{acc_attach_async(h_void **ptr, int async);}
3585 @item @emph{Reference}:
3586 @uref{https://www.openacc.org, OpenACC specification v2.6}, section
3593 @section @code{acc_detach} -- Let device pointer point to host-pointer target.
3595 @item @emph{Description}
3596 This function updates a pointer on the device from pointing to a device-pointer
3597 address to pointing to the corresponding host data.
3600 @multitable @columnfractions .20 .80
3601 @item @emph{Prototype}: @tab @code{acc_detach(h_void **ptr);}
3602 @item @emph{Prototype}: @tab @code{acc_detach_async(h_void **ptr, int async);}
3603 @item @emph{Prototype}: @tab @code{acc_detach_finalize(h_void **ptr);}
3604 @item @emph{Prototype}: @tab @code{acc_detach_finalize_async(h_void **ptr, int async);}
3607 @item @emph{Reference}:
3608 @uref{https://www.openacc.org, OpenACC specification v2.6}, section
3614 @node acc_get_current_cuda_device
3615 @section @code{acc_get_current_cuda_device} -- Get CUDA device handle.
3617 @item @emph{Description}
3618 This function returns the CUDA device handle. This handle is the same
as used by the CUDA Runtime or Driver APIs.
3622 @multitable @columnfractions .20 .80
3623 @item @emph{Prototype}: @tab @code{void *acc_get_current_cuda_device(void);}
3626 @item @emph{Reference}:
3627 @uref{https://www.openacc.org, OpenACC specification v2.6}, section
3633 @node acc_get_current_cuda_context
3634 @section @code{acc_get_current_cuda_context} -- Get CUDA context handle.
3636 @item @emph{Description}
3637 This function returns the CUDA context handle. This handle is the same
as used by the CUDA Runtime or Driver APIs.
3641 @multitable @columnfractions .20 .80
3642 @item @emph{Prototype}: @tab @code{void *acc_get_current_cuda_context(void);}
3645 @item @emph{Reference}:
3646 @uref{https://www.openacc.org, OpenACC specification v2.6}, section
3652 @node acc_get_cuda_stream
3653 @section @code{acc_get_cuda_stream} -- Get CUDA stream handle.
3655 @item @emph{Description}
3656 This function returns the CUDA stream handle for the queue @var{async}.
This handle is the same as used by the CUDA Runtime or Driver APIs.
3660 @multitable @columnfractions .20 .80
3661 @item @emph{Prototype}: @tab @code{void *acc_get_cuda_stream(int async);}
3664 @item @emph{Reference}:
3665 @uref{https://www.openacc.org, OpenACC specification v2.6}, section
3671 @node acc_set_cuda_stream
3672 @section @code{acc_set_cuda_stream} -- Set CUDA stream handle.
3674 @item @emph{Description}
3675 This function associates the stream handle specified by @var{stream} with
3676 the queue @var{async}.
3678 This cannot be used to change the stream handle associated with
3679 @code{acc_async_sync}.
3681 The return value is not specified.
3684 @multitable @columnfractions .20 .80
3685 @item @emph{Prototype}: @tab @code{int acc_set_cuda_stream(int async, void *stream);}
3688 @item @emph{Reference}:
3689 @uref{https://www.openacc.org, OpenACC specification v2.6}, section
3695 @node acc_prof_register
3696 @section @code{acc_prof_register} -- Register callbacks.
3698 @item @emph{Description}:
3699 This function registers callbacks.
3702 @multitable @columnfractions .20 .80
3703 @item @emph{Prototype}: @tab @code{void acc_prof_register (acc_event_t, acc_prof_callback, acc_register_t);}
3706 @item @emph{See also}:
3707 @ref{OpenACC Profiling Interface}
3709 @item @emph{Reference}:
3710 @uref{https://www.openacc.org, OpenACC specification v2.6}, section
3716 @node acc_prof_unregister
3717 @section @code{acc_prof_unregister} -- Unregister callbacks.
3719 @item @emph{Description}:
3720 This function unregisters callbacks.
3723 @multitable @columnfractions .20 .80
3724 @item @emph{Prototype}: @tab @code{void acc_prof_unregister (acc_event_t, acc_prof_callback, acc_register_t);}
3727 @item @emph{See also}:
3728 @ref{OpenACC Profiling Interface}
3730 @item @emph{Reference}:
3731 @uref{https://www.openacc.org, OpenACC specification v2.6}, section
3737 @node acc_prof_lookup
3738 @section @code{acc_prof_lookup} -- Obtain inquiry functions.
3740 @item @emph{Description}:
3741 Function to obtain inquiry functions.
3744 @multitable @columnfractions .20 .80
3745 @item @emph{Prototype}: @tab @code{acc_query_fn acc_prof_lookup (const char *);}
3748 @item @emph{See also}:
3749 @ref{OpenACC Profiling Interface}
3751 @item @emph{Reference}:
3752 @uref{https://www.openacc.org, OpenACC specification v2.6}, section
3758 @node acc_register_library
3759 @section @code{acc_register_library} -- Library registration.
3761 @item @emph{Description}:
3762 Function for library registration.
3765 @multitable @columnfractions .20 .80
3766 @item @emph{Prototype}: @tab @code{void acc_register_library (acc_prof_reg, acc_prof_reg, acc_prof_lookup_func);}
3769 @item @emph{See also}:
3770 @ref{OpenACC Profiling Interface}, @ref{ACC_PROFLIB}
3772 @item @emph{Reference}:
3773 @uref{https://www.openacc.org, OpenACC specification v2.6}, section
3779 @c ---------------------------------------------------------------------
3780 @c OpenACC Environment Variables
3781 @c ---------------------------------------------------------------------
3783 @node OpenACC Environment Variables
3784 @chapter OpenACC Environment Variables
3786 The variables @env{ACC_DEVICE_TYPE} and @env{ACC_DEVICE_NUM}
3787 are defined by section 4 of the OpenACC specification in version 2.0.
3788 The variable @env{ACC_PROFLIB}
3789 is defined by section 4 of the OpenACC specification in version 2.6.
3790 The variable @env{GCC_ACC_NOTIFY} is used for diagnostic purposes.
3801 @node ACC_DEVICE_TYPE
3802 @section @code{ACC_DEVICE_TYPE}
3804 @item @emph{Reference}:
3805 @uref{https://www.openacc.org, OpenACC specification v2.6}, section
3811 @node ACC_DEVICE_NUM
3812 @section @code{ACC_DEVICE_NUM}
3814 @item @emph{Reference}:
3815 @uref{https://www.openacc.org, OpenACC specification v2.6}, section
3822 @section @code{ACC_PROFLIB}
3824 @item @emph{See also}:
3825 @ref{acc_register_library}, @ref{OpenACC Profiling Interface}
3827 @item @emph{Reference}:
3828 @uref{https://www.openacc.org, OpenACC specification v2.6}, section
3834 @node GCC_ACC_NOTIFY
3835 @section @code{GCC_ACC_NOTIFY}
3837 @item @emph{Description}:
3838 Print debug information pertaining to the accelerator.
3843 @c ---------------------------------------------------------------------
3844 @c CUDA Streams Usage
3845 @c ---------------------------------------------------------------------
3847 @node CUDA Streams Usage
3848 @chapter CUDA Streams Usage
3850 This applies to the @code{nvptx} plugin only.
3852 The library provides elements that perform asynchronous movement of
3853 data and asynchronous operation of computing constructs. This
3854 asynchronous functionality is implemented by making use of CUDA
3855 streams@footnote{See "Stream Management" in "CUDA Driver API",
3856 TRM-06703-001, Version 5.5, for additional information}.
The primary means by which the asynchronous functionality is accessed
3859 is through the use of those OpenACC directives which make use of the
3860 @code{async} and @code{wait} clauses. When the @code{async} clause is
3861 first used with a directive, it creates a CUDA stream. If an
3862 @code{async-argument} is used with the @code{async} clause, then the
3863 stream is associated with the specified @code{async-argument}.
3865 Following the creation of an association between a CUDA stream and the
3866 @code{async-argument} of an @code{async} clause, both the @code{wait}
3867 clause and the @code{wait} directive can be used. When either the
3868 clause or directive is used after stream creation, it creates a
3869 rendezvous point whereby execution waits until all operations
associated with the @code{async-argument}, that is, the stream, have
been completed.
Normally, the management of the streams that are created as a result of
using the @code{async} clause is done without any intervention by the
caller. This implies that the association between the @code{async-argument}
and the CUDA stream is maintained for the lifetime of the program.
3877 However, this association can be changed through the use of the library
3878 function @code{acc_set_cuda_stream}. When the function
3879 @code{acc_set_cuda_stream} is called, the CUDA stream that was
3880 originally associated with the @code{async} clause will be destroyed.
Caution should be taken when changing the association, as subsequent
references to the @code{async-argument} refer to a different CUDA stream.
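The stream handling described above can be driven from directives alone. A
hedged sketch (function and array names are ours): each loop runs on its own
queue, i.e., on nvptx on its own CUDA stream, and the @code{wait} directive
forms the rendezvous point:

```c
/* Hedged sketch: two independent loops on async queues 1 and 2 (hence,
   on nvptx, two CUDA streams), joined by a wait directive.  Without
   -fopenacc the pragmas are ignored and the loops run serially.  */
static void
scale_two_arrays (int n, float a, float *restrict y, float *restrict z)
{
#pragma acc parallel loop copy(y[0:n]) async(1)
  for (int i = 0; i < n; i++)
    y[i] *= a;

#pragma acc parallel loop copy(z[0:n]) async(2)
  for (int i = 0; i < n; i++)
    z[i] *= a;

#pragma acc wait   /* rendezvous: both queues must drain */
}
```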
3887 @c ---------------------------------------------------------------------
3888 @c OpenACC Library Interoperability
3889 @c ---------------------------------------------------------------------
3891 @node OpenACC Library Interoperability
3892 @chapter OpenACC Library Interoperability
3894 @section Introduction
3896 The OpenACC library uses the CUDA Driver API, and may interact with
3897 programs that use the Runtime library directly, or another library
3898 based on the Runtime library, e.g., CUBLAS@footnote{See section 2.26,
3899 "Interactions with the CUDA Driver API" in
3900 "CUDA Runtime API", Version 5.5, and section 2.27, "VDPAU
3901 Interoperability", in "CUDA Driver API", TRM-06703-001, Version 5.5,
3902 for additional information on library interoperability.}.
3903 This chapter describes the use cases and what changes are
3904 required in order to use both the OpenACC library and the CUBLAS and Runtime
3905 libraries within a program.
3907 @section First invocation: NVIDIA CUBLAS library API
3909 In this first use case (see below), a function in the CUBLAS library is called
3910 prior to any of the functions in the OpenACC library. More specifically, the
3911 function @code{cublasCreate()}.
3913 When invoked, the function initializes the library and allocates the
3914 hardware resources on the host and the device on behalf of the caller. Once
3915 the initialization and allocation has completed, a handle is returned to the
3916 caller. The OpenACC library also requires initialization and allocation of
3917 hardware resources. Since the CUBLAS library has already allocated the
3918 hardware resources for the device, all that is left to do is to initialize
3919 the OpenACC library and acquire the hardware resources on the host.
3921 Prior to calling the OpenACC function that initializes the library and
3922 allocate the host hardware resources, you need to acquire the device number
3923 that was allocated during the call to @code{cublasCreate()}. The invoking of the
3924 runtime library function @code{cudaGetDevice()} accomplishes this. Once
3925 acquired, the device number is passed along with the device type as
3926 parameters to the OpenACC library function @code{acc_set_device_num()}.
3928 Once the call to @code{acc_set_device_num()} has completed, the OpenACC
3929 library uses the context that was created during the call to
3930 @code{cublasCreate()}. In other words, both libraries will be sharing the
3934 /* Create the handle */
3935 s = cublasCreate(&h);
3936 if (s != CUBLAS_STATUS_SUCCESS)
3938 fprintf(stderr, "cublasCreate failed %d\n", s);
3942 /* Get the device number */
3943 e = cudaGetDevice(&dev);
3944 if (e != cudaSuccess)
3946 fprintf(stderr, "cudaGetDevice failed %d\n", e);
3950 /* Initialize OpenACC library and use device 'dev' */
3951 acc_set_device_num(dev, acc_device_nvidia);
3956 @section First invocation: OpenACC library API
3958 In this second use case (see below), a function in the OpenACC library is
called prior to any of the functions in the CUBLAS library. More specifically,
3960 the function @code{acc_set_device_num()}.
3962 In the use case presented here, the function @code{acc_set_device_num()}
3963 is used to both initialize the OpenACC library and allocate the hardware
3964 resources on the host and the device. In the call to the function, the
3965 call parameters specify which device to use and what device
3966 type to use, i.e., @code{acc_device_nvidia}. It should be noted that this
3967 is but one method to initialize the OpenACC library and allocate the
3968 appropriate hardware resources. Other methods are available through the
3969 use of environment variables and these will be discussed in the next section.
3971 Once the call to @code{acc_set_device_num()} has completed, other OpenACC
3972 functions can be called as seen with multiple calls being made to
3973 @code{acc_copyin()}. In addition, calls can be made to functions in the
3974 CUBLAS library. In the use case a call to @code{cublasCreate()} is made
3975 subsequent to the calls to @code{acc_copyin()}.
3976 As seen in the previous use case, a call to @code{cublasCreate()}
3977 initializes the CUBLAS library and allocates the hardware resources on the
3978 host and the device. However, since the device has already been allocated,
3979 @code{cublasCreate()} will only initialize the CUBLAS library and allocate
3980 the appropriate hardware resources on the host. The context that was created
3981 as part of the OpenACC initialization is shared with the CUBLAS library,
3982 similarly to the first use case.
3987 acc_set_device_num(dev, acc_device_nvidia);
3989 /* Copy the first set to the device */
3990 d_X = acc_copyin(&h_X[0], N * sizeof (float));
3993 fprintf(stderr, "copyin error h_X\n");
3997 /* Copy the second set to the device */
3998 d_Y = acc_copyin(&h_Y1[0], N * sizeof (float));
4001 fprintf(stderr, "copyin error h_Y1\n");
4005 /* Create the handle */
4006 s = cublasCreate(&h);
4007 if (s != CUBLAS_STATUS_SUCCESS)
4009 fprintf(stderr, "cublasCreate failed %d\n", s);
4013 /* Perform saxpy using CUBLAS library function */
4014 s = cublasSaxpy(h, N, &alpha, d_X, 1, d_Y, 1);
4015 if (s != CUBLAS_STATUS_SUCCESS)
4017 fprintf(stderr, "cublasSaxpy failed %d\n", s);
4021 /* Copy the results from the device */
4022 acc_memcpy_from_device(&h_Y1[0], d_Y, N * sizeof (float));
4027 @section OpenACC library and environment variables
4029 There are two environment variables associated with the OpenACC library
4030 that may be used to control the device type and device number:
4031 @env{ACC_DEVICE_TYPE} and @env{ACC_DEVICE_NUM}, respectively. These two
4032 environment variables can be used as an alternative to calling
4033 @code{acc_set_device_num()}. As seen in the second use case, the device
4034 type and device number were specified using @code{acc_set_device_num()}.
If, however, the aforementioned environment variables were set, then the
4036 call to @code{acc_set_device_num()} would not be required.
4039 The use of the environment variables is only relevant when an OpenACC function
is called prior to a call to @code{cublasCreate()}. If @code{cublasCreate()}
4041 is called prior to a call to an OpenACC function, then you must call
4042 @code{acc_set_device_num()}@footnote{More complete information
4043 about @env{ACC_DEVICE_TYPE} and @env{ACC_DEVICE_NUM} can be found in
sections 4.1 and 4.2 of the ``@uref{https://www.openacc.org, OpenACC}
Application Programming Interface'', Version 2.6.}
4049 @c ---------------------------------------------------------------------
4050 @c OpenACC Profiling Interface
4051 @c ---------------------------------------------------------------------
4053 @node OpenACC Profiling Interface
4054 @chapter OpenACC Profiling Interface
4056 @section Implementation Status and Implementation-Defined Behavior
4058 We're implementing the OpenACC Profiling Interface as defined by the
4059 OpenACC 2.6 specification. We're clarifying some aspects here as
4060 @emph{implementation-defined behavior}, while they're still under
4061 discussion within the OpenACC Technical Committee.
4063 This implementation is tuned to keep the performance impact as low as
4064 possible for the (very common) case that the Profiling Interface is
4065 not enabled. This is relevant, as the Profiling Interface affects all
4066 the @emph{hot} code paths (in the target code, not in the offloaded
4067 code). Users of the OpenACC Profiling Interface can be expected to
4068 understand that performance will be impacted to some degree once the
Profiling Interface has been enabled: for example, because of the
4070 @emph{runtime} (libgomp) calling into a third-party @emph{library} for
4071 every event that has been registered.
4073 We're not yet accounting for the fact that @cite{OpenACC events may
4074 occur during event processing}.
4075 We just handle one case specially, as required by CUDA 9.0
4076 @command{nvprof}, that @code{acc_get_device_type}
(@ref{acc_get_device_type}) may be called from
@code{acc_ev_device_init_start}, @code{acc_ev_device_init_end}
callbacks.
We're not yet implementing initialization via an
@code{acc_register_library} function that is either statically linked
4083 in, or dynamically via @env{LD_PRELOAD}.
4084 Initialization via @code{acc_register_library} functions dynamically
4085 loaded via the @env{ACC_PROFLIB} environment variable does work, as
4086 does directly calling @code{acc_prof_register},
4087 @code{acc_prof_unregister}, @code{acc_prof_lookup}.
4089 As currently there are no inquiry functions defined, calls to
4090 @code{acc_prof_lookup} will always return @code{NULL}.
4092 There aren't separate @emph{start}, @emph{stop} events defined for the
4093 event types @code{acc_ev_create}, @code{acc_ev_delete},
4094 @code{acc_ev_alloc}, @code{acc_ev_free}. It's not clear if these
4095 should be triggered before or after the actual device-specific call is
4096 made. We trigger them after.
4098 Remarks about data provided to callbacks:
4102 @item @code{acc_prof_info.event_type}
4103 It's not clear if for @emph{nested} event callbacks (for example,
4104 @code{acc_ev_enqueue_launch_start} as part of a parent compute
4105 construct), this should be set for the nested event
4106 (@code{acc_ev_enqueue_launch_start}), or if the value of the parent
4107 construct should remain (@code{acc_ev_compute_construct_start}). In
4108 this implementation, the value will generally correspond to the
4109 innermost nested event type.
4111 @item @code{acc_prof_info.device_type}
For @code{acc_ev_compute_construct_start}, and in the presence of an
@code{if} clause with a @emph{false} argument, this will still refer to
4117 the offloading device type.
4118 It's not clear if that's the expected behavior.
4121 Complementary to the item before, for
4122 @code{acc_ev_compute_construct_end}, this is set to
@code{acc_device_host} in the presence of an @code{if} clause with a
@emph{false} argument.
4125 It's not clear if that's the expected behavior.
4129 @item @code{acc_prof_info.thread_id}
4130 Always @code{-1}; not yet implemented.
4132 @item @code{acc_prof_info.async}
4136 Not yet implemented correctly for
4137 @code{acc_ev_compute_construct_start}.
4140 In a compute construct, for host-fallback
4141 execution/@code{acc_device_host} it will always be
4142 @code{acc_async_sync}.
4143 It's not clear if that's the expected behavior.
4146 For @code{acc_ev_device_init_start} and @code{acc_ev_device_init_end},
4147 it will always be @code{acc_async_sync}.
4148 It's not clear if that's the expected behavior.
4152 @item @code{acc_prof_info.async_queue}
4153 There is no @cite{limited number of asynchronous queues} in libgomp.
4154 This will always have the same value as @code{acc_prof_info.async}.
4156 @item @code{acc_prof_info.src_file}
4157 Always @code{NULL}; not yet implemented.
4159 @item @code{acc_prof_info.func_name}
4160 Always @code{NULL}; not yet implemented.
4162 @item @code{acc_prof_info.line_no}
4163 Always @code{-1}; not yet implemented.
4165 @item @code{acc_prof_info.end_line_no}
4166 Always @code{-1}; not yet implemented.
4168 @item @code{acc_prof_info.func_line_no}
4169 Always @code{-1}; not yet implemented.
4171 @item @code{acc_prof_info.func_end_line_no}
4172 Always @code{-1}; not yet implemented.
4174 @item @code{acc_event_info.event_type}, @code{acc_event_info.*.event_type}
4175 Relating to @code{acc_prof_info.event_type} discussed above, in this
4176 implementation, this will always be the same value as
4177 @code{acc_prof_info.event_type}.
4179 @item @code{acc_event_info.*.parent_construct}
4183 Will be @code{acc_construct_parallel} for all OpenACC compute
4184 constructs as well as many OpenACC Runtime API calls; should be the
4185 one matching the actual construct, or
4186 @code{acc_construct_runtime_api}, respectively.
4189 Will be @code{acc_construct_enter_data} or
4190 @code{acc_construct_exit_data} when processing variable mappings
4191 specified in OpenACC @emph{declare} directives; should be
4192 @code{acc_construct_declare}.
4195 For implicit @code{acc_ev_device_init_start},
4196 @code{acc_ev_device_init_end}, and explicit as well as implicit
4197 @code{acc_ev_alloc}, @code{acc_ev_free},
4198 @code{acc_ev_enqueue_upload_start}, @code{acc_ev_enqueue_upload_end},
4199 @code{acc_ev_enqueue_download_start}, and
4200 @code{acc_ev_enqueue_download_end}, will be
@code{acc_construct_parallel}; should reflect the real parent construct.
4206 @item @code{acc_event_info.*.implicit}
4207 For @code{acc_ev_alloc}, @code{acc_ev_free},
4208 @code{acc_ev_enqueue_upload_start}, @code{acc_ev_enqueue_upload_end},
4209 @code{acc_ev_enqueue_download_start}, and
4210 @code{acc_ev_enqueue_download_end}, this currently will be @code{1}
4211 also for explicit usage.
4213 @item @code{acc_event_info.data_event.var_name}
4214 Always @code{NULL}; not yet implemented.
4216 @item @code{acc_event_info.data_event.host_ptr}
For @code{acc_ev_alloc} and @code{acc_ev_free}, this is always @code{NULL}.
4220 @item @code{typedef union acc_api_info}
4221 @dots{} as printed in @cite{5.2.3. Third Argument: API-Specific
Information}. This should obviously be @code{typedef @emph{struct} acc_api_info}.
4225 @item @code{acc_api_info.device_api}
4226 Possibly not yet implemented correctly for
4227 @code{acc_ev_compute_construct_start},
4228 @code{acc_ev_device_init_start}, @code{acc_ev_device_init_end}:
4229 will always be @code{acc_device_api_none} for these event types.
4230 For @code{acc_ev_enter_data_start}, it will be
4231 @code{acc_device_api_none} in some cases.
4233 @item @code{acc_api_info.device_type}
4234 Always the same as @code{acc_prof_info.device_type}.
4236 @item @code{acc_api_info.vendor}
4237 Always @code{-1}; not yet implemented.
4239 @item @code{acc_api_info.device_handle}
4240 Always @code{NULL}; not yet implemented.
4242 @item @code{acc_api_info.context_handle}
4243 Always @code{NULL}; not yet implemented.
4245 @item @code{acc_api_info.async_handle}
4246 Always @code{NULL}; not yet implemented.
4250 Remarks about certain event types:
4254 @item @code{acc_ev_device_init_start}, @code{acc_ev_device_init_end}
4258 @c See 'DEVICE_INIT_INSIDE_COMPUTE_CONSTRUCT' in
4259 @c 'libgomp.oacc-c-c++-common/acc_prof-kernels-1.c',
4260 @c 'libgomp.oacc-c-c++-common/acc_prof-parallel-1.c'.
4261 When a compute construct triggers implicit
4262 @code{acc_ev_device_init_start} and @code{acc_ev_device_init_end}
4263 events, they currently aren't @emph{nested within} the corresponding
4264 @code{acc_ev_compute_construct_start} and
4265 @code{acc_ev_compute_construct_end}, but they're currently observed
4266 @emph{before} @code{acc_ev_compute_construct_start}.
It's not clear what to do: the standard asks us to provide a lot of
4268 details to the @code{acc_ev_compute_construct_start} callback, without
4269 (implicitly) initializing a device before?
4272 Callbacks for these event types will not be invoked for calls to the
4273 @code{acc_set_device_type} and @code{acc_set_device_num} functions.
4274 It's not clear if they should be.
4278 @item @code{acc_ev_enter_data_start}, @code{acc_ev_enter_data_end}, @code{acc_ev_exit_data_start}, @code{acc_ev_exit_data_end}
4282 Callbacks for these event types will also be invoked for OpenACC
4283 @emph{host_data} constructs.
4284 It's not clear if they should be.
4287 Callbacks for these event types will also be invoked when processing
4288 variable mappings specified in OpenACC @emph{declare} directives.
4289 It's not clear if they should be.
4295 Callbacks for the following event types will be invoked, but dispatch
4296 and information provided therein has not yet been thoroughly reviewed:
4299 @item @code{acc_ev_alloc}
4300 @item @code{acc_ev_free}
4301 @item @code{acc_ev_update_start}, @code{acc_ev_update_end}
4302 @item @code{acc_ev_enqueue_upload_start}, @code{acc_ev_enqueue_upload_end}
4303 @item @code{acc_ev_enqueue_download_start}, @code{acc_ev_enqueue_download_end}
During device initialization and finalization, respectively,
4307 callbacks for the following event types will not yet be invoked:
4310 @item @code{acc_ev_alloc}
4311 @item @code{acc_ev_free}
4314 Callbacks for the following event types have not yet been implemented,
4315 so currently won't be invoked:
4318 @item @code{acc_ev_device_shutdown_start}, @code{acc_ev_device_shutdown_end}
4319 @item @code{acc_ev_runtime_shutdown}
4320 @item @code{acc_ev_create}, @code{acc_ev_delete}
4321 @item @code{acc_ev_wait_start}, @code{acc_ev_wait_end}
4324 For the following runtime library functions, not all expected
4325 callbacks will be invoked (mostly concerning implicit device
4329 @item @code{acc_get_num_devices}
4330 @item @code{acc_set_device_type}
4331 @item @code{acc_get_device_type}
4332 @item @code{acc_set_device_num}
4333 @item @code{acc_get_device_num}
4334 @item @code{acc_init}
4335 @item @code{acc_shutdown}
4338 Aside from implicit device initialization, for the following runtime
4339 library functions, no callbacks will be invoked for shared-memory
4340 offloading devices (it's not clear if they should be):
4343 @item @code{acc_malloc}
4344 @item @code{acc_free}
4345 @item @code{acc_copyin}, @code{acc_present_or_copyin}, @code{acc_copyin_async}
4346 @item @code{acc_create}, @code{acc_present_or_create}, @code{acc_create_async}
4347 @item @code{acc_copyout}, @code{acc_copyout_async}, @code{acc_copyout_finalize}, @code{acc_copyout_finalize_async}
4348 @item @code{acc_delete}, @code{acc_delete_async}, @code{acc_delete_finalize}, @code{acc_delete_finalize_async}
4349 @item @code{acc_update_device}, @code{acc_update_device_async}
4350 @item @code{acc_update_self}, @code{acc_update_self_async}
4351 @item @code{acc_map_data}, @code{acc_unmap_data}
4352 @item @code{acc_memcpy_to_device}, @code{acc_memcpy_to_device_async}
4353 @item @code{acc_memcpy_from_device}, @code{acc_memcpy_from_device_async}
4356 @c ---------------------------------------------------------------------
4357 @c OpenMP-Implementation Specifics
4358 @c ---------------------------------------------------------------------
4360 @node OpenMP-Implementation Specifics
4361 @chapter OpenMP-Implementation Specifics
4364 * OpenMP Context Selectors::
4365 * Memory allocation with libmemkind::
4368 @node OpenMP Context Selectors
4369 @section OpenMP Context Selectors
4371 @code{vendor} is always @code{gnu}. References are to the GCC manual.
4373 @multitable @columnfractions .60 .10 .25
4374 @headitem @code{arch} @tab @code{kind} @tab @code{isa}
4375 @item @code{x86}, @code{x86_64}, @code{i386}, @code{i486},
4376 @code{i586}, @code{i686}, @code{ia32}
4378 @tab See @code{-m...} flags in ``x86 Options'' (without @code{-m})
4379 @item @code{amdgcn}, @code{gcn}
4381 @tab See @code{-march=} in ``AMD GCN Options''@footnote{Additionally,
4382 @code{gfx803} is supported as an alias for @code{fiji}.}
4385 @tab See @code{-march=} in ``Nvidia PTX Options''
4388 @node Memory allocation with libmemkind
4389 @section Memory allocation with libmemkind
4391 On Linux systems, where the @uref{https://github.com/memkind/memkind, memkind
4392 library} (@code{libmemkind.so.0}) is available at runtime, it is used when
4393 creating memory allocators requesting
4396 @item the memory space @code{omp_high_bw_mem_space}
4397 @item the memory space @code{omp_large_cap_mem_space}
4398 @item the partition trait @code{omp_atv_interleaved}
4402 @c ---------------------------------------------------------------------
4403 @c Offload-Target Specifics
4404 @c ---------------------------------------------------------------------
4406 @node Offload-Target Specifics
4407 @chapter Offload-Target Specifics
4409 The following sections present notes on the offload-target specifics
4417 @section AMD Radeon (GCN)
4419 On the hardware side, there is the hierarchy (fine to coarse):
4421 @item work item (thread)
4424 @item compute unit (CU)
4427 All OpenMP and OpenACC levels are used, i.e.
4429 @item OpenMP's simd and OpenACC's vector map to work items (thread)
@item OpenMP's threads (``parallel'') and OpenACC's workers map to
wavefronts
4432 @item OpenMP's teams and OpenACC's gang use a threadpool with the
4433 size of the number of teams or gangs, respectively.
@item The number of teams is the specified @code{num_teams} (OpenMP) or
@code{num_gangs} (OpenACC), or otherwise the number of CUs; it is limited
to two times the number of CUs.
4441 @item Number of wavefronts is 4 for gfx900 and 16 otherwise;
4442 @code{num_threads} (OpenMP) and @code{num_workers} (OpenACC)
override this if smaller.
4444 @item The wavefront has 102 scalars and 64 vectors
4445 @item Number of workitems is always 64
4446 @item The hardware permits maximally 40 workgroups/CU and
4447 16 wavefronts/workgroup up to a limit of 40 wavefronts in total per CU.
@item 80 scalar registers and 24 vector registers in non-kernel functions
4449 (the chosen procedure-calling API).
4450 @item For the kernel itself: as many as register pressure demands (number of
4451 teams and number of threads, scaled down if registers are exhausted)
Implementation remarks:
4456 @item I/O within OpenMP target regions and OpenACC parallel/kernels is supported
4457 using the C library @code{printf} functions and the Fortran
4458 @code{print}/@code{write} statements.
4466 On the hardware side, there is the hierarchy (fine to coarse):
4471 @item streaming multiprocessor
4474 All OpenMP and OpenACC levels are used, i.e.
4476 @item OpenMP's simd and OpenACC's vector map to threads
4477 @item OpenMP's threads (``parallel'') and OpenACC's workers map to warps
4478 @item OpenMP's teams and OpenACC's gang use a threadpool with the
4479 size of the number of teams or gangs, respectively.
4484 @item The @code{warp_size} is always 32
4485 @item CUDA kernel launched: @code{dim=@{#teams,1,1@}, blocks=@{#threads,warp_size,1@}}.
4486 @item The number of teams is limited by the number of blocks the device can
4487 host simultaneously.
Additional information can be obtained by setting the environment variable
@env{GOMP_DEBUG} to @code{1} (very verbose; grep for @code{kernel.*launch}
for launch details).
GCC generates generic PTX ISA code, which is just-in-time compiled by CUDA,
which caches the JIT-compiled code in the user's directory (see the CUDA
documentation; this can be tuned by the environment variables
@code{CUDA_CACHE_@{DISABLE,MAXSIZE,PATH@}}).
Note: While the PTX ISA is generic, the @code{-mptx=} and @code{-march=}
command-line options still affect the generated PTX ISA code and, thus, the
requirements on CUDA version and hardware.
Implementation remarks:
4504 @item I/O within OpenMP target regions and OpenACC parallel/kernels is supported
4505 using the C library @code{printf} functions. Note that the Fortran
@code{print}/@code{write} statements are not yet supported.
@item Compiling OpenMP code that contains @code{requires reverse_offload}
requires at least @code{-march=sm_35}; compiling for @code{-march=sm_30}
is not supported.
4513 @c ---------------------------------------------------------------------
4515 @c ---------------------------------------------------------------------
4517 @node The libgomp ABI
4518 @chapter The libgomp ABI
The following sections present notes on the external ABI as
presented by libgomp. Only maintainers should need them.
* Implementing MASTER construct::
* Implementing CRITICAL construct::
* Implementing ATOMIC construct::
* Implementing FLUSH construct::
* Implementing BARRIER construct::
* Implementing THREADPRIVATE construct::
* Implementing PRIVATE clause::
* Implementing FIRSTPRIVATE LASTPRIVATE COPYIN and COPYPRIVATE clauses::
* Implementing REDUCTION clause::
* Implementing PARALLEL construct::
* Implementing FOR construct::
* Implementing ORDERED construct::
* Implementing SECTIONS construct::
* Implementing SINGLE construct::
* Implementing OpenACC's PARALLEL construct::
@node Implementing MASTER construct
@section Implementing MASTER construct

if (omp_get_thread_num () == 0)

Alternately, we generate two copies of the parallel subfunction
and only include this in the version run by the primary thread.
Surely this is not worthwhile though...
@node Implementing CRITICAL construct
@section Implementing CRITICAL construct

Without a specified name,

void GOMP_critical_start (void);
void GOMP_critical_end (void);

so that we don't get COPY relocations from libgomp to the main

With a specified name, use omp_set_lock and omp_unset_lock with
name being transformed into a variable declared like

omp_lock_t gomp_critical_user_<name> __attribute__((common))

Ideally the ABI would specify that all zero is a valid unlocked
state, and so we wouldn't need to initialize this at
@node Implementing ATOMIC construct
@section Implementing ATOMIC construct

The target should implement the @code{__sync} builtins.

Failing that we could add

void GOMP_atomic_enter (void)
void GOMP_atomic_exit (void)

which reuses the regular lock code, but with yet another lock
object private to the library.
@node Implementing FLUSH construct
@section Implementing FLUSH construct

Expands to the @code{__sync_synchronize} builtin.

@node Implementing BARRIER construct
@section Implementing BARRIER construct

void GOMP_barrier (void)
@node Implementing THREADPRIVATE construct
@section Implementing THREADPRIVATE construct

In _most_ cases we can map this directly to @code{__thread}. Except
that OMP allows constructors for C++ objects. We can either
refuse to support this (how often is it used?) or we can
implement something akin to .ctors.

Even more ideally, this ctor feature is handled by extensions
to the main pthreads library. Failing that, we can have a set
of entry points to register ctor functions to be called.
@node Implementing PRIVATE clause
@section Implementing PRIVATE clause

In association with a PARALLEL, or within the lexical extent
of a PARALLEL block, the variable becomes a local variable in
the parallel subfunction.

In association with FOR or SECTIONS blocks, create a new
automatic variable within the current function. This preserves
the semantics of new variable creation.
@node Implementing FIRSTPRIVATE LASTPRIVATE COPYIN and COPYPRIVATE clauses
@section Implementing FIRSTPRIVATE LASTPRIVATE COPYIN and COPYPRIVATE clauses

This seems simple enough for PARALLEL blocks. Create a private
struct for communicating between the parent and subfunction.
In the parent, copy in values for scalars and ``small'' structs;
copy in addresses for other TREE_ADDRESSABLE types. In the
subfunction, copy the value into the local variable.
It is not clear what to do with bare FOR or SECTION blocks.
The only thing I can figure is that we do something like:

#pragma omp for firstprivate(x) lastprivate(y)
for (int i = 0; i < n; ++i)
where the ``x=x'' and ``y=y'' assignments actually have different
uids for the two variables, i.e. not something you could write
directly in C. Presumably this only makes sense if the ``outer''
x and y are global variables.

COPYPRIVATE would work the same way, except the structure
broadcast would have to happen via SINGLE machinery instead.
@node Implementing REDUCTION clause
@section Implementing REDUCTION clause

The private struct mentioned in the previous section should have
a pointer to an array of the type of the variable, indexed by the
thread's @var{team_id}. The thread stores its final value into the
array, and after the barrier, the primary thread iterates over the
array to collect the values.
@node Implementing PARALLEL construct
@section Implementing PARALLEL construct

#pragma omp parallel

void subfunction (void *data)

GOMP_parallel_start (subfunction, &data, num_threads);
subfunction (&data);
GOMP_parallel_end ();
void GOMP_parallel_start (void (*fn)(void *), void *data, unsigned num_threads)

The @var{FN} argument is the subfunction to be run in parallel.

The @var{DATA} argument is a pointer to a structure used to
communicate data in and out of the subfunction, as discussed
above with respect to FIRSTPRIVATE et al.
The @var{NUM_THREADS} argument is 1 if an IF clause is present
and false, or the value of the NUM_THREADS clause, if
present, or 0.
The function needs to create the appropriate number of
threads and/or launch them from the dock. It needs to
create the team structure and assign team ids.

void GOMP_parallel_end (void)

Tears down the team and returns us to the previous @code{omp_in_parallel()} state.
@node Implementing FOR construct
@section Implementing FOR construct

#pragma omp parallel for
for (i = lb; i <= ub; i++)

void subfunction (void *data)

while (GOMP_loop_static_next (&_s0, &_e0))

for (i = _s0; i < _e1; i++)

GOMP_loop_end_nowait ();

GOMP_parallel_loop_static (subfunction, NULL, 0, lb, ub+1, 1, 0);

GOMP_parallel_end ();
#pragma omp for schedule(runtime)
for (i = 0; i < n; i++)

if (GOMP_loop_runtime_start (0, n, 1, &_s0, &_e0))

for (i = _s0; i < _e0; i++)

@} while (GOMP_loop_runtime_next (&_s0, &_e0));
Note that while it looks like there is trickiness to propagating
a non-constant STEP, there isn't really. We're explicitly allowed
to evaluate it as many times as we want, and any variables involved
should automatically be handled as PRIVATE or SHARED like any other
variables. So the expression should remain evaluable in the
subfunction. We can also pull it into a local variable if we like,
but since it's supposed to remain unchanged, we need not.
If we have SCHEDULE(STATIC), and no ORDERED, then we ought to be
able to get away with no work-sharing context at all, since we can
simply perform the arithmetic directly in each thread to divide up
the iterations. That would mean we wouldn't need to call any
of these routines.
There are separate routines for handling loops with an ORDERED
clause. Bookkeeping for that is non-trivial...

@node Implementing ORDERED construct
@section Implementing ORDERED construct

void GOMP_ordered_start (void)
void GOMP_ordered_end (void)

@node Implementing SECTIONS construct
@section Implementing SECTIONS construct

#pragma omp sections

for (i = GOMP_sections_start (3); i != 0; i = GOMP_sections_next ())
@node Implementing SINGLE construct
@section Implementing SINGLE construct

if (GOMP_single_start ())

#pragma omp single copyprivate(x)

datap = GOMP_single_copy_start ();

GOMP_single_copy_end (&data);
@node Implementing OpenACC's PARALLEL construct
@section Implementing OpenACC's PARALLEL construct

void GOACC_parallel ()
@c ---------------------------------------------------------------------
@c ---------------------------------------------------------------------

@node Reporting Bugs
@chapter Reporting Bugs

Bugs in the GNU Offloading and Multi Processing Runtime Library should
be reported via @uref{https://gcc.gnu.org/bugzilla/, Bugzilla}. Please add
``openacc'', or ``openmp'', or both to the keywords field in the bug
report, as appropriate.
@c ---------------------------------------------------------------------
@c GNU General Public License
@c ---------------------------------------------------------------------

@include gpl_v3.texi

@c ---------------------------------------------------------------------
@c GNU Free Documentation License
@c ---------------------------------------------------------------------

@c ---------------------------------------------------------------------
@c Funding Free Software
@c ---------------------------------------------------------------------

@include funding.texi

@c ---------------------------------------------------------------------
@c ---------------------------------------------------------------------

@unnumbered Library Index