\input texinfo @c -*-texinfo-*-

@setfilename libgomp.info

Copyright @copyright{} 2006, 2007, 2008, 2010, 2011, 2012 Free Software Foundation, Inc.

Permission is granted to copy, distribute and/or modify this document
under the terms of the GNU Free Documentation License, Version 1.3 or
any later version published by the Free Software Foundation; with the
Invariant Sections being ``Funding Free Software'', the Front-Cover
texts being (a) (see below), and with the Back-Cover Texts being (b)
(see below).  A copy of the license is included in the section entitled
``GNU Free Documentation License''.

(a) The FSF's Front-Cover Text is:

     A GNU Manual

(b) The FSF's Back-Cover Text is:

     You have freedom to copy and modify this GNU Manual, like GNU
     software.  Copies published by the Free Software Foundation raise
     funds for GNU development.

@dircategory GNU Libraries
@direntry
* libgomp: (libgomp).          GNU OpenMP runtime library
@end direntry

This manual documents the GNU implementation of the OpenMP API for
multi-platform shared-memory parallel programming in C/C++ and Fortran.

Published by the Free Software Foundation
51 Franklin Street, Fifth Floor
Boston, MA 02110-1301 USA
@setchapternewpage odd

@title The GNU OpenMP Implementation

@vskip 0pt plus 1filll
@comment For the @value{version-GCC} Version*

Published by the Free Software Foundation @*
51 Franklin Street, Fifth Floor@*
Boston, MA 02110-1301, USA@*
This manual documents the usage of libgomp, the GNU implementation of the
@uref{http://www.openmp.org, OpenMP} Application Programming Interface (API)
for multi-platform shared-memory parallel programming in C/C++ and Fortran.

@comment  When you add a new menu item, please keep the right hand
@comment  aligned to the same column.  Do not use tabs.  This provides
@comment  better formatting.

@menu
* Enabling OpenMP::            How to enable OpenMP for your applications.
* Runtime Library Routines::   The OpenMP runtime application programming
                               interface.
* Environment Variables::      Influencing runtime behavior with environment
                               variables.
* The libgomp ABI::            Notes on the external ABI presented by libgomp.
* Reporting Bugs::             How to report bugs in GNU OpenMP.
* Copying::                    GNU general public license says
                               how you can copy and share libgomp.
* GNU Free Documentation License::
                               How you can copy and share this manual.
* Funding::                    How to help assure continued work for free
                               software.
* Library Index::              Index of this documentation.
@end menu
@c ---------------------------------------------------------------------
@c Enabling OpenMP
@c ---------------------------------------------------------------------

@node Enabling OpenMP
@chapter Enabling OpenMP

To activate the OpenMP extensions for C/C++ and Fortran, the compile-time
flag @command{-fopenmp} must be specified.  This enables the OpenMP directive
@code{#pragma omp} in C/C++ and, for Fortran, the @code{!$omp} directive in
free source form, the @code{c$omp}, @code{*$omp} and @code{!$omp} directives
in fixed source form, the @code{!$} conditional compilation sentinel in free
source form, and the @code{c$}, @code{*$} and @code{!$} sentinels in fixed
source form.  The flag also arranges for automatic linking of the OpenMP
runtime library (@ref{Runtime Library Routines}).

A complete description of all OpenMP directives accepted may be found in
the @uref{http://www.openmp.org, OpenMP Application Program Interface} manual,
version 3.1.
@c ---------------------------------------------------------------------
@c Runtime Library Routines
@c ---------------------------------------------------------------------

@node Runtime Library Routines
@chapter Runtime Library Routines

The runtime routines described here are defined by section 3 of the OpenMP
specifications in version 3.1.  The routines are structured in the following
three parts:

@menu
Control threads, processors and the parallel environment.

* omp_get_active_level::        Number of active parallel regions
* omp_get_ancestor_thread_num:: Ancestor thread ID
* omp_get_dynamic::             Dynamic teams setting
* omp_get_level::               Number of parallel regions
* omp_get_max_active_levels::   Maximum number of active regions
* omp_get_max_threads::         Maximum number of threads of parallel region
* omp_get_nested::              Nested parallel regions
* omp_get_num_procs::           Number of processors online
* omp_get_num_threads::         Size of the active team
* omp_get_schedule::            Obtain the runtime scheduling method
* omp_get_team_size::           Number of threads in a team
* omp_get_thread_limit::        Maximum number of threads
* omp_get_thread_num::          Current thread ID
* omp_in_parallel::             Whether a parallel region is active
* omp_in_final::                Whether in final or included task region
* omp_set_dynamic::             Enable/disable dynamic teams
* omp_set_max_active_levels::   Limits the number of active parallel regions
* omp_set_nested::              Enable/disable nested parallel regions
* omp_set_num_threads::         Set upper team size limit
* omp_set_schedule::            Set the runtime scheduling method

Initialize, set, test, unset and destroy simple and nested locks.

* omp_init_lock::               Initialize simple lock
* omp_set_lock::                Wait for and set simple lock
* omp_test_lock::               Test and set simple lock if available
* omp_unset_lock::              Unset simple lock
* omp_destroy_lock::            Destroy simple lock
* omp_init_nest_lock::          Initialize nested lock
* omp_set_nest_lock::           Wait for and set nested lock
* omp_test_nest_lock::          Test and set nested lock if available
* omp_unset_nest_lock::         Unset nested lock
* omp_destroy_nest_lock::       Destroy nested lock

Portable, thread-based, wall clock timer.

* omp_get_wtick::               Get timer precision.
* omp_get_wtime::               Elapsed wall clock time.
@end menu
@node omp_get_active_level
@section @code{omp_get_active_level} -- Number of active parallel regions
@table @asis
@item @emph{Description}:
This function returns the nesting level of the active parallel blocks
that enclose the call to this routine.

@item @emph{C/C++}:
@multitable @columnfractions .20 .80
@item @emph{Prototype}: @tab @code{int omp_get_active_level(void);}
@end multitable

@item @emph{Fortran}:
@multitable @columnfractions .20 .80
@item @emph{Interface}: @tab @code{integer function omp_get_active_level()}
@end multitable

@item @emph{See also}:
@ref{omp_get_level}, @ref{omp_get_max_active_levels}, @ref{omp_set_max_active_levels}

@item @emph{Reference}:
@uref{http://www.openmp.org/, OpenMP specifications v3.1}, section 3.2.19.
@end table
@node omp_get_ancestor_thread_num
@section @code{omp_get_ancestor_thread_num} -- Ancestor thread ID
@table @asis
@item @emph{Description}:
This function returns the thread identification number for the given
nesting level of the current thread.  For values of @var{level} outside
the range 0 to @code{omp_get_level}, -1 is returned; if @var{level} is
@code{omp_get_level}, the result is identical to @code{omp_get_thread_num}.

@item @emph{C/C++}:
@multitable @columnfractions .20 .80
@item @emph{Prototype}: @tab @code{int omp_get_ancestor_thread_num(int level);}
@end multitable

@item @emph{Fortran}:
@multitable @columnfractions .20 .80
@item @emph{Interface}: @tab @code{integer function omp_get_ancestor_thread_num(level)}
@item                   @tab @code{integer level}
@end multitable

@item @emph{See also}:
@ref{omp_get_level}, @ref{omp_get_thread_num}, @ref{omp_get_team_size}

@item @emph{Reference}:
@uref{http://www.openmp.org/, OpenMP specifications v3.1}, section 3.2.17.
@end table
@node omp_get_dynamic
@section @code{omp_get_dynamic} -- Dynamic teams setting
@table @asis
@item @emph{Description}:
This function returns @code{true} if enabled, @code{false} otherwise.
Here, @code{true} and @code{false} represent their language-specific
counterparts.

The dynamic team setting may be initialized at startup by the
@env{OMP_DYNAMIC} environment variable or at runtime using
@code{omp_set_dynamic}.  If undefined, dynamic adjustment is
disabled by default.

@item @emph{C/C++}:
@multitable @columnfractions .20 .80
@item @emph{Prototype}: @tab @code{int omp_get_dynamic(void);}
@end multitable

@item @emph{Fortran}:
@multitable @columnfractions .20 .80
@item @emph{Interface}: @tab @code{logical function omp_get_dynamic()}
@end multitable

@item @emph{See also}:
@ref{omp_set_dynamic}, @ref{OMP_DYNAMIC}

@item @emph{Reference}:
@uref{http://www.openmp.org/, OpenMP specifications v3.1}, section 3.2.8.
@end table
@node omp_get_level
@section @code{omp_get_level} -- Obtain the current nesting level
@table @asis
@item @emph{Description}:
This function returns the nesting level of the parallel blocks
that enclose the call to this routine.

@item @emph{C/C++}:
@multitable @columnfractions .20 .80
@item @emph{Prototype}: @tab @code{int omp_get_level(void);}
@end multitable

@item @emph{Fortran}:
@multitable @columnfractions .20 .80
@item @emph{Interface}: @tab @code{integer function omp_get_level()}
@end multitable

@item @emph{See also}:
@ref{omp_get_active_level}

@item @emph{Reference}:
@uref{http://www.openmp.org/, OpenMP specifications v3.1}, section 3.2.16.
@end table
@node omp_get_max_active_levels
@section @code{omp_get_max_active_levels} -- Maximum number of active regions
@table @asis
@item @emph{Description}:
This function obtains the maximum allowed number of nested, active parallel regions.

@item @emph{C/C++}:
@multitable @columnfractions .20 .80
@item @emph{Prototype}: @tab @code{int omp_get_max_active_levels(void);}
@end multitable

@item @emph{Fortran}:
@multitable @columnfractions .20 .80
@item @emph{Interface}: @tab @code{integer function omp_get_max_active_levels()}
@end multitable

@item @emph{See also}:
@ref{omp_set_max_active_levels}, @ref{omp_get_active_level}

@item @emph{Reference}:
@uref{http://www.openmp.org/, OpenMP specifications v3.1}, section 3.2.15.
@end table
@node omp_get_max_threads
@section @code{omp_get_max_threads} -- Maximum number of threads of parallel region
@table @asis
@item @emph{Description}:
Return the maximum number of threads that would be used for a subsequent
parallel region that does not use the clause @code{num_threads}.

@item @emph{C/C++}:
@multitable @columnfractions .20 .80
@item @emph{Prototype}: @tab @code{int omp_get_max_threads(void);}
@end multitable

@item @emph{Fortran}:
@multitable @columnfractions .20 .80
@item @emph{Interface}: @tab @code{integer function omp_get_max_threads()}
@end multitable

@item @emph{See also}:
@ref{omp_set_num_threads}, @ref{omp_set_dynamic}, @ref{omp_get_thread_limit}

@item @emph{Reference}:
@uref{http://www.openmp.org/, OpenMP specifications v3.1}, section 3.2.3.
@end table
@node omp_get_nested
@section @code{omp_get_nested} -- Nested parallel regions
@table @asis
@item @emph{Description}:
This function returns @code{true} if nested parallel regions are
enabled, @code{false} otherwise.  Here, @code{true} and @code{false}
represent their language-specific counterparts.

Nested parallel regions may be initialized at startup by the
@env{OMP_NESTED} environment variable or at runtime using
@code{omp_set_nested}.  If undefined, nested parallel regions are
disabled by default.

@item @emph{C/C++}:
@multitable @columnfractions .20 .80
@item @emph{Prototype}: @tab @code{int omp_get_nested(void);}
@end multitable

@item @emph{Fortran}:
@multitable @columnfractions .20 .80
@item @emph{Interface}: @tab @code{logical function omp_get_nested()}
@end multitable

@item @emph{See also}:
@ref{omp_set_nested}, @ref{OMP_NESTED}

@item @emph{Reference}:
@uref{http://www.openmp.org/, OpenMP specifications v3.1}, section 3.2.10.
@end table
@node omp_get_num_procs
@section @code{omp_get_num_procs} -- Number of processors online
@table @asis
@item @emph{Description}:
Returns the number of processors online.

@item @emph{C/C++}:
@multitable @columnfractions .20 .80
@item @emph{Prototype}: @tab @code{int omp_get_num_procs(void);}
@end multitable

@item @emph{Fortran}:
@multitable @columnfractions .20 .80
@item @emph{Interface}: @tab @code{integer function omp_get_num_procs()}
@end multitable

@item @emph{Reference}:
@uref{http://www.openmp.org/, OpenMP specifications v3.1}, section 3.2.5.
@end table
@node omp_get_num_threads
@section @code{omp_get_num_threads} -- Size of the active team
@table @asis
@item @emph{Description}:
Returns the number of threads in the current team.  In a sequential section of
the program @code{omp_get_num_threads} returns 1.

The default team size may be initialized at startup by the
@env{OMP_NUM_THREADS} environment variable.  At runtime, the size
of the current team may be set either by the @code{num_threads}
clause or by @code{omp_set_num_threads}.  If none of the above were
used to define a specific value and @env{OMP_DYNAMIC} is disabled,
one thread per CPU online is used.

@item @emph{C/C++}:
@multitable @columnfractions .20 .80
@item @emph{Prototype}: @tab @code{int omp_get_num_threads(void);}
@end multitable

@item @emph{Fortran}:
@multitable @columnfractions .20 .80
@item @emph{Interface}: @tab @code{integer function omp_get_num_threads()}
@end multitable

@item @emph{See also}:
@ref{omp_get_max_threads}, @ref{omp_set_num_threads}, @ref{OMP_NUM_THREADS}

@item @emph{Reference}:
@uref{http://www.openmp.org/, OpenMP specifications v3.1}, section 3.2.2.
@end table
@node omp_get_schedule
@section @code{omp_get_schedule} -- Obtain the runtime scheduling method
@table @asis
@item @emph{Description}:
Obtain the runtime scheduling method.  The @var{kind} argument will be
set to the value @code{omp_sched_static}, @code{omp_sched_dynamic},
@code{omp_sched_guided} or @code{omp_sched_auto}.  The second argument,
@var{modifier}, is set to the chunk size.

@item @emph{C/C++}:
@multitable @columnfractions .20 .80
@item @emph{Prototype}: @tab @code{void omp_get_schedule(omp_sched_t *kind, int *modifier);}
@end multitable

@item @emph{Fortran}:
@multitable @columnfractions .20 .80
@item @emph{Interface}: @tab @code{subroutine omp_get_schedule(kind, modifier)}
@item                   @tab @code{integer(kind=omp_sched_kind) kind}
@item                   @tab @code{integer modifier}
@end multitable

@item @emph{See also}:
@ref{omp_set_schedule}, @ref{OMP_SCHEDULE}

@item @emph{Reference}:
@uref{http://www.openmp.org/, OpenMP specifications v3.1}, section 3.2.12.
@end table
@node omp_get_team_size
@section @code{omp_get_team_size} -- Number of threads in a team
@table @asis
@item @emph{Description}:
This function returns the number of threads in the thread team to which
either the current thread or its ancestor belongs.  For values of @var{level}
outside the range 0 to @code{omp_get_level}, -1 is returned; if @var{level} is
zero, 1 is returned, and for @code{omp_get_level}, the result is identical
to @code{omp_get_num_threads}.

@item @emph{C/C++}:
@multitable @columnfractions .20 .80
@item @emph{Prototype}: @tab @code{int omp_get_team_size(int level);}
@end multitable

@item @emph{Fortran}:
@multitable @columnfractions .20 .80
@item @emph{Interface}: @tab @code{integer function omp_get_team_size(level)}
@item                   @tab @code{integer level}
@end multitable

@item @emph{See also}:
@ref{omp_get_num_threads}, @ref{omp_get_level}, @ref{omp_get_ancestor_thread_num}

@item @emph{Reference}:
@uref{http://www.openmp.org/, OpenMP specifications v3.1}, section 3.2.18.
@end table
@node omp_get_thread_limit
@section @code{omp_get_thread_limit} -- Maximum number of threads
@table @asis
@item @emph{Description}:
Return the maximum number of threads of the program.

@item @emph{C/C++}:
@multitable @columnfractions .20 .80
@item @emph{Prototype}: @tab @code{int omp_get_thread_limit(void);}
@end multitable

@item @emph{Fortran}:
@multitable @columnfractions .20 .80
@item @emph{Interface}: @tab @code{integer function omp_get_thread_limit()}
@end multitable

@item @emph{See also}:
@ref{omp_get_max_threads}, @ref{OMP_THREAD_LIMIT}

@item @emph{Reference}:
@uref{http://www.openmp.org/, OpenMP specifications v3.1}, section 3.2.13.
@end table
@node omp_get_thread_num
@section @code{omp_get_thread_num} -- Current thread ID
@table @asis
@item @emph{Description}:
Returns a unique thread identification number within the current team.
In sequential parts of the program, @code{omp_get_thread_num}
always returns 0.  In parallel regions the return value varies
from 0 to @code{omp_get_num_threads}-1 inclusive.  The return
value of the master thread of a team is always 0.

@item @emph{C/C++}:
@multitable @columnfractions .20 .80
@item @emph{Prototype}: @tab @code{int omp_get_thread_num(void);}
@end multitable

@item @emph{Fortran}:
@multitable @columnfractions .20 .80
@item @emph{Interface}: @tab @code{integer function omp_get_thread_num()}
@end multitable

@item @emph{See also}:
@ref{omp_get_num_threads}, @ref{omp_get_ancestor_thread_num}

@item @emph{Reference}:
@uref{http://www.openmp.org/, OpenMP specifications v3.1}, section 3.2.4.
@end table
@node omp_in_parallel
@section @code{omp_in_parallel} -- Whether a parallel region is active
@table @asis
@item @emph{Description}:
This function returns @code{true} if currently running in parallel,
@code{false} otherwise.  Here, @code{true} and @code{false} represent
their language-specific counterparts.

@item @emph{C/C++}:
@multitable @columnfractions .20 .80
@item @emph{Prototype}: @tab @code{int omp_in_parallel(void);}
@end multitable

@item @emph{Fortran}:
@multitable @columnfractions .20 .80
@item @emph{Interface}: @tab @code{logical function omp_in_parallel()}
@end multitable

@item @emph{Reference}:
@uref{http://www.openmp.org/, OpenMP specifications v3.1}, section 3.2.6.
@end table
@node omp_in_final
@section @code{omp_in_final} -- Whether in final or included task region
@table @asis
@item @emph{Description}:
This function returns @code{true} if currently running in a final
or included task region, @code{false} otherwise.  Here, @code{true}
and @code{false} represent their language-specific counterparts.

@item @emph{C/C++}:
@multitable @columnfractions .20 .80
@item @emph{Prototype}: @tab @code{int omp_in_final(void);}
@end multitable

@item @emph{Fortran}:
@multitable @columnfractions .20 .80
@item @emph{Interface}: @tab @code{logical function omp_in_final()}
@end multitable

@item @emph{Reference}:
@uref{http://www.openmp.org/, OpenMP specifications v3.1}, section 3.2.20.
@end table
@node omp_set_dynamic
@section @code{omp_set_dynamic} -- Enable/disable dynamic teams
@table @asis
@item @emph{Description}:
Enable or disable the dynamic adjustment of the number of threads
within a team.  The function takes the language-specific equivalent
of @code{true} and @code{false}, where @code{true} enables dynamic
adjustment of team sizes and @code{false} disables it.

@item @emph{C/C++}:
@multitable @columnfractions .20 .80
@item @emph{Prototype}: @tab @code{void omp_set_dynamic(int set);}
@end multitable

@item @emph{Fortran}:
@multitable @columnfractions .20 .80
@item @emph{Interface}: @tab @code{subroutine omp_set_dynamic(set)}
@item                   @tab @code{logical, intent(in) :: set}
@end multitable

@item @emph{See also}:
@ref{OMP_DYNAMIC}, @ref{omp_get_dynamic}

@item @emph{Reference}:
@uref{http://www.openmp.org/, OpenMP specifications v3.1}, section 3.2.7.
@end table
@node omp_set_max_active_levels
@section @code{omp_set_max_active_levels} -- Limits the number of active parallel regions
@table @asis
@item @emph{Description}:
This function limits the maximum allowed number of nested, active
parallel regions.

@item @emph{C/C++}:
@multitable @columnfractions .20 .80
@item @emph{Prototype}: @tab @code{void omp_set_max_active_levels(int max_levels);}
@end multitable

@item @emph{Fortran}:
@multitable @columnfractions .20 .80
@item @emph{Interface}: @tab @code{subroutine omp_set_max_active_levels(max_levels)}
@item                   @tab @code{integer max_levels}
@end multitable

@item @emph{See also}:
@ref{omp_get_max_active_levels}, @ref{omp_get_active_level}

@item @emph{Reference}:
@uref{http://www.openmp.org/, OpenMP specifications v3.1}, section 3.2.14.
@end table
@node omp_set_nested
@section @code{omp_set_nested} -- Enable/disable nested parallel regions
@table @asis
@item @emph{Description}:
Enable or disable nested parallel regions, i.e., whether team members
are allowed to create new teams.  The function takes the language-specific
equivalent of @code{true} and @code{false}, where @code{true} enables
nested parallel regions and @code{false} disables them.

@item @emph{C/C++}:
@multitable @columnfractions .20 .80
@item @emph{Prototype}: @tab @code{void omp_set_nested(int set);}
@end multitable

@item @emph{Fortran}:
@multitable @columnfractions .20 .80
@item @emph{Interface}: @tab @code{subroutine omp_set_nested(set)}
@item                   @tab @code{logical, intent(in) :: set}
@end multitable

@item @emph{See also}:
@ref{OMP_NESTED}, @ref{omp_get_nested}

@item @emph{Reference}:
@uref{http://www.openmp.org/, OpenMP specifications v3.1}, section 3.2.9.
@end table
@node omp_set_num_threads
@section @code{omp_set_num_threads} -- Set upper team size limit
@table @asis
@item @emph{Description}:
Specifies the number of threads used by default in subsequent parallel
sections, if those do not specify a @code{num_threads} clause.  The
argument of @code{omp_set_num_threads} shall be a positive integer.

@item @emph{C/C++}:
@multitable @columnfractions .20 .80
@item @emph{Prototype}: @tab @code{void omp_set_num_threads(int n);}
@end multitable

@item @emph{Fortran}:
@multitable @columnfractions .20 .80
@item @emph{Interface}: @tab @code{subroutine omp_set_num_threads(n)}
@item                   @tab @code{integer, intent(in) :: n}
@end multitable

@item @emph{See also}:
@ref{OMP_NUM_THREADS}, @ref{omp_get_num_threads}, @ref{omp_get_max_threads}

@item @emph{Reference}:
@uref{http://www.openmp.org/, OpenMP specifications v3.1}, section 3.2.1.
@end table
@node omp_set_schedule
@section @code{omp_set_schedule} -- Set the runtime scheduling method
@table @asis
@item @emph{Description}:
Sets the runtime scheduling method.  The @var{kind} argument can have the
value @code{omp_sched_static}, @code{omp_sched_dynamic},
@code{omp_sched_guided} or @code{omp_sched_auto}.  Except for
@code{omp_sched_auto}, the chunk size is set to the value of
@var{modifier} if positive, or to the default value if zero or negative.
For @code{omp_sched_auto} the @var{modifier} argument is ignored.

@item @emph{C/C++}:
@multitable @columnfractions .20 .80
@item @emph{Prototype}: @tab @code{void omp_set_schedule(omp_sched_t kind, int modifier);}
@end multitable

@item @emph{Fortran}:
@multitable @columnfractions .20 .80
@item @emph{Interface}: @tab @code{subroutine omp_set_schedule(kind, modifier)}
@item                   @tab @code{integer(kind=omp_sched_kind) kind}
@item                   @tab @code{integer modifier}
@end multitable

@item @emph{See also}:
@ref{omp_get_schedule}

@item @emph{Reference}:
@uref{http://www.openmp.org/, OpenMP specifications v3.1}, section 3.2.11.
@end table
@node omp_init_lock
@section @code{omp_init_lock} -- Initialize simple lock
@table @asis
@item @emph{Description}:
Initialize a simple lock.  After initialization, the lock is in
an unlocked state.

@item @emph{C/C++}:
@multitable @columnfractions .20 .80
@item @emph{Prototype}: @tab @code{void omp_init_lock(omp_lock_t *lock);}
@end multitable

@item @emph{Fortran}:
@multitable @columnfractions .20 .80
@item @emph{Interface}: @tab @code{subroutine omp_init_lock(lock)}
@item                   @tab @code{integer(omp_lock_kind), intent(out) :: lock}
@end multitable

@item @emph{See also}:
@ref{omp_destroy_lock}

@item @emph{Reference}:
@uref{http://www.openmp.org/, OpenMP specifications v3.1}, section 3.3.1.
@end table
@node omp_set_lock
@section @code{omp_set_lock} -- Wait for and set simple lock
@table @asis
@item @emph{Description}:
Before setting a simple lock, the lock variable must be initialized by
@code{omp_init_lock}.  The calling thread is blocked until the lock
is available.  If the lock is already held by the current thread,
a deadlock occurs.

@item @emph{C/C++}:
@multitable @columnfractions .20 .80
@item @emph{Prototype}: @tab @code{void omp_set_lock(omp_lock_t *lock);}
@end multitable

@item @emph{Fortran}:
@multitable @columnfractions .20 .80
@item @emph{Interface}: @tab @code{subroutine omp_set_lock(lock)}
@item                   @tab @code{integer(omp_lock_kind), intent(inout) :: lock}
@end multitable

@item @emph{See also}:
@ref{omp_init_lock}, @ref{omp_test_lock}, @ref{omp_unset_lock}

@item @emph{Reference}:
@uref{http://www.openmp.org/, OpenMP specifications v3.1}, section 3.3.3.
@end table
@node omp_test_lock
@section @code{omp_test_lock} -- Test and set simple lock if available
@table @asis
@item @emph{Description}:
Before setting a simple lock, the lock variable must be initialized by
@code{omp_init_lock}.  Contrary to @code{omp_set_lock}, @code{omp_test_lock}
does not block if the lock is not available.  This function returns
@code{true} upon success, @code{false} otherwise.  Here, @code{true} and
@code{false} represent their language-specific counterparts.

@item @emph{C/C++}:
@multitable @columnfractions .20 .80
@item @emph{Prototype}: @tab @code{int omp_test_lock(omp_lock_t *lock);}
@end multitable

@item @emph{Fortran}:
@multitable @columnfractions .20 .80
@item @emph{Interface}: @tab @code{logical function omp_test_lock(lock)}
@item                   @tab @code{integer(omp_lock_kind), intent(inout) :: lock}
@end multitable

@item @emph{See also}:
@ref{omp_init_lock}, @ref{omp_set_lock}, @ref{omp_unset_lock}

@item @emph{Reference}:
@uref{http://www.openmp.org/, OpenMP specifications v3.1}, section 3.3.5.
@end table
@node omp_unset_lock
@section @code{omp_unset_lock} -- Unset simple lock
@table @asis
@item @emph{Description}:
A simple lock about to be unset must have been locked by @code{omp_set_lock}
or @code{omp_test_lock} before.  In addition, the lock must be held by the
thread calling @code{omp_unset_lock}.  Then, the lock becomes unlocked.  If
one or more threads attempted to set the lock before, one of them is chosen
to set the lock for itself.

@item @emph{C/C++}:
@multitable @columnfractions .20 .80
@item @emph{Prototype}: @tab @code{void omp_unset_lock(omp_lock_t *lock);}
@end multitable

@item @emph{Fortran}:
@multitable @columnfractions .20 .80
@item @emph{Interface}: @tab @code{subroutine omp_unset_lock(lock)}
@item                   @tab @code{integer(omp_lock_kind), intent(inout) :: lock}
@end multitable

@item @emph{See also}:
@ref{omp_set_lock}, @ref{omp_test_lock}

@item @emph{Reference}:
@uref{http://www.openmp.org/, OpenMP specifications v3.1}, section 3.3.4.
@end table
@node omp_destroy_lock
@section @code{omp_destroy_lock} -- Destroy simple lock
@table @asis
@item @emph{Description}:
Destroy a simple lock.  In order to be destroyed, a simple lock must be
in the unlocked state.

@item @emph{C/C++}:
@multitable @columnfractions .20 .80
@item @emph{Prototype}: @tab @code{void omp_destroy_lock(omp_lock_t *lock);}
@end multitable

@item @emph{Fortran}:
@multitable @columnfractions .20 .80
@item @emph{Interface}: @tab @code{subroutine omp_destroy_lock(lock)}
@item                   @tab @code{integer(omp_lock_kind), intent(inout) :: lock}
@end multitable

@item @emph{See also}:
@ref{omp_init_lock}

@item @emph{Reference}:
@uref{http://www.openmp.org/, OpenMP specifications v3.1}, section 3.3.2.
@end table
@node omp_init_nest_lock
@section @code{omp_init_nest_lock} -- Initialize nested lock
@table @asis
@item @emph{Description}:
Initialize a nested lock.  After initialization, the lock is in
an unlocked state and the nesting count is set to zero.

@item @emph{C/C++}:
@multitable @columnfractions .20 .80
@item @emph{Prototype}: @tab @code{void omp_init_nest_lock(omp_nest_lock_t *lock);}
@end multitable

@item @emph{Fortran}:
@multitable @columnfractions .20 .80
@item @emph{Interface}: @tab @code{subroutine omp_init_nest_lock(lock)}
@item                   @tab @code{integer(omp_nest_lock_kind), intent(out) :: lock}
@end multitable

@item @emph{See also}:
@ref{omp_destroy_nest_lock}

@item @emph{Reference}:
@uref{http://www.openmp.org/, OpenMP specifications v3.1}, section 3.3.1.
@end table
@node omp_set_nest_lock
@section @code{omp_set_nest_lock} -- Wait for and set nested lock
@table @asis
@item @emph{Description}:
Before setting a nested lock, the lock variable must be initialized by
@code{omp_init_nest_lock}.  The calling thread is blocked until the lock
is available.  If the lock is already held by the current thread, the
nesting count for the lock is incremented.

@item @emph{C/C++}:
@multitable @columnfractions .20 .80
@item @emph{Prototype}: @tab @code{void omp_set_nest_lock(omp_nest_lock_t *lock);}
@end multitable

@item @emph{Fortran}:
@multitable @columnfractions .20 .80
@item @emph{Interface}: @tab @code{subroutine omp_set_nest_lock(lock)}
@item                   @tab @code{integer(omp_nest_lock_kind), intent(inout) :: lock}
@end multitable

@item @emph{See also}:
@ref{omp_init_nest_lock}, @ref{omp_unset_nest_lock}

@item @emph{Reference}:
@uref{http://www.openmp.org/, OpenMP specifications v3.1}, section 3.3.3.
@end table
@node omp_test_nest_lock
@section @code{omp_test_nest_lock} -- Test and set nested lock if available
@table @asis
@item @emph{Description}:
Before setting a nested lock, the lock variable must be initialized by
@code{omp_init_nest_lock}.  Contrary to @code{omp_set_nest_lock},
@code{omp_test_nest_lock} does not block if the lock is not available.
If the lock is already held by the current thread, the new nesting count
is returned.  Otherwise, the return value equals zero.

@item @emph{C/C++}:
@multitable @columnfractions .20 .80
@item @emph{Prototype}: @tab @code{int omp_test_nest_lock(omp_nest_lock_t *lock);}
@end multitable

@item @emph{Fortran}:
@multitable @columnfractions .20 .80
@item @emph{Interface}: @tab @code{integer function omp_test_nest_lock(lock)}
@item                   @tab @code{integer(omp_nest_lock_kind), intent(inout) :: lock}
@end multitable

@item @emph{See also}:
@ref{omp_init_nest_lock}, @ref{omp_set_nest_lock}, @ref{omp_unset_nest_lock}

@item @emph{Reference}:
@uref{http://www.openmp.org/, OpenMP specifications v3.1}, section 3.3.5.
@end table
@node omp_unset_nest_lock
@section @code{omp_unset_nest_lock} -- Unset nested lock
@table @asis
@item @emph{Description}:
A nested lock about to be unset must have been locked by @code{omp_set_nest_lock}
or @code{omp_test_nest_lock} before.  In addition, the lock must be held by the
thread calling @code{omp_unset_nest_lock}.  If the nesting count drops to zero,
the lock becomes unlocked.  If one or more threads attempted to set the lock
before, one of them is chosen to set the lock for itself.

@item @emph{C/C++}:
@multitable @columnfractions .20 .80
@item @emph{Prototype}: @tab @code{void omp_unset_nest_lock(omp_nest_lock_t *lock);}
@end multitable

@item @emph{Fortran}:
@multitable @columnfractions .20 .80
@item @emph{Interface}: @tab @code{subroutine omp_unset_nest_lock(lock)}
@item                   @tab @code{integer(omp_nest_lock_kind), intent(inout) :: lock}
@end multitable

@item @emph{See also}:
@ref{omp_set_nest_lock}

@item @emph{Reference}:
@uref{http://www.openmp.org/, OpenMP specifications v3.1}, section 3.3.4.
@end table
@node omp_destroy_nest_lock
@section @code{omp_destroy_nest_lock} -- Destroy nested lock
@table @asis
@item @emph{Description}:
Destroy a nested lock.  In order to be destroyed, a nested lock must be
in the unlocked state and its nesting count must equal zero.

@item @emph{C/C++}:
@multitable @columnfractions .20 .80
@item @emph{Prototype}: @tab @code{void omp_destroy_nest_lock(omp_nest_lock_t *lock);}
@end multitable

@item @emph{Fortran}:
@multitable @columnfractions .20 .80
@item @emph{Interface}: @tab @code{subroutine omp_destroy_nest_lock(lock)}
@item                   @tab @code{integer(omp_nest_lock_kind), intent(inout) :: lock}
@end multitable

@item @emph{See also}:
@ref{omp_init_nest_lock}

@item @emph{Reference}:
@uref{http://www.openmp.org/, OpenMP specifications v3.1}, section 3.3.2.
@end table
@node omp_get_wtick
@section @code{omp_get_wtick} -- Get timer precision
@table @asis
@item @emph{Description}:
Gets the timer precision, i.e., the number of seconds between two
successive clock ticks.

@item @emph{C/C++}:
@multitable @columnfractions .20 .80
@item @emph{Prototype}: @tab @code{double omp_get_wtick(void);}
@end multitable

@item @emph{Fortran}:
@multitable @columnfractions .20 .80
@item @emph{Interface}: @tab @code{double precision function omp_get_wtick()}
@end multitable

@item @emph{See also}:
@ref{omp_get_wtime}

@item @emph{Reference}:
@uref{http://www.openmp.org/, OpenMP specifications v3.1}, section 3.4.2.
@end table
@section @code{omp_get_wtime} -- Elapsed wall clock time
@item @emph{Description}:
Returns the elapsed wall clock time in seconds. The time is measured per
thread; no guarantee can be made that two distinct threads measure the
same time. Time is measured from some ``time in the past'', an arbitrary
reference point guaranteed not to change during the execution of the
program.
@multitable @columnfractions .20 .80
@item @emph{Prototype}: @tab @code{double omp_get_wtime(void);}
@item @emph{Fortran}:
@multitable @columnfractions .20 .80
@item @emph{Interface}: @tab @code{double precision function omp_get_wtime()}
@item @emph{See also}:
@item @emph{Reference}:
@uref{http://www.openmp.org/, OpenMP specifications v3.1}, section 3.4.1.
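As a usage sketch (assuming a compiler with OpenMP support, e.g.
@command{gcc -fopenmp}), elapsed time is obtained by subtracting two
@code{omp_get_wtime} results, and @code{omp_get_wtick} reports the
resolution of that clock:

@smallexample
#include <stdio.h>
#include <omp.h>

int main (void)
@{
  double start = omp_get_wtime ();

  double sum = 0.0;                     /* some work to be timed */
  for (int i = 0; i < 1000000; i++)
    sum += i * 0.5;

  double elapsed = omp_get_wtime () - start;
  printf ("elapsed: %g s (resolution %g s, sum %g)\n",
          elapsed, omp_get_wtick (), sum);
  return 0;
@}
@end smallexample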
@c ---------------------------------------------------------------------
@c Environment Variables
@c ---------------------------------------------------------------------

@node Environment Variables
@chapter Environment Variables

The variables @env{OMP_DYNAMIC}, @env{OMP_MAX_ACTIVE_LEVELS},
@env{OMP_NESTED}, @env{OMP_NUM_THREADS}, @env{OMP_SCHEDULE},
@env{OMP_STACKSIZE}, @env{OMP_THREAD_LIMIT} and @env{OMP_WAIT_POLICY}
are defined by section 4 of the OpenMP specifications in version 3.1,
while @env{GOMP_CPU_AFFINITY} and @env{GOMP_STACKSIZE} are GNU
extensions.
* OMP_DYNAMIC::            Dynamic adjustment of threads
* OMP_MAX_ACTIVE_LEVELS::  Set the maximum number of nested parallel regions
* OMP_NESTED::             Nested parallel regions
* OMP_NUM_THREADS::        Specifies the number of threads to use
* OMP_SCHEDULE::           How threads are scheduled
* OMP_STACKSIZE::          Set default thread stack size
* OMP_THREAD_LIMIT::       Set the maximum number of threads
* OMP_WAIT_POLICY::        How waiting threads are handled
* OMP_PROC_BIND::          Whether threads may be moved between CPUs
* GOMP_CPU_AFFINITY::      Bind threads to specific CPUs
* GOMP_STACKSIZE::         Set default thread stack size
@section @env{OMP_DYNAMIC} -- Dynamic adjustment of threads
@cindex Environment Variable
@item @emph{Description}:
Enable or disable the dynamic adjustment of the number of threads
within a team. The value of this environment variable shall be
@code{TRUE} or @code{FALSE}. If undefined, dynamic adjustment is
disabled by default.
@item @emph{See also}:
@ref{omp_set_dynamic}
@item @emph{Reference}:
@uref{http://www.openmp.org/, OpenMP specifications v3.1}, section 4.3
@node OMP_MAX_ACTIVE_LEVELS
@section @env{OMP_MAX_ACTIVE_LEVELS} -- Set the maximum number of nested parallel regions
@cindex Environment Variable
@item @emph{Description}:
Specifies the initial value for the maximum number of nested parallel
regions. The value of this variable shall be a positive integer.
If undefined, the number of active levels is unlimited.
@item @emph{See also}:
@ref{omp_set_max_active_levels}
@item @emph{Reference}:
@uref{http://www.openmp.org/, OpenMP specifications v3.1}, section 4.8
@section @env{OMP_NESTED} -- Nested parallel regions
@cindex Environment Variable
@cindex Implementation specific setting
@item @emph{Description}:
Enable or disable nested parallel regions, i.e., whether team members
are allowed to create new teams. The value of this environment variable
shall be @code{TRUE} or @code{FALSE}. If undefined, nested parallel
regions are disabled by default.
@item @emph{See also}:
@ref{omp_set_nested}
@item @emph{Reference}:
@uref{http://www.openmp.org/, OpenMP specifications v3.1}, section 4.5
@node OMP_NUM_THREADS
@section @env{OMP_NUM_THREADS} -- Specifies the number of threads to use
@cindex Environment Variable
@cindex Implementation specific setting
@item @emph{Description}:
Specifies the default number of threads to use in parallel regions. The
value of this variable shall be a comma-separated list of positive
integers; each value specifies the number of threads to use for the
corresponding nesting level. If undefined, one thread per CPU is used.
@item @emph{See also}:
@ref{omp_set_num_threads}
@item @emph{Reference}:
@uref{http://www.openmp.org/, OpenMP specifications v3.1}, section 4.2
@section @env{OMP_SCHEDULE} -- How threads are scheduled
@cindex Environment Variable
@cindex Implementation specific setting
@item @emph{Description}:
Allows one to specify the schedule type and chunk size.
The value of the variable shall have the form @code{type[,chunk]}, where
@code{type} is one of @code{static}, @code{dynamic}, @code{guided} or
@code{auto}. The optional @code{chunk} size shall be a positive integer.
If undefined, dynamic scheduling and a chunk size of 1 are used.
@item @emph{See also}:
@ref{omp_set_schedule}
@item @emph{Reference}:
@uref{http://www.openmp.org/, OpenMP specifications v3.1}, sections 2.5.1 and 4.1
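To make the accepted format concrete, the following standalone sketch
parses a @code{type[,chunk]} string; the helper @code{parse_omp_schedule}
is invented for illustration and is not part of libgomp:

@smallexample
#include <stddef.h>
#include <stdlib.h>
#include <string.h>

/* Parse "type[,chunk]" as accepted by OMP_SCHEDULE.
   Returns 1 on success; chunk is 0 when not given.  */
static int
parse_omp_schedule (const char *val, char *type, size_t tlen, long *chunk)
@{
  const char *comma = strchr (val, ',');
  size_t n = comma ? (size_t) (comma - val) : strlen (val);
  if (n == 0 || n >= tlen)
    return 0;
  memcpy (type, val, n);
  type[n] = '\0';
  if (strcmp (type, "static") && strcmp (type, "dynamic")
      && strcmp (type, "guided") && strcmp (type, "auto"))
    return 0;
  *chunk = 0;
  if (comma)
    @{
      char *end;
      *chunk = strtol (comma + 1, &end, 10);
      if (*end != '\0' || *chunk <= 0)
        return 0;                /* chunk must be a positive integer */
    @}
  return 1;
@}
@end smallexample

For example, @code{OMP_SCHEDULE="dynamic,4"} yields type @code{dynamic}
with a chunk size of 4.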
@section @env{OMP_STACKSIZE} -- Set default thread stack size
@cindex Environment Variable
@item @emph{Description}:
Set the default thread stack size in kilobytes, unless the number
is suffixed by @code{B}, @code{K}, @code{M} or @code{G}, in which
case the size is, respectively, in bytes, kilobytes, megabytes
or gigabytes. This is different from @code{pthread_attr_setstacksize},
which takes the size in bytes as an argument. If the stack size cannot
be set due to system constraints, an error is reported and the initial
stack size is left unchanged. If undefined, the stack size is system
dependent.
@item @emph{Reference}:
@uref{http://www.openmp.org/, OpenMP specifications v3.1}, section 4.6
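The suffix handling can be illustrated with a simplified standalone
helper (invented for this sketch; unlike the real parser it accepts only
the uppercase suffixes named above and no surrounding whitespace):

@smallexample
#include <stdlib.h>

/* Interpret an OMP_STACKSIZE-style value: a number in kilobytes
   unless suffixed by B, K, M or G.  Returns the size in bytes,
   or 0 on error.  */
static unsigned long long
parse_stacksize (const char *val)
@{
  char *end;
  unsigned long long n = strtoull (val, &end, 10);
  unsigned long long mult = 1024ULL;    /* default unit: kilobytes */
  if (end == val)
    return 0;
  switch (*end)
    @{
    case 'B':  mult = 1ULL;                  end++; break;
    case 'K':  mult = 1024ULL;               end++; break;
    case 'M':  mult = 1024ULL * 1024;        end++; break;
    case 'G':  mult = 1024ULL * 1024 * 1024; end++; break;
    case '\0': break;
    default:   return 0;
    @}
  if (*end != '\0')
    return 0;
  return n * mult;
@}
@end smallexample

So @code{OMP_STACKSIZE=2048} and @code{OMP_STACKSIZE=2M} both request a
two-megabyte stack.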
@node OMP_THREAD_LIMIT
@section @env{OMP_THREAD_LIMIT} -- Set the maximum number of threads
@cindex Environment Variable
@item @emph{Description}:
Specifies the maximum number of threads to use for the whole program.
The value of this variable shall be a positive integer. If undefined,
the number of threads is not limited.
@item @emph{See also}:
@ref{OMP_NUM_THREADS},
@ref{omp_get_thread_limit}
@item @emph{Reference}:
@uref{http://www.openmp.org/, OpenMP specifications v3.1}, section 4.9
@node OMP_WAIT_POLICY
@section @env{OMP_WAIT_POLICY} -- How waiting threads are handled
@cindex Environment Variable
@item @emph{Description}:
Specifies whether waiting threads should be active or passive. If
the value is @code{PASSIVE}, waiting threads should not consume CPU
power while waiting; if the value is @code{ACTIVE}, they should.
@item @emph{Reference}:
@uref{http://www.openmp.org/, OpenMP specifications v3.1}, section 4.7
@section @env{OMP_PROC_BIND} -- Whether threads may be moved between CPUs
@cindex Environment Variable
@item @emph{Description}:
Specifies whether threads may be moved between processors. If set to
@code{true}, OpenMP threads should not be moved; if set to @code{false},
they may be moved.
@item @emph{See also}:
@ref{GOMP_CPU_AFFINITY}
@item @emph{Reference}:
@uref{http://www.openmp.org/, OpenMP specifications v3.1}, section 4.4
@node GOMP_CPU_AFFINITY
@section @env{GOMP_CPU_AFFINITY} -- Bind threads to specific CPUs
@cindex Environment Variable
@item @emph{Description}:
Binds threads to specific CPUs. The variable should contain a space-separated
or comma-separated list of CPUs. This list may contain different kinds of
entries: either single CPU numbers in any order, a range of CPUs (M-N)
or a range with some stride (M-N:S). CPU numbers are zero-based. For example,
@code{GOMP_CPU_AFFINITY="0 3 1-2 4-15:2"} will bind the initial thread
to CPU 0, the second to CPU 3, the third to CPU 1, the fourth to
CPU 2, the fifth to CPU 4, the sixth through tenth to CPUs 6, 8, 10, 12,
and 14 respectively and then start assigning back from the beginning of
the list. @code{GOMP_CPU_AFFINITY=0} binds all threads to CPU 0.

There is no GNU OpenMP library routine to determine whether a CPU affinity
specification is in effect. As a workaround, language-specific library
functions, e.g., @code{getenv} in C or @code{GET_ENVIRONMENT_VARIABLE} in
Fortran, may be used to query the setting of the @code{GOMP_CPU_AFFINITY}
environment variable. A defined CPU affinity on startup cannot be changed
or disabled during the runtime of the application.

If this environment variable is omitted, the host system will handle the
assignment of threads to CPUs.
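The following standalone sketch expands such a list into the sequence of
CPUs that threads are assigned to; the helper @code{expand_affinity} is
invented for illustration and is not part of libgomp:

@smallexample
#include <stdlib.h>

/* Expand a GOMP_CPU_AFFINITY-style list ("0 3 1-2 4-15:2") into an
   array of CPU numbers.  Returns the count, or -1 on a malformed
   entry.  Entries may be N, M-N or M-N:S, separated by spaces or
   commas.  */
static int
expand_affinity (const char *s, int *cpus, int max)
@{
  int count = 0;
  while (*s)
    @{
      char *end;
      long m = strtol (s, &end, 10), n, stride = 1;
      if (end == s || m < 0)
        return -1;
      n = m;
      s = end;
      if (*s == '-')                    /* range M-N */
        @{
          n = strtol (s + 1, &end, 10);
          if (end == s + 1 || n < m)
            return -1;
          s = end;
          if (*s == ':')                /* stride M-N:S */
            @{
              stride = strtol (s + 1, &end, 10);
              if (end == s + 1 || stride <= 0)
                return -1;
              s = end;
            @}
        @}
      for (long c = m; c <= n && count < max; c += stride)
        cpus[count++] = (int) c;
      while (*s == ' ' || *s == ',')
        s++;
    @}
  return count;
@}
@end smallexample

Feeding it the example above reproduces the assignment order described:
0, 3, 1, 2, 4, 6, 8, 10, 12, 14.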
@item @emph{See also}:
@node GOMP_STACKSIZE
@section @env{GOMP_STACKSIZE} -- Set default thread stack size
@cindex Environment Variable
@cindex Implementation specific setting
@item @emph{Description}:
Set the default thread stack size in kilobytes. This is different from
@code{pthread_attr_setstacksize}, which takes the size in bytes as an
argument. If the stack size cannot be set due to system constraints, an
error is reported and the initial stack size is left unchanged. If undefined,
the stack size is system dependent.
@item @emph{See also}:
@item @emph{Reference}:
@uref{http://gcc.gnu.org/ml/gcc-patches/2006-06/msg00493.html,
GCC Patches mailing list},
@uref{http://gcc.gnu.org/ml/gcc-patches/2006-06/msg00496.html,
GCC Patches mailing list}
@c ---------------------------------------------------------------------
@c The libgomp ABI
@c ---------------------------------------------------------------------

@node The libgomp ABI
@chapter The libgomp ABI

The following sections present notes on the external ABI as
presented by libgomp. Only maintainers should need them.

* Implementing MASTER construct::
* Implementing CRITICAL construct::
* Implementing ATOMIC construct::
* Implementing FLUSH construct::
* Implementing BARRIER construct::
* Implementing THREADPRIVATE construct::
* Implementing PRIVATE clause::
* Implementing FIRSTPRIVATE LASTPRIVATE COPYIN and COPYPRIVATE clauses::
* Implementing REDUCTION clause::
* Implementing PARALLEL construct::
* Implementing FOR construct::
* Implementing ORDERED construct::
* Implementing SECTIONS construct::
* Implementing SINGLE construct::
@node Implementing MASTER construct
@section Implementing MASTER construct

@smallexample
if (omp_get_thread_num () == 0)
  block
@end smallexample

Alternatively, we could generate two copies of the parallel subfunction
and only include this block in the version run by the master thread.
Surely this is not worthwhile though...
@node Implementing CRITICAL construct
@section Implementing CRITICAL construct

Without a specified name, use

@smallexample
  void GOMP_critical_start (void);
  void GOMP_critical_end (void);
@end smallexample

so that we don't get COPY relocations from libgomp to the main
application.

With a specified name, use @code{omp_set_lock} and @code{omp_unset_lock}
with the name being transformed into a variable declared like

@smallexample
  omp_lock_t gomp_critical_user_<name> __attribute__((common))
@end smallexample

Ideally the ABI would specify that all zero is a valid unlocked
state, and so we wouldn't need to initialize this at all.
@node Implementing ATOMIC construct
@section Implementing ATOMIC construct

The target should implement the @code{__sync} builtins.

Failing that we could add

@smallexample
  void GOMP_atomic_enter (void)
  void GOMP_atomic_exit (void)
@end smallexample

which reuses the regular lock code, but with yet another lock
object private to the library.
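Such a fallback could be sketched as follows, with a
@code{pthread_mutex_t} standing in for the library-private lock object;
this is an illustration, not the actual libgomp code:

@smallexample
#include <pthread.h>

/* Lock object private to the library.  */
static pthread_mutex_t gomp_atomic_lock = PTHREAD_MUTEX_INITIALIZER;

void
GOMP_atomic_enter (void)
@{
  pthread_mutex_lock (&gomp_atomic_lock);
@}

void
GOMP_atomic_exit (void)
@{
  pthread_mutex_unlock (&gomp_atomic_lock);
@}

/* An atomic update such as "#pragma omp atomic  x += expr;"
   would then expand to:
     GOMP_atomic_enter ();
     x += expr;
     GOMP_atomic_exit ();  */
@end smallexample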
@node Implementing FLUSH construct
@section Implementing FLUSH construct

Expands to the @code{__sync_synchronize} builtin.
@node Implementing BARRIER construct
@section Implementing BARRIER construct

@smallexample
  void GOMP_barrier (void)
@end smallexample
@node Implementing THREADPRIVATE construct
@section Implementing THREADPRIVATE construct

In @emph{most} cases we can map this directly to @code{__thread}. Except
that OMP allows constructors for C++ objects. We can either
refuse to support this (how often is it used?) or we can
implement something akin to @code{.ctors}.

Even more ideally, this ctor feature is handled by extensions
to the main pthreads library. Failing that, we can have a set
of entry points to register ctor functions to be called.
@node Implementing PRIVATE clause
@section Implementing PRIVATE clause

In association with a PARALLEL, or within the lexical extent
of a PARALLEL block, the variable becomes a local variable in
the parallel subfunction.

In association with FOR or SECTIONS blocks, create a new
automatic variable within the current function. This preserves
the semantics of new variable creation.
@node Implementing FIRSTPRIVATE LASTPRIVATE COPYIN and COPYPRIVATE clauses
@section Implementing FIRSTPRIVATE LASTPRIVATE COPYIN and COPYPRIVATE clauses

This seems simple enough for PARALLEL blocks. Create a private
struct for communicating between the parent and subfunction.
In the parent, copy in values for scalar and ``small'' structs;
copy in addresses for other TREE_ADDRESSABLE types. In the
subfunction, copy the value into the local variable.

It is not clear what to do with bare FOR or SECTION blocks.
The only thing I can figure is that we do something like:

@smallexample
  #pragma omp for firstprivate(x) lastprivate(y)
  for (int i = 0; i < n; ++i)
    body;
@end smallexample

which becomes

@smallexample
  @{
    int x = x, y;

    // for stuff

    if (i == n)
      y = y;
  @}
@end smallexample

where the "x=x" and "y=y" assignments actually have different
uids for the two variables, i.e. not something you could write
directly in C. Presumably this only makes sense if the "outer"
x and y are global variables.

COPYPRIVATE would work the same way, except the structure
broadcast would have to happen via SINGLE machinery instead.
@node Implementing REDUCTION clause
@section Implementing REDUCTION clause

The private struct mentioned in the previous section should have
a pointer to an array of the type of the variable, indexed by the
thread's @var{team_id}. The thread stores its final value into the
array, and after the barrier, the master thread iterates over the
array to collect the values.
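The scheme can be illustrated serially; the names @code{partial},
@code{thread_body} and @code{master_combine} are invented for this
sketch, and in libgomp the array lives in the communication struct
and the stores happen concurrently:

@smallexample
#define NUM_THREADS 4

static double partial[NUM_THREADS];     /* indexed by team_id */

/* Each thread computes its share of the reduction and stores
   its final value into the slot for its team_id.  */
static void
thread_body (int team_id)
@{
  double local = 0.0;
  for (int i = team_id; i < 100; i += NUM_THREADS)
    local += i;
  partial[team_id] = local;
@}

/* After the barrier, the master iterates over the array to
   collect the values.  */
static double
master_combine (void)
@{
  double sum = 0.0;
  for (int t = 0; t < NUM_THREADS; t++)
    sum += partial[t];
  return sum;
@}
@end smallexample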
@node Implementing PARALLEL construct
@section Implementing PARALLEL construct

@smallexample
  #pragma omp parallel
  @{
    body;
  @}
@end smallexample

becomes

@smallexample
  void subfunction (void *data)
  @{
    use data;
    body;
  @}

  setup data;
  GOMP_parallel_start (subfunction, &data, num_threads);
  subfunction (&data);
  GOMP_parallel_end ();
@end smallexample

@smallexample
  void GOMP_parallel_start (void (*fn)(void *), void *data, unsigned num_threads)
@end smallexample

The @var{FN} argument is the subfunction to be run in parallel.

The @var{DATA} argument is a pointer to a structure used to
communicate data in and out of the subfunction, as discussed
above with respect to FIRSTPRIVATE et al.

The @var{NUM_THREADS} argument is 1 if an IF clause is present
and false, or the value of the NUM_THREADS clause, if present, or 0.

The function needs to create the appropriate number of
threads and/or launch them from the dock. It needs to
create the team structure and assign team ids.

@smallexample
  void GOMP_parallel_end (void)
@end smallexample

Tears down the team and returns us to the previous @code{omp_in_parallel()} state.
@node Implementing FOR construct
@section Implementing FOR construct

@smallexample
  #pragma omp parallel for
  for (i = lb; i <= ub; i++)
    body;
@end smallexample

becomes

@smallexample
  void subfunction (void *data)
  @{
    long _s0, _e0;
    while (GOMP_loop_static_next (&_s0, &_e0))
    @{
      long _e1 = _e0, i;
      for (i = _s0; i < _e1; i++)
        body;
    @}
    GOMP_loop_end_nowait ();
  @}

  GOMP_parallel_loop_static (subfunction, NULL, 0, lb, ub+1, 1, 0);
  subfunction (NULL);
  GOMP_parallel_end ();
@end smallexample

@smallexample
  #pragma omp for schedule(runtime)
  for (i = 0; i < n; i++)
    body;
@end smallexample

becomes

@smallexample
  @{
    long i, _s0, _e0;
    if (GOMP_loop_runtime_start (0, n, 1, &_s0, &_e0))
      do @{
        for (i = _s0; i < _e0; i++)
          body;
      @} while (GOMP_loop_runtime_next (&_s0, &_e0));
    GOMP_loop_end ();
  @}
@end smallexample

Note that while it looks like there is trickiness to propagating
a non-constant STEP, there isn't really. We're explicitly allowed
to evaluate it as many times as we want, and any variables involved
should automatically be handled as PRIVATE or SHARED like any other
variables. So the expression should remain evaluable in the
subfunction. We can also pull it into a local variable if we like,
but since it's supposed to remain unchanged, we can also choose not to.

If we have SCHEDULE(STATIC), and no ORDERED, then we ought to be
able to get away with no work-sharing context at all, since we can
simply perform the arithmetic directly in each thread to divide up
the iterations. Which would mean that we wouldn't need to call any
of these routines.

There are separate routines for handling loops with an ORDERED
clause. Bookkeeping for that is non-trivial...
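The per-thread arithmetic alluded to above might look like the following
sketch; the function name @code{static_chunk} is invented, and the real
libgomp computation also has to cope with arbitrary lower bounds and
steps:

@smallexample
/* Divide n iterations among nthreads as evenly as possible;
   the first n % nthreads threads receive one extra iteration.
   Yields this thread's half-open range [*start, *end).  */
static void
static_chunk (long n, int nthreads, int team_id,
              long *start, long *end)
@{
  long q = n / nthreads;
  long r = n % nthreads;
  *start = team_id * q + (team_id < r ? team_id : r);
  *end = *start + q + (team_id < r ? 1 : 0);
@}
@end smallexample

Each thread evaluates this locally from its team id, so no shared
work-sharing state is consulted at all.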
@node Implementing ORDERED construct
@section Implementing ORDERED construct

@smallexample
  void GOMP_ordered_start (void)
  void GOMP_ordered_end (void)
@end smallexample
@node Implementing SECTIONS construct
@section Implementing SECTIONS construct

A block as

@smallexample
  #pragma omp sections
  @{
    #pragma omp section
    stmt1;
    #pragma omp section
    stmt2;
    #pragma omp section
    stmt3;
  @}
@end smallexample

becomes

@smallexample
  for (i = GOMP_sections_start (3); i != 0; i = GOMP_sections_next ())
    switch (i)
      @{
      case 1:
        stmt1;
        break;
      case 2:
        stmt2;
        break;
      case 3:
        stmt3;
        break;
      @}
@end smallexample
@node Implementing SINGLE construct
@section Implementing SINGLE construct

A block like

@smallexample
  #pragma omp single
  @{
    body;
  @}
@end smallexample

becomes

@smallexample
  if (GOMP_single_start ())
    body;
  GOMP_barrier ();
@end smallexample

while

@smallexample
  #pragma omp single copyprivate(x)
    body;
@end smallexample

becomes

@smallexample
  datap = GOMP_single_copy_start ();
  if (datap == NULL)
    @{
      body;
      data.x = x;
      GOMP_single_copy_end (&data);
    @}
  else
    x = datap->x;
  GOMP_barrier ();
@end smallexample
@c ---------------------------------------------------------------------
@c Reporting Bugs
@c ---------------------------------------------------------------------

@node Reporting Bugs
@chapter Reporting Bugs

Bugs in the GNU OpenMP implementation should be reported via
@uref{http://gcc.gnu.org/bugzilla/, Bugzilla}. In all cases, please add
@code{openmp} to the keywords field in the bug report.
@c ---------------------------------------------------------------------
@c GNU General Public License
@c ---------------------------------------------------------------------

@include gpl_v3.texi

@c ---------------------------------------------------------------------
@c GNU Free Documentation License
@c ---------------------------------------------------------------------

@c ---------------------------------------------------------------------
@c Funding Free Software
@c ---------------------------------------------------------------------

@include funding.texi

@c ---------------------------------------------------------------------
@c Library Index
@c ---------------------------------------------------------------------

@unnumbered Library Index