1 .\" Copyright 2015-2017 Mathieu Desnoyers <mathieu.desnoyers@efficios.com>
3 .\" %%%LICENSE_START(VERBATIM)
4 .\" Permission is granted to make and distribute verbatim copies of this
5 .\" manual provided the copyright notice and this permission notice are
6 .\" preserved on all copies.
8 .\" Permission is granted to copy and distribute modified versions of this
9 .\" manual under the conditions for verbatim copying, provided that the
10 .\" entire resulting derived work is distributed under the terms of a
11 .\" permission notice identical to this one.
13 .\" Since the Linux kernel and libraries are constantly changing, this
14 .\" manual page may be incorrect or out-of-date. The author(s) assume no
15 .\" responsibility for errors or omissions, or for damages resulting from
16 .\" the use of the information contained herein. The author(s) may not
17 .\" have taken the same level of care in the production of this manual,
18 .\" which is licensed free of charge, as they might when working
21 .\" Formatted or processed versions of this manual, if unaccompanied by
22 .\" the source, must acknowledge the copyright and authors of this work.
.TH MEMBARRIER 2 2018-04-30 "Linux" "Linux Programmer's Manual"
.SH NAME
membarrier \- issue memory barriers on a set of threads
.SH SYNOPSIS
.B #include <linux/membarrier.h>
.PP
.BI "int membarrier(int " cmd ", int " flags ");
.SH DESCRIPTION
The
.BR membarrier ()
system call helps reduce the overhead of the memory barrier
instructions required to order memory accesses on multi-core systems.
However, this system call is heavier than a memory barrier, so using it
effectively is
.I not
as simple as replacing memory barriers with this
system call; it requires an understanding of the details below.
.PP
When using memory barriers, bear in mind that a memory barrier always
needs to be matched with its memory barrier counterparts, unless the
architecture's memory model does not require the matching barriers.
.PP
There are cases where one side of the matching barriers (which we will
refer to as the "fast side") is executed much more often than the other
(which we will refer to as the "slow side").
This is a prime target for the use of
.BR membarrier ().
The key idea is to replace the fast-side memory barriers of such
matching pairs with simple compiler barriers, for example:
.PP
.in +4n
.EX
asm volatile ("" : : : "memory")
.EE
.in
.PP
and to replace the slow-side memory barriers with calls to
.BR membarrier ().
.PP
This adds overhead to the slow side and removes overhead from the
fast side, resulting in an overall performance increase as long as
the slow side is infrequent enough that the overhead of the
.BR membarrier ()
calls does not outweigh the performance gain on the fast side.
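.PP
Schematically, and ahead of the complete program in the EXAMPLE section
below, the transformation of one matched pair of barriers looks as
follows (a sketch only, with x86 barriers; the
.I membarrier()
wrapper is the one defined in the EXAMPLE section):
.PP
.in +4n
.EX
/* Before: both sides use a real memory barrier. */
/* fast side */  asm volatile ("mfence" : : : "memory");
/* slow side */  asm volatile ("mfence" : : : "memory");

/* After: the fast side keeps only a compiler barrier;
   the slow side calls membarrier() instead. */
/* fast side */  asm volatile ("" : : : "memory");
/* slow side */  membarrier(MEMBARRIER_CMD_GLOBAL, 0);
.EE
.in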
.PP
The
.I cmd
argument is one of the following:
.TP
.BR MEMBARRIER_CMD_QUERY " (since Linux 4.3)"
Query the set of supported commands.
The return value of the call is a bit mask of supported
commands.
.BR MEMBARRIER_CMD_QUERY ,
which has the value 0,
is not itself included in this bit mask.
This command is always supported (on kernels where
.BR membarrier ()
is provided).
.TP
.BR MEMBARRIER_CMD_GLOBAL " (since Linux 4.16)"
Ensure that all threads from all processes on the system pass through a
state where all memory accesses to user-space addresses match program
order between entry to and return from the
.BR membarrier ()
system call.
All threads on the system are targeted by this command.
.TP
.BR MEMBARRIER_CMD_GLOBAL_EXPEDITED " (since Linux 4.16)"
Execute a memory barrier on all running threads of all processes that
previously registered with
.BR MEMBARRIER_CMD_REGISTER_GLOBAL_EXPEDITED .
.IP
Upon return from the system call, the calling thread has a guarantee that all
running threads have passed through a state where all memory accesses to
user-space addresses match program order between entry to and return
from the system call (non-running threads are de facto in such a state).
This guarantee is provided only for the threads of processes that
previously registered with
.BR MEMBARRIER_CMD_REGISTER_GLOBAL_EXPEDITED .
.IP
Given that registration is about the intent to receive the barriers, it
is valid to invoke
.BR MEMBARRIER_CMD_GLOBAL_EXPEDITED
from a process that has not employed
.BR MEMBARRIER_CMD_REGISTER_GLOBAL_EXPEDITED .
.IP
The "expedited" commands complete faster than the non-expedited ones;
they never block, but have the downside of causing extra overhead.
.TP
.BR MEMBARRIER_CMD_REGISTER_GLOBAL_EXPEDITED " (since Linux 4.16)"
Register the process's intent to receive
.BR MEMBARRIER_CMD_GLOBAL_EXPEDITED
memory barriers.
.TP
.BR MEMBARRIER_CMD_PRIVATE_EXPEDITED " (since Linux 4.14)"
Execute a memory barrier on each running thread belonging to the same
process as the calling thread.
.IP
Upon return from the system call, the calling
thread has a guarantee that all its running thread siblings have passed
through a state where all memory accesses to user-space addresses match
program order between entry to and return from the system call
(non-running threads are de facto in such a state).
This guarantee is provided only for threads in
the same process as the calling thread.
.IP
The "expedited" commands complete faster than the non-expedited ones;
they never block, but have the downside of causing extra overhead.
.IP
A process must register its intent to use the private
expedited command prior to using it (see the sketch following this
list).
.TP
.BR MEMBARRIER_CMD_REGISTER_PRIVATE_EXPEDITED " (since Linux 4.14)"
Register the process's intent to use
.BR MEMBARRIER_CMD_PRIVATE_EXPEDITED .
.TP
.BR MEMBARRIER_CMD_PRIVATE_EXPEDITED_SYNC_CORE " (since Linux 4.16)"
In addition to providing the memory ordering guarantees described in
.BR MEMBARRIER_CMD_PRIVATE_EXPEDITED ,
upon return from the system call the calling thread has a guarantee that
all its running thread siblings have executed a core serializing
instruction.
This guarantee is provided only for threads in
the same process as the calling thread.
.IP
The "expedited" commands complete faster than the non-expedited ones;
they never block, but have the downside of causing extra overhead.
.IP
A process must register its intent to use the private expedited sync
core command prior to using it.
.TP
.BR MEMBARRIER_CMD_REGISTER_PRIVATE_EXPEDITED_SYNC_CORE " (since Linux 4.16)"
Register the process's intent to use
.BR MEMBARRIER_CMD_PRIVATE_EXPEDITED_SYNC_CORE .
.TP
.BR MEMBARRIER_CMD_SHARED " (since Linux 4.3)"
This is an alias for
.BR MEMBARRIER_CMD_GLOBAL
that exists for header backward compatibility.
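.PP
To illustrate how registration pairs with use, the following sketch
(not taken from the kernel sources; the helper name is illustrative,
and it assumes the
.I membarrier()
wrapper defined in the EXAMPLE section below) registers the calling
process for the private expedited command and then issues a barrier
over the threads of that process:
.PP
.in +4n
.EX
/* Illustrative helper; assumes a membarrier() syscall wrapper. */
static int
use_private_expedited(void)
{
    /* Register once per process, before first use. */
    if (membarrier(MEMBARRIER_CMD_REGISTER_PRIVATE_EXPEDITED, 0) < 0)
        return \-1;

    /* Memory barrier on all running threads of this process. */
    return membarrier(MEMBARRIER_CMD_PRIVATE_EXPEDITED, 0);
}
.EE
.in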
.PP
The
.I flags
argument is currently unused and must be specified as 0.
.PP
All memory accesses performed in program order from each targeted thread
are guaranteed to be ordered with respect to
.BR membarrier ().
.PP
If we use the semantic
.I barrier()
to represent a compiler barrier forcing memory
accesses to be performed in program order across the barrier, and
.I smp_mb()
to represent explicit memory barriers forcing full memory
ordering across the barrier, we have the following ordering table for
each pairing of
.IR barrier() ,
.BR membarrier (),
and
.IR smp_mb() .
The pair ordering is detailed as (O: ordered, X: not ordered):
.PP
                       barrier()   smp_mb()   membarrier()
       barrier()          X           X            O
       smp_mb()           X           O            O
       membarrier()       O           O            O
.SH RETURN VALUE
On success, the
.B MEMBARRIER_CMD_QUERY
operation returns a bit mask of supported commands, and the
.BR MEMBARRIER_CMD_GLOBAL ,
.BR MEMBARRIER_CMD_GLOBAL_EXPEDITED ,
.BR MEMBARRIER_CMD_REGISTER_GLOBAL_EXPEDITED ,
.BR MEMBARRIER_CMD_PRIVATE_EXPEDITED ,
.BR MEMBARRIER_CMD_REGISTER_PRIVATE_EXPEDITED ,
.BR MEMBARRIER_CMD_PRIVATE_EXPEDITED_SYNC_CORE ,
and
.B MEMBARRIER_CMD_REGISTER_PRIVATE_EXPEDITED_SYNC_CORE
operations return zero.
On error, \-1 is returned,
and
.I errno
is set appropriately.
.PP
For a given command, with
.I flags
set to 0, this system call is
guaranteed to always return the same value until reboot.
Further calls with the same arguments will lead to the same result.
Therefore, with
.I flags
set to 0, error handling is required only for the first call to
.BR membarrier ().
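.PP
Because the result for a given command is stable until reboot, an
application can, for example, perform the query once during
initialization and cache the resulting bit mask (a sketch; the names
are illustrative, and the
.I membarrier()
wrapper is the one defined in the EXAMPLE section below):
.PP
.in +4n
.EX
static int supported_cmds;  /* cached MEMBARRIER_CMD_QUERY result */

static int
membarrier_init(void)
{
    supported_cmds = membarrier(MEMBARRIER_CMD_QUERY, 0);
    if (supported_cmds < 0)
        return \-1;  /* only this first call needs error handling */
    return 0;
}
.EE
.in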
.SH ERRORS
.TP
.B EINVAL
.I cmd
is invalid, or
.I flags
is nonzero, or the
.BR MEMBARRIER_CMD_GLOBAL
command is disabled because the
.I nohz_full
CPU parameter has been set, or the
.BR MEMBARRIER_CMD_PRIVATE_EXPEDITED_SYNC_CORE
and
.BR MEMBARRIER_CMD_REGISTER_PRIVATE_EXPEDITED_SYNC_CORE
commands are not implemented by the architecture.
.TP
.B ENOSYS
The
.BR membarrier ()
system call is not implemented by this kernel.
.TP
.B EPERM
The current process was not registered prior to using private expedited
commands.
.SH VERSIONS
The
.BR membarrier ()
system call was added in Linux 4.3.
253 .\" FIXME See if the following syscalls make it into Linux 4.15 or later
.SH NOTES
A memory barrier instruction is part of the instruction set of
architectures with weakly-ordered memory models.
It orders memory
accesses prior to the barrier and after the barrier with respect to
matching barriers on other cores.
For instance, a load fence can order
loads prior to and following that fence with respect to stores ordered
by matching store fences.
.PP
Program order is the order in which instructions are ordered in the
program assembly code.
.PP
Examples where
.BR membarrier ()
can be useful include implementations
of Read-Copy-Update libraries and garbage collectors.
.SH EXAMPLE
Assuming a multithreaded application where "fast_path()" is executed
very frequently, and where "slow_path()" is executed infrequently, the
following code (x86) can be transformed using
.BR membarrier ():
.PP
.in +4n
.EX
#include <stdlib.h>

static volatile int a, b;

static void
fast_path(int *read_b)
{
    a = 1;
    asm volatile ("mfence" : : : "memory");
    *read_b = b;
}

static void
slow_path(int *read_a)
{
    b = 1;
    asm volatile ("mfence" : : : "memory");
    *read_a = a;
}

int
main(int argc, char **argv)
{
    int read_a, read_b;

    /*
     * Real applications would call fast_path() and slow_path()
     * from different threads. Call those from main() to keep
     * this example short.
     */
    slow_path(&read_a);
    fast_path(&read_b);

    /*
     * read_b == 0 implies read_a == 1 and
     * read_a == 0 implies read_b == 1.
     */
    if (read_b == 0 && read_a == 0)
        abort();

    exit(EXIT_SUCCESS);
}
.EE
.in
.PP
The code above transformed to use
.BR membarrier ()
becomes:
.PP
.in +4n
.EX
#define _GNU_SOURCE
#include <stdlib.h>
#include <stdio.h>
#include <unistd.h>
#include <sys/syscall.h>
#include <linux/membarrier.h>

static volatile int a, b;

static int
membarrier(int cmd, int flags)
{
    return syscall(__NR_membarrier, cmd, flags);
}

static int
init_membarrier(void)
{
    int ret;

    /* Check that membarrier() is supported. */
    ret = membarrier(MEMBARRIER_CMD_QUERY, 0);
    if (ret < 0) {
        perror("membarrier");
        return \-1;
    }

    if (!(ret & MEMBARRIER_CMD_GLOBAL)) {
        fprintf(stderr,
            "membarrier does not support MEMBARRIER_CMD_GLOBAL\en");
        return \-1;
    }

    return 0;
}

static void
fast_path(int *read_b)
{
    a = 1;
    asm volatile ("" : : : "memory");
    *read_b = b;
}

static void
slow_path(int *read_a)
{
    b = 1;
    membarrier(MEMBARRIER_CMD_GLOBAL, 0);
    *read_a = a;
}

int
main(int argc, char **argv)
{
    int read_a, read_b;

    if (init_membarrier())
        exit(EXIT_FAILURE);

    /*
     * Real applications would call fast_path() and slow_path()
     * from different threads. Call those from main() to keep
     * this example short.
     */
    slow_path(&read_a);
    fast_path(&read_b);

    /*
     * read_b == 0 implies read_a == 1 and
     * read_a == 0 implies read_b == 1.
     */
    if (read_b == 0 && read_a == 0)
        abort();

    exit(EXIT_SUCCESS);
}
.EE
.in