1 ============================
2 LINUX KERNEL MEMORY BARRIERS
3 ============================
5 By: David Howells <dhowells@redhat.com>
6 Paul E. McKenney <paulmck@linux.vnet.ibm.com>
7 Will Deacon <will.deacon@arm.com>
8 Peter Zijlstra <peterz@infradead.org>
==========
DISCLAIMER
==========
14 This document is not a specification; it is intentionally (for the sake of
15 brevity) and unintentionally (due to being human) incomplete. This document is
16 meant as a guide to using the various memory barriers provided by Linux, but
17 in case of any doubt (and there are many) please ask.
To repeat, this document is not a specification of what Linux expects from
hardware.
22 The purpose of this document is twofold:
24 (1) to specify the minimum functionality that one can rely on for any
25 particular barrier, and
27 (2) to provide a guide as to how to use the barriers that are available.
29 Note that an architecture can provide more than the minimum requirement
30 for any particular barrier, but if the architecture provides less than
31 that, that architecture is incorrect.
33 Note also that it is possible that a barrier may be a no-op for an
34 architecture because the way that arch works renders an explicit barrier
35 unnecessary in that case.
========
CONTENTS
========
42 (*) Abstract memory access model.
- Device operations.
- Guarantees.
47 (*) What are memory barriers?
49 - Varieties of memory barrier.
50 - What may not be assumed about memory barriers?
51 - Data dependency barriers.
52 - Control dependencies.
53 - SMP barrier pairing.
54 - Examples of memory barrier sequences.
55 - Read memory barriers vs load speculation.
- Transitivity.
58 (*) Explicit kernel barriers.
- Compiler barrier.
61 - CPU memory barriers.
- MMIO write barrier.
64 (*) Implicit kernel memory barriers.
66 - Lock acquisition functions.
67 - Interrupt disabling functions.
68 - Sleep and wake-up functions.
69 - Miscellaneous functions.
71 (*) Inter-CPU acquiring barrier effects.
73 - Acquires vs memory accesses.
74 - Acquires vs I/O accesses.
76 (*) Where are memory barriers needed?
78 - Interprocessor interaction.
- Atomic operations.
- Accessing devices.
- Interrupts.
83 (*) Kernel I/O barrier effects.
85 (*) Assumed minimum execution ordering model.
87 (*) The effects of the cpu cache.
- Cache coherency.
90 - Cache coherency vs DMA.
91 - Cache coherency vs MMIO.
93 (*) The things CPUs get up to.
95 - And then there's the Alpha.
96 - Virtual Machine Guests.
(*) Example uses.

- Circular buffers.

(*) References.
105 ============================
106 ABSTRACT MEMORY ACCESS MODEL
107 ============================
109 Consider the following abstract model of the system:
                    :                :
        +-------+   :   +--------+   :   +-------+
        |       |   :   |        |   :   |       |
        | CPU 1 |<----->| Memory |<----->| CPU 2 |
        |       |   :   |        |   :   |       |
        +-------+   :   +--------+   :   +-------+
            ^       :       ^        :       ^
            |       :       |        :       |
            |       :       v        :       |
            |       :   +--------+   :       |
            +---------->| Device |<----------+
                    :   |        |   :
                    :   +--------+   :
                    :                :
134 Each CPU executes a program that generates memory access operations. In the
135 abstract CPU, memory operation ordering is very relaxed, and a CPU may actually
136 perform the memory operations in any order it likes, provided program causality
137 appears to be maintained. Similarly, the compiler may also arrange the
138 instructions it emits in any order it likes, provided it doesn't affect the
139 apparent operation of the program.
141 So in the above diagram, the effects of the memory operations performed by a
142 CPU are perceived by the rest of the system as the operations cross the
143 interface between the CPU and rest of the system (the dotted lines).
146 For example, consider the following sequence of events:
        CPU 1           CPU 2
        =============== ===============
        { A == 1; B == 2 }
        A = 3;          x = B;
        B = 4;          y = A;
154 The set of accesses as seen by the memory system in the middle can be arranged
155 in 24 different combinations:
157 STORE A=3, STORE B=4, y=LOAD A->3, x=LOAD B->4
158 STORE A=3, STORE B=4, x=LOAD B->4, y=LOAD A->3
159 STORE A=3, y=LOAD A->3, STORE B=4, x=LOAD B->4
160 STORE A=3, y=LOAD A->3, x=LOAD B->2, STORE B=4
161 STORE A=3, x=LOAD B->2, STORE B=4, y=LOAD A->3
162 STORE A=3, x=LOAD B->2, y=LOAD A->3, STORE B=4
STORE B=4, STORE A=3, y=LOAD A->3, x=LOAD B->4
STORE B=4, ...
...
167 and can thus result in four different combinations of values:
        x == 2, y == 1
        x == 2, y == 3
        x == 4, y == 1
        x == 4, y == 3
175 Furthermore, the stores committed by a CPU to the memory system may not be
perceived by the loads made by another CPU in the same order as the stores were
committed.
180 As a further example, consider this sequence of events:
        CPU 1           CPU 2
        =============== ===============
        { A == 1, B == 2, C == 3, P == &A, Q == &C }
        B = 4;          Q = P;
        P = &B;         D = *Q;
188 There is an obvious data dependency here, as the value loaded into D depends on
189 the address retrieved from P by CPU 2. At the end of the sequence, any of the
190 following results are possible:
192 (Q == &A) and (D == 1)
193 (Q == &B) and (D == 2)
194 (Q == &B) and (D == 4)
Note that CPU 2 will never try to load C into D because the CPU will load P
197 into Q before issuing the load of *Q.
DEVICE OPERATIONS
-----------------
203 Some devices present their control interfaces as collections of memory
204 locations, but the order in which the control registers are accessed is very
205 important. For instance, imagine an ethernet card with a set of internal
206 registers that are accessed through an address port register (A) and a data
port register (D). To read internal register 5, the following code might then
be used:

        *A = 5;
        x = *D;
213 but this might show up as either of the following two sequences:
215 STORE *A = 5, x = LOAD *D
216 x = LOAD *D, STORE *A = 5
the second of which will almost certainly result in a malfunction, since it
sets the address _after_ attempting to read the register.
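In driver terms the access might look like the following sketch. This is
illustrative only and not from the original text: the 'card->regs' mapping and
the register offsets are hypothetical, and the pattern leans on the fact that
readl() and writel() to the same device are kept in program order:

        u32 x;
        void __iomem *regs = card->regs;  /* hypothetical ioremap()ed window */

        writel(5, regs + 0x00);           /* A: select internal register 5 */
        x = readl(regs + 0x04);           /* D: then read its contents */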
GUARANTEES
----------

There are some minimal guarantees that may be expected of a CPU:
227 (*) On any given CPU, dependent memory accesses will be issued in order, with
228 respect to itself. This means that for:
230 Q = READ_ONCE(P); smp_read_barrier_depends(); D = READ_ONCE(*Q);
232 the CPU will issue the following memory operations:
234 Q = LOAD P, D = LOAD *Q
236 and always in that order. On most systems, smp_read_barrier_depends()
237 does nothing, but it is required for DEC Alpha. The READ_ONCE()
238 is required to prevent compiler mischief. Please note that you
239 should normally use something like rcu_dereference() instead of
240 open-coding smp_read_barrier_depends().
242 (*) Overlapping loads and stores within a particular CPU will appear to be
243 ordered within that CPU. This means that for:
245 a = READ_ONCE(*X); WRITE_ONCE(*X, b);
247 the CPU will only issue the following sequence of memory operations:
249 a = LOAD *X, STORE *X = b
And for:
253 WRITE_ONCE(*X, c); d = READ_ONCE(*X);
255 the CPU will only issue:
257 STORE *X = c, d = LOAD *X
(Loads and stores overlap if they are targeted at overlapping pieces of
memory).
262 And there are a number of things that _must_ or _must_not_ be assumed:
264 (*) It _must_not_ be assumed that the compiler will do what you want
265 with memory references that are not protected by READ_ONCE() and
266 WRITE_ONCE(). Without them, the compiler is within its rights to
267 do all sorts of "creative" transformations, which are covered in
268 the COMPILER BARRIER section.
270 (*) It _must_not_ be assumed that independent loads and stores will be issued
271 in the order given. This means that for:
273 X = *A; Y = *B; *D = Z;
275 we may get any of the following sequences:
277 X = LOAD *A, Y = LOAD *B, STORE *D = Z
278 X = LOAD *A, STORE *D = Z, Y = LOAD *B
279 Y = LOAD *B, X = LOAD *A, STORE *D = Z
280 Y = LOAD *B, STORE *D = Z, X = LOAD *A
281 STORE *D = Z, X = LOAD *A, Y = LOAD *B
282 STORE *D = Z, Y = LOAD *B, X = LOAD *A
284 (*) It _must_ be assumed that overlapping memory accesses may be merged or
285 discarded. This means that for:
287 X = *A; Y = *(A + 4);
289 we may get any one of the following sequences:
291 X = LOAD *A; Y = LOAD *(A + 4);
292 Y = LOAD *(A + 4); X = LOAD *A;
293 {X, Y} = LOAD {*A, *(A + 4) };
And for:
297 *A = X; *(A + 4) = Y;
we may get any of:
301 STORE *A = X; STORE *(A + 4) = Y;
302 STORE *(A + 4) = Y; STORE *A = X;
303 STORE {*A, *(A + 4) } = {X, Y};
305 And there are anti-guarantees:
307 (*) These guarantees do not apply to bitfields, because compilers often
308 generate code to modify these using non-atomic read-modify-write
sequences. Do not attempt to use bitfields to synchronize parallel
algorithms.
312 (*) Even in cases where bitfields are protected by locks, all fields
313 in a given bitfield must be protected by one lock. If two fields
314 in a given bitfield are protected by different locks, the compiler's
315 non-atomic read-modify-write sequences can cause an update to one
316 field to corrupt the value of an adjacent field.
318 (*) These guarantees apply only to properly aligned and sized scalar
319 variables. "Properly sized" currently means variables that are
320 the same size as "char", "short", "int" and "long". "Properly
321 aligned" means the natural alignment, thus no constraints for
322 "char", two-byte alignment for "short", four-byte alignment for
323 "int", and either four-byte or eight-byte alignment for "long",
324 on 32-bit and 64-bit systems, respectively. Note that these
325 guarantees were introduced into the C11 standard, so beware when
326 using older pre-C11 compilers (for example, gcc 4.6). The portion
327 of the standard containing this guarantee is Section 3.14, which
328 defines "memory location" as follows:
331 either an object of scalar type, or a maximal sequence
332 of adjacent bit-fields all having nonzero width
334 NOTE 1: Two threads of execution can update and access
separate memory locations without interfering with
each other.
338 NOTE 2: A bit-field and an adjacent non-bit-field member
339 are in separate memory locations. The same applies
340 to two bit-fields, if one is declared inside a nested
341 structure declaration and the other is not, or if the two
342 are separated by a zero-length bit-field declaration,
343 or if they are separated by a non-bit-field member
344 declaration. It is not safe to concurrently update two
345 bit-fields in the same structure if all members declared
346 between them are also bit-fields, no matter what the
347 sizes of those intervening bit-fields happen to be.
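To make the bitfield anti-guarantees concrete, here is a sketch; the struct,
fields, and locks are illustrative and not from the original text:

        struct foo {
                int a:4;        /* nominally protected by lock_a */
                int b:4;        /* nominally protected by lock_b -- BUG */
                char c;         /* separate memory location: safe */
        };

        /*
         * 'a' and 'b' share one memory location, so an update to 'a' may be
         * compiled as a non-atomic read-modify-write of the whole location,
         * silently undoing a concurrent locked update to 'b'.  'c' is a
         * separate memory location under the C11 rules quoted above, so
         * updating it under its own lock is safe.
         */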
350 =========================
351 WHAT ARE MEMORY BARRIERS?
352 =========================
354 As can be seen above, independent memory operations are effectively performed
355 in random order, but this can be a problem for CPU-CPU interaction and for I/O.
356 What is required is some way of intervening to instruct the compiler and the
357 CPU to restrict the order.
359 Memory barriers are such interventions. They impose a perceived partial
360 ordering over the memory operations on either side of the barrier.
362 Such enforcement is important because the CPUs and other devices in a system
363 can use a variety of tricks to improve performance, including reordering,
364 deferral and combination of memory operations; speculative loads; speculative
365 branch prediction and various types of caching. Memory barriers are used to
366 override or suppress these tricks, allowing the code to sanely control the
367 interaction of multiple CPUs and/or devices.
370 VARIETIES OF MEMORY BARRIER
371 ---------------------------
373 Memory barriers come in four basic varieties:
375 (1) Write (or store) memory barriers.
377 A write memory barrier gives a guarantee that all the STORE operations
378 specified before the barrier will appear to happen before all the STORE
379 operations specified after the barrier with respect to the other
380 components of the system.
382 A write barrier is a partial ordering on stores only; it is not required
383 to have any effect on loads.
385 A CPU can be viewed as committing a sequence of store operations to the
386 memory system as time progresses. All stores before a write barrier will
387 occur in the sequence _before_ all the stores after the write barrier.
389 [!] Note that write barriers should normally be paired with read or data
390 dependency barriers; see the "SMP barrier pairing" subsection.
393 (2) Data dependency barriers.
395 A data dependency barrier is a weaker form of read barrier. In the case
396 where two loads are performed such that the second depends on the result
397 of the first (eg: the first load retrieves the address to which the second
398 load will be directed), a data dependency barrier would be required to
399 make sure that the target of the second load is updated before the address
400 obtained by the first load is accessed.
402 A data dependency barrier is a partial ordering on interdependent loads
403 only; it is not required to have any effect on stores, independent loads
404 or overlapping loads.
406 As mentioned in (1), the other CPUs in the system can be viewed as
407 committing sequences of stores to the memory system that the CPU being
408 considered can then perceive. A data dependency barrier issued by the CPU
409 under consideration guarantees that for any load preceding it, if that
410 load touches one of a sequence of stores from another CPU, then by the
411 time the barrier completes, the effects of all the stores prior to that
touched by the load will be perceptible to any loads issued after the data
dependency barrier.
415 See the "Examples of memory barrier sequences" subsection for diagrams
416 showing the ordering constraints.
418 [!] Note that the first load really has to have a _data_ dependency and
419 not a control dependency. If the address for the second load is dependent
420 on the first load, but the dependency is through a conditional rather than
421 actually loading the address itself, then it's a _control_ dependency and
422 a full read barrier or better is required. See the "Control dependencies"
423 subsection for more information.
425 [!] Note that data dependency barriers should normally be paired with
426 write barriers; see the "SMP barrier pairing" subsection.
429 (3) Read (or load) memory barriers.
431 A read barrier is a data dependency barrier plus a guarantee that all the
432 LOAD operations specified before the barrier will appear to happen before
433 all the LOAD operations specified after the barrier with respect to the
434 other components of the system.
436 A read barrier is a partial ordering on loads only; it is not required to
437 have any effect on stores.
Read memory barriers imply data dependency barriers, and so can substitute
for them.
442 [!] Note that read barriers should normally be paired with write barriers;
443 see the "SMP barrier pairing" subsection.
446 (4) General memory barriers.
448 A general memory barrier gives a guarantee that all the LOAD and STORE
449 operations specified before the barrier will appear to happen before all
450 the LOAD and STORE operations specified after the barrier with respect to
451 the other components of the system.
453 A general memory barrier is a partial ordering over both loads and stores.
455 General memory barriers imply both read and write memory barriers, and so
456 can substitute for either.
459 And a couple of implicit varieties:
461 (5) ACQUIRE operations.
463 This acts as a one-way permeable barrier. It guarantees that all memory
464 operations after the ACQUIRE operation will appear to happen after the
465 ACQUIRE operation with respect to the other components of the system.
466 ACQUIRE operations include LOCK operations and both smp_load_acquire()
and smp_cond_acquire() operations. The latter builds the necessary ACQUIRE
semantics by relying on a control dependency and smp_rmb().
470 Memory operations that occur before an ACQUIRE operation may appear to
471 happen after it completes.
An ACQUIRE operation should almost always be paired with a RELEASE
operation.
477 (6) RELEASE operations.
479 This also acts as a one-way permeable barrier. It guarantees that all
480 memory operations before the RELEASE operation will appear to happen
481 before the RELEASE operation with respect to the other components of the
482 system. RELEASE operations include UNLOCK operations and
483 smp_store_release() operations.
485 Memory operations that occur after a RELEASE operation may appear to
486 happen before it completes.
488 The use of ACQUIRE and RELEASE operations generally precludes the need
489 for other sorts of memory barrier (but note the exceptions mentioned in
490 the subsection "MMIO write barrier"). In addition, a RELEASE+ACQUIRE
491 pair is -not- guaranteed to act as a full memory barrier. However, after
492 an ACQUIRE on a given variable, all memory accesses preceding any prior
493 RELEASE on that same variable are guaranteed to be visible. In other
494 words, within a given variable's critical section, all accesses of all
previous critical sections for that variable are guaranteed to have
completed.
498 This means that ACQUIRE acts as a minimal "acquire" operation and
499 RELEASE acts as a minimal "release" operation.
501 A subset of the atomic operations described in atomic_t.txt have ACQUIRE and
502 RELEASE variants in addition to fully-ordered and relaxed (no barrier
503 semantics) definitions. For compound atomics performing both a load and a
504 store, ACQUIRE semantics apply only to the load and RELEASE semantics apply
505 only to the store portion of the operation.
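As an illustration of such a pairing, consider this sketch in the style of the
examples below; the names are illustrative and both variables start at zero:

        void cpu0(void)         /* producer */
        {
                WRITE_ONCE(msg, 42);
                smp_store_release(&ready, 1);   /* RELEASE: publish msg */
        }

        void cpu1(void)         /* consumer */
        {
                if (smp_load_acquire(&ready))   /* ACQUIRE pairs with release */
                        BUG_ON(READ_ONCE(msg) != 42); /* must see the message */
        }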
507 Memory barriers are only required where there's a possibility of interaction
508 between two CPUs or between a CPU and a device. If it can be guaranteed that
509 there won't be any such interaction in any particular piece of code, then
510 memory barriers are unnecessary in that piece of code.
513 Note that these are the _minimum_ guarantees. Different architectures may give
more substantial guarantees, but they may _not_ be relied upon outside of arch
specific code.
518 WHAT MAY NOT BE ASSUMED ABOUT MEMORY BARRIERS?
519 ----------------------------------------------
521 There are certain things that the Linux kernel memory barriers do not guarantee:
523 (*) There is no guarantee that any of the memory accesses specified before a
524 memory barrier will be _complete_ by the completion of a memory barrier
525 instruction; the barrier can be considered to draw a line in that CPU's
526 access queue that accesses of the appropriate type may not cross.
528 (*) There is no guarantee that issuing a memory barrier on one CPU will have
529 any direct effect on another CPU or any other hardware in the system. The
530 indirect effect will be the order in which the second CPU sees the effects
531 of the first CPU's accesses occur, but see the next point:
533 (*) There is no guarantee that a CPU will see the correct order of effects
534 from a second CPU's accesses, even _if_ the second CPU uses a memory
535 barrier, unless the first CPU _also_ uses a matching memory barrier (see
536 the subsection on "SMP Barrier Pairing").
538 (*) There is no guarantee that some intervening piece of off-the-CPU
539 hardware[*] will not reorder the memory accesses. CPU cache coherency
540 mechanisms should propagate the indirect effects of a memory barrier
541 between CPUs, but might not do so in order.
543 [*] For information on bus mastering DMA and coherency please read:
545 Documentation/PCI/pci.txt
546 Documentation/DMA-API-HOWTO.txt
547 Documentation/DMA-API.txt
550 DATA DEPENDENCY BARRIERS
551 ------------------------
553 The usage requirements of data dependency barriers are a little subtle, and
554 it's not always obvious that they're needed. To illustrate, consider the
555 following sequence of events:
        CPU 1                 CPU 2
        ===============       ===============
        { A == 1, B == 2, C == 3, P == &A, Q == &C }
        B = 4;
        <write barrier>
        WRITE_ONCE(P, &B);
                              Q = READ_ONCE(P);
                              D = *Q;
566 There's a clear data dependency here, and it would seem that by the end of the
567 sequence, Q must be either &A or &B, and that:
569 (Q == &A) implies (D == 1)
570 (Q == &B) implies (D == 4)
572 But! CPU 2's perception of P may be updated _before_ its perception of B, thus
573 leading to the following situation:
575 (Q == &B) and (D == 2) ????
577 Whilst this may seem like a failure of coherency or causality maintenance, it
isn't, and this behaviour can be observed on certain real CPUs (such as the DEC
Alpha).
581 To deal with this, a data dependency barrier or better must be inserted
582 between the address load and the data load:
        CPU 1                 CPU 2
        ===============       ===============
        { A == 1, B == 2, C == 3, P == &A, Q == &C }
        B = 4;
        <write barrier>
        WRITE_ONCE(P, &B);
                              Q = READ_ONCE(P);
                              <data dependency barrier>
                              D = *Q;
594 This enforces the occurrence of one of the two implications, and prevents the
595 third possibility from arising.
598 [!] Note that this extremely counterintuitive situation arises most easily on
599 machines with split caches, so that, for example, one cache bank processes
600 even-numbered cache lines and the other bank processes odd-numbered cache
601 lines. The pointer P might be stored in an odd-numbered cache line, and the
602 variable B might be stored in an even-numbered cache line. Then, if the
603 even-numbered bank of the reading CPU's cache is extremely busy while the
604 odd-numbered bank is idle, one can see the new value of the pointer P (&B),
605 but the old value of the variable B (2).
608 A data-dependency barrier is not required to order dependent writes
609 because the CPUs that the Linux kernel supports don't do writes
610 until they are certain (1) that the write will actually happen, (2)
611 of the location of the write, and (3) of the value to be written.
612 But please carefully read the "CONTROL DEPENDENCIES" section and the
613 Documentation/RCU/rcu_dereference.txt file: The compiler can and does
614 break dependencies in a great many highly creative ways.
        CPU 1                 CPU 2
        ===============       ===============
        { A == 1, B == 2, C = 3, P == &A, Q == &C }
        B = 4;
        <write barrier>
        WRITE_ONCE(P, &B);
                              Q = READ_ONCE(P);
                              WRITE_ONCE(*Q, 5);
625 Therefore, no data-dependency barrier is required to order the read into
626 Q with the store into *Q. In other words, this outcome is prohibited,
627 even without a data-dependency barrier:
629 (Q == &B) && (B == 4)
631 Please note that this pattern should be rare. After all, the whole point
632 of dependency ordering is to -prevent- writes to the data structure, along
633 with the expensive cache misses associated with those writes. This pattern
634 can be used to record rare error conditions and the like, and the CPUs'
635 naturally occurring ordering prevents such records from being lost.
638 The data dependency barrier is very important to the RCU system,
639 for example. See rcu_assign_pointer() and rcu_dereference() in
640 include/linux/rcupdate.h. This permits the current target of an RCU'd
641 pointer to be replaced with a new modified target, without the replacement
642 target appearing to be incompletely initialised.
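A sketch of that usage, assuming a hypothetical RCU-protected pointer 'gp'
(the struct and the do_something_with() consumer are illustrative):

        struct foo { int a; int b; };
        struct foo __rcu *gp;

        void updater(void)
        {
                struct foo *p = kmalloc(sizeof(*p), GFP_KERNEL);

                if (!p)
                        return;
                p->a = 1;
                p->b = 2;
                rcu_assign_pointer(gp, p); /* publish only once initialised */
        }

        void reader(void)
        {
                struct foo *q;

                rcu_read_lock();
                q = rcu_dereference(gp);   /* dependency-ordered load */
                if (q)
                        do_something_with(q->a, q->b);
                rcu_read_unlock();
        }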
644 See also the subsection on "Cache Coherency" for a more thorough example.
CONTROL DEPENDENCIES
--------------------
650 Control dependencies can be a bit tricky because current compilers do
651 not understand them. The purpose of this section is to help you prevent
652 the compiler's ignorance from breaking your code.
654 A load-load control dependency requires a full read memory barrier, not
655 simply a data dependency barrier to make it work correctly. Consider the
656 following bit of code:
        q = READ_ONCE(a);
        if (q) {
                <data dependency barrier>  /* BUG: No data dependency!!! */
                p = READ_ONCE(b);
        }
664 This will not have the desired effect because there is no actual data
665 dependency, but rather a control dependency that the CPU may short-circuit
666 by attempting to predict the outcome in advance, so that other CPUs see
667 the load from b as having happened before the load from a. In such a
668 case what's actually required is:
        q = READ_ONCE(a);
        if (q) {
                <read barrier>
                p = READ_ONCE(b);
        }
676 However, stores are not speculated. This means that ordering -is- provided
677 for load-store control dependencies, as in the following example:
        q = READ_ONCE(a);
        if (q)
                WRITE_ONCE(b, 1);
684 Control dependencies pair normally with other types of barriers.
685 That said, please note that neither READ_ONCE() nor WRITE_ONCE()
686 are optional! Without the READ_ONCE(), the compiler might combine the
687 load from 'a' with other loads from 'a'. Without the WRITE_ONCE(),
688 the compiler might combine the store to 'b' with other stores to 'b'.
689 Either can result in highly counterintuitive effects on ordering.
691 Worse yet, if the compiler is able to prove (say) that the value of
692 variable 'a' is always non-zero, it would be well within its rights
to optimize the original example by eliminating the "if" statement
as follows:

        q = READ_ONCE(a);
        b = 1;  /* BUG: Compiler and CPU can both reorder!!! */
699 So don't leave out the READ_ONCE().
701 It is tempting to try to enforce ordering on identical stores on both
702 branches of the "if" statement as follows:
        q = READ_ONCE(a);
        if (q) {
                barrier();
                WRITE_ONCE(b, 1);
                do_something();
        } else {
                barrier();
                WRITE_ONCE(b, 1);
                do_something_else();
        }
Unfortunately, current compilers will transform this as follows at high
optimization levels:

        q = READ_ONCE(a);
        barrier();
        WRITE_ONCE(b, 1);  /* BUG: No ordering vs. load from a!!! */
        if (q) {
                /* WRITE_ONCE(b, 1); -- moved up, BUG!!! */
                do_something();
        } else {
                /* WRITE_ONCE(b, 1); -- moved up, BUG!!! */
                do_something_else();
        }
729 Now there is no conditional between the load from 'a' and the store to
730 'b', which means that the CPU is within its rights to reorder them:
731 The conditional is absolutely required, and must be present in the
732 assembly code even after all compiler optimizations have been applied.
733 Therefore, if you need ordering in this example, you need explicit
734 memory barriers, for example, smp_store_release():
        q = READ_ONCE(a);
        if (q) {
                smp_store_release(&b, 1);
                do_something();
        } else {
                smp_store_release(&b, 1);
                do_something_else();
        }
745 In contrast, without explicit memory barriers, two-legged-if control
746 ordering is guaranteed only when the stores differ, for example:
        q = READ_ONCE(a);
        if (q) {
                WRITE_ONCE(b, 1);
                do_something();
        } else {
                WRITE_ONCE(b, 2);
                do_something_else();
        }
757 The initial READ_ONCE() is still required to prevent the compiler from
758 proving the value of 'a'.
760 In addition, you need to be careful what you do with the local variable 'q',
761 otherwise the compiler might be able to guess the value and again remove
762 the needed conditional. For example:
        q = READ_ONCE(a);
        if (q % MAX) {
                WRITE_ONCE(b, 1);
                do_something();
        } else {
                WRITE_ONCE(b, 2);
                do_something_else();
        }
773 If MAX is defined to be 1, then the compiler knows that (q % MAX) is
774 equal to zero, in which case the compiler is within its rights to
775 transform the above code into the following:
        q = READ_ONCE(a);
        WRITE_ONCE(b, 2);
        do_something_else();
781 Given this transformation, the CPU is not required to respect the ordering
782 between the load from variable 'a' and the store to variable 'b'. It is
783 tempting to add a barrier(), but this does not help. The conditional
784 is gone, and the barrier won't bring it back. Therefore, if you are
785 relying on this ordering, you should make sure that MAX is greater than
786 one, perhaps as follows:
        q = READ_ONCE(a);
        BUILD_BUG_ON(MAX <= 1); /* Order load from a with store to b. */
        if (q % MAX) {
                WRITE_ONCE(b, 1);
                do_something();
        } else {
                WRITE_ONCE(b, 2);
                do_something_else();
        }
798 Please note once again that the stores to 'b' differ. If they were
799 identical, as noted earlier, the compiler could pull this store outside
800 of the 'if' statement.
802 You must also be careful not to rely too much on boolean short-circuit
803 evaluation. Consider this example:
        q = READ_ONCE(a);
        if (q || 1 > 0)
                WRITE_ONCE(b, 1);
809 Because the first condition cannot fault and the second condition is
always true, the compiler can transform this example as follows, defeating
the control dependency:

        q = READ_ONCE(a);
        WRITE_ONCE(b, 1);
816 This example underscores the need to ensure that the compiler cannot
817 out-guess your code. More generally, although READ_ONCE() does force
818 the compiler to actually emit code for a given load, it does not force
819 the compiler to use the results.
821 In addition, control dependencies apply only to the then-clause and
else-clause of the if-statement in question. In particular, they do not
necessarily apply to code following the if-statement:
        q = READ_ONCE(a);
        if (q) {
                WRITE_ONCE(b, 1);
        } else {
                WRITE_ONCE(b, 2);
        }
        WRITE_ONCE(c, 1);  /* BUG: No ordering against the read from 'a'. */
833 It is tempting to argue that there in fact is ordering because the
834 compiler cannot reorder volatile accesses and also cannot reorder
835 the writes to 'b' with the condition. Unfortunately for this line
836 of reasoning, the compiler might compile the two writes to 'b' as
conditional-move instructions, as in this fanciful pseudo-assembly
language:

        ld r1,a
        cmp r1,$0
        cmov,ne r4,$1
        cmov,eq r4,$2
        st r4,b
        st $1,c
847 A weakly ordered CPU would have no dependency of any sort between the load
848 from 'a' and the store to 'c'. The control dependencies would extend
849 only to the pair of cmov instructions and the store depending on them.
850 In short, control dependencies apply only to the stores in the then-clause
851 and else-clause of the if-statement in question (including functions
852 invoked by those two clauses), not to code following that if-statement.
854 Finally, control dependencies do -not- provide transitivity. This is
855 demonstrated by two related examples, with the initial values of
856 'x' and 'y' both being zero:
        CPU 0                   CPU 1
        ======================= =======================
860 r1 = READ_ONCE(x); r2 = READ_ONCE(y);
861 if (r1 > 0) if (r2 > 0)
862 WRITE_ONCE(y, 1); WRITE_ONCE(x, 1);
864 assert(!(r1 == 1 && r2 == 1));
866 The above two-CPU example will never trigger the assert(). However,
867 if control dependencies guaranteed transitivity (which they do not),
868 then adding the following CPU would guarantee a related assertion:
        CPU 2
        =====================
        WRITE_ONCE(x, 2);
874 assert(!(r1 == 2 && r2 == 1 && x == 2)); /* FAILS!!! */
876 But because control dependencies do -not- provide transitivity, the above
877 assertion can fail after the combined three-CPU example completes. If you
878 need the three-CPU example to provide ordering, you will need smp_mb()
879 between the loads and stores in the CPU 0 and CPU 1 code fragments,
880 that is, just before or just after the "if" statements. Furthermore,
881 the original two-CPU example is very fragile and should be avoided.
883 These two examples are the LB and WWC litmus tests from this paper:
884 http://www.cl.cam.ac.uk/users/pes20/ppc-supplemental/test6.pdf and this
885 site: https://www.cl.cam.ac.uk/~pes20/ppcmem/index.html.
In summary:
889 (*) Control dependencies can order prior loads against later stores.
890 However, they do -not- guarantee any other sort of ordering:
891 Not prior loads against later loads, nor prior stores against
892 later anything. If you need these other forms of ordering,
893 use smp_rmb(), smp_wmb(), or, in the case of prior stores and
894 later loads, smp_mb().
896 (*) If both legs of the "if" statement begin with identical stores to
897 the same variable, then those stores must be ordered, either by
898 preceding both of them with smp_mb() or by using smp_store_release()
899 to carry out the stores. Please note that it is -not- sufficient
900 to use barrier() at beginning of each leg of the "if" statement
901 because, as shown by the example above, optimizing compilers can
destroy the control dependency while respecting the letter of the
barrier() law.
905 (*) Control dependencies require at least one run-time conditional
906 between the prior load and the subsequent store, and this
907 conditional must involve the prior load. If the compiler is able
908 to optimize the conditional away, it will have also optimized
909 away the ordering. Careful use of READ_ONCE() and WRITE_ONCE()
910 can help to preserve the needed conditional.
912 (*) Control dependencies require that the compiler avoid reordering the
913 dependency into nonexistence. Careful use of READ_ONCE() or
914 atomic{,64}_read() can help to preserve your control dependency.
915 Please see the COMPILER BARRIER section for more information.
917 (*) Control dependencies apply only to the then-clause and else-clause
918 of the if-statement containing the control dependency, including
919 any functions that these two clauses call. Control dependencies
do -not- apply to code following the if-statement containing the
control dependency.
923 (*) Control dependencies pair normally with other types of barriers.
925 (*) Control dependencies do -not- provide transitivity. If you
926 need transitivity, use smp_mb().
928 (*) Compilers do not understand control dependencies. It is therefore
929 your job to ensure that they do not break your code.
SMP BARRIER PAIRING
-------------------
935 When dealing with CPU-CPU interactions, certain types of memory barrier should
936 always be paired. A lack of appropriate pairing is almost certainly an error.
938 General barriers pair with each other, though they also pair with most
939 other types of barriers, albeit without transitivity. An acquire barrier
940 pairs with a release barrier, but both may also pair with other barriers,
941 including of course general barriers. A write barrier pairs with a data
942 dependency barrier, a control dependency, an acquire barrier, a release
943 barrier, a read barrier, or a general barrier. Similarly a read barrier,
944 control dependency, or a data dependency barrier pairs with a write
945 barrier, an acquire barrier, a release barrier, or a general barrier:
        CPU 1                 CPU 2
        ===============       ===============
        WRITE_ONCE(a, 1);
        <write barrier>
        WRITE_ONCE(b, 2);     x = READ_ONCE(b);
                              <read barrier>
                              y = READ_ONCE(a);
Or:

        CPU 1                 CPU 2
        ===============       ===============================
        a = 1;
        <write barrier>
        WRITE_ONCE(b, &a);    x = READ_ONCE(b);
                              <data dependency barrier>
                              y = *x;
Or even:

        CPU 1                 CPU 2
        ===============       ===============================
        r1 = READ_ONCE(y);
        <general barrier>
        WRITE_ONCE(x, 1);     if (r2 = READ_ONCE(x)) {
                                 <implicit control dependency>
                                 WRITE_ONCE(y, 1);
                              }

        assert(r1 == 0 || r2 == 0);
Basically, the read barrier always has to be there, even though it can be of
the "weaker" type.
981 [!] Note that the stores before the write barrier would normally be expected to
match the loads after the read barrier or the data dependency barrier, and vice
versa:
        CPU 1                               CPU 2
        ===================                 ===================
        WRITE_ONCE(a, 1);    }----   --->{  v = READ_ONCE(c);
        WRITE_ONCE(b, 2);    }    \ /    {  w = READ_ONCE(d);
        <write barrier>            \        <read barrier>
        WRITE_ONCE(c, 3);    }    / \    {  x = READ_ONCE(a);
        WRITE_ONCE(d, 4);    }----   --->{  y = READ_ONCE(b);
994 EXAMPLES OF MEMORY BARRIER SEQUENCES
995 ------------------------------------
997 Firstly, write barriers act as partial orderings on store operations.
998 Consider the following sequence of events:
        CPU 1
        =======================
        STORE A = 1
        STORE B = 2
        STORE C = 3
        <write barrier>
        STORE D = 4
        STORE E = 5
1009 This sequence of events is committed to the memory coherence system in an order
1010 that the rest of the system might perceive as the unordered set of { STORE A,
STORE B, STORE C } all occurring before the unordered set of { STORE D, STORE E
}:
        +-------+       :      :
        |       |       +------+
        |       |------>| C=3  |     }     /\
        |       |  :    +------+     }-----  \  -----> Events perceptible to
        |       |  :    | A=1  |     }        \/       the rest of the system
        |       |  :    +------+     }
        | CPU 1 |  :    | B=2  |     }
        |       |       +------+     }
        |       |   wwwwwwwwwwwwwwww }   <--- At this point the write barrier
        |       |       +------+     }        requires all stores prior to the
        |       |  :    | E=5  |     }        barrier to be committed before
        |       |  :    +------+     }        further stores may take place
        |       |------>| D=4  |     }
        |       |       +------+
        +-------+       :      :
                   |
                   | Sequence in which stores are committed to the
                   | memory system by CPU 1
                   V
1035 Secondly, data dependency barriers act as partial orderings on data-dependent
1036 loads. Consider the following sequence of events:
        CPU 1                   CPU 2
        ======================= =======================
        { B = 7; X = 9; Y = 8; C = &Y }
        STORE A = 1
        STORE B = 2
        <write barrier>
        STORE C = &B            LOAD X
        STORE D = 4             LOAD C (gets &B)
                                LOAD *C (reads B)
1048 Without intervention, CPU 2 may perceive the events on CPU 1 in some
1049 effectively random order, despite the write barrier issued by CPU 1:
        +-------+       :      :                :       :
        |       |       +------+                +-------+  | Sequence of update
        |       |------>| B=2  |-----       --->| Y->8  |  | of perception on
        |       |  :    +------+     \          +-------+  | CPU 2
        | CPU 1 |  :    | A=1  |      \     --->| C->&Y |  V
        |       |       +------+       |        +-------+
        |       |   wwwwwwwwwwwwwwww   |        :       :
        |       |       +------+       |        :       :
        |       |  :    | C=&B |---    |        :       :       +-------+
        |       |  :    +------+   \   |        +-------+       |       |
        |       |------>| D=4  |    ----------->| C->&B |------>|       |
        |       |       +------+       |        +-------+       |       |
        +-------+       :      :       |        :       :       |       |
                                       |        :       :       |       |
                                       |        :       :       | CPU 2 |
                                       |        +-------+       |       |
            Apparently incorrect --->  |        | B->7  |------>|       |
            perception of B (!)        |        +-------+       |       |
                                       |        :       :       |       |
            The load of X holds --->    \       | X->9  |------>|       |
            up the maintenance           \      +-------+       |       |
            of coherence of B             ----->| B->2  |       +-------+
                                                +-------+
                                                :       :
1078 In the above example, CPU 2 perceives that B is 7, despite the load of *C
1079 (which would be B) coming after the LOAD of C.
1081 If, however, a data dependency barrier were to be placed between the load of C
1082 and the load of *C (ie: B) on CPU 2:
        CPU 1                   CPU 2
        ======================= =======================
        { B = 7; X = 9; Y = 8; C = &Y }
        STORE A = 1
        STORE B = 2
        <write barrier>
        STORE C = &B            LOAD X
        STORE D = 4             LOAD C (gets &B)
                                <data dependency barrier>
                                LOAD *C (reads B)
1095 then the following will occur:
        +-------+       :      :                :       :
        |       |       +------+                +-------+
        |       |------>| B=2  |-----       --->| Y->8  |
        |       |  :    +------+     \          +-------+
        | CPU 1 |  :    | A=1  |      \     --->| C->&Y |
        |       |       +------+       |        +-------+
        |       |   wwwwwwwwwwwwwwww   |        :       :
        |       |       +------+       |        :       :
        |       |  :    | C=&B |---    |        :       :       +-------+
        |       |  :    +------+   \   |        +-------+       |       |
        |       |------>| D=4  |    ----------->| C->&B |------>|       |
        |       |       +------+       |        +-------+       |       |
        +-------+       :      :       |        :       :       |       |
                                       |        :       :       |       |
                                       |        :       :       | CPU 2 |
                                       |        +-------+       |       |
                                       |        | X->9  |------>|       |
                                       |        +-------+       |       |
          Makes sure all effects --->   \   ddddddddddddddddd   |       |
          prior to the store of C        \      +-------+       |       |
          are perceptible to              ----->| B->2  |------>|       |
          subsequent loads                      +-------+       |       |
                                                :       :       +-------+
1122 And thirdly, a read barrier acts as a partial order on loads. Consider the
1123 following sequence of events:
        CPU 1                   CPU 2
        ======================= =======================
        { A = 0, B = 9 }
        STORE A=1
        <write barrier>
        STORE B=2
                                LOAD B
                                LOAD A
1134 Without intervention, CPU 2 may then choose to perceive the events on CPU 1 in
1135 some effectively random order, despite the write barrier issued by CPU 1:
        +-------+       :      :                :       :
        |       |       +------+                +-------+
        |       |------>| A=1  |------      --->| A->0  |
        |       |       +------+      \         +-------+
        | CPU 1 |   wwwwwwwwwwwwwwww   \    --->| B->9  |
        |       |       +------+        |       +-------+
        |       |------>| B=2  |---     |       :       :
        |       |       +------+   \    |       :       :       +-------+
        +-------+       :      :    \   |       +-------+       |       |
                                     ---------->| B->2  |------>|       |
                                        |       +-------+       | CPU 2 |
                                        |       | A->0  |------>|       |
                                        |       +-------+       |       |
                                        |       :       :       +-------+
                                         \      :       :
                                          \     +-------+
                                           ---->| A->1  |
                                                +-------+
                                                :       :
If, however, a read barrier were to be placed between the load of B and the
load of A on CPU 2:

        CPU 1                   CPU 2
        ======================= =======================
        { A = 0, B = 9 }
        STORE A=1
        <write barrier>
        STORE B=2
                                LOAD B
                                <read barrier>
                                LOAD A
then the partial ordering imposed by CPU 1 will be perceived correctly by CPU
2:
        +-------+       :      :                :       :
        |       |       +------+                +-------+
        |       |------>| A=1  |------      --->| A->0  |
        |       |       +------+      \         +-------+
        | CPU 1 |   wwwwwwwwwwwwwwww   \    --->| B->9  |
        |       |       +------+        |       +-------+
        |       |------>| B=2  |---     |       :       :
        |       |       +------+   \    |       :       :       +-------+
        +-------+       :      :    \   |       +-------+       |       |
                                     ---------->| B->2  |------>|       |
                                        |       +-------+       | CPU 2 |
                                        |       :       :       |       |
                                        |       :       :       |       |
          At this point the read ---->   \      rrrrrrrrrrrrrrrrr       |
          barrier causes all effects      \     +-------+       |       |
          prior to the storage of B        ---->| A->1  |------>|       |
          to be perceptible to CPU 2            +-------+       |       |
                                                :       :       +-------+
1194 To illustrate this more completely, consider what could happen if the code
1195 contained a load of A either side of the read barrier:
        CPU 1                   CPU 2
        ======================= =======================
        { A = 0, B = 9 }
        STORE A=1
        <write barrier>
        STORE B=2
                                LOAD B
                                LOAD A [first load of A]
                                <read barrier>
                                LOAD A [second load of A]
1208 Even though the two loads of A both occur after the load of B, they may both
1209 come up with different values:
        +-------+       :      :                :       :
        |       |       +------+                +-------+
        |       |------>| A=1  |------      --->| A->0  |
        |       |       +------+      \         +-------+
        | CPU 1 |   wwwwwwwwwwwwwwww   \    --->| B->9  |
        |       |       +------+        |       +-------+
        |       |------>| B=2  |---     |       :       :
        |       |       +------+   \    |       :       :       +-------+
        +-------+       :      :    \   |       +-------+       |       |
                                     ---------->| B->2  |------>|       |
                                        |       +-------+       | CPU 2 |
                                        |       :       :       |       |
                                        |       :       :       |       |
                                        |       +-------+       |       |
                                        |       | A->0  |------>| 1st   |
                                        |       +-------+       |       |
          At this point the read ---->   \      rrrrrrrrrrrrrrrrr       |
          barrier causes all effects      \     +-------+       |       |
          prior to the storage of B        ---->| A->1  |------>| 2nd   |
          to be perceptible to CPU 2            +-------+       |       |
                                                :       :       +-------+
1234 But it may be that the update to A from CPU 1 becomes perceptible to CPU 2
1235 before the read barrier completes anyway:
        +-------+       :      :                :       :
        |       |       +------+                +-------+
        |       |------>| A=1  |------      --->| A->0  |
        |       |       +------+      \         +-------+
        | CPU 1 |   wwwwwwwwwwwwwwww   \    --->| B->9  |
        |       |       +------+        |       +-------+
        |       |------>| B=2  |---     |       :       :
        |       |       +------+   \    |       :       :       +-------+
        +-------+       :      :    \   |       +-------+       |       |
                                     ---------->| B->2  |------>|       |
                                        |       +-------+       | CPU 2 |
                                        |       :       :       |       |
                                         \      :       :       |       |
                                          \     +-------+       |       |
                                           ---->| A->1  |------>| 1st   |
                                                +-------+       |       |
                                                rrrrrrrrrrrrrrrrr       |
                                                +-------+       |       |
                                                | A->1  |------>| 2nd   |
                                                +-------+       |       |
                                                :       :       +-------+
1260 The guarantee is that the second load will always come up with A == 1 if the
1261 load of B came up with B == 2. No such guarantee exists for the first load of
1262 A; that may come up with either A == 0 or A == 1.
1265 READ MEMORY BARRIERS VS LOAD SPECULATION
1266 ----------------------------------------
1268 Many CPUs speculate with loads: that is they see that they will need to load an
1269 item from memory, and they find a time where they're not using the bus for any
1270 other loads, and so do the load in advance - even though they haven't actually
1271 got to that point in the instruction execution flow yet. This permits the
1272 actual load instruction to potentially complete immediately because the CPU
1273 already has the value to hand.
1275 It may turn out that the CPU didn't actually need the value - perhaps because a
1276 branch circumvented the load - in which case it can discard the value or just
1277 cache it for later use.
Consider:

        CPU 1                   CPU 2
        ======================= =======================
                                LOAD B
                                DIVIDE          } Divide instructions generally
                                DIVIDE          } take a long time to perform
                                LOAD A
1288 Which might appear as this:
                                                :       :       +-------+
                                                +-------+       |       |
                                            --->| B->2  |------>|       |
                                                +-------+       | CPU 2 |
                                                :       :DIVIDE |       |
                                                +-------+       |       |
        The CPU being busy doing a --->     --->| A->0  |~~~~   |       |
        division speculates on the              +-------+   ~   |       |
        LOAD of A                               :       :   ~   |       |
                                                :       :DIVIDE |       |
                                                :       :   ~   |       |
        Once the divisions are complete -->     :       :   ~-->|       |
        the CPU can then perform the            :       :       |       |
        LOAD with immediate effect              :       :       +-------+
Placing a read barrier or a data dependency barrier just before the second
load:

        CPU 1                   CPU 2
        ======================= =======================
                                LOAD B
                                DIVIDE
                                DIVIDE
                                <read barrier>
                                LOAD A
1317 will force any value speculatively obtained to be reconsidered to an extent
1318 dependent on the type of barrier used. If there was no change made to the
1319 speculated memory location, then the speculated value will just be used:
                                                :       :       +-------+
                                                +-------+       |       |
                                            --->| B->2  |------>|       |
                                                +-------+       | CPU 2 |
                                                :       :DIVIDE |       |
                                                +-------+       |       |
        The CPU being busy doing a --->     --->| A->0  |~~~~   |       |
        division speculates on the              +-------+   ~   |       |
        LOAD of A                               :       :   ~   |       |
                                                :       :DIVIDE |       |
                                                :       :   ~   |       |
                                                :       :   ~   |       |
                                            rrrrrrrrrrrrrrrr~   |       |
                                                :       :   ~   |       |
                                                :       :   ~-->|       |
                                                :       :       |       |
                                                :       :       +-------+
1340 but if there was an update or an invalidation from another CPU pending, then
1341 the speculation will be cancelled and the value reloaded:
                                                :       :       +-------+
                                                +-------+       |       |
                                            --->| B->2  |------>|       |
                                                +-------+       | CPU 2 |
                                                :       :DIVIDE |       |
                                                +-------+       |       |
        The CPU being busy doing a --->     --->| A->0  |~~~~   |       |
        division speculates on the              +-------+   ~   |       |
        LOAD of A                               :       :   ~   |       |
                                                :       :DIVIDE |       |
                                                :       :   ~   |       |
                                                :       :   ~   |       |
                                            rrrrrrrrrrrrrrrrr   |       |
                                                +-------+       |       |
        The speculation is discarded --->  --->| A->1  |------>|       |
        and an updated value is                 +-------+       |       |
        retrieved                               :       :       +-------+
TRANSITIVITY
------------
1365 Transitivity is a deeply intuitive notion about ordering that is not
1366 always provided by real computer systems. The following example
1367 demonstrates transitivity:
        CPU 1                   CPU 2                   CPU 3
        ======================= ======================= =======================
                { X = 0, Y = 0 }
        STORE X=1               LOAD X                  STORE Y=1
                                <general barrier>       <general barrier>
                                LOAD Y                  LOAD X
1376 Suppose that CPU 2's load from X returns 1 and its load from Y returns 0.
1377 This indicates that CPU 2's load from X in some sense follows CPU 1's
1378 store to X and that CPU 2's load from Y in some sense preceded CPU 3's
1379 store to Y. The question is then "Can CPU 3's load from X return 0?"
1381 Because CPU 2's load from X in some sense came after CPU 1's store, it
1382 is natural to expect that CPU 3's load from X must therefore return 1.
1383 This expectation is an example of transitivity: if a load executing on
1384 CPU A follows a load from the same variable executing on CPU B, then
1385 CPU A's load must either return the same value that CPU B's load did,
1386 or must return some later value.
1388 In the Linux kernel, use of general memory barriers guarantees
1389 transitivity. Therefore, in the above example, if CPU 2's load from X
returns 1 and its load from Y returns 0, then CPU 3's load from X must
also return 1.
1393 However, transitivity is -not- guaranteed for read or write barriers.
1394 For example, suppose that CPU 2's general barrier in the above example
1395 is changed to a read barrier as shown below:
        CPU 1                   CPU 2                   CPU 3
        ======================= ======================= =======================
                { X = 0, Y = 0 }
        STORE X=1               LOAD X                  STORE Y=1
                                <read barrier>          <general barrier>
                                LOAD Y                  LOAD X
1404 This substitution destroys transitivity: in this example, it is perfectly
1405 legal for CPU 2's load from X to return 1, its load from Y to return 0,
1406 and CPU 3's load from X to return 0.
1408 The key point is that although CPU 2's read barrier orders its pair
1409 of loads, it does not guarantee to order CPU 1's store. Therefore, if
1410 this example runs on a system where CPUs 1 and 2 share a store buffer
1411 or a level of cache, CPU 2 might have early access to CPU 1's writes.
1412 General barriers are therefore required to ensure that all CPUs agree
1413 on the combined order of CPU 1's and CPU 2's accesses.
1415 General barriers provide "global transitivity", so that all CPUs will
1416 agree on the order of operations. In contrast, a chain of release-acquire
1417 pairs provides only "local transitivity", so that only those CPUs on
1418 the chain are guaranteed to agree on the combined order of the accesses.
1419 For example, switching to C code in deference to Herman Hollerith:
        int u, v, x, y, z;

        void cpu0(void)
        {
                r0 = smp_load_acquire(&x);
                WRITE_ONCE(u, 1);
                smp_store_release(&y, 1);
        }

        void cpu1(void)
        {
                r1 = smp_load_acquire(&y);
                r4 = READ_ONCE(v);
                r5 = READ_ONCE(u);
                smp_store_release(&z, 1);
        }

        void cpu2(void)
        {
                r2 = smp_load_acquire(&z);
                smp_store_release(&x, 1);
        }

        void cpu3(void)
        {
                WRITE_ONCE(v, 1);
                smp_mb();
                r3 = READ_ONCE(u);
        }
1451 Because cpu0(), cpu1(), and cpu2() participate in a local transitive
1452 chain of smp_store_release()/smp_load_acquire() pairs, the following
1453 outcome is prohibited:
1455 r0 == 1 && r1 == 1 && r2 == 1
1457 Furthermore, because of the release-acquire relationship between cpu0()
1458 and cpu1(), cpu1() must see cpu0()'s writes, so that the following
1459 outcome is prohibited:
        r1 == 1 && r5 == 0
1463 However, the transitivity of release-acquire is local to the participating
CPUs and does not apply to cpu3(). Therefore, the following outcome
is possible:
1467 r0 == 0 && r1 == 1 && r2 == 1 && r3 == 0 && r4 == 0
1469 As an aside, the following outcome is also possible:
1471 r0 == 0 && r1 == 1 && r2 == 1 && r3 == 0 && r4 == 0 && r5 == 1
1473 Although cpu0(), cpu1(), and cpu2() will see their respective reads and
1474 writes in order, CPUs not involved in the release-acquire chain might
1475 well disagree on the order. This disagreement stems from the fact that
1476 the weak memory-barrier instructions used to implement smp_load_acquire()
1477 and smp_store_release() are not required to order prior stores against
1478 subsequent loads in all cases. This means that cpu3() can see cpu0()'s
1479 store to u as happening -after- cpu1()'s load from v, even though
both cpu0() and cpu1() agree that these two operations occurred in the
intended order.
1483 However, please keep in mind that smp_load_acquire() is not magic.
1484 In particular, it simply reads from its argument with ordering. It does
1485 -not- ensure that any particular value will be read. Therefore, the
1486 following outcome is possible:
1488 r0 == 0 && r1 == 0 && r2 == 0 && r5 == 0
1490 Note that this outcome can happen even on a mythical sequentially
1491 consistent system where nothing is ever reordered.
1493 To reiterate, if your code requires global transitivity, use general
1494 barriers throughout.
1497 ========================
1498 EXPLICIT KERNEL BARRIERS
1499 ========================
The Linux kernel has a variety of different barriers that act at different
levels:
1504 (*) Compiler barrier.
1506 (*) CPU memory barriers.
1508 (*) MMIO write barrier.
COMPILER BARRIER
----------------
1514 The Linux kernel has an explicit compiler barrier function that prevents the
1515 compiler from moving the memory accesses either side of it to the other side:
        barrier();
1519 This is a general barrier -- there are no read-read or write-write
1520 variants of barrier(). However, READ_ONCE() and WRITE_ONCE() can be
1521 thought of as weak forms of barrier() that affect only the specific
1522 accesses flagged by the READ_ONCE() or WRITE_ONCE().
1524 The barrier() function has the following effects:
1526 (*) Prevents the compiler from reordering accesses following the
1527 barrier() to precede any accesses preceding the barrier().
1528 One example use for this property is to ease communication between
1529 interrupt-handler code and the code that was interrupted.
1531 (*) Within a loop, forces the compiler to load the variables used
1532 in that loop's conditional on each pass through that loop.
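For instance, the second property is what makes a busy-wait loop like the
following sketch work; the 'flag' variable is illustrative, set elsewhere
(say, from interrupt context), and without the barrier() the compiler could
hoist the load of 'flag' out of the loop:

        while (!flag)
                barrier();      /* forces 'flag' to be re-read on each pass */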
1534 The READ_ONCE() and WRITE_ONCE() functions can prevent any number of
1535 optimizations that, while perfectly safe in single-threaded code, can
be fatal in concurrent code. Here are some examples of these sorts
of optimizations:
1539 (*) The compiler is within its rights to reorder loads and stores
1540 to the same variable, and in some cases, the CPU is within its
rights to reorder loads to the same variable. This means that the
following code:

        a[0] = x;
        a[1] = x;

Might result in an older value of x stored in a[1] than in a[0].
1548 Prevent both the compiler and the CPU from doing this as follows:
1550 a[0] = READ_ONCE(x);
1551 a[1] = READ_ONCE(x);
1553 In short, READ_ONCE() and WRITE_ONCE() provide cache coherence for
1554 accesses from multiple CPUs to a single variable.
1556 (*) The compiler is within its rights to merge successive loads from
the same variable. Such merging can cause the compiler to "optimize"
the following code:

        while (tmp = a)
                do_something_with(tmp);

into the following code, which, although in some sense legitimate
for single-threaded code, is almost certainly not what the developer
intended:

        if (tmp = a)
                for (;;)
                        do_something_with(tmp);
1571 Use READ_ONCE() to prevent the compiler from doing this to you:
1573 while (tmp = READ_ONCE(a))
1574 do_something_with(tmp);
1576 (*) The compiler is within its rights to reload a variable, for example,
1577 in cases where high register pressure prevents the compiler from
1578 keeping all data of interest in registers. The compiler might
1579 therefore optimize the variable 'tmp' out of our previous example:
        while (tmp = a)
                do_something_with(tmp);
1584 This could result in the following code, which is perfectly safe in
1585 single-threaded code, but can be fatal in concurrent code:
        while (a)
                do_something_with(a);
1590 For example, the optimized version of this code could result in
1591 passing a zero to do_something_with() in the case where the variable
1592 a was modified by some other CPU between the "while" statement and
1593 the call to do_something_with().
1595 Again, use READ_ONCE() to prevent the compiler from doing this:
1597 while (tmp = READ_ONCE(a))
1598 do_something_with(tmp);
1600 Note that if the compiler runs short of registers, it might save
1601 tmp onto the stack. The overhead of this saving and later restoring
1602 is why compilers reload variables. Doing so is perfectly safe for
1603 single-threaded code, so you need to tell the compiler about cases
1604 where it is not safe.
1606 (*) The compiler is within its rights to omit a load entirely if it knows
1607 what the value will be. For example, if the compiler can prove that
1608 the value of variable 'a' is always zero, it can optimize this code:
        while (tmp = a)
                do_something_with(tmp);

Into this:

        do { } while (0);
1617 This transformation is a win for single-threaded code because it
1618 gets rid of a load and a branch. The problem is that the compiler
1619 will carry out its proof assuming that the current CPU is the only
1620 one updating variable 'a'. If variable 'a' is shared, then the
1621 compiler's proof will be erroneous. Use READ_ONCE() to tell the
1622 compiler that it doesn't know as much as it thinks it does:
1624 while (tmp = READ_ONCE(a))
1625 do_something_with(tmp);
1627 But please note that the compiler is also closely watching what you
1628 do with the value after the READ_ONCE(). For example, suppose you
1629 do the following and MAX is a preprocessor macro with the value 1:
1631 while ((tmp = READ_ONCE(a)) % MAX)
1632 do_something_with(tmp);
1634 Then the compiler knows that the result of the "%" operator applied
1635 to MAX will always be zero, again allowing the compiler to optimize
the code into near-nonexistence. (It will still load from the
variable 'a'.)
1639 (*) Similarly, the compiler is within its rights to omit a store entirely
1640 if it knows that the variable already has the value being stored.
1641 Again, the compiler assumes that the current CPU is the only one
1642 storing into the variable, which can cause the compiler to do the
wrong thing for shared variables. For example, suppose you have
the following:

        a = 0;
        ... Code that does not store to variable a ...
        a = 0;
1650 The compiler sees that the value of variable 'a' is already zero, so
1651 it might well omit the second store. This would come as a fatal
surprise if some other CPU might have stored to variable 'a' in the
meantime.
Use WRITE_ONCE() to prevent the compiler from making this sort of
omission:

        WRITE_ONCE(a, 0);
        ... Code that does not store to variable a ...
        WRITE_ONCE(a, 0);
1662 (*) The compiler is within its rights to reorder memory accesses unless
1663 you tell it not to. For example, consider the following interaction
1664 between process-level code and an interrupt handler:
        void process_level(void)
        {
                msg = get_message();
                flag = true;
        }

        void interrupt_handler(void)
        {
                if (flag)
                        process_message(msg);
        }
1678 There is nothing to prevent the compiler from transforming
1679 process_level() to the following, in fact, this might well be a
1680 win for single-threaded code:
        void process_level(void)
        {
                flag = true;
                msg = get_message();
        }
If the interrupt occurs between these two statements, then
1689 interrupt_handler() might be passed a garbled msg. Use WRITE_ONCE()
1690 to prevent this as follows:
        void process_level(void)
        {
                WRITE_ONCE(msg, get_message());
                WRITE_ONCE(flag, true);
        }

        void interrupt_handler(void)
        {
                if (READ_ONCE(flag))
                        process_message(READ_ONCE(msg));
        }
1704 Note that the READ_ONCE() and WRITE_ONCE() wrappers in
1705 interrupt_handler() are needed if this interrupt handler can itself
1706 be interrupted by something that also accesses 'flag' and 'msg',
1707 for example, a nested interrupt or an NMI. Otherwise, READ_ONCE()
1708 and WRITE_ONCE() are not needed in interrupt_handler() other than
1709 for documentation purposes. (Note also that nested interrupts
1710 do not typically occur in modern Linux kernels, in fact, if an
interrupt handler returns with interrupts enabled, you will get a
WARN_ONCE() splat.)
1714 You should assume that the compiler can move READ_ONCE() and
1715 WRITE_ONCE() past code not containing READ_ONCE(), WRITE_ONCE(),
1716 barrier(), or similar primitives.
1718 This effect could also be achieved using barrier(), but READ_ONCE()
1719 and WRITE_ONCE() are more selective: With READ_ONCE() and
1720 WRITE_ONCE(), the compiler need only forget the contents of the
1721 indicated memory locations, while with barrier() the compiler must
discard the value of all memory locations that it has currently
1723 cached in any machine registers. Of course, the compiler must also
1724 respect the order in which the READ_ONCE()s and WRITE_ONCE()s occur,
1725 though the CPU of course need not do so.
1727 (*) The compiler is within its rights to invent stores to a variable,
1728 as in the following example:
        if (a)
                b = a;
        else
                b = 42;
1735 The compiler might save a branch by optimizing this as follows:
        b = 42;
        if (a)
                b = a;
1741 In single-threaded code, this is not only safe, but also saves
1742 a branch. Unfortunately, in concurrent code, this optimization
1743 could cause some other CPU to see a spurious value of 42 -- even
1744 if variable 'a' was never zero -- when loading variable 'b'.
1745 Use WRITE_ONCE() to prevent this as follows:
        if (a)
                WRITE_ONCE(b, a);
        else
                WRITE_ONCE(b, 42);
1752 The compiler can also invent loads. These are usually less
1753 damaging, but they can result in cache-line bouncing and thus in
poor performance and scalability. Use READ_ONCE() to prevent
invented loads.
1757 (*) For aligned memory locations whose size allows them to be accessed
1758 with a single memory-reference instruction, prevents "load tearing"
1759 and "store tearing," in which a single large access is replaced by
1760 multiple smaller accesses. For example, given an architecture having
1761 16-bit store instructions with 7-bit immediate fields, the compiler
1762 might be tempted to use two 16-bit store-immediate instructions to
1763 implement the following 32-bit store:
        p = 0x00010002;
1767 Please note that GCC really does use this sort of optimization,
1768 which is not surprising given that it would likely take more
1769 than two instructions to build the constant and then store it.
1770 This optimization can therefore be a win in single-threaded code.
1771 In fact, a recent bug (since fixed) caused GCC to incorrectly use
1772 this optimization in a volatile store. In the absence of such bugs,
1773 use of WRITE_ONCE() prevents store tearing in the following example:
1775 WRITE_ONCE(p, 0x00010002);
Use of packed structures can also result in load and store tearing,
as in this example:
        struct __attribute__((__packed__)) foo {
                short a;
                int b;
                short c;
        };
        struct foo foo1, foo2;
        ...

        foo2.a = foo1.a;
        foo2.b = foo1.b;
        foo2.c = foo1.c;
1792 Because there are no READ_ONCE() or WRITE_ONCE() wrappers and no
1793 volatile markings, the compiler would be well within its rights to
1794 implement these three assignment statements as a pair of 32-bit
1795 loads followed by a pair of 32-bit stores. This would result in
1796 load tearing on 'foo1.b' and store tearing on 'foo2.b'. READ_ONCE()
1797 and WRITE_ONCE() again prevent tearing in this example:
        foo2.a = foo1.a;
        WRITE_ONCE(foo2.b, READ_ONCE(foo1.b));
        foo2.c = foo1.c;
1803 All that aside, it is never necessary to use READ_ONCE() and
1804 WRITE_ONCE() on a variable that has been marked volatile. For example,
1805 because 'jiffies' is marked volatile, it is never necessary to
1806 say READ_ONCE(jiffies). The reason for this is that READ_ONCE() and
1807 WRITE_ONCE() are implemented as volatile casts, which has no effect when
1808 its argument is already marked volatile.
1810 Please note that these compiler barriers have no direct effect on the CPU,
1811 which may then reorder things however it wishes.
CPU MEMORY BARRIERS
-------------------
1817 The Linux kernel has eight basic CPU memory barriers:
        TYPE            MANDATORY               SMP CONDITIONAL
        =============== ======================= ===========================
        GENERAL         mb()                    smp_mb()
        WRITE           wmb()                   smp_wmb()
        READ            rmb()                   smp_rmb()
        DATA DEPENDENCY read_barrier_depends()  smp_read_barrier_depends()
1827 All memory barriers except the data dependency barriers imply a compiler
1828 barrier. Data dependencies do not impose any additional compiler ordering.
1830 Aside: In the case of data dependencies, the compiler would be expected
1831 to issue the loads in the correct order (eg. `a[b]` would have to load
1832 the value of b before loading a[b]), however there is no guarantee in
1833 the C specification that the compiler may not speculate the value of b
(eg. that it is equal to 1) and load a before b (eg. tmp = a[1]; if (b != 1)
1835 tmp = a[b]; ). There is also the problem of a compiler reloading b after
1836 having loaded a[b], thus having a newer copy of b than a[b]. A consensus
1837 has not yet been reached about these problems, however the READ_ONCE()
1838 macro is a good place to start looking.
1840 SMP memory barriers are reduced to compiler barriers on uniprocessor compiled
1841 systems because it is assumed that a CPU will appear to be self-consistent,
1842 and will order overlapping accesses correctly with respect to itself.
1843 However, see the subsection on "Virtual Machine Guests" below.
1845 [!] Note that SMP memory barriers _must_ be used to control the ordering of
references to shared memory on SMP systems, though the use of locking instead
is sufficient.
1849 Mandatory barriers should not be used to control SMP effects, since mandatory
1850 barriers impose unnecessary overhead on both SMP and UP systems. They may,
1851 however, be used to control MMIO effects on accesses through relaxed memory I/O
1852 windows. These barriers are required even on non-SMP systems as they affect
1853 the order in which memory operations appear to a device by prohibiting both the
1854 compiler and the CPU from reordering them.
1857 There are some more advanced barrier functions:
1859 (*) smp_store_mb(var, value)
1861 This assigns the value to the variable and then inserts a full memory
1862 barrier after it. It isn't guaranteed to insert anything more than a
1863 compiler barrier in a UP compilation.
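
     Conceptually it behaves like the following (a sketch of the generic
     fallback, not any particular architecture's implementation):

	WRITE_ONCE(var, value);
	smp_mb();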
1866 (*) smp_mb__before_atomic();
1867 (*) smp_mb__after_atomic();
1869 These are for use with atomic (such as add, subtract, increment and
1870 decrement) functions that don't return a value, especially when used for
1871 reference counting. These functions do not imply memory barriers.
1873 These are also used for atomic bitop functions that do not return a
1874 value (such as set_bit and clear_bit).
1876 As an example, consider a piece of code that marks an object as being dead
1877 and then decrements the object's reference count:
	obj->dead = 1;
	smp_mb__before_atomic();
1881 atomic_dec(&obj->ref_count);
1883 This makes sure that the death mark on the object is perceived to be set
1884 *before* the reference counter is decremented.
1886 See Documentation/atomic_{t,bitops}.txt for more information.
1889 (*) lockless_dereference();
1891 This can be thought of as a pointer-fetch wrapper around the
1892 smp_read_barrier_depends() data-dependency barrier.
1894 This is also similar to rcu_dereference(), but in cases where
1895 object lifetime is handled by some mechanism other than RCU, for
example, when the objects are removed only when the system goes down.
1897 In addition, lockless_dereference() is used in some data structures
1898 that can be used both with and without RCU.
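
     A reader might use it along these lines (a sketch; 'gp' is an
     assumed globally-visible pointer published by some writer, and
     do_something_with() is a placeholder):

	p = lockless_dereference(gp);
	if (p)
		do_something_with(p->field);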
 (*) dma_wmb();
 (*) dma_rmb();

     These are for use with consistent memory to guarantee the ordering
     of writes or reads of shared memory accessible to both the CPU and a
     DMA capable device.
1908 For example, consider a device driver that shares memory with a device
1909 and uses a descriptor status value to indicate if the descriptor belongs
1910 to the device or the CPU, and a doorbell to notify it when new
1911 descriptors are available:
	if (desc->status != DEVICE_OWN) {
		/* do not read data until we own descriptor */
		dma_rmb();

		/* read/modify data */
		read_data = desc->data;
		desc->data = write_data;

		/* flush modifications before status update */
		dma_wmb();

		/* assign ownership */
		desc->status = DEVICE_OWN;

		/* force memory to sync before notifying device via MMIO */
		wmb();

		/* notify device of new descriptors */
		writel(DESC_NOTIFY, doorbell);
	}
The dma_rmb() allows us to guarantee the device has released ownership
1935 before we read the data from the descriptor, and the dma_wmb() allows
1936 us to guarantee the data is written to the descriptor before the device
1937 can see it now has ownership. The wmb() is needed to guarantee that the
1938 cache coherent memory writes have completed before attempting a write to
1939 the cache incoherent MMIO region.
1941 See Documentation/DMA-API.txt for more information on consistent memory.
The Linux kernel also has a special barrier for use with memory-mapped I/O
writes:

	mmiowb();
1952 This is a variation on the mandatory write barrier that causes writes to weakly
1953 ordered I/O regions to be partially ordered. Its effects may go beyond the
1954 CPU->Hardware interface and actually affect the hardware at some level.
1956 See the subsection "Acquires vs I/O accesses" for more information.
1959 ===============================
1960 IMPLICIT KERNEL MEMORY BARRIERS
1961 ===============================
Some of the other functions in the Linux kernel imply memory barriers, amongst
1964 which are locking and scheduling functions.
1966 This specification is a _minimum_ guarantee; any particular architecture may
1967 provide more substantial guarantees, but these may not be relied upon outside
1968 of arch specific code.
1971 LOCK ACQUISITION FUNCTIONS
1972 --------------------------
The Linux kernel has a number of locking constructs:

 (*) spin locks
 (*) R/W spin locks
 (*) mutexes
 (*) semaphores
 (*) R/W semaphores
1982 In all cases there are variants on "ACQUIRE" operations and "RELEASE" operations
1983 for each construct. These operations all imply certain barriers:
1985 (1) ACQUIRE operation implication:
1987 Memory operations issued after the ACQUIRE will be completed after the
1988 ACQUIRE operation has completed.
1990 Memory operations issued before the ACQUIRE may be completed after
1991 the ACQUIRE operation has completed.
1993 (2) RELEASE operation implication:
1995 Memory operations issued before the RELEASE will be completed before the
1996 RELEASE operation has completed.
1998 Memory operations issued after the RELEASE may be completed before the
1999 RELEASE operation has completed.
2001 (3) ACQUIRE vs ACQUIRE implication:
2003 All ACQUIRE operations issued before another ACQUIRE operation will be
2004 completed before that ACQUIRE operation.
2006 (4) ACQUIRE vs RELEASE implication:
2008 All ACQUIRE operations issued before a RELEASE operation will be
2009 completed before the RELEASE operation.
2011 (5) Failed conditional ACQUIRE implication:
2013 Certain locking variants of the ACQUIRE operation may fail, either due to
2014 being unable to get the lock immediately, or due to receiving an unblocked
2015 signal whilst asleep waiting for the lock to become available. Failed
2016 locks do not imply any sort of barrier.
2018 [!] Note: one of the consequences of lock ACQUIREs and RELEASEs being only
2019 one-way barriers is that the effects of instructions outside of a critical
2020 section may seep into the inside of the critical section.
An ACQUIRE followed by a RELEASE may not be assumed to be a full memory barrier
2023 because it is possible for an access preceding the ACQUIRE to happen after the
2024 ACQUIRE, and an access following the RELEASE to happen before the RELEASE, and
2025 the two accesses can themselves then cross:
	*A = a;
	ACQUIRE M
	RELEASE M
	*B = b;

may occur as:

	ACQUIRE M, STORE *B, STORE *A, RELEASE M
2036 When the ACQUIRE and RELEASE are a lock acquisition and release,
2037 respectively, this same reordering can occur if the lock's ACQUIRE and
2038 RELEASE are to the same lock variable, but only from the perspective of
another CPU not holding that lock.  In short, an ACQUIRE followed by a
RELEASE may -not- be assumed to be a full memory barrier.
2042 Similarly, the reverse case of a RELEASE followed by an ACQUIRE does
2043 not imply a full memory barrier. Therefore, the CPU's execution of the
critical sections corresponding to the RELEASE and the ACQUIRE can cross,
so that:

	*A = a;
	RELEASE M
	ACQUIRE N
	*B = b;

could occur as:

	ACQUIRE N, STORE *B, STORE *A, RELEASE M
2056 It might appear that this reordering could introduce a deadlock.
2057 However, this cannot happen because if such a deadlock threatened,
2058 the RELEASE would simply complete, thereby avoiding the deadlock.
2062 One key point is that we are only talking about the CPU doing
2063 the reordering, not the compiler. If the compiler (or, for
that matter, the developer) switched the operations, deadlock -could- occur.
2067 But suppose the CPU reordered the operations. In this case,
2068 the unlock precedes the lock in the assembly code. The CPU
2069 simply elected to try executing the later lock operation first.
2070 If there is a deadlock, this lock operation will simply spin (or
2071 try to sleep, but more on that later). The CPU will eventually
2072 execute the unlock operation (which preceded the lock operation
2073 in the assembly code), which will unravel the potential deadlock,
2074 allowing the lock operation to succeed.
2076 But what if the lock is a sleeplock? In that case, the code will
2077 try to enter the scheduler, where it will eventually encounter
2078 a memory barrier, which will force the earlier unlock operation
2079 to complete, again unraveling the deadlock. There might be
2080 a sleep-unlock race, but the locking primitive needs to resolve
2081 such races properly in any case.
2083 Locks and semaphores may not provide any guarantee of ordering on UP compiled
2084 systems, and so cannot be counted on in such a situation to actually achieve
2085 anything at all - especially with respect to I/O accesses - unless combined
2086 with interrupt disabling operations.
2088 See also the section on "Inter-CPU acquiring barrier effects".
As an example, consider the following:

	*A = a;
	*B = b;
	ACQUIRE
	*C = c;
	*D = d;
	RELEASE
	*E = e;
	*F = f;
2102 The following sequence of events is acceptable:
2104 ACQUIRE, {*F,*A}, *E, {*C,*D}, *B, RELEASE
2106 [+] Note that {*F,*A} indicates a combined access.
2108 But none of the following are:
2110 {*F,*A}, *B, ACQUIRE, *C, *D, RELEASE, *E
2111 *A, *B, *C, ACQUIRE, *D, RELEASE, *E, *F
2112 *A, *B, ACQUIRE, *C, RELEASE, *D, *E, *F
2113 *B, ACQUIRE, *C, *D, RELEASE, {*F,*A}, *E
2117 INTERRUPT DISABLING FUNCTIONS
2118 -----------------------------
2120 Functions that disable interrupts (ACQUIRE equivalent) and enable interrupts
2121 (RELEASE equivalent) will act as compiler barriers only. So if memory or I/O
barriers are required in such a situation, they must be provided from some
other means.
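
For instance (a sketch; the buffer and 'dev_ctrl' register are assumptions
for illustration), a driver that must order a normal-memory store against
a subsequent MMIO store inside an irq-disabled region still has to say so
explicitly:

	unsigned long flags;

	local_irq_save(flags);
	buf->len = len;			/* normal memory */
	wmb();				/* irq disabling is only a compiler barrier */
	writel(DMA_GO, dev_ctrl);	/* MMIO */
	local_irq_restore(flags);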
2126 SLEEP AND WAKE-UP FUNCTIONS
2127 ---------------------------
2129 Sleeping and waking on an event flagged in global data can be viewed as an
2130 interaction between two pieces of data: the task state of the task waiting for
2131 the event and the global data used to indicate the event. To make sure that
2132 these appear to happen in the right order, the primitives to begin the process
of going to sleep, and the primitives to initiate a wake up imply certain
barriers.
2136 Firstly, the sleeper normally follows something like this sequence of events:
	for (;;) {
		set_current_state(TASK_UNINTERRUPTIBLE);
		if (event_indicated)
			break;
		schedule();
	}
2145 A general memory barrier is interpolated automatically by set_current_state()
2146 after it has altered the task state:
	CPU 1
	===============================
	set_current_state();
	  smp_store_mb();
	    STORE current->state
	    <general barrier>
	LOAD event_indicated
2156 set_current_state() may be wrapped by:
	prepare_to_wait();
	prepare_to_wait_exclusive();
2161 which therefore also imply a general memory barrier after setting the state.
2162 The whole sequence above is available in various canned forms, all of which
2163 interpolate the memory barrier in the right place:
	wait_event();
	wait_event_interruptible();
	wait_event_interruptible_exclusive();
	wait_event_interruptible_timeout();
	wait_event_killable();
	wait_event_timeout();
	wait_on_bit();
	wait_on_bit_lock();
2175 Secondly, code that performs a wake up normally follows something like this:
2177 event_indicated = 1;
2178 wake_up(&event_wait_queue);
or:

	event_indicated = 1;
2183 wake_up_process(event_daemon);
2185 A write memory barrier is implied by wake_up() and co. if and only if they
2186 wake something up. The barrier occurs before the task state is cleared, and so
2187 sits between the STORE to indicate the event and the STORE to set TASK_RUNNING:
	CPU 1				CPU 2
	===============================	===============================
	set_current_state();		STORE event_indicated
	  smp_store_mb();		wake_up();
	    STORE current->state	  <write barrier>
	    <general barrier>		  STORE current->state
	LOAD event_indicated
2197 To repeat, this write memory barrier is present if and only if something
2198 is actually awakened. To see this, consider the following sequence of
2199 events, where X and Y are both initially zero:
	CPU 1				CPU 2
	===============================	===============================
	X = 1;				STORE event_indicated
	smp_mb();			wake_up();
	Y = 1;				wait_event(wq, Y == 1);
	wake_up();			  load from Y sees 1, no memory barrier
					load from X might see 0
In contrast, if a wakeup does occur, CPU 2's load from X would be guaranteed
to see 1.
2212 The available waker functions include:
	complete();
	wake_up();
	wake_up_all();
	wake_up_bit();
	wake_up_interruptible();
	wake_up_interruptible_all();
	wake_up_interruptible_nr();
	wake_up_interruptible_poll();
	wake_up_interruptible_sync();
	wake_up_interruptible_sync_poll();
	wake_up_locked();
	wake_up_locked_poll();
	wake_up_nr();
	wake_up_poll();
	wake_up_process();
2231 [!] Note that the memory barriers implied by the sleeper and the waker do _not_
2232 order multiple stores before the wake-up with respect to loads of those stored
values after the sleeper has called set_current_state().  For instance, if
the sleeper does:

2236 set_current_state(TASK_INTERRUPTIBLE);
	if (event_indicated)
		break;
2239 __set_current_state(TASK_RUNNING);
2240 do_something(my_data);
and the waker does:

	my_data = new_value;
	event_indicated = 1;
2246 wake_up(&event_wait_queue);
2248 there's no guarantee that the change to event_indicated will be perceived by
2249 the sleeper as coming after the change to my_data. In such a circumstance, the
2250 code on both sides must interpolate its own memory barriers between the
2251 separate data accesses. Thus the above sleeper ought to do:
2253 set_current_state(TASK_INTERRUPTIBLE);
2254 if (event_indicated) {
		smp_rmb();
		do_something(my_data);
	}
2259 and the waker should do:
	my_data = new_value;
	smp_wmb();
	event_indicated = 1;
2264 wake_up(&event_wait_queue);
2267 MISCELLANEOUS FUNCTIONS
2268 -----------------------
2270 Other functions that imply barriers:
2272 (*) schedule() and similar imply full memory barriers.
2275 ===================================
2276 INTER-CPU ACQUIRING BARRIER EFFECTS
2277 ===================================
2279 On SMP systems locking primitives give a more substantial form of barrier: one
2280 that does affect memory access ordering on other CPUs, within the context of
2281 conflict on any particular lock.
2284 ACQUIRES VS MEMORY ACCESSES
2285 ---------------------------
2287 Consider the following: the system has a pair of spinlocks (M) and (Q), and
2288 three CPUs; then should the following sequence of events occur:
	CPU 1				CPU 2
	===============================	===============================
	WRITE_ONCE(*A, a);		WRITE_ONCE(*E, e);
	ACQUIRE M			ACQUIRE Q
	WRITE_ONCE(*B, b);		WRITE_ONCE(*F, f);
	WRITE_ONCE(*C, c);		WRITE_ONCE(*G, g);
	RELEASE M			RELEASE Q
	WRITE_ONCE(*D, d);		WRITE_ONCE(*H, h);
2299 Then there is no guarantee as to what order CPU 3 will see the accesses to *A
2300 through *H occur in, other than the constraints imposed by the separate locks
2301 on the separate CPUs. It might, for example, see:
2303 *E, ACQUIRE M, ACQUIRE Q, *G, *C, *F, *A, *B, RELEASE Q, *D, *H, RELEASE M
2305 But it won't see any of:
2307 *B, *C or *D preceding ACQUIRE M
2308 *A, *B or *C following RELEASE M
2309 *F, *G or *H preceding ACQUIRE Q
2310 *E, *F or *G following RELEASE Q
2314 ACQUIRES VS I/O ACCESSES
2315 ------------------------
2317 Under certain circumstances (especially involving NUMA), I/O accesses within
2318 two spinlocked sections on two different CPUs may be seen as interleaved by the
2319 PCI bridge, because the PCI bridge does not necessarily participate in the
2320 cache-coherence protocol, and is therefore incapable of issuing the required
2321 read memory barriers.
For example:

	CPU 1				CPU 2
	===============================	===============================
	spin_lock(Q)
	writel(0, ADDR)
	writel(1, DATA);
	spin_unlock(Q);
					spin_lock(Q);
					writel(4, ADDR);
					writel(5, DATA);
					spin_unlock(Q);
2336 may be seen by the PCI bridge as follows:
2338 STORE *ADDR = 0, STORE *ADDR = 4, STORE *DATA = 1, STORE *DATA = 5
2340 which would probably cause the hardware to malfunction.
2343 What is necessary here is to intervene with an mmiowb() before dropping the
2344 spinlock, for example:
	CPU 1				CPU 2
	===============================	===============================
	spin_lock(Q)
	writel(0, ADDR)
	writel(1, DATA);
	mmiowb();
	spin_unlock(Q);
					spin_lock(Q);
					writel(4, ADDR);
					writel(5, DATA);
					mmiowb();
					spin_unlock(Q);
2359 this will ensure that the two stores issued on CPU 1 appear at the PCI bridge
2360 before either of the stores issued on CPU 2.
2363 Furthermore, following a store by a load from the same device obviates the need
for the mmiowb(), because the load forces the store to complete before the
load is performed:

	CPU 1				CPU 2
	===============================	===============================
	spin_lock(Q)
	writel(0, ADDR)
	a = readl(DATA);
	spin_unlock(Q);
					spin_lock(Q);
					writel(4, ADDR);
					b = readl(DATA);
					spin_unlock(Q);
2379 See Documentation/driver-api/device-io.rst for more information.
2382 =================================
2383 WHERE ARE MEMORY BARRIERS NEEDED?
2384 =================================
2386 Under normal operation, memory operation reordering is generally not going to
2387 be a problem as a single-threaded linear piece of code will still appear to
2388 work correctly, even if it's in an SMP kernel. There are, however, four
2389 circumstances in which reordering definitely _could_ be a problem:
2391 (*) Interprocessor interaction.
2393 (*) Atomic operations.
 (*) Accessing devices.

 (*) Interrupts.
2400 INTERPROCESSOR INTERACTION
2401 --------------------------
2403 When there's a system with more than one processor, more than one CPU in the
2404 system may be working on the same data set at the same time. This can cause
2405 synchronisation problems, and the usual way of dealing with them is to use
2406 locks. Locks, however, are quite expensive, and so it may be preferable to
2407 operate without the use of a lock if at all possible. In such a case
operations that affect both CPUs may have to be carefully ordered to prevent
malfunction.
2411 Consider, for example, the R/W semaphore slow path. Here a waiting process is
2412 queued on the semaphore, by virtue of it having a piece of its stack linked to
2413 the semaphore's list of waiting processes:
	struct rw_semaphore {
		...
		spinlock_t lock;
		struct list_head waiters;
	};

	struct rwsem_waiter {
		struct list_head list;
		struct task_struct *task;
	};
2426 To wake up a particular waiter, the up_read() or up_write() functions have to:
2428 (1) read the next pointer from this waiter's record to know as to where the
2429 next waiter record is;
2431 (2) read the pointer to the waiter's task structure;
2433 (3) clear the task pointer to tell the waiter it has been given the semaphore;
2435 (4) call wake_up_process() on the task; and
2437 (5) release the reference held on the waiter's task struct.
2439 In other words, it has to perform this sequence of events:
	LOAD waiter->list.next;
	LOAD waiter->task;
	STORE waiter->task;
	CALL wakeup
	RELEASE task

and if any of these steps occur out of order, then the whole thing may
malfunction.
2450 Once it has queued itself and dropped the semaphore lock, the waiter does not
2451 get the lock again; it instead just waits for its task pointer to be cleared
2452 before proceeding. Since the record is on the waiter's stack, this means that
2453 if the task pointer is cleared _before_ the next pointer in the list is read,
2454 another CPU might start processing the waiter and might clobber the waiter's
2455 stack before the up*() function has a chance to read the next pointer.
2457 Consider then what might happen to the above sequence of events:
	CPU 1				CPU 2
	===============================	===============================
					down_xxx()
					Queue waiter
					Sleep
	up_yyy()
	LOAD waiter->task;
	STORE waiter->task;
					Woken up by other event
	<preempt>
					Resume processing
					down_xxx() returns
					call foo()
					foo() clobbers *waiter
	</preempt>
	LOAD waiter->list.next;
	--- OOPS ---
2477 This could be dealt with using the semaphore lock, but then the down_xxx()
2478 function has to needlessly get the spinlock again after being woken up.
2480 The way to deal with this is to insert a general SMP memory barrier:
	LOAD waiter->list.next;
	LOAD waiter->task;
	smp_mb();
	STORE waiter->task;
	CALL wakeup
	RELEASE task
2489 In this case, the barrier makes a guarantee that all memory accesses before the
2490 barrier will appear to happen before all the memory accesses after the barrier
2491 with respect to the other CPUs on the system. It does _not_ guarantee that all
2492 the memory accesses before the barrier will be complete by the time the barrier
2493 instruction itself is complete.
2495 On a UP system - where this wouldn't be a problem - the smp_mb() is just a
2496 compiler barrier, thus making sure the compiler emits the instructions in the
2497 right order without actually intervening in the CPU. Since there's only one
2498 CPU, that CPU's dependency ordering logic will take care of everything else.
ATOMIC OPERATIONS
-----------------

Whilst they are technically interprocessor interaction considerations, atomic
2505 operations are noted specially as some of them imply full memory barriers and
some don't, but they're very heavily relied on as a group throughout the
kernel.
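
As a rough illustration (a sketch; release_object() is a placeholder),
a value-returning atomic such as atomic_dec_and_test() is fully ordered
and needs no barrier helpers, whereas plain atomic_dec() does:

	if (atomic_dec_and_test(&obj->ref_count))	/* implies full barriers */
		release_object(obj);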
2509 See Documentation/atomic_t.txt for more information.
ACCESSING DEVICES
-----------------

Many devices can be memory mapped, and so appear to the CPU as if they're just
2516 a set of memory locations. To control such a device, the driver usually has to
2517 make the right memory accesses in exactly the right order.
2519 However, having a clever CPU or a clever compiler creates a potential problem
2520 in that the carefully sequenced accesses in the driver code won't reach the
2521 device in the requisite order if the CPU or the compiler thinks it is more
2522 efficient to reorder, combine or merge accesses - something that would cause
2523 the device to malfunction.
2525 Inside of the Linux kernel, I/O should be done through the appropriate accessor
2526 routines - such as inb() or writel() - which know how to make such accesses
2527 appropriately sequential. Whilst this, for the most part, renders the explicit
use of memory barriers unnecessary, there are a couple of situations where
they might be needed:
2531 (1) On some systems, I/O stores are not strongly ordered across all CPUs, and
2532 so for _all_ general drivers locks should be used and mmiowb() must be
2533 issued prior to unlocking the critical section.
2535 (2) If the accessor functions are used to refer to an I/O memory window with
2536 relaxed memory access properties, then _mandatory_ memory barriers are
2537 required to enforce ordering.
2539 See Documentation/driver-api/device-io.rst for more information.
INTERRUPTS
----------

A driver may be interrupted by its own interrupt service routine, and thus the
two parts of the driver may interfere with each other's attempts to control or
access the device.
2549 This may be alleviated - at least in part - by disabling local interrupts (a
2550 form of locking), such that the critical operations are all contained within
2551 the interrupt-disabled section in the driver. Whilst the driver's interrupt
2552 routine is executing, the driver's core may not run on the same CPU, and its
2553 interrupt is not permitted to happen again until the current interrupt has been
2554 handled, thus the interrupt handler does not need to lock against that.
2556 However, consider a driver that was talking to an ethernet card that sports an
2557 address register and a data register. If that driver's core talks to the card
under interrupt-disablement and then the driver's interrupt handler is invoked:

	LOCAL IRQ DISABLE
	writew(ADDR, 3);
	writew(DATA, y);
	LOCAL IRQ ENABLE
	<interrupt>
	writew(ADDR, 4);
	q = readw(DATA);
	</interrupt>
2569 The store to the data register might happen after the second store to the
2570 address register if ordering rules are sufficiently relaxed:
2572 STORE *ADDR = 3, STORE *ADDR = 4, STORE *DATA = y, q = LOAD *DATA
2575 If ordering rules are relaxed, it must be assumed that accesses done inside an
2576 interrupt disabled section may leak outside of it and may interleave with
2577 accesses performed in an interrupt - and vice versa - unless implicit or
2578 explicit barriers are used.
2580 Normally this won't be a problem because the I/O accesses done inside such
2581 sections will include synchronous load operations on strictly ordered I/O
2582 registers that form implicit I/O barriers. If this isn't sufficient then an
2583 mmiowb() may need to be used explicitly.
2586 A similar situation may occur between an interrupt routine and two routines
2587 running on separate CPUs that communicate with each other. If such a case is
2588 likely, then interrupt-disabling locks should be used to guarantee ordering.
2591 ==========================
2592 KERNEL I/O BARRIER EFFECTS
2593 ==========================
When accessing I/O memory, drivers should use the appropriate accessor
routines:

 (*) inX(), outX():
2600 These are intended to talk to I/O space rather than memory space, but
2601 that's primarily a CPU-specific concept. The i386 and x86_64 processors
2602 do indeed have special I/O space access cycles and instructions, but many
2603 CPUs don't have such a concept.
2605 The PCI bus, amongst others, defines an I/O space concept which - on such
2606 CPUs as i386 and x86_64 - readily maps to the CPU's concept of I/O
2607 space. However, it may also be mapped as a virtual I/O space in the CPU's
memory map, particularly on those CPUs that don't support alternate I/O
spaces.
2611 Accesses to this space may be fully synchronous (as on i386), but
intermediary bridges (such as the PCI host bridge) may not fully honour
that.
2615 They are guaranteed to be fully ordered with respect to each other.
2617 They are not guaranteed to be fully ordered with respect to other types of
2618 memory and I/O operation.
2620 (*) readX(), writeX():
2622 Whether these are guaranteed to be fully ordered and uncombined with
2623 respect to each other on the issuing CPU depends on the characteristics
2624 defined for the memory window through which they're accessing. On later
i386 architecture machines, for example, this is controlled by way of the
MTRR registers.
2628 Ordinarily, these will be guaranteed to be fully ordered and uncombined,
2629 provided they're not accessing a prefetchable device.
2631 However, intermediary hardware (such as a PCI bridge) may indulge in
2632 deferral if it so wishes; to flush a store, a load from the same location
2633 is preferred[*], but a load from the same device or from configuration
2634 space should suffice for PCI.
2636 [*] NOTE! attempting to load from the same location as was written to may
cause a malfunction - consider the 16550 Rx/Tx serial registers for
example.
2640 Used with prefetchable I/O memory, an mmiowb() barrier may be required to
2641 force stores to be ordered.
2643 Please refer to the PCI specification for more information on interactions
2644 between PCI transactions.
2646 (*) readX_relaxed(), writeX_relaxed()
2648 These are similar to readX() and writeX(), but provide weaker memory
2649 ordering guarantees. Specifically, they do not guarantee ordering with
2650 respect to normal memory accesses (e.g. DMA buffers) nor do they guarantee
2651 ordering with respect to LOCK or UNLOCK operations. If the latter is
2652 required, an mmiowb() barrier can be used. Note that relaxed accesses to
the same peripheral are guaranteed to be ordered with respect to each
other.
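
     For instance (a sketch; 'desc', 'dma_handle' and 'doorbell' are
     assumptions for illustration), ordering a store to a DMA buffer
     before a relaxed doorbell write requires an explicit barrier:

	desc->addr = dma_handle;	/* normal (coherent) memory */
	wmb();				/* order it before the MMIO write */
	writel_relaxed(1, doorbell);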
2656 (*) ioreadX(), iowriteX()
2658 These will perform appropriately for the type of access they're actually
2659 doing, be it inX()/outX() or readX()/writeX().
2662 ========================================
2663 ASSUMED MINIMUM EXECUTION ORDERING MODEL
2664 ========================================
2666 It has to be assumed that the conceptual CPU is weakly-ordered but that it will
2667 maintain the appearance of program causality with respect to itself. Some CPUs
2668 (such as i386 or x86_64) are more constrained than others (such as powerpc or
2669 frv), and so the most relaxed case (namely DEC Alpha) must be assumed outside
2670 of arch-specific code.
2672 This means that it must be considered that the CPU will execute its instruction
2673 stream in any order it feels like - or even in parallel - provided that if an
2674 instruction in the stream depends on an earlier instruction, then that
2675 earlier instruction must be sufficiently complete[*] before the later
2676 instruction may proceed; in other words: provided that the appearance of
2677 causality is maintained.
2679 [*] Some instructions have more than one effect - such as changing the
2680 condition codes, changing registers or changing memory - and different
2681 instructions may depend on different effects.
2683 A CPU may also discard any instruction sequence that winds up having no
2684 ultimate effect. For example, if two adjacent instructions both load an
2685 immediate value into the same register, the first may be discarded.
Similarly, it has to be assumed that the compiler might reorder the instruction
stream in any way it sees fit, again provided the appearance of causality is
maintained.
2693 ============================
2694 THE EFFECTS OF THE CPU CACHE
2695 ============================
2697 The way cached memory operations are perceived across the system is affected to
2698 a certain extent by the caches that lie between CPUs and memory, and by the
2699 memory coherence system that maintains the consistency of state in the system.
2701 As far as the way a CPU interacts with another part of the system through the
2702 caches goes, the memory system has to include the CPU's caches, and memory
2703 barriers for the most part act at the interface between the CPU and its cache
2704 (memory barriers logically act on the dotted line in the following diagram):
	    <--- CPU --->         :       <----------- Memory ----------->
	                          :
	+--------+    +--------+  :   +--------+    +-----------+
	|        |    |        |  :   |        |    |           |    +--------+
	|  CPU   |    | Memory |  :   | CPU    |    |           |    |        |
	|  Core  |--->| Access |----->| Cache  |<-->|           |    |        |
	|        |    | Queue  |  :   |        |    |           |--->| Memory |
	|        |    |        |  :   |        |    |           |    |        |
	+--------+    +--------+  :   +--------+    |           |    |        |
	                          :                 | Cache     |    +--------+
	                          :                 | Coherency |
	                          :                 | Mechanism |    +--------+
	+--------+    +--------+  :   +--------+    |           |    |        |
	|        |    |        |  :   |        |    |           |    |        |
	|  CPU   |    | Memory |  :   | CPU    |    |           |--->| Device |
	|  Core  |--->| Access |----->| Cache  |<-->|           |    |        |
	|        |    | Queue  |  :   |        |    |           |    |        |
	|        |    |        |  :   |        |    |           |    +--------+
	+--------+    +--------+  :   +--------+    +-----------+
2728 Although any particular load or store may not actually appear outside of the
2729 CPU that issued it since it may have been satisfied within the CPU's own cache,
2730 it will still appear as if the full memory access had taken place as far as the
2731 other CPUs are concerned since the cache coherency mechanisms will migrate the
2732 cacheline over to the accessing CPU and propagate the effects upon conflict.
2734 The CPU core may execute instructions in any order it deems fit, provided the
2735 expected program causality appears to be maintained. Some of the instructions
2736 generate load and store operations which then go into the queue of memory
2737 accesses to be performed. The core may place these in the queue in any order
it wishes, and continue execution until it is forced to wait for an
instruction to complete.
2741 What memory barriers are concerned with is controlling the order in which
2742 accesses cross from the CPU side of things to the memory side of things, and
the order in which the effects are perceived to happen by the other observers
in the system.
2746 [!] Memory barriers are _not_ needed within a given CPU, as CPUs always see
2747 their own loads and stores as if they had happened in program order.
2749 [!] MMIO or other device accesses may bypass the cache system. This depends on
2750 the properties of the memory window through which devices are accessed and/or
2751 the use of any special device communication instructions the CPU may have.
CACHE COHERENCY
---------------

Life isn't quite as simple as it may appear above, however: for while the
2758 caches are expected to be coherent, there's no guarantee that that coherency
2759 will be ordered. This means that whilst changes made on one CPU will
2760 eventually become visible on all CPUs, there's no guarantee that they will
2761 become apparent in the same order on those other CPUs.
2764 Consider dealing with a system that has a pair of CPUs (1 & 2), each of which
2765 has a pair of parallel data caches (CPU 1 has A/B, and CPU 2 has C/D):
	            :
	            :                          +--------+
	            :      +---------+         |        |
	+--------+  : +--->| Cache A |<------->|        |
	|        |  : |    +---------+         |        |
	|  CPU 1 |<---+                        |        |
	|        |  : |    +---------+         |        |
	+--------+  : +--->| Cache B |<------->|        |
	            :      +---------+         |        |
	            :                          | Memory |
	            :      +---------+         | System |
	+--------+  : +--->| Cache C |<------->|        |
	|        |  : |    +---------+         |        |
	|  CPU 2 |<---+                        |        |
	|        |  : |    +---------+         |        |
	+--------+  : +--->| Cache D |<------->|        |
	            :      +---------+         |        |
	            :                          +--------+
	            :
2787 Imagine the system has the following properties:
 (*) an odd-numbered cache line may be in cache A, cache C or it may still be
     resident in memory;

 (*) an even-numbered cache line may be in cache B, cache D or it may still be
     resident in memory;
2795 (*) whilst the CPU core is interrogating one cache, the other cache may be
2796 making use of the bus to access the rest of the system - perhaps to
2797 displace a dirty cacheline or to do a speculative load;
2799 (*) each cache has a queue of operations that need to be applied to that cache
2800 to maintain coherency with the rest of the system;
2802 (*) the coherency queue is not flushed by normal loads to lines already
2803 present in the cache, even though the contents of the queue may
2804 potentially affect those loads.
2806 Imagine, then, that two writes are made on the first CPU, with a write barrier
2807 between them to guarantee that they will appear to reach that CPU's caches in
2808 the requisite order:
	CPU 1		CPU 2		COMMENT
	===============	===============	=======================================
					u == 0, v == 1 and p == &u, q == &u
	v = 2;
	smp_wmb();			Make sure change to v is visible before
					 change to p
	<A:modify v=2>			v is now in cache A exclusively
	p = &v;
	<B:modify p=&v>			p is now in cache B exclusively
2820 The write memory barrier forces the other CPUs in the system to perceive that
2821 the local CPU's caches have apparently been updated in the correct order. But
2822 now imagine that the second CPU wants to read those values:
	CPU 1		CPU 2		COMMENT
	===============	===============	=======================================
	...
			q = p;
			x = *q;
2830 The above pair of reads may then fail to happen in the expected order, as the
2831 cacheline holding p may get updated in one of the second CPU's caches whilst
2832 the update to the cacheline holding v is delayed in the other of the second
2833 CPU's caches by some other cache event:
	CPU 1		CPU 2		COMMENT
	===============	===============	=======================================
					u == 0, v == 1 and p == &u, q == &u
	v = 2;
	smp_wmb();
	<A:modify v=2>	<C:busy>
			<C:queue v=2>
	p = &v;		q = p;
			<D:request p>
	<B:modify p=&v>	<D:commit p=&v>
			<D:read p>
			x = *q;
			<C:read *q>	Reads from v before v updated in cache
			<C:unbusy>
			<C:commit v=2>
2851 Basically, whilst both cachelines will be updated on CPU 2 eventually, there's
2852 no guarantee that, without intervention, the order of update will be the same
2853 as that committed on CPU 1.
2856 To intervene, we need to interpolate a data dependency barrier or a read
2857 barrier between the loads. This will force the cache to commit its coherency
2858 queue before processing any further requests:
	CPU 1		CPU 2		COMMENT
	===============	===============	=======================================
					u == 0, v == 1 and p == &u, q == &u
	v = 2;
	smp_wmb();
	<A:modify v=2>	<C:busy>
			<C:queue v=2>
	p = &v;		q = p;
			<D:request p>
	<B:modify p=&v>	<D:commit p=&v>
			<D:read p>
			smp_read_barrier_depends()
			<C:unbusy>
			<C:commit v=2>
			x = *q;
			<C:read *q>	Reads from v after v updated in cache
2878 This sort of problem can be encountered on DEC Alpha processors as they have a
2879 split cache that improves performance by making better use of the data bus.
2880 Whilst most CPUs do imply a data dependency barrier on the read when a memory
2881 access depends on a read, not all do, so it may not be relied on.
2883 Other CPUs may also have split caches, but must coordinate between the various
2884 cachelets for normal memory accesses. The semantics of the Alpha removes the
2885 need for coordination in the absence of memory barriers.
2888 CACHE COHERENCY VS DMA
2889 ----------------------
2891 Not all systems maintain cache coherency with respect to devices doing DMA. In
2892 such cases, a device attempting DMA may obtain stale data from RAM because
2893 dirty cache lines may be resident in the caches of various CPUs, and may not
2894 have been written back to RAM yet. To deal with this, the appropriate part of
2895 the kernel must flush the overlapping bits of cache on each CPU (and maybe
2896 invalidate them as well).
2898 In addition, the data DMA'd to RAM by a device may be overwritten by dirty
2899 cache lines being written back to RAM from a CPU's cache after the device has
2900 installed its own data, or cache lines present in the CPU's cache may simply
2901 obscure the fact that RAM has been updated, until at such time as the cacheline
2902 is discarded from the CPU's cache and reloaded. To deal with this, the
appropriate part of the kernel must invalidate the overlapping bits of the
cache on each CPU.
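
The streaming DMA API exists to handle this on the driver's behalf.  As a
sketch (assuming 'dev', 'handle' and 'size' come from an earlier
dma_map_single()):

	/* make the device's DMA'd data visible to the CPU */
	dma_sync_single_for_cpu(dev, handle, size, DMA_FROM_DEVICE);
	/* ... the CPU reads the buffer ... */
	/* hand the buffer back before the device's next DMA */
	dma_sync_single_for_device(dev, handle, size, DMA_FROM_DEVICE);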
2906 See Documentation/cachetlb.txt for more information on cache management.
2909 CACHE COHERENCY VS MMIO
2910 -----------------------
2912 Memory mapped I/O usually takes place through memory locations that are part of
2913 a window in the CPU's memory space that has different properties assigned than
2914 the usual RAM directed window.
2916 Amongst these properties is usually the fact that such accesses bypass the
2917 caching entirely and go directly to the device buses. This means MMIO accesses
2918 may, in effect, overtake accesses to cached memory that were emitted earlier.
2919 A memory barrier isn't sufficient in such a case, but rather the cache must be
flushed between the cached memory write and the MMIO access if the two are in
any way dependent.
2924 =========================
2925 THE THINGS CPUS GET UP TO
2926 =========================
2928 A programmer might take it for granted that the CPU will perform memory
2929 operations in exactly the order specified, so that if the CPU is, for example,
given the following piece of code to execute:

	a = READ_ONCE(*A);
	WRITE_ONCE(*B, b);
	c = READ_ONCE(*C);
	d = READ_ONCE(*D);
	WRITE_ONCE(*E, e);
2938 they would then expect that the CPU will complete the memory operation for each
2939 instruction before moving on to the next one, leading to a definite sequence of
2940 operations as seen by external observers in the system:
2942 LOAD *A, STORE *B, LOAD *C, LOAD *D, STORE *E.
2945 Reality is, of course, much messier. With many CPUs and compilers, the above
2946 assumption doesn't hold because:
2948 (*) loads are more likely to need to be completed immediately to permit
execution progress, whereas stores can often be deferred without a
problem;
2952 (*) loads may be done speculatively, and the result discarded should it prove
2953 to have been unnecessary;
2955 (*) loads may be done speculatively, leading to the result having been fetched
2956 at the wrong time in the expected sequence of events;
2958 (*) the order of the memory accesses may be rearranged to promote better use
2959 of the CPU buses and caches;
2961 (*) loads and stores may be combined to improve performance when talking to
2962 memory or I/O hardware that can do batched accesses of adjacent locations,
2963 thus cutting down on transaction setup costs (memory and PCI devices may
2964 both be able to do this); and
2966 (*) the CPU's data cache may affect the ordering, and whilst cache-coherency
2967 mechanisms may alleviate this - once the store has actually hit the cache
2968 - there's no guarantee that the coherency management will be propagated in
2969 order to other CPUs.
So what another CPU, say, might actually observe from the above piece of
code is:
2974 LOAD *A, ..., LOAD {*C,*D}, STORE *E, STORE *B
2976 (Where "LOAD {*C,*D}" is a combined load)
2979 However, it is guaranteed that a CPU will be self-consistent: it will see its
2980 _own_ accesses appear to be correctly ordered, without the need for a memory
barrier.  For instance with the following code:

	U = READ_ONCE(*A);
	WRITE_ONCE(*A, V);
	WRITE_ONCE(*A, W);
	X = READ_ONCE(*A);
	WRITE_ONCE(*A, Y);
	Z = READ_ONCE(*A);
2990 and assuming no intervention by an external influence, it can be assumed that
2991 the final result will appear to be:
	U == the original value of *A
	X == W
	Z == Y
	*A == Y
The code above may cause the CPU to generate the full sequence of memory
accesses:
3001 U=LOAD *A, STORE *A=V, STORE *A=W, X=LOAD *A, STORE *A=Y, Z=LOAD *A
3003 in that order, but, without intervention, the sequence may have almost any
3004 combination of elements combined or discarded, provided the program's view
3005 of the world remains consistent. Note that READ_ONCE() and WRITE_ONCE()
3006 are -not- optional in the above example, as there are architectures
3007 where a given CPU might reorder successive loads to the same location.
3008 On such architectures, READ_ONCE() and WRITE_ONCE() do whatever is
3009 necessary to prevent this, for example, on Itanium the volatile casts
3010 used by READ_ONCE() and WRITE_ONCE() cause GCC to emit the special ld.acq
3011 and st.rel instructions (respectively) that prevent such reordering.
3013 The compiler may also combine, discard or defer elements of the sequence before
3014 the CPU even sees them.
For instance:

	*A = V;
	*A = W;

may be reduced to:

	*A = W;

since, without either a write barrier or a WRITE_ONCE(), it can be
assumed that the effect of the storage of V to *A is lost.  Similarly:

	*A = Y;
	Z = *A;

may, without a memory barrier or a READ_ONCE() and WRITE_ONCE(), be
reduced to:

	*A = Y;
	Z = Y;

and the LOAD operation never appears outside of the CPU.
3040 AND THEN THERE'S THE ALPHA
3041 --------------------------
3043 The DEC Alpha CPU is one of the most relaxed CPUs there is. Not only that,
3044 some versions of the Alpha CPU have a split data cache, permitting them to have
3045 two semantically-related cache lines updated at separate times. This is where
3046 the data dependency barrier really becomes necessary as this synchronises both
3047 caches with the memory coherence system, thus making it seem like pointer
3048 changes vs new data occur in the right order.
3050 The Alpha defines the Linux kernel's memory barrier model.
3052 See the subsection on "Cache Coherency" above.
3055 VIRTUAL MACHINE GUESTS
3056 ----------------------
3058 Guests running within virtual machines might be affected by SMP effects even if
3059 the guest itself is compiled without SMP support. This is an artifact of
3060 interfacing with an SMP host while running an UP kernel. Using mandatory
3061 barriers for this use-case would be possible but is often suboptimal.
3063 To handle this case optimally, low-level virt_mb() etc macros are available.
3064 These have the same effect as smp_mb() etc when SMP is enabled, but generate
3065 identical code for SMP and non-SMP systems. For example, virtual machine guests
3066 should use virt_mb() rather than smp_mb() when synchronizing against a
3067 (possibly SMP) host.
3069 These are equivalent to smp_mb() etc counterparts in all other respects,
3070 in particular, they do not control MMIO effects: to control
3071 MMIO effects, use mandatory barriers.
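
For instance (a sketch; the ring layout and 'publish_idx' are assumptions
for illustration), a UP guest publishing a descriptor to a possibly-SMP
host might do:

	ring[idx] = desc;			/* fill the slot */
	virt_wmb();				/* order it before the publish */
	WRITE_ONCE(*publish_idx, idx + 1);	/* make it visible to the host */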
============
EXAMPLE USES
============

CIRCULAR BUFFERS
----------------

Memory barriers can be used to implement circular buffering without the need
3082 of a lock to serialise the producer with the consumer. See:
3084 Documentation/circular-buffers.txt
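
As a taste of the technique (a sketch only, with simplified index handling
and an assumed 'ring' structure whose size is a power of two; see the
document above for the full pattern):

	/* producer side */
	unsigned long head = ring->head;
	unsigned long tail = READ_ONCE(ring->tail);

	if (CIRC_SPACE(head, tail, ring->size) >= 1) {
		ring->buf[head] = item;			/* fill the slot */
		smp_store_release(&ring->head,		/* then publish it */
				  (head + 1) & (ring->size - 1));
	}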
==========
REFERENCES
==========

Alpha AXP Architecture Reference Manual, Second Edition (Sites & Witek,
Digital Press)
3095 Chapter 5.2: Physical Address Space Characteristics
3096 Chapter 5.4: Caches and Write Buffers
3097 Chapter 5.5: Data Sharing
3098 Chapter 5.6: Read/Write Ordering
3100 AMD64 Architecture Programmer's Manual Volume 2: System Programming
3101 Chapter 7.1: Memory-Access Ordering
3102 Chapter 7.4: Buffering and Combining Memory Writes
3104 IA-32 Intel Architecture Software Developer's Manual, Volume 3:
3105 System Programming Guide
3106 Chapter 7.1: Locked Atomic Operations
3107 Chapter 7.2: Memory Ordering
3108 Chapter 7.4: Serializing Instructions
3110 The SPARC Architecture Manual, Version 9
3111 Chapter 8: Memory Models
3112 Appendix D: Formal Specification of the Memory Models
3113 Appendix J: Programming with the Memory Models
3115 UltraSPARC Programmer Reference Manual
3116 Chapter 5: Memory Accesses and Cacheability
3117 Chapter 15: Sparc-V9 Memory Models
3119 UltraSPARC III Cu User's Manual
3120 Chapter 9: Memory Models
3122 UltraSPARC IIIi Processor User's Manual
3123 Chapter 8: Memory Models
3125 UltraSPARC Architecture 2005
3127 Appendix D: Formal Specifications of the Memory Models
3129 UltraSPARC T1 Supplement to the UltraSPARC Architecture 2005
3130 Chapter 8: Memory Models
3131 Appendix F: Caches and Cache Coherency
3133 Solaris Internals, Core Kernel Architecture, p63-68:
	Chapter 3.3: Hardware Considerations for Locks and Synchronization
3137 Unix Systems for Modern Architectures, Symmetric Multiprocessing and Caching
3138 for Kernel Programmers:
3139 Chapter 13: Other Memory Models
3141 Intel Itanium Architecture Software Developer's Manual: Volume 1:
3142 Section 2.6: Speculation
3143 Section 4.4: Memory Access