============================
LINUX KERNEL MEMORY BARRIERS
============================

By: David Howells <dhowells@redhat.com>
    Paul E. McKenney <paulmck@linux.vnet.ibm.com>
    Will Deacon <will.deacon@arm.com>
    Peter Zijlstra <peterz@infradead.org>

==========
DISCLAIMER
==========

This document is not a specification; it is intentionally (for the sake of
brevity) and unintentionally (due to being human) incomplete. This document is
meant as a guide to using the various memory barriers provided by Linux, but
in case of any doubt (and there are many) please ask.

To repeat, this document is not a specification of what Linux expects from
hardware.

The purpose of this document is twofold:

 (1) to specify the minimum functionality that one can rely on for any
     particular barrier, and

 (2) to provide a guide as to how to use the barriers that are available.

Note that an architecture can provide more than the minimum requirement
for any particular barrier, but if the architecture provides less than
that, that architecture is incorrect.

Note also that it is possible that a barrier may be a no-op for an
architecture because the way that arch works renders an explicit barrier
unnecessary in that case.


========
CONTENTS
========

 (*) Abstract memory access model.

     - Device operations.
     - Guarantees.

 (*) What are memory barriers?

     - Varieties of memory barrier.
     - What may not be assumed about memory barriers?
     - Data dependency barriers.
     - Control dependencies.
     - SMP barrier pairing.
     - Examples of memory barrier sequences.
     - Read memory barriers vs load speculation.
     - Transitivity.

 (*) Explicit kernel barriers.

     - Compiler barrier.
     - CPU memory barriers.
     - MMIO write barrier.

 (*) Implicit kernel memory barriers.

     - Lock acquisition functions.
     - Interrupt disabling functions.
     - Sleep and wake-up functions.
     - Miscellaneous functions.

 (*) Inter-CPU acquiring barrier effects.

     - Acquires vs memory accesses.
     - Acquires vs I/O accesses.

 (*) Where are memory barriers needed?

     - Interprocessor interaction.
     - Atomic operations.
     - Accessing devices.
     - Interrupts.

 (*) Kernel I/O barrier effects.

 (*) Assumed minimum execution ordering model.

 (*) The effects of the cpu cache.

     - Cache coherency.
     - Cache coherency vs DMA.
     - Cache coherency vs MMIO.

 (*) The things CPUs get up to.

     - And then there's the Alpha.
     - Virtual Machine Guests.

 (*) Example uses.

     - Circular buffers.

 (*) References.


============================
ABSTRACT MEMORY ACCESS MODEL
============================

Consider the following abstract model of the system:

		            :                :
		            :                :
		            :                :
		+-------+   :   +--------+   :   +-------+
		|       |   :   |        |   :   |       |
		|       |   :   |        |   :   |       |
		| CPU 1 |<----->| Memory |<----->| CPU 2 |
		|       |   :   |        |   :   |       |
		|       |   :   |        |   :   |       |
		+-------+   :   +--------+   :   +-------+
		    ^       :       ^        :       ^
		    |       :       |        :       |
		    |       :       |        :       |
		    |       :       v        :       |
		    |       :   +--------+   :       |
		    |       :   |        |   :       |
		    |       :   |        |   :       |
		    +---------->| Device |<----------+
		            :   |        |   :
		            :   |        |   :
		            :   +--------+   :
		            :                :

Each CPU executes a program that generates memory access operations. In the
abstract CPU, memory operation ordering is very relaxed, and a CPU may actually
perform the memory operations in any order it likes, provided program causality
appears to be maintained. Similarly, the compiler may also arrange the
instructions it emits in any order it likes, provided it doesn't affect the
apparent operation of the program.

So in the above diagram, the effects of the memory operations performed by a
CPU are perceived by the rest of the system as the operations cross the
interface between the CPU and rest of the system (the dotted lines).


For example, consider the following sequence of events:

	CPU 1           CPU 2
	=============== ===============
	{ A == 1; B == 2 }
	A = 3;          x = B;
	B = 4;          y = A;

The set of accesses as seen by the memory system in the middle can be arranged
in 24 different combinations:

	STORE A=3, STORE B=4, y=LOAD A->3, x=LOAD B->4
	STORE A=3, STORE B=4, x=LOAD B->4, y=LOAD A->3
	STORE A=3, y=LOAD A->3, STORE B=4, x=LOAD B->4
	STORE A=3, y=LOAD A->3, x=LOAD B->2, STORE B=4
	STORE A=3, x=LOAD B->2, STORE B=4, y=LOAD A->3
	STORE A=3, x=LOAD B->2, y=LOAD A->3, STORE B=4
	STORE B=4, STORE A=3, y=LOAD A->3, x=LOAD B->4
	STORE B=4, ...
	...

and can thus result in four different combinations of values:

	x == 2, y == 1
	x == 2, y == 3
	x == 4, y == 1
	x == 4, y == 3


Furthermore, the stores committed by a CPU to the memory system may not be
perceived by the loads made by another CPU in the same order as the stores were
committed.


As a further example, consider this sequence of events:

	CPU 1           CPU 2
	=============== ===============
	{ A == 1, B == 2, C == 3, P == &A, Q == &C }
	B = 4;          Q = P;
	P = &B;         D = *Q;

There is an obvious data dependency here, as the value loaded into D depends on
the address retrieved from P by CPU 2. At the end of the sequence, any of the
following results are possible:

	(Q == &A) and (D == 1)
	(Q == &B) and (D == 2)
	(Q == &B) and (D == 4)

Note that CPU 2 will never try to load C into D because the CPU will load P
into Q before issuing the load of *Q.


DEVICE OPERATIONS
-----------------

Some devices present their control interfaces as collections of memory
locations, but the order in which the control registers are accessed is very
important. For instance, imagine an ethernet card with a set of internal
registers that are accessed through an address port register (A) and a data
port register (D). To read internal register 5, the following code might then
be used:

	*A = 5;
	x = *D;

but this might show up as either of the following two sequences:

	STORE *A = 5, x = LOAD *D
	x = LOAD *D, STORE *A = 5
the second of which will almost certainly result in a malfunction, since it
sets the address _after_ attempting to read the register.

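In the Linux kernel, accesses like these normally go through the I/O accessor
functions rather than raw pointer dereferences. As a minimal sketch (the regs
mapping and the A_OFFSET/D_OFFSET register offsets are hypothetical, not taken
from any real driver):

	void __iomem *regs;	/* obtained from ioremap() */
	u32 x;

	writel(5, regs + A_OFFSET);	/* select internal register 5 */
	x = readl(regs + D_OFFSET);	/* then read the data port */

The ordering properties that such accessors can be expected to provide are
discussed in the "Kernel I/O barrier effects" section.
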

GUARANTEES
----------

There are some minimal guarantees that may be expected of a CPU:

 (*) On any given CPU, dependent memory accesses will be issued in order, with
     respect to itself. This means that for:

	Q = READ_ONCE(P); smp_read_barrier_depends(); D = READ_ONCE(*Q);

     the CPU will issue the following memory operations:

	Q = LOAD P, D = LOAD *Q

     and always in that order. On most systems, smp_read_barrier_depends()
     does nothing, but it is required for DEC Alpha. The READ_ONCE()
     is required to prevent compiler mischief. Please note that you
     should normally use something like rcu_dereference() instead of
     open-coding smp_read_barrier_depends().

 (*) Overlapping loads and stores within a particular CPU will appear to be
     ordered within that CPU. This means that for:

	a = READ_ONCE(*X); WRITE_ONCE(*X, b);

     the CPU will only issue the following sequence of memory operations:

	a = LOAD *X, STORE *X = b

     And for:

	WRITE_ONCE(*X, c); d = READ_ONCE(*X);

     the CPU will only issue:

	STORE *X = c, d = LOAD *X

     (Loads and stores overlap if they are targeted at overlapping pieces of
     memory).

And there are a number of things that _must_ or _must_not_ be assumed:

 (*) It _must_not_ be assumed that the compiler will do what you want
     with memory references that are not protected by READ_ONCE() and
     WRITE_ONCE(). Without them, the compiler is within its rights to
     do all sorts of "creative" transformations, which are covered in
     the COMPILER BARRIER section.

 (*) It _must_not_ be assumed that independent loads and stores will be issued
     in the order given. This means that for:

	X = *A; Y = *B; *D = Z;

     we may get any of the following sequences:

	X = LOAD *A, Y = LOAD *B, STORE *D = Z
	X = LOAD *A, STORE *D = Z, Y = LOAD *B
	Y = LOAD *B, X = LOAD *A, STORE *D = Z
	Y = LOAD *B, STORE *D = Z, X = LOAD *A
	STORE *D = Z, X = LOAD *A, Y = LOAD *B
	STORE *D = Z, Y = LOAD *B, X = LOAD *A

 (*) It _must_ be assumed that overlapping memory accesses may be merged or
     discarded. This means that for:

	X = *A; Y = *(A + 4);

     we may get any one of the following sequences:

	X = LOAD *A; Y = LOAD *(A + 4);
	Y = LOAD *(A + 4); X = LOAD *A;
	{X, Y} = LOAD {*A, *(A + 4) };

     And for:

	*A = X; *(A + 4) = Y;

     we may get any of:

	STORE *A = X; STORE *(A + 4) = Y;
	STORE *(A + 4) = Y; STORE *A = X;
	STORE {*A, *(A + 4) } = {X, Y};

And there are anti-guarantees:

 (*) These guarantees do not apply to bitfields, because compilers often
     generate code to modify these using non-atomic read-modify-write
     sequences. Do not attempt to use bitfields to synchronize parallel
     algorithms.

 (*) Even in cases where bitfields are protected by locks, all fields
     in a given bitfield must be protected by one lock. If two fields
     in a given bitfield are protected by different locks, the compiler's
     non-atomic read-modify-write sequences can cause an update to one
     field to corrupt the value of an adjacent field (see the sketch
     following this list).

 (*) These guarantees apply only to properly aligned and sized scalar
     variables. "Properly sized" currently means variables that are
     the same size as "char", "short", "int" and "long". "Properly
     aligned" means the natural alignment, thus no constraints for
     "char", two-byte alignment for "short", four-byte alignment for
     "int", and either four-byte or eight-byte alignment for "long",
     on 32-bit and 64-bit systems, respectively. Note that these
     guarantees were introduced into the C11 standard, so beware when
     using older pre-C11 compilers (for example, gcc 4.6). The portion
     of the standard containing this guarantee is Section 3.14, which
     defines "memory location" as follows:

	memory location
		either an object of scalar type, or a maximal sequence
		of adjacent bit-fields all having nonzero width

		NOTE 1: Two threads of execution can update and access
		separate memory locations without interfering with
		each other.

		NOTE 2: A bit-field and an adjacent non-bit-field member
		are in separate memory locations. The same applies
		to two bit-fields, if one is declared inside a nested
		structure declaration and the other is not, or if the two
		are separated by a zero-length bit-field declaration,
		or if they are separated by a non-bit-field member
		declaration. It is not safe to concurrently update two
		bit-fields in the same structure if all members declared
		between them are also bit-fields, no matter what the
		sizes of those intervening bit-fields happen to be.

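As an illustration of the bitfield anti-guarantees above, consider this sketch
(the structure, its fields, and the locks are hypothetical):

	struct foo {
		int a : 4;	/* protected by lock_a */
		int b : 4;	/* BUG if protected by a different lock_b:
				 * 'a' and 'b' share one memory location,
				 * so the compiler's non-atomic
				 * read-modify-write of that location can
				 * corrupt a concurrent update to 'a' */
	};

Making 'a' and 'b' ordinary int fields (or separating them with a
non-bit-field member) would place them in separate memory locations, to which
the guarantees above do apply.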

=========================
WHAT ARE MEMORY BARRIERS?
=========================

As can be seen above, independent memory operations are effectively performed
in random order, but this can be a problem for CPU-CPU interaction and for I/O.
What is required is some way of intervening to instruct the compiler and the
CPU to restrict the order.

Memory barriers are such interventions. They impose a perceived partial
ordering over the memory operations on either side of the barrier.

Such enforcement is important because the CPUs and other devices in a system
can use a variety of tricks to improve performance, including reordering,
deferral and combination of memory operations; speculative loads; speculative
branch prediction and various types of caching. Memory barriers are used to
override or suppress these tricks, allowing the code to sanely control the
interaction of multiple CPUs and/or devices.


VARIETIES OF MEMORY BARRIER
---------------------------

Memory barriers come in four basic varieties:

 (1) Write (or store) memory barriers.

     A write memory barrier gives a guarantee that all the STORE operations
     specified before the barrier will appear to happen before all the STORE
     operations specified after the barrier with respect to the other
     components of the system.

     A write barrier is a partial ordering on stores only; it is not required
     to have any effect on loads.

     A CPU can be viewed as committing a sequence of store operations to the
     memory system as time progresses. All stores before a write barrier will
     occur in the sequence _before_ all the stores after the write barrier.

     [!] Note that write barriers should normally be paired with read or data
     dependency barriers; see the "SMP barrier pairing" subsection.


 (2) Data dependency barriers.

     A data dependency barrier is a weaker form of read barrier. In the case
     where two loads are performed such that the second depends on the result
     of the first (eg: the first load retrieves the address to which the second
     load will be directed), a data dependency barrier would be required to
     make sure that the target of the second load is updated before the address
     obtained by the first load is accessed.

     A data dependency barrier is a partial ordering on interdependent loads
     only; it is not required to have any effect on stores, independent loads
     or overlapping loads.

     As mentioned in (1), the other CPUs in the system can be viewed as
     committing sequences of stores to the memory system that the CPU being
     considered can then perceive. A data dependency barrier issued by the CPU
     under consideration guarantees that for any load preceding it, if that
     load touches one of a sequence of stores from another CPU, then by the
     time the barrier completes, the effects of all the stores prior to that
     touched by the load will be perceptible to any loads issued after the data
     dependency barrier.

     See the "Examples of memory barrier sequences" subsection for diagrams
     showing the ordering constraints.

     [!] Note that the first load really has to have a _data_ dependency and
     not a control dependency. If the address for the second load is dependent
     on the first load, but the dependency is through a conditional rather than
     actually loading the address itself, then it's a _control_ dependency and
     a full read barrier or better is required. See the "Control dependencies"
     subsection for more information.

     [!] Note that data dependency barriers should normally be paired with
     write barriers; see the "SMP barrier pairing" subsection.


 (3) Read (or load) memory barriers.

     A read barrier is a data dependency barrier plus a guarantee that all the
     LOAD operations specified before the barrier will appear to happen before
     all the LOAD operations specified after the barrier with respect to the
     other components of the system.

     A read barrier is a partial ordering on loads only; it is not required to
     have any effect on stores.

     Read memory barriers imply data dependency barriers, and so can substitute
     for them.

     [!] Note that read barriers should normally be paired with write barriers;
     see the "SMP barrier pairing" subsection.


 (4) General memory barriers.

     A general memory barrier gives a guarantee that all the LOAD and STORE
     operations specified before the barrier will appear to happen before all
     the LOAD and STORE operations specified after the barrier with respect to
     the other components of the system.

     A general memory barrier is a partial ordering over both loads and stores.

     General memory barriers imply both read and write memory barriers, and so
     can substitute for either.


And a couple of implicit varieties:

 (5) ACQUIRE operations.

     This acts as a one-way permeable barrier. It guarantees that all memory
     operations after the ACQUIRE operation will appear to happen after the
     ACQUIRE operation with respect to the other components of the system.
     ACQUIRE operations include LOCK operations and both smp_load_acquire()
     and smp_cond_acquire() operations. The latter builds the necessary
     ACQUIRE semantics from a control dependency combined with smp_rmb().

     Memory operations that occur before an ACQUIRE operation may appear to
     happen after it completes.

     An ACQUIRE operation should almost always be paired with a RELEASE
     operation.


 (6) RELEASE operations.

     This also acts as a one-way permeable barrier. It guarantees that all
     memory operations before the RELEASE operation will appear to happen
     before the RELEASE operation with respect to the other components of the
     system. RELEASE operations include UNLOCK operations and
     smp_store_release() operations.

     Memory operations that occur after a RELEASE operation may appear to
     happen before it completes.

     The use of ACQUIRE and RELEASE operations generally precludes the need
     for other sorts of memory barrier (but note the exceptions mentioned in
     the subsection "MMIO write barrier"). In addition, a RELEASE+ACQUIRE
     pair is -not- guaranteed to act as a full memory barrier. However, after
     an ACQUIRE on a given variable, all memory accesses preceding any prior
     RELEASE on that same variable are guaranteed to be visible. In other
     words, within a given variable's critical section, all accesses of all
     previous critical sections for that variable are guaranteed to have
     completed.

     This means that ACQUIRE acts as a minimal "acquire" operation and
     RELEASE acts as a minimal "release" operation; a sketch of such a
     pairing follows below.

A subset of the atomic operations described in atomic_ops.txt have ACQUIRE
and RELEASE variants in addition to fully-ordered and relaxed (no barrier
semantics) definitions. For compound atomics performing both a load and a
store, ACQUIRE semantics apply only to the load and RELEASE semantics apply
only to the store portion of the operation.

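As a minimal sketch of an ACQUIRE/RELEASE pairing built from
smp_store_release() and smp_load_acquire() (the data/flag variables and the
writer()/reader() functions are hypothetical):

	int data, flag;

	void writer(void)		/* runs on CPU 1 */
	{
		WRITE_ONCE(data, 42);
		/* RELEASE: orders the store to data before the store to flag */
		smp_store_release(&flag, 1);
	}

	void reader(void)		/* runs on CPU 2 */
	{
		/* ACQUIRE: orders the load of flag before the load of data */
		if (smp_load_acquire(&flag))
			BUG_ON(READ_ONCE(data) != 42);
	}

If reader() observes flag == 1, the pairing guarantees that it also observes
data == 42; no such guarantee would exist with plain loads and stores.
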
Memory barriers are only required where there's a possibility of interaction
between two CPUs or between a CPU and a device. If it can be guaranteed that
there won't be any such interaction in any particular piece of code, then
memory barriers are unnecessary in that piece of code.


Note that these are the _minimum_ guarantees. Different architectures may give
more substantial guarantees, but they may _not_ be relied upon outside of arch
specific code.


WHAT MAY NOT BE ASSUMED ABOUT MEMORY BARRIERS?
----------------------------------------------

There are certain things that the Linux kernel memory barriers do not guarantee:

 (*) There is no guarantee that any of the memory accesses specified before a
     memory barrier will be _complete_ by the completion of a memory barrier
     instruction; the barrier can be considered to draw a line in that CPU's
     access queue that accesses of the appropriate type may not cross.

 (*) There is no guarantee that issuing a memory barrier on one CPU will have
     any direct effect on another CPU or any other hardware in the system. The
     indirect effect will be the order in which the second CPU sees the effects
     of the first CPU's accesses occur, but see the next point:

 (*) There is no guarantee that a CPU will see the correct order of effects
     from a second CPU's accesses, even _if_ the second CPU uses a memory
     barrier, unless the first CPU _also_ uses a matching memory barrier (see
     the subsection on "SMP Barrier Pairing").

 (*) There is no guarantee that some intervening piece of off-the-CPU
     hardware[*] will not reorder the memory accesses. CPU cache coherency
     mechanisms should propagate the indirect effects of a memory barrier
     between CPUs, but might not do so in order.

	[*] For information on bus mastering DMA and coherency please read:

	    Documentation/PCI/pci.txt
	    Documentation/DMA-API-HOWTO.txt
	    Documentation/DMA-API.txt


DATA DEPENDENCY BARRIERS
------------------------

The usage requirements of data dependency barriers are a little subtle, and
it's not always obvious that they're needed. To illustrate, consider the
following sequence of events:

	CPU 1                 CPU 2
	===============       ===============
	{ A == 1, B == 2, C == 3, P == &A, Q == &C }
	B = 4;
	<write barrier>
	WRITE_ONCE(P, &B);
	                      Q = READ_ONCE(P);
	                      D = *Q;

There's a clear data dependency here, and it would seem that by the end of the
sequence, Q must be either &A or &B, and that:

	(Q == &A) implies (D == 1)
	(Q == &B) implies (D == 4)

But! CPU 2's perception of P may be updated _before_ its perception of B, thus
leading to the following situation:

	(Q == &B) and (D == 2) ????

Whilst this may seem like a failure of coherency or causality maintenance, it
isn't, and this behaviour can be observed on certain real CPUs (such as the DEC
Alpha).

To deal with this, a data dependency barrier or better must be inserted
between the address load and the data load:

	CPU 1                 CPU 2
	===============       ===============
	{ A == 1, B == 2, C == 3, P == &A, Q == &C }
	B = 4;
	<write barrier>
	WRITE_ONCE(P, &B);
	                      Q = READ_ONCE(P);
	                      <data dependency barrier>
	                      D = *Q;

This enforces the occurrence of one of the two implications, and prevents the
third possibility from arising.

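In kernel C, the abstract <data dependency barrier> above corresponds to
smp_read_barrier_depends(). A minimal sketch of the same sequence, using the
variables above:

	/* CPU 1 */
	B = 4;
	smp_wmb();			/* write barrier */
	WRITE_ONCE(P, &B);

	/* CPU 2 */
	Q = READ_ONCE(P);
	smp_read_barrier_depends();	/* data dependency barrier */
	D = *Q;

As noted in the GUARANTEES section, new code should normally use something
like rcu_dereference() rather than open-coding smp_read_barrier_depends().
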
A data-dependency barrier must also order against dependent writes:

	CPU 1                 CPU 2
	===============       ===============
	{ A == 1, B == 2, C == 3, P == &A, Q == &C }
	B = 4;
	<write barrier>
	WRITE_ONCE(P, &B);
	                      Q = READ_ONCE(P);
	                      <data dependency barrier>
	                      *Q = 5;

The data-dependency barrier must order the read into Q with the store
into *Q. This prohibits this outcome:

	(Q == &B) && (B == 4)

Please note that this pattern should be rare. After all, the whole point
of dependency ordering is to -prevent- writes to the data structure, along
with the expensive cache misses associated with those writes. This pattern
can be used to record rare error conditions and the like, and the ordering
prevents such records from being lost.


[!] Note that this extremely counterintuitive situation arises most easily on
machines with split caches, so that, for example, one cache bank processes
even-numbered cache lines and the other bank processes odd-numbered cache
lines. The pointer P might be stored in an odd-numbered cache line, and the
variable B might be stored in an even-numbered cache line. Then, if the
even-numbered bank of the reading CPU's cache is extremely busy while the
odd-numbered bank is idle, one can see the new value of the pointer P (&B),
but the old value of the variable B (2).


The data dependency barrier is very important to the RCU system,
for example. See rcu_assign_pointer() and rcu_dereference() in
include/linux/rcupdate.h. This permits the current target of an RCU'd
pointer to be replaced with a new modified target, without the replacement
target appearing to be incompletely initialised.

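A minimal sketch of that publish/subscribe pattern (struct foo, the gp
pointer, and do_something_with() are hypothetical):

	struct foo {
		int a;
	};
	struct foo *gp;

	/* Publisher */
	struct foo *new = kmalloc(sizeof(*new), GFP_KERNEL);

	new->a = 42;			/* initialise first... */
	rcu_assign_pointer(gp, new);	/* ...then publish; this supplies the
					 * write barrier before the store */

	/* Subscriber, between rcu_read_lock() and rcu_read_unlock() */
	struct foo *p = rcu_dereference(gp);	/* supplies the data
						 * dependency barrier */
	if (p)
		do_something_with(p->a);	/* guaranteed to see 42 */
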
See also the subsection on "Cache Coherency" for a more thorough example.


CONTROL DEPENDENCIES
--------------------

A load-load control dependency requires a full read memory barrier, not
simply a data dependency barrier to make it work correctly. Consider the
following bit of code:

	q = READ_ONCE(a);
	if (q) {
		<data dependency barrier>  /* BUG: No data dependency!!! */
		p = READ_ONCE(b);
	}

This will not have the desired effect because there is no actual data
dependency, but rather a control dependency that the CPU may short-circuit
by attempting to predict the outcome in advance, so that other CPUs see
the load from b as having happened before the load from a. In such a
case what's actually required is:

	q = READ_ONCE(a);
	if (q) {
		<read barrier>
		p = READ_ONCE(b);
	}

However, stores are not speculated. This means that ordering -is- provided
for load-store control dependencies, as in the following example:

	q = READ_ONCE(a);
	if (q) {
		WRITE_ONCE(b, p);
	}

Control dependencies pair normally with other types of barriers. That
said, please note that READ_ONCE() is not optional! Without the
READ_ONCE(), the compiler might combine the load from 'a' with other
loads from 'a', and the store to 'b' with other stores to 'b', with
possible highly counterintuitive effects on ordering.

Worse yet, if the compiler is able to prove (say) that the value of
variable 'a' is always non-zero, it would be well within its rights
to optimize the original example by eliminating the "if" statement
as follows:

	q = a;
	b = p;  /* BUG: Compiler and CPU can both reorder!!! */

So don't leave out the READ_ONCE().

It is tempting to try to enforce ordering on identical stores on both
branches of the "if" statement as follows:

	q = READ_ONCE(a);
	if (q) {
		barrier();
		WRITE_ONCE(b, p);
		do_something();
	} else {
		barrier();
		WRITE_ONCE(b, p);
		do_something_else();
	}

Unfortunately, current compilers will transform this as follows at high
optimization levels:

	q = READ_ONCE(a);
	barrier();
	WRITE_ONCE(b, p);  /* BUG: No ordering vs. load from a!!! */
	if (q) {
		/* WRITE_ONCE(b, p); -- moved up, BUG!!! */
		do_something();
	} else {
		/* WRITE_ONCE(b, p); -- moved up, BUG!!! */
		do_something_else();
	}

Now there is no conditional between the load from 'a' and the store to
'b', which means that the CPU is within its rights to reorder them:
The conditional is absolutely required, and must be present in the
assembly code even after all compiler optimizations have been applied.
Therefore, if you need ordering in this example, you need explicit
memory barriers, for example, smp_store_release():

	q = READ_ONCE(a);
	if (q) {
		smp_store_release(&b, p);
		do_something();
	} else {
		smp_store_release(&b, p);
		do_something_else();
	}

In contrast, without explicit memory barriers, two-legged-if control
ordering is guaranteed only when the stores differ, for example:

	q = READ_ONCE(a);
	if (q) {
		WRITE_ONCE(b, p);
		do_something();
	} else {
		WRITE_ONCE(b, r);
		do_something_else();
	}

The initial READ_ONCE() is still required to prevent the compiler from
proving the value of 'a'.

In addition, you need to be careful what you do with the local variable 'q',
otherwise the compiler might be able to guess the value and again remove
the needed conditional. For example:

	q = READ_ONCE(a);
	if (q % MAX) {
		WRITE_ONCE(b, p);
		do_something();
	} else {
		WRITE_ONCE(b, r);
		do_something_else();
	}

If MAX is defined to be 1, then the compiler knows that (q % MAX) is
equal to zero, in which case the compiler is within its rights to
transform the above code into the following:

	q = READ_ONCE(a);
	WRITE_ONCE(b, p);
	do_something_else();

Given this transformation, the CPU is not required to respect the ordering
between the load from variable 'a' and the store to variable 'b'. It is
tempting to add a barrier(), but this does not help. The conditional
is gone, and the barrier won't bring it back. Therefore, if you are
relying on this ordering, you should make sure that MAX is greater than
one, perhaps as follows:

	q = READ_ONCE(a);
	BUILD_BUG_ON(MAX <= 1);  /* Order load from a with store to b. */
	if (q % MAX) {
		WRITE_ONCE(b, p);
		do_something();
	} else {
		WRITE_ONCE(b, r);
		do_something_else();
	}

Please note once again that the stores to 'b' differ. If they were
identical, as noted earlier, the compiler could pull this store outside
of the 'if' statement.

You must also be careful not to rely too much on boolean short-circuit
evaluation. Consider this example:

	q = READ_ONCE(a);
	if (q || 1 > 0)
		WRITE_ONCE(b, 1);

Because the first condition cannot fault and the second condition is
always true, the compiler can transform this example as follows,
defeating the control dependency:

	q = READ_ONCE(a);
	WRITE_ONCE(b, 1);

This example underscores the need to ensure that the compiler cannot
out-guess your code. More generally, although READ_ONCE() does force
the compiler to actually emit code for a given load, it does not force
the compiler to use the results.

In addition, control dependencies apply only to the then-clause and
else-clause of the if-statement in question. In particular, they do
not necessarily apply to code following the if-statement:

	q = READ_ONCE(a);
	if (q) {
		WRITE_ONCE(b, p);
	} else {
		WRITE_ONCE(b, r);
	}
	WRITE_ONCE(c, 1);  /* BUG: No ordering against the read from "a". */

It is tempting to argue that there in fact is ordering because the
compiler cannot reorder volatile accesses and also cannot reorder
the writes to "b" with the condition. Unfortunately for this line
of reasoning, the compiler might compile the two writes to "b" as
conditional-move instructions, as in this fanciful pseudo-assembly
language:

	ld r1,a
	ld r2,p
	ld r3,r
	cmp r1,$0
	cmov,ne r4,r2
	cmov,eq r4,r3
	st r4,b
	st $1,c

A weakly ordered CPU would have no dependency of any sort between the load
from "a" and the store to "c". The control dependencies would extend
only to the pair of cmov instructions and the store depending on them.
In short, control dependencies apply only to the stores in the then-clause
and else-clause of the if-statement in question (including functions
invoked by those two clauses), not to code following that if-statement.

Finally, control dependencies do -not- provide transitivity. This is
demonstrated by two related examples, with the initial values of
x and y both being zero:

	CPU 0                     CPU 1
	=======================   =======================
	r1 = READ_ONCE(x);        r2 = READ_ONCE(y);
	if (r1 > 0)               if (r2 > 0)
	  WRITE_ONCE(y, 1);         WRITE_ONCE(x, 1);

	assert(!(r1 == 1 && r2 == 1));

The above two-CPU example will never trigger the assert(). However,
if control dependencies guaranteed transitivity (which they do not),
then adding the following CPU would guarantee a related assertion:

	CPU 2
	=====================
	WRITE_ONCE(x, 2);

	assert(!(r1 == 2 && r2 == 1 && x == 2)); /* FAILS!!! */

But because control dependencies do -not- provide transitivity, the above
assertion can fail after the combined three-CPU example completes. If you
need the three-CPU example to provide ordering, you will need smp_mb()
between the loads and stores in the CPU 0 and CPU 1 code fragments,
that is, just before or just after the "if" statements. Furthermore,
the original two-CPU example is very fragile and should be avoided.

These two examples are the LB and WWC litmus tests from this paper:
http://www.cl.cam.ac.uk/users/pes20/ppc-supplemental/test6.pdf and this
site: https://www.cl.cam.ac.uk/~pes20/ppcmem/index.html.

In summary:

 (*) Control dependencies can order prior loads against later stores.
     However, they do -not- guarantee any other sort of ordering:
     Not prior loads against later loads, nor prior stores against
     later anything. If you need these other forms of ordering,
     use smp_rmb(), smp_wmb(), or, in the case of prior stores and
     later loads, smp_mb().

 (*) If both legs of the "if" statement begin with identical stores to
     the same variable, then those stores must be ordered, either by
     preceding both of them with smp_mb() or by using smp_store_release()
     to carry out the stores. Please note that it is -not- sufficient
     to use barrier() at the beginning of each leg of the "if" statement
     because, as shown by the example above, optimizing compilers can
     destroy the control dependency while respecting the letter of the
     barrier() law.

 (*) Control dependencies require at least one run-time conditional
     between the prior load and the subsequent store, and this
     conditional must involve the prior load. If the compiler is able
     to optimize the conditional away, it will have also optimized
     away the ordering. Careful use of READ_ONCE() and WRITE_ONCE()
     can help to preserve the needed conditional.

 (*) Control dependencies require that the compiler avoid reordering the
     dependency into nonexistence. Careful use of READ_ONCE() or
     atomic{,64}_read() can help to preserve your control dependency.
     Please see the COMPILER BARRIER section for more information.

 (*) Control dependencies apply only to the then-clause and else-clause
     of the if-statement containing the control dependency, including
     any functions that these two clauses call. Control dependencies
     do -not- apply to code following the if-statement containing the
     control dependency.

 (*) Control dependencies pair normally with other types of barriers.

 (*) Control dependencies do -not- provide transitivity. If you
     need transitivity, use smp_mb().


SMP BARRIER PAIRING
-------------------

When dealing with CPU-CPU interactions, certain types of memory barrier should
always be paired. A lack of appropriate pairing is almost certainly an error.

General barriers pair with each other, though they also pair with most
other types of barriers, albeit without transitivity. An acquire barrier
pairs with a release barrier, but both may also pair with other barriers,
including of course general barriers. A write barrier pairs with a data
dependency barrier, a control dependency, an acquire barrier, a release
barrier, a read barrier, or a general barrier. Similarly a read barrier,
control dependency, or a data dependency barrier pairs with a write
barrier, an acquire barrier, a release barrier, or a general barrier:

	CPU 1                 CPU 2
	===============       ===============
	WRITE_ONCE(a, 1);
	<write barrier>
	WRITE_ONCE(b, 2);     x = READ_ONCE(b);
	                      <read barrier>
	                      y = READ_ONCE(a);

Or:

	CPU 1                 CPU 2
	===============       ===============================
	a = 1;
	<write barrier>
	WRITE_ONCE(b, &a);    x = READ_ONCE(b);
	                      <data dependency barrier>
	                      y = *x;

Or even:

	CPU 1                 CPU 2
	===============       ===============================
	r1 = READ_ONCE(y);
	<general barrier>
	WRITE_ONCE(x, 1);     if (r2 = READ_ONCE(x)) {
	                         <implicit control dependency>
	                         WRITE_ONCE(y, 1);
	                      }

	assert(r1 == 0 || r2 == 0);

Basically, the read barrier always has to be there, even though it can be of
the "weaker" type.

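In kernel C, the first pairing above is the classic message-passing pattern
built from smp_wmb() and smp_rmb(); a minimal sketch (the msg/ready variables
and the producer()/consumer() functions are hypothetical):

	int msg, ready;

	void producer(void)		/* runs on CPU 1 */
	{
		msg = 42;
		smp_wmb();		/* write barrier */
		WRITE_ONCE(ready, 1);
	}

	void consumer(void)		/* runs on CPU 2 */
	{
		if (READ_ONCE(ready)) {
			smp_rmb();	/* read barrier, pairing with the
					 * producer's smp_wmb() */
			BUG_ON(msg != 42);
		}
	}

Omitting either barrier breaks the pairing: the consumer could then observe
ready == 1 but still see the old value of msg.
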
[!] Note that the stores before the write barrier would normally be expected to
match the loads after the read barrier or the data dependency barrier, and vice
versa:

	CPU 1                               CPU 2
	===================                 ===================
	WRITE_ONCE(a, 1);    }----   --->{  v = READ_ONCE(c);
	WRITE_ONCE(b, 2);    }    \ /    {  w = READ_ONCE(d);
	<write barrier>            \        <read barrier>
	WRITE_ONCE(c, 3);    }    / \    {  x = READ_ONCE(a);
	WRITE_ONCE(d, 4);    }----   --->{  y = READ_ONCE(b);


EXAMPLES OF MEMORY BARRIER SEQUENCES
------------------------------------

Firstly, write barriers act as partial orderings on store operations.
Consider the following sequence of events:

	CPU 1
	=======================
	STORE A = 1
	STORE B = 2
	STORE C = 3
	<write barrier>
	STORE D = 4
	STORE E = 5

This sequence of events is committed to the memory coherence system in an order
that the rest of the system might perceive as the unordered set of { STORE A,
STORE B, STORE C } all occurring before the unordered set of { STORE D, STORE E
}:

	+-------+       :      :
	|       |       +------+
	|       |------>| C=3  |     }     /\
	|       |  :    +------+     }-----  \  -----> Events perceptible to
	|       |  :    | A=1  |     }        \/       the rest of the system
	|       |  :    +------+     }
	| CPU 1 |  :    | B=2  |     }
	|       |       +------+     }
	|       |   wwwwwwwwwwwwwwww }   <--- At this point the write barrier
	|       |       +------+     }        requires all stores prior to the
	|       |  :    | E=5  |     }        barrier to be committed before
	|       |  :    +------+     }        further stores may take place
	|       |------>| D=4  |     }
	|       |       +------+
	+-------+       :      :
	                   |
	                   | Sequence in which stores are committed to the
	                   | memory system by CPU 1
	                   V


Secondly, data dependency barriers act as partial orderings on data-dependent
loads. Consider the following sequence of events:

	CPU 1                   CPU 2
	======================= =======================
	        { B = 7; X = 9; Y = 8; C = &Y }
	STORE A = 1
	STORE B = 2
	<write barrier>
	STORE C = &B            LOAD X
	STORE D = 4             LOAD C (gets &B)
	                        LOAD *C (reads B)

Without intervention, CPU 2 may perceive the events on CPU 1 in some
effectively random order, despite the write barrier issued by CPU 1:

	+-------+       :      :                :       :
	|       |       +------+                +-------+  | Sequence of update
	|       |------>| B=2  |-----       --->| Y->8  |  | of perception on
	|       |  :    +------+     \          +-------+  | CPU 2
	| CPU 1 |  :    | A=1  |      \     --->| C->&Y |  V
	|       |       +------+       |        +-------+
	|       |   wwwwwwwwwwwwwwww   |        :       :
	|       |       +------+       |        :       :
	|       |  :    | C=&B |---    |        :       :       +-------+
	|       |  :    +------+   \   |        +-------+       |       |
	|       |------>| D=4  |    ----------->| C->&B |------>|       |
	|       |       +------+       |        +-------+       |       |
	+-------+       :      :       |        :       :       |       |
	                               |        :       :       |       |
	                               |        :       :       | CPU 2 |
	                               |        +-------+       |       |
	    Apparently incorrect --->  |        | B->7  |------>|       |
	    perception of B (!)        |        +-------+       |       |
	                               |        :       :       |       |
	                               |        +-------+       |       |
	    The load of X holds --->    \       | X->9  |------>|       |
	    up the maintenance           \      +-------+       |       |
	    of coherence of B             ----->| B->2  |       +-------+
	                                        +-------+
	                                        :       :


In the above example, CPU 2 perceives that B is 7, despite the load of *C
(which would be B) coming after the LOAD of C.

If, however, a data dependency barrier were to be placed between the load of C
and the load of *C (ie: B) on CPU 2:

	CPU 1                   CPU 2
	======================= =======================
	        { B = 7; X = 9; Y = 8; C = &Y }
	STORE A = 1
	STORE B = 2
	<write barrier>
	STORE C = &B            LOAD X
	STORE D = 4             LOAD C (gets &B)
	                        <data dependency barrier>
	                        LOAD *C (reads B)

then the following will occur:

	+-------+       :      :                :       :
	|       |       +------+                +-------+
	|       |------>| B=2  |-----       --->| Y->8  |
	|       |  :    +------+     \          +-------+
	| CPU 1 |  :    | A=1  |      \     --->| C->&Y |
	|       |       +------+       |        +-------+
	|       |   wwwwwwwwwwwwwwww   |        :       :
	|       |       +------+       |        :       :
	|       |  :    | C=&B |---    |        :       :       +-------+
	|       |  :    +------+   \   |        +-------+       |       |
	|       |------>| D=4  |    ----------->| C->&B |------>|       |
	|       |       +------+       |        +-------+       |       |
	+-------+       :      :       |        :       :       |       |
	                               |        :       :       |       |
	                               |        :       :       | CPU 2 |
	                               |        +-------+       |       |
	                               |        | X->9  |------>|       |
	                               |        +-------+       |       |
	  Makes sure all effects --->   \   ddddddddddddddddd   |       |
	  prior to the store of C        \      +-------+       |       |
	  are perceptible to         ----->| B->2  |------>|       |
	  subsequent loads                      +-------+       |       |
	                                        :       :       +-------+


And thirdly, a read barrier acts as a partial order on loads. Consider the
following sequence of events:

	CPU 1                   CPU 2
	======================= =======================
	        { A = 0, B = 9 }
	STORE A=1
	<write barrier>
	STORE B=2
	                        LOAD B
	                        LOAD A

Without intervention, CPU 2 may then choose to perceive the events on CPU 1 in
some effectively random order, despite the write barrier issued by CPU 1:

	+-------+       :      :                :       :
	|       |       +------+                +-------+
	|       |------>| A=1  |------      --->| A->0  |
	|       |       +------+      \         +-------+
	| CPU 1 |   wwwwwwwwwwwwwwww   \    --->| B->9  |
	|       |       +------+        |       +-------+
	|       |------>| B=2  |---     |       :       :
	|       |       +------+   \    |       :       :       +-------+
	+-------+       :      :    \   |       +-------+       |       |
	                             ---------->| B->2  |------>|       |
	                                |       +-------+       | CPU 2 |
	                                |       | A->0  |------>|       |
	                                |       +-------+       |       |
	                                |       :       :       +-------+
	                                 \      :       :
	                                  \     +-------+
	                                   ---->| A->1  |
	                                        +-------+
	                                        :       :


If, however, a read barrier were to be placed between the load of B and the
load of A on CPU 2:

	CPU 1                   CPU 2
	======================= =======================
	        { A = 0, B = 9 }
	STORE A=1
	<write barrier>
	STORE B=2
	                        LOAD B
	                        <read barrier>
	                        LOAD A

then the partial ordering imposed by CPU 1 will be perceived correctly by CPU
2:

	+-------+       :      :                :       :
	|       |       +------+                +-------+
	|       |------>| A=1  |------      --->| A->0  |
	|       |       +------+      \         +-------+
	| CPU 1 |   wwwwwwwwwwwwwwww   \    --->| B->9  |
	|       |       +------+        |       +-------+
	|       |------>| B=2  |---     |       :       :
	|       |       +------+   \    |       :       :       +-------+
	+-------+       :      :    \   |       +-------+       |       |
	                             ---------->| B->2  |------>|       |
	                                |       +-------+       | CPU 2 |
	                                |       :       :       |       |
	                                |       :       :       |       |
	  At this point the read ---->   \  rrrrrrrrrrrrrrrrr   |       |
	  barrier causes all effects      \     +-------+       |       |
	  prior to the storage of B  ---->| A->1  |------>|       |
	  to be perceptible to CPU 2            +-------+       |       |
	                                        :       :       +-------+


To illustrate this more completely, consider what could happen if the code
contained a load of A either side of the read barrier:

	CPU 1                   CPU 2
	======================= =======================
	        { A = 0, B = 9 }
	STORE A=1
	<write barrier>
	STORE B=2
	                        LOAD B
	                        LOAD A [first load of A]
	                        <read barrier>
	                        LOAD A [second load of A]

Even though the two loads of A both occur after the load of B, they may
come up with different values:

	+-------+       :      :                :       :
	|       |       +------+                +-------+
	|       |------>| A=1  |------      --->| A->0  |
	|       |       +------+      \         +-------+
	| CPU 1 |   wwwwwwwwwwwwwwww   \    --->| B->9  |
	|       |       +------+        |       +-------+
	|       |------>| B=2  |---     |       :       :
	|       |       +------+   \    |       :       :       +-------+
	+-------+       :      :    \   |       +-------+       |       |
	                             ---------->| B->2  |------>|       |
	                                |       +-------+       | CPU 2 |
	                                |       :       :       |       |
	                                |       :       :       |       |
	                                |       +-------+       |       |
	                                |       | A->0  |------>| 1st   |
	                                |       +-------+       |       |
	  At this point the read ---->   \  rrrrrrrrrrrrrrrrr   |       |
	  barrier causes all effects      \     +-------+       |       |
	  prior to the storage of B  ---->| A->1  |------>| 2nd   |
	  to be perceptible to CPU 2            +-------+       |       |
	                                        :       :       +-------+


But it may be that the update to A from CPU 1 becomes perceptible to CPU 2
before the read barrier completes anyway:

	+-------+       :      :                :       :
	|       |       +------+                +-------+
	|       |------>| A=1  |------      --->| A->0  |
	|       |       +------+      \         +-------+
	| CPU 1 |   wwwwwwwwwwwwwwww   \    --->| B->9  |
	|       |       +------+        |       +-------+
	|       |------>| B=2  |---     |       :       :
	|       |       +------+   \    |       :       :       +-------+
	+-------+       :      :    \   |       +-------+       |       |
	                             ---------->| B->2  |------>|       |
	                                |       +-------+       | CPU 2 |
	                                |       :       :       |       |
	                                 \      :       :       |       |
	                                  \     +-------+       |       |
	                                   ---->| A->1  |------>| 1st   |
	                                        +-------+       |       |
	                                    rrrrrrrrrrrrrrrrr   |       |
	                                        +-------+       |       |
	                                        | A->1  |------>| 2nd   |
	                                        +-------+       |       |
	                                        :       :       +-------+


The guarantee is that the second load will always come up with A == 1 if the
load of B came up with B == 2. No such guarantee exists for the first load of
A; that may come up with either A == 0 or A == 1.


READ MEMORY BARRIERS VS LOAD SPECULATION
----------------------------------------

Many CPUs speculate with loads: that is, they see that they will need to load
an item from memory, and they find a time when they're not using the bus for
any other loads, and so do the load in advance - even though they haven't
actually got to that point in the instruction execution flow yet. This permits
the actual load instruction to potentially complete immediately because the
CPU already has the value to hand.

It may turn out that the CPU didn't actually need the value - perhaps because a
branch circumvented the load - in which case it can discard the value or just
cache it for later use.

Consider:

	CPU 1                   CPU 2
	======================= =======================
	                        LOAD B
	                        DIVIDE          } Divide instructions generally
	                        DIVIDE          } take a long time to perform
	                        LOAD A

Which might appear as this:

	                                        :       :       +-------+
	                                        +-------+       |       |
	                                    --->| B->2  |------>|       |
	                                        +-------+       | CPU 2 |
	                                        :       :DIVIDE |       |
	                                        +-------+       |       |
	The CPU being busy doing a --->     --->| A->0  |~~~~   |       |
	division speculates on the             +-------+   ~   |       |
	LOAD of A                               :       :   ~   |       |
	                                        :       :DIVIDE |       |
	                                        :       :   ~   |       |
	Once the divisions are complete -->     :       :   ~-->|       |
	the CPU can then perform the            :       :       |       |
	LOAD with immediate effect              :       :       +-------+


Placing a read barrier or a data dependency barrier just before the second
load:

	CPU 1                   CPU 2
	======================= =======================
	                        LOAD B
	                        DIVIDE
	                        DIVIDE
	                        <read barrier>
	                        LOAD A

will force any value speculatively obtained to be reconsidered to an extent
dependent on the type of barrier used. If there was no change made to the
speculated memory location, then the speculated value will just be used:

	                                        :       :       +-------+
	                                        +-------+       |       |
	                                    --->| B->2  |------>|       |
	                                        +-------+       | CPU 2 |
	                                        :       :DIVIDE |       |
	                                        +-------+       |       |
	The CPU being busy doing a --->     --->| A->0  |~~~~   |       |
	division speculates on the             +-------+   ~   |       |
	LOAD of A                               :       :   ~   |       |
	                                        :       :DIVIDE |       |
	                                        :       :   ~   |       |
	                                        :       :   ~   |       |
	                                    rrrrrrrrrrrrrrrr~   |       |
	                                        :       :   ~   |       |
	                                        :       :   ~-->|       |
	                                        :       :       |       |
	                                        :       :       +-------+


but if there was an update or an invalidation from another CPU pending, then
the speculation will be cancelled and the value reloaded:

	                                        :       :       +-------+
	                                        +-------+       |       |
	                                    --->| B->2  |------>|       |
	                                        +-------+       | CPU 2 |
	                                        :       :DIVIDE |       |
	                                        +-------+       |       |
	The CPU being busy doing a --->     --->| A->0  |~~~~   |       |
	division speculates on the             +-------+   ~   |       |
	LOAD of A                               :       :   ~   |       |
	                                        :       :DIVIDE |       |
	                                        :       :   ~   |       |
	                                        :       :   ~   |       |
	                                    rrrrrrrrrrrrrrrrr   |       |
	                                        +-------+       |       |
	The speculation is discarded --->   --->| A->1  |------>|       |
	and an updated value is                 +-------+       |       |
	retrieved                               :       :       +-------+


TRANSITIVITY
------------

Transitivity is a deeply intuitive notion about ordering that is not
always provided by real computer systems. The following example
demonstrates transitivity:

	CPU 1                   CPU 2                   CPU 3
	======================= ======================= =======================
	        { X = 0, Y = 0 }
	STORE X=1               LOAD X                  STORE Y=1
	                        <general barrier>       <general barrier>
	                        LOAD Y                  LOAD X

Suppose that CPU 2's load from X returns 1 and its load from Y returns 0.
This indicates that CPU 2's load from X in some sense follows CPU 1's
store to X and that CPU 2's load from Y in some sense preceded CPU 3's
store to Y. The question is then "Can CPU 3's load from X return 0?"

Because CPU 2's load from X in some sense came after CPU 1's store, it
is natural to expect that CPU 3's load from X must therefore return 1.
This expectation is an example of transitivity: if a load executing on
CPU A follows a load from the same variable executing on CPU B, then
CPU A's load must either return the same value that CPU B's load did,
or must return some later value.

In the Linux kernel, use of general memory barriers guarantees
transitivity. Therefore, in the above example, if CPU 2's load from X
returns 1 and its load from Y returns 0, then CPU 3's load from X must
also return 1.

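In kernel C, the general barriers above correspond to smp_mb(); a minimal
sketch of the same test (the cpuN() functions are hypothetical):

	int x, y;

	void cpu1(void)
	{
		WRITE_ONCE(x, 1);
	}

	void cpu2(void)
	{
		r1 = READ_ONCE(x);
		smp_mb();		/* general barrier */
		r2 = READ_ONCE(y);
	}

	void cpu3(void)
	{
		WRITE_ONCE(y, 1);
		smp_mb();		/* general barrier */
		r3 = READ_ONCE(x);
	}

Transitivity guarantees that the outcome (r1 == 1 && r2 == 0 && r3 == 0)
is forbidden.
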
However, transitivity is -not- guaranteed for read or write barriers.
For example, suppose that CPU 2's general barrier in the above example
is changed to a read barrier as shown below:

	CPU 1                   CPU 2                   CPU 3
	======================= ======================= =======================
	        { X = 0, Y = 0 }
	STORE X=1               LOAD X                  STORE Y=1
	                        <read barrier>          <general barrier>
	                        LOAD Y                  LOAD X

This substitution destroys transitivity: in this example, it is perfectly
legal for CPU 2's load from X to return 1, its load from Y to return 0,
and CPU 3's load from X to return 0.

The key point is that although CPU 2's read barrier orders its pair
of loads, it does not guarantee to order CPU 1's store. Therefore, if
this example runs on a system where CPUs 1 and 2 share a store buffer
or a level of cache, CPU 2 might have early access to CPU 1's writes.
General barriers are therefore required to ensure that all CPUs agree
on the combined order of CPU 1's and CPU 2's accesses.

General barriers provide "global transitivity", so that all CPUs will
agree on the order of operations. In contrast, a chain of release-acquire
pairs provides only "local transitivity", so that only those CPUs on
the chain are guaranteed to agree on the combined order of the accesses.
For example, switching to C code in deference to Herman Hollerith:

	int u, v, x, y, z;

	void cpu0(void)
	{
		r0 = smp_load_acquire(&x);
		WRITE_ONCE(u, 1);
		smp_store_release(&y, 1);
	}

	void cpu1(void)
	{
		r1 = smp_load_acquire(&y);
		r4 = READ_ONCE(v);
		r5 = READ_ONCE(u);
		smp_store_release(&z, 1);
	}

	void cpu2(void)
	{
		r2 = smp_load_acquire(&z);
		smp_store_release(&x, 1);
	}

	void cpu3(void)
	{
		WRITE_ONCE(v, 1);
		smp_mb();
		r3 = READ_ONCE(u);
	}

Because cpu0(), cpu1(), and cpu2() participate in a local transitive
chain of smp_store_release()/smp_load_acquire() pairs, the following
outcome is prohibited:

	r0 == 1 && r1 == 1 && r2 == 1

Furthermore, because of the release-acquire relationship between cpu0()
and cpu1(), cpu1() must see cpu0()'s writes, so that the following
outcome is prohibited:

	r1 == 1 && r5 == 0

However, the transitivity of release-acquire is local to the participating
CPUs and does not apply to cpu3(). Therefore, the following outcome
is possible:

	r0 == 0 && r1 == 1 && r2 == 1 && r3 == 0 && r4 == 0

As an aside, the following outcome is also possible:

	r0 == 0 && r1 == 1 && r2 == 1 && r3 == 0 && r4 == 0 && r5 == 1

Although cpu0(), cpu1(), and cpu2() will see their respective reads and
writes in order, CPUs not involved in the release-acquire chain might
well disagree on the order. This disagreement stems from the fact that
the weak memory-barrier instructions used to implement smp_load_acquire()
and smp_store_release() are not required to order prior stores against
subsequent loads in all cases. This means that cpu3() can see cpu0()'s
store to u as happening -after- cpu1()'s load from v, even though
both cpu0() and cpu1() agree that these two operations occurred in the
intended order.

However, please keep in mind that smp_load_acquire() is not magic.
In particular, it simply reads from its argument with ordering. It does
-not- ensure that any particular value will be read. Therefore, the
following outcome is possible:

	r0 == 0 && r1 == 0 && r2 == 0 && r5 == 0

Note that this outcome can happen even on a mythical sequentially
consistent system where nothing is ever reordered.

To reiterate, if your code requires global transitivity, use general
barriers throughout.


========================
EXPLICIT KERNEL BARRIERS
========================

The Linux kernel has a variety of different barriers that act at different
levels:

 (*) Compiler barrier.

 (*) CPU memory barriers.

 (*) MMIO write barrier.


COMPILER BARRIER
----------------

1501 | The Linux kernel has an explicit compiler barrier function that prevents the |
1502 | compiler from moving the memory accesses either side of it to the other side: |
1503 | |
1504 | barrier(); |
1505 | |
1506 | This is a general barrier -- there are no read-read or write-write |
1507 | variants of barrier(). However, READ_ONCE() and WRITE_ONCE() can be |
1508 | thought of as weak forms of barrier() that affect only the specific |
1509 | accesses flagged by the READ_ONCE() or WRITE_ONCE(). |
1510 | |
1511 | The barrier() function has the following effects: |
1512 | |
1513 | (*) Prevents the compiler from reordering accesses following the |
1514 | barrier() to precede any accesses preceding the barrier(). |
1515 | One example use for this property is to ease communication between |
1516 | interrupt-handler code and the code that was interrupted. |
1517 | |
1518 | (*) Within a loop, forces the compiler to load the variables used |
1519 | in that loop's conditional on each pass through that loop. |
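
     For example, given a flag that might be set from interrupt context
     (a minimal sketch; real code would likely also use cpu_relax()),
     the barrier() in this busy-wait loop forces the compiler to reload
     'flag' on every pass instead of hoisting the load out of the loop:

	while (!flag)
		barrier();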
1520 | |
1521 | The READ_ONCE() and WRITE_ONCE() functions can prevent any number of |
1522 | optimizations that, while perfectly safe in single-threaded code, can |
1523 | be fatal in concurrent code. Here are some examples of these sorts |
1524 | of optimizations: |
1525 | |
1526 | (*) The compiler is within its rights to reorder loads and stores |
1527 | to the same variable, and in some cases, the CPU is within its |
1528 | rights to reorder loads to the same variable. This means that |
1529 | the following code: |
1530 | |
1531 | a[0] = x; |
1532 | a[1] = x; |
1533 | |
1534 | Might result in an older value of x stored in a[1] than in a[0]. |
1535 | Prevent both the compiler and the CPU from doing this as follows: |
1536 | |
1537 | a[0] = READ_ONCE(x); |
1538 | a[1] = READ_ONCE(x); |
1539 | |
1540 | In short, READ_ONCE() and WRITE_ONCE() provide cache coherence for |
1541 | accesses from multiple CPUs to a single variable. |
1542 | |
1543 | (*) The compiler is within its rights to merge successive loads from |
1544 | the same variable. Such merging can cause the compiler to "optimize" |
1545 | the following code: |
1546 | |
1547 | while (tmp = a) |
1548 | do_something_with(tmp); |
1549 | |
1550 | into the following code, which, although in some sense legitimate |
1551 | for single-threaded code, is almost certainly not what the developer |
1552 | intended: |
1553 | |
1554 | if (tmp = a) |
1555 | for (;;) |
1556 | do_something_with(tmp); |
1557 | |
1558 | Use READ_ONCE() to prevent the compiler from doing this to you: |
1559 | |
1560 | while (tmp = READ_ONCE(a)) |
1561 | do_something_with(tmp); |
1562 | |
1563 | (*) The compiler is within its rights to reload a variable, for example, |
1564 | in cases where high register pressure prevents the compiler from |
1565 | keeping all data of interest in registers. The compiler might |
1566 | therefore optimize the variable 'tmp' out of our previous example: |
1567 | |
1568 | while (tmp = a) |
1569 | do_something_with(tmp); |
1570 | |
1571 | This could result in the following code, which is perfectly safe in |
1572 | single-threaded code, but can be fatal in concurrent code: |
1573 | |
1574 | while (a) |
1575 | do_something_with(a); |
1576 | |
1577 | For example, the optimized version of this code could result in |
1578 | passing a zero to do_something_with() in the case where the variable |
1579 | a was modified by some other CPU between the "while" statement and |
1580 | the call to do_something_with(). |
1581 | |
1582 | Again, use READ_ONCE() to prevent the compiler from doing this: |
1583 | |
1584 | while (tmp = READ_ONCE(a)) |
1585 | do_something_with(tmp); |
1586 | |
1587 | Note that if the compiler runs short of registers, it might save |
1588 | tmp onto the stack. The overhead of this saving and later restoring |
1589 | is why compilers reload variables. Doing so is perfectly safe for |
1590 | single-threaded code, so you need to tell the compiler about cases |
1591 | where it is not safe. |
1592 | |
1593 | (*) The compiler is within its rights to omit a load entirely if it knows |
1594 | what the value will be. For example, if the compiler can prove that |
1595 | the value of variable 'a' is always zero, it can optimize this code: |
1596 | |
1597 | while (tmp = a) |
1598 | do_something_with(tmp); |
1599 | |
1600 | Into this: |
1601 | |
1602 | do { } while (0); |
1603 | |
1604 | This transformation is a win for single-threaded code because it |
1605 | gets rid of a load and a branch. The problem is that the compiler |
1606 | will carry out its proof assuming that the current CPU is the only |
1607 | one updating variable 'a'. If variable 'a' is shared, then the |
1608 | compiler's proof will be erroneous. Use READ_ONCE() to tell the |
1609 | compiler that it doesn't know as much as it thinks it does: |
1610 | |
1611 | while (tmp = READ_ONCE(a)) |
1612 | do_something_with(tmp); |
1613 | |
1614 | But please note that the compiler is also closely watching what you |
1615 | do with the value after the READ_ONCE(). For example, suppose you |
1616 | do the following and MAX is a preprocessor macro with the value 1: |
1617 | |
1618 | while ((tmp = READ_ONCE(a)) % MAX) |
1619 | do_something_with(tmp); |
1620 | |
1621 | Then the compiler knows that the result of the "%" operator applied |
1622 | to MAX will always be zero, again allowing the compiler to optimize |
1623 | the code into near-nonexistence. (It will still load from the |
1624 | variable 'a'.) |
1625 | |
1626 | (*) Similarly, the compiler is within its rights to omit a store entirely |
1627 | if it knows that the variable already has the value being stored. |
1628 | Again, the compiler assumes that the current CPU is the only one |
1629 | storing into the variable, which can cause the compiler to do the |
1630 | wrong thing for shared variables. For example, suppose you have |
1631 | the following: |
1632 | |
1633 | a = 0; |
1634 | ... Code that does not store to variable a ... |
1635 | a = 0; |
1636 | |
1637 | The compiler sees that the value of variable 'a' is already zero, so |
1638 | it might well omit the second store. This would come as a fatal |
1639 | surprise if some other CPU might have stored to variable 'a' in the |
1640 | meantime. |
1641 | |
1642 | Use WRITE_ONCE() to prevent the compiler from making this sort of |
1643 | wrong guess: |
1644 | |
1645 | WRITE_ONCE(a, 0); |
1646 | ... Code that does not store to variable a ... |
1647 | WRITE_ONCE(a, 0); |
1648 | |
1649 | (*) The compiler is within its rights to reorder memory accesses unless |
1650 | you tell it not to. For example, consider the following interaction |
1651 | between process-level code and an interrupt handler: |
1652 | |
1653 | void process_level(void) |
1654 | { |
1655 | msg = get_message(); |
1656 | flag = true; |
1657 | } |
1658 | |
1659 | void interrupt_handler(void) |
1660 | { |
1661 | if (flag) |
1662 | process_message(msg); |
1663 | } |
1664 | |
1665 | There is nothing to prevent the compiler from transforming |
process_level() to the following; in fact, this might well be a
1667 | win for single-threaded code: |
1668 | |
1669 | void process_level(void) |
1670 | { |
1671 | flag = true; |
1672 | msg = get_message(); |
1673 | } |
1674 | |
If the interrupt occurs between these two statements, then
1676 | interrupt_handler() might be passed a garbled msg. Use WRITE_ONCE() |
1677 | to prevent this as follows: |
1678 | |
1679 | void process_level(void) |
1680 | { |
1681 | WRITE_ONCE(msg, get_message()); |
1682 | WRITE_ONCE(flag, true); |
1683 | } |
1684 | |
1685 | void interrupt_handler(void) |
1686 | { |
1687 | if (READ_ONCE(flag)) |
1688 | process_message(READ_ONCE(msg)); |
1689 | } |
1690 | |
1691 | Note that the READ_ONCE() and WRITE_ONCE() wrappers in |
1692 | interrupt_handler() are needed if this interrupt handler can itself |
1693 | be interrupted by something that also accesses 'flag' and 'msg', |
1694 | for example, a nested interrupt or an NMI. Otherwise, READ_ONCE() |
1695 | and WRITE_ONCE() are not needed in interrupt_handler() other than |
1696 | for documentation purposes. (Note also that nested interrupts |
do not typically occur in modern Linux kernels; in fact, if an
1698 | interrupt handler returns with interrupts enabled, you will get a |
1699 | WARN_ONCE() splat.) |
1700 | |
1701 | You should assume that the compiler can move READ_ONCE() and |
1702 | WRITE_ONCE() past code not containing READ_ONCE(), WRITE_ONCE(), |
1703 | barrier(), or similar primitives. |
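
     For example, in the following minimal sketch the compiler may move
     the plain store to 'b' to either side of its neighbours, but must
     emit the two WRITE_ONCE()s themselves in source order:

	WRITE_ONCE(a, 1);
	b = 2;			/* plain store: the compiler may move this */
	WRITE_ONCE(c, 3);	/* but this must stay after WRITE_ONCE(a, 1) */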
1704 | |
1705 | This effect could also be achieved using barrier(), but READ_ONCE() |
1706 | and WRITE_ONCE() are more selective: With READ_ONCE() and |
1707 | WRITE_ONCE(), the compiler need only forget the contents of the |
1708 | indicated memory locations, while with barrier() the compiler must |
discard the value of all memory locations that it has currently
1710 | cached in any machine registers. Of course, the compiler must also |
1711 | respect the order in which the READ_ONCE()s and WRITE_ONCE()s occur, |
1712 | though the CPU of course need not do so. |
1713 | |
1714 | (*) The compiler is within its rights to invent stores to a variable, |
1715 | as in the following example: |
1716 | |
1717 | if (a) |
1718 | b = a; |
1719 | else |
1720 | b = 42; |
1721 | |
1722 | The compiler might save a branch by optimizing this as follows: |
1723 | |
1724 | b = 42; |
1725 | if (a) |
1726 | b = a; |
1727 | |
1728 | In single-threaded code, this is not only safe, but also saves |
1729 | a branch. Unfortunately, in concurrent code, this optimization |
1730 | could cause some other CPU to see a spurious value of 42 -- even |
1731 | if variable 'a' was never zero -- when loading variable 'b'. |
1732 | Use WRITE_ONCE() to prevent this as follows: |
1733 | |
1734 | if (a) |
1735 | WRITE_ONCE(b, a); |
1736 | else |
1737 | WRITE_ONCE(b, 42); |
1738 | |
1739 | The compiler can also invent loads. These are usually less |
1740 | damaging, but they can result in cache-line bouncing and thus in |
1741 | poor performance and scalability. Use READ_ONCE() to prevent |
1742 | invented loads. |
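
     For example, the compiler might transform this conditional load:

	if (condition)
		x = a;

     into an unconditional load (a minimal sketch; the invented load
     does not break correctness here, but it can drag the cache line
     holding 'a' around the system):

	tmp = a;		/* invented unconditional load */
	if (condition)
		x = tmp;

     Writing 'x = READ_ONCE(a)' in the original prevents the invention.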
1743 | |
 (*) For aligned memory locations whose size allows them to be accessed
     with a single memory-reference instruction, READ_ONCE() and
     WRITE_ONCE() prevent "load tearing" and "store tearing," in which a
     single large access is replaced by
1747 | multiple smaller accesses. For example, given an architecture having |
1748 | 16-bit store instructions with 7-bit immediate fields, the compiler |
1749 | might be tempted to use two 16-bit store-immediate instructions to |
1750 | implement the following 32-bit store: |
1751 | |
1752 | p = 0x00010002; |
1753 | |
1754 | Please note that GCC really does use this sort of optimization, |
1755 | which is not surprising given that it would likely take more |
1756 | than two instructions to build the constant and then store it. |
1757 | This optimization can therefore be a win in single-threaded code. |
1758 | In fact, a recent bug (since fixed) caused GCC to incorrectly use |
1759 | this optimization in a volatile store. In the absence of such bugs, |
1760 | use of WRITE_ONCE() prevents store tearing in the following example: |
1761 | |
1762 | WRITE_ONCE(p, 0x00010002); |
1763 | |
1764 | Use of packed structures can also result in load and store tearing, |
1765 | as in this example: |
1766 | |
1767 | struct __attribute__((__packed__)) foo { |
1768 | short a; |
1769 | int b; |
1770 | short c; |
1771 | }; |
1772 | struct foo foo1, foo2; |
1773 | ... |
1774 | |
1775 | foo2.a = foo1.a; |
1776 | foo2.b = foo1.b; |
1777 | foo2.c = foo1.c; |
1778 | |
1779 | Because there are no READ_ONCE() or WRITE_ONCE() wrappers and no |
1780 | volatile markings, the compiler would be well within its rights to |
1781 | implement these three assignment statements as a pair of 32-bit |
1782 | loads followed by a pair of 32-bit stores. This would result in |
1783 | load tearing on 'foo1.b' and store tearing on 'foo2.b'. READ_ONCE() |
1784 | and WRITE_ONCE() again prevent tearing in this example: |
1785 | |
1786 | foo2.a = foo1.a; |
1787 | WRITE_ONCE(foo2.b, READ_ONCE(foo1.b)); |
1788 | foo2.c = foo1.c; |
1789 | |
1790 | All that aside, it is never necessary to use READ_ONCE() and |
1791 | WRITE_ONCE() on a variable that has been marked volatile. For example, |
1792 | because 'jiffies' is marked volatile, it is never necessary to |
1793 | say READ_ONCE(jiffies). The reason for this is that READ_ONCE() and |
WRITE_ONCE() are implemented as volatile casts, which have no effect when
the argument is already marked volatile.
1796 | |
1797 | Please note that these compiler barriers have no direct effect on the CPU, |
1798 | which may then reorder things however it wishes. |
1799 | |
1800 | |
1801 | CPU MEMORY BARRIERS |
1802 | ------------------- |
1803 | |
1804 | The Linux kernel has eight basic CPU memory barriers: |
1805 | |
1806 | TYPE MANDATORY SMP CONDITIONAL |
1807 | =============== ======================= =========================== |
1808 | GENERAL mb() smp_mb() |
1809 | WRITE wmb() smp_wmb() |
1810 | READ rmb() smp_rmb() |
1811 | DATA DEPENDENCY read_barrier_depends() smp_read_barrier_depends() |
1812 | |
1813 | |
1814 | All memory barriers except the data dependency barriers imply a compiler |
1815 | barrier. Data dependencies do not impose any additional compiler ordering. |
1816 | |
1817 | Aside: In the case of data dependencies, the compiler would be expected |
1818 | to issue the loads in the correct order (eg. `a[b]` would have to load |
1819 | the value of b before loading a[b]), however there is no guarantee in |
1820 | the C specification that the compiler may not speculate the value of b |
1821 | (eg. is equal to 1) and load a before b (eg. tmp = a[1]; if (b != 1) |
1822 | tmp = a[b]; ). There is also the problem of a compiler reloading b after |
1823 | having loaded a[b], thus having a newer copy of b than a[b]. A consensus |
1824 | has not yet been reached about these problems, however the READ_ONCE() |
1825 | macro is a good place to start looking. |
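
For example, using READ_ONCE() at least ensures that 'b' is loaded
exactly once before being used as an index, which addresses the
reloading problem described above (a minimal sketch):

	tmp = READ_ONCE(b);
	val = a[tmp];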
1826 | |
1827 | SMP memory barriers are reduced to compiler barriers on uniprocessor compiled |
1828 | systems because it is assumed that a CPU will appear to be self-consistent, |
1829 | and will order overlapping accesses correctly with respect to itself. |
1830 | However, see the subsection on "Virtual Machine Guests" below. |
1831 | |
1832 | [!] Note that SMP memory barriers _must_ be used to control the ordering of |
1833 | references to shared memory on SMP systems, though the use of locking instead |
1834 | is sufficient. |
1835 | |
1836 | Mandatory barriers should not be used to control SMP effects, since mandatory |
1837 | barriers impose unnecessary overhead on both SMP and UP systems. They may, |
1838 | however, be used to control MMIO effects on accesses through relaxed memory I/O |
1839 | windows. These barriers are required even on non-SMP systems as they affect |
1840 | the order in which memory operations appear to a device by prohibiting both the |
1841 | compiler and the CPU from reordering them. |
1842 | |
1843 | |
1844 | There are some more advanced barrier functions: |
1845 | |
1846 | (*) smp_store_mb(var, value) |
1847 | |
1848 | This assigns the value to the variable and then inserts a full memory |
1849 | barrier after it. It isn't guaranteed to insert anything more than a |
1850 | compiler barrier in a UP compilation. |
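
     A minimal sketch of its use for a store-then-load handshake
     between two CPUs, assuming 'flag1' and 'flag2' are shared
     variables that are initially zero:

	CPU 1				CPU 2
	===============================	===============================
	smp_store_mb(flag1, 1);		smp_store_mb(flag2, 1);
	r1 = READ_ONCE(flag2);		r2 = READ_ONCE(flag1);

     On an SMP system, the implied full barrier after each store
     prevents the subsequent load from being satisfied before the store
     is visible, so r1 and r2 cannot both end up zero.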
1851 | |
1852 | |
1853 | (*) smp_mb__before_atomic(); |
1854 | (*) smp_mb__after_atomic(); |
1855 | |
1856 | These are for use with atomic (such as add, subtract, increment and |
1857 | decrement) functions that don't return a value, especially when used for |
1858 | reference counting. These functions do not imply memory barriers. |
1859 | |
1860 | These are also used for atomic bitop functions that do not return a |
1861 | value (such as set_bit and clear_bit). |
1862 | |
1863 | As an example, consider a piece of code that marks an object as being dead |
1864 | and then decrements the object's reference count: |
1865 | |
1866 | obj->dead = 1; |
1867 | smp_mb__before_atomic(); |
1868 | atomic_dec(&obj->ref_count); |
1869 | |
1870 | This makes sure that the death mark on the object is perceived to be set |
1871 | *before* the reference counter is decremented. |
1872 | |
1873 | See Documentation/atomic_ops.txt for more information. See the "Atomic |
1874 | operations" subsection for information on where to use these. |
1875 | |
1876 | |
1877 | (*) lockless_dereference(); |
1878 | |
1879 | This can be thought of as a pointer-fetch wrapper around the |
1880 | smp_read_barrier_depends() data-dependency barrier. |
1881 | |
1882 | This is also similar to rcu_dereference(), but in cases where |
1883 | object lifetime is handled by some mechanism other than RCU, for |
example, when the objects are removed only when the system goes down.
1885 | In addition, lockless_dereference() is used in some data structures |
1886 | that can be used both with and without RCU. |
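
     For example, a minimal sketch of fetching a structure that was
     published by another CPU, assuming 'gp' was assigned with
     smp_store_release() after the structure was initialised:

	p = lockless_dereference(gp);
	if (p)
		do_something_with(p->data);	/* dependency-ordered load */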
1887 | |
1888 | |
1889 | (*) dma_wmb(); |
1890 | (*) dma_rmb(); |
1891 | |
1892 | These are for use with consistent memory to guarantee the ordering |
1893 | of writes or reads of shared memory accessible to both the CPU and a |
1894 | DMA capable device. |
1895 | |
1896 | For example, consider a device driver that shares memory with a device |
1897 | and uses a descriptor status value to indicate if the descriptor belongs |
1898 | to the device or the CPU, and a doorbell to notify it when new |
1899 | descriptors are available: |
1900 | |
1901 | if (desc->status != DEVICE_OWN) { |
1902 | /* do not read data until we own descriptor */ |
1903 | dma_rmb(); |
1904 | |
1905 | /* read/modify data */ |
1906 | read_data = desc->data; |
1907 | desc->data = write_data; |
1908 | |
1909 | /* flush modifications before status update */ |
1910 | dma_wmb(); |
1911 | |
1912 | /* assign ownership */ |
1913 | desc->status = DEVICE_OWN; |
1914 | |
1915 | /* force memory to sync before notifying device via MMIO */ |
1916 | wmb(); |
1917 | |
1918 | /* notify device of new descriptors */ |
1919 | writel(DESC_NOTIFY, doorbell); |
1920 | } |
1921 | |
The dma_rmb() allows us to guarantee the device has released ownership
1923 | before we read the data from the descriptor, and the dma_wmb() allows |
1924 | us to guarantee the data is written to the descriptor before the device |
1925 | can see it now has ownership. The wmb() is needed to guarantee that the |
1926 | cache coherent memory writes have completed before attempting a write to |
1927 | the cache incoherent MMIO region. |
1928 | |
1929 | See Documentation/DMA-API.txt for more information on consistent memory. |
1930 | |
1931 | |
1932 | MMIO WRITE BARRIER |
1933 | ------------------ |
1934 | |
1935 | The Linux kernel also has a special barrier for use with memory-mapped I/O |
1936 | writes: |
1937 | |
1938 | mmiowb(); |
1939 | |
1940 | This is a variation on the mandatory write barrier that causes writes to weakly |
1941 | ordered I/O regions to be partially ordered. Its effects may go beyond the |
1942 | CPU->Hardware interface and actually affect the hardware at some level. |
1943 | |
1944 | See the subsection "Acquires vs I/O accesses" for more information. |
1945 | |
1946 | |
1947 | =============================== |
1948 | IMPLICIT KERNEL MEMORY BARRIERS |
1949 | =============================== |
1950 | |
Some of the other functions in the Linux kernel imply memory barriers, amongst
1952 | which are locking and scheduling functions. |
1953 | |
1954 | This specification is a _minimum_ guarantee; any particular architecture may |
1955 | provide more substantial guarantees, but these may not be relied upon outside |
1956 | of arch specific code. |
1957 | |
1958 | |
1959 | LOCK ACQUISITION FUNCTIONS |
1960 | -------------------------- |
1961 | |
1962 | The Linux kernel has a number of locking constructs: |
1963 | |
1964 | (*) spin locks |
1965 | (*) R/W spin locks |
1966 | (*) mutexes |
1967 | (*) semaphores |
1968 | (*) R/W semaphores |
1969 | |
1970 | In all cases there are variants on "ACQUIRE" operations and "RELEASE" operations |
1971 | for each construct. These operations all imply certain barriers: |
1972 | |
1973 | (1) ACQUIRE operation implication: |
1974 | |
1975 | Memory operations issued after the ACQUIRE will be completed after the |
1976 | ACQUIRE operation has completed. |
1977 | |
1978 | Memory operations issued before the ACQUIRE may be completed after |
1979 | the ACQUIRE operation has completed. An smp_mb__before_spinlock(), |
1980 | combined with a following ACQUIRE, orders prior stores against |
1981 | subsequent loads and stores. Note that this is weaker than smp_mb()! |
1982 | The smp_mb__before_spinlock() primitive is free on many architectures. |
1983 | |
1984 | (2) RELEASE operation implication: |
1985 | |
1986 | Memory operations issued before the RELEASE will be completed before the |
1987 | RELEASE operation has completed. |
1988 | |
1989 | Memory operations issued after the RELEASE may be completed before the |
1990 | RELEASE operation has completed. |
1991 | |
1992 | (3) ACQUIRE vs ACQUIRE implication: |
1993 | |
1994 | All ACQUIRE operations issued before another ACQUIRE operation will be |
1995 | completed before that ACQUIRE operation. |
1996 | |
1997 | (4) ACQUIRE vs RELEASE implication: |
1998 | |
1999 | All ACQUIRE operations issued before a RELEASE operation will be |
2000 | completed before the RELEASE operation. |
2001 | |
2002 | (5) Failed conditional ACQUIRE implication: |
2003 | |
2004 | Certain locking variants of the ACQUIRE operation may fail, either due to |
2005 | being unable to get the lock immediately, or due to receiving an unblocked |
2006 | signal whilst asleep waiting for the lock to become available. Failed |
2007 | locks do not imply any sort of barrier. |
2008 | |
2009 | [!] Note: one of the consequences of lock ACQUIREs and RELEASEs being only |
2010 | one-way barriers is that the effects of instructions outside of a critical |
2011 | section may seep into the inside of the critical section. |
2012 | |
An ACQUIRE followed by a RELEASE may not be assumed to be a full memory barrier
2014 | because it is possible for an access preceding the ACQUIRE to happen after the |
2015 | ACQUIRE, and an access following the RELEASE to happen before the RELEASE, and |
2016 | the two accesses can themselves then cross: |
2017 | |
2018 | *A = a; |
2019 | ACQUIRE M |
2020 | RELEASE M |
2021 | *B = b; |
2022 | |
2023 | may occur as: |
2024 | |
2025 | ACQUIRE M, STORE *B, STORE *A, RELEASE M |
2026 | |
2027 | When the ACQUIRE and RELEASE are a lock acquisition and release, |
2028 | respectively, this same reordering can occur if the lock's ACQUIRE and |
2029 | RELEASE are to the same lock variable, but only from the perspective of |
another CPU not holding that lock.  In short, an ACQUIRE followed by a
2031 | RELEASE may -not- be assumed to be a full memory barrier. |
2032 | |
2033 | Similarly, the reverse case of a RELEASE followed by an ACQUIRE does |
2034 | not imply a full memory barrier. Therefore, the CPU's execution of the |
2035 | critical sections corresponding to the RELEASE and the ACQUIRE can cross, |
2036 | so that: |
2037 | |
2038 | *A = a; |
2039 | RELEASE M |
2040 | ACQUIRE N |
2041 | *B = b; |
2042 | |
2043 | could occur as: |
2044 | |
2045 | ACQUIRE N, STORE *B, STORE *A, RELEASE M |
2046 | |
2047 | It might appear that this reordering could introduce a deadlock. |
2048 | However, this cannot happen because if such a deadlock threatened, |
2049 | the RELEASE would simply complete, thereby avoiding the deadlock. |
2050 | |
2051 | Why does this work? |
2052 | |
2053 | One key point is that we are only talking about the CPU doing |
2054 | the reordering, not the compiler. If the compiler (or, for |
2055 | that matter, the developer) switched the operations, deadlock |
2056 | -could- occur. |
2057 | |
2058 | But suppose the CPU reordered the operations. In this case, |
2059 | the unlock precedes the lock in the assembly code. The CPU |
2060 | simply elected to try executing the later lock operation first. |
2061 | If there is a deadlock, this lock operation will simply spin (or |
2062 | try to sleep, but more on that later). The CPU will eventually |
2063 | execute the unlock operation (which preceded the lock operation |
2064 | in the assembly code), which will unravel the potential deadlock, |
2065 | allowing the lock operation to succeed. |
2066 | |
2067 | But what if the lock is a sleeplock? In that case, the code will |
2068 | try to enter the scheduler, where it will eventually encounter |
2069 | a memory barrier, which will force the earlier unlock operation |
2070 | to complete, again unraveling the deadlock. There might be |
2071 | a sleep-unlock race, but the locking primitive needs to resolve |
2072 | such races properly in any case. |
2073 | |
2074 | Locks and semaphores may not provide any guarantee of ordering on UP compiled |
2075 | systems, and so cannot be counted on in such a situation to actually achieve |
2076 | anything at all - especially with respect to I/O accesses - unless combined |
2077 | with interrupt disabling operations. |
2078 | |
2079 | See also the section on "Inter-CPU acquiring barrier effects". |
2080 | |
2081 | |
2082 | As an example, consider the following: |
2083 | |
2084 | *A = a; |
2085 | *B = b; |
2086 | ACQUIRE |
2087 | *C = c; |
2088 | *D = d; |
2089 | RELEASE |
2090 | *E = e; |
2091 | *F = f; |
2092 | |
2093 | The following sequence of events is acceptable: |
2094 | |
2095 | ACQUIRE, {*F,*A}, *E, {*C,*D}, *B, RELEASE |
2096 | |
2097 | [+] Note that {*F,*A} indicates a combined access. |
2098 | |
2099 | But none of the following are: |
2100 | |
2101 | {*F,*A}, *B, ACQUIRE, *C, *D, RELEASE, *E |
2102 | *A, *B, *C, ACQUIRE, *D, RELEASE, *E, *F |
2103 | *A, *B, ACQUIRE, *C, RELEASE, *D, *E, *F |
2104 | *B, ACQUIRE, *C, *D, RELEASE, {*F,*A}, *E |
2105 | |
2106 | |
2107 | |
2108 | INTERRUPT DISABLING FUNCTIONS |
2109 | ----------------------------- |
2110 | |
2111 | Functions that disable interrupts (ACQUIRE equivalent) and enable interrupts |
2112 | (RELEASE equivalent) will act as compiler barriers only. So if memory or I/O |
barriers are required in such a situation, they must be provided by some
2114 | other means. |
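
For example, in the following minimal sketch, assuming 'shared_data' and
'data_ready' are examined by another CPU, the explicit smp_wmb() is
still required; disabling interrupts contributes nothing more than a
compiler barrier:

	local_irq_save(flags);
	WRITE_ONCE(shared_data, 1);
	smp_wmb();		/* irq disabling alone does not order these */
	WRITE_ONCE(data_ready, 1);
	local_irq_restore(flags);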
2115 | |
2116 | |
2117 | SLEEP AND WAKE-UP FUNCTIONS |
2118 | --------------------------- |
2119 | |
2120 | Sleeping and waking on an event flagged in global data can be viewed as an |
2121 | interaction between two pieces of data: the task state of the task waiting for |
2122 | the event and the global data used to indicate the event. To make sure that |
2123 | these appear to happen in the right order, the primitives to begin the process |
2124 | of going to sleep, and the primitives to initiate a wake up imply certain |
2125 | barriers. |
2126 | |
2127 | Firstly, the sleeper normally follows something like this sequence of events: |
2128 | |
2129 | for (;;) { |
2130 | set_current_state(TASK_UNINTERRUPTIBLE); |
2131 | if (event_indicated) |
2132 | break; |
2133 | schedule(); |
2134 | } |
2135 | |
2136 | A general memory barrier is interpolated automatically by set_current_state() |
2137 | after it has altered the task state: |
2138 | |
2139 | CPU 1 |
2140 | =============================== |
2141 | set_current_state(); |
2142 | smp_store_mb(); |
2143 | STORE current->state |
2144 | <general barrier> |
2145 | LOAD event_indicated |
2146 | |
2147 | set_current_state() may be wrapped by: |
2148 | |
2149 | prepare_to_wait(); |
2150 | prepare_to_wait_exclusive(); |
2151 | |
2152 | which therefore also imply a general memory barrier after setting the state. |
2153 | The whole sequence above is available in various canned forms, all of which |
2154 | interpolate the memory barrier in the right place: |
2155 | |
2156 | wait_event(); |
2157 | wait_event_interruptible(); |
2158 | wait_event_interruptible_exclusive(); |
2159 | wait_event_interruptible_timeout(); |
2160 | wait_event_killable(); |
2161 | wait_event_timeout(); |
2162 | wait_on_bit(); |
2163 | wait_on_bit_lock(); |
2164 | |
2165 | |
2166 | Secondly, code that performs a wake up normally follows something like this: |
2167 | |
2168 | event_indicated = 1; |
2169 | wake_up(&event_wait_queue); |
2170 | |
2171 | or: |
2172 | |
2173 | event_indicated = 1; |
2174 | wake_up_process(event_daemon); |
2175 | |
2176 | A write memory barrier is implied by wake_up() and co. if and only if they |
2177 | wake something up. The barrier occurs before the task state is cleared, and so |
2178 | sits between the STORE to indicate the event and the STORE to set TASK_RUNNING: |
2179 | |
2180 | CPU 1 CPU 2 |
2181 | =============================== =============================== |
2182 | set_current_state(); STORE event_indicated |
2183 | smp_store_mb(); wake_up(); |
2184 | STORE current->state <write barrier> |
2185 | <general barrier> STORE current->state |
2186 | LOAD event_indicated |
2187 | |
2188 | To repeat, this write memory barrier is present if and only if something |
2189 | is actually awakened. To see this, consider the following sequence of |
2190 | events, where X and Y are both initially zero: |
2191 | |
2192 | CPU 1 CPU 2 |
2193 | =============================== =============================== |
2194 | X = 1; STORE event_indicated |
2195 | smp_mb(); wake_up(); |
2196 | Y = 1; wait_event(wq, Y == 1); |
2197 | wake_up(); load from Y sees 1, no memory barrier |
2198 | load from X might see 0 |
2199 | |
2200 | In contrast, if a wakeup does occur, CPU 2's load from X would be guaranteed |
2201 | to see 1. |
2202 | |
2203 | The available waker functions include: |
2204 | |
2205 | complete(); |
2206 | wake_up(); |
2207 | wake_up_all(); |
2208 | wake_up_bit(); |
2209 | wake_up_interruptible(); |
2210 | wake_up_interruptible_all(); |
2211 | wake_up_interruptible_nr(); |
2212 | wake_up_interruptible_poll(); |
2213 | wake_up_interruptible_sync(); |
2214 | wake_up_interruptible_sync_poll(); |
2215 | wake_up_locked(); |
2216 | wake_up_locked_poll(); |
2217 | wake_up_nr(); |
2218 | wake_up_poll(); |
2219 | wake_up_process(); |
2220 | |
2221 | |
2222 | [!] Note that the memory barriers implied by the sleeper and the waker do _not_ |
2223 | order multiple stores before the wake-up with respect to loads of those stored |
2224 | values after the sleeper has called set_current_state(). For instance, if the |
2225 | sleeper does: |
2226 | |
2227 | set_current_state(TASK_INTERRUPTIBLE); |
2228 | if (event_indicated) |
2229 | break; |
2230 | __set_current_state(TASK_RUNNING); |
2231 | do_something(my_data); |
2232 | |
2233 | and the waker does: |
2234 | |
2235 | my_data = value; |
2236 | event_indicated = 1; |
2237 | wake_up(&event_wait_queue); |
2238 | |
2239 | there's no guarantee that the change to event_indicated will be perceived by |
2240 | the sleeper as coming after the change to my_data. In such a circumstance, the |
2241 | code on both sides must interpolate its own memory barriers between the |
2242 | separate data accesses. Thus the above sleeper ought to do: |
2243 | |
2244 | set_current_state(TASK_INTERRUPTIBLE); |
2245 | if (event_indicated) { |
2246 | smp_rmb(); |
2247 | do_something(my_data); |
2248 | } |
2249 | |
2250 | and the waker should do: |
2251 | |
2252 | my_data = value; |
2253 | smp_wmb(); |
2254 | event_indicated = 1; |
2255 | wake_up(&event_wait_queue); |
2256 | |
2257 | |
2258 | MISCELLANEOUS FUNCTIONS |
2259 | ----------------------- |
2260 | |
2261 | Other functions that imply barriers: |
2262 | |
2263 | (*) schedule() and similar imply full memory barriers. |
2264 | |
2265 | |
2266 | =================================== |
2267 | INTER-CPU ACQUIRING BARRIER EFFECTS |
2268 | =================================== |
2269 | |
2270 | On SMP systems locking primitives give a more substantial form of barrier: one |
2271 | that does affect memory access ordering on other CPUs, within the context of |
2272 | conflict on any particular lock. |
2273 | |
2274 | |
2275 | ACQUIRES VS MEMORY ACCESSES |
2276 | --------------------------- |
2277 | |
2278 | Consider the following: the system has a pair of spinlocks (M) and (Q), and |
2279 | three CPUs; then should the following sequence of events occur: |
2280 | |
2281 | CPU 1 CPU 2 |
2282 | =============================== =============================== |
2283 | WRITE_ONCE(*A, a); WRITE_ONCE(*E, e); |
2284 | ACQUIRE M ACQUIRE Q |
2285 | WRITE_ONCE(*B, b); WRITE_ONCE(*F, f); |
2286 | WRITE_ONCE(*C, c); WRITE_ONCE(*G, g); |
2287 | RELEASE M RELEASE Q |
2288 | WRITE_ONCE(*D, d); WRITE_ONCE(*H, h); |
2289 | |
2290 | Then there is no guarantee as to what order CPU 3 will see the accesses to *A |
2291 | through *H occur in, other than the constraints imposed by the separate locks |
2292 | on the separate CPUs. It might, for example, see: |
2293 | |
2294 | *E, ACQUIRE M, ACQUIRE Q, *G, *C, *F, *A, *B, RELEASE Q, *D, *H, RELEASE M |
2295 | |
2296 | But it won't see any of: |
2297 | |
2298 | *B, *C or *D preceding ACQUIRE M |
2299 | *A, *B or *C following RELEASE M |
2300 | *F, *G or *H preceding ACQUIRE Q |
2301 | *E, *F or *G following RELEASE Q |
2302 | |
2303 | |
2304 | |
2305 | ACQUIRES VS I/O ACCESSES |
2306 | ------------------------ |
2307 | |
2308 | Under certain circumstances (especially involving NUMA), I/O accesses within |
2309 | two spinlocked sections on two different CPUs may be seen as interleaved by the |
2310 | PCI bridge, because the PCI bridge does not necessarily participate in the |
2311 | cache-coherence protocol, and is therefore incapable of issuing the required |
2312 | read memory barriers. |
2313 | |
2314 | For example: |
2315 | |
2316 | CPU 1 CPU 2 |
2317 | =============================== =============================== |
	spin_lock(Q);
	writel(0, ADDR);
2320 | writel(1, DATA); |
2321 | spin_unlock(Q); |
2322 | spin_lock(Q); |
2323 | writel(4, ADDR); |
2324 | writel(5, DATA); |
2325 | spin_unlock(Q); |
2326 | |
2327 | may be seen by the PCI bridge as follows: |
2328 | |
2329 | STORE *ADDR = 0, STORE *ADDR = 4, STORE *DATA = 1, STORE *DATA = 5 |
2330 | |
2331 | which would probably cause the hardware to malfunction. |
2332 | |
2333 | |
2334 | What is necessary here is to intervene with an mmiowb() before dropping the |
2335 | spinlock, for example: |
2336 | |
2337 | CPU 1 CPU 2 |
2338 | =============================== =============================== |
	spin_lock(Q);
	writel(0, ADDR);
2341 | writel(1, DATA); |
2342 | mmiowb(); |
2343 | spin_unlock(Q); |
2344 | spin_lock(Q); |
2345 | writel(4, ADDR); |
2346 | writel(5, DATA); |
2347 | mmiowb(); |
2348 | spin_unlock(Q); |
2349 | |
2350 | this will ensure that the two stores issued on CPU 1 appear at the PCI bridge |
2351 | before either of the stores issued on CPU 2. |
2352 | |
2353 | |
2354 | Furthermore, following a store by a load from the same device obviates the need |
2355 | for the mmiowb(), because the load forces the store to complete before the load |
2356 | is performed: |
2357 | |
2358 | CPU 1 CPU 2 |
2359 | =============================== =============================== |
	spin_lock(Q);
	writel(0, ADDR);
2362 | a = readl(DATA); |
2363 | spin_unlock(Q); |
2364 | spin_lock(Q); |
2365 | writel(4, ADDR); |
2366 | b = readl(DATA); |
2367 | spin_unlock(Q); |
2368 | |
2369 | |
2370 | See Documentation/DocBook/deviceiobook.tmpl for more information. |
2371 | |
2372 | |
2373 | ================================= |
2374 | WHERE ARE MEMORY BARRIERS NEEDED? |
2375 | ================================= |
2376 | |
2377 | Under normal operation, memory operation reordering is generally not going to |
2378 | be a problem as a single-threaded linear piece of code will still appear to |
2379 | work correctly, even if it's in an SMP kernel. There are, however, four |
2380 | circumstances in which reordering definitely _could_ be a problem: |
2381 | |
2382 | (*) Interprocessor interaction. |
2383 | |
2384 | (*) Atomic operations. |
2385 | |
2386 | (*) Accessing devices. |
2387 | |
2388 | (*) Interrupts. |
2389 | |
2390 | |
2391 | INTERPROCESSOR INTERACTION |
2392 | -------------------------- |
2393 | |
2394 | When there's a system with more than one processor, more than one CPU in the |
2395 | system may be working on the same data set at the same time. This can cause |
2396 | synchronisation problems, and the usual way of dealing with them is to use |
2397 | locks. Locks, however, are quite expensive, and so it may be preferable to |
2398 | operate without the use of a lock if at all possible. In such a case |
2399 | operations that affect both CPUs may have to be carefully ordered to prevent |
2400 | a malfunction. |
2401 | |
2402 | Consider, for example, the R/W semaphore slow path. Here a waiting process is |
2403 | queued on the semaphore, by virtue of it having a piece of its stack linked to |
2404 | the semaphore's list of waiting processes: |
2405 | |
2406 | struct rw_semaphore { |
2407 | ... |
2408 | spinlock_t lock; |
2409 | struct list_head waiters; |
2410 | }; |
2411 | |
2412 | struct rwsem_waiter { |
2413 | struct list_head list; |
2414 | struct task_struct *task; |
2415 | }; |
2416 | |
2417 | To wake up a particular waiter, the up_read() or up_write() functions have to: |
2418 | |
2419 | (1) read the next pointer from this waiter's record to know as to where the |
2420 | next waiter record is; |
2421 | |
2422 | (2) read the pointer to the waiter's task structure; |
2423 | |
2424 | (3) clear the task pointer to tell the waiter it has been given the semaphore; |
2425 | |
2426 | (4) call wake_up_process() on the task; and |
2427 | |
2428 | (5) release the reference held on the waiter's task struct. |
2429 | |
2430 | In other words, it has to perform this sequence of events: |
2431 | |
2432 | LOAD waiter->list.next; |
2433 | LOAD waiter->task; |
2434 | STORE waiter->task; |
2435 | CALL wakeup |
2436 | RELEASE task |
2437 | |
2438 | and if any of these steps occur out of order, then the whole thing may |
2439 | malfunction. |
2440 | |
2441 | Once it has queued itself and dropped the semaphore lock, the waiter does not |
2442 | get the lock again; it instead just waits for its task pointer to be cleared |
2443 | before proceeding. Since the record is on the waiter's stack, this means that |
2444 | if the task pointer is cleared _before_ the next pointer in the list is read, |
2445 | another CPU might start processing the waiter and might clobber the waiter's |
2446 | stack before the up*() function has a chance to read the next pointer. |
2447 | |
2448 | Consider then what might happen to the above sequence of events: |
2449 | |
2450 | CPU 1 CPU 2 |
2451 | =============================== =============================== |
2452 | down_xxx() |
2453 | Queue waiter |
2454 | Sleep |
2455 | up_yyy() |
2456 | LOAD waiter->task; |
2457 | STORE waiter->task; |
2458 | Woken up by other event |
2459 | <preempt> |
2460 | Resume processing |
2461 | down_xxx() returns |
2462 | call foo() |
2463 | foo() clobbers *waiter |
2464 | </preempt> |
2465 | LOAD waiter->list.next; |
2466 | --- OOPS --- |
2467 | |
2468 | This could be dealt with using the semaphore lock, but then the down_xxx() |
2469 | function has to needlessly get the spinlock again after being woken up. |
2470 | |
2471 | The way to deal with this is to insert a general SMP memory barrier: |
2472 | |
2473 | LOAD waiter->list.next; |
2474 | LOAD waiter->task; |
2475 | smp_mb(); |
2476 | STORE waiter->task; |
2477 | CALL wakeup |
2478 | RELEASE task |
2479 | |
2480 | In this case, the barrier makes a guarantee that all memory accesses before the |
2481 | barrier will appear to happen before all the memory accesses after the barrier |
2482 | with respect to the other CPUs on the system. It does _not_ guarantee that all |
2483 | the memory accesses before the barrier will be complete by the time the barrier |
2484 | instruction itself is complete. |
2485 | |
2486 | On a UP system - where this wouldn't be a problem - the smp_mb() is just a |
2487 | compiler barrier, thus making sure the compiler emits the instructions in the |
2488 | right order without actually intervening in the CPU. Since there's only one |
2489 | CPU, that CPU's dependency ordering logic will take care of everything else. |
2490 | |
2491 | |
2492 | ATOMIC OPERATIONS |
2493 | ----------------- |
2494 | |
2495 | Whilst they are technically interprocessor interaction considerations, atomic |
2496 | operations are noted specially as some of them imply full memory barriers and |
2497 | some don't, but they're very heavily relied on as a group throughout the |
2498 | kernel. |
2499 | |
2500 | Any atomic operation that modifies some state in memory and returns information |
2501 | about the state (old or new) implies an SMP-conditional general memory barrier |
2502 | (smp_mb()) on each side of the actual operation (with the exception of |
2503 | explicit lock operations, described later). These include: |
2504 | |
2505 | xchg(); |
2506 | atomic_xchg(); atomic_long_xchg(); |
2507 | atomic_inc_return(); atomic_long_inc_return(); |
2508 | atomic_dec_return(); atomic_long_dec_return(); |
2509 | atomic_add_return(); atomic_long_add_return(); |
2510 | atomic_sub_return(); atomic_long_sub_return(); |
2511 | atomic_inc_and_test(); atomic_long_inc_and_test(); |
2512 | atomic_dec_and_test(); atomic_long_dec_and_test(); |
2513 | atomic_sub_and_test(); atomic_long_sub_and_test(); |
2514 | atomic_add_negative(); atomic_long_add_negative(); |
2515 | test_and_set_bit(); |
2516 | test_and_clear_bit(); |
2517 | test_and_change_bit(); |
2518 | |
2519 | /* when succeeds */ |
2520 | cmpxchg(); |
2521 | atomic_cmpxchg(); atomic_long_cmpxchg(); |
2522 | atomic_add_unless(); atomic_long_add_unless(); |
2523 | |
2524 | These are used for such things as implementing ACQUIRE-class and RELEASE-class |
2525 | operations and adjusting reference counters towards object destruction, and as |
2526 | such the implicit memory barrier effects are necessary. |
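
As an example, here is a minimal sketch of the reference-counting case,
assuming a hypothetical free_object() function:

	if (atomic_dec_and_test(&obj->ref_count)) {
		/* The barriers implied on each side of the atomic RMW
		 * operation order all prior accesses to the object
		 * before the final decrement, and the freeing after it. */
		free_object(obj);
	}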
2527 | |
2528 | |
2529 | The following operations are potential problems as they do _not_ imply memory |
2530 | barriers, but might be used for implementing such things as RELEASE-class |
2531 | operations: |
2532 | |
2533 | atomic_set(); |
2534 | set_bit(); |
2535 | clear_bit(); |
2536 | change_bit(); |
2537 | |
2538 | With these the appropriate explicit memory barrier should be used if necessary |
2539 | (smp_mb__before_atomic() for instance). |
2540 | |
2541 | |
2542 | The following also do _not_ imply memory barriers, and so may require explicit |
2543 | memory barriers under some circumstances (smp_mb__before_atomic() for |
2544 | instance): |
2545 | |
2546 | atomic_add(); |
2547 | atomic_sub(); |
2548 | atomic_inc(); |
2549 | atomic_dec(); |
2550 | |
2551 | If they're used for statistics generation, then they probably don't need memory |
2552 | barriers, unless there's a coupling between statistical data. |
2553 | |
2554 | If they're used for reference counting on an object to control its lifetime, |
2555 | they probably don't need memory barriers because either the reference count |
2556 | will be adjusted inside a locked section, or the caller will already hold |
sufficient references to make the lock, and thus a memory barrier, unnecessary.
2558 | |
2559 | If they're used for constructing a lock of some description, then they probably |
2560 | do need memory barriers as a lock primitive generally has to do things in a |
2561 | specific order. |
2562 | |
2563 | Basically, each usage case has to be carefully considered as to whether memory |
2564 | barriers are needed or not. |
2565 | |
2566 | The following operations are special locking primitives: |
2567 | |
2568 | test_and_set_bit_lock(); |
2569 | clear_bit_unlock(); |
2570 | __clear_bit_unlock(); |
2571 | |
2572 | These implement ACQUIRE-class and RELEASE-class operations. These should be |
2573 | used in preference to other operations when implementing locking primitives, |
2574 | because their implementations can be optimised on many architectures. |
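
For example, a minimal bit-spinlock sketch, assuming bit 0 of 'word' is
used as the lock bit:

	while (test_and_set_bit_lock(0, &word))	/* ACQUIRE semantics */
		cpu_relax();

	/* ... critical section ... */

	clear_bit_unlock(0, &word);		/* RELEASE semantics */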
2575 | |
2576 | [!] Note that special memory barrier primitives are available for these |
2577 | situations because on some CPUs the atomic instructions used imply full memory |
2578 | barriers, and so barrier instructions are superfluous in conjunction with them, |
2579 | and in such cases the special barrier primitives will be no-ops. |
2580 | |
2581 | See Documentation/atomic_ops.txt for more information. |
2582 | |
2583 | |
2584 | ACCESSING DEVICES |
2585 | ----------------- |
2586 | |
2587 | Many devices can be memory mapped, and so appear to the CPU as if they're just |
2588 | a set of memory locations. To control such a device, the driver usually has to |
2589 | make the right memory accesses in exactly the right order. |
2590 | |
2591 | However, having a clever CPU or a clever compiler creates a potential problem |
2592 | in that the carefully sequenced accesses in the driver code won't reach the |
2593 | device in the requisite order if the CPU or the compiler thinks it is more |
2594 | efficient to reorder, combine or merge accesses - something that would cause |
2595 | the device to malfunction. |
2596 | |
2597 | Inside of the Linux kernel, I/O should be done through the appropriate accessor |
2598 | routines - such as inb() or writel() - which know how to make such accesses |
2599 | appropriately sequential. Whilst this, for the most part, renders the explicit |
2600 | use of memory barriers unnecessary, there are a couple of situations where they |
2601 | might be needed: |
2602 | |
2603 | (1) On some systems, I/O stores are not strongly ordered across all CPUs, and |
2604 | so for _all_ general drivers locks should be used and mmiowb() must be |
2605 | issued prior to unlocking the critical section. |
2606 | |
2607 | (2) If the accessor functions are used to refer to an I/O memory window with |
2608 | relaxed memory access properties, then _mandatory_ memory barriers are |
2609 | required to enforce ordering. |
2610 | |
2611 | See Documentation/DocBook/deviceiobook.tmpl for more information. |
2612 | |
2613 | |
2614 | INTERRUPTS |
2615 | ---------- |
2616 | |
2617 | A driver may be interrupted by its own interrupt service routine, and thus the |
2618 | two parts of the driver may interfere with each other's attempts to control or |
2619 | access the device. |
2620 | |
2621 | This may be alleviated - at least in part - by disabling local interrupts (a |
2622 | form of locking), such that the critical operations are all contained within |
2623 | the interrupt-disabled section in the driver. Whilst the driver's interrupt |
2624 | routine is executing, the driver's core may not run on the same CPU, and its |
2625 | interrupt is not permitted to happen again until the current interrupt has been |
2626 | handled, thus the interrupt handler does not need to lock against that. |
2627 | |
2628 | However, consider a driver that was talking to an ethernet card that sports an |
2629 | address register and a data register. If that driver's core talks to the card |
2630 | under interrupt-disablement and then the driver's interrupt handler is invoked: |
2631 | |
2632 | LOCAL IRQ DISABLE |
	writew(3, ADDR);
	writew(y, DATA);
	LOCAL IRQ ENABLE
	<interrupt>
	writew(4, ADDR);
2638 | q = readw(DATA); |
2639 | </interrupt> |
2640 | |
2641 | The store to the data register might happen after the second store to the |
2642 | address register if ordering rules are sufficiently relaxed: |
2643 | |
2644 | STORE *ADDR = 3, STORE *ADDR = 4, STORE *DATA = y, q = LOAD *DATA |
2645 | |
2646 | |
2647 | If ordering rules are relaxed, it must be assumed that accesses done inside an |
interrupt-disabled section may leak outside of it and may interleave with
2649 | accesses performed in an interrupt - and vice versa - unless implicit or |
2650 | explicit barriers are used. |
2651 | |
2652 | Normally this won't be a problem because the I/O accesses done inside such |
2653 | sections will include synchronous load operations on strictly ordered I/O |
2654 | registers that form implicit I/O barriers. If this isn't sufficient then an |
2655 | mmiowb() may need to be used explicitly. |
2656 | |
2657 | |
2658 | A similar situation may occur between an interrupt routine and two routines |
2659 | running on separate CPUs that communicate with each other. If such a case is |
2660 | likely, then interrupt-disabling locks should be used to guarantee ordering. |
2661 | |
2662 | |
2663 | ========================== |
2664 | KERNEL I/O BARRIER EFFECTS |
2665 | ========================== |
2666 | |
2667 | When accessing I/O memory, drivers should use the appropriate accessor |
2668 | functions: |
2669 | |
2670 | (*) inX(), outX(): |
2671 | |
2672 | These are intended to talk to I/O space rather than memory space, but |
2673 | that's primarily a CPU-specific concept. The i386 and x86_64 processors |
2674 | do indeed have special I/O space access cycles and instructions, but many |
2675 | CPUs don't have such a concept. |
2676 | |
2677 | The PCI bus, amongst others, defines an I/O space concept which - on such |
2678 | CPUs as i386 and x86_64 - readily maps to the CPU's concept of I/O |
2679 | space. However, it may also be mapped as a virtual I/O space in the CPU's |
2680 | memory map, particularly on those CPUs that don't support alternate I/O |
2681 | spaces. |
2682 | |
2683 | Accesses to this space may be fully synchronous (as on i386), but |
2684 | intermediary bridges (such as the PCI host bridge) may not fully honour |
2685 | that. |
2686 | |
2687 | They are guaranteed to be fully ordered with respect to each other. |
2688 | |
2689 | They are not guaranteed to be fully ordered with respect to other types of |
2690 | memory and I/O operation. |
2691 | |
2692 | (*) readX(), writeX(): |
2693 | |
2694 | Whether these are guaranteed to be fully ordered and uncombined with |
2695 | respect to each other on the issuing CPU depends on the characteristics |
2696 | defined for the memory window through which they're accessing. On later |
2697 | i386 architecture machines, for example, this is controlled by way of the |
2698 | MTRR registers. |
2699 | |
2700 | Ordinarily, these will be guaranteed to be fully ordered and uncombined, |
2701 | provided they're not accessing a prefetchable device. |
2702 | |
2703 | However, intermediary hardware (such as a PCI bridge) may indulge in |
2704 | deferral if it so wishes; to flush a store, a load from the same location |
2705 | is preferred[*], but a load from the same device or from configuration |
2706 | space should suffice for PCI. |
2707 | |
2708 | [*] NOTE! attempting to load from the same location as was written to may |
2709 | cause a malfunction - consider the 16550 Rx/Tx serial registers for |
2710 | example. |
2711 | |
2712 | Used with prefetchable I/O memory, an mmiowb() barrier may be required to |
2713 | force stores to be ordered. |
2714 | |
2715 | Please refer to the PCI specification for more information on interactions |
2716 | between PCI transactions. |
2717 | |
2718 | (*) readX_relaxed(), writeX_relaxed() |
2719 | |
2720 | These are similar to readX() and writeX(), but provide weaker memory |
2721 | ordering guarantees. Specifically, they do not guarantee ordering with |
2722 | respect to normal memory accesses (e.g. DMA buffers) nor do they guarantee |
2723 | ordering with respect to LOCK or UNLOCK operations. If the latter is |
2724 | required, an mmiowb() barrier can be used. Note that relaxed accesses to |
2725 | the same peripheral are guaranteed to be ordered with respect to each |
2726 | other. |
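
     For example, if a driver updates a descriptor in DMA-coherent
     memory and then rings a doorbell, a plain writeX() provides the
     needed ordering itself, but with writeX_relaxed() an explicit
     barrier is required (a minimal sketch; 'desc' and DOORBELL are
     hypothetical):

	desc->addr = buf_dma;		/* normal memory write */
	wmb();				/* not implied by writel_relaxed() */
	writel_relaxed(i, DOORBELL);	/* relaxed MMIO doorbell write */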
2727 | |
2728 | (*) ioreadX(), iowriteX() |
2729 | |
2730 | These will perform appropriately for the type of access they're actually |
2731 | doing, be it inX()/outX() or readX()/writeX(). |
2732 | |
2733 | |
2734 | ======================================== |
2735 | ASSUMED MINIMUM EXECUTION ORDERING MODEL |
2736 | ======================================== |
2737 | |
2738 | It has to be assumed that the conceptual CPU is weakly-ordered but that it will |
2739 | maintain the appearance of program causality with respect to itself. Some CPUs |
2740 | (such as i386 or x86_64) are more constrained than others (such as powerpc or |
2741 | frv), and so the most relaxed case (namely DEC Alpha) must be assumed outside |
2742 | of arch-specific code. |
2743 | |
2744 | This means that it must be considered that the CPU will execute its instruction |
2745 | stream in any order it feels like - or even in parallel - provided that if an |
2746 | instruction in the stream depends on an earlier instruction, then that |
2747 | earlier instruction must be sufficiently complete[*] before the later |
2748 | instruction may proceed; in other words: provided that the appearance of |
2749 | causality is maintained. |
2750 | |
2751 | [*] Some instructions have more than one effect - such as changing the |
2752 | condition codes, changing registers or changing memory - and different |
2753 | instructions may depend on different effects. |
2754 | |
2755 | A CPU may also discard any instruction sequence that winds up having no |
2756 | ultimate effect. For example, if two adjacent instructions both load an |
2757 | immediate value into the same register, the first may be discarded. |
2758 | |
2759 | |
Similarly, it has to be assumed that the compiler might reorder the instruction
2761 | stream in any way it sees fit, again provided the appearance of causality is |
2762 | maintained. |
2763 | |
2764 | |
2765 | ============================ |
2766 | THE EFFECTS OF THE CPU CACHE |
2767 | ============================ |
2768 | |
2769 | The way cached memory operations are perceived across the system is affected to |
2770 | a certain extent by the caches that lie between CPUs and memory, and by the |
2771 | memory coherence system that maintains the consistency of state in the system. |
2772 | |
2773 | As far as the way a CPU interacts with another part of the system through the |
2774 | caches goes, the memory system has to include the CPU's caches, and memory |
2775 | barriers for the most part act at the interface between the CPU and its cache |
2776 | (memory barriers logically act on the dotted line in the following diagram): |
2777 | |
            <--- CPU --->         :       <----------- Memory ----------->
                                  :
        +--------+    +--------+  :   +--------+    +-----------+
        |        |    |        |  :   |        |    |           |    +--------+
        |  CPU   |    | Memory |  :   | CPU    |    |           |    |        |
        |  Core  |--->| Access |----->| Cache  |<-->|           |    |        |
        |        |    | Queue  |  :   |        |    |           |--->| Memory |
        |        |    |        |  :   |        |    |           |    |        |
        +--------+    +--------+  :   +--------+    |           |    |        |
                                  :                 | Cache     |    +--------+
                                  :                 | Coherency |
                                  :                 | Mechanism |    +--------+
        +--------+    +--------+  :   +--------+    |           |    |        |
        |        |    |        |  :   |        |    |           |    |        |
        |  CPU   |    | Memory |  :   | CPU    |    |           |--->| Device |
        |  Core  |--->| Access |----->| Cache  |<-->|           |    |        |
        |        |    | Queue  |  :   |        |    |           |    |        |
        |        |    |        |  :   |        |    |           |    +--------+
        +--------+    +--------+  :   +--------+    +-----------+
                                  :
                                  :

Although any particular load or store may not actually appear outside of the
CPU that issued it, since it may have been satisfied within the CPU's own
cache, it will still appear as if the full memory access had taken place as
far as the other CPUs are concerned, since the cache coherency mechanisms
will migrate the cacheline over to the accessing CPU and propagate the
effects upon conflict.

The CPU core may execute instructions in any order it deems fit, provided the
expected program causality appears to be maintained.  Some of the instructions
generate load and store operations which then go into the queue of memory
accesses to be performed.  The core may place these in the queue in any order
it wishes, and continue execution until it is forced to wait for an instruction
to complete.

What memory barriers are concerned with is controlling the order in which
accesses cross from the CPU side of things to the memory side of things, and
the order in which the effects are perceived to happen by the other observers
in the system.

[!] Memory barriers are _not_ needed within a given CPU, as CPUs always see
their own loads and stores as if they had happened in program order.

[!] MMIO or other device accesses may bypass the cache system.  This depends on
the properties of the memory window through which devices are accessed and/or
the use of any special device communication instructions the CPU may have.


CACHE COHERENCY
---------------

Life isn't quite as simple as it may appear above, however: for while the
caches are expected to be coherent, there's no guarantee that that coherency
will be ordered.  This means that whilst changes made on one CPU will
eventually become visible on all CPUs, there's no guarantee that they will
become apparent in the same order on those other CPUs.

Consider dealing with a system that has a pair of CPUs (1 & 2), each of which
has a pair of parallel data caches (CPU 1 has A/B, and CPU 2 has C/D):

                    :
                    :                          +--------+
                    :      +---------+         |        |
        +--------+  : +--->| Cache A |<------->|        |
        |        |  : |    +---------+         |        |
        |  CPU 1 |<---+                        |        |
        |        |  : |    +---------+         |        |
        +--------+  : +--->| Cache B |<------->|        |
                    :      +---------+         |        |
                    :                          | Memory |
                    :      +---------+         | System |
        +--------+  : +--->| Cache C |<------->|        |
        |        |  : |    +---------+         |        |
        |  CPU 2 |<---+                        |        |
        |        |  : |    +---------+         |        |
        +--------+  : +--->| Cache D |<------->|        |
                    :      +---------+         |        |
                    :                          +--------+
                    :

Imagine the system has the following properties:

 (*) an odd-numbered cache line may be in cache A, cache C or it may still be
     resident in memory;

 (*) an even-numbered cache line may be in cache B, cache D or it may still be
     resident in memory;

 (*) whilst the CPU core is interrogating one cache, the other cache may be
     making use of the bus to access the rest of the system - perhaps to
     displace a dirty cacheline or to do a speculative load;

 (*) each cache has a queue of operations that need to be applied to that cache
     to maintain coherency with the rest of the system;

 (*) the coherency queue is not flushed by normal loads to lines already
     present in the cache, even though the contents of the queue may
     potentially affect those loads.

Imagine, then, that two writes are made on the first CPU, with a write barrier
between them to guarantee that they will appear to reach that CPU's caches in
the requisite order:

        CPU 1           CPU 2           COMMENT
        =============== =============== =======================================
                                        u == 0, v == 1 and p == &u, q == &u
        v = 2;
        smp_wmb();                      Make sure change to v is visible before
                                        change to p
        <A:modify v=2>                  v is now in cache A exclusively
        p = &v;
        <B:modify p=&v>                 p is now in cache B exclusively

The write memory barrier forces the other CPUs in the system to perceive that
the local CPU's caches have apparently been updated in the correct order.  But
now imagine that the second CPU wants to read those values:

        CPU 1           CPU 2           COMMENT
        =============== =============== =======================================
        ...
                        q = p;
                        x = *q;

The above pair of reads may then fail to happen in the expected order, as the
cacheline holding p may get updated in one of the second CPU's caches whilst
the update to the cacheline holding v is delayed in the other of the second
CPU's caches by some other cache event:

        CPU 1           CPU 2           COMMENT
        =============== =============== =======================================
                                        u == 0, v == 1 and p == &u, q == &u
        v = 2;
        smp_wmb();
        <A:modify v=2>  <C:busy>
                        <C:queue v=2>
        p = &v;         q = p;
                        <D:request p>
        <B:modify p=&v> <D:commit p=&v>
                        <D:read p>
                        x = *q;
                        <C:read *q>     Reads from v before v updated in cache
                        <C:unbusy>
                        <C:commit v=2>

Basically, whilst both cachelines will be updated on CPU 2 eventually, there's
no guarantee that, without intervention, the order of update will be the same
as that committed on CPU 1.


To intervene, we need to interpolate a data dependency barrier or a read
barrier between the loads.  This will force the cache to commit its coherency
queue before processing any further requests:

        CPU 1           CPU 2           COMMENT
        =============== =============== =======================================
                                        u == 0, v == 1 and p == &u, q == &u
        v = 2;
        smp_wmb();
        <A:modify v=2>  <C:busy>
                        <C:queue v=2>
        p = &v;         q = p;
                        <D:request p>
        <B:modify p=&v> <D:commit p=&v>
                        <D:read p>
                        smp_read_barrier_depends()
                        <C:unbusy>
                        <C:commit v=2>
                        x = *q;
                        <C:read *q>     Reads from v after v updated in cache

This sort of problem can be encountered on DEC Alpha processors as they have a
split cache that improves performance by making better use of the data bus.
Whilst most CPUs do imply a data dependency barrier on the read when a memory
access depends on a read, not all do, so it may not be relied on.

Other CPUs may also have split caches, but must coordinate between the various
cachelets for normal memory accesses.  The semantics of the Alpha remove the
need for coordination in the absence of memory barriers.

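In code, the sequences above amount to the classic pointer-publication
pattern.  A minimal sketch, using the primitives this document has already
described (the structure and variable names are assumed for illustration):

        struct foo { int a; };

        struct foo new_rec;
        struct foo *ptr;                        /* shared with the reader */

        void writer(void)                       /* runs on CPU 1 */
        {
                new_rec.a = 2;
                smp_wmb();                      /* commit the data before
                                                   publishing the pointer */
                WRITE_ONCE(ptr, &new_rec);
        }

        int reader(void)                        /* runs on CPU 2 */
        {
                struct foo *q = READ_ONCE(ptr);

                smp_read_barrier_depends();     /* flush the coherency queue
                                                   before the dependent load */
                return q ? q->a : -1;
        }

In practice the reader side would normally be written with rcu_dereference(),
which includes the dependency barrier.
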

CACHE COHERENCY VS DMA
----------------------

Not all systems maintain cache coherency with respect to devices doing DMA.  In
such cases, a device attempting DMA may obtain stale data from RAM because
dirty cache lines may be resident in the caches of various CPUs, and may not
have been written back to RAM yet.  To deal with this, the appropriate part of
the kernel must flush the overlapping bits of cache on each CPU (and maybe
invalidate them as well).

In addition, the data DMA'd to RAM by a device may be overwritten by dirty
cache lines being written back to RAM from a CPU's cache after the device has
installed its own data, or cache lines present in the CPU's cache may simply
obscure the fact that RAM has been updated, until such time as the cacheline
is discarded from the CPU's cache and reloaded.  To deal with this, the
appropriate part of the kernel must invalidate the overlapping bits of the
cache on each CPU.

See Documentation/cachetlb.txt for more information on cache management.

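In driver code, this flushing and invalidation is not normally done by hand;
the DMA mapping API takes care of it as part of the mapping operations.  A
minimal sketch (dev, buf and len are assumed to be set up elsewhere):

        #include <linux/dma-mapping.h>

        /* CPU -> device: mapping for DMA_TO_DEVICE writes back any dirty
           cachelines overlapping buf before the device is allowed to read it */
        dma_addr_t handle = dma_map_single(dev, buf, len, DMA_TO_DEVICE);

        if (dma_mapping_error(dev, handle))
                return -ENOMEM;

        /* ... point the device at handle and let it run ... */

        dma_unmap_single(dev, handle, len, DMA_TO_DEVICE);

A device-to-CPU transfer would instead use DMA_FROM_DEVICE, which invalidates
the overlapping cachelines so that stale cached data cannot obscure what the
device wrote.
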

CACHE COHERENCY VS MMIO
-----------------------

Memory mapped I/O usually takes place through memory locations that are part of
a window in the CPU's memory space that has different properties assigned than
the usual RAM-directed window.

Amongst these properties is usually the fact that such accesses bypass the
caching entirely and go directly to the device buses.  This means MMIO accesses
may, in effect, overtake accesses to cached memory that were emitted earlier.
A memory barrier isn't sufficient in such a case, but rather the cache must be
flushed between the cached memory write and the MMIO access if the two are in
any way dependent.


=========================
THE THINGS CPUS GET UP TO
=========================

A programmer might take it for granted that the CPU will perform memory
operations in exactly the order specified, so that if the CPU is, for example,
given the following piece of code to execute:

        a = READ_ONCE(*A);
        WRITE_ONCE(*B, b);
        c = READ_ONCE(*C);
        d = READ_ONCE(*D);
        WRITE_ONCE(*E, e);

they would then expect that the CPU will complete the memory operation for each
instruction before moving on to the next one, leading to a definite sequence of
operations as seen by external observers in the system:

        LOAD *A, STORE *B, LOAD *C, LOAD *D, STORE *E.


Reality is, of course, much messier.  With many CPUs and compilers, the above
assumption doesn't hold because:

 (*) loads are more likely to need to be completed immediately to permit
     execution progress, whereas stores can often be deferred without a
     problem;

 (*) loads may be done speculatively, and the result discarded should it prove
     to have been unnecessary;

 (*) loads may be done speculatively, leading to the result having been fetched
     at the wrong time in the expected sequence of events;

 (*) the order of the memory accesses may be rearranged to promote better use
     of the CPU buses and caches;

 (*) loads and stores may be combined to improve performance when talking to
     memory or I/O hardware that can do batched accesses of adjacent locations,
     thus cutting down on transaction setup costs (memory and PCI devices may
     both be able to do this); and

 (*) the CPU's data cache may affect the ordering, and whilst cache-coherency
     mechanisms may alleviate this - once the store has actually hit the cache
     - there's no guarantee that the coherency management will be propagated in
     order to other CPUs.

So what another CPU, say, might actually observe from the above piece of code
is:

        LOAD *A, ..., LOAD {*C,*D}, STORE *E, STORE *B

        (Where "LOAD {*C,*D}" is a combined load)


However, it is guaranteed that a CPU will be self-consistent: it will see its
_own_ accesses appear to be correctly ordered, without the need for a memory
barrier.  For instance, with the following code:

        U = READ_ONCE(*A);
        WRITE_ONCE(*A, V);
        WRITE_ONCE(*A, W);
        X = READ_ONCE(*A);
        WRITE_ONCE(*A, Y);
        Z = READ_ONCE(*A);

and assuming no intervention by an external influence, it can be assumed that
the final result will appear to be:

        U == the original value of *A
        X == W
        Z == Y
        *A == Y

The code above may cause the CPU to generate the full sequence of memory
accesses:

        U=LOAD *A, STORE *A=V, STORE *A=W, X=LOAD *A, STORE *A=Y, Z=LOAD *A

in that order, but, without intervention, the sequence may have almost any
combination of elements combined or discarded, provided the program's view
of the world remains consistent.  Note that READ_ONCE() and WRITE_ONCE()
are -not- optional in the above example, as there are architectures
where a given CPU might reorder successive loads to the same location.
On such architectures, READ_ONCE() and WRITE_ONCE() do whatever is
necessary to prevent this; for example, on Itanium the volatile casts
used by READ_ONCE() and WRITE_ONCE() cause GCC to emit the special ld.acq
and st.rel instructions (respectively) that prevent such reordering.

The compiler may also combine, discard or defer elements of the sequence before
the CPU even sees them.

For instance:

        *A = V;
        *A = W;

may be reduced to:

        *A = W;

since, without either a write barrier or a WRITE_ONCE(), it can be
assumed that the effect of the storage of V to *A is lost.  Similarly:

        *A = Y;
        Z = *A;

may, without a memory barrier or a READ_ONCE() and WRITE_ONCE(), be
reduced to:

        *A = Y;
        Z = Y;

and the LOAD operation never appears outside of the CPU.

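Conversely, a sketch of the same two fragments written so that every access
survives into the emitted code (implied by the discussion above rather than
spelled out there):

        WRITE_ONCE(*A, V);      /* both stores are now emitted...          */
        WRITE_ONCE(*A, W);      /* ...neither may be discarded or merged   */

        WRITE_ONCE(*A, Y);
        Z = READ_ONCE(*A);      /* a real LOAD is emitted rather than the
                                   compiler substituting Z = Y             */
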

AND THEN THERE'S THE ALPHA
--------------------------

The DEC Alpha CPU is one of the most relaxed CPUs there is.  Not only that,
some versions of the Alpha CPU have a split data cache, permitting them to have
two semantically-related cache lines updated at separate times.  This is where
the data dependency barrier really becomes necessary as this synchronises both
caches with the memory coherence system, thus making it seem like pointer
changes vs new data occur in the right order.

The Alpha defines the Linux kernel's memory barrier model.

See the subsection on "Cache Coherency" above.


VIRTUAL MACHINE GUESTS
----------------------

Guests running within virtual machines might be affected by SMP effects even if
the guest itself is compiled without SMP support.  This is an artifact of
interfacing with an SMP host while running a UP kernel.  Using mandatory
barriers for this use-case would be possible but is often suboptimal.

To handle this case optimally, low-level virt_mb() etc. macros are available.
These have the same effect as smp_mb() etc. when SMP is enabled, but generate
identical code for SMP and non-SMP systems.  For example, virtual machine
guests should use virt_mb() rather than smp_mb() when synchronizing against a
(possibly SMP) host.

These are equivalent to their smp_mb() etc. counterparts in all other
respects; in particular, they do not control MMIO effects: to control MMIO
effects, use mandatory barriers.

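For instance, a guest-side producer publishing entries to a (possibly SMP)
host might look like the following sketch.  The ring layout and all of the
names here are assumed for illustration, loosely modelled on a virtio-style
ring:

        struct ring {
                unsigned int idx;               /* written by guest,
                                                   polled by host */
                u64 slot[RING_SIZE];
        };

        void publish(struct ring *r, u64 desc)
        {
                r->slot[r->idx % RING_SIZE] = desc;
                virt_wmb();                     /* make the slot contents
                                                   visible to the host... */
                WRITE_ONCE(r->idx, r->idx + 1); /* ...before the index that
                                                   announces them */
        }

On an SMP guest this compiles to the same code as smp_wmb(); on a UP guest it
still emits a real barrier, because the host may be SMP even when the guest
is not.
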

============
EXAMPLE USES
============

CIRCULAR BUFFERS
----------------

Memory barriers can be used to implement circular buffering without the need
of a lock to serialise the producer with the consumer.  See:

        Documentation/circular-buffers.txt

for details.

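To give a flavour of the technique, here is a sketch of the core of such a
buffer (names assumed; see the file above for the full treatment, including
the CIRC_SPACE()/CIRC_CNT() helpers from linux/circ_buf.h used here).  The
producer's release store pairs with the consumer's acquire load:

        /* producer */
        unsigned long head = buffer->head;
        unsigned long tail = READ_ONCE(buffer->tail);   /* consumer's index */

        if (CIRC_SPACE(head, tail, buffer->size) >= 1) {
                buffer->item[head] = produce_item();    /* fill the slot... */
                smp_store_release(&buffer->head,        /* ...before publishing
                                                           the new head */
                                  (head + 1) & (buffer->size - 1));
        }

        /* consumer */
        unsigned long head = smp_load_acquire(&buffer->head);
        unsigned long tail = buffer->tail;

        if (CIRC_CNT(head, tail, buffer->size) >= 1) {
                consume_item(buffer->item[tail]);       /* read the slot... */
                smp_store_release(&buffer->tail,        /* ...before freeing it
                                                           for the producer */
                                  (tail + 1) & (buffer->size - 1));
        }
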

==========
REFERENCES
==========

Alpha AXP Architecture Reference Manual, Second Edition (Sites & Witek,
Digital Press)
        Chapter 5.2: Physical Address Space Characteristics
        Chapter 5.4: Caches and Write Buffers
        Chapter 5.5: Data Sharing
        Chapter 5.6: Read/Write Ordering

AMD64 Architecture Programmer's Manual Volume 2: System Programming
        Chapter 7.1: Memory-Access Ordering
        Chapter 7.4: Buffering and Combining Memory Writes

IA-32 Intel Architecture Software Developer's Manual, Volume 3:
System Programming Guide
        Chapter 7.1: Locked Atomic Operations
        Chapter 7.2: Memory Ordering
        Chapter 7.4: Serializing Instructions

The SPARC Architecture Manual, Version 9
        Chapter 8: Memory Models
        Appendix D: Formal Specification of the Memory Models
        Appendix J: Programming with the Memory Models

UltraSPARC Programmer Reference Manual
        Chapter 5: Memory Accesses and Cacheability
        Chapter 15: Sparc-V9 Memory Models

UltraSPARC III Cu User's Manual
        Chapter 9: Memory Models

UltraSPARC IIIi Processor User's Manual
        Chapter 8: Memory Models

UltraSPARC Architecture 2005
        Chapter 9: Memory
        Appendix D: Formal Specifications of the Memory Models

UltraSPARC T1 Supplement to the UltraSPARC Architecture 2005
        Chapter 8: Memory Models
        Appendix F: Caches and Cache Coherency

Solaris Internals, Core Kernel Architecture, p63-68:
        Chapter 3.3: Hardware Considerations for Locks and
                     Synchronization

Unix Systems for Modern Architectures, Symmetric Multiprocessing and Caching
for Kernel Programmers:
        Chapter 13: Other Memory Models

Intel Itanium Architecture Software Developer's Manual: Volume 1:
        Section 2.6: Speculation
        Section 4.4: Memory Access
3216 |