this_cpu operations
-------------------

this_cpu operations are a way of optimizing access to per cpu
variables associated with the *currently* executing processor. This is
done through the use of segment registers (or a dedicated register where
the cpu permanently stores the beginning of the per cpu area for a
specific processor).

this_cpu operations add a per cpu variable offset to the processor
specific per cpu base and encode that operation in the instruction
operating on the per cpu variable.

This means that there are no atomicity issues between the calculation of
the offset and the operation on the data. Therefore it is not
necessary to disable preemption or interrupts to ensure that the
processor is not changed between the calculation of the address and
the operation on the data.

Read-modify-write operations are of particular interest. Frequently
processors have special lower latency instructions that can operate
without the typical synchronization overhead, but still provide some
sort of relaxed atomicity guarantees. The x86, for example, can execute
RMW (Read Modify Write) instructions like inc/dec/cmpxchg without the
lock prefix and the associated latency penalty.

Access to the variable without the lock prefix is not synchronized but
synchronization is not necessary since we are dealing with per cpu
data specific to the currently executing processor. Only the current
processor should be accessing that variable and therefore there are no
concurrency issues with other processors in the system.

Please note that accesses by remote processors to a per cpu area are
exceptional situations; remote write operations in particular may
impact the performance and/or correctness of local RMW operations
done via this_cpu_*.

The main use of the this_cpu operations has been to optimize counter
operations.

The following this_cpu() operations with implied preemption protection
are defined. These operations can be used without worrying about
preemption and interrupts.

	this_cpu_read(pcp)
	this_cpu_write(pcp, val)
	this_cpu_add(pcp, val)
	this_cpu_and(pcp, val)
	this_cpu_or(pcp, val)
	this_cpu_add_return(pcp, val)
	this_cpu_xchg(pcp, nval)
	this_cpu_cmpxchg(pcp, oval, nval)
	this_cpu_cmpxchg_double(pcp1, pcp2, oval1, oval2, nval1, nval2)
	this_cpu_sub(pcp, val)
	this_cpu_inc(pcp)
	this_cpu_dec(pcp)
	this_cpu_sub_return(pcp, val)
	this_cpu_inc_return(pcp)
	this_cpu_dec_return(pcp)
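
For example, a minimal sketch of the fast path of the counter use
case mentioned above (the variable name nr_hits is hypothetical, not
part of the kernel API):

	DEFINE_PER_CPU(long, nr_hits);

	static void count_hit(void)
	{
		/* no preempt_disable()/preempt_enable() needed */
		this_cpu_inc(nr_hits);
	}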


Inner working of this_cpu operations
------------------------------------

On x86 the fs: or the gs: segment registers contain the base of the
per cpu area. It is then possible to simply use the segment override
to relocate a per cpu relative address to the proper per cpu area for
the processor. So the relocation to the per cpu base is encoded in the
instruction via a segment register prefix.

For example:

	DEFINE_PER_CPU(int, x);
	int z;

	z = this_cpu_read(x);

results in a single instruction

	mov ax, gs:[x]

instead of a sequence of calculation of the address and then a fetch
from that address which occurs with the per cpu operations. Before
this_cpu_ops such a sequence also required preempt disable/enable to
prevent the kernel from moving the thread to a different processor
while the calculation is performed.

Consider the following this_cpu operation:

	this_cpu_inc(x)

The above results in the following single instruction (no lock prefix!)

	inc gs:[x]

instead of the following operations required if there is no segment
register:

	int *y;
	int cpu;

	cpu = get_cpu();		/* disables preemption, returns cpu number */
	y = per_cpu_ptr(&x, cpu);	/* calculate the address for this cpu */
	(*y)++;
	put_cpu();			/* re-enables preemption */

Note that these operations can only be used on per cpu data that is
reserved for a specific processor. Without disabling preemption in the
surrounding code this_cpu_inc() will only guarantee that one of the
per cpu counters is correctly incremented. However, there is no
guarantee that the OS will not move the process directly before or
after the this_cpu instruction is executed. In general this means that
the values of the individual counters for each processor are
meaningless. The sum of all the per cpu counters is the only value
that is of interest.

Per cpu variables are used for performance reasons. Bouncing cache
lines can be avoided if multiple processors concurrently go through
the same code paths. Since each processor has its own per cpu
variables no concurrent cache line updates take place. The price that
has to be paid for this optimization is the need to add up the per cpu
counters when the value of a counter is needed.
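
As a sketch of that summation, assuming the hypothetical nr_hits
counter from above, the per cpu instances are folded into a total
with per_cpu() (remote reads like this are generally safe):

	static long total_hits(void)
	{
		long sum = 0;
		int cpu;

		/* read each processor's instance and add them up */
		for_each_possible_cpu(cpu)
			sum += per_cpu(nr_hits, cpu);
		return sum;
	}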


Special operations:
-------------------

	y = this_cpu_ptr(&x)

Takes the offset of a per cpu variable (&x !) and returns the address
of the per cpu variable that belongs to the currently executing
processor. this_cpu_ptr avoids multiple steps that the common
get_cpu/put_cpu sequence requires. No processor number is
available. Instead, the offset of the local per cpu area is simply
added to the per cpu offset.

Note that this operation is usually used in a code segment when
preemption has been disabled. The pointer is then used to
access local per cpu data in a critical section. When preemption
is re-enabled this pointer is usually no longer useful since it may
no longer point to per cpu data of the current processor.
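
A sketch of that usual pattern (struct data and its refcnt field are
hypothetical):

	struct data {
		int refcnt;
	};

	DEFINE_PER_CPU(struct data, datap);

	struct data *d;

	preempt_disable();
	d = this_cpu_ptr(&datap);	/* relocate to this cpu's area once */
	d->refcnt++;			/* several accesses via one pointer */
	preempt_enable();		/* d must not be used after this */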


Per cpu variables and offsets
-----------------------------

Per cpu variables have *offsets* to the beginning of the per cpu
area. They do not have addresses although they look like that in the
code. Offsets cannot be directly dereferenced. The offset must be
added to a base pointer of a per cpu area of a processor in order to
form a valid address.

Therefore the use of x or &x outside of the context of per cpu
operations is invalid and will generally be treated like a NULL
pointer dereference.

	DEFINE_PER_CPU(int, x);

In the context of per cpu operations the above implies that x is a per
cpu variable. Most this_cpu operations take a cpu variable.

	int __percpu *p = &x;

&x and hence p is the *offset* of a per cpu variable. this_cpu_ptr()
takes the offset of a per cpu variable which makes this look a bit
strange.
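
To make the distinction concrete, a hedged sketch of valid and
invalid uses of such an offset (val is just a local variable for
illustration):

	DEFINE_PER_CPU(int, x);
	int __percpu *p = &x;
	int val;

	val = x;		/* invalid: x is an offset, not an address */
	val = this_cpu_read(x);	/* ok: the operation performs the relocation */
	val = *this_cpu_ptr(p);	/* ok: p is first converted to a real address */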


Operations on a field of a per cpu structure
--------------------------------------------

Let's say we have a percpu structure

	struct s {
		int n, m;
	};

	DEFINE_PER_CPU(struct s, p);

Operations on these fields are straightforward

	this_cpu_inc(p.m)

	z = this_cpu_cmpxchg(p.m, 0, 1);

If we have an offset to struct s:

	struct s __percpu *ps = &p;

	this_cpu_dec(ps->m);

	z = this_cpu_inc_return(ps->n);

The calculation of the pointer may require the use of this_cpu_ptr()
if we do not make use of this_cpu ops later to manipulate fields:

	struct s *pp;

	pp = this_cpu_ptr(&p);

	pp->m--;

	z = pp->n++;


Variants of this_cpu ops
------------------------

this_cpu ops are interrupt safe. Some architectures do not support
these per cpu local operations. In that case the operation must be
replaced by code that disables interrupts, then does the operations
that are guaranteed to be atomic and then re-enables interrupts. Doing
so is expensive. If there are other reasons why the scheduler cannot
change the processor we are executing on then there is no reason to
disable interrupts. For that purpose the following __this_cpu operations
are provided.

These operations have no guarantee against concurrent interrupts or
preemption. If a per cpu variable is not used in an interrupt context
and the scheduler cannot preempt, then they are safe. If any interrupts
still occur while an operation is in progress and if the interrupt too
modifies the variable, then RMW actions cannot be guaranteed to be
safe.

	__this_cpu_read(pcp)
	__this_cpu_write(pcp, val)
	__this_cpu_add(pcp, val)
	__this_cpu_and(pcp, val)
	__this_cpu_or(pcp, val)
	__this_cpu_add_return(pcp, val)
	__this_cpu_xchg(pcp, nval)
	__this_cpu_cmpxchg(pcp, oval, nval)
	__this_cpu_cmpxchg_double(pcp1, pcp2, oval1, oval2, nval1, nval2)
	__this_cpu_sub(pcp, val)
	__this_cpu_inc(pcp)
	__this_cpu_dec(pcp)
	__this_cpu_sub_return(pcp, val)
	__this_cpu_inc_return(pcp)
	__this_cpu_dec_return(pcp)

__this_cpu_inc(x), for example, will increment x and will not fall
back to code that disables interrupts on platforms that cannot
accomplish atomicity through address relocation and a
Read-Modify-Write operation in the same instruction.
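
A sketch of when the cheaper variant is appropriate: the surrounding
code already disables preemption, and the (hypothetical) nr_hits
counter is never modified from interrupt context:

	preempt_disable();
	/* preemption is already off, so __this_cpu_inc() suffices */
	__this_cpu_inc(nr_hits);
	preempt_enable();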


&this_cpu_ptr(pp)->n vs this_cpu_ptr(&pp->n)
--------------------------------------------

The first operation takes the offset and forms an address and then
adds the offset of the n field. This may result in two add
instructions emitted by the compiler.

The second one first adds the two offsets and then does the
relocation. IMHO the second form looks cleaner and has an easier time
with parentheses. It is also consistent with the way this_cpu_read()
and friends are used.
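
In code, the two forms from the heading look as follows (reusing the
struct s and ps declarations from the previous section):

	int *np;

	/* relocate ps first, then add the offset of n:
	   possibly two address calculations */
	np = &this_cpu_ptr(ps)->n;

	/* add the offset of n to the per cpu offset first,
	   then relocate once */
	np = this_cpu_ptr(&ps->n);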


Remote access to per cpu data
-----------------------------

Per cpu data structures are designed to be used by one cpu exclusively.
If you use the variables as intended, this_cpu ops are guaranteed to
be "atomic" as no other CPU has access to these data structures.

There are special cases where you might need to access per cpu data
structures remotely. It is usually safe to do a remote read access
and that is frequently done to summarize counters. Remote write access
is something which could be problematic because this_cpu ops do not
have lock semantics. A remote write may interfere with a this_cpu
RMW operation.

Remote write accesses to percpu data structures are highly discouraged
unless absolutely necessary. Please consider using an IPI to wake up
the remote CPU and perform the update to its per cpu area.
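
A sketch of the IPI approach via smp_call_function_single(); the
helper and the remote_flag variable are hypothetical:

	DEFINE_PER_CPU(int, remote_flag);

	static void set_remote_flag(void *info)
	{
		/* runs on the target cpu, so this_cpu ops are safe here */
		this_cpu_write(remote_flag, *(int *)info);
	}

	static void update_remote(int cpu)
	{
		int val = 1;

		/* run set_remote_flag() on the target cpu and wait */
		smp_call_function_single(cpu, set_remote_flag, &val, 1);
	}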

To access per cpu data structures remotely, typically the per_cpu_ptr()
function is used:

	DEFINE_PER_CPU(struct data, datap);

	struct data *p = per_cpu_ptr(&datap, cpu);

This makes it explicit that we are getting ready to access a percpu
area remotely.

You can also do the following to convert the datap offset to an address:

	struct data *p = this_cpu_ptr(&datap);

but passing pointers calculated via this_cpu_ptr to other cpus is
unusual and should be avoided.

Remote accesses are typically only for reading the status of another
cpu's per cpu data. Write accesses can cause unique problems due to
the relaxed synchronization requirements for this_cpu operations.

One example that illustrates some concerns with write operations is
the following scenario that occurs because two per cpu variables
share a cache line but the relaxed synchronization is applied to
only one of the updaters of the cache line.

Consider the following example:

	struct test {
		atomic_t a;
		int b;
	};

	DEFINE_PER_CPU(struct test, onecacheline);

There is some concern about what would happen if the field 'a' is updated
remotely from one processor and the local processor would use this_cpu ops
to update field b. Care should be taken that such simultaneous accesses to
data within the same cache line are avoided. Also costly synchronization
may be necessary. IPIs are generally recommended in such scenarios instead
of a remote write to the per cpu area of another processor.
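
One way to avoid the shared line, as a sketch: place the locally
updated field on its own cache line using the kernel's
____cacheline_aligned_in_smp attribute.

	struct test {
		atomic_t a;				/* may be written remotely */
		int b ____cacheline_aligned_in_smp;	/* local this_cpu updates */
	};

This trades memory for keeping remote writes to 'a' off the cache
line that holds 'b'.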

Even in cases where the remote writes are rare, please bear in
mind that a remote write will evict the cache line from the processor
that most likely will access it. If the processor wakes up and finds a
missing local cache line of a per cpu area, its performance and hence
the wake up times will be affected.

Christoph Lameter, August 4th, 2014
Pranith Kumar, Aug 2nd, 2014