Dynamic DMA mapping using the generic device
============================================

James E.J. Bottomley <James.Bottomley@HansenPartnership.com>

This document describes the DMA API. For a more gentle introduction
to the API (and actual examples), see Documentation/DMA-API-HOWTO.txt.

This API is split into two pieces. Part I describes the basic API.
Part II describes extensions for supporting non-consistent memory
machines. Unless you know that your driver absolutely has to support
non-consistent platforms (this is usually only legacy platforms) you
should only use the API described in part I.

Part I - dma_ API
-------------------------------------

To get the dma_ API, you must #include <linux/dma-mapping.h>. This
provides dma_addr_t and the interfaces described below.

A dma_addr_t can hold any valid DMA address for the platform. It can be
given to a device to use as a DMA source or target. A CPU cannot reference
a dma_addr_t directly because there may be translation between its physical
address space and the DMA address space.

Part Ia - Using large DMA-coherent buffers
------------------------------------------

void *
dma_alloc_coherent(struct device *dev, size_t size,
		   dma_addr_t *dma_handle, gfp_t flag)

Consistent memory is memory for which a write by either the device or
the processor can immediately be read by the processor or device
without having to worry about caching effects. (You may however need
to make sure to flush the processor's write buffers before telling
devices to read that memory.)

This routine allocates a region of <size> bytes of consistent memory.

It returns a pointer to the allocated region (in the processor's virtual
address space) or NULL if the allocation failed.

It also returns a <dma_handle> which may be cast to an unsigned integer the
same width as the bus and given to the device as the DMA address base of
the region.

Note: consistent memory can be expensive on some platforms, and the
minimum allocation length may be as big as a page, so you should
consolidate your requests for consistent memory as much as possible.
The simplest way to do that is to use the dma_pool calls (see below).

The flag parameter (dma_alloc_coherent() only) allows the caller to
specify the GFP_ flags (see kmalloc()) for the allocation (the
implementation may choose to ignore flags that affect the location of
the returned memory, like GFP_DMA).

void *
dma_zalloc_coherent(struct device *dev, size_t size,
		    dma_addr_t *dma_handle, gfp_t flag)

Wraps dma_alloc_coherent() and also zeroes the returned memory if the
allocation attempt succeeded.

void
dma_free_coherent(struct device *dev, size_t size, void *cpu_addr,
		  dma_addr_t dma_handle)

Free a region of consistent memory you previously allocated. dev,
size and dma_handle must all be the same as those passed into
dma_alloc_coherent(). cpu_addr must be the virtual address returned by
dma_alloc_coherent().

Note that unlike their sibling allocation calls, these routines
may only be called with IRQs enabled.
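
As a short illustration, here is a minimal sketch of the allocate/free
pairing. The device pointer "pdev" and the size MY_RING_BYTES are
made-up names for illustration, not part of the API:

	dma_addr_t ring_dma;
	void *ring;

	/* allocate a coherent descriptor ring at probe time */
	ring = dma_alloc_coherent(&pdev->dev, MY_RING_BYTES, &ring_dma,
				  GFP_KERNEL);
	if (!ring)
		return -ENOMEM;

	/* ... hand ring_dma to the device, access "ring" from the CPU ... */

	/* free it again at remove time (IRQs must be enabled here) */
	dma_free_coherent(&pdev->dev, MY_RING_BYTES, ring, ring_dma);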


Part Ib - Using small DMA-coherent buffers
------------------------------------------

To get this part of the dma_ API, you must #include <linux/dmapool.h>

Many drivers need lots of small DMA-coherent memory regions for DMA
descriptors or I/O buffers. Rather than allocating in units of a page
or more using dma_alloc_coherent(), you can use DMA pools. These work
much like a struct kmem_cache, except that they use the DMA-coherent allocator,
not __get_free_pages(). Also, they understand common hardware constraints
for alignment, like queue heads needing to be aligned on N-byte boundaries.


struct dma_pool *
dma_pool_create(const char *name, struct device *dev,
		size_t size, size_t align, size_t boundary);

dma_pool_create() initializes a pool of DMA-coherent buffers
for use with a given device. It must be called in a context which
can sleep.

The "name" is for diagnostics (like a struct kmem_cache name); dev and size
are like what you'd pass to dma_alloc_coherent(). The device's hardware
alignment requirement for this type of data is "align" (which is expressed
in bytes, and must be a power of two). If your device has no boundary
crossing restrictions, pass 0 for boundary; passing 4096 says memory allocated
from this pool must not cross 4KByte boundaries.


void *dma_pool_alloc(struct dma_pool *pool, gfp_t gfp_flags,
		     dma_addr_t *dma_handle);

This allocates memory from the pool; the returned memory will meet the
size and alignment requirements specified at creation time. Pass
GFP_ATOMIC to prevent blocking, or if it's permitted (not
in_interrupt, not holding SMP locks), pass GFP_KERNEL to allow
blocking. Like dma_alloc_coherent(), this returns two values: an
address usable by the CPU, and the DMA address usable by the pool's
device.


void *dma_pool_zalloc(struct dma_pool *pool, gfp_t mem_flags,
		      dma_addr_t *handle)

Wraps dma_pool_alloc() and also zeroes the returned memory if the
allocation attempt succeeded.


void dma_pool_free(struct dma_pool *pool, void *vaddr,
		   dma_addr_t addr);

This puts memory back into the pool. The pool is what was passed to
dma_pool_alloc(); the CPU (vaddr) and DMA addresses are what
were returned when that routine allocated the memory being freed.


void dma_pool_destroy(struct dma_pool *pool);

dma_pool_destroy() frees the resources of the pool. It must be
called in a context which can sleep. Make sure you've freed all allocated
memory back to the pool before you destroy it.
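
Putting these together, a minimal sketch of the pool lifecycle might look
as follows. The device "mydev" and the 32-byte descriptor size are made-up
values for illustration:

	struct dma_pool *pool;
	dma_addr_t desc_dma;
	void *desc;

	/* 32-byte descriptors, 16-byte aligned, no boundary restriction */
	pool = dma_pool_create("mydev_desc", &mydev->dev, 32, 16, 0);
	if (!pool)
		return -ENOMEM;

	desc = dma_pool_zalloc(pool, GFP_KERNEL, &desc_dma);
	if (!desc) {
		dma_pool_destroy(pool);
		return -ENOMEM;
	}

	/* ... give desc_dma to the hardware, fill in desc from the CPU ... */

	dma_pool_free(pool, desc, desc_dma);
	dma_pool_destroy(pool);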


Part Ic - DMA addressing limitations
------------------------------------

int
dma_set_mask_and_coherent(struct device *dev, u64 mask)

Checks to see if the mask is possible and updates the device
streaming and coherent DMA mask parameters if it is.

Returns: 0 if successful and a negative error if not.

int
dma_set_mask(struct device *dev, u64 mask)

Checks to see if the mask is possible and updates the device
parameters if it is.

Returns: 0 if successful and a negative error if not.

int
dma_set_coherent_mask(struct device *dev, u64 mask)

Checks to see if the mask is possible and updates the device
parameters if it is.

Returns: 0 if successful and a negative error if not.

u64
dma_get_required_mask(struct device *dev)

This API returns the mask that the platform requires to
operate efficiently. Usually this means the returned mask
is the minimum required to cover all of memory. Examining the
required mask gives drivers with variable descriptor sizes the
opportunity to use smaller descriptors as necessary.

Requesting the required mask does not alter the current mask. If you
wish to take advantage of it, you should issue a dma_set_mask()
call to set the mask to the value returned.
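
A common probe-time idiom is to try a large mask first and fall back to a
smaller one. A minimal sketch, assuming a PCI device "pdev":

	/* prefer 64-bit addressing, fall back to 32-bit */
	if (dma_set_mask_and_coherent(&pdev->dev, DMA_BIT_MASK(64)) &&
	    dma_set_mask_and_coherent(&pdev->dev, DMA_BIT_MASK(32))) {
		dev_err(&pdev->dev, "no usable DMA addressing\n");
		return -EIO;
	}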


Part Id - Streaming DMA mappings
--------------------------------

dma_addr_t
dma_map_single(struct device *dev, void *cpu_addr, size_t size,
	       enum dma_data_direction direction)

Maps a piece of processor virtual memory so it can be accessed by the
device and returns the DMA address of the memory.

The direction may be converted freely by casting, but the dma_ API
uses a strongly typed enumerator for its direction:

DMA_NONE		no direction (used for debugging)
DMA_TO_DEVICE		data is going from the memory to the device
DMA_FROM_DEVICE		data is coming from the device to the memory
DMA_BIDIRECTIONAL	direction isn't known

Notes: Not all memory regions in a machine can be mapped by this API.
Further, memory that is contiguous in kernel virtual space may not be
contiguous in physical memory. Since this API does not provide any
scatter/gather capability, it will fail if the user tries to map a
non-physically contiguous piece of memory. For this reason, memory to be
mapped by this API should be obtained from sources which guarantee it to
be physically contiguous (like kmalloc).

Further, the DMA address of the memory must be within the
dma_mask of the device (the dma_mask is a bit mask of the
addressable region for the device, i.e., if the DMA address of
the memory ANDed with the dma_mask is still equal to the DMA
address, then the device can perform DMA to the memory). To
ensure that the memory allocated by kmalloc is within the dma_mask,
the driver may specify various platform-dependent flags to restrict
the DMA address range of the allocation (e.g., on x86, GFP_DMA
guarantees to be within the first 16MB of available DMA addresses,
as required by ISA devices).

Note also that the above constraints on physical contiguity and
dma_mask may not apply if the platform has an IOMMU (a device which
maps an I/O DMA address to a physical memory address). However, to be
portable, device driver writers may *not* assume that such an IOMMU
exists.

Warnings: Memory coherency operates at a granularity called the cache
line width. In order for memory mapped by this API to operate
correctly, the mapped region must begin exactly on a cache line
boundary and end exactly on one (to prevent two separately mapped
regions from sharing a single cache line). Since the cache line size
may not be known at compile time, the API will not enforce this
requirement. Therefore, it is recommended that driver writers who
don't take special care to determine the cache line size at run time
only map virtual regions that begin and end on page boundaries (which
are guaranteed also to be cache line boundaries).

DMA_TO_DEVICE synchronisation must be done after the last modification
of the memory region by the software and before it is handed off to
the device. Once this primitive is used, memory covered by this
primitive should be treated as read-only by the device. If the device
may write to it at any point, it should be DMA_BIDIRECTIONAL (see
below).

DMA_FROM_DEVICE synchronisation must be done before the driver
accesses data that may be changed by the device. This memory should
be treated as read-only by the driver. If the driver needs to write
to it at any point, it should be DMA_BIDIRECTIONAL (see below).

DMA_BIDIRECTIONAL requires special handling: it means that the driver
isn't sure if the memory was modified before being handed off to the
device and also isn't sure if the device will also modify it. Thus,
you must always sync bidirectional memory twice: once before the
memory is handed off to the device (to make sure all memory changes
are flushed from the processor) and once before the data may be
accessed after being used by the device (to make sure any processor
cache lines are updated with data that the device may have changed).

void
dma_unmap_single(struct device *dev, dma_addr_t dma_addr, size_t size,
		 enum dma_data_direction direction)

Unmaps the region previously mapped. All the parameters passed in
must be identical to those passed in (and returned) by the mapping
API.

dma_addr_t
dma_map_page(struct device *dev, struct page *page,
	     unsigned long offset, size_t size,
	     enum dma_data_direction direction)
void
dma_unmap_page(struct device *dev, dma_addr_t dma_address, size_t size,
	       enum dma_data_direction direction)

API for mapping and unmapping for pages. All the notes and warnings
for the other mapping APIs apply here. Also, although the <offset>
and <size> parameters are provided to do partial page mapping, it is
recommended that you never use these unless you really know what the
cache width is.

dma_addr_t
dma_map_resource(struct device *dev, phys_addr_t phys_addr, size_t size,
		 enum dma_data_direction dir, unsigned long attrs)

void
dma_unmap_resource(struct device *dev, dma_addr_t addr, size_t size,
		   enum dma_data_direction dir, unsigned long attrs)

API for mapping and unmapping for MMIO resources. All the notes and
warnings for the other mapping APIs apply here. The API should only be
used to map device MMIO resources; mapping of RAM is not permitted.

int
dma_mapping_error(struct device *dev, dma_addr_t dma_addr)

In some circumstances dma_map_single(), dma_map_page() and dma_map_resource()
will fail to create a mapping. A driver can check for these errors by testing
the returned DMA address with dma_mapping_error(). A non-zero return value
means the mapping could not be created and the driver should take appropriate
action (e.g. reduce current DMA mapping usage or delay and try again later).
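
For example, a minimal sketch of the map/check/unmap pattern for a
kmalloc()ed buffer; "buf" and "len" are assumptions of the sketch, not
part of the API:

	dma_addr_t dma;

	/* hand a CPU buffer to the device for reading */
	dma = dma_map_single(dev, buf, len, DMA_TO_DEVICE);
	if (dma_mapping_error(dev, dma))
		return -ENOMEM;	/* or reduce usage / retry later */

	/* ... tell the device to DMA from "dma", wait for completion ... */

	dma_unmap_single(dev, dma, len, DMA_TO_DEVICE);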

int
dma_map_sg(struct device *dev, struct scatterlist *sg,
	   int nents, enum dma_data_direction direction)

Returns: the number of DMA address segments mapped (this may be shorter
than <nents> passed in if some elements of the scatter/gather list are
physically or virtually adjacent and an IOMMU maps them with a single
entry).

Please note that the sg cannot be mapped again if it has been mapped once.
The mapping process is allowed to destroy information in the sg.

As with the other mapping interfaces, dma_map_sg() can fail. When it
does, 0 is returned and a driver must take appropriate action. It is
critical that the driver do something; in the case of a block driver,
aborting the request or even oopsing is better than doing nothing and
corrupting the filesystem.

With scatterlists, you use the resulting mapping like this:

	int i, count = dma_map_sg(dev, sglist, nents, direction);
	struct scatterlist *sg;

	for_each_sg(sglist, sg, count, i) {
		hw_address[i] = sg_dma_address(sg);
		hw_len[i] = sg_dma_len(sg);
	}

where nents is the number of entries in the sglist.

The implementation is free to merge several consecutive sglist entries
into one (e.g. with an IOMMU, or if several pages just happen to be
physically contiguous) and returns the actual number of sg entries it
mapped them to. On failure, 0 is returned.

Then you should loop count times (note: this can be less than nents times)
and use sg_dma_address() and sg_dma_len() macros where you previously
accessed sg->address and sg->length as shown above.

void
dma_unmap_sg(struct device *dev, struct scatterlist *sg,
	     int nents, enum dma_data_direction direction)

Unmap the previously mapped scatter/gather list. All the parameters
must be the same as those passed in to the scatter/gather mapping
API.

Note: <nents> must be the number you passed in, *not* the number of
DMA address entries returned.
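
For example, continuing the scatterlist example above, the unmap call
takes the original nents, not the returned count:

	dma_unmap_sg(dev, sglist, nents, direction);	/* nents, not count */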

void
dma_sync_single_for_cpu(struct device *dev, dma_addr_t dma_handle, size_t size,
			enum dma_data_direction direction)
void
dma_sync_single_for_device(struct device *dev, dma_addr_t dma_handle, size_t size,
			   enum dma_data_direction direction)
void
dma_sync_sg_for_cpu(struct device *dev, struct scatterlist *sg, int nents,
		    enum dma_data_direction direction)
void
dma_sync_sg_for_device(struct device *dev, struct scatterlist *sg, int nents,
		       enum dma_data_direction direction)

Synchronise a single contiguous or scatter/gather mapping for the CPU
and device. With the sync_sg API, all the parameters must be the same
as those passed into the scatter/gather mapping API. With the sync_single
API, you can use dma_handle and size parameters that aren't identical to
those passed into the single mapping API to do a partial sync.

Notes: You must do this:

- Before reading values that have been written by DMA from the device
  (use the DMA_FROM_DEVICE direction)
- After writing values that will be written to the device using DMA
  (use the DMA_TO_DEVICE direction)
- Before *and* after handing memory to the device if the memory is
  DMA_BIDIRECTIONAL

See also dma_map_single().
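
As a sketch of the receive-buffer case, assume a persistent streaming
mapping "rx_dma" of size RX_BUF_SIZE (both made-up names) that was
mapped with DMA_FROM_DEVICE:

	/* the device has finished DMAing a packet into the buffer */
	dma_sync_single_for_cpu(dev, rx_dma, RX_BUF_SIZE, DMA_FROM_DEVICE);

	/* ... the CPU may now safely read the packet data ... */

	/* give the buffer back to the device for the next packet */
	dma_sync_single_for_device(dev, rx_dma, RX_BUF_SIZE, DMA_FROM_DEVICE);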

dma_addr_t
dma_map_single_attrs(struct device *dev, void *cpu_addr, size_t size,
		     enum dma_data_direction dir,
		     unsigned long attrs)

void
dma_unmap_single_attrs(struct device *dev, dma_addr_t dma_addr,
		       size_t size, enum dma_data_direction dir,
		       unsigned long attrs)

int
dma_map_sg_attrs(struct device *dev, struct scatterlist *sgl,
		 int nents, enum dma_data_direction dir,
		 unsigned long attrs)

void
dma_unmap_sg_attrs(struct device *dev, struct scatterlist *sgl,
		   int nents, enum dma_data_direction dir,
		   unsigned long attrs)

The four functions above are just like the counterpart functions
without the _attrs suffixes, except that they pass an optional
dma_attrs.

The interpretation of DMA attributes is architecture-specific, and
each attribute should be documented in Documentation/DMA-attributes.txt.

If dma_attrs are 0, the semantics of each of these functions
is identical to those of the corresponding function
without the _attrs suffix. As a result dma_map_single_attrs()
can generally replace dma_map_single(), etc.

As an example of the use of the *_attrs functions, here's how
you could pass an attribute DMA_ATTR_FOO when mapping memory
for DMA:

#include <linux/dma-mapping.h>
/* DMA_ATTR_FOO should be defined in linux/dma-mapping.h and
 * documented in Documentation/DMA-attributes.txt */
...

	unsigned long attrs = 0;

	attrs |= DMA_ATTR_FOO;
	....
	n = dma_map_sg_attrs(dev, sg, nents, DMA_TO_DEVICE, attrs);
	....

Architectures that care about DMA_ATTR_FOO would check for its
presence in their implementations of the mapping and unmapping
routines, e.g.:

void whizco_dma_map_sg_attrs(struct device *dev, dma_addr_t dma_addr,
			     size_t size, enum dma_data_direction dir,
			     unsigned long attrs)
{
	....
	if (attrs & DMA_ATTR_FOO)
		/* twizzle the frobnozzle */
	....
}


Part II - Advanced dma_ usage
-----------------------------

Warning: These pieces of the DMA API should not be used in the
majority of cases, since they cater for unlikely corner cases that
don't belong in usual drivers.

If you don't understand how cache line coherency works between a
processor and an I/O device, you should not be using this part of the
API at all.

void *
dma_alloc_noncoherent(struct device *dev, size_t size,
		      dma_addr_t *dma_handle, gfp_t flag)

Identical to dma_alloc_coherent() except that the platform will
choose to return either consistent or non-consistent memory as it sees
fit. By using this API, you are guaranteeing to the platform that you
have all the correct and necessary sync points for this memory in the
driver should it choose to return non-consistent memory.

Note: where the platform can return consistent memory, it will
guarantee that the sync points become nops.

Warning: Handling non-consistent memory is a real pain. You should
only use this API if you positively know your driver will be
required to work on one of the rare (usually non-PCI) architectures
that simply cannot make consistent memory.

void
dma_free_noncoherent(struct device *dev, size_t size, void *cpu_addr,
		     dma_addr_t dma_handle)

Free memory allocated by the nonconsistent API. All parameters must
be identical to those passed in to (and returned by)
dma_alloc_noncoherent().

int
dma_get_cache_alignment(void)

Returns the processor cache alignment. This is the absolute minimum
alignment *and* width that you must observe when either mapping
memory or doing partial flushes.

Notes: This API may return a number *larger* than the actual cache
line, but it will guarantee that one or more cache lines fit exactly
into the width returned by this call. It will also always be a power
of two for easy alignment.
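
For example, a sketch of rounding a buffer length up so that it occupies
whole units of the returned width ("payload_len" is a made-up variable):

	size_t width = dma_get_cache_alignment();
	size_t len = ALIGN(payload_len, width);	/* ALIGN() from <linux/kernel.h> */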

void
dma_cache_sync(struct device *dev, void *vaddr, size_t size,
	       enum dma_data_direction direction)

Do a partial sync of memory that was allocated by
dma_alloc_noncoherent(), starting at virtual address vaddr and
continuing on for size. Again, you *must* observe the cache line
boundaries when doing this.
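
A sketch, assuming "desc" points into a dma_alloc_noncoherent() region
and is cache-line aligned ("desc" and MY_CMD_START are made up for
illustration):

	/* the CPU has just written the descriptor; push it out so the
	 * device sees the update even on a non-consistent platform */
	desc->cmd = cpu_to_le32(MY_CMD_START);
	dma_cache_sync(dev, desc, sizeof(*desc), DMA_TO_DEVICE);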

int
dma_declare_coherent_memory(struct device *dev, phys_addr_t phys_addr,
			    dma_addr_t device_addr, size_t size,
			    int flags)

Declare region of memory to be handed out by dma_alloc_coherent() when
it's asked for coherent memory for this device.

phys_addr is the CPU physical address to which the memory is currently
assigned (this will be ioremapped so the CPU can access the region).

device_addr is the DMA address the device needs to be programmed
with to actually address this memory (this will be handed out as the
dma_addr_t in dma_alloc_coherent()).

size is the size of the area (must be a multiple of PAGE_SIZE).

flags can be ORed together and are:

DMA_MEMORY_MAP - request that the memory returned from
dma_alloc_coherent() be directly writable.

DMA_MEMORY_IO - request that the memory returned from
dma_alloc_coherent() be addressable using read()/write()/memcpy_toio() etc.

One or both of these flags must be present.

DMA_MEMORY_INCLUDES_CHILDREN - make the declared memory be allocated by
dma_alloc_coherent of any child devices of this one (for memory residing
on a bridge).

DMA_MEMORY_EXCLUSIVE - only allocate memory from the declared regions.
Do not allow dma_alloc_coherent() to fall back to system memory when
it's out of memory in the declared region.

The return value will be either DMA_MEMORY_MAP or DMA_MEMORY_IO and
must correspond to a passed in flag (i.e. no returning DMA_MEMORY_IO
if only DMA_MEMORY_MAP were passed in) for success or zero for
failure.

Note, for DMA_MEMORY_IO returns, all subsequent memory returned by
dma_alloc_coherent() may no longer be accessed directly, but instead
must be accessed using the correct bus functions. If your driver
isn't prepared to handle this contingency, it should not specify
DMA_MEMORY_IO in the input flags.

As a simplification for the platforms, only *one* such region of
memory may be declared per device.

For reasons of efficiency, most platforms choose to track the declared
region only at the granularity of a page. For smaller allocations,
you should use the dma_pool() API.
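
A sketch of declaring a device-local SRAM region; "sram_phys",
"sram_dev_addr" and SRAM_SIZE are made-up values for illustration:

	int ret;

	ret = dma_declare_coherent_memory(dev, sram_phys, sram_dev_addr,
					  SRAM_SIZE,
					  DMA_MEMORY_MAP | DMA_MEMORY_EXCLUSIVE);
	if (ret != DMA_MEMORY_MAP)
		return -ENOMEM;

	/* dma_alloc_coherent() for this device now allocates from the SRAM */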

void
dma_release_declared_memory(struct device *dev)

Remove the memory region previously declared from the system. This
API performs *no* in-use checking for this region and will return
unconditionally having removed all the required structures. It is the
driver's job to ensure that no parts of this memory region are
currently in use.

void *
dma_mark_declared_memory_occupied(struct device *dev,
				  dma_addr_t device_addr, size_t size)

This is used to occupy specific regions of the declared space
(dma_alloc_coherent() will hand out the first free region it finds).

device_addr is the *device* address of the region requested.

size is the size (and should be a page-sized multiple).

The return value will be either a pointer to the processor virtual
address of the memory, or an error (via PTR_ERR()) if any part of the
region is occupied.
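
A sketch of reserving a fixed slot in the declared region
(MY_FIXED_DEV_ADDR is a made-up device address):

	void *va;

	va = dma_mark_declared_memory_occupied(dev, MY_FIXED_DEV_ADDR,
					       PAGE_SIZE);
	if (IS_ERR(va))
		return PTR_ERR(va);

	/* "va" is the CPU virtual address of the reserved slot */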

Part III - Debugging driver use of the DMA-API
----------------------------------------------

The DMA-API as described above has some constraints. For example, DMA
addresses must be released with the corresponding function and with the same
size they were mapped with. With the advent of hardware IOMMUs it becomes more
and more important that drivers do not violate those constraints. In the worst
case such a violation can result in data corruption up to destroyed
filesystems.

To debug drivers and find bugs in the usage of the DMA-API, checking code can
be compiled into the kernel which will tell the developer about those
violations. If your architecture supports it you can select the "Enable
debugging of DMA-API usage" option in your kernel configuration. Enabling this
option has a performance impact. Do not enable it in production kernels.

If you boot the resulting kernel, it will contain code which does some
bookkeeping about what DMA memory was allocated for which device. If this code
detects an error it prints a warning message with some details into your
kernel log. An example warning message may look like this:

------------[ cut here ]------------
WARNING: at /data2/repos/linux-2.6-iommu/lib/dma-debug.c:448
	check_unmap+0x203/0x490()
Hardware name:
forcedeth 0000:00:08.0: DMA-API: device driver frees DMA memory with wrong
	function [device address=0x00000000640444be] [size=66 bytes] [mapped as
	single] [unmapped as page]
Modules linked in: nfsd exportfs bridge stp llc r8169
Pid: 0, comm: swapper Tainted: G        W  2.6.28-dmatest-09289-g8bb99c0 #1
Call Trace:
<IRQ>  [<ffffffff80240b22>] warn_slowpath+0xf2/0x130
[<ffffffff80647b70>] _spin_unlock+0x10/0x30
[<ffffffff80537e75>] usb_hcd_link_urb_to_ep+0x75/0xc0
[<ffffffff80647c22>] _spin_unlock_irqrestore+0x12/0x40
[<ffffffff8055347f>] ohci_urb_enqueue+0x19f/0x7c0
[<ffffffff80252f96>] queue_work+0x56/0x60
[<ffffffff80237e10>] enqueue_task_fair+0x20/0x50
[<ffffffff80539279>] usb_hcd_submit_urb+0x379/0xbc0
[<ffffffff803b78c3>] cpumask_next_and+0x23/0x40
[<ffffffff80235177>] find_busiest_group+0x207/0x8a0
[<ffffffff8064784f>] _spin_lock_irqsave+0x1f/0x50
[<ffffffff803c7ea3>] check_unmap+0x203/0x490
[<ffffffff803c8259>] debug_dma_unmap_page+0x49/0x50
[<ffffffff80485f26>] nv_tx_done_optimized+0xc6/0x2c0
[<ffffffff80486c13>] nv_nic_irq_optimized+0x73/0x2b0
[<ffffffff8026df84>] handle_IRQ_event+0x34/0x70
[<ffffffff8026ffe9>] handle_edge_irq+0xc9/0x150
[<ffffffff8020e3ab>] do_IRQ+0xcb/0x1c0
[<ffffffff8020c093>] ret_from_intr+0x0/0xa
<EOI> <4>---[ end trace f6435a98e2a38c0e ]---

The driver developer can find the driver and the device including a stacktrace
of the DMA-API call which caused this warning.

By default only the first error will result in a warning message. All other
errors will only be counted silently. This limitation exists to prevent the
code from flooding your kernel log. To support debugging a device driver this
can be disabled via debugfs. See the debugfs interface documentation below for
details.

The debugfs directory for the DMA-API debugging code is called dma-api/. In
this directory the following files can currently be found:

	dma-api/all_errors	This file contains a numeric value. If this
				value is not equal to zero the debugging code
				will print a warning for every error it finds
				into the kernel log. Be careful with this
				option, as it can easily flood your logs.

	dma-api/disabled	This read-only file contains the character 'Y'
				if the debugging code is disabled. This can
				happen when it runs out of memory or if it was
				disabled at boot time.

	dma-api/error_count	This file is read-only and shows the total
				number of errors found.

	dma-api/num_errors	The number in this file shows how many
				warnings will be printed to the kernel log
				before it stops. This number is initialized to
				one at system boot and can be set by writing
				into this file.

	dma-api/min_free_entries
				This read-only file can be read to get the
				minimum number of free dma_debug_entries the
				allocator has ever seen. If this value goes
				down to zero the code will disable itself
				because it is no longer reliable.

	dma-api/num_free_entries
				The current number of free dma_debug_entries
				in the allocator.

	dma-api/driver-filter
				You can write a name of a driver into this file
				to limit the debug output to requests from that
				particular driver. Write an empty string to
				that file to disable the filter and see
				all errors again.

If you have this code compiled into your kernel it will be enabled by default.
If you want to boot without the bookkeeping anyway you can provide
'dma_debug=off' as a boot parameter. This will disable DMA-API debugging.
Notice that you cannot enable it again at runtime. You have to reboot to do
so.

If you want to see debug messages only for a special device driver you can
specify the dma_debug_driver=<drivername> parameter. This will enable the
driver filter at boot time. The debug code will only print errors for that
driver afterwards. This filter can be disabled or changed later using debugfs.

When the code disables itself at runtime this is most likely because it ran
out of dma_debug_entries. These entries are preallocated at boot. The number
of preallocated entries is defined per architecture. If it is too low for you,
boot with 'dma_debug_entries=<your_desired_number>' to override the
architectural default.

void debug_dma_mapping_error(struct device *dev, dma_addr_t dma_addr);

debug_dma_mapping_error() is the dma-debug interface for debugging drivers
that fail to check for DMA mapping errors on addresses returned by
dma_map_single() and dma_map_page(). This interface clears a flag set by
debug_dma_map_page() to indicate that dma_mapping_error() has been called by
the driver. When the driver does the unmap, debug_dma_unmap() checks the flag
and, if it is still set, prints a warning message that includes the call trace
leading up to the unmap. This interface can be called from dma_mapping_error()
routines to enable DMA mapping error check debugging.