=========================
Dynamic DMA mapping Guide
=========================

:Author: David S. Miller <davem@redhat.com>
:Author: Richard Henderson <rth@cygnus.com>
:Author: Jakub Jelinek <jakub@redhat.com>

This is a guide to device driver writers on how to use the DMA API
with example pseudo-code.  For a concise description of the API, see
Documentation/core-api/dma-api.rst.

CPU and DMA addresses
=====================

There are several kinds of addresses involved in the DMA API, and it's
important to understand the differences.

The kernel normally uses virtual addresses.  Any address returned by
kmalloc(), vmalloc(), and similar interfaces is a virtual address and can
be stored in a ``void *``.

The virtual memory system (TLB, page tables, etc.) translates virtual
addresses to CPU physical addresses, which are stored as "phys_addr_t" or
"resource_size_t".  The kernel manages device resources like registers as
physical addresses.  These are the addresses in /proc/iomem.  The physical
address is not directly useful to a driver; it must use ioremap() to map
the space and produce a virtual address.

I/O devices use a third kind of address: a "bus address".  If a device has
registers at an MMIO address, or if it performs DMA to read or write system
memory, the addresses used by the device are bus addresses.  In some
systems, bus addresses are identical to CPU physical addresses, but in
general they are not.  IOMMUs and host bridges can produce arbitrary
mappings between physical and bus addresses.

From a device's point of view, DMA uses the bus address space, but it may
be restricted to a subset of that space.  For example, even if a system
supports 64-bit addresses for main memory and PCI BARs, it may use an IOMMU
so devices only need to use 32-bit DMA addresses.

Here's a picture and some examples::

               CPU                  CPU                  Bus
             Virtual              Physical             Address
             Address              Address               Space
              Space                Space

            +-------+             +------+             +------+
            |       |             |MMIO  |   Offset    |      |
            |       |  Virtual    |Space |   applied   |      |
          C +-------+ --------> B +------+ ----------> +------+ A
            |       |  mapping    |      |   by host   |      |
  +-----+   |       |             |      |   bridge    |      |   +--------+
  |     |   |       |             +------+             |      |   |        |
  | CPU |   |       |             | RAM  |             |      |   | Device |
  |     |   |       |             |      |             |      |   |        |
  +-----+   +-------+             +------+             +------+   +--------+
            |       |  Virtual    |Buffer|   Mapping   |      |
          X +-------+ --------> Y +------+ <---------- +------+ Z
            |       |  mapping    | RAM  |   by IOMMU
            |       |             |      |
            |       |             |      |
            +-------+             +------+

During the enumeration process, the kernel learns about I/O devices and
their MMIO space and the host bridges that connect them to the system.  For
example, if a PCI device has a BAR, the kernel reads the bus address (A)
from the BAR and converts it to a CPU physical address (B).  The address B
is stored in a struct resource and usually exposed via /proc/iomem.  When a
driver claims a device, it typically uses ioremap() to map physical address
B at a virtual address (C).  It can then use, e.g., ioread32(C), to access
the device registers at bus address A.

If the device supports DMA, the driver sets up a buffer using kmalloc() or
a similar interface, which returns a virtual address (X).  The virtual
memory system maps X to a physical address (Y) in system RAM.  The driver
can use virtual address X to access the buffer, but the device itself
cannot because DMA doesn't go through the CPU virtual memory system.

In some simple systems, the device can do DMA directly to physical address
Y.  But in many others, there is IOMMU hardware that translates DMA
addresses to physical addresses, e.g., it translates Z to Y.  This is part
of the reason for the DMA API: the driver can give a virtual address X to
an interface like dma_map_single(), which sets up any required IOMMU
mapping and returns the DMA address Z.  The driver then tells the device to
do DMA to Z, and the IOMMU maps it to the buffer at address Y in system
RAM.
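
As a minimal sketch of that flow (error handling is covered later in
this document; ``dev``, ``BUF_SIZE`` and tell_device_to_dma() are made
up for illustration)::

        void *x = kmalloc(BUF_SIZE, GFP_KERNEL);        /* virtual address X */
        dma_addr_t z;

        /* Set up any needed IOMMU mapping and obtain DMA address Z. */
        z = dma_map_single(dev, x, BUF_SIZE, DMA_TO_DEVICE);

        /* Program Z into the device; it DMAs to the buffer at Y. */
        tell_device_to_dma(z, BUF_SIZE);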

For Linux to use dynamic DMA mapping, it needs some help from the
drivers: DMA addresses should be mapped only for the time they are
actually used and unmapped after the DMA transfer.

Of course, the following API will work even on platforms where no such
hardware exists.

Note that the DMA API works with any bus independent of the underlying
microprocessor architecture. You should use the DMA API rather than the
bus-specific DMA API, i.e., use the dma_map_*() interfaces rather than the
pci_map_*() interfaces.

First of all, you should make sure::

        #include <linux/dma-mapping.h>

is in your driver, which provides the definition of dma_addr_t.  This type
can hold any valid DMA address for the platform and should be used
everywhere you hold a DMA address returned from the DMA mapping functions.
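
For example, a driver typically keeps the DMA address next to the CPU
pointer in its private state (a sketch; the structure and field names
are made up)::

        struct mydev_buf {
                void            *cpu_addr;      /* for CPU accesses */
                dma_addr_t      dma_addr;       /* for programming the device */
                size_t          len;
        };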

What memory is DMA'able?
========================

The first piece of information you must know is what kernel memory can
be used with the DMA mapping facilities.  There has been an unwritten
set of rules regarding this, and this text is an attempt to finally
write them down.

If you acquired your memory via the page allocator
(i.e. __get_free_page*()) or the generic memory allocators
(i.e. kmalloc() or kmem_cache_alloc()) then you may DMA to/from
that memory using the addresses returned from those routines.
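
For example, this is valid (a sketch; ``dev`` and ``size`` are assumed
to exist)::

        void *addr = kmalloc(size, GFP_KERNEL);
        dma_addr_t dma_handle;

        if (!addr)
                return -ENOMEM;
        dma_handle = dma_map_single(dev, addr, size, DMA_TO_DEVICE);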

This means specifically that you may _not_ use the memory/addresses
returned from vmalloc() for DMA.  It is possible to DMA to the
_underlying_ memory mapped into a vmalloc() area, but this requires
walking page tables to get the physical addresses, and then
translating each of those pages back to a kernel address using
something like __va().  [ EDIT: Update this when we integrate
Gerd Knorr's generic code which does this. ]

This rule also means that you may use neither kernel image addresses
(items in data/text/bss segments), nor module image addresses, nor
stack addresses for DMA.  These could all be mapped somewhere entirely
different than the rest of physical memory.  Even if those classes of
memory could physically work with DMA, you'd need to ensure the I/O
buffers were cacheline-aligned.  Without that, you'd see cacheline
sharing problems (data corruption) on CPUs with DMA-incoherent caches.
(The CPU could write to one word, DMA would write to a different one
in the same cache line, and one of them could be overwritten.)

Also, this means that you cannot take the return of a kmap()
call and DMA to/from that.  This is similar to vmalloc().

What about block I/O and networking buffers?  The block I/O and
networking subsystems make sure that the buffers they use are valid
for you to DMA from/to.

DMA addressing capabilities
===========================

By default, the kernel assumes that your device can address 32 bits of DMA
address space.  For a 64-bit capable device, this needs to be increased, and
for a device with limitations, it needs to be decreased.

Special note about PCI: the PCI-X specification requires PCI-X devices to
support 64-bit addressing (DAC) for all transactions.  And at least one
platform (SGI SN2) requires 64-bit consistent allocations to operate
correctly when the IO bus is in PCI-X mode.

For correct operation, you must set the DMA mask to inform the kernel about
your device's DMA addressing capabilities.

This is performed via a call to dma_set_mask_and_coherent()::

        int dma_set_mask_and_coherent(struct device *dev, u64 mask);

which will set the mask for both streaming and coherent APIs together.  If you
have some special requirements, then the following two separate calls can be
used instead:

        The setup for streaming mappings is performed via a call to
        dma_set_mask()::

                int dma_set_mask(struct device *dev, u64 mask);

        The setup for consistent allocations is performed via a call
        to dma_set_coherent_mask()::

                int dma_set_coherent_mask(struct device *dev, u64 mask);

Here, dev is a pointer to the device struct of your device, and mask is a bit
mask describing which bits of an address your device supports.  Often the
device struct of your device is embedded in the bus-specific device struct of
your device.  For example, &pdev->dev is a pointer to the device struct of a
PCI device (pdev is a pointer to the PCI device struct of your device).

These calls usually return zero to indicate your device can perform DMA
properly on the machine given the address mask you provided, but they might
return an error if the mask is too small to be supportable on the given
system.  If it returns non-zero, your device cannot perform DMA properly on
this platform, and attempting to do so will result in undefined behavior.
You must not use DMA on this device unless the dma_set_mask family of
functions has returned success.

This means that in the failure case, you have two options:

1) Use some non-DMA mode for data transfer, if possible.
2) Ignore this device and do not initialize it.

It is recommended that your driver print a kernel KERN_WARNING message when
setting the DMA mask fails.  In this manner, if a user of your driver reports
that performance is bad or that the device is not even detected, you can ask
them for the kernel messages to find out exactly why.

A 24-bit addressing device would do something like this::

        if (dma_set_mask_and_coherent(dev, DMA_BIT_MASK(24))) {
                dev_warn(dev, "mydev: No suitable DMA available\n");
                goto ignore_this_device;
        }

The standard 64-bit addressing device would do something like this::

        dma_set_mask_and_coherent(dev, DMA_BIT_MASK(64));

dma_set_mask_and_coherent() never returns failure when given
DMA_BIT_MASK(64), so the fallback below is dead code::

        /* Wrong code */
        if (dma_set_mask_and_coherent(dev, DMA_BIT_MASK(64)))
                dma_set_mask_and_coherent(dev, DMA_BIT_MASK(32));

Because dma_set_mask_and_coherent() never returns failure for masks wider
than 32 bits, check the hardware capability instead::

        /* Recommended code */
        if (support_64bit)
                dma_set_mask_and_coherent(dev, DMA_BIT_MASK(64));
        else
                dma_set_mask_and_coherent(dev, DMA_BIT_MASK(32));

If the device only supports 32-bit addressing for descriptors in the
coherent allocations, but supports full 64 bits for streaming mappings,
it would look like this::

        if (dma_set_mask(dev, DMA_BIT_MASK(64))) {
                dev_warn(dev, "mydev: No suitable DMA available\n");
                goto ignore_this_device;
        }

The coherent mask can always be set to the same or a smaller mask than
the streaming mask.  However, for the rare case that a device driver only
uses consistent allocations, one would have to check the return value from
dma_set_coherent_mask().
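
For example, for the rare driver that only uses consistent allocations,
the check would look like this (a sketch in the style of the examples
above)::

        if (dma_set_coherent_mask(dev, DMA_BIT_MASK(32))) {
                dev_warn(dev, "mydev: No suitable coherent DMA available\n");
                goto ignore_this_device;
        }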

Finally, if your device can only drive the low 24 bits of
address, you might do something like::

        if (dma_set_mask(dev, DMA_BIT_MASK(24))) {
                dev_warn(dev, "mydev: 24-bit DMA addressing not available\n");
                goto ignore_this_device;
        }

When dma_set_mask() or dma_set_mask_and_coherent() is successful and
returns zero, the kernel saves away this mask you have provided.  The
kernel will use this information later when you make DMA mappings.

There is a case which we are aware of at this time, which is worth
mentioning in this documentation.  If your device supports multiple
functions (for example a sound card provides playback and record
functions) and the various different functions have _different_
DMA addressing limitations, you may wish to probe each mask and
only provide the functionality which the machine can handle.  It
is important that the last call to dma_set_mask() be for the
most specific mask.

Here is pseudo-code showing how this might be done::

        #define PLAYBACK_ADDRESS_BITS   DMA_BIT_MASK(32)
        #define RECORD_ADDRESS_BITS     DMA_BIT_MASK(24)

        struct my_sound_card *card;
        struct device *dev;

        ...
        if (!dma_set_mask(dev, PLAYBACK_ADDRESS_BITS)) {
                card->playback_enabled = 1;
        } else {
                card->playback_enabled = 0;
                dev_warn(dev, "%s: Playback disabled due to DMA limitations\n",
                       card->name);
        }
        if (!dma_set_mask(dev, RECORD_ADDRESS_BITS)) {
                card->record_enabled = 1;
        } else {
                card->record_enabled = 0;
                dev_warn(dev, "%s: Record disabled due to DMA limitations\n",
                       card->name);
        }

A sound card was used as an example here because this genre of PCI
devices seems to be littered with ISA chips given a PCI front end,
thus retaining the 16MB DMA addressing limitation of ISA.

Types of DMA mappings
=====================

There are two types of DMA mappings:

- Consistent DMA mappings which are usually mapped at driver
  initialization, unmapped at the end and for which the hardware should
  guarantee that the device and the CPU can access the data
  in parallel and will see updates made by each other without any
  explicit software flushing.

  Think of "consistent" as "synchronous" or "coherent".

  The current default is to return consistent memory in the low 32
  bits of the DMA space.  However, for future compatibility you should
  set the consistent mask even if this default is fine for your
  driver.

  Good examples of what to use consistent mappings for are:

        - Network card DMA ring descriptors.
        - SCSI adapter mailbox command data structures.
        - Device firmware microcode executed out of
          main memory.

  The invariant these examples all require is that any CPU store
  to memory is immediately visible to the device, and vice
  versa.  Consistent mappings guarantee this.

  .. important::

             Consistent DMA memory does not preclude the usage of
             proper memory barriers.  The CPU may reorder stores to
             consistent memory just as it may reorder stores to normal
             memory.  Example: if it is important for the device to see
             the first word of a descriptor updated before the second,
             you must do something like::

                desc->word0 = address;
                wmb();
                desc->word1 = DESC_VALID;

             in order to get correct behavior on all platforms.

             Also, on some platforms your driver may need to flush CPU write
             buffers in much the same way as it needs to flush write buffers
             found in PCI bridges (such as by reading a register's value
             after writing it).

- Streaming DMA mappings which are usually mapped for one DMA
  transfer, unmapped right after it (unless you use dma_sync_* below)
  and for which hardware can optimize for sequential accesses.

  Think of "streaming" as "asynchronous" or "outside the coherency
  domain".

  Good examples of what to use streaming mappings for are:

        - Networking buffers transmitted/received by a device.
        - Filesystem buffers written/read by a SCSI device.

  The interfaces for using this type of mapping were designed in
  such a way that an implementation can make whatever performance
  optimizations the hardware allows.  To this end, when using
  such mappings you must be explicit about what you want to happen.

Neither type of DMA mapping has alignment restrictions that come from
the underlying bus, although some devices may have such restrictions.
Also, systems with caches that aren't DMA-coherent will work better
when the underlying buffers don't share cache lines with other data.
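
For example, a buffer embedded in a larger structure can be kept on its
own cache lines like this (a sketch; kmalloc()'ed buffers are already
DMA-safe on their own, so this matters mainly for embedded buffers)::

        #include <linux/cache.h>

        struct mydev_state {
                spinlock_t lock;
                /* placed last and cacheline-aligned so the DMA'd area
                 * does not share cache lines with the fields above */
                u8 rx_buf[RX_BUF_SIZE] ____cacheline_aligned;
        };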


Using Consistent DMA mappings
=============================

To allocate and map large (PAGE_SIZE or so) consistent DMA regions,
you should do::

        dma_addr_t dma_handle;

        cpu_addr = dma_alloc_coherent(dev, size, &dma_handle, gfp);

where dev is a ``struct device *``. This may be called in interrupt
context with the GFP_ATOMIC flag.

Size is the length of the region you want to allocate, in bytes.

This routine will allocate RAM for that region, so it acts similarly to
__get_free_pages() (but takes size instead of a page order).  If your
driver needs regions sized smaller than a page, you may prefer using
the dma_pool interface, described below.

The consistent DMA mapping interfaces will by default return a DMA address
which is 32-bit addressable.  Even if the device indicates (via the DMA mask)
that it may address the upper 32 bits, consistent allocation will only
return > 32-bit addresses for DMA if the consistent DMA mask has been
explicitly changed via dma_set_coherent_mask().  This is true of the
dma_pool interface as well.

dma_alloc_coherent() returns two values: the virtual address which you
can use to access it from the CPU and the dma_handle which you pass to
the card.

The CPU virtual address and the DMA address are both
guaranteed to be aligned to the smallest PAGE_SIZE order which
is greater than or equal to the requested size.  This invariant
exists (for example) to guarantee that if you allocate a chunk
which is smaller than or equal to 64 kilobytes, the extent of the
buffer you receive will not cross a 64K boundary.

To unmap and free such a DMA region, you call::

        dma_free_coherent(dev, size, cpu_addr, dma_handle);

where dev, size are the same as in the above call and cpu_addr and
dma_handle are the values dma_alloc_coherent() returned to you.
This function may not be called in interrupt context.
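
Putting these together, a sketch for a hypothetical device with a
descriptor ring (RING_BYTES and the mydev fields are made up)::

        /* in probe(), after the DMA mask has been set */
        mydev->ring = dma_alloc_coherent(dev, RING_BYTES,
                                         &mydev->ring_dma, GFP_KERNEL);
        if (!mydev->ring)
                return -ENOMEM;

        /* ... hand mydev->ring_dma to the device ... */

        /* in remove(), once the device has stopped using the ring */
        dma_free_coherent(dev, RING_BYTES, mydev->ring, mydev->ring_dma);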

If your driver needs lots of smaller memory regions, you can write
custom code to subdivide pages returned by dma_alloc_coherent(),
or you can use the dma_pool API to do that.  A dma_pool is like
a kmem_cache, but it uses dma_alloc_coherent(), not __get_free_pages().
Also, it understands common hardware constraints for alignment,
like queue heads needing to be aligned on N byte boundaries.

Create a dma_pool like this::

        struct dma_pool *pool;

        pool = dma_pool_create(name, dev, size, align, boundary);

The "name" is for diagnostics (like a kmem_cache name); dev and size
are as above.  The device's hardware alignment requirement for this
type of data is "align" (which is expressed in bytes, and must be a
power of two).  If your device has no boundary crossing restrictions,
pass 0 for boundary; passing 4096 says memory allocated from this pool
must not cross 4KByte boundaries (but in that case it may be better to
use dma_alloc_coherent() directly instead).

Allocate memory from a DMA pool like this::

        cpu_addr = dma_pool_alloc(pool, flags, &dma_handle);

flags are GFP_KERNEL if blocking is permitted (not in_interrupt nor
holding SMP locks), GFP_ATOMIC otherwise.  Like dma_alloc_coherent(),
this returns two values, cpu_addr and dma_handle.

Free memory that was allocated from a dma_pool like this::

        dma_pool_free(pool, cpu_addr, dma_handle);

where pool is what you passed to dma_pool_alloc(), and cpu_addr and
dma_handle are the values dma_pool_alloc() returned. This function
may be called in interrupt context.

Destroy a dma_pool by calling::

        dma_pool_destroy(pool);

Make sure you've called dma_pool_free() for all memory allocated
from a pool before you destroy the pool. This function may not
be called in interrupt context.
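
A combined sketch of the whole pool lifecycle (DESC_SIZE and DESC_ALIGN
are made up; error checking trimmed)::

        struct dma_pool *pool;
        dma_addr_t dma_handle;
        void *cpu_addr;

        pool = dma_pool_create("mydev_descs", dev, DESC_SIZE, DESC_ALIGN, 0);
        if (!pool)
                return -ENOMEM;

        cpu_addr = dma_pool_alloc(pool, GFP_KERNEL, &dma_handle);
        /* ... use cpu_addr from the CPU, hand dma_handle to the device ... */
        dma_pool_free(pool, cpu_addr, dma_handle);

        dma_pool_destroy(pool);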

DMA Direction
=============

The interfaces described in subsequent portions of this document
take a DMA direction argument, which is an integer and takes on
one of the following values::

 DMA_BIDIRECTIONAL
 DMA_TO_DEVICE
 DMA_FROM_DEVICE
 DMA_NONE

You should provide the exact DMA direction if you know it.

DMA_TO_DEVICE means "from main memory to the device";
DMA_FROM_DEVICE means "from the device to main memory".
It is the direction in which the data moves during the DMA
transfer.

You are _strongly_ encouraged to specify this as precisely
as you possibly can.

If you absolutely cannot know the direction of the DMA transfer,
specify DMA_BIDIRECTIONAL.  It means that the DMA can go in
either direction.  The platform guarantees that you may legally
specify this, and that it will work, but this may be at the
cost of performance, for example.

The value DMA_NONE is to be used for debugging.  You can
hold this in a data structure before you come to know the
precise direction, and this will help catch cases where your
direction tracking logic has failed to set things up properly.

Another advantage of specifying this value precisely (beyond
potential platform-specific optimizations) is debugging.
Some platforms actually have a write permission boolean which DMA
mappings can be marked with, much like page protections in the user
program address space.  Such platforms can and do report errors in the
kernel logs when the DMA controller hardware detects violation of the
permission setting.

Only streaming mappings specify a direction; consistent mappings
implicitly have a direction attribute of DMA_BIDIRECTIONAL.

The SCSI subsystem tells you the direction to use in the
'sc_data_direction' member of the SCSI command your driver is
working on.

For networking drivers, it's a rather simple affair.  For transmit
packets, map/unmap them with the DMA_TO_DEVICE direction
specifier.  For receive packets, just the opposite, map/unmap them
with the DMA_FROM_DEVICE direction specifier.

Using Streaming DMA mappings
============================

The streaming DMA mapping routines can be called from interrupt
context.  There are two versions of each map/unmap, one which will
map/unmap a single memory region, and one which will map/unmap a
scatterlist.

To map a single region, you do::

        struct device *dev = &my_dev->dev;
        dma_addr_t dma_handle;
        void *addr = buffer->ptr;
        size_t size = buffer->len;

        dma_handle = dma_map_single(dev, addr, size, direction);
        if (dma_mapping_error(dev, dma_handle)) {
                /*
                 * reduce current DMA mapping usage,
                 * delay and try again later or
                 * reset driver.
                 */
                goto map_error_handling;
        }

and to unmap it::

        dma_unmap_single(dev, dma_handle, size, direction);

You should call dma_mapping_error() as dma_map_single() could fail and
return an error.  Doing so will ensure that the mapping code will work
correctly on all DMA implementations without any dependency on the specifics
of the underlying implementation. Using the returned address without
checking for errors could result in failures ranging from panics to silent
data corruption.  The same applies to dma_map_page() as well.

You should call dma_unmap_single() when the DMA activity is finished, e.g.,
from the interrupt which told you that the DMA transfer is done.

Using CPU pointers like this for single mappings has a disadvantage:
you cannot reference HIGHMEM memory in this way.  Thus, there is a
map/unmap interface pair akin to dma_{map,unmap}_single().  These
interfaces deal with page/offset pairs instead of CPU pointers.
Specifically::

        struct device *dev = &my_dev->dev;
        dma_addr_t dma_handle;
        struct page *page = buffer->page;
        unsigned long offset = buffer->offset;
        size_t size = buffer->len;

        dma_handle = dma_map_page(dev, page, offset, size, direction);
        if (dma_mapping_error(dev, dma_handle)) {
                /*
                 * reduce current DMA mapping usage,
                 * delay and try again later or
                 * reset driver.
                 */
                goto map_error_handling;
        }

        ...

        dma_unmap_page(dev, dma_handle, size, direction);

Here, "offset" means byte offset within the given page.

You should call dma_mapping_error() as dma_map_page() could fail and return
an error, as outlined under the dma_map_single() discussion.

You should call dma_unmap_page() when the DMA activity is finished, e.g.,
from the interrupt which told you that the DMA transfer is done.

With scatterlists, you map a region gathered from several regions by::

        struct scatterlist *sg;
        int i, count;

        count = dma_map_sg(dev, sglist, nents, direction);
        for_each_sg(sglist, sg, count, i) {
                hw_address[i] = sg_dma_address(sg);
                hw_len[i] = sg_dma_len(sg);
        }

where nents is the number of entries in the sglist.

The implementation is free to merge several consecutive sglist entries
into one (e.g. if DMA mapping is done with PAGE_SIZE granularity, any
consecutive sglist entries can be merged into one provided the first one
ends and the second one starts on a page boundary - in fact this is a huge
advantage for cards which either cannot do scatter-gather or have a very
limited number of scatter-gather entries) and returns the actual number
of sg entries it mapped them to.  On failure, 0 is returned.

Then you should loop count times (note: this can be less than nents times)
and use sg_dma_address() and sg_dma_len() macros where you previously
accessed sg->address and sg->length as shown above.

To unmap a scatterlist, just call::

        dma_unmap_sg(dev, sglist, nents, direction);

Again, make sure DMA activity has already finished.

.. note::

        The 'nents' argument to the dma_unmap_sg call must be
        the _same_ one you passed into the dma_map_sg call,
        it should _NOT_ be the 'count' value _returned_ from the
        dma_map_sg call.

Every dma_map_{single,sg}() call should have its dma_unmap_{single,sg}()
counterpart, because the DMA address space is a shared resource and
you could render the machine unusable by consuming all DMA addresses.

If you need to use the same streaming DMA region multiple times and touch
the data in between the DMA transfers, the buffer needs to be synced
properly in order for the CPU and device to see the most up-to-date and
correct copy of the DMA buffer.

So, firstly, just map it with dma_map_{single,sg}(), and after each DMA
transfer call either::

        dma_sync_single_for_cpu(dev, dma_handle, size, direction);

or::

        dma_sync_sg_for_cpu(dev, sglist, nents, direction);

as appropriate.

Then, if you wish to let the device get at the DMA area again,
finish accessing the data with the CPU, and then before actually
giving the buffer to the hardware call either::

        dma_sync_single_for_device(dev, dma_handle, size, direction);

or::

        dma_sync_sg_for_device(dev, sglist, nents, direction);

as appropriate.

.. note::

              The 'nents' argument to dma_sync_sg_for_cpu() and
              dma_sync_sg_for_device() must be the same passed to
              dma_map_sg(). It is _NOT_ the count returned by
              dma_map_sg().

After the last DMA transfer call one of the DMA unmap routines
dma_unmap_{single,sg}(). If you don't touch the data from the first
dma_map_*() call till dma_unmap_*(), then you don't have to call the
dma_sync_*() routines at all.

Here is pseudo code which shows a situation in which you would need
to use the dma_sync_*() interfaces::

        my_card_setup_receive_buffer(struct my_card *cp, char *buffer, int len)
        {
                dma_addr_t mapping;

                mapping = dma_map_single(cp->dev, buffer, len, DMA_FROM_DEVICE);
                if (dma_mapping_error(cp->dev, mapping)) {
                        /*
                         * reduce current DMA mapping usage,
                         * delay and try again later or
                         * reset driver.
                         */
                        goto map_error_handling;
                }

                cp->rx_buf = buffer;
                cp->rx_len = len;
                cp->rx_dma = mapping;

                give_rx_buf_to_card(cp);
        }

        ...

        my_card_interrupt_handler(int irq, void *devid, struct pt_regs *regs)
        {
                struct my_card *cp = devid;

                ...
                if (read_card_status(cp) == RX_BUF_TRANSFERRED) {
                        struct my_card_header *hp;

                        /* Examine the header to see if we wish
                         * to accept the data.  But synchronize
                         * the DMA transfer with the CPU first
                         * so that we see updated contents.
                         */
                        dma_sync_single_for_cpu(cp->dev, cp->rx_dma,
                                                cp->rx_len,
                                                DMA_FROM_DEVICE);

                        /* Now it is safe to examine the buffer. */
                        hp = (struct my_card_header *) cp->rx_buf;
                        if (header_is_ok(hp)) {
                                dma_unmap_single(cp->dev, cp->rx_dma, cp->rx_len,
                                                 DMA_FROM_DEVICE);
                                pass_to_upper_layers(cp->rx_buf);
                                make_and_setup_new_rx_buf(cp);
                        } else {
                                /* CPU should not write to
                                 * DMA_FROM_DEVICE-mapped area,
                                 * so dma_sync_single_for_device() is
                                 * not needed here. It would be required
                                 * for DMA_BIDIRECTIONAL mapping if
                                 * the memory was modified.
                                 */
                                give_rx_buf_to_card(cp);
                        }
                }
        }

Handling Errors
===============

DMA address space is limited on some architectures and an allocation
failure can be determined by:

- checking if dma_alloc_coherent() returns NULL or dma_map_sg() returns 0

- checking the dma_addr_t returned from dma_map_single() and dma_map_page()
  by using dma_mapping_error()::

        dma_addr_t dma_handle;

        dma_handle = dma_map_single(dev, addr, size, direction);
        if (dma_mapping_error(dev, dma_handle)) {
                /*
                 * reduce current DMA mapping usage,
                 * delay and try again later or
                 * reset driver.
                 */
                goto map_error_handling;
        }

- unmapping pages that are already mapped, when a mapping error occurs in
  the middle of a multi-page mapping attempt.  These examples are applicable
  to dma_map_page() as well.

Example 1::

        dma_addr_t dma_handle1;
        dma_addr_t dma_handle2;

        dma_handle1 = dma_map_single(dev, addr, size, direction);
        if (dma_mapping_error(dev, dma_handle1)) {
                /*
                 * reduce current DMA mapping usage,
                 * delay and try again later or
                 * reset driver.
                 */
                goto map_error_handling1;
        }
        dma_handle2 = dma_map_single(dev, addr, size, direction);
        if (dma_mapping_error(dev, dma_handle2)) {
                /*
                 * reduce current DMA mapping usage,
                 * delay and try again later or
                 * reset driver.
                 */
                goto map_error_handling2;
        }

        ...

        map_error_handling2:
                dma_unmap_single(dev, dma_handle1, size, direction);
        map_error_handling1:

Example 2::

        /*
         * if buffers are allocated in a loop, unmap all mapped buffers when
         * mapping error is detected in the middle
         */

        dma_addr_t dma_addr;
        dma_addr_t array[DMA_BUFFERS];
        int save_index = 0;

        for (i = 0; i < DMA_BUFFERS; i++) {

                ...

                dma_addr = dma_map_single(dev, addr, size, direction);
                if (dma_mapping_error(dev, dma_addr)) {
                        /*
                         * reduce current DMA mapping usage,
                         * delay and try again later or
                         * reset driver.
                         */
                        goto map_error_handling;
                }
                array[i] = dma_addr;
                save_index++;
        }

        ...

        map_error_handling:

        for (i = 0; i < save_index; i++) {

                ...

                dma_unmap_single(dev, array[i], size, direction);
        }

Networking drivers must call dev_kfree_skb() to free the socket buffer
and return NETDEV_TX_OK if the DMA mapping fails on the transmit hook
(ndo_start_xmit). This means that the socket buffer is just dropped in
the failure case.
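
A sketch of this pattern in a hypothetical ndo_start_xmit implementation
(the private structure and its fields are made up)::

        static netdev_tx_t mydev_start_xmit(struct sk_buff *skb,
                                            struct net_device *netdev)
        {
                struct mydev_priv *priv = netdev_priv(netdev);
                dma_addr_t mapping;

                mapping = dma_map_single(priv->dev, skb->data, skb->len,
                                         DMA_TO_DEVICE);
                if (dma_mapping_error(priv->dev, mapping)) {
                        /* drop the packet, but tell the stack we are OK */
                        dev_kfree_skb(skb);
                        return NETDEV_TX_OK;
                }

                /* ... hand mapping and skb->len to the hardware ... */
                return NETDEV_TX_OK;
        }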

SCSI drivers must return SCSI_MLQUEUE_HOST_BUSY if the DMA mapping
fails in the queuecommand hook. This means that the SCSI subsystem
passes the command to the driver again later.
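
A sketch of the corresponding queuecommand handling (hypothetical
driver; scsi_dma_map() maps the command's scatterlist and returns a
negative value on failure)::

        static int mydev_queuecommand(struct Scsi_Host *host,
                                      struct scsi_cmnd *cmd)
        {
                int nents = scsi_dma_map(cmd);

                if (nents < 0)
                        return SCSI_MLQUEUE_HOST_BUSY;

                /* ... build the request from scsi_sglist(cmd) ... */
                return 0;
        }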

Optimizing Unmap State Space Consumption
========================================

On many platforms, dma_unmap_{single,page}() is simply a nop.
Therefore, keeping track of the mapping address and length is a waste
of space.  Instead of filling your drivers up with ifdefs and the like
to "work around" this (which would defeat the whole purpose of a
portable API), the following facilities are provided.

Actually, instead of describing the macros one by one, we'll
transform some example code.

1) Use DEFINE_DMA_UNMAP_{ADDR,LEN} in state saving structures.
   Example, before::

        struct ring_state {
                struct sk_buff *skb;
                dma_addr_t mapping;
                __u32 len;
        };

   after::

        struct ring_state {
                struct sk_buff *skb;
                DEFINE_DMA_UNMAP_ADDR(mapping);
                DEFINE_DMA_UNMAP_LEN(len);
        };

2) Use dma_unmap_{addr,len}_set() to set these values.
   Example, before::

        ringp->mapping = FOO;
        ringp->len = BAR;

   after::

        dma_unmap_addr_set(ringp, mapping, FOO);
        dma_unmap_len_set(ringp, len, BAR);

3) Use dma_unmap_{addr,len}() to access these values.
   Example, before::

        dma_unmap_single(dev, ringp->mapping, ringp->len,
                         DMA_FROM_DEVICE);

   after::

        dma_unmap_single(dev,
                         dma_unmap_addr(ringp, mapping),
                         dma_unmap_len(ringp, len),
                         DMA_FROM_DEVICE);

It really should be self-explanatory.  We treat the ADDR and LEN
separately, because it is possible for an implementation to only
need the address in order to perform the unmap operation.

Platform Issues
===============

If you are just writing drivers for Linux and do not maintain
an architecture port for the kernel, you can safely skip down
to "Closing".

1) Struct scatterlist requirements.

   You need to enable CONFIG_NEED_SG_DMA_LENGTH if the architecture
   supports IOMMUs (including software IOMMU).

2) ARCH_DMA_MINALIGN

   Architectures must ensure that a kmalloc'ed buffer is
   DMA-safe.  Drivers and subsystems depend on it.  If an architecture
   isn't fully DMA-coherent (i.e. hardware doesn't ensure that data in
   the CPU cache is identical to data in main memory),
   ARCH_DMA_MINALIGN must be set so that the memory allocator
   makes sure that a kmalloc'ed buffer doesn't share a cache line with
   others.  See arch/arm/include/asm/cache.h as an example.

   Note that ARCH_DMA_MINALIGN is about DMA memory alignment
   constraints.  You don't need to worry about the architecture data
   alignment constraints (e.g. the alignment constraints about 64-bit
   objects).
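
   For example, arm defines it in terms of its L1 cache line size in
   arch/arm/include/asm/cache.h::

        #define ARCH_DMA_MINALIGN       L1_CACHE_BYTES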

Closing
=======

This document, and the API itself, would not be in its current
form without the feedback and suggestions from numerous individuals.
We would like to specifically mention, in no particular order, the
following people::

        Russell King <rmk@arm.linux.org.uk>
        Leo Dagum <dagum@barrel.engr.sgi.com>
        Ralf Baechle <ralf@oss.sgi.com>
        Grant Grundler <grundler@cup.hp.com>
        Jay Estabrook <Jay.Estabrook@compaq.com>
        Thomas Sailer <sailer@ife.ee.ethz.ch>
        Andrea Arcangeli <andrea@suse.de>
        Jens Axboe <jens.axboe@oracle.com>
        David Mosberger-Tang <davidm@hpl.hp.com>
