.. Copyright 2001 Matthew Wilcox
..
..     This documentation is free software; you can redistribute
..     it and/or modify it under the terms of the GNU General Public
..     License as published by the Free Software Foundation; either
..     version 2 of the License, or (at your option) any later
..     version.

===============================
Bus-Independent Device Accesses
===============================

:Author: Matthew Wilcox
:Author: Alan Cox

Introduction
============

Linux provides an API which abstracts performing IO across all busses
and devices, allowing device drivers to be written independently of bus
type.

Memory Mapped IO
================

Getting Access to the Device
----------------------------

The most widely supported form of IO is memory mapped IO. That is, a
part of the CPU's address space is interpreted not as accesses to
memory, but as accesses to a device. Some architectures define devices
to be at a fixed address, but most have some method of discovering
devices. The PCI bus walk is a good example of such a scheme. This
document does not cover how to receive such an address, but assumes you
are starting with one. Physical addresses are of type unsigned long.

This address should not be used directly. Instead, to get an address
suitable for passing to the accessor functions described below, you
should call ioremap(). An address suitable for accessing
the device will be returned to you.

After you've finished using the device (say, in your module's exit
routine), call iounmap() in order to return the address
space to the kernel. Most architectures allocate new address space each
time you call ioremap(), and they can run out unless you
call iounmap().
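
As a minimal sketch of this lifecycle, the fragment below maps a register
window in a setup path and unmaps it again on teardown. The ``struct
foo_device`` container and the origin of the physical address and size are
hypothetical; they would normally come from bus discovery (e.g. a PCI BAR or a
device tree ``reg`` property)::

    #include <linux/io.h>
    #include <linux/errno.h>

    struct foo_device {
            void __iomem *regs;     /* token returned by ioremap() */
    };

    static int foo_map_registers(struct foo_device *foo,
                                 unsigned long phys, unsigned long size)
    {
            /* Turn the discovered physical address into an __iomem token. */
            foo->regs = ioremap(phys, size);
            if (!foo->regs)
                    return -ENOMEM;
            return 0;
    }

    static void foo_unmap_registers(struct foo_device *foo)
    {
            /* Return the virtual address space to the kernel. */
            iounmap(foo->regs);
            foo->regs = NULL;
    }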
Accessing the device
--------------------

The part of the interface most used by drivers is reading and writing
memory-mapped registers on the device. Linux provides interfaces to read
and write 8-bit, 16-bit, 32-bit and 64-bit quantities. Due to a
historical accident, these are named byte, word, long and quad accesses.
Both read and write accesses are supported; there is no prefetch support
at this time.

The functions are named readb(), readw(), readl(), readq(),
readb_relaxed(), readw_relaxed(), readl_relaxed(), readq_relaxed(),
writeb(), writew(), writel() and writeq().

Some devices (such as framebuffers) would like to use larger transfers than
8 bytes at a time. For these devices, the memcpy_toio(),
memcpy_fromio() and memset_io() functions are
provided. Do not use memset or memcpy on IO addresses; they are not
guaranteed to copy data in order.
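
As a sketch of how these accessors are typically combined, the fragment below
writes a control register, copies a block of data out of a mapped buffer, and
reads back a status register. The ``FOO_*`` offsets and the layout of the
hypothetical device are invented for illustration::

    #include <linux/io.h>
    #include <linux/types.h>

    #define FOO_CTRL        0x00    /* hypothetical control register */
    #define FOO_STATUS      0x04    /* hypothetical status register */
    #define FOO_DATA        0x100   /* hypothetical data window */

    static u32 foo_start_and_fetch(void __iomem *regs, void *buf, size_t len)
    {
            /* 32-bit ("long") write to the control register */
            writel(0x1, regs + FOO_CTRL);

            /* Copy a block out of device memory; do not use memcpy() here. */
            memcpy_fromio(buf, regs + FOO_DATA, len);

            /* 32-bit read of the status register */
            return readl(regs + FOO_STATUS);
    }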
The read and write functions are defined to be ordered. That is, the
compiler is not permitted to reorder the I/O sequence. When the ordering
can be compiler optimised, you can use __readb() and friends to
indicate the relaxed ordering. Use this with care.

While the basic functions are defined to be synchronous with respect to
each other and ordered with respect to each other, the busses the devices
sit on may themselves have asynchronicity. In particular many authors
are burned by the fact that PCI bus writes are posted asynchronously. A
driver author must issue a read from the same device to ensure that
writes have occurred in the specific cases the author cares about. This kind
of property cannot be hidden from driver writers in the API. In some
cases, the read used to flush the device may be expected to fail (if the
card is resetting, for example). In that case, the read should be done
from config space, which is guaranteed to soft-fail if the card doesn't
respond.

The following is an example of flushing a write to a device when the
driver would like to ensure the write's effects are visible prior to
continuing execution::

    static inline void
    qla1280_disable_intrs(struct scsi_qla_host *ha)
    {
            struct device_reg *reg;

            reg = ha->iobase;
            /* disable risc and host interrupts */
            WRT_REG_WORD(&reg->ictrl, 0);
            /*
             * The following read will ensure that the above write
             * has been received by the device before we return from this
             * function.
             */
            RD_REG_WORD(&reg->ictrl);
            ha->flags.ints_enabled = 0;
    }

PCI ordering rules also guarantee that PIO read responses arrive after any
outstanding DMA writes from that bus, since for some devices the result of
a readb() call may signal to the driver that a DMA transaction is
complete. In many cases, however, the driver may want to indicate that the
next readb() call has no relation to any previous DMA writes
performed by the device. The driver can use readb_relaxed() for
these cases, although only some platforms will honor the relaxed
semantics. Using the relaxed read functions will provide significant
performance benefits on platforms that support it. The qla2xxx driver
provides examples of how to use readX_relaxed(). In many cases, a majority
of the driver's readX() calls can safely be converted to readX_relaxed()
calls, since only a few will indicate or depend on DMA completion.
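
As a hedged sketch of that conversion, a driver might keep a non-relaxed
readl() only for the status register whose value implies DMA completion, while
an unrelated data-path read uses readl_relaxed(). The register offsets and the
``FOO_STAT_DMA_DONE`` bit below are hypothetical::

    #include <linux/io.h>
    #include <linux/bits.h>
    #include <linux/types.h>

    #define FOO_COUNTER             0x08    /* hypothetical, unrelated to DMA */
    #define FOO_STATUS              0x04    /* hypothetical DMA status register */
    #define FOO_STAT_DMA_DONE       BIT(0)

    static bool foo_poll(void __iomem *regs, u32 *counter)
    {
            /*
             * This value has no relation to outstanding DMA writes, so the
             * cheaper relaxed accessor is sufficient here.
             */
            *counter = readl_relaxed(regs + FOO_COUNTER);

            /*
             * This read tells us whether a DMA transfer has completed, so it
             * must keep the full readl() ordering against DMA.
             */
            return readl(regs + FOO_STATUS) & FOO_STAT_DMA_DONE;
    }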
Port Space Accesses
===================

Port Space Explained
--------------------

Another form of IO commonly supported is Port Space. This is a range of
addresses separate to the normal memory address space. Access to these
addresses is generally not as fast as accesses to the memory mapped
addresses, and it also has a potentially smaller address space.

Unlike memory mapped IO, no preparation is required to access port
space.

Accessing Port Space
--------------------

Accesses to this space are provided through a set of functions which
allow 8-bit, 16-bit and 32-bit accesses; also known as byte, word and
long. These functions are inb(), inw(), inl(), outb(), outw() and
outl().

Some variants are provided for these functions. Some devices require
that accesses to their ports are slowed down. This functionality is
provided by appending a ``_p`` to the end of the function.
There are also equivalents to memcpy. The ins() and
outs() functions copy bytes, words or longs to the given
port.
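
For illustration only, the sketch below pokes a hypothetical device behind an
I/O port base passed in by the caller; the register offsets, the use of the
slowed ``_p`` variant, and the FIFO drained with insb() are all invented::

    #include <linux/io.h>
    #include <linux/types.h>

    #define FOO_PORT_CMD    0x0     /* hypothetical command register */
    #define FOO_PORT_STAT   0x1     /* hypothetical status register */
    #define FOO_PORT_FIFO   0x4     /* hypothetical data FIFO */

    static u8 foo_port_io(unsigned long base, void *buf, unsigned int len)
    {
            /* Byte write, using the slowed-down variant for a slow device. */
            outb_p(0x10, base + FOO_PORT_CMD);

            /* Drain 'len' bytes from the FIFO into kernel memory. */
            insb(base + FOO_PORT_FIFO, buf, len);

            /* Byte read of the status register. */
            return inb(base + FOO_PORT_STAT);
    }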
__iomem pointer tokens
======================

The data type for an MMIO address is an ``__iomem`` qualified pointer, such as
``void __iomem *reg``. On most architectures it is a regular pointer that
points to a virtual memory address and can be offset or dereferenced, but in
portable code, it must only be passed from and to functions that explicitly
operate on an ``__iomem`` token, in particular the ioremap() and
readl()/writel() functions. The 'sparse' semantic code checker can be used to
verify that this is done correctly.

While on most architectures, ioremap() creates a page table entry for an
uncached virtual address pointing to the physical MMIO address, some
architectures require special instructions for MMIO, and the ``__iomem`` pointer
just encodes the physical address or an offsettable cookie that is interpreted
by readl()/writel().
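
The following minimal sketch shows the annotation in practice; passing
``foo->regs`` to a function lacking the ``__iomem`` qualifier, or dereferencing
it directly, is the kind of misuse 'sparse' would flag. The structure and
register offset are hypothetical::

    #include <linux/io.h>
    #include <linux/types.h>

    struct foo_device {
            void __iomem *regs;     /* MMIO token, never dereferenced directly */
    };

    /* Correct: the parameter carries the __iomem qualifier. */
    static u32 foo_read_id(void __iomem *regs)
    {
            return readl(regs + 0x0);
    }

    static u32 foo_get_id(struct foo_device *foo)
    {
            return foo_read_id(foo->regs);
    }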
Differences between I/O access functions
========================================

readq(), readl(), readw(), readb(), writeq(), writel(), writew(), writeb()

  These are the most generic accessors, providing serialization against other
  MMIO accesses and DMA accesses as well as fixed endianness for accessing
  little-endian PCI devices and on-chip peripherals. Portable device drivers
  should generally use these for any access to ``__iomem`` pointers.

  Note that posted writes are not strictly ordered against a spinlock, see
  Documentation/driver-api/io_ordering.rst.

readq_relaxed(), readl_relaxed(), readw_relaxed(), readb_relaxed(),
writeq_relaxed(), writel_relaxed(), writew_relaxed(), writeb_relaxed()

  On architectures that require an expensive barrier for serializing against
  DMA, these "relaxed" versions of the MMIO accessors only serialize against
  each other, but contain a less expensive barrier operation. A device driver
  might use these in a particularly performance sensitive fast path, with a
  comment that explains why the usage in a specific location is safe without
  the extra barriers.

  See memory-barriers.txt for a more detailed discussion on the precise ordering
  guarantees of the non-relaxed and relaxed versions.

ioread64(), ioread32(), ioread16(), ioread8(),
iowrite64(), iowrite32(), iowrite16(), iowrite8()

  These are an alternative to the normal readl()/writel() functions, with almost
  identical behavior, but they can also operate on ``__iomem`` tokens returned
  for mapping PCI I/O space with pci_iomap() or ioport_map(). On architectures
  that require special instructions for I/O port access, this adds a small
  overhead for an indirect function call implemented in lib/iomap.c, while on
  other architectures, these are simply aliases.
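
  A brief sketch of why this family is useful: the same accessor works whether
  a PCI BAR is memory space or I/O space when the mapping comes from
  pci_iomap(). The BAR number and the ``FOO_REG_IRQ_EN`` offset below are
  hypothetical::

      #include <linux/pci.h>
      #include <linux/io.h>
      #include <linux/errno.h>

      #define FOO_REG_IRQ_EN  0x10    /* hypothetical register offset */

      static int foo_enable_irqs(struct pci_dev *pdev)
      {
              void __iomem *base;

              /* Works for both MMIO BARs and I/O port BARs. */
              base = pci_iomap(pdev, 0, 0);
              if (!base)
                      return -ENOMEM;

              iowrite32(0x1, base + FOO_REG_IRQ_EN);
              (void)ioread32(base + FOO_REG_IRQ_EN);  /* flush posted write */

              pci_iounmap(pdev, base);
              return 0;
      }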
ioread64be(), ioread32be(), ioread16be()
iowrite64be(), iowrite32be(), iowrite16be()

  These behave in the same way as the ioread32()/iowrite32() family, but with
  reversed byte order, for accessing devices with big-endian MMIO registers.
  Device drivers that can operate on either big-endian or little-endian
  registers may have to implement a custom wrapper function that picks one or
  the other depending on which device was found.

  Note: On some architectures, the normal readl()/writel() functions
  traditionally assume that devices are the same endianness as the CPU, while
  using a hardware byte-reverse on the PCI bus when running a big-endian kernel.
  Drivers that use readl()/writel() this way are generally not portable, but
  tend to be limited to a particular SoC.
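
  One possible shape for such a wrapper is sketched below; whether the device
  is big-endian would typically be detected at probe time (e.g. from a device
  tree property), and the ``struct foo_device`` flag shown here is
  hypothetical::

      #include <linux/io.h>
      #include <linux/types.h>

      struct foo_device {
              void __iomem *regs;
              bool big_endian;        /* set at probe time, e.g. from DT */
      };

      static u32 foo_read(struct foo_device *foo, unsigned int offset)
      {
              if (foo->big_endian)
                      return ioread32be(foo->regs + offset);
              return ioread32(foo->regs + offset);
      }

      static void foo_write(struct foo_device *foo, unsigned int offset, u32 val)
      {
              if (foo->big_endian)
                      iowrite32be(val, foo->regs + offset);
              else
                      iowrite32(val, foo->regs + offset);
      }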
hi_lo_readq(), lo_hi_readq(), hi_lo_readq_relaxed(), lo_hi_readq_relaxed(),
ioread64_lo_hi(), ioread64_hi_lo(), ioread64be_lo_hi(), ioread64be_hi_lo(),
hi_lo_writeq(), lo_hi_writeq(), hi_lo_writeq_relaxed(), lo_hi_writeq_relaxed(),
iowrite64_lo_hi(), iowrite64_hi_lo(), iowrite64be_lo_hi(), iowrite64be_hi_lo()

  Some device drivers have 64-bit registers that cannot be accessed atomically
  on 32-bit architectures but allow two consecutive 32-bit accesses instead.
  Since it depends on the particular device which of the two halves has to be
  accessed first, a helper is provided for each combination of 64-bit accessors
  with either low/high or high/low word ordering. A device driver must include
  either <linux/io-64-nonatomic-lo-hi.h> or <linux/io-64-nonatomic-hi-lo.h> to
  get the function definitions along with helpers that redirect the normal
  readq()/writeq() to them on architectures that do not provide 64-bit access
  natively.
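
  A short sketch of the include-and-use pattern described above, for a
  hypothetical device whose 64-bit counter register must be read low word
  first on 32-bit systems::

      /*
       * Pick the lo-hi ordering; on architectures without a native readq(),
       * this header redirects readq() to two 32-bit reads, low word first.
       */
      #include <linux/io-64-nonatomic-lo-hi.h>
      #include <linux/types.h>

      #define FOO_COUNTER64   0x40    /* hypothetical 64-bit register */

      static u64 foo_read_counter(void __iomem *regs)
      {
              return readq(regs + FOO_COUNTER64);
      }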
__raw_readq(), __raw_readl(), __raw_readw(), __raw_readb(),
__raw_writeq(), __raw_writel(), __raw_writew(), __raw_writeb()

  These are low-level MMIO accessors without barriers or byteorder changes and
  architecture specific behavior. Accesses are usually atomic in the sense that
  a four-byte __raw_readl() does not get split into individual byte loads, but
  multiple consecutive accesses can be combined on the bus. In portable code, it
  is only safe to use these to access memory behind a device bus but not MMIO
  registers, as there are no ordering guarantees with regard to other MMIO
  accesses or even spinlocks. The byte order is generally the same as for normal
  memory, so unlike the other functions, these can be used to copy data between
  kernel memory and device memory.

inl(), inw(), inb(), outl(), outw(), outb()

  PCI I/O port resources traditionally require separate helpers as they are
  implemented using special instructions on the x86 architecture. On most other
  architectures, these are mapped to readl()/writel() style accessors
  internally, usually pointing to a fixed area in virtual memory. Instead of an
  ``__iomem`` pointer, the address is a 32-bit integer token to identify a port
  number. PCI requires I/O port access to be non-posted, meaning that an outb()
  must complete before the following code executes, while a normal writeb() may
  still be in progress. On architectures that correctly implement this, I/O port
  access is therefore ordered against spinlocks. Many non-x86 PCI host bridge
  implementations and CPU architectures however fail to implement non-posted I/O
  space on PCI, so they can end up being posted on such hardware.

  In some architectures, the I/O port number space has a 1:1 mapping to
  ``__iomem`` pointers, but this is not recommended and device drivers should
  not rely on that for portability. Similarly, an I/O port number as described
  in a PCI base address register may not correspond to the port number as seen
  by a device driver. Portable drivers need to read the port number for the
  resource provided by the kernel.

  There are no direct 64-bit I/O port accessors, but pci_iomap() in combination
  with ioread64/iowrite64 can be used instead.

inl_p(), inw_p(), inb_p(), outl_p(), outw_p(), outb_p()

  On ISA devices that require specific timing, the _p versions of the I/O
  accessors add a small delay. On architectures that do not have ISA buses,
  these are aliases to the normal inb/outb helpers.

readsq, readsl, readsw, readsb
writesq, writesl, writesw, writesb
ioread64_rep, ioread32_rep, ioread16_rep, ioread8_rep
iowrite64_rep, iowrite32_rep, iowrite16_rep, iowrite8_rep
insl, insw, insb, outsl, outsw, outsb

  These are helpers that access the same address multiple times, usually to copy
  data between a kernel memory byte stream and a FIFO buffer. Unlike the normal
  MMIO accessors, these do not perform a byteswap on big-endian kernels, so the
  first byte in the FIFO register corresponds to the first byte in the memory
  buffer regardless of the architecture.
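
  As an illustrative sketch, draining a 32-bit wide FIFO register into a
  kernel buffer might look like this; the register offset and word count
  handling are hypothetical::

      #include <linux/io.h>
      #include <linux/types.h>

      #define FOO_FIFO        0x80    /* hypothetical FIFO register */

      static void foo_drain_fifo(void __iomem *regs, u32 *buf, unsigned int words)
      {
              /*
               * Read the same register 'words' times; the data lands in the
               * buffer in FIFO order, with no byteswap on big-endian kernels.
               */
              readsl(regs + FOO_FIFO, buf, words);
      }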
Device memory mapping modes
===========================

Some architectures support multiple modes for mapping device memory.
ioremap_*() variants provide a common abstraction around these
architecture-specific modes, with a shared set of semantics.

ioremap() is the most common mapping type, and is applicable to typical device
memory (e.g. I/O registers). Other modes can offer weaker or stronger
guarantees, if supported by the architecture. From most to least common, they
are as follows:

ioremap()
---------

The default mode, suitable for most memory-mapped devices, e.g. control
registers. Memory mapped using ioremap() has the following characteristics:

* Uncached - CPU-side caches are bypassed, and all reads and writes are handled
  directly by the device
* No speculative operations - the CPU may not issue a read or write to this
  memory, unless the instruction that does so has been reached in committed
  program flow.
* No reordering - The CPU may not reorder accesses to this memory mapping with
  respect to each other. On some architectures, this relies on barriers in
  readl_relaxed()/writel_relaxed().
* No repetition - The CPU may not issue multiple reads or writes for a single
  program instruction.
* No write-combining - Each I/O operation results in one discrete read or write
  being issued to the device, and multiple writes are not combined into larger
  writes. This may or may not be enforced when using __raw I/O accessors or
  pointer dereferences.
* Non-executable - The CPU is not allowed to speculate instruction execution
  from this memory (it probably goes without saying, but you're also not
  allowed to jump into device memory).

On many platforms and buses (e.g. PCI), writes issued through ioremap()
mappings are posted, which means that the CPU does not wait for the write to
actually reach the target device before retiring the write instruction.

On many platforms, I/O accesses must be aligned with respect to the access
size; failure to do so will result in an exception or unpredictable results.

ioremap_wc()
------------

Maps I/O memory as normal memory with write combining. Unlike ioremap(),

* The CPU may speculatively issue reads from the device that the program
  didn't actually execute, and may choose to basically read whatever it wants.
* The CPU may reorder operations as long as the result is consistent from the
  program's point of view.
* The CPU may write to the same location multiple times, even when the program
  issued a single write.
* The CPU may combine several writes into a single larger write.

This mode is typically used for video framebuffers, where it can increase
performance of writes. It can also be used for other blocks of memory in
devices (e.g. buffers or shared memory), but care must be taken as accesses are
not guaranteed to be ordered with respect to normal ioremap() MMIO register
accesses without explicit barriers.

On a PCI bus, it is usually safe to use ioremap_wc() on MMIO areas marked as
``IORESOURCE_PREFETCH``, but it may not be used on those without the flag.
For on-chip devices, there is no corresponding flag, but a driver can use
ioremap_wc() on a device that is known to be safe.
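
A hedged sketch of that PCI rule: map a BAR with ioremap_wc() only if the
resource carries ``IORESOURCE_PREFETCH``, and fall back to ioremap() otherwise.
The BAR number and the framebuffer use case are assumed for illustration::

    #include <linux/pci.h>
    #include <linux/io.h>

    static void __iomem *foo_map_framebuffer(struct pci_dev *pdev, int bar)
    {
            resource_size_t start = pci_resource_start(pdev, bar);
            resource_size_t len = pci_resource_len(pdev, bar);

            /* Write combining is only safe on prefetchable BARs. */
            if (pci_resource_flags(pdev, bar) & IORESOURCE_PREFETCH)
                    return ioremap_wc(start, len);

            return ioremap(start, len);
    }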
ioremap_wt()
------------

Maps I/O memory as normal memory with write-through caching. Like ioremap_wc(),
but also,

* The CPU may cache writes issued to and reads from the device, and serve reads
  from that cache.

This mode is sometimes used for video framebuffers, where drivers still expect
writes to reach the device in a timely manner (and not be stuck in the CPU
cache), but reads may be served from the cache for efficiency. However, it is
rarely useful these days, as framebuffer drivers usually perform writes only,
for which ioremap_wc() is more efficient (as it doesn't needlessly trash the
cache). Most drivers should not use this.

ioremap_np()
------------

Like ioremap(), but explicitly requests non-posted write semantics. On some
architectures and buses, ioremap() mappings have posted write semantics, which
means that writes can appear to "complete" from the point of view of the
CPU before the written data actually arrives at the target device. Writes are
still ordered with respect to other writes and reads from the same device, but
due to the posted write semantics, this is not the case with respect to other
devices. ioremap_np() explicitly requests non-posted semantics, which means
that the write instruction will not appear to complete until the device has
received (and to some platform-specific extent acknowledged) the written data.

This mapping mode primarily exists to cater for platforms with bus fabrics that
require this particular mapping mode to work correctly. These platforms set the
``IORESOURCE_MEM_NONPOSTED`` flag for a resource that requires ioremap_np()
semantics and portable drivers should use an abstraction that automatically
selects it where appropriate (see the `Higher-level ioremap abstractions`_
section below).

The bare ioremap_np() is only available on some architectures; on others, it
always returns NULL. Drivers should not normally use it, unless they are
platform-specific or they derive benefit from non-posted writes where
supported, and can fall back to ioremap() otherwise. The normal approach to
ensure posted write completion is to do a dummy read after a write as
explained in `Accessing the device`_, which works with ioremap() on all
platforms.

ioremap_np() should never be used for PCI drivers. PCI memory space writes are
always posted, even on architectures that otherwise implement ioremap_np().
Using ioremap_np() for PCI BARs will at best result in posted write semantics,
and at worst result in complete breakage.

Note that non-posted write semantics are orthogonal to CPU-side ordering
guarantees. A CPU may still choose to issue other reads or writes before a
non-posted write instruction retires. See the previous section on MMIO access
functions for details on the CPU side of things.

ioremap_uc()
------------

ioremap_uc() is only meaningful on old x86-32 systems with the PAT extension,
and on ia64 with its slightly unconventional ioremap() behavior; everywhere
else, ioremap_uc() returns NULL.

Portable drivers should avoid the use of ioremap_uc(); use ioremap() instead.

ioremap_cache()
---------------

ioremap_cache() effectively maps I/O memory as normal RAM. CPU write-back
caches can be used, and the CPU is free to treat the device as if it were a
block of RAM. This should never be used for device memory which has side
effects of any kind, or which does not return the data previously written on
read.

It should also not be used for actual RAM, as the returned pointer is an
``__iomem`` token. memremap() can be used for mapping normal RAM that is outside
of the linear kernel memory area to a regular pointer.

Portable drivers should avoid the use of ioremap_cache().
Architecture example
--------------------

Here is how the above modes map to memory attribute settings on the ARM64
architecture:

+------------------------+--------------------------------------------+
| API                    | Memory region type and cacheability        |
+------------------------+--------------------------------------------+
| ioremap_np()           | Device-nGnRnE                              |
+------------------------+--------------------------------------------+
| ioremap()              | Device-nGnRE                               |
+------------------------+--------------------------------------------+
| ioremap_uc()           | (not implemented)                          |
+------------------------+--------------------------------------------+
| ioremap_wc()           | Normal-Non Cacheable                       |
+------------------------+--------------------------------------------+
| ioremap_wt()           | (not implemented; fallback to ioremap)     |
+------------------------+--------------------------------------------+
| ioremap_cache()        | Normal-Write-Back Cacheable                |
+------------------------+--------------------------------------------+

Higher-level ioremap abstractions
=================================

Instead of using the above raw ioremap() modes, drivers are encouraged to use
higher-level APIs. These APIs may implement platform-specific logic to
automatically choose an appropriate ioremap mode on any given bus, allowing for
a platform-agnostic driver to work on those platforms without any special
cases. At the time of this writing, the following ioremap() wrappers have such
logic:

devm_ioremap_resource()

  Can automatically select ioremap_np() over ioremap() according to platform
  requirements, if the ``IORESOURCE_MEM_NONPOSTED`` flag is set on the struct
  resource. Uses devres to automatically unmap the resource when the driver
  probe() function fails or a device is unbound from its driver.

  Documented in Documentation/driver-api/driver-model/devres.rst.
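
  A minimal platform driver probe() using this wrapper might look as follows;
  the ``foo_`` names and the register written at the end are hypothetical, and
  error handling beyond the mapping itself is omitted::

      #include <linux/platform_device.h>
      #include <linux/io.h>
      #include <linux/err.h>

      static int foo_probe(struct platform_device *pdev)
      {
              struct resource *res;
              void __iomem *regs;

              res = platform_get_resource(pdev, IORESOURCE_MEM, 0);
              /*
               * Picks ioremap_np() automatically when the resource has
               * IORESOURCE_MEM_NONPOSTED set, and unmaps on driver unbind.
               */
              regs = devm_ioremap_resource(&pdev->dev, res);
              if (IS_ERR(regs))
                      return PTR_ERR(regs);

              writel(0, regs);        /* hypothetical: reset control register */
              return 0;
      }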
of_address_to_resource()

  Automatically sets the ``IORESOURCE_MEM_NONPOSTED`` flag for platforms that
  require non-posted writes for certain buses (see the nonposted-mmio and
  posted-mmio device tree properties).

of_iomap()

  Maps the resource described in a ``reg`` property in the device tree, doing
  all required translations. Automatically selects ioremap_np() according to
  platform requirements, as above.

pci_ioremap_bar(), pci_ioremap_wc_bar()

  Maps the resource described in a PCI base address register without having to
  extract the physical address first.

pci_iomap(), pci_iomap_wc()

  Like pci_ioremap_bar()/pci_ioremap_wc_bar(), but they also work on I/O space
  when used together with ioread32()/iowrite32() and similar accessors.

pcim_iomap()

  Like pci_iomap(), but uses devres to automatically unmap the resource when
  the driver probe() function fails or a device is unbound from its driver.

  Documented in Documentation/driver-api/driver-model/devres.rst.

Not using these wrappers may make drivers unusable on certain platforms with
stricter rules for mapping I/O memory.

Generalizing Access to System and I/O Memory
============================================

.. kernel-doc:: include/linux/iosys-map.h
   :doc: overview

.. kernel-doc:: include/linux/iosys-map.h
   :internal:

Public Functions Provided
=========================

.. kernel-doc:: arch/x86/include/asm/io.h
   :internal:

.. kernel-doc:: lib/pci_iomap.c
   :export: