====================
DMA Engine API Guide
====================

Vinod Koul <vinod dot koul at intel.com>

.. note:: For DMA Engine usage in async_tx please see:
          ``Documentation/crypto/async-tx-api.rst``


Below is a guide to device driver writers on how to use the Slave-DMA API of the
DMA Engine. This is applicable to slave DMA usage only.

DMA usage
=========

The slave DMA usage consists of following steps:

- Allocate a DMA slave channel

- Set slave and controller specific parameters

- Get a descriptor for transaction

- Submit the transaction

- Issue pending requests and wait for callback notification

The details of these operations are:

1. Allocate a DMA slave channel

   Channel allocation is slightly different in the slave DMA context:
   client drivers typically need a channel from a particular DMA
   controller only and in some cases even a specific channel is desired.
   To request a channel the dma_request_chan() API is used.

   Interface:

   .. code-block:: c

      struct dma_chan *dma_request_chan(struct device *dev, const char *name);

   Which will find and return the ``name`` DMA channel associated with the 'dev'
   device. The association is done via DT, ACPI or board file based
   dma_slave_map matching table.

   A channel allocated via this interface is exclusive to the caller,
   until dma_release_channel() is called.

2. Set slave and controller specific parameters

   Next step is always to pass some specific information to the DMA
   driver. Most of the generic information which a slave DMA can use
   is in struct dma_slave_config. This allows the clients to specify
   DMA direction, DMA addresses, bus widths, DMA burst lengths etc.
   for the peripheral.

   If some DMA controllers have more parameters to be sent then they
   should try to embed struct dma_slave_config in their controller
   specific structure. That gives flexibility to client to pass more
   parameters, if required.

   Interface:

   .. code-block:: c

      int dmaengine_slave_config(struct dma_chan *chan,
                                 struct dma_slave_config *config)

   Please see the dma_slave_config structure definition in dmaengine.h
   for a detailed explanation of the struct members. Please note
   that the 'direction' member will be going away as it duplicates the
   direction given in the prepare call.
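   Taken together, steps 1 and 2 might look like the following sketch in a
   peripheral driver's probe path. This is an illustration only: the channel
   name ``"rx"``, the ``PERIPHERAL_RX_FIFO`` address, and the burst/width
   values are hypothetical and depend on the peripheral and its DT/ACPI
   description.

   .. code-block:: c

      /* Hypothetical probe-time setup: request the "rx" channel described
       * for this device, then configure it for device-to-memory transfers
       * from the peripheral's RX FIFO.
       */
      struct dma_chan *chan;
      struct dma_slave_config cfg = { };
      int ret;

      chan = dma_request_chan(dev, "rx");
      if (IS_ERR(chan))
              return PTR_ERR(chan);   /* may be -EPROBE_DEFER */

      cfg.direction = DMA_DEV_TO_MEM;
      cfg.src_addr = PERIPHERAL_RX_FIFO;      /* hypothetical FIFO bus address */
      cfg.src_addr_width = DMA_SLAVE_BUSWIDTH_4_BYTES;
      cfg.src_maxburst = 8;

      ret = dmaengine_slave_config(chan, &cfg);
      if (ret) {
              dma_release_channel(chan);
              return ret;
      }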
3. Get a descriptor for transaction

   For slave usage the various modes of slave transfers supported by the
   DMA-engine are:

   - slave_sg: DMA a list of scatter gather buffers from/to a peripheral

   - peripheral_dma_vec: DMA an array of scatter gather buffers from/to a
     peripheral. Similar to slave_sg, but uses an array of dma_vec
     structures instead of a scatterlist.

   - dma_cyclic: Perform a cyclic DMA operation from/to a peripheral till the
     operation is explicitly stopped.

   - interleaved_dma: This is common to Slave as well as M2M clients. For slave
     address of devices' fifo could be already known to the driver.
     Various types of operations could be expressed by setting
     appropriate values to the 'dma_interleaved_template' members. Cyclic
     interleaved DMA transfers are also possible if supported by the channel by
     setting the DMA_PREP_REPEAT transfer flag.

   A non-NULL return of this transfer API represents a "descriptor" for
   the given transaction.

   Interface:

   .. code-block:: c

      struct dma_async_tx_descriptor *dmaengine_prep_slave_sg(
         struct dma_chan *chan, struct scatterlist *sgl,
         unsigned int sg_len, enum dma_data_direction direction,
         unsigned long flags);

      struct dma_async_tx_descriptor *dmaengine_prep_peripheral_dma_vec(
         struct dma_chan *chan, const struct dma_vec *vecs,
         size_t nents, enum dma_data_direction direction,
         unsigned long flags);

      struct dma_async_tx_descriptor *dmaengine_prep_dma_cyclic(
         struct dma_chan *chan, dma_addr_t buf_addr, size_t buf_len,
         size_t period_len, enum dma_data_direction direction);

      struct dma_async_tx_descriptor *dmaengine_prep_interleaved_dma(
         struct dma_chan *chan, struct dma_interleaved_template *xt,
         unsigned long flags);

   The peripheral driver is expected to have mapped the scatterlist for
   the DMA operation prior to calling dmaengine_prep_slave_sg(), and must
   keep the scatterlist mapped until the DMA operation has completed.
   The scatterlist must be mapped using the DMA struct device.
   If a mapping needs to be synchronized later, dma_sync_*_for_*() must be
   called using the DMA struct device, too.
   So, normal setup should look like this:
   .. code-block:: c

      struct device *dma_dev = dmaengine_get_dma_device(chan);

      nr_sg = dma_map_sg(dma_dev, sgl, sg_len);
      if (nr_sg == 0)
         /* error */

      desc = dmaengine_prep_slave_sg(chan, sgl, nr_sg, direction, flags);

   Once a descriptor has been obtained, the callback information can be
   added and the descriptor must then be submitted. Some DMA engine
   drivers may hold a spinlock between a successful preparation and
   submission so it is important that these two operations are closely
   paired.

   .. note::

      Although the async_tx API specifies that completion callback
      routines cannot submit any new operations, this is not the
      case for slave/cyclic DMA.

      For slave DMA, the subsequent transaction may not be available
      for submission prior to callback function being invoked, so
      slave DMA callbacks are permitted to prepare and submit a new
      transaction.

      For cyclic DMA, a callback function may wish to terminate the
      DMA via dmaengine_terminate_async().

      Therefore, it is important that DMA engine drivers drop any
      locks before calling the callback function which may cause a
      deadlock.

      Note that callbacks will always be invoked from the DMA
      engine's tasklet, never from interrupt context.

   **Optional: per descriptor metadata**

   DMAengine provides two ways for metadata support.

   DESC_METADATA_CLIENT

     The metadata buffer is allocated/provided by the client driver and it is
     attached to the descriptor.

     .. code-block:: c

        int dmaengine_desc_attach_metadata(struct dma_async_tx_descriptor *desc,
                                           void *data, size_t len);

   DESC_METADATA_ENGINE

     The metadata buffer is allocated/managed by the DMA driver. The client
     driver can ask for the pointer, maximum size and the currently used size of
     the metadata and can directly update or read it.

     Because the DMA driver manages the memory area containing the metadata,
     clients must make sure that they do not try to access or get the pointer
     after their transfer completion callback has run for the descriptor.
     If no completion callback has been defined for the transfer, then the
     metadata must not be accessed after issue_pending.
     In other words: if the aim is to read back metadata after the transfer is
     completed, then the client must use completion callback.
     .. code-block:: c

        void *dmaengine_desc_get_metadata_ptr(struct dma_async_tx_descriptor *desc,
           size_t *payload_len, size_t *max_len);

        int dmaengine_desc_set_metadata_len(struct dma_async_tx_descriptor *desc,
           size_t payload_len);

   Client drivers can query if a given mode is supported with:

   .. code-block:: c

      bool dmaengine_is_metadata_mode_supported(struct dma_chan *chan,
         enum dma_desc_metadata_mode mode);

   Depending on the used mode client drivers must follow different flow.

   DESC_METADATA_CLIENT

     - DMA_MEM_TO_DEV / DEV_MEM_TO_MEM:

       1. prepare the descriptor (dmaengine_prep_*)
          construct the metadata in the client's buffer
       2. use dmaengine_desc_attach_metadata() to attach the buffer to the
          descriptor
       3. submit the transfer

     - DMA_DEV_TO_MEM:

       1. prepare the descriptor (dmaengine_prep_*)
       2. use dmaengine_desc_attach_metadata() to attach the buffer to the
          descriptor
       3. submit the transfer
       4. when the transfer is completed, the metadata should be available in
          the attached buffer

   DESC_METADATA_ENGINE

     - DMA_MEM_TO_DEV / DEV_MEM_TO_MEM:

       1. prepare the descriptor (dmaengine_prep_*)
       2. use dmaengine_desc_get_metadata_ptr() to get the pointer to the
          engine's metadata area
       3. update the metadata at the pointer
       4. use dmaengine_desc_set_metadata_len() to tell the DMA engine the
          amount of data the client has placed into the metadata buffer
       5. submit the transfer

     - DMA_DEV_TO_MEM:

       1. prepare the descriptor (dmaengine_prep_*)
       2. submit the transfer
       3. on transfer completion, use dmaengine_desc_get_metadata_ptr() to get
          the pointer to the engine's metadata area
       4. read out the metadata from the pointer

   .. note::

      When DESC_METADATA_ENGINE mode is used the metadata area for the
      descriptor is no longer valid after the transfer has been completed
      (valid up to the point when the completion callback returns).

      Mixed use of DESC_METADATA_CLIENT / DESC_METADATA_ENGINE is not allowed,
      client drivers must use either of the modes per descriptor.
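   As an illustration, the DESC_METADATA_CLIENT flow for a DMA_DEV_TO_MEM
   transfer might be sketched as below. The buffer ``metadata_buf``, its
   length ``METADATA_LEN``, and the callback names are hypothetical
   placeholders for client-driver state.

   .. code-block:: c

      /* Hypothetical DESC_METADATA_CLIENT flow for DMA_DEV_TO_MEM: attach a
       * client-owned buffer before submission; the engine fills it in by the
       * time the completion callback runs.
       */
      struct dma_async_tx_descriptor *desc;
      dma_cookie_t cookie;
      int ret;

      if (!dmaengine_is_metadata_mode_supported(chan, DESC_METADATA_CLIENT))
              return -ENOTSUPP;

      desc = dmaengine_prep_slave_sg(chan, sgl, nr_sg, DMA_DEV_TO_MEM,
                                     DMA_PREP_INTERRUPT);
      if (!desc)
              return -EINVAL;

      ret = dmaengine_desc_attach_metadata(desc, metadata_buf, METADATA_LEN);
      if (ret)
              return ret;

      desc->callback = rx_complete;   /* metadata_buf is valid to read there */
      desc->callback_param = drv_data;

      cookie = dmaengine_submit(desc);
      dma_async_issue_pending(chan);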
4. Submit the transaction

   Once the descriptor has been prepared and the callback information
   added, it must be placed on the DMA engine drivers pending queue.

   Interface:

   .. code-block:: c

      dma_cookie_t dmaengine_submit(struct dma_async_tx_descriptor *desc)

   This returns a cookie that can be used to check the progress of DMA engine
   activity via other DMA engine calls not covered in this document.

   dmaengine_submit() will not start the DMA operation, it merely adds
   it to the pending queue. For this, see step 5, dma_async_issue_pending.

   .. note::

      After calling ``dmaengine_submit()`` the submitted transfer descriptor
      (``struct dma_async_tx_descriptor``) belongs to the DMA engine.
      Consequently, the client must consider invalid the pointer to that
      descriptor.

5. Issue pending DMA requests and wait for callback notification

   The transactions in the pending queue can be activated by calling the
   issue_pending API. If channel is idle then the first transaction in
   queue is started and subsequent ones queued up.

   On completion of each DMA operation, the next in queue is started and
   a tasklet triggered. The tasklet will then call the client driver
   completion callback routine for notification, if set.

   Interface:

   .. code-block:: c

      void dma_async_issue_pending(struct dma_chan *chan);

Further APIs
------------

1. Terminate APIs

   .. code-block:: c

      int dmaengine_terminate_sync(struct dma_chan *chan)
      int dmaengine_terminate_async(struct dma_chan *chan)
      int dmaengine_terminate_all(struct dma_chan *chan) /* DEPRECATED */

   This causes all activity for the DMA channel to be stopped, and may
   discard data in the DMA FIFO which hasn't been fully transferred.
   No callback functions will be called for any incomplete transfers.

   Two variants of this function are available.

   dmaengine_terminate_async() might not wait until the DMA has been fully
   stopped or until any running complete callbacks have finished. But it is
   possible to call dmaengine_terminate_async() from atomic context or from
   within a complete callback. dmaengine_synchronize() must be called before it
   is safe to free the memory accessed by the DMA transfer or to free resources
   accessed from within the complete callback.

   dmaengine_terminate_sync() will wait for the transfer and any running
   complete callbacks to finish before it returns. But the function must not be
   called from atomic context or from within a complete callback.

   dmaengine_terminate_all() is deprecated and should not be used in new code.

2. Pause API

   .. code-block:: c

      int dmaengine_pause(struct dma_chan *chan)

   This pauses activity on the DMA channel without data loss.
3. Resume API

   .. code-block:: c

      int dmaengine_resume(struct dma_chan *chan)

   Resume a previously paused DMA channel. It is invalid to resume a
   channel which is not currently paused.

4. Check Txn complete

   .. code-block:: c

      enum dma_status dma_async_is_tx_complete(struct dma_chan *chan,
         dma_cookie_t cookie, dma_cookie_t *last, dma_cookie_t *used)

   This can be used to check the status of the transaction. See
   the documentation in include/linux/dmaengine.h for a more complete
   description of this API.

   This can be used in conjunction with dma_async_is_complete() and
   the cookie returned from dmaengine_submit() to check for
   completion of a specific DMA transaction.

   .. note::

      Not all DMA engine drivers can return reliable information for
      a running DMA channel. It is recommended that DMA engine users
      pause or stop (via dmaengine_terminate_all()) the channel before
      using this API.

5. Synchronize termination API

   .. code-block:: c

      void dmaengine_synchronize(struct dma_chan *chan)

   Synchronize the termination of the DMA channel to the current context.

   This function should be used after dmaengine_terminate_async() to synchronize
   the termination of the DMA channel to the current context. The function will
   wait for the transfer and any running complete callbacks to finish before it
   returns.

   If dmaengine_terminate_async() is used to stop the DMA channel this function
   must be called before it is safe to free memory accessed by previously
   submitted descriptors or to free any resources accessed within the complete
   callback of previously submitted descriptors.

   The behavior of this function is undefined if dma_async_issue_pending() has
   been called between dmaengine_terminate_async() and this function.
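Putting the pieces together, a typical transfer including status checking and
channel teardown might be sketched as follows. All names are placeholders, the
flow assumes a channel set up as in the earlier steps, and a real driver would
normally rely on the completion callback rather than polling the status.

.. code-block:: c

   /* Hypothetical end-to-end flow: prepare, submit, kick the pending queue,
    * check completion, then tear the channel down. dmaengine_synchronize()
    * must run before buffers used by the transfer are freed.
    */
   dma_cookie_t cookie, last, used;
   enum dma_status status;

   desc = dmaengine_prep_slave_sg(chan, sgl, nr_sg, DMA_MEM_TO_DEV,
                                  DMA_PREP_INTERRUPT);
   if (!desc)
           goto err;

   cookie = dmaengine_submit(desc);
   dma_async_issue_pending(chan);

   /* ... later, with the channel paused or stopped as recommended ... */
   status = dma_async_is_tx_complete(chan, cookie, &last, &used);
   if (status == DMA_COMPLETE)
           ; /* transfer done, buffers may be unmapped */

   /* teardown: stop everything, then synchronize before freeing memory */
   dmaengine_terminate_async(chan);
   dmaengine_synchronize(chan);
   dma_release_channel(chan);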