.. SPDX-License-Identifier: GPL-2.0

=====================================
Asynchronous Transfers/Transforms API
=====================================

.. Contents

  1. INTRODUCTION

  2. GENEALOGY

  3. USAGE
  3.1 General format of the API
  3.2 Supported operations
  3.3 Descriptor management
  3.4 When does the operation execute?
  3.5 When does the operation complete?
  3.6 Constraints
  3.7 Example

  4. DRIVER DEVELOPMENT NOTES
  4.1 Conformance points
  4.2 "My application needs exclusive control of hardware channels"

  5. SOURCE

1. Introduction
===============

The async_tx API provides methods for describing a chain of asynchronous
bulk memory transfers/transforms with support for inter-transactional
dependencies. It is implemented as a dmaengine client that smooths over
the details of different hardware offload engine implementations. Code
that is written to the API can optimize for asynchronous operation, and
the API will fit the chain of operations to the available offload
resources.

2. Genealogy
============

The API was initially designed to offload the memory copy and
xor-parity-calculations of the md-raid5 driver using the offload engines
present in the Intel(R) Xscale series of I/O processors. It also built
on the 'dmaengine' layer developed for offloading memory copies in the
network stack using Intel(R) I/OAT engines. The following design
features surfaced as a result:

1. implicit synchronous path: users of the API do not need to know if
   the platform they are running on has offload capabilities. The
   operation will be offloaded when an engine is available and carried
   out in software otherwise.
2. cross channel dependency chains: the API allows a chain of dependent
   operations to be submitted, like xor->copy->xor in the raid5 case. The
   API automatically handles cases where the transition from one operation
   to another implies a hardware channel switch.
3. dmaengine extensions to support multiple clients and operation types
   beyond 'memcpy'

3. Usage
========

3.1 General format of the API
-----------------------------

::

    struct dma_async_tx_descriptor *
    async_<operation>(<op specific parameters>, struct async_submit_ctl *submit)

3.2 Supported operations
------------------------

========  ====================================================================
memcpy    memory copy between a source and a destination buffer
memset    fill a destination buffer with a byte value
xor       xor a series of source buffers and write the result to a
          destination buffer
xor_val   xor a series of source buffers and set a flag if the
          result is zero. The implementation attempts to prevent
          writes to memory
pq        generate the p+q (raid6 syndrome) from a series of source buffers
pq_val    validate that p and/or q buffers are in sync with a given series
          of sources
datap     (raid6_datap_recov) recover a raid6 data block and the p block
          from the given sources
2data     (raid6_2data_recov) recover 2 raid6 data blocks from the given
          sources
========  ====================================================================

3.3 Descriptor management
-------------------------

The return value is non-NULL and points to a 'descriptor' when the operation
has been queued to execute asynchronously.
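
For instance, acknowledging at submit time (method 1 above) looks like the
following. This is a minimal sketch rather than code from the kernel tree;
the helper name and its parameters are hypothetical, while init_async_submit(),
async_memcpy() and the ASYNC_TX_ACK flag are the API described above::

    #include <linux/async_tx.h>

    /* hypothetical helper: a single copy with no dependent operations,
     * so the descriptor may be recycled as soon as the copy completes
     */
    static void copy_one_page(struct page *dest, struct page *src, size_t len)
    {
        struct async_submit_ctl submit;

        /* method 1: request the acknowledged state at submit time */
        init_async_submit(&submit, ASYNC_TX_ACK, NULL, NULL, NULL, NULL);
        async_memcpy(dest, src, 0, 0, len, &submit);
    }

Had the descriptor been submitted without ASYNC_TX_ACK, calling
async_tx_ack() on the returned descriptor (method 3) would have the same
effect.
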
Descriptors are recycled
resources, under control of the offload engine driver, to be reused as
operations complete. When an application needs to submit a chain of
operations it must guarantee that the descriptor is not automatically recycled
before the dependency is submitted. This requires that all descriptors be
acknowledged by the application before the offload engine driver is allowed to
recycle (or free) the descriptor. A descriptor can be acknowledged by one of
the following methods (a sketch of method 1 follows the list):

1. setting the ASYNC_TX_ACK flag if no child operations are to be submitted
2. submitting an unacknowledged descriptor as a dependency to another
   async_tx call, which implicitly sets the acknowledged state
3. calling async_tx_ack() on the descriptor
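
When the length of the chain is not known in advance, the chain can be
capped with async_trigger_callback(). The sketch below is illustrative
only; the helper, its parameters and the chain of copies are hypothetical,
while the async_* calls are the API described above::

    #include <linux/async_tx.h>
    #include <linux/completion.h>

    static void chain_done(void *param)
    {
        complete(param);
    }

    /* hypothetical helper: submit 'count' dependent copies, then attach
     * the completion callback to the end of the chain
     */
    static void copy_chain(struct page **dst, struct page **src, int count,
                           size_t len, struct completion *cmp)
    {
        struct dma_async_tx_descriptor *tx = NULL;
        struct async_submit_ctl submit;
        int i;

        for (i = 0; i < count; i++) {
            /* each copy depends on the previous one */
            init_async_submit(&submit, 0, tx, NULL, NULL, NULL);
            tx = async_memcpy(dst[i], src[i], 0, 0, len, &submit);
        }

        /* terminate the chain with an interrupt/callback descriptor */
        init_async_submit(&submit, ASYNC_TX_ACK, tx, chain_done, cmp, NULL);
        async_trigger_callback(&submit);

        async_tx_issue_pending_all();
    }

The intermediate descriptors are acknowledged implicitly because each one is
submitted as the dependency of the next operation; only the final trigger
descriptor needs ASYNC_TX_ACK. The caller then waits on 'cmp', as in the
example in section 3.7 below.
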

3.6 Constraints
---------------

1. Calls to async_<operation> are not permitted in IRQ context. Other
   contexts are permitted provided constraint #2 is not violated.
2. Completion callback routines cannot submit new operations. This
   results in recursion in the synchronous case and spin_locks being
   acquired twice in the asynchronous case.

3.7 Example
-----------

Perform a xor->copy->xor operation where each operation depends on the
result from the previous operation::

    #include <linux/async_tx.h>

    static void callback(void *param)
    {
        complete(param);
    }

    #define NDISKS 2

    static void run_xor_copy_xor(struct page **xor_srcs,
                                 struct page *xor_dest,
                                 size_t xor_len,
                                 struct page *copy_src,
                                 struct page *copy_dest,
                                 size_t copy_len)
    {
        struct dma_async_tx_descriptor *tx;
        struct async_submit_ctl submit;
        addr_conv_t addr_conv[NDISKS];
        struct completion cmp;

        init_async_submit(&submit, ASYNC_TX_XOR_DROP_DST, NULL, NULL, NULL,
                          addr_conv);
        tx = async_xor(xor_dest, xor_srcs, 0, NDISKS, xor_len, &submit);

        submit.depend_tx = tx;
        tx = async_memcpy(copy_dest, copy_src, 0, 0, copy_len, &submit);

        init_completion(&cmp);
        init_async_submit(&submit, ASYNC_TX_XOR_DROP_DST | ASYNC_TX_ACK, tx,
                          callback, &cmp, addr_conv);
        tx = async_xor(xor_dest, xor_srcs, 0, NDISKS, xor_len, &submit);

        async_tx_issue_pending_all();

        wait_for_completion(&cmp);
    }

See include/linux/async_tx.h for more information on the flags. See the
ops_run_* and ops_complete_* routines in drivers/md/raid5.c for more
implementation examples.

4. Driver Development Notes
===========================

4.1 Conformance points
----------------------

There are a few conformance points required in dmaengine drivers to
accommodate assumptions made by applications using the async_tx API:

1. Completion callbacks are expected to happen in tasklet context
2. dma_async_tx_descriptor fields are never manipulated in IRQ context
3. Use async_tx_run_dependencies() in the descriptor clean up path to
   handle submission of dependent operations

4.2 "My application needs exclusive control of hardware channels"
------------------------------------------------------------------

Primarily this requirement arises from cases where a DMA engine driver
is being used to support device-to-memory operations. A channel that is
performing these operations cannot, for many platform specific reasons,
be shared. For these cases the dma_request_channel() interface is
provided.

The interface is::

    struct dma_chan *dma_request_channel(dma_cap_mask_t mask,
                                         dma_filter_fn filter_fn,
                                         void *filter_param);

Where dma_filter_fn is defined as::

    typedef bool (*dma_filter_fn)(struct dma_chan *chan, void *filter_param);

When the optional 'filter_fn' parameter is set to NULL,
dma_request_channel() simply returns the first channel that satisfies the
capability mask. Otherwise, when the mask parameter is insufficient for
specifying the necessary channel, the filter_fn routine can be used to
disposition the available channels in the system. The filter_fn routine
is called once for each free channel in the system. Upon seeing a
suitable channel filter_fn returns true, which flags that channel to be
the return value from dma_request_channel(). A channel allocated via
this interface is exclusive to the caller, until dma_release_channel()
is called.
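
For example, a consumer that wants a private memcpy channel from one
specific DMA device can combine a capability mask with a filter routine.
The sketch below makes some assumptions: the filter policy (matching
chan->device->dev against the caller's device) and the helper name are
hypothetical, while the dma_cap_* and dma_request_channel() calls are the
interface described above::

    #include <linux/dmaengine.h>

    /* accept only channels that belong to the device passed in as
     * filter_param
     */
    static bool my_filter(struct dma_chan *chan, void *filter_param)
    {
        return chan->device->dev == filter_param;
    }

    static struct dma_chan *grab_private_memcpy_chan(struct device *dev)
    {
        dma_cap_mask_t mask;

        dma_cap_zero(mask);
        dma_cap_set(DMA_MEMCPY, mask);

        /* NULL is returned if no free channel satisfies the mask and
         * filter
         */
        return dma_request_channel(mask, my_filter, dev);
    }

The channel returned, if any, remains exclusive to the caller until it is
handed back with dma_release_channel().
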

The DMA_PRIVATE capability flag is used to tag dma devices that should
not be used by the general-purpose allocator. It can be set at
initialization time if it is known that a channel will always be
private. Alternatively, it is set when dma_request_channel() finds an
unused "public" channel.

A couple caveats to note when implementing a driver and consumer:

1. Once a channel has been privately allocated it will no longer be
   considered by the general-purpose allocator even after a call to
   dma_release_channel().
2. Since capabilities are specified at the device level a dma_device
   with multiple channels will either have all channels public, or all
   channels private.

5. Source
=========

include/linux/dmaengine.h:
    core header file for DMA drivers and api users
drivers/dma/dmaengine.c:
    offload engine channel management routines
drivers/dma/:
    location for offload engine drivers
include/linux/async_tx.h:
    core header file for the async_tx api
crypto/async_tx/async_tx.c:
    async_tx interface to dmaengine and common code
crypto/async_tx/async_memcpy.c:
    copy offload
crypto/async_tx/async_xor.c:
    xor and xor zero sum offload