================================
Devres - Managed Device Resource
================================

Tejun Heo  <teheo@suse.de>

First draft  10 January 2007

.. contents

1. Intro                        : Huh? Devres?
2. Devres                       : Devres in a nutshell
3. Devres Group                 : Group devres'es and release them together
4. Details                      : Life time rules, calling context, ...
5. Overhead                     : How much do we have to pay for this?
6. List of managed interfaces   : Currently implemented managed interfaces


1. Intro
--------

devres came up while trying to convert libata to use iomap.  Each
iomapped address should be kept and unmapped on driver detach.  For
example, a plain SFF ATA controller (that is, good old PCI IDE) in
native mode makes use of 5 PCI BARs and all of them should be
maintained.

As with many other device drivers, libata low level drivers have
sufficient bugs in the ->remove and ->probe failure paths.  Well, yes,
that's probably because libata low level driver developers are a lazy
bunch, but aren't all low level driver developers?  After spending a
day fiddling with braindamaged hardware with no documentation, or
braindamaged documentation, if it's finally working, well, it's working.

For one reason or another, low level drivers don't receive as much
attention or testing as core code, and bugs on driver detach or
initialization failure don't happen often enough to be noticeable.
The init failure path is worse because it's much less travelled while
it still needs to handle multiple entry points.

So, many low level drivers end up leaking resources on driver detach
and having a half broken failure path implementation in ->probe() which
would leak resources or even cause an oops when failure occurs.  iomap
adds more to this mix.  So do msi and msix.


2. Devres
---------

devres is basically a linked list of arbitrarily sized memory areas
associated with a struct device.  Each devres entry is associated with
a release function.  A devres can be released in several ways.  No
matter what, all devres entries are released on driver detach.  On
release, the associated release function is invoked and then the
devres entry is freed.
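
For reference, the raw devres primitives that the managed interfaces
below are built on look roughly like the following; see
drivers/base/devres.c for the authoritative versions::

  typedef void (*dr_release_t)(struct device *dev, void *res);
  typedef int (*dr_match_t)(struct device *dev, void *res, void *match_data);

  /* allocate a devres entry whose data area is @size bytes */
  void *devres_alloc(dr_release_t release, size_t size, gfp_t gfp);

  /* free an entry that has not been added to a device yet */
  void devres_free(void *res);

  /* attach the entry to @dev; it will be released on driver detach */
  void devres_add(struct device *dev, void *res);

  /* find a matching entry, remove and free it WITHOUT calling release */
  int devres_destroy(struct device *dev, dr_release_t release,
                     dr_match_t match, void *match_data);

  /* find a matching entry, call its release function, then free it */
  int devres_release(struct device *dev, dr_release_t release,
                     dr_match_t match, void *match_data);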

Managed interfaces are created for resources commonly used by device
drivers using devres.  For example, coherent DMA memory is acquired
using dma_alloc_coherent().  The managed version is called
dmam_alloc_coherent().  It is identical to dma_alloc_coherent() except
that the DMA memory allocated using it is managed and will be
automatically released on driver detach.  Implementation looks like
the following::

  struct dma_devres {
        size_t          size;
        void            *vaddr;
        dma_addr_t      dma_handle;
  };

  static void dmam_coherent_release(struct device *dev, void *res)
  {
        struct dma_devres *this = res;

        dma_free_coherent(dev, this->size, this->vaddr, this->dma_handle);
  }

  dmam_alloc_coherent(dev, size, dma_handle, gfp)
  {
        struct dma_devres *dr;
        void *vaddr;

        dr = devres_alloc(dmam_coherent_release, sizeof(*dr), gfp);
        ...

        /* alloc DMA memory as usual */
        vaddr = dma_alloc_coherent(...);
        ...

        /* record size, vaddr, dma_handle in dr */
        dr->vaddr = vaddr;
        ...

        devres_add(dev, dr);

        return vaddr;
  }
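
The "..." in the sketch above hides the error handling.  The important
detail is that a devres entry which has not been added yet is simply
thrown away with devres_free() when the underlying allocation fails.
A minimal sketch of that branch::

  vaddr = dma_alloc_coherent(dev, size, dma_handle, gfp);
  if (!vaddr) {
        /* nothing was attached to @dev yet, just free the entry */
        devres_free(dr);
        return NULL;
  }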

If a driver uses dmam_alloc_coherent(), the area is guaranteed to be
freed whether initialization fails half-way or the device gets
detached.  If most resources are acquired using managed interfaces, a
driver can have much simpler init and exit code.  The init path
basically looks like the following::

  my_init_one()
  {
        struct mydev *d;

        d = devm_kzalloc(dev, sizeof(*d), GFP_KERNEL);
        if (!d)
                return -ENOMEM;

        d->ring = dmam_alloc_coherent(...);
        if (!d->ring)
                return -ENOMEM;

        if (check something)
                return -EINVAL;
        ...

        return register_to_upper_layer(d);
  }

And the exit path::

  my_remove_one()
  {
        unregister_from_upper_layer(d);
        shutdown_my_hardware();
  }

As shown above, low level drivers can be simplified a lot by using
devres.  Complexity is shifted from less maintained low level drivers
to the better maintained higher layer.  Also, as the init failure path
is shared with the exit path, both can get more testing.

Note though that when converting current calls or assignments to
managed devm_* versions it is up to you to check if internal
operations, like allocating memory, have failed.  Managed resources
pertain to the freeing of these resources *only* - all other checks
needed are still on you.  In some cases this may mean introducing
checks that were not necessary before moving to the managed devm_*
calls.
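
For instance, a string that used to come from a static buffer and could
not fail may now come from devm_kasprintf(), which can return NULL and
therefore needs a check the old code never had.  A hedged sketch (the
d->label field is made up for illustration)::

  d->label = devm_kasprintf(dev, GFP_KERNEL, "%s-core", dev_name(dev));
  if (!d->label)
        return -ENOMEM;         /* freeing is managed, checking is not */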


3. Devres group
---------------

Devres entries can be grouped using devres groups.  When a group is
released, all contained normal devres entries and properly nested
groups are released.  One usage is to roll back a series of acquired
resources on failure.  For example::

  if (!devres_open_group(dev, NULL, GFP_KERNEL))
        return -ENOMEM;

  acquire A;
  if (failed)
        goto err;

  acquire B;
  if (failed)
        goto err;
  ...

  devres_remove_group(dev, NULL);
  return 0;

 err:
  devres_release_group(dev, NULL);
  return err_code;

As resource acquisition failure usually means probe failure, constructs
like the above are usually useful in midlayer drivers (e.g. libata core
layer) where an interface function shouldn't have side effects on
failure.  For LLDs, just returning an error code suffices in most cases.
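
The example above relies on the difference between removing and
releasing a group.  Roughly, the group interface looks like this (see
drivers/base/devres.c for the authoritative prototypes)::

  /* open a new group; @id may be NULL to get an automatic id */
  void *devres_open_group(struct device *dev, void *id, gfp_t gfp);

  /* close the group: new devres entries no longer go into it */
  void devres_close_group(struct device *dev, void *id);

  /* forget the grouping but keep the contained entries (success path) */
  void devres_remove_group(struct device *dev, void *id);

  /* release all entries contained in the group right now (error path) */
  int devres_release_group(struct device *dev, void *id);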

Each group is identified by `void *id`.  It can either be explicitly
specified by the @id argument to devres_open_group() or automatically
created by passing NULL as @id as in the above example.  In both
cases, devres_open_group() returns the group's id.  The returned id
can be passed to other devres functions to select the target group.
If NULL is given to those functions, the latest open group is
selected.

For example, you can do something like the following::

  int my_midlayer_create_something()
  {
        if (!devres_open_group(dev, my_midlayer_create_something, GFP_KERNEL))
                return -ENOMEM;

        ...

        devres_close_group(dev, my_midlayer_create_something);
        return 0;
  }

  void my_midlayer_destroy_something()
  {
        devres_release_group(dev, my_midlayer_create_something);
  }


4. Details
----------

Lifetime of a devres entry begins on devres allocation and finishes
when it is released or destroyed (removed and freed) - no reference
counting.

The devres core guarantees atomicity to all basic devres operations and
has support for single-instance devres types (atomic
lookup-and-add-if-not-found).  Other than that, synchronizing
concurrent accesses to allocated devres data is the caller's
responsibility.  This is usually a non-issue because bus ops and
resource allocations already do the job.

For an example of a single-instance devres type, read pcim_iomap_table()
in lib/devres.c.
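
The single-instance (lookup-and-add-if-not-found) pattern is built on
devres_get(): allocate a candidate entry, then let devres_get() either
add it or free it and hand back the entry that already exists.  A
minimal sketch, with a made-up release function::

  dr = devres_alloc(my_release, sizeof(*dr), GFP_KERNEL);
  if (!dr)
        return NULL;

  /* atomically: add @dr, or free it and return the existing entry */
  dr = devres_get(dev, dr, NULL, NULL);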

All devres interface functions can be called without context if the
right gfp mask is given.


5. Overhead
-----------

Each devres bookkeeping info is allocated together with the requested
data area.  With the debug option turned off, bookkeeping info occupies
16 bytes on 32bit machines and 24 bytes on 64bit (three pointers
rounded up to ull alignment).  If a singly linked list is used, it can
be reduced to two pointers (8 bytes on 32bit, 16 bytes on 64bit).

Each devres group occupies 8 pointers.  It can be reduced to 6 if a
singly linked list is used.

Memory space overhead on an ahci controller with two ports is between
300 and 400 bytes on a 32bit machine after naive conversion (we can
certainly invest a bit more effort into the libata core layer).


6. List of managed interfaces
-----------------------------

CLOCK
  devm_clk_get()
  devm_clk_get_optional()
  devm_clk_put()
  devm_clk_bulk_get()
  devm_clk_bulk_get_all()
  devm_clk_bulk_get_optional()
  devm_get_clk_from_child()
  devm_clk_hw_register()
  devm_of_clk_add_hw_provider()
  devm_clk_hw_register_clkdev()

DMA
  dmaenginem_async_device_register()
  dmam_alloc_coherent()
  dmam_alloc_attrs()
  dmam_free_coherent()
  dmam_pool_create()
  dmam_pool_destroy()

DRM
  devm_drm_dev_alloc()

GPIO
  devm_gpiod_get()
  devm_gpiod_get_array()
  devm_gpiod_get_array_optional()
  devm_gpiod_get_index()
  devm_gpiod_get_index_optional()
  devm_gpiod_get_optional()
  devm_gpiod_put()
  devm_gpiod_unhinge()
  devm_gpiochip_add_data()
  devm_gpio_request()
  devm_gpio_request_one()

I2C
  devm_i2c_add_adapter()
  devm_i2c_new_dummy_device()

IIO
  devm_iio_device_alloc()
  devm_iio_device_register()
  devm_iio_dmaengine_buffer_setup()
  devm_iio_kfifo_buffer_setup()
  devm_iio_kfifo_buffer_setup_ext()
  devm_iio_map_array_register()
  devm_iio_triggered_buffer_setup()
  devm_iio_triggered_buffer_setup_ext()
  devm_iio_trigger_alloc()
  devm_iio_trigger_register()
  devm_iio_channel_get()
  devm_iio_channel_get_all()
  devm_iio_hw_consumer_alloc()
  devm_fwnode_iio_channel_get_by_name()

INPUT
  devm_input_allocate_device()

IO region
  devm_release_mem_region()
  devm_release_region()
  devm_release_resource()
  devm_request_mem_region()
  devm_request_free_mem_region()
  devm_request_region()
  devm_request_resource()

IOMAP
  devm_ioport_map()
  devm_ioport_unmap()
  devm_ioremap()
  devm_ioremap_uc()
  devm_ioremap_wc()
  devm_ioremap_resource() : checks resource, requests memory region, ioremaps
  devm_ioremap_resource_wc()
  devm_platform_ioremap_resource() : calls devm_ioremap_resource() for platform device
  devm_platform_ioremap_resource_byname()
  devm_platform_get_and_ioremap_resource()
  devm_iounmap()

  Note: For the PCI devices the specific pcim_*() functions may be used, see below.

IRQ
  devm_free_irq()
  devm_request_any_context_irq()
  devm_request_irq()
  devm_request_threaded_irq()
  devm_irq_alloc_descs()
  devm_irq_alloc_desc()
  devm_irq_alloc_desc_at()
  devm_irq_alloc_desc_from()
  devm_irq_alloc_descs_from()
  devm_irq_alloc_generic_chip()
  devm_irq_setup_generic_chip()
  devm_irq_domain_create_sim()

LED
  devm_led_classdev_register()
  devm_led_classdev_register_ext()
  devm_led_classdev_unregister()
  devm_led_trigger_register()
  devm_of_led_get()

MDIO
  devm_mdiobus_alloc()
  devm_mdiobus_alloc_size()
  devm_mdiobus_register()
  devm_of_mdiobus_register()

MEM
  devm_free_pages()
  devm_get_free_pages()
  devm_kasprintf()
  devm_kcalloc()
  devm_kfree()
  devm_kmalloc()
  devm_kmalloc_array()
  devm_kmemdup()
  devm_krealloc()
  devm_krealloc_array()
  devm_kstrdup()
  devm_kstrdup_const()
  devm_kvasprintf()
  devm_kzalloc()

MFD
  devm_mfd_add_devices()

MUX
  devm_mux_chip_alloc()
  devm_mux_chip_register()
  devm_mux_control_get()
  devm_mux_state_get()

NET
  devm_alloc_etherdev()
  devm_alloc_etherdev_mqs()
  devm_register_netdev()

PER-CPU MEM
  devm_alloc_percpu()
  devm_free_percpu()

PCI
  devm_pci_alloc_host_bridge()  : managed PCI host bridge allocation
  devm_pci_remap_cfgspace()     : ioremap PCI configuration space
  devm_pci_remap_cfg_resource() : ioremap PCI configuration space resource

  pcim_enable_device()          : after success, all PCI ops become managed
  pcim_iomap()                  : do iomap() on a single BAR
  pcim_iomap_regions()          : do request_region() and iomap() on multiple BARs
  pcim_iomap_regions_request_all() : do request_region() on all and iomap() on multiple BARs
  pcim_iomap_table()            : array of mapped addresses indexed by BAR
  pcim_iounmap()                : do iounmap() on a single BAR
  pcim_iounmap_regions()        : do iounmap() and release_region() on multiple BARs
  pcim_pin_device()             : keep PCI device enabled after release
  pcim_set_mwi()                : enable Memory-Write-Invalidate PCI transaction

PHY
  devm_usb_get_phy()
  devm_usb_get_phy_by_node()
  devm_usb_get_phy_by_phandle()
  devm_usb_put_phy()

PINCTRL
  devm_pinctrl_get()
  devm_pinctrl_put()
  devm_pinctrl_get_select()
  devm_pinctrl_register()
  devm_pinctrl_register_and_init()
  devm_pinctrl_unregister()

POWER
  devm_reboot_mode_register()
  devm_reboot_mode_unregister()

PWM
  devm_pwmchip_alloc()
  devm_pwmchip_add()
  devm_pwm_get()
  devm_fwnode_pwm_get()

REGULATOR
  devm_regulator_bulk_register_supply_alias()
  devm_regulator_bulk_get()
  devm_regulator_bulk_get_const()
  devm_regulator_bulk_get_enable()
  devm_regulator_bulk_put()
  devm_regulator_get()
  devm_regulator_get_enable()
  devm_regulator_get_enable_read_voltage()
  devm_regulator_get_enable_optional()
  devm_regulator_get_exclusive()
  devm_regulator_get_optional()
  devm_regulator_irq_helper()
  devm_regulator_put()
  devm_regulator_register()
  devm_regulator_register_notifier()
  devm_regulator_register_supply_alias()
  devm_regulator_unregister_notifier()

RESET
  devm_reset_control_get()
  devm_reset_controller_register()

RTC
  devm_rtc_device_register()
  devm_rtc_allocate_device()
  devm_rtc_register_device()
  devm_rtc_nvmem_register()

SERDEV
  devm_serdev_device_open()

SLAVE DMA ENGINE
  devm_acpi_dma_controller_register()
  devm_acpi_dma_controller_free()

SPI
  devm_spi_alloc_master()
  devm_spi_alloc_slave()
  devm_spi_optimize_message()
  devm_spi_register_controller()
  devm_spi_register_host()
  devm_spi_register_target()

WATCHDOG
  devm_watchdog_register_device()