=====================
DRM Memory Management
=====================

Modern Linux systems require large amounts of graphics memory to store
frame buffers, textures, vertices and other graphics-related data. Given
the very dynamic nature of much of that data, managing graphics memory
efficiently is thus crucial for the graphics stack and plays a central
role in the DRM infrastructure.

The DRM core includes two memory managers, namely Translation Table Manager
(TTM) and Graphics Execution Manager (GEM). TTM was the first DRM memory
manager to be developed and tried to be a one-size-fits-all
solution. It provides a single userspace API to accommodate the needs of
all hardware, supporting both Unified Memory Architecture (UMA) devices
and devices with dedicated video RAM (i.e. most discrete video cards).
This resulted in a large, complex piece of code that turned out to be
hard to use for driver development.

GEM started as an Intel-sponsored project in reaction to TTM's
complexity. Its design philosophy is completely different: instead of
providing a solution to every graphics memory-related problem, GEM
identified common code between drivers and created a support library to
share it. GEM has simpler initialization and execution requirements than
TTM, but has no video RAM management capabilities and is thus limited to
UMA devices.

The Translation Table Manager (TTM)
===================================

.. kernel-doc:: drivers/gpu/drm/ttm/ttm_module.c
   :doc: TTM

.. kernel-doc:: include/drm/ttm/ttm_caching.h
   :internal:

TTM device object reference
---------------------------

.. kernel-doc:: include/drm/ttm/ttm_device.h
   :internal:

.. kernel-doc:: drivers/gpu/drm/ttm/ttm_device.c
   :export:

TTM resource placement reference
--------------------------------

.. kernel-doc:: include/drm/ttm/ttm_placement.h
   :internal:

TTM resource object reference
-----------------------------

.. kernel-doc:: include/drm/ttm/ttm_resource.h
   :internal:

.. kernel-doc:: drivers/gpu/drm/ttm/ttm_resource.c
   :export:

TTM TT object reference
-----------------------

.. kernel-doc:: include/drm/ttm/ttm_tt.h
   :internal:

.. kernel-doc:: drivers/gpu/drm/ttm/ttm_tt.c
   :export:

TTM page pool reference
-----------------------

.. kernel-doc:: include/drm/ttm/ttm_pool.h
   :internal:

.. kernel-doc:: drivers/gpu/drm/ttm/ttm_pool.c
   :export:

The Graphics Execution Manager (GEM)
====================================

The GEM design approach has resulted in a memory manager that doesn't
provide full coverage of all (or even all common) use cases in its
userspace or kernel API. GEM exposes a set of standard memory-related
operations to userspace and a set of helper functions to drivers, and
lets drivers implement hardware-specific operations with their own
private API.

The GEM userspace API is described in the `GEM - the Graphics Execution
Manager <http://lwn.net/Articles/283798/>`__ article on LWN.net. While
slightly outdated, the document provides a good overview of the GEM API
principles. Buffer allocation and read and write operations, described
as part of the common GEM API, are currently implemented using
driver-specific ioctls.

GEM is data-agnostic. It manages abstract buffer objects without knowing
what individual buffers contain. APIs that require knowledge of buffer
contents or purpose, such as buffer allocation or synchronization
primitives, are thus outside of the scope of GEM and must be implemented
using driver-specific ioctls.

On a fundamental level, GEM involves several operations:

- Memory allocation and freeing
- Command execution
- Aperture management at command execution time

Buffer object allocation is relatively straightforward and largely
provided by Linux's shmem layer, which provides memory to back each
object.

Device-specific operations, such as command execution, pinning, buffer
read & write, mapping, and domain ownership transfers are left to
driver-specific ioctls.

GEM Initialization
------------------

Drivers that use GEM must set the DRIVER_GEM bit in the struct
:c:type:`struct drm_driver <drm_driver>` driver_features
field. The DRM core will then automatically initialize the GEM core
before calling the load operation. Behind the scenes, this will create a
DRM Memory Manager object which provides an address space pool for
object allocation.

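For illustration, a minimal and purely hypothetical driver declaration
enabling GEM could look like the sketch below; the ``foo`` names are made
up and only the fields relevant here are shown:

.. code-block:: c

   #include <drm/drm_drv.h>

   /* Hypothetical driver: only the fields relevant to GEM are shown. */
   static const struct drm_driver foo_driver = {
           .driver_features = DRIVER_GEM | DRIVER_MODESET,
           .name            = "foo",
           .desc            = "Hypothetical GEM-enabled driver",
   };
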
In a KMS configuration, drivers need to allocate and initialize the
command ring buffer following core GEM initialization if required by the
hardware. UMA devices usually have what is called a "stolen" memory
region, which provides space for the initial framebuffer and large,
contiguous memory regions required by the device. This space is
typically not managed by GEM, and must be initialized separately into
its own DRM MM object.

GEM Objects Creation
--------------------

GEM splits creation of GEM objects and allocation of the memory that
backs them in two distinct operations.

GEM objects are represented by an instance of struct :c:type:`struct
drm_gem_object <drm_gem_object>`. Drivers usually need to
extend GEM objects with private information and thus create a
driver-specific GEM object structure type that embeds an instance of
struct :c:type:`struct drm_gem_object <drm_gem_object>`.

To create a GEM object, a driver allocates memory for an instance of its
specific GEM object type and initializes the embedded struct
:c:type:`struct drm_gem_object <drm_gem_object>` with a call
to drm_gem_object_init(). The function takes a pointer
to the DRM device, a pointer to the GEM object and the buffer object
size in bytes.

GEM uses shmem to allocate anonymous pageable memory.
drm_gem_object_init() will create an shmfs file of the
requested size and store it into the struct :c:type:`struct
drm_gem_object <drm_gem_object>` filp field. The memory is
used as either main storage for the object when the driver
uses system memory directly, or as a backing store otherwise.

Drivers are responsible for the actual physical pages allocation by
calling shmem_read_mapping_page_gfp() for each page.
Note that they can decide to allocate pages when initializing the GEM
object, or to delay allocation until the memory is needed (for instance
when a page fault occurs as a result of a userspace memory access or
when the driver needs to start a DMA transfer involving the memory).

Anonymous pageable memory allocation is not always desired, for instance
when the hardware requires physically contiguous system memory as is
often the case in embedded devices. Drivers can create GEM objects with
no shmfs backing (called private GEM objects) by initializing them with
a call to drm_gem_private_object_init() instead of drm_gem_object_init().
Storage for private GEM objects must be managed by drivers.

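Putting this together, a driver-specific GEM object type and a creation
helper could be sketched as follows. The ``foo`` structure and function
names are hypothetical and only illustrate the embedding and
initialization pattern described above:

.. code-block:: c

   #include <linux/err.h>
   #include <linux/mm.h>
   #include <linux/slab.h>
   #include <drm/drm_gem.h>

   /* Hypothetical driver-specific object embedding the core GEM object. */
   struct foo_gem_object {
           struct drm_gem_object base;
           struct page **pages;    /* backing pages, populated later */
   };

   static struct foo_gem_object *foo_gem_create(struct drm_device *dev,
                                                size_t size)
   {
           struct foo_gem_object *obj;
           int ret;

           obj = kzalloc(sizeof(*obj), GFP_KERNEL);
           if (!obj)
                   return ERR_PTR(-ENOMEM);

           /* Creates the shmfs backing file of the (page-aligned) size. */
           ret = drm_gem_object_init(dev, &obj->base, PAGE_ALIGN(size));
           if (ret) {
                   kfree(obj);
                   return ERR_PTR(ret);
           }

           return obj;
   }

Whether the backing pages are populated at creation time or lazily, as
discussed above, is left to the driver; the ``pages`` array is reused in
the mapping sketch later in this chapter.
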
GEM Objects Lifetime
--------------------

All GEM objects are reference-counted by the GEM core. References can be
acquired and released by calling drm_gem_object_get() and
drm_gem_object_put() respectively.

When the last reference to a GEM object is released the GEM core calls
the :c:type:`struct drm_gem_object_funcs <gem_object_funcs>` free
operation. That operation is mandatory for GEM-enabled drivers and must
free the GEM object and all associated resources.

The free operation, ``void (*free)(struct drm_gem_object *obj)``, is
responsible for freeing all GEM object resources. This includes the
resources created by the GEM core, which need to be released with
drm_gem_object_release().

GEM Objects Naming
------------------

Communication between userspace and the kernel refers to GEM objects
using local handles, global names or, more recently, file descriptors.
All of those are 32-bit integer values; the usual file descriptor
semantics apply to the file descriptors.

GEM handles are local to a DRM file. Applications get a handle to a GEM
object through a driver-specific ioctl, and can use that handle to refer
to the GEM object in other standard or driver-specific ioctls. Closing a
DRM file handle frees all its GEM handles and dereferences the
associated GEM objects.

To create a handle for a GEM object drivers call drm_gem_handle_create().
The function takes a pointer to the DRM file and the GEM object and
returns a locally unique handle. When the handle is no longer needed it
is deleted with a call to drm_gem_handle_delete(). Finally the GEM object
associated with a handle can be retrieved by a call to
drm_gem_object_lookup().

Handles don't take ownership of GEM objects, they only take a reference
to the object that will be dropped when the handle is destroyed. To
avoid leaking GEM objects, drivers must make sure they drop the
reference(s) they own (such as the initial reference taken at object
creation time) as appropriate, without any special consideration for the
handle. For example, in the particular case of combined GEM object and
handle creation in the implementation of the dumb_create operation,
drivers must drop the initial reference to the GEM object before
returning the handle.

GEM names are similar in purpose to handles but are not local to DRM
files. They can be passed between processes to reference a GEM object
globally. Names can't be used directly to refer to objects in the DRM
API, applications must convert handles to names and names to handles
using the DRM_IOCTL_GEM_FLINK and DRM_IOCTL_GEM_OPEN ioctls
respectively. The conversion is handled by the DRM core without any
driver-specific support.

GEM also supports buffer sharing with dma-buf file descriptors through
PRIME. GEM-based drivers must use the provided helper functions to
implement the exporting and importing correctly (see
:ref:`prime_buffer_sharing`). Since sharing file descriptors is
inherently more secure than the easily guessable and global GEM names it
is the preferred buffer sharing mechanism. Sharing buffers through GEM
names is only supported for legacy userspace. Furthermore PRIME also
allows cross-device buffer sharing since it is based on dma-bufs.

GEM Objects Mapping
-------------------

Because mapping operations are fairly heavyweight GEM favours
read/write-like access to buffers, implemented through driver-specific
ioctls, over mapping buffers to userspace. However, when random access
to the buffer is needed (to perform software rendering for instance),
direct access to the object can be more efficient.

The mmap system call can't be used directly to map GEM objects, as they
don't have their own file handle. Two alternative methods currently
co-exist to map GEM objects to userspace. The first method uses a
driver-specific ioctl to perform the mapping operation, calling
do_mmap() under the hood. This is often considered dubious, is
discouraged for new GEM-enabled drivers, and will thus not be described
here.

The second method uses the mmap system call on the DRM file handle,
``void *mmap(void *addr, size_t length, int prot, int flags, int fd,
off_t offset)``. DRM identifies the GEM object to be mapped by a fake
offset passed through the mmap offset argument. Prior to being mapped, a
GEM object must thus be associated with a fake offset. To do so, drivers
must call drm_gem_create_mmap_offset() on the object.

Once allocated, the fake offset value must be passed to the application
in a driver-specific way and can then be used as the mmap offset
argument.

The GEM core provides a helper method drm_gem_mmap() to
handle object mapping. The method can be set directly as the mmap file
operation handler. It will look up the GEM object based on the offset
value and set the VMA operations to the :c:type:`struct drm_driver
<drm_driver>` gem_vm_ops field. Note that drm_gem_mmap() doesn't map
memory to userspace, but relies on the driver-provided fault handler to
map pages individually.

To use drm_gem_mmap(), drivers must fill the struct :c:type:`struct
drm_driver <drm_driver>` gem_vm_ops field with a pointer to VM
operations.

The VM operations structure is a :c:type:`struct vm_operations_struct
<vm_operations_struct>` made up of several fields, the more interesting
ones being:

.. code-block:: c

   struct vm_operations_struct {
           void (*open)(struct vm_area_struct *area);
           void (*close)(struct vm_area_struct *area);
           vm_fault_t (*fault)(struct vm_fault *vmf);
   };

The open and close operations must update the GEM object reference
count. Drivers can use the drm_gem_vm_open() and drm_gem_vm_close()
helper functions directly as open and close handlers.

The fault operation handler is responsible for mapping individual pages
to userspace when a page fault occurs. Depending on the memory
allocation scheme, drivers can allocate pages at fault time, or can
decide to allocate memory for the GEM object at the time the object is
created.

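As a sketch of how these handlers fit together, the hypothetical ``foo``
driver from the creation example above could install its VM operations
as follows, assuming it has populated the ``pages`` array (for instance
with shmem_read_mapping_page_gfp()) before the object is mapped:

.. code-block:: c

   #include <linux/mm.h>
   #include <drm/drm_gem.h>

   static vm_fault_t foo_gem_fault(struct vm_fault *vmf)
   {
           struct vm_area_struct *vma = vmf->vma;
           /* drm_gem_mmap() stores the GEM object in vm_private_data. */
           struct drm_gem_object *base = vma->vm_private_data;
           struct foo_gem_object *obj =
                   container_of(base, struct foo_gem_object, base);
           pgoff_t page_offset = (vmf->address - vma->vm_start) >> PAGE_SHIFT;

           if (page_offset >= base->size >> PAGE_SHIFT)
                   return VM_FAULT_SIGBUS;

           /* Map the single faulting page into the userspace VMA. */
           return vmf_insert_page(vma, vmf->address, obj->pages[page_offset]);
   }

   static const struct vm_operations_struct foo_gem_vm_ops = {
           .open  = drm_gem_vm_open,
           .close = drm_gem_vm_close,
           .fault = foo_gem_fault,
   };

drm_gem_vm_open() and drm_gem_vm_close() take care of the reference
counting mentioned above, so only the fault handler is driver-specific
in this sketch.
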
Drivers that want to map the GEM object upfront instead of handling page
faults can implement their own mmap file operation handler.

For platforms without MMU the GEM core provides a helper method
drm_gem_dma_get_unmapped_area(). The mmap() routines will call this to
get a proposed address for the mapping.

To use drm_gem_dma_get_unmapped_area(), drivers must fill the struct
:c:type:`struct file_operations <file_operations>` get_unmapped_area
field with a pointer to drm_gem_dma_get_unmapped_area().

More detailed information about get_unmapped_area can be found in
Documentation/admin-guide/mm/nommu-mmap.rst

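For illustration, the mapping-related file operations of the hypothetical
``foo`` driver could be wired up as in the sketch below; real drivers
typically also provide poll, read and llseek handlers, or use the
DEFINE_DRM_GEM_DMA_FOPS() convenience macro instead of an open-coded
table:

.. code-block:: c

   #include <linux/fs.h>
   #include <linux/module.h>
   #include <drm/drm_file.h>
   #include <drm/drm_gem.h>
   #include <drm/drm_gem_dma_helper.h>
   #include <drm/drm_ioctl.h>

   static const struct file_operations foo_fops = {
           .owner          = THIS_MODULE,
           .open           = drm_open,
           .release        = drm_release,
           .unlocked_ioctl = drm_ioctl,
           /* Resolves the fake offset to a GEM object and installs the
            * driver's VM operations. */
           .mmap           = drm_gem_mmap,
   #ifndef CONFIG_MMU
           /* On MMU-less platforms, propose an address for the mapping. */
           .get_unmapped_area = drm_gem_dma_get_unmapped_area,
   #endif
   };
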
Memory Coherency
----------------

When mapped to the device or used in a command buffer, backing pages for
an object are flushed to memory and marked write combined so as to be
coherent with the GPU. Likewise, if the CPU accesses an object after the
GPU has finished rendering to the object, then the object must be made
coherent with the CPU's view of memory, usually involving GPU cache
flushing of various kinds. This core CPU<->GPU coherency management is
provided by a device-specific ioctl, which evaluates an object's current
domain and performs any necessary flushing or synchronization to put the
object into the desired coherency domain (note that the object may be
busy, i.e. an active render target; in that case, setting the domain
blocks the client and waits for rendering to complete before performing
any necessary flushing operations).

Command Execution
-----------------

Perhaps the most important GEM function for GPU devices is providing a
command execution interface to clients. Client programs construct
command buffers containing references to previously allocated memory
objects, and then submit them to GEM. At that point, GEM takes care to
bind all the objects into the GTT, execute the buffer, and provide
necessary synchronization between clients accessing the same buffers.
This often involves evicting some objects from the GTT and re-binding
others (a fairly expensive operation), and providing relocation support
which hides fixed GTT offsets from clients. Clients must take care not
to submit command buffers that reference more objects than can fit in
the GTT; otherwise, GEM will reject them and no rendering will occur.
Similarly, if several objects in the buffer require fence registers to
be allocated for correct rendering (e.g. 2D blits on pre-965 chips),
care must be taken not to require more fence registers than are
available to the client. Such resource management should be abstracted
from the client in libdrm.

GEM Function Reference
----------------------

.. kernel-doc:: include/drm/drm_gem.h
   :internal:

.. kernel-doc:: drivers/gpu/drm/drm_gem.c
   :export:

GEM DMA Helper Functions Reference
----------------------------------

.. kernel-doc:: drivers/gpu/drm/drm_gem_dma_helper.c
   :doc: dma helpers

.. kernel-doc:: include/drm/drm_gem_dma_helper.h
   :internal:

.. kernel-doc:: drivers/gpu/drm/drm_gem_dma_helper.c
   :export:

GEM SHMEM Helper Function Reference
-----------------------------------

.. kernel-doc:: drivers/gpu/drm/drm_gem_shmem_helper.c
   :doc: overview

.. kernel-doc:: include/drm/drm_gem_shmem_helper.h
   :internal:

.. kernel-doc:: drivers/gpu/drm/drm_gem_shmem_helper.c
   :export:

GEM VRAM Helper Functions Reference
-----------------------------------

.. kernel-doc:: drivers/gpu/drm/drm_gem_vram_helper.c
   :doc: overview

.. kernel-doc:: include/drm/drm_gem_vram_helper.h
   :internal:

.. kernel-doc:: drivers/gpu/drm/drm_gem_vram_helper.c
   :export:

GEM TTM Helper Functions Reference
----------------------------------

.. kernel-doc:: drivers/gpu/drm/drm_gem_ttm_helper.c
   :doc: overview

.. kernel-doc:: drivers/gpu/drm/drm_gem_ttm_helper.c
   :export:

VMA Offset Manager
==================

.. kernel-doc:: drivers/gpu/drm/drm_vma_manager.c
   :doc: vma offset manager

.. kernel-doc:: include/drm/drm_vma_manager.h
   :internal:

.. kernel-doc:: drivers/gpu/drm/drm_vma_manager.c
   :export:

.. _prime_buffer_sharing:

PRIME Buffer Sharing
====================

PRIME is the cross device buffer sharing framework in drm, originally
created for the OPTIMUS range of multi-gpu platforms. To userspace PRIME
buffers are dma-buf based file descriptors.

Overview and Lifetime Rules
---------------------------

.. kernel-doc:: drivers/gpu/drm/drm_prime.c
   :doc: overview and lifetime rules

PRIME Helper Functions
----------------------

.. kernel-doc:: drivers/gpu/drm/drm_prime.c
   :doc: PRIME Helpers

PRIME Function References
-------------------------

.. kernel-doc:: include/drm/drm_prime.h
   :internal:

.. kernel-doc:: drivers/gpu/drm/drm_prime.c
   :export:

DRM MM Range Allocator
======================

Overview
--------

.. kernel-doc:: drivers/gpu/drm/drm_mm.c
   :doc: Overview

LRU Scan/Eviction Support
-------------------------

.. kernel-doc:: drivers/gpu/drm/drm_mm.c
   :doc: lru scan roster

DRM MM Range Allocator Function References
------------------------------------------

.. kernel-doc:: include/drm/drm_mm.h
   :internal:

.. kernel-doc:: drivers/gpu/drm/drm_mm.c
   :export:

.. _drm_gpuvm:

DRM GPUVM
=========

Overview
--------

.. kernel-doc:: drivers/gpu/drm/drm_gpuvm.c
   :doc: Overview

Split and Merge
---------------

.. kernel-doc:: drivers/gpu/drm/drm_gpuvm.c
   :doc: Split and Merge

.. _drm_gpuvm_locking:

Locking
-------

.. kernel-doc:: drivers/gpu/drm/drm_gpuvm.c
   :doc: Locking

Examples
--------

.. kernel-doc:: drivers/gpu/drm/drm_gpuvm.c
   :doc: Examples

DRM GPUVM Function References
-----------------------------

.. kernel-doc:: include/drm/drm_gpuvm.h
   :internal:

.. kernel-doc:: drivers/gpu/drm/drm_gpuvm.c
   :export:

DRM Buddy Allocator
===================

DRM Buddy Function References
-----------------------------

.. kernel-doc:: drivers/gpu/drm/drm_buddy.c
   :export:

DRM Cache Handling and Fast WC memcpy()
=======================================

.. kernel-doc:: drivers/gpu/drm/drm_cache.c
   :export:

.. _drm_sync_objects:

DRM Sync Objects
================

.. kernel-doc:: drivers/gpu/drm/drm_syncobj.c
   :doc: Overview

.. kernel-doc:: include/drm/drm_syncobj.h
   :internal:

.. kernel-doc:: drivers/gpu/drm/drm_syncobj.c
   :export:

DRM Execution context
=====================

.. kernel-doc:: drivers/gpu/drm/drm_exec.c
   :doc: Overview

.. kernel-doc:: include/drm/drm_exec.h
   :internal:

.. kernel-doc:: drivers/gpu/drm/drm_exec.c
   :export:

GPU Scheduler
=============

Overview
--------

.. kernel-doc:: drivers/gpu/drm/scheduler/sched_main.c
   :doc: Overview

Flow Control
------------

.. kernel-doc:: drivers/gpu/drm/scheduler/sched_main.c
   :doc: Flow Control

Scheduler Function References
-----------------------------

.. kernel-doc:: include/drm/gpu_scheduler.h
   :internal:

.. kernel-doc:: drivers/gpu/drm/scheduler/sched_main.c
   :export:

.. kernel-doc:: drivers/gpu/drm/scheduler/sched_entity.c
   :export: