=====================
DRM Memory Management
=====================

Modern Linux systems require large amounts of graphics memory to store
frame buffers, textures, vertices and other graphics-related data. Given
the very dynamic nature of much of this data, managing graphics memory
efficiently is crucial for the graphics stack and plays a central
role in the DRM infrastructure.

The DRM core includes two memory managers, namely Translation Table Manager
(TTM) and Graphics Execution Manager (GEM). TTM was the first DRM memory
manager to be developed and tried to be a one-size-fits-all
solution. It provides a single userspace API to accommodate the needs of
all hardware, supporting both Unified Memory Architecture (UMA) devices
and devices with dedicated video RAM (i.e. most discrete video cards).
This resulted in a large, complex piece of code that turned out to be
hard to use for driver development.

GEM started as an Intel-sponsored project in reaction to TTM's
complexity. Its design philosophy is completely different: instead of
providing a solution to every graphics memory-related problem, GEM
identified common code between drivers and created a support library to
share it. GEM has simpler initialization and execution requirements than
TTM, but has no video RAM management capabilities and is thus limited to
UMA devices.

The Translation Table Manager (TTM)
====================================

.. kernel-doc:: drivers/gpu/drm/ttm/ttm_module.c
   :doc: TTM

.. kernel-doc:: include/drm/ttm/ttm_caching.h
   :internal:

TTM device object reference
---------------------------

.. kernel-doc:: include/drm/ttm/ttm_device.h
   :internal:

.. kernel-doc:: drivers/gpu/drm/ttm/ttm_device.c
   :export:

TTM resource placement reference
--------------------------------

.. kernel-doc:: include/drm/ttm/ttm_placement.h
   :internal:

TTM resource object reference
-----------------------------

.. kernel-doc:: include/drm/ttm/ttm_resource.h
   :internal:

.. kernel-doc:: drivers/gpu/drm/ttm/ttm_resource.c
   :export:

TTM TT object reference
-----------------------

.. kernel-doc:: include/drm/ttm/ttm_tt.h
   :internal:

.. kernel-doc:: drivers/gpu/drm/ttm/ttm_tt.c
   :export:

TTM page pool reference
-----------------------

.. kernel-doc:: include/drm/ttm/ttm_pool.h
   :internal:

.. kernel-doc:: drivers/gpu/drm/ttm/ttm_pool.c
   :export:

The Graphics Execution Manager (GEM)
=====================================

The GEM design approach has resulted in a memory manager that doesn't
provide full coverage of all (or even all common) use cases in its
userspace or kernel API. GEM exposes a set of standard memory-related
operations to userspace and a set of helper functions to drivers, and
lets drivers implement hardware-specific operations with their own
private API.

The GEM userspace API is described in the `GEM - the Graphics Execution
Manager <http://lwn.net/Articles/283798/>`__ article on LWN. While
slightly outdated, the document provides a good overview of the GEM API
principles. Buffer allocation and read and write operations, described
as part of the common GEM API, are currently implemented using
driver-specific ioctls.

GEM is data-agnostic. It manages abstract buffer objects without knowing
what individual buffers contain. APIs that require knowledge of buffer
contents or purpose, such as buffer allocation or synchronization
primitives, are thus outside of the scope of GEM and must be implemented
using driver-specific ioctls.

On a fundamental level, GEM involves several operations:

- Memory allocation and freeing
- Command execution
- Aperture management at command execution time

Buffer object allocation is relatively straightforward and largely
provided by Linux's shmem layer, which provides memory to back each
object.

Device-specific operations, such as command execution, pinning, buffer
read & write, mapping, and domain ownership transfers are left to
driver-specific ioctls.

GEM Initialization
------------------

Drivers that use GEM must set the DRIVER_GEM bit in the struct
:c:type:`struct drm_driver <drm_driver>` driver_features
field. The DRM core will then automatically initialize the GEM core
before calling the load operation. Behind the scenes, this will create a
DRM Memory Manager object which provides an address space pool for
object allocation.
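
For illustration, a minimal sketch of such a driver structure is shown
below; the ``foo`` names are hypothetical and not taken from any real
driver:

.. code-block:: c

   static const struct drm_driver foo_driver = {
           .driver_features = DRIVER_GEM | DRIVER_MODESET,
           /* ... file operations, ioctls and other fields omitted ... */
           .name = "foo",
           .desc = "Hypothetical GEM-enabled driver",
   };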

In a KMS configuration, drivers need to allocate and initialize a
command ring buffer following core GEM initialization if required by the
hardware. UMA devices usually have what is called a "stolen" memory
region, which provides space for the initial framebuffer and large,
contiguous memory regions required by the device. This space is
typically not managed by GEM, and must be initialized separately into
its own DRM MM object.

GEM Objects Creation
--------------------

GEM splits creation of GEM objects and allocation of the memory that
backs them into two distinct operations.

GEM objects are represented by an instance of struct :c:type:`struct
drm_gem_object <drm_gem_object>`. Drivers usually need to
extend GEM objects with private information and thus create a
driver-specific GEM object structure type that embeds an instance of
struct :c:type:`struct drm_gem_object <drm_gem_object>`.

To create a GEM object, a driver allocates memory for an instance of its
specific GEM object type and initializes the embedded struct
:c:type:`struct drm_gem_object <drm_gem_object>` with a call
to drm_gem_object_init(). The function takes a pointer
to the DRM device, a pointer to the GEM object and the buffer object
size in bytes.
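
As an illustrative sketch only, a driver-specific GEM object type and
creation helper could look like this (``foo_gem_object`` and
``foo_gem_create()`` are hypothetical names):

.. code-block:: c

   struct foo_gem_object {
           struct drm_gem_object base;
           /* driver-private data follows */
   };

   static struct foo_gem_object *foo_gem_create(struct drm_device *dev,
                                                size_t size)
   {
           struct foo_gem_object *obj;
           int ret;

           /* GEM object sizes must be page-aligned. */
           size = PAGE_ALIGN(size);

           obj = kzalloc(sizeof(*obj), GFP_KERNEL);
           if (!obj)
                   return ERR_PTR(-ENOMEM);

           /* Creates the shmfs backing file of the requested size. */
           ret = drm_gem_object_init(dev, &obj->base, size);
           if (ret) {
                   kfree(obj);
                   return ERR_PTR(ret);
           }

           return obj;
   }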

GEM uses shmem to allocate anonymous pageable memory.
drm_gem_object_init() will create an shmfs file of the
requested size and store it into the struct :c:type:`struct
drm_gem_object <drm_gem_object>` filp field. The memory is
used as either main storage for the object when the graphics hardware
uses system memory directly or as a backing store otherwise.

Drivers are responsible for the actual allocation of the physical pages,
by calling shmem_read_mapping_page_gfp() for each page.
Note that they can decide to allocate pages when initializing the GEM
object, or to delay allocation until the memory is needed (for instance
when a page fault occurs as a result of a userspace memory access or
when the driver needs to start a DMA transfer involving the memory).

Anonymous pageable memory allocation is not always desired, for instance
when the hardware requires physically contiguous system memory as is
often the case in embedded devices. Drivers can create GEM objects with
no shmfs backing (called private GEM objects) by initializing them with a call
to drm_gem_private_object_init() instead of drm_gem_object_init(). Storage for
private GEM objects must be managed by drivers.

GEM Objects Lifetime
--------------------

All GEM objects are reference-counted by the GEM core. References can be
acquired and released by calling drm_gem_object_get() and drm_gem_object_put()
respectively.

When the last reference to a GEM object is released the GEM core calls
the :c:type:`struct drm_gem_object_funcs <gem_object_funcs>` free
operation (``void (*free)(struct drm_gem_object *obj)``). That operation
is mandatory for GEM-enabled drivers and must free the GEM object and all
associated resources.

Drivers are responsible for freeing all GEM object resources. This includes the
resources created by the GEM core, which need to be released with
drm_gem_object_release().
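
Building on the hypothetical ``foo`` names used in the creation sketch
above, a free operation could look like the following sketch:

.. code-block:: c

   static void foo_gem_free(struct drm_gem_object *gem_obj)
   {
           struct foo_gem_object *obj =
                   container_of(gem_obj, struct foo_gem_object, base);

           /* Releases the resources created by drm_gem_object_init(). */
           drm_gem_object_release(gem_obj);
           kfree(obj);
   }

   /* Assigned to the object's funcs pointer at creation time. */
   static const struct drm_gem_object_funcs foo_gem_funcs = {
           .free = foo_gem_free,
   };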

GEM Objects Naming
------------------

Communication between userspace and the kernel refers to GEM objects
using local handles, global names or, more recently, file descriptors.
All of those are 32-bit integer values; the usual Linux kernel limits
apply to the file descriptors.

GEM handles are local to a DRM file. Applications get a handle to a GEM
object through a driver-specific ioctl, and can use that handle to refer
to the GEM object in other standard or driver-specific ioctls. Closing a
DRM file handle frees all its GEM handles and dereferences the
associated GEM objects.

To create a handle for a GEM object drivers call drm_gem_handle_create(). The
function takes a pointer to the DRM file and the GEM object and returns a
locally unique handle. When the handle is no longer needed drivers delete it
with a call to drm_gem_handle_delete(). Finally the GEM object associated with a
handle can be retrieved by a call to drm_gem_object_lookup().

Handles don't take ownership of GEM objects; they only take a reference
to the object that will be dropped when the handle is destroyed. To
avoid leaking GEM objects, drivers must make sure they drop the
reference(s) they own (such as the initial reference taken at object
creation time) as appropriate, without any special consideration for the
handle. For example, in the particular case of combined GEM object and
handle creation in the implementation of the dumb_create operation,
drivers must drop the initial reference to the GEM object before
returning the handle.
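
A sketch of such a combined creation path, reusing the hypothetical
``foo_gem_create()`` helper from the creation example above, could look
like this:

.. code-block:: c

   static int foo_gem_create_with_handle(struct drm_file *file,
                                         struct drm_device *dev,
                                         size_t size, u32 *handle)
   {
           struct foo_gem_object *obj;
           int ret;

           obj = foo_gem_create(dev, size);
           if (IS_ERR(obj))
                   return PTR_ERR(obj);

           ret = drm_gem_handle_create(file, &obj->base, handle);
           /*
            * Drop the initial creation reference; on success the handle
            * now holds its own reference to the object.
            */
           drm_gem_object_put(&obj->base);

           return ret;
   }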

GEM names are similar in purpose to handles but are not local to DRM
files. They can be passed between processes to reference a GEM object
globally. Names can't be used directly to refer to objects in the DRM
API; applications must convert handles to names and names to handles
using the DRM_IOCTL_GEM_FLINK and DRM_IOCTL_GEM_OPEN ioctls
respectively. The conversion is handled by the DRM core without any
driver-specific support.

GEM also supports buffer sharing with dma-buf file descriptors through
PRIME. GEM-based drivers must use the provided helper functions to
implement the exporting and importing correctly; see
:ref:`prime_buffer_sharing`. Since sharing file descriptors is inherently
more secure than the easily guessable and global GEM names it is the
preferred buffer sharing mechanism. Sharing buffers through GEM names is
only supported for legacy userspace. Furthermore PRIME also allows
cross-device buffer sharing since it is based on dma-bufs.

GEM Objects Mapping
-------------------

Because mapping operations are fairly heavyweight, GEM favours
read/write-like access to buffers, implemented through driver-specific
ioctls, over mapping buffers to userspace. However, when random access
to the buffer is needed (to perform software rendering for instance),
direct access to the object can be more efficient.

The mmap system call can't be used directly to map GEM objects, as they
don't have their own file handle. Two alternative methods currently
co-exist to map GEM objects to userspace. The first method uses a
driver-specific ioctl to perform the mapping operation, calling
do_mmap() under the hood. This is often considered
dubious, seems to be discouraged for new GEM-enabled drivers, and will
thus not be described here.

The second method uses the mmap system call on the DRM file handle, with
the usual ``void *mmap(void *addr, size_t length, int prot, int flags,
int fd, off_t offset)`` prototype. DRM identifies the GEM object to be
mapped by a fake offset passed through the mmap offset argument. Prior to
being mapped, a GEM object must thus be associated with a fake offset. To
do so, drivers must call drm_gem_create_mmap_offset() on the object.

Once allocated, the fake offset value must be passed to the application
in a driver-specific way and can then be used as the mmap offset
argument.

The GEM core provides a helper method drm_gem_mmap() to
handle object mapping. The method can be set directly as the mmap file
operation handler. It will look up the GEM object based on the offset
value and set the VMA operations to the :c:type:`struct drm_driver
<drm_driver>` gem_vm_ops field. Note that drm_gem_mmap() doesn't map memory to
userspace, but relies on the driver-provided fault handler to map pages
individually.

To use drm_gem_mmap(), drivers must fill the struct :c:type:`struct drm_driver
<drm_driver>` gem_vm_ops field with a pointer to VM operations.

The VM operations are described by a :c:type:`struct vm_operations_struct
<vm_operations_struct>` made up of several fields, the most interesting
ones being:

.. code-block:: c

   struct vm_operations_struct {
           void (*open)(struct vm_area_struct *area);
           void (*close)(struct vm_area_struct *area);
           vm_fault_t (*fault)(struct vm_fault *vmf);
   };

The open and close operations must update the GEM object reference
count. Drivers can use the drm_gem_vm_open() and drm_gem_vm_close() helper
functions directly as open and close handlers.

The fault operation handler is responsible for mapping individual pages
to userspace when a page fault occurs. Depending on the memory
allocation scheme, drivers can allocate pages at fault time, or can
decide to allocate memory for the GEM object at the time the object is
created.

Drivers that want to map the GEM object upfront instead of handling page
faults can implement their own mmap file operation handler.
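
For the common case where drm_gem_mmap() is used directly as the mmap
handler, the file operations could be set up as in the sketch below (the
``foo_fops`` name is hypothetical); the DEFINE_DRM_GEM_FOPS() helper
generates an equivalent definition:

.. code-block:: c

   static const struct file_operations foo_fops = {
           .owner          = THIS_MODULE,
           .open           = drm_open,
           .release        = drm_release,
           .unlocked_ioctl = drm_ioctl,
           .compat_ioctl   = drm_compat_ioctl,
           .poll           = drm_poll,
           .read           = drm_read,
           .llseek         = noop_llseek,
           /* Looks up the GEM object and installs the driver's VM operations. */
           .mmap           = drm_gem_mmap,
   };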

For platforms without MMU the GEM core provides a helper method
drm_gem_dma_get_unmapped_area(). The mmap() routines will call this to get a
proposed address for the mapping.

To use drm_gem_dma_get_unmapped_area(), drivers must fill the struct
:c:type:`struct file_operations <file_operations>` get_unmapped_area field with
a pointer to drm_gem_dma_get_unmapped_area().
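
As a sketch, a no-MMU configuration could extend the hypothetical
``foo_fops`` from the previous example like this; for drivers based on
the GEM DMA helpers, the DEFINE_DRM_GEM_DMA_FOPS() helper is expected to
provide an equivalent ready-made definition:

.. code-block:: c

   static const struct file_operations foo_fops = {
           /* ... same operations as in the previous sketch ... */
           .mmap              = drm_gem_mmap,
   #ifndef CONFIG_MMU
           .get_unmapped_area = drm_gem_dma_get_unmapped_area,
   #endif
   };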

More detailed information about get_unmapped_area can be found in
Documentation/admin-guide/mm/nommu-mmap.rst

Memory Coherency
----------------

When mapped to the device or used in a command buffer, backing pages for
an object are flushed to memory and marked write combined so as to be
coherent with the GPU. Likewise, if the CPU accesses an object after the
GPU has finished rendering to the object, then the object must be made
coherent with the CPU's view of memory, usually involving GPU cache
flushing of various kinds. This core CPU<->GPU coherency management is
provided by a device-specific ioctl, which evaluates an object's current
domain and performs any necessary flushing or synchronization to put the
object into the desired coherency domain (note that the object may be
busy, i.e. an active render target; in that case, setting the domain
blocks the client and waits for rendering to complete before performing
any necessary flushing operations).

Command Execution
-----------------

Perhaps the most important GEM function for GPU devices is providing a
command execution interface to clients. Client programs construct
command buffers containing references to previously allocated memory
objects, and then submit them to GEM. At that point, GEM takes care to
bind all the objects into the GTT, execute the buffer, and provide
necessary synchronization between clients accessing the same buffers.
This often involves evicting some objects from the GTT and re-binding
others (a fairly expensive operation), and providing relocation support
which hides fixed GTT offsets from clients. Clients must take care not
to submit command buffers that reference more objects than can fit in
the GTT; otherwise, GEM will reject them and no rendering will occur.
Similarly, if several objects in the buffer require fence registers to
be allocated for correct rendering (e.g. 2D blits on pre-965 chips),
care must be taken not to require more fence registers than are
available to the client. Such resource management should be abstracted
from the client in libdrm.

GEM Function Reference
----------------------

.. kernel-doc:: include/drm/drm_gem.h
   :internal:

.. kernel-doc:: drivers/gpu/drm/drm_gem.c
   :export:

GEM DMA Helper Functions Reference
----------------------------------

.. kernel-doc:: drivers/gpu/drm/drm_gem_dma_helper.c
   :doc: dma helpers

.. kernel-doc:: include/drm/drm_gem_dma_helper.h
   :internal:

.. kernel-doc:: drivers/gpu/drm/drm_gem_dma_helper.c
   :export:

GEM SHMEM Helper Function Reference
-----------------------------------

.. kernel-doc:: drivers/gpu/drm/drm_gem_shmem_helper.c
   :doc: overview

.. kernel-doc:: include/drm/drm_gem_shmem_helper.h
   :internal:

.. kernel-doc:: drivers/gpu/drm/drm_gem_shmem_helper.c
   :export:

GEM VRAM Helper Functions Reference
-----------------------------------

.. kernel-doc:: drivers/gpu/drm/drm_gem_vram_helper.c
   :doc: overview

.. kernel-doc:: include/drm/drm_gem_vram_helper.h
   :internal:

.. kernel-doc:: drivers/gpu/drm/drm_gem_vram_helper.c
   :export:

GEM TTM Helper Functions Reference
-----------------------------------

.. kernel-doc:: drivers/gpu/drm/drm_gem_ttm_helper.c
   :doc: overview

.. kernel-doc:: drivers/gpu/drm/drm_gem_ttm_helper.c
   :export:

VMA Offset Manager
==================

.. kernel-doc:: drivers/gpu/drm/drm_vma_manager.c
   :doc: vma offset manager

.. kernel-doc:: include/drm/drm_vma_manager.h
   :internal:

.. kernel-doc:: drivers/gpu/drm/drm_vma_manager.c
   :export:

.. _prime_buffer_sharing:

PRIME Buffer Sharing
====================

PRIME is the cross device buffer sharing framework in drm, originally
created for the OPTIMUS range of multi-gpu platforms. To userspace PRIME
buffers are dma-buf based file descriptors.

Overview and Lifetime Rules
---------------------------

.. kernel-doc:: drivers/gpu/drm/drm_prime.c
   :doc: overview and lifetime rules

PRIME Helper Functions
----------------------

.. kernel-doc:: drivers/gpu/drm/drm_prime.c
   :doc: PRIME Helpers

PRIME Function References
-------------------------

.. kernel-doc:: include/drm/drm_prime.h
   :internal:

.. kernel-doc:: drivers/gpu/drm/drm_prime.c
   :export:

DRM MM Range Allocator
======================

Overview
--------

.. kernel-doc:: drivers/gpu/drm/drm_mm.c
   :doc: Overview

LRU Scan/Eviction Support
-------------------------

.. kernel-doc:: drivers/gpu/drm/drm_mm.c
   :doc: lru scan roster

DRM MM Range Allocator Function References
------------------------------------------

.. kernel-doc:: include/drm/drm_mm.h
   :internal:

.. kernel-doc:: drivers/gpu/drm/drm_mm.c
   :export:

.. _drm_gpuvm:

DRM GPUVM
=========

Overview
--------

.. kernel-doc:: drivers/gpu/drm/drm_gpuvm.c
   :doc: Overview

Split and Merge
---------------

.. kernel-doc:: drivers/gpu/drm/drm_gpuvm.c
   :doc: Split and Merge

.. _drm_gpuvm_locking:

Locking
-------

.. kernel-doc:: drivers/gpu/drm/drm_gpuvm.c
   :doc: Locking

Examples
--------

.. kernel-doc:: drivers/gpu/drm/drm_gpuvm.c
   :doc: Examples

DRM GPUVM Function References
-----------------------------

.. kernel-doc:: include/drm/drm_gpuvm.h
   :internal:

.. kernel-doc:: drivers/gpu/drm/drm_gpuvm.c
   :export:

DRM Buddy Allocator
===================

DRM Buddy Function References
-----------------------------

.. kernel-doc:: drivers/gpu/drm/drm_buddy.c
   :export:

DRM Cache Handling and Fast WC memcpy()
=======================================

.. kernel-doc:: drivers/gpu/drm/drm_cache.c
   :export:

.. _drm_sync_objects:

DRM Sync Objects
================

.. kernel-doc:: drivers/gpu/drm/drm_syncobj.c
   :doc: Overview

.. kernel-doc:: include/drm/drm_syncobj.h
   :internal:

.. kernel-doc:: drivers/gpu/drm/drm_syncobj.c
   :export:

DRM Execution context
=====================

.. kernel-doc:: drivers/gpu/drm/drm_exec.c
   :doc: Overview

.. kernel-doc:: include/drm/drm_exec.h
   :internal:

.. kernel-doc:: drivers/gpu/drm/drm_exec.c
   :export:

GPU Scheduler
=============

Overview
--------

.. kernel-doc:: drivers/gpu/drm/scheduler/sched_main.c
   :doc: Overview

Flow Control
------------

.. kernel-doc:: drivers/gpu/drm/scheduler/sched_main.c
   :doc: Flow Control

Scheduler Function References
-----------------------------

.. kernel-doc:: include/drm/gpu_scheduler.h
   :internal:

.. kernel-doc:: drivers/gpu/drm/scheduler/sched_main.c
   :export:

.. kernel-doc:: drivers/gpu/drm/scheduler/sched_entity.c
   :export: