===============================================
 drm/tegra NVIDIA Tegra GPU and display driver
===============================================

NVIDIA Tegra SoCs support a set of display, graphics and video functions via
the host1x controller. host1x supplies command streams, gathered from a push
buffer provided directly by the CPU, to its clients via channels. Software,
or the hardware blocks amongst themselves, can use syncpoints for
synchronization.

Up until, but not including, Tegra124 (aka Tegra K1) the drm/tegra driver
supports the built-in GPU, comprised of the gr2d and gr3d engines. Starting
with Tegra124 the GPU is based on the NVIDIA desktop GPU architecture and
supported by the drm/nouveau driver.

The drm/tegra driver supports NVIDIA Tegra SoC generations since Tegra20. It
has three parts:

  - A host1x driver that provides infrastructure and access to the host1x
    services.

  - A KMS driver that supports the display controllers as well as a number of
    outputs, such as RGB, HDMI, DSI, and DisplayPort.

  - A set of custom userspace IOCTLs that can be used to submit jobs to the
    GPU and video engines via host1x.

Driver Infrastructure
=====================

The various host1x clients need to be bound together into a logical device in
order to expose their functionality to users. The infrastructure that supports
this is implemented in the host1x driver. When a driver is registered with the
infrastructure it provides a list of compatible strings specifying the devices
that it needs. The infrastructure creates a logical device and scans the
device tree for matching device nodes, adding the required clients to a list.
Drivers for individual clients register with the infrastructure as well and
are added to the logical host1x device.

Once all clients are available, the infrastructure will initialize the logical
device using a driver-provided function which will set up the bits specific to
the subsystem and in turn initialize each of its clients.

Similarly, when one of the clients is unregistered, the infrastructure will
destroy the logical device by calling back into the driver, which ensures that
the subsystem-specific bits are torn down and the clients destroyed in turn.
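For illustration, here is a minimal sketch of how an individual client
driver's probe routine might register with this infrastructure, modelled on
the in-tree engine drivers. The ``foo``-prefixed names are placeholders, not
part of the API:

.. code-block:: c

    #include <linux/host1x.h>
    #include <linux/platform_device.h>
    #include <linux/slab.h>

    static int foo_init(struct host1x_client *client)
    {
            /* Called when the logical host1x device is initialized. */
            return 0;
    }

    static int foo_exit(struct host1x_client *client)
    {
            /* Called when the logical host1x device is torn down. */
            return 0;
    }

    static const struct host1x_client_ops foo_client_ops = {
            .init = foo_init,
            .exit = foo_exit,
    };

    static int foo_probe(struct platform_device *pdev)
    {
            struct host1x_client *client;

            client = devm_kzalloc(&pdev->dev, sizeof(*client), GFP_KERNEL);
            if (!client)
                    return -ENOMEM;

            INIT_LIST_HEAD(&client->list);
            client->ops = &foo_client_ops;
            client->dev = &pdev->dev;

            /* Add this client to the matching logical host1x device. */
            return host1x_client_register(client);
    }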
Host1x Infrastructure Reference
-------------------------------

.. kernel-doc:: include/linux/host1x.h

.. kernel-doc:: drivers/gpu/host1x/bus.c
   :export:

Host1x Syncpoint Reference
--------------------------

.. kernel-doc:: drivers/gpu/host1x/syncpt.c
   :export:

KMS driver
==========

The display hardware has remained mostly backwards compatible over the various
Tegra SoC generations, up until Tegra186 which introduces several changes that
make it difficult to support with a parameterized driver.

Display Controllers
-------------------

Tegra SoCs have two display controllers, each of which can be associated with
zero or more outputs. Outputs can also share a single display controller, but
only if they run with compatible display timings. Two display controllers can
also share a single framebuffer, allowing cloned configurations even if modes
on two outputs don't match. A display controller is modelled as a CRTC in KMS
terms.

On Tegra186, the number of display controllers has been increased to three. A
display controller can no longer drive all of the outputs. While two of these
controllers can drive both DSI outputs and both SOR outputs, the third cannot
drive any DSI.

Windows
~~~~~~~

A display controller controls a set of windows that can be used to composite
multiple buffers onto the screen. While it is possible to assign arbitrary Z
ordering to individual windows (by programming the corresponding blending
registers), this is currently not supported by the driver. Instead, it will
assume a fixed Z ordering of the windows (window A is the root window, that
is, the lowest, while windows B and C are overlaid on top of window A). The
overlay windows support multiple pixel formats and can automatically convert
from YUV to RGB at scanout time. This makes them useful for displaying video
content. In KMS, each window is modelled as a plane. Each display controller
has a hardware cursor that is exposed as a cursor plane.
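Because windows are exposed as planes, they can be inspected from userspace
through the generic KMS API. The following sketch uses libdrm and assumes
``fd`` refers to an already opened Tegra DRM device; note that primary and
cursor planes are only visible to clients that enable the universal planes
capability:

.. code-block:: c

    #include <stdint.h>
    #include <stdio.h>
    #include <xf86drm.h>
    #include <xf86drmMode.h>

    /* Enumerate the planes (hardware windows) exposed by the driver. */
    static void list_planes(int fd)
    {
            drmModePlaneResPtr res;
            uint32_t i;

            /* Primary and cursor planes are hidden without this cap. */
            drmSetClientCap(fd, DRM_CLIENT_CAP_UNIVERSAL_PLANES, 1);

            res = drmModeGetPlaneResources(fd);
            if (!res)
                    return;

            for (i = 0; i < res->count_planes; i++) {
                    drmModePlanePtr plane = drmModeGetPlane(fd, res->planes[i]);

                    if (!plane)
                            continue;

                    printf("plane %u: %u formats, CRTC mask %#x\n",
                           plane->plane_id, plane->count_formats,
                           plane->possible_crtcs);
                    drmModeFreePlane(plane);
            }

            drmModeFreePlaneResources(res);
    }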
Outputs
-------

The type and number of supported outputs vary between Tegra SoC generations.
All generations support at least HDMI. While earlier generations supported the
very simple RGB interfaces (one per display controller), recent generations no
longer do and instead provide standard interfaces such as DSI and eDP/DP.

Outputs are modelled as a composite encoder/connector pair.

RGB/LVDS
~~~~~~~~

This interface is no longer available since Tegra124. It has been replaced by
the more standard DSI and eDP interfaces.

HDMI
~~~~

HDMI is supported on all Tegra SoCs. Starting with Tegra210, HDMI is provided
by the versatile SOR output, which supports eDP, DP and HDMI. The SOR is able
to support HDMI 2.0, though support for this is currently not merged.

DSI
~~~

Although Tegra has supported DSI since Tegra30, the controller has changed in
several ways in Tegra114. Since none of the publicly available development
boards prior to Dalmore (Tegra114) have made use of DSI, only Tegra114 and
later are supported by the drm/tegra driver.

eDP/DP
~~~~~~

eDP was first introduced in Tegra124 where it was used to drive the display
panel for notebook form factors. Tegra210 added full DisplayPort support,
though this is currently not implemented in the drm/tegra driver.

Userspace Interface
===================

The userspace interface provided by drm/tegra allows applications to create
GEM buffers, access and control syncpoints as well as submit command streams
to host1x.
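A simple way for an application to obtain a file descriptor for this
interface is sketched below, using libdrm's version query to check that the
node is in fact driven by drm/tegra. The device path is an assumption and
will vary between systems:

.. code-block:: c

    #include <fcntl.h>
    #include <string.h>
    #include <unistd.h>
    #include <xf86drm.h>

    /* Open a DRM device node and verify that drm/tegra drives it. */
    static int tegra_open(const char *path)
    {
            drmVersionPtr version;
            int fd;

            fd = open(path, O_RDWR);
            if (fd < 0)
                    return -1;

            version = drmGetVersion(fd);
            if (!version || strcmp(version->name, "tegra") != 0) {
                    drmFreeVersion(version);
                    close(fd);
                    return -1;
            }

            drmFreeVersion(version);
            return fd;
    }

An application would then pass the returned file descriptor, for example from
``tegra_open("/dev/dri/card0")``, to the IOCTLs described in the following
sections.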
GEM Buffers
-----------

The ``DRM_IOCTL_TEGRA_GEM_CREATE`` IOCTL is used to create a GEM buffer object
with Tegra-specific flags. This is useful for buffers that should be tiled, or
that are to be scanned out upside down (useful for 3D content).

After a GEM buffer object has been created, its memory can be mapped by an
application using the mmap offset returned by the ``DRM_IOCTL_TEGRA_GEM_MMAP``
IOCTL.

Syncpoints
----------

The current value of a syncpoint can be obtained by executing the
``DRM_IOCTL_TEGRA_SYNCPT_READ`` IOCTL. Incrementing the syncpoint is achieved
using the ``DRM_IOCTL_TEGRA_SYNCPT_INCR`` IOCTL.

Userspace can also request blocking on a syncpoint. To do so, it needs to
execute the ``DRM_IOCTL_TEGRA_SYNCPT_WAIT`` IOCTL, specifying the value of
the syncpoint to wait for. The kernel will release the application when the
syncpoint reaches that value or after a specified timeout.

Command Stream Submission
-------------------------

Before an application can submit command streams to host1x it needs to open a
channel to an engine using the ``DRM_IOCTL_TEGRA_OPEN_CHANNEL`` IOCTL. Client
IDs are used to identify the target of the channel. When a channel is no
longer needed, it can be closed using the ``DRM_IOCTL_TEGRA_CLOSE_CHANNEL``
IOCTL. To retrieve the syncpoint associated with a channel, an application
can use the ``DRM_IOCTL_TEGRA_GET_SYNCPT`` IOCTL.

After opening a channel, submitting command streams is easy. The application
writes commands into the memory backing a GEM buffer object and passes these
to the ``DRM_IOCTL_TEGRA_SUBMIT`` IOCTL along with various other parameters,
such as the syncpoints or relocations used in the job submission. The whole
flow is sketched below.
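The following sketch walks through that flow for the gr2d engine: allocate
and map a GEM buffer object, open a channel, submit a job and wait for its
syncpoint fence. Error handling is omitted, the engine opcodes themselves are
not shown, and the structure layout is assumed to match
``include/uapi/drm/tegra_drm.h``; treat it as an outline rather than a
complete program:

.. code-block:: c

    #include <fcntl.h>
    #include <stdint.h>
    #include <sys/ioctl.h>
    #include <sys/mman.h>
    #include <drm/tegra_drm.h>

    int submit_job(int fd)
    {
            /* Create a GEM buffer object to hold the command stream. */
            struct drm_tegra_gem_create create = { .size = 4096 };
            ioctl(fd, DRM_IOCTL_TEGRA_GEM_CREATE, &create);

            /* Map it and write the engine-specific opcodes (not shown). */
            struct drm_tegra_gem_mmap map = { .handle = create.handle };
            ioctl(fd, DRM_IOCTL_TEGRA_GEM_MMAP, &map);
            uint32_t *ptr = mmap(NULL, 4096, PROT_READ | PROT_WRITE,
                                 MAP_SHARED, fd, map.offset);
            uint32_t num_words = 0; /* command words written through ptr */

            /* Open a channel to the 2D engine and look up its syncpoint. */
            struct drm_tegra_open_channel channel = {
                    .client = HOST1X_CLASS_GR2D,
            };
            ioctl(fd, DRM_IOCTL_TEGRA_OPEN_CHANNEL, &channel);

            struct drm_tegra_get_syncpt get = { .context = channel.context };
            ioctl(fd, DRM_IOCTL_TEGRA_GET_SYNCPT, &get);

            /* Submit the job, requesting one syncpoint increment. */
            struct drm_tegra_syncpt syncpt = { .id = get.id, .incrs = 1 };
            struct drm_tegra_cmdbuf cmdbuf = {
                    .handle = create.handle,
                    .words = num_words,
            };
            struct drm_tegra_submit submit = {
                    .context = channel.context,
                    .num_syncpts = 1,
                    .num_cmdbufs = 1,
                    .syncpts = (uintptr_t)&syncpt,
                    .cmdbufs = (uintptr_t)&cmdbuf,
            };
            ioctl(fd, DRM_IOCTL_TEGRA_SUBMIT, &submit);

            /* Block until the syncpoint reaches the returned fence. */
            struct drm_tegra_syncpt_wait wait = {
                    .id = get.id,
                    .thresh = submit.fence,
                    .timeout = 1000, /* milliseconds */
            };
            ioctl(fd, DRM_IOCTL_TEGRA_SYNCPT_WAIT, &wait);

            struct drm_tegra_close_channel close_channel = {
                    .context = channel.context,
            };
            ioctl(fd, DRM_IOCTL_TEGRA_CLOSE_CHANNEL, &close_channel);

            return 0;
    }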