.. SPDX-License-Identifier: GPL-2.0

.. _inline_encryption:

=================
Inline Encryption
=================

Background
==========

Inline encryption hardware sits logically between memory and disk, and can
en/decrypt data as it goes in/out of the disk.  For each I/O request, software
can control exactly how the inline encryption hardware will en/decrypt the data
in terms of key, algorithm, data unit size (the granularity of en/decryption),
and data unit number (a value that determines the initialization vector(s)).

Some inline encryption hardware accepts all encryption parameters including raw
keys directly in low-level I/O requests.  However, most inline encryption
hardware instead has a fixed number of "keyslots" and requires that the key,
algorithm, and data unit size first be programmed into a keyslot.  Each
low-level I/O request then just contains a keyslot index and data unit number.

Note that inline encryption hardware is very different from traditional crypto
accelerators, which are supported through the kernel crypto API.  Traditional
crypto accelerators operate on memory regions, whereas inline encryption
hardware operates on I/O requests.  Thus, inline encryption hardware needs to be
managed by the block layer, not the kernel crypto API.

Inline encryption hardware is also very different from "self-encrypting drives",
such as those based on the TCG Opal or ATA Security standards.  Self-encrypting
drives don't provide fine-grained control of encryption and provide no way to
verify the correctness of the resulting ciphertext.  Inline encryption hardware
provides fine-grained control of encryption, including the choice of key and
initialization vector for each sector, and can be tested for correctness.

Objective
=========

We want to support inline encryption in the kernel.  To make testing easier, we
also want support for falling back to the kernel crypto API when actual inline
encryption hardware is absent.  We also want inline encryption to work with
layered devices like device-mapper and loopback (i.e. we want to be able to use
the inline encryption hardware of the underlying devices if present, or else
fall back to crypto API en/decryption).

Constraints and notes
=====================

- We need a way for upper layers (e.g. filesystems) to specify an encryption
  context to use for en/decrypting a bio, and device drivers (e.g. UFSHCD) need
  to be able to use that encryption context when they process the request.
  Encryption contexts also introduce constraints on bio merging; the block layer
  needs to be aware of these constraints.

- Different inline encryption hardware has different supported algorithms,
  supported data unit sizes, maximum data unit numbers, etc.  We call these
  properties the "crypto capabilities".  We need a way for device drivers to
  advertise crypto capabilities to upper layers in a generic way.

- Inline encryption hardware usually (but not always) requires that keys be
  programmed into keyslots before being used.  Since programming keyslots may be
  slow and there may not be very many keyslots, we shouldn't just program the
  key for every I/O request, but rather keep track of which keys are in the
  keyslots and reuse an already-programmed keyslot when possible.

- Upper layers typically define a specific end-of-life for crypto keys, e.g.
  when an encrypted directory is locked or when a crypto mapping is torn down.
  At these times, keys are wiped from memory.  We must provide a way for upper
  layers to also evict keys from any keyslots they are present in.

- When possible, device-mapper devices must be able to pass through the inline
  encryption support of their underlying devices.  However, it doesn't make
  sense for device-mapper devices to have keyslots themselves.

Basic design
============

We introduce ``struct blk_crypto_key`` to represent an inline encryption key and
how it will be used.  This includes the actual bytes of the key; the size of the
key; the algorithm and data unit size the key will be used with; and the number
of bytes needed to represent the maximum data unit number the key will be used
with.

We introduce ``struct bio_crypt_ctx`` to represent an encryption context.  It
contains a data unit number and a pointer to a blk_crypto_key.  We add pointers
to a bio_crypt_ctx to ``struct bio`` and ``struct request``; this allows users
of the block layer (e.g. filesystems) to provide an encryption context when
creating a bio and have it be passed down the stack for processing by the block
layer and device drivers.  Note that the encryption context doesn't explicitly
say whether to encrypt or decrypt, as that is implicit from the direction of the
bio; WRITE means encrypt, and READ means decrypt.

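As a rough illustration of how these two structures relate, consider the
following simplified sketch; it mirrors the description above rather than the
exact kernel definitions, and any field names not mentioned in the text are
illustrative::

    /*
     * Simplified sketch only -- the real structures contain additional
     * fields and differ in layout.
     */
    struct blk_crypto_key {
            enum blk_crypto_mode_num crypto_mode;   /* the algorithm */
            unsigned int data_unit_size;     /* granularity of en/decryption */
            unsigned int dun_bytes;          /* bytes needed for the max DUN */
            unsigned int size;               /* size of the raw key in bytes */
            u8 raw[BLK_CRYPTO_MAX_KEY_SIZE]; /* the actual key bytes */
    };

    struct bio_crypt_ctx {
            const struct blk_crypto_key *bc_key;    /* how to en/decrypt */
            u64 bc_dun[BLK_CRYPTO_DUN_ARRAY_SIZE];  /* the data unit number */
    };
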
We also introduce ``struct blk_crypto_profile`` to contain all generic inline
encryption-related state for a particular inline encryption device.  The
blk_crypto_profile serves as the way that drivers for inline encryption hardware
advertise their crypto capabilities and provide certain functions (e.g.,
functions to program and evict keys) to upper layers.  Each device driver that
wants to support inline encryption will construct a blk_crypto_profile, then
associate it with the disk's request_queue.

The blk_crypto_profile also manages the hardware's keyslots, when applicable.
This happens in the block layer, so that users of the block layer can just
specify encryption contexts and don't need to know about keyslots at all, nor do
device drivers need to care about most details of keyslot management.

Specifically, for each keyslot, the block layer (via the blk_crypto_profile)
keeps track of which blk_crypto_key that keyslot contains (if any), and how many
in-flight I/O requests are using it.  When the block layer creates a
``struct request`` for a bio that has an encryption context, it grabs a keyslot
that already contains the key if possible.  Otherwise it waits for an idle
keyslot (a keyslot that isn't in-use by any I/O), then programs the key into the
least-recently-used idle keyslot using the function the device driver provided.
In both cases, the resulting keyslot is stored in the ``crypt_keyslot`` field of
the request, where it is then accessible to device drivers and is released after
the request completes.

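In rough pseudocode, keyslot selection at request creation time looks like the
following conceptual sketch; the helper names are illustrative and this is not
the actual block layer code::

    /* Conceptual sketch of keyslot selection; helper names are illustrative. */
    slot = find_keyslot_already_containing(profile, key);
    if (slot == NO_KEYSLOT) {
            /* wait until some keyslot has no in-flight I/O using it */
            slot = wait_for_least_recently_used_idle_keyslot(profile);
            /* program the key using the function the device driver provided */
            profile->ll_ops.keyslot_program(profile, key, slot);
    }
    rq->crypt_keyslot = slot;       /* released after the request completes */
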
``struct request`` also contains a pointer to the original bio_crypt_ctx.
Requests can be built from multiple bios, and the block layer must take the
encryption context into account when trying to merge bios and requests.  For two
bios/requests to be merged, they must have compatible encryption contexts: both
unencrypted, or both encrypted with the same key and contiguous data unit
numbers.  Only the encryption context for the first bio in a request is
retained, since the remaining bios have been verified to be merge-compatible
with the first bio.

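As a conceptual sketch, the compatibility check can be pictured roughly as
follows; the helper and field names are illustrative rather than the actual
block layer code::

    /* Conceptual sketch of the merge-compatibility rule described above. */
    static bool crypt_ctx_merge_compatible(const struct bio_crypt_ctx *c1,
                                           unsigned int c1_bytes,
                                           const struct bio_crypt_ctx *c2)
    {
            if (!c1 && !c2)
                    return true;    /* both unencrypted */
            if (!c1 || !c2 || c1->bc_key != c2->bc_key)
                    return false;   /* one encrypted, or different keys */
            /*
             * The data unit numbers must be contiguous: c2 must start at the
             * DUN that c1 reaches after c1_bytes worth of data units.
             */
            return dun_is_contiguous(c1, c1_bytes, c2);
    }
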
To make it possible for inline encryption to work with request_queue based
layered devices, when a request is cloned, its encryption context is cloned as
well.  When the cloned request is submitted, it is then processed as usual; this
includes getting a keyslot from the clone's target device if needed.

blk-crypto-fallback
===================

It is desirable for the inline encryption support of upper layers (e.g.
filesystems) to be testable without real inline encryption hardware, and
likewise for the block layer's keyslot management logic.  It is also desirable
to allow upper layers to just always use inline encryption rather than have to
implement encryption in multiple ways.

Therefore, we also introduce *blk-crypto-fallback*, which is an implementation
of inline encryption using the kernel crypto API.  blk-crypto-fallback is built
into the block layer, so it works on any block device without any special setup.
Essentially, when a bio with an encryption context is submitted to a
block_device that doesn't support that encryption context, the block layer will
handle en/decryption of the bio using blk-crypto-fallback.

For encryption, the data cannot be encrypted in-place, as callers usually rely
on it being unmodified.  Instead, blk-crypto-fallback allocates bounce pages,
fills a new bio with those bounce pages, encrypts the data into those bounce
pages, and submits that "bounce" bio.  When the bounce bio completes,
blk-crypto-fallback completes the original bio.  If the original bio is too
large, multiple bounce bios may be required; see the code for details.

For decryption, blk-crypto-fallback "wraps" the bio's completion callback
(``bi_end_io``) and private data (``bi_private``) with its own, unsets the
bio's encryption context, then submits the bio.  If the read completes
successfully, blk-crypto-fallback restores the bio's original completion
callback and private data, then decrypts the bio's data in-place using the
kernel crypto API.  Decryption happens from a workqueue, as it may sleep.
Afterwards, blk-crypto-fallback completes the bio.

In both cases, the bios that blk-crypto-fallback submits no longer have an
encryption context.  Therefore, lower layers only see standard unencrypted I/O.

blk-crypto-fallback also defines its own blk_crypto_profile and has its own
"keyslots"; its keyslots contain ``struct crypto_skcipher`` objects.  The reason
for this is twofold.  First, it allows the keyslot management logic to be tested
without actual inline encryption hardware.  Second, similar to actual inline
encryption hardware, the crypto API doesn't accept keys directly in requests but
rather requires that keys be set ahead of time, and setting keys can be
expensive; moreover, allocating a crypto_skcipher can't happen on the I/O path
at all due to the locks it takes.  Therefore, the concept of keyslots still
makes sense for blk-crypto-fallback.

Note that regardless of whether real inline encryption hardware or
blk-crypto-fallback is used, the ciphertext written to disk (and hence the
on-disk format of data) will be the same (assuming that both the inline
encryption hardware's implementation and the kernel crypto API's implementation
of the algorithm being used adhere to spec and function correctly).

blk-crypto-fallback is optional and is controlled by the
``CONFIG_BLK_INLINE_ENCRYPTION_FALLBACK`` kernel configuration option.

API presented to users of the block layer
=========================================

``blk_crypto_config_supported()`` allows users to check ahead of time whether
inline encryption with particular crypto settings will work on a particular
block_device -- either via hardware or via blk-crypto-fallback.  This function
takes in a ``struct blk_crypto_config`` which is like blk_crypto_key, but omits
the actual bytes of the key and instead just contains the algorithm, data unit
size, etc.  This function can be useful if blk-crypto-fallback is disabled.

``blk_crypto_init_key()`` allows users to initialize a blk_crypto_key.

Users must call ``blk_crypto_start_using_key()`` before actually starting to use
a blk_crypto_key on a block_device (even if ``blk_crypto_config_supported()``
was called earlier).  This is needed to initialize blk-crypto-fallback if it
will be needed.  This must not be called from the data path, as this may have to
allocate resources, which may deadlock in that case.

Next, to attach an encryption context to a bio, users should call
``bio_crypt_set_ctx()``.  This function allocates a bio_crypt_ctx and attaches
it to a bio, given the blk_crypto_key and the data unit number that will be used
for en/decryption.  Users don't need to worry about freeing the bio_crypt_ctx
later, as that happens automatically when the bio is freed or reset.

Finally, when done using inline encryption with a blk_crypto_key on a
block_device, users must call ``blk_crypto_evict_key()``.  This ensures that
the key is evicted from all keyslots it may be programmed into and unlinked from
any kernel data structures it may be linked into.

In summary, for users of the block layer, the lifecycle of a blk_crypto_key is
as follows (a code sketch of this sequence appears after the list):

1. ``blk_crypto_config_supported()`` (optional)
2. ``blk_crypto_init_key()``
3. ``blk_crypto_start_using_key()``
4. ``bio_crypt_set_ctx()`` (potentially many times)
5. ``blk_crypto_evict_key()`` (after all I/O has completed)
6. Zeroize the blk_crypto_key (this has no dedicated function)

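A hedged sketch of this sequence for a single block_device follows; the
function signatures shown are approximate, have changed across kernel versions,
and should be checked against the actual declarations in
``include/linux/blk-crypto.h``::

    #include <linux/blk-crypto.h>
    #include <linux/blkdev.h>
    #include <linux/string.h>

    /* Sketch of the blk_crypto_key lifecycle; parameter lists approximate. */
    static int example_key_lifecycle(struct block_device *bdev,
                                     const u8 *raw_key)
    {
            const struct blk_crypto_config cfg = {
                    .crypto_mode = BLK_ENCRYPTION_MODE_AES_256_XTS,
                    .data_unit_size = 4096,
                    .dun_bytes = 8,
            };
            struct blk_crypto_key key;
            int err;

            /* 1. (optional) will this config work on bdev, via HW or fallback? */
            if (!blk_crypto_config_supported(bdev, &cfg))
                    return -EOPNOTSUPP;

            /* 2. fill in the blk_crypto_key from the raw bytes and settings */
            err = blk_crypto_init_key(&key, raw_key, cfg.crypto_mode,
                                      cfg.dun_bytes, cfg.data_unit_size);
            if (err)
                    return err;

            /* 3. must precede any I/O; may allocate, so keep off the data path */
            err = blk_crypto_start_using_key(bdev, &key);
            if (err)
                    return err;

            /*
             * 4. attach an encryption context to each bio before submitting it
             *    (potentially many times), e.g.:
             *        bio_crypt_set_ctx(bio, &key, dun, GFP_NOIO);
             */

            /* 5. after all I/O has completed, evict the key from all keyslots */
            blk_crypto_evict_key(bdev, &key);

            /* 6. zeroize the key material; there is no dedicated function */
            memzero_explicit(&key, sizeof(key));
            return 0;
    }
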
If a blk_crypto_key is being used on multiple block_devices, then
``blk_crypto_config_supported()`` (if used), ``blk_crypto_start_using_key()``,
and ``blk_crypto_evict_key()`` must be called on each block_device.

API presented to device drivers
===============================

A device driver that wants to support inline encryption must set up a
blk_crypto_profile in the request_queue of its device.  To do this, it first
must call ``blk_crypto_profile_init()`` (or its resource-managed variant
``devm_blk_crypto_profile_init()``), providing the number of keyslots.

Next, it must advertise its crypto capabilities by setting fields in the
blk_crypto_profile, e.g. ``modes_supported`` and ``max_dun_bytes_supported``.

It then must set function pointers in the ``ll_ops`` field of the
blk_crypto_profile to tell upper layers how to control the inline encryption
hardware, e.g. how to program and evict keyslots.  Most drivers will need to
implement ``keyslot_program`` and ``keyslot_evict``.  For details, see the
comments for ``struct blk_crypto_ll_ops``.

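Putting these steps together, setup in a hypothetical driver might look roughly
like the following; the ``mydrv_*`` names and device structure are invented for
illustration, and the callback signatures are approximate::

    #include <linux/blk-crypto-profile.h>

    /* Illustrative only; "mydrv" and its device structure are hypothetical. */
    static int mydrv_keyslot_program(struct blk_crypto_profile *profile,
                                     const struct blk_crypto_key *key,
                                     unsigned int slot)
    {
            /* write the key, algorithm, and data unit size into HW keyslot @slot */
            return 0;
    }

    static int mydrv_keyslot_evict(struct blk_crypto_profile *profile,
                                   const struct blk_crypto_key *key,
                                   unsigned int slot)
    {
            /* clear HW keyslot @slot */
            return 0;
    }

    static const struct blk_crypto_ll_ops mydrv_crypto_ops = {
            .keyslot_program = mydrv_keyslot_program,
            .keyslot_evict   = mydrv_keyslot_evict,
    };

    static int mydrv_init_crypto(struct mydrv_device *mydev)
    {
            struct blk_crypto_profile *profile = &mydev->crypto_profile;
            int err;

            /* resource-managed init; plain blk_crypto_profile_init() also exists */
            err = devm_blk_crypto_profile_init(mydev->dev, profile,
                                               32 /* number of keyslots */);
            if (err)
                    return err;

            profile->ll_ops = mydrv_crypto_ops;
            profile->max_dun_bytes_supported = 8;
            /* advertise AES-256-XTS with a 4096-byte data unit size */
            profile->modes_supported[BLK_ENCRYPTION_MODE_AES_256_XTS] |= 4096;

            /*
             * Finally the profile must be associated with the disk's
             * request_queue; the mechanism for that is not shown here.
             */
            return 0;
    }
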
Once the driver registers a blk_crypto_profile with a request_queue, I/O
requests the driver receives via that queue may have an encryption context.  All
encryption contexts will be compatible with the crypto capabilities declared in
the blk_crypto_profile, so drivers don't need to worry about handling
unsupported requests.  Also, if a nonzero number of keyslots was declared in the
blk_crypto_profile, then all I/O requests that have an encryption context will
also have a keyslot which was already programmed with the appropriate key.

If the driver implements runtime suspend and its blk_crypto_ll_ops don't work
while the device is runtime-suspended, then the driver must also set the ``dev``
field of the blk_crypto_profile to point to the ``struct device`` that will be
resumed before any of the low-level operations are called.

If there are situations where the inline encryption hardware loses the contents
of its keyslots, e.g. device resets, the driver must handle reprogramming the
keyslots.  To do this, the driver may call ``blk_crypto_reprogram_all_keys()``.

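For example, a driver's reset-completion path might do something like the
following sketch, where everything except ``blk_crypto_reprogram_all_keys()``
is illustrative::

    /* Illustrative reset handler for the hypothetical "mydrv" driver above. */
    static void mydrv_reset_done(struct mydrv_device *mydev)
    {
            /* the hardware lost its keyslot contents across the reset */
            blk_crypto_reprogram_all_keys(&mydev->crypto_profile);
    }
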
Finally, if the driver used ``blk_crypto_profile_init()`` instead of
``devm_blk_crypto_profile_init()``, then it is responsible for calling
``blk_crypto_profile_destroy()`` when the crypto profile is no longer needed.

Layered Devices
===============

Request queue based layered devices like dm-rq that wish to support inline
encryption need to create their own blk_crypto_profile for their request_queue,
and expose whatever functionality they choose.  When a layered device wants to
pass a cloned request on to another request_queue, blk-crypto will initialize
and prepare the clone as necessary.

Interaction between inline encryption and blk integrity
========================================================

At the time of this patch, there is no real hardware that supports both these
features. However, these features do interact with each other, and it's not
completely trivial to make them both work together properly. In particular,
when a WRITE bio wants to use inline encryption on a device that supports both
features, the bio will have an encryption context specified, after which
its integrity information is calculated (using the plaintext data, since
the encryption will happen while data is being written), and the data and
integrity info is sent to the device. Obviously, the integrity info must be
verified before the data is encrypted. After the data is encrypted, the device
must not store the integrity info that it received with the plaintext data
since that might reveal information about the plaintext data. As such, it must
re-generate the integrity info from the ciphertext data and store that on disk
instead. Another issue with storing the integrity info of the plaintext data is
that it changes the on disk format depending on whether hardware inline
encryption support is present or the kernel crypto API fallback is used (since
if the fallback is used, the device will receive the integrity info of the
ciphertext, not that of the plaintext).

Because there isn't any real hardware yet, it seems prudent to assume that
hardware implementations might not implement both features together correctly,
and disallow the combination for now. Whenever a device supports integrity, the
kernel will pretend that the device does not support hardware inline encryption
(by setting the blk_crypto_profile in the request_queue of the device to NULL).
When the crypto API fallback is enabled, this means that all bios with an
encryption context will use the fallback, and IO will complete as usual.  When
the fallback is disabled, a bio with an encryption context will be failed.