.. SPDX-License-Identifier: GPL-2.0

=================================================
Using RCU hlist_nulls to protect list and objects
=================================================

This section describes how to use hlist_nulls to
protect read-mostly linked lists and
objects using SLAB_TYPESAFE_BY_RCU allocations.

Please read the basics in listRCU.rst.

Using 'nulls'
=============

Using special markers (called 'nulls') is a convenient way
to solve the following problem.

Without 'nulls', a typical RCU linked list managing objects which are
allocated with a SLAB_TYPESAFE_BY_RCU kmem_cache can use the following
algorithms. The following examples assume 'obj' is a pointer to such
an object, which has the type shown below.

::

  struct object {
    struct hlist_node obj_node;
    atomic_t refcnt;
    unsigned int key;
  };

1) Lookup algorithm
-------------------

::

  begin:
  rcu_read_lock();
  obj = lockless_lookup(key);
  if (obj) {
    if (!try_get_ref(obj)) { // might fail for freed objects
      rcu_read_unlock();
      goto begin;
    }
    /*
     * Because a writer could delete the object, and another writer
     * could reuse it before the RCU grace period expires, we
     * must check the key after getting the reference on the object.
     */
    if (obj->key != key) { // not the object we expected
      put_ref(obj);
      rcu_read_unlock();
      goto begin;
    }
  }
  rcu_read_unlock();

Beware that lockless_lookup(key) cannot use the traditional
hlist_for_each_entry_rcu(), but needs a version with an additional
memory barrier (smp_rmb())::

  lockless_lookup(key)
  {
    struct hlist_node *pos, *next;
    struct object *obj;

    for (pos = rcu_dereference((head)->first);
         pos && ({ next = pos->next; smp_rmb(); prefetch(next); 1; }) &&
         ({ obj = hlist_entry(pos, typeof(*obj), obj_node); 1; });
         pos = rcu_dereference(next))
      if (obj->key == key)
        return obj;
    return NULL;
  }

And note that the traditional hlist_for_each_entry_rcu() misses this
smp_rmb()::

  struct hlist_node *pos;
  struct object *obj;

  for (pos = rcu_dereference((head)->first);
       pos && ({ prefetch(pos->next); 1; }) &&
       ({ obj = hlist_entry(pos, typeof(*obj), obj_node); 1; });
       pos = rcu_dereference(pos->next))
    if (obj->key == key)
      return obj;
  return NULL;

Quoting Corey Minyard::

  "If the object is moved from one list to another list in-between the
  time the hash is calculated and the next field is accessed, and the
  object has moved to the end of a new list, the traversal will not
  complete properly on the list it should have, since the object will
  be on the end of the new list and there's not a way to tell it's on a
  new list and restart the list traversal. I think that this can be
  solved by pre-fetching the "next" field (with proper barriers) before
  checking the key."
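The lookup above uses try_get_ref() and put_ref() without defining
them. Here is a minimal sketch of what they could look like, assuming
(as the removal algorithm below does) that an object is freed once its
reference count drops to zero; atomic_inc_not_zero() is what makes the
acquisition fail for an object whose last reference was already
dropped::

  /*
   * Hypothetical helpers, for illustration only; they are not part
   * of the hlist_nulls API.
   */
  static inline bool try_get_ref(struct object *obj)
  {
    /*
     * Fails once refcnt has dropped to zero, i.e. once the object
     * was freed and possibly reused for another key.
     */
    return atomic_inc_not_zero(&obj->refcnt);
  }

  static inline bool put_last_reference_on(struct object *obj)
  {
    /* True if the caller dropped the last reference. */
    return atomic_dec_and_test(&obj->refcnt);
  }

put_ref() would call put_last_reference_on() and, whenever that
returns true, run the removal sequence shown below.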
2) Insertion algorithm
----------------------

We need to make sure a reader cannot read the new 'obj->obj_node.next'
value together with the previous value of 'obj->key'. Otherwise, an
item could be deleted from one chain and inserted into another chain.
If the new chain was empty before the move, the 'next' pointer is NULL,
and the lockless reader cannot detect that it missed the following
items in the original chain.

::

  /*
   * Please note that new inserts are done at the head of the list,
   * not in the middle or at the end.
   */
  obj = kmem_cache_alloc(...);
  lock_chain(); // typically a spin_lock()
  obj->key = key;
  atomic_set_release(&obj->refcnt, 1); // key before refcnt
  hlist_add_head_rcu(&obj->obj_node, list);
  unlock_chain(); // typically a spin_unlock()

3) Removal algorithm
--------------------

Nothing special here: we can use a standard RCU hlist deletion.
But, thanks to SLAB_TYPESAFE_BY_RCU, beware that a deleted object can
be reused very quickly (even before the end of the RCU grace period)::

  if (put_last_reference_on(obj)) {
    lock_chain(); // typically a spin_lock()
    hlist_del_init_rcu(&obj->obj_node);
    unlock_chain(); // typically a spin_unlock()
    kmem_cache_free(cachep, obj);
  }

--------------------------------------------------------------------------

Avoiding extra smp_rmb()
========================

With hlist_nulls we can avoid the extra smp_rmb() in lockless_lookup().

For example, if we choose to store the slot number as the 'nulls'
end-of-list marker for each slot of the hash table (see the
initialization sketch at the end of this document), we can detect
a race (some writer did a delete and/or a move of an object
to another chain) by checking the final 'nulls' value when
the lookup meets the end of a chain. If the final 'nulls' value
is not the slot number, then we must restart the lookup at
the beginning. If the object was moved to the same chain,
then the reader doesn't care: it might occasionally
scan the list again without harm.

Note that using hlist_nulls means the type of the 'obj_node' field of
'struct object' becomes 'struct hlist_nulls_node'.

1) Lookup algorithm
-------------------

::

  head = &table[slot];
  begin:
  rcu_read_lock();
  hlist_nulls_for_each_entry_rcu(obj, node, head, obj_node) {
    if (obj->key == key) {
      if (!try_get_ref(obj)) { // might fail for freed objects
        rcu_read_unlock();
        goto begin;
      }
      if (obj->key != key) { // not the object we expected
        put_ref(obj);
        rcu_read_unlock();
        goto begin;
      }
      goto out;
    }
  }

  // If the nulls value we got at the end of this lookup is
  // not the expected one, we must restart the lookup.
  // We probably met an item that was moved to another chain.
  // No reference was taken on this path, so there is nothing
  // to release before restarting.
  if (get_nulls_value(node) != slot) {
    rcu_read_unlock();
    goto begin;
  }
  obj = NULL;

  out:
  rcu_read_unlock();

2) Insertion algorithm
----------------------

Same as the one above, but using hlist_nulls_add_head_rcu() instead of
hlist_add_head_rcu()::

  /*
   * Please note that new inserts are done at the head of the list,
   * not in the middle or at the end.
   */
  obj = kmem_cache_alloc(cachep);
  lock_chain(); // typically a spin_lock()
  obj->key = key;
  atomic_set_release(&obj->refcnt, 1); // key before refcnt
  /*
   * Insert obj in an RCU way (readers might be traversing the chain).
   */
  hlist_nulls_add_head_rcu(&obj->obj_node, list);
  unlock_chain(); // typically a spin_unlock()
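3) Removal algorithm
--------------------

Removal is unchanged from the non-nulls case, except that
hlist_nulls_del_init_rcu() operates on the 'struct hlist_nulls_node'.
A minimal sketch, reusing the hypothetical put_last_reference_on()
helper from earlier::

  if (put_last_reference_on(obj)) {
    lock_chain(); // typically a spin_lock()
    hlist_nulls_del_init_rcu(&obj->obj_node);
    unlock_chain(); // typically a spin_unlock()
    kmem_cache_free(cachep, obj);
  }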
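Finally, here is a minimal sketch of how such a hash table could be
declared and initialized so that each chain ends with its own slot
number as the 'nulls' marker. INIT_HLIST_NULLS_HEAD() takes the
'nulls' value as its second argument; NR_SLOTS is a hypothetical
table size::

  static struct hlist_nulls_head table[NR_SLOTS];

  static void init_table(void)
  {
    int i;

    /*
     * Encode each slot number into that chain's end-of-list
     * 'nulls' marker, so that get_nulls_value() at the end of
     * a lookup can be compared against the slot the lookup
     * started from.
     */
    for (i = 0; i < NR_SLOTS; i++)
      INIT_HLIST_NULLS_HEAD(&table[i], i);
  }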