.. _kernel_hacking_lock:

===========================
Unreliable Guide To Locking
===========================

:Author: Rusty Russell

Introduction
============

Welcome to Rusty's Remarkably Unreliable Guide to Kernel Locking
issues. This document describes the locking systems in the Linux Kernel
in 2.6.

With the wide availability of HyperThreading, and preemption in the
Linux Kernel, everyone hacking on the kernel needs to know the
fundamentals of concurrency and locking for SMP.

The Problem With Concurrency
============================

(Skip this if you know what a Race Condition is).

In a normal program, you can increment a counter like so:

::

          very_important_count++;


This is what you would expect to happen:


.. table:: Expected Results

  +------------------------------------+------------------------------------+
  | Instance 1                         | Instance 2                         |
  +====================================+====================================+
  | read very_important_count (5)      |                                    |
  +------------------------------------+------------------------------------+
  | add 1 (6)                          |                                    |
  +------------------------------------+------------------------------------+
  | write very_important_count (6)     |                                    |
  +------------------------------------+------------------------------------+
  |                                    | read very_important_count (6)      |
  +------------------------------------+------------------------------------+
  |                                    | add 1 (7)                          |
  +------------------------------------+------------------------------------+
  |                                    | write very_important_count (7)     |
  +------------------------------------+------------------------------------+

This is what might happen:

.. table:: Possible Results

  +------------------------------------+------------------------------------+
  | Instance 1                         | Instance 2                         |
  +====================================+====================================+
  | read very_important_count (5)      |                                    |
  +------------------------------------+------------------------------------+
  |                                    | read very_important_count (5)      |
  +------------------------------------+------------------------------------+
  | add 1 (6)                          |                                    |
  +------------------------------------+------------------------------------+
  |                                    | add 1 (6)                          |
  +------------------------------------+------------------------------------+
  | write very_important_count (6)     |                                    |
  +------------------------------------+------------------------------------+
  |                                    | write very_important_count (6)     |
  +------------------------------------+------------------------------------+


Race Conditions and Critical Regions
------------------------------------

This overlap, where the result depends on the relative timing of
multiple tasks, is called a race condition. The piece of code containing
the concurrency issue is called a critical region. And especially since
Linux started running on SMP machines, they became one of the major
issues in kernel design and implementation.

Preemption can have the same effect, even if there is only one CPU: by
preempting one task during the critical region, we have exactly the same
race condition. In this case the thread which preempts might run the
critical region itself.

The solution is to recognize when these simultaneous accesses occur, and
use locks to make sure that only one instance can enter the critical
region at any time. There are many friendly primitives in the Linux
kernel to help you do this. And then there are the unfriendly
primitives, but I'll pretend they don't exist.

Locking in the Linux Kernel
===========================

If I could give you one piece of advice on locks: **keep it simple**.

Be reluctant to introduce new locks.

Two Main Types of Kernel Locks: Spinlocks and Mutexes
-----------------------------------------------------

There are two main types of kernel locks. The fundamental type is the
spinlock (``include/asm/spinlock.h``), which is a very simple
single-holder lock: if you can't get the spinlock, you keep trying
(spinning) until you can. Spinlocks are very small and fast, and can be
used anywhere.

The second type is a mutex (``include/linux/mutex.h``): it is like a
spinlock, but you may block holding a mutex. If you can't lock a mutex,
your task will suspend itself, and be woken up when the mutex is
released. This means the CPU can do something else while you are
waiting. There are many cases when you simply can't sleep (see
`What Functions Are Safe To Call From Interrupts?`_),
and so have to use a spinlock instead.

Neither type of lock is recursive: see
`Deadlock: Simple and Advanced`_.

Locks and Uniprocessor Kernels
------------------------------

For kernels compiled without ``CONFIG_SMP``, and without
``CONFIG_PREEMPT``, spinlocks do not exist at all. This is an excellent
design decision: when no-one else can run at the same time, there is no
reason to have a lock.

If the kernel is compiled without ``CONFIG_SMP``, but ``CONFIG_PREEMPT``
is set, then spinlocks simply disable preemption, which is sufficient to
prevent any races. For most purposes, we can think of preemption as
equivalent to SMP, and not worry about it separately.

You should always test your locking code with ``CONFIG_SMP`` and
``CONFIG_PREEMPT`` enabled, even if you don't have an SMP test box,
because it will still catch some kinds of locking bugs.

Mutexes still exist, because they are required for synchronization
between user contexts, as we will see below.

Locking Only In User Context
----------------------------

If you have a data structure which is only ever accessed from user
context, then you can use a simple mutex (``include/linux/mutex.h``) to
protect it. This is the most trivial case: you initialize the mutex.
Then you can call mutex_lock_interruptible() to grab the
mutex, and mutex_unlock() to release it. There is also a
mutex_lock(), which should be avoided, because it will
not return if a signal is received.

Example: ``net/netfilter/nf_sockopt.c`` allows registration of new
setsockopt() and getsockopt() calls, with
nf_register_sockopt(). Registration and de-registration
are only done on module load and unload (and boot time, where there is
no concurrency), and the list of registrations is only consulted for an
unknown setsockopt() or getsockopt() system
call. The ``nf_sockopt_mutex`` is perfect to protect this,
since the setsockopt and getsockopt calls may well sleep.

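As a minimal sketch of this pattern (the ``counter`` names below are
invented for illustration, not taken from the kernel)::

    #include <linux/mutex.h>
    #include <linux/errno.h>

    /* Hypothetical data only ever touched from user (process) context. */
    static DEFINE_MUTEX(counter_mutex);
    static unsigned long counter;

    int counter_increment(void)
    {
            /* May sleep; bail out politely if a signal arrives. */
            if (mutex_lock_interruptible(&counter_mutex))
                    return -ERESTARTSYS;
            counter++;
            mutex_unlock(&counter_mutex);
            return 0;
    }
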
Locking Between User Context and Softirqs
-----------------------------------------

If a softirq shares data with user context, you have two problems.
Firstly, the current user context can be interrupted by a softirq, and
secondly, the critical region could be entered from another CPU. This is
where spin_lock_bh() (``include/linux/spinlock.h``) is
used. It disables softirqs on that CPU, then grabs the lock.
spin_unlock_bh() does the reverse. (The '_bh' suffix is
a historical reference to "Bottom Halves", the old name for software
interrupts. It should really be called spin_lock_softirq() in a
perfect world).

Note that you can also use spin_lock_irq() or
spin_lock_irqsave() here, which stop hardware interrupts
as well: see `Hard IRQ Context`_.

This works perfectly for UP as well: the spin lock vanishes, and this
macro simply becomes local_bh_disable()
(``include/linux/interrupt.h``), which protects you from the softirq
being run.

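A minimal sketch of this pattern (the ``rx_queue`` names here are
invented for illustration, not taken from the kernel)::

    #include <linux/spinlock.h>
    #include <linux/list.h>

    /* Hypothetical list shared between process context and a softirq. */
    static DEFINE_SPINLOCK(rx_lock);
    static LIST_HEAD(rx_queue);

    /* Process-context side: block softirqs on this CPU, then take the lock. */
    void rx_queue_drain(struct list_head *out)
    {
            spin_lock_bh(&rx_lock);
            list_splice_init(&rx_queue, out);
            spin_unlock_bh(&rx_lock);
    }

    /* Softirq side: already in softirq context, so plain spin_lock() is enough. */
    void rx_queue_add(struct list_head *item)
    {
            spin_lock(&rx_lock);
            list_add_tail(item, &rx_queue);
            spin_unlock(&rx_lock);
    }
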
Locking Between User Context and Tasklets
-----------------------------------------

This is exactly the same as above, because tasklets are run
from a softirq.

Locking Between User Context and Timers
---------------------------------------

This, too, is exactly the same as above, because timers are run
from a softirq. From a locking point of view, tasklets and timers
are identical.

Locking Between Tasklets/Timers
-------------------------------

Sometimes a tasklet or timer might want to share data with another
tasklet or timer.

The Same Tasklet/Timer
~~~~~~~~~~~~~~~~~~~~~~

Since a tasklet is never run on two CPUs at once, you don't need to
worry about your tasklet being reentrant (running twice at once), even
on SMP.

Different Tasklets/Timers
~~~~~~~~~~~~~~~~~~~~~~~~~

If another tasklet/timer wants to share data with your tasklet or timer,
you will both need to use spin_lock() and
spin_unlock() calls. spin_lock_bh() is
unnecessary here, as you are already in a tasklet, and none will be run
on the same CPU.

Locking Between Softirqs
------------------------

Often a softirq might want to share data with itself or a tasklet/timer.

The Same Softirq
~~~~~~~~~~~~~~~~

The same softirq can run on the other CPUs: you can use a per-CPU array
(see `Per-CPU Data`_) for better performance, as sketched below. If you're
going so far as to use a softirq, you probably care about scalable
performance enough to justify the extra complexity.

You'll need to use spin_lock() and
spin_unlock() for shared data.

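As a sketch of the per-CPU alternative (the ``my_softirq`` names are
invented for this illustration), each CPU keeps its own counter so the
fast path needs no lock at all::

    #include <linux/percpu.h>
    #include <linux/cpumask.h>

    /* Hypothetical per-softirq statistics kept per CPU: each CPU only
     * ever touches its own copy, so no lock is required to update it. */
    static DEFINE_PER_CPU(unsigned long, my_softirq_hits);

    static void my_softirq_count_hit(void)
    {
            /* Safe from softirq context: we stay on this CPU. */
            this_cpu_inc(my_softirq_hits);
    }

    static unsigned long my_softirq_total(void)
    {
            unsigned long sum = 0;
            int cpu;

            /* The sum is approximate unless the softirq is quiesced first. */
            for_each_possible_cpu(cpu)
                    sum += per_cpu(my_softirq_hits, cpu);
            return sum;
    }
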
Different Softirqs
~~~~~~~~~~~~~~~~~~

You'll need to use spin_lock() and
spin_unlock() for shared data, whether it be a timer,
tasklet, different softirq or the same or another softirq: any of them
could be running on a different CPU.

Hard IRQ Context
================

Hardware interrupts usually communicate with a tasklet or softirq.
Frequently this involves putting work in a queue, which the softirq will
take out.

Locking Between Hard IRQ and Softirqs/Tasklets
----------------------------------------------

If a hardware irq handler shares data with a softirq, you have two
concerns. Firstly, the softirq processing can be interrupted by a
hardware interrupt, and secondly, the critical region could be entered
by a hardware interrupt on another CPU. This is where
spin_lock_irq() is used. It is defined to disable
interrupts on that CPU, then grab the lock.
spin_unlock_irq() does the reverse.

The irq handler does not need to use spin_lock_irq(), because
the softirq cannot run while the irq handler is running: it can use
spin_lock(), which is slightly faster. The only exception
would be if a different hardware irq handler uses the same lock:
spin_lock_irq() will stop that from interrupting us.

This works perfectly for UP as well: the spin lock vanishes, and this
macro simply becomes local_irq_disable()
(``include/asm/smp.h``), which protects you from the softirq/tasklet/BH
being run.

spin_lock_irqsave() (``include/linux/spinlock.h``) is a
variant which saves whether interrupts were on or off in a flags word,
which is passed to spin_unlock_irqrestore(). This means
that the same code can be used inside a hard irq handler (where
interrupts are already off) and in softirqs (where the irq disabling is
required).

Note that softirqs (and hence tasklets and timers) are run on return
from hardware interrupts, so spin_lock_irq() also stops
these. In that sense, spin_lock_irqsave() is the most
general and powerful locking function.

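Here is a small sketch of that usage (the ``pending_work`` queue and the
function name are invented for illustration)::

    #include <linux/spinlock.h>
    #include <linux/list.h>

    /* Hypothetical queue shared between a hard irq handler and a softirq. */
    static DEFINE_SPINLOCK(queue_lock);
    static LIST_HEAD(pending_work);

    /* Works from any context: in a hard irq handler interrupts are already
     * off, so the save/restore is a no-op; in a softirq it actually
     * disables interrupts around the critical region. */
    void queue_work_item(struct list_head *item)
    {
            unsigned long flags;

            spin_lock_irqsave(&queue_lock, flags);
            list_add_tail(item, &pending_work);
            spin_unlock_irqrestore(&queue_lock, flags);
    }
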
Locking Between Two Hard IRQ Handlers
-------------------------------------

It is rare to have to share data between two IRQ handlers, but if you
do, spin_lock_irqsave() should be used: it is
architecture-specific whether all interrupts are disabled inside irq
handlers themselves.

Cheat Sheet For Locking
=======================

Pete Zaitcev gives the following summary:

-  If you are in a process context (any syscall) and want to lock another
   process out, use a mutex. You can take a mutex and sleep
   (``copy_from_user()`` or ``kmalloc(x,GFP_KERNEL)``).

-  Otherwise (== data can be touched in an interrupt), use
   spin_lock_irqsave() and
   spin_unlock_irqrestore().

-  Avoid holding a spinlock for more than 5 lines of code and across any
   function call (except accessors like readb()).

Table of Minimum Requirements
-----------------------------

The following table lists the **minimum** locking requirements between
various contexts. In some cases, the same context can only be running on
one CPU at a time, so no locking is required for that context (eg. a
particular thread can only run on one CPU at a time, but if it needs to
share data with another thread, locking is required).

Remember the advice above: you can always use
spin_lock_irqsave(), which is a superset of all other
spinlock primitives.

============== ============= ============= ============= ============= ============= ============= ============= ============= ============== ==============
.              IRQ Handler A IRQ Handler B Softirq A     Softirq B     Tasklet A     Tasklet B     Timer A       Timer B       User Context A User Context B
============== ============= ============= ============= ============= ============= ============= ============= ============= ============== ==============
IRQ Handler A  None
IRQ Handler B  SLIS          None
Softirq A      SLI           SLI           SL
Softirq B      SLI           SLI           SL            SL
Tasklet A      SLI           SLI           SL            SL            None
Tasklet B      SLI           SLI           SL            SL            SL            None
Timer A        SLI           SLI           SL            SL            SL            SL            None
Timer B        SLI           SLI           SL            SL            SL            SL            SL            None
User Context A SLI           SLI           SLBH          SLBH          SLBH          SLBH          SLBH          SLBH          None
User Context B SLI           SLI           SLBH          SLBH          SLBH          SLBH          SLBH          SLBH          MLI            None
============== ============= ============= ============= ============= ============= ============= ============= ============= ============== ==============

Table: Table of Locking Requirements

+--------+----------------------------+
| SLIS   | spin_lock_irqsave          |
+--------+----------------------------+
| SLI    | spin_lock_irq              |
+--------+----------------------------+
| SL     | spin_lock                  |
+--------+----------------------------+
| SLBH   | spin_lock_bh               |
+--------+----------------------------+
| MLI    | mutex_lock_interruptible   |
+--------+----------------------------+

Table: Legend for Locking Requirements Table

The trylock Functions
=====================

There are functions that try to acquire a lock only once and immediately
return a value telling about success or failure to acquire the lock.
They can be used if you need no access to the data protected with the
lock when some other thread is holding the lock. You should acquire the
lock later if you then need access to the data protected with the lock.

spin_trylock() does not spin but returns non-zero if it
acquires the spinlock on the first try or 0 if not. This function can be
used in all contexts like spin_lock(): you must have
disabled the contexts that might interrupt you and acquire the spin
lock.

mutex_trylock() does not suspend your task but returns
non-zero if it could lock the mutex on the first try or 0 if not. This
function cannot be safely used in hardware or software interrupt
contexts despite not sleeping.

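For example, a best-effort update that must never wait for the lock could
use spin_trylock() like this (a sketch with hypothetical ``stats``
names)::

    #include <linux/spinlock.h>

    /* Hypothetical statistics protected by a spinlock; updating them is
     * optional, so we would rather skip an update than spin for the lock. */
    static DEFINE_SPINLOCK(stats_lock);
    static unsigned long stats_handled;

    void stats_update(void)
    {
            /* Non-zero means we got the lock on the first try. */
            if (!spin_trylock(&stats_lock))
                    return;
            stats_handled++;
            spin_unlock(&stats_lock);
    }
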
Common Examples
===============

Let's step through a simple example: a cache of number to name
mappings. The cache keeps a count of how often each of the objects is
used, and when it gets full, throws out the least used one.

All In User Context
-------------------

For our first example, we assume that all operations are in user
context (ie. from system calls), so we can sleep. This means we can use
a mutex to protect the cache and all the objects within it. Here is the
code::

    #include <linux/list.h>
    #include <linux/slab.h>
    #include <linux/string.h>
    #include <linux/mutex.h>
    #include <asm/errno.h>

    struct object
    {
            struct list_head list;
            int id;
            char name[32];
            int popularity;
    };

    /* Protects the cache, cache_num, and the objects within it */
    static DEFINE_MUTEX(cache_lock);
    static LIST_HEAD(cache);
    static unsigned int cache_num = 0;
    #define MAX_CACHE_SIZE 10

    /* Must be holding cache_lock */
    static struct object *__cache_find(int id)
    {
            struct object *i;

            list_for_each_entry(i, &cache, list)
                    if (i->id == id) {
                            i->popularity++;
                            return i;
                    }
            return NULL;
    }

    /* Must be holding cache_lock */
    static void __cache_delete(struct object *obj)
    {
            BUG_ON(!obj);
            list_del(&obj->list);
            kfree(obj);
            cache_num--;
    }

    /* Must be holding cache_lock */
    static void __cache_add(struct object *obj)
    {
            list_add(&obj->list, &cache);
            if (++cache_num > MAX_CACHE_SIZE) {
                    struct object *i, *outcast = NULL;
                    list_for_each_entry(i, &cache, list) {
                            if (!outcast || i->popularity < outcast->popularity)
                                    outcast = i;
                    }
                    __cache_delete(outcast);
            }
    }

    int cache_add(int id, const char *name)
    {
            struct object *obj;

            if ((obj = kmalloc(sizeof(*obj), GFP_KERNEL)) == NULL)
                    return -ENOMEM;

            strscpy(obj->name, name, sizeof(obj->name));
            obj->id = id;
            obj->popularity = 0;

            mutex_lock(&cache_lock);
            __cache_add(obj);
            mutex_unlock(&cache_lock);
            return 0;
    }

    void cache_delete(int id)
    {
            mutex_lock(&cache_lock);
            __cache_delete(__cache_find(id));
            mutex_unlock(&cache_lock);
    }

    int cache_find(int id, char *name)
    {
            struct object *obj;
            int ret = -ENOENT;

            mutex_lock(&cache_lock);
            obj = __cache_find(id);
            if (obj) {
                    ret = 0;
                    strcpy(name, obj->name);
            }
            mutex_unlock(&cache_lock);
            return ret;
    }

Note that we always make sure we have the cache_lock when we add,
delete, or look up the cache: both the cache infrastructure itself and
the contents of the objects are protected by the lock. In this case it's
easy, since we copy the data for the user, and never let them access the
objects directly.

There is a slight (and common) optimization here: in
cache_add() we set up the fields of the object before
grabbing the lock. This is safe, as no-one else can access it until we
put it in cache.

Accessing From Interrupt Context
--------------------------------

Now consider the case where cache_find() can be called
from interrupt context: either a hardware interrupt or a softirq. An
example would be a timer which deletes objects from the cache.

The change is shown below, in standard patch format: the ``-`` are lines
which are taken away, and the ``+`` are lines which are added.

::

    --- cache.c.usercontext 2003-12-09 13:58:54.000000000 +1100
    +++ cache.c.interrupt   2003-12-09 14:07:49.000000000 +1100
    @@ -12,7 +12,7 @@
             int popularity;
     };

    -static DEFINE_MUTEX(cache_lock);
    +static DEFINE_SPINLOCK(cache_lock);
     static LIST_HEAD(cache);
     static unsigned int cache_num = 0;
     #define MAX_CACHE_SIZE 10
    @@ -55,6 +55,7 @@
     int cache_add(int id, const char *name)
     {
             struct object *obj;
    +        unsigned long flags;

             if ((obj = kmalloc(sizeof(*obj), GFP_KERNEL)) == NULL)
                     return -ENOMEM;
    @@ -63,30 +64,33 @@
             obj->id = id;
             obj->popularity = 0;

    -        mutex_lock(&cache_lock);
    +        spin_lock_irqsave(&cache_lock, flags);
             __cache_add(obj);
    -        mutex_unlock(&cache_lock);
    +        spin_unlock_irqrestore(&cache_lock, flags);
             return 0;
     }

     void cache_delete(int id)
     {
    -        mutex_lock(&cache_lock);
    +        unsigned long flags;
    +
    +        spin_lock_irqsave(&cache_lock, flags);
             __cache_delete(__cache_find(id));
    -        mutex_unlock(&cache_lock);
    +        spin_unlock_irqrestore(&cache_lock, flags);
     }

     int cache_find(int id, char *name)
     {
             struct object *obj;
             int ret = -ENOENT;
    +        unsigned long flags;

    -        mutex_lock(&cache_lock);
    +        spin_lock_irqsave(&cache_lock, flags);
             obj = __cache_find(id);
             if (obj) {
                     ret = 0;
                     strcpy(name, obj->name);
             }
    -        mutex_unlock(&cache_lock);
    +        spin_unlock_irqrestore(&cache_lock, flags);
             return ret;
     }

Note that the spin_lock_irqsave() will turn off
interrupts if they are on, otherwise does nothing (if we are already in
an interrupt handler), hence these functions are safe to call from any
context.

Unfortunately, cache_add() calls kmalloc()
with the ``GFP_KERNEL`` flag, which is only legal in user context. I
have assumed that cache_add() is still only called in
user context, otherwise this should become a parameter to
cache_add().

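If cache_add() did need to work from interrupt context as well, the
allocation flag could be passed in by the caller, roughly like this (a
sketch of the parameterization the paragraph above suggests, not a change
the document itself makes)::

    int cache_add(int id, const char *name, gfp_t gfp)
    {
            struct object *obj;
            unsigned long flags;

            /* Callers in user context pass GFP_KERNEL; callers that cannot
             * sleep (interrupt context) pass GFP_ATOMIC. */
            obj = kmalloc(sizeof(*obj), gfp);
            if (!obj)
                    return -ENOMEM;

            strscpy(obj->name, name, sizeof(obj->name));
            obj->id = id;
            obj->popularity = 0;

            spin_lock_irqsave(&cache_lock, flags);
            __cache_add(obj);
            spin_unlock_irqrestore(&cache_lock, flags);
            return 0;
    }
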
Exposing Objects Outside This File
----------------------------------

If our objects contained more information, it might not be sufficient to
copy the information in and out: other parts of the code might want to
keep pointers to these objects, for example, rather than looking up the
id every time. This produces two problems.

The first problem is that we use the ``cache_lock`` to protect objects:
we'd need to make this non-static so the rest of the code can use it.
This makes locking trickier, as it is no longer all in one place.

The second problem is the lifetime problem: if another structure keeps a
pointer to an object, it presumably expects that pointer to remain
valid. Unfortunately, this is only guaranteed while you hold the lock,
otherwise someone might call cache_delete() and even
worse, add another object, re-using the same address.

As there is only one lock, you can't hold it forever: no-one else would
get any work done.

The solution to this problem is to use a reference count: everyone who
has a pointer to the object increases it when they first get the object,
and drops the reference count when they're finished with it. Whoever
drops it to zero knows it is unused, and can actually delete it.

Here is the code::

    --- cache.c.interrupt   2003-12-09 14:25:43.000000000 +1100
    +++ cache.c.refcnt  2003-12-09 14:33:05.000000000 +1100
    @@ -7,6 +7,7 @@
     struct object
     {
             struct list_head list;
    +        unsigned int refcnt;
             int id;
             char name[32];
             int popularity;
    @@ -17,6 +18,35 @@
     static unsigned int cache_num = 0;
     #define MAX_CACHE_SIZE 10

    +static void __object_put(struct object *obj)
    +{
    +        if (--obj->refcnt == 0)
    +                kfree(obj);
    +}
    +
    +static void __object_get(struct object *obj)
    +{
    +        obj->refcnt++;
    +}
    +
    +void object_put(struct object *obj)
    +{
    +        unsigned long flags;
    +
    +        spin_lock_irqsave(&cache_lock, flags);
    +        __object_put(obj);
    +        spin_unlock_irqrestore(&cache_lock, flags);
    +}
    +
    +void object_get(struct object *obj)
    +{
    +        unsigned long flags;
    +
    +        spin_lock_irqsave(&cache_lock, flags);
    +        __object_get(obj);
    +        spin_unlock_irqrestore(&cache_lock, flags);
    +}
    +
     /* Must be holding cache_lock */
     static struct object *__cache_find(int id)
     {
    @@ -35,6 +65,7 @@
     {
             BUG_ON(!obj);
             list_del(&obj->list);
    +        __object_put(obj);
             cache_num--;
     }

    @@ -63,6 +94,7 @@
             strscpy(obj->name, name, sizeof(obj->name));
             obj->id = id;
             obj->popularity = 0;
    +        obj->refcnt = 1; /* The cache holds a reference */

             spin_lock_irqsave(&cache_lock, flags);
             __cache_add(obj);
    @@ -79,18 +111,15 @@
             spin_unlock_irqrestore(&cache_lock, flags);
     }

    -int cache_find(int id, char *name)
    +struct object *cache_find(int id)
     {
             struct object *obj;
    -        int ret = -ENOENT;
             unsigned long flags;

             spin_lock_irqsave(&cache_lock, flags);
             obj = __cache_find(id);
    -        if (obj) {
    -                ret = 0;
    -                strcpy(name, obj->name);
    -        }
    +        if (obj)
    +                __object_get(obj);
             spin_unlock_irqrestore(&cache_lock, flags);
    -        return ret;
    +        return obj;
     }

We encapsulate the reference counting in the standard 'get' and 'put'
functions. Now we can return the object itself from
cache_find() which has the advantage that the user can
now sleep holding the object (eg. to copy_to_user() the
name to userspace).

The other point to note is that I said a reference should be held for
every pointer to the object: thus the reference count is 1 when first
inserted into the cache. In some versions the framework does not hold a
reference count, but they are more complicated.

Using Atomic Operations For The Reference Count
~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~

In practice, :c:type:`atomic_t` would usually be used for refcnt. There are a
number of atomic operations defined in ``include/asm/atomic.h``: these
are guaranteed to be seen atomically from all CPUs in the system, so no
lock is required. In this case, it is simpler than using spinlocks,
although for anything non-trivial using spinlocks is clearer. The
atomic_inc() and atomic_dec_and_test()
are used instead of the standard increment and decrement operators, and
the lock is no longer used to protect the reference count itself.

::

    --- cache.c.refcnt  2003-12-09 15:00:35.000000000 +1100
    +++ cache.c.refcnt-atomic   2003-12-11 15:49:42.000000000 +1100
    @@ -7,7 +7,7 @@
     struct object
     {
             struct list_head list;
    -        unsigned int refcnt;
    +        atomic_t refcnt;
             int id;
             char name[32];
             int popularity;
    @@ -18,33 +18,15 @@
     static unsigned int cache_num = 0;
     #define MAX_CACHE_SIZE 10

    -static void __object_put(struct object *obj)
    -{
    -        if (--obj->refcnt == 0)
    -                kfree(obj);
    -}
    -
    -static void __object_get(struct object *obj)
    -{
    -        obj->refcnt++;
    -}
    -
     void object_put(struct object *obj)
     {
    -        unsigned long flags;
    -
    -        spin_lock_irqsave(&cache_lock, flags);
    -        __object_put(obj);
    -        spin_unlock_irqrestore(&cache_lock, flags);
    +        if (atomic_dec_and_test(&obj->refcnt))
    +                kfree(obj);
     }

     void object_get(struct object *obj)
     {
    -        unsigned long flags;
    -
    -        spin_lock_irqsave(&cache_lock, flags);
    -        __object_get(obj);
    -        spin_unlock_irqrestore(&cache_lock, flags);
    +        atomic_inc(&obj->refcnt);
     }

     /* Must be holding cache_lock */
    @@ -65,7 +47,7 @@
     {
             BUG_ON(!obj);
             list_del(&obj->list);
    -        __object_put(obj);
    +        object_put(obj);
             cache_num--;
     }

    @@ -94,7 +76,7 @@
             strscpy(obj->name, name, sizeof(obj->name));
             obj->id = id;
             obj->popularity = 0;
    -        obj->refcnt = 1; /* The cache holds a reference */
    +        atomic_set(&obj->refcnt, 1); /* The cache holds a reference */

             spin_lock_irqsave(&cache_lock, flags);
             __cache_add(obj);
    @@ -119,7 +101,7 @@
             spin_lock_irqsave(&cache_lock, flags);
             obj = __cache_find(id);
             if (obj)
    -                __object_get(obj);
    +                object_get(obj);
             spin_unlock_irqrestore(&cache_lock, flags);
             return obj;
     }

Protecting The Objects Themselves
---------------------------------

In these examples, we assumed that the objects (except the reference
counts) never changed once they are created. If we wanted to allow the
name to change, there are three possibilities:

-  You can make ``cache_lock`` non-static, and tell people to grab that
   lock before changing the name in any object.

-  You can provide a cache_obj_rename() which grabs this
   lock and changes the name for the caller, and tell everyone to use
   that function.

-  You can make the ``cache_lock`` protect only the cache itself, and
   use another lock to protect the name.

Theoretically, you can make the locks as fine-grained as one lock for
every field, for every object. In practice, the most common variants
are:

-  One lock which protects the infrastructure (the ``cache`` list in
   this example) and all the objects. This is what we have done so far.

-  One lock which protects the infrastructure (including the list
   pointers inside the objects), and one lock inside the object which
   protects the rest of that object.

-  Multiple locks to protect the infrastructure (eg. one lock per hash
   chain), possibly with a separate per-object lock.

Here is the "lock-per-object" implementation:

::

    --- cache.c.refcnt-atomic   2003-12-11 15:50:54.000000000 +1100
    +++ cache.c.perobjectlock   2003-12-11 17:15:03.000000000 +1100
    @@ -6,11 +6,17 @@

     struct object
     {
    +        /* These two protected by cache_lock. */
             struct list_head list;
    +        int popularity;
    +
             atomic_t refcnt;
    +
    +        /* Doesn't change once created. */
             int id;
    +
    +        spinlock_t lock; /* Protects the name */
             char name[32];
    -        int popularity;
     };

     static DEFINE_SPINLOCK(cache_lock);
    @@ -77,6 +84,7 @@
             obj->id = id;
             obj->popularity = 0;
             atomic_set(&obj->refcnt, 1); /* The cache holds a reference */
    +        spin_lock_init(&obj->lock);

             spin_lock_irqsave(&cache_lock, flags);
             __cache_add(obj);

Note that I decided that the popularity count should be protected by the
``cache_lock`` rather than the per-object lock: this is because it (like
the :c:type:`struct list_head <list_head>` inside the object)
is logically part of the infrastructure. This way, I don't need to grab
the lock of every object in __cache_add() when seeking
the least popular.

I also decided that the id member is unchangeable, so I don't need to
grab each object lock in __cache_find() to examine the
id: the object lock is only used by a caller who wants to read or write
the name field.

Note also that I added a comment describing what data was protected by
which locks. This is extremely important, as it describes the runtime
behavior of the code, and can be hard to gain from just reading. And as
Alan Cox says, “Lock data, not code”.

Common Problems
===============

Deadlock: Simple and Advanced
-----------------------------

There is a coding bug where a piece of code tries to grab a spinlock
twice: it will spin forever, waiting for the lock to be released
(spinlocks, rwlocks and mutexes are not recursive in Linux). This is
trivial to diagnose: not a
stay-up-five-nights-talk-to-fluffy-code-bunnies kind of problem.

For a slightly more complex case, imagine you have a region shared by a
softirq and user context. If you use a spin_lock() call
to protect it, it is possible that the user context will be interrupted
by the softirq while it holds the lock, and the softirq will then spin
forever trying to get the same lock.

Both of these are called deadlock, and as shown above, it can occur even
with a single CPU (although not on UP compiles, since spinlocks vanish
on kernel compiles with ``CONFIG_SMP``\ =n. You'll still get data
corruption in the second example).

This complete lockup is easy to diagnose: on SMP boxes the watchdog
timer or compiling with ``DEBUG_SPINLOCK`` set
(``include/linux/spinlock.h``) will show this up immediately when it
happens.

A more complex problem is the so-called 'deadly embrace', involving two
or more locks. Say you have a hash table: each entry in the table is a
spinlock, and a chain of hashed objects. Inside a softirq handler, you
sometimes want to alter an object from one place in the hash to another:
you grab the spinlock of the old hash chain and the spinlock of the new
hash chain, and delete the object from the old one, and insert it in the
new one.

There are two problems here. First, if your code ever tries to move the
object to the same chain, it will deadlock with itself as it tries to
lock it twice. Secondly, if the same softirq on another CPU is trying to
move another object in the reverse direction, the following could
happen:

+-----------------------+-----------------------+
| CPU 1                 | CPU 2                 |
+=======================+=======================+
| Grab lock A -> OK     | Grab lock B -> OK     |
+-----------------------+-----------------------+
| Grab lock B -> spin   | Grab lock A -> spin   |
+-----------------------+-----------------------+

Table: Consequences

The two CPUs will spin forever, waiting for the other to give up their
lock. It will look, smell, and feel like a crash.

Preventing Deadlock
-------------------

Textbooks will tell you that if you always lock in the same order, you
will never get this kind of deadlock. Practice will tell you that this
approach doesn't scale: when I create a new lock, I don't understand
enough of the kernel to figure out where in the 5000 lock hierarchy it
will fit.

The best locks are encapsulated: they never get exposed in headers, and
are never held around calls to non-trivial functions outside the same
file. You can read through this code and see that it will never
deadlock, because it never tries to grab another lock while it has that
one. People using your code don't even need to know you are using a
lock.

A classic problem here is when you provide callbacks or hooks: if you
call these with the lock held, you risk simple deadlock, or a deadly
embrace (who knows what the callback will do?).

Overzealous Prevention Of Deadlocks
~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~

Deadlocks are problematic, but not as bad as data corruption. Code which
grabs a read lock, searches a list, fails to find what it wants, drops
the read lock, grabs a write lock and inserts the object has a race
condition.

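To make that race concrete, here is a sketch (with made-up ``thing``
names and a ``rw_semaphore``, purely for illustration) of the broken
"drop the read lock, then take the write lock" pattern, and the obvious
fix of re-checking under the write lock::

    #include <linux/rwsem.h>
    #include <linux/list.h>

    struct thing {
            struct list_head list;
            int id;
    };

    /* Hypothetical list protected by a reader/writer semaphore. */
    static DECLARE_RWSEM(thing_sem);
    static LIST_HEAD(thing_list);

    /* Caller must hold thing_sem (read or write). */
    static struct thing *__thing_find(int id)
    {
            struct thing *t;

            list_for_each_entry(t, &thing_list, list)
                    if (t->id == id)
                            return t;
            return NULL;
    }

    /* Racy: between up_read() and down_write(), someone else may insert id. */
    int thing_add_racy(struct thing *new)
    {
            down_read(&thing_sem);
            if (__thing_find(new->id)) {
                    up_read(&thing_sem);
                    return -EEXIST;
            }
            up_read(&thing_sem);

            down_write(&thing_sem);        /* too late: another task may have won */
            list_add(&new->list, &thing_list);
            up_write(&thing_sem);
            return 0;
    }

    /* Correct: do the check and the insert under the same write lock. */
    int thing_add(struct thing *new)
    {
            int ret = -EEXIST;

            down_write(&thing_sem);
            if (!__thing_find(new->id)) {
                    list_add(&new->list, &thing_list);
                    ret = 0;
            }
            up_write(&thing_sem);
            return ret;
    }
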
Racing Timers: A Kernel Pastime
-------------------------------

Timers can produce their own special problems with races. Consider a
collection of objects (list, hash, etc) where each object has a timer
which is due to destroy it.

If you want to destroy the entire collection (say on module removal),
you might do the following::

            /* THIS CODE BAD BAD BAD BAD: IF IT WAS ANY WORSE IT WOULD USE
               HUNGARIAN NOTATION */
            spin_lock_bh(&list_lock);

            while (list) {
                    struct foo *next = list->next;
                    timer_delete(&list->timer);
                    kfree(list);
                    list = next;
            }

            spin_unlock_bh(&list_lock);


Sooner or later, this will crash on SMP, because a timer can have just
gone off before the spin_lock_bh(), and it will only get
the lock after we spin_unlock_bh(), and then try to free
the element (which has already been freed!).

This can be avoided by checking the result of
timer_delete(): if it returns 1, the timer has been deleted.
If 0, it means (in this case) that it is currently running, so we can
do::

            retry:
                    spin_lock_bh(&list_lock);

                    while (list) {
                            struct foo *next = list->next;
                            if (!timer_delete(&list->timer)) {
                                    /* Give timer a chance to delete this */
                                    spin_unlock_bh(&list_lock);
                                    goto retry;
                            }
                            kfree(list);
                            list = next;
                    }

                    spin_unlock_bh(&list_lock);


Another common problem is deleting timers which restart themselves (by
calling add_timer() at the end of their timer function).
Because this is a fairly common case which is prone to races, you should
use timer_delete_sync() (``include/linux/timer.h``) to handle this case.

Before freeing a timer, timer_shutdown() or timer_shutdown_sync() should be
called, which will keep it from being rearmed. Any subsequent attempt to
rearm the timer will be silently ignored by the core code.

1014                                                  
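As a minimal sketch of such a teardown (the ``struct foo`` with an
embedded timer called ``timer`` is assumed here, not taken from the text
above)::

            /* After timer_shutdown_sync() returns, the timer function is
               not running anywhere and the timer cannot be rearmed, so
               freeing the object is safe. Don't call this from the timer
               function itself, nor while holding a lock the timer
               function takes. */
            timer_shutdown_sync(&foo->timer);
            kfree(foo);
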
Locking Speed
=============

There are three main things to worry about when considering speed of
some code which does locking. First is concurrency: how many things are
going to be waiting while someone else is holding a lock. Second is the
time taken to actually acquire and release an uncontended lock. Third is
using fewer, or smarter locks. I'm assuming that the lock is used fairly
often: otherwise, you wouldn't be concerned about efficiency.

Concurrency depends on how long the lock is usually held: you should
hold the lock for as long as needed, but no longer. In the cache
example, we always create the object without the lock held, and then
grab the lock only when we are ready to insert it in the list.

Acquisition times depend on how much damage the lock operations do to
the pipeline (pipeline stalls) and how likely it is that this CPU was
the last one to grab the lock (ie. is the lock cache-hot for this CPU):
on a machine with more CPUs, this likelihood drops fast. Consider a
700MHz Intel Pentium III: an instruction takes about 0.7ns, an atomic
increment takes about 58ns, a lock which is cache-hot on this CPU takes
160ns, and a cacheline transfer from another CPU takes an additional 170
to 360ns. (These figures from Paul McKenney's `Linux Journal RCU
article <http://www.linuxjournal.com/article.php?sid=6993>`_).

These two aims conflict: holding a lock for a short time might be done
by splitting locks into parts (such as in our final per-object-lock
example), but this increases the number of lock acquisitions, and the
results are often slower than having a single lock. This is another
reason to advocate locking simplicity.

The third concern is addressed below: there are some methods to reduce
the amount of locking which needs to be done.

Read/Write Lock Variants
------------------------

Both spinlocks and mutexes have read/write variants: ``rwlock_t`` and
:c:type:`struct rw_semaphore <rw_semaphore>`. These divide
users into two classes: the readers and the writers. If you are only
reading the data, you can get a read lock, but to write to the data you
need the write lock. Many people can hold a read lock, but a writer must
be sole holder.

If your code divides neatly along reader/writer lines (as our cache code
does), and the lock is held by readers for significant lengths of time,
using these locks can help. They are slightly slower than the normal
locks though, so in practice ``rwlock_t`` is not usually worthwhile.

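For illustration, a minimal reader/writer sketch (``foo_lock``,
``foo_list`` and ``do_something_with()`` are invented names)::

            static DEFINE_RWLOCK(foo_lock);

            /* Many readers may hold the lock at the same time. */
            read_lock(&foo_lock);
            list_for_each_entry(i, &foo_list, list)
                    do_something_with(i);
            read_unlock(&foo_lock);

            /* A writer excludes all readers and other writers. */
            write_lock(&foo_lock);
            list_add(&new->list, &foo_list);
            write_unlock(&foo_lock);
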
Avoiding Locks: Read Copy Update
--------------------------------

There is a special method of read/write locking called Read Copy
Update. Using RCU, the readers can avoid taking a lock altogether: as we
expect our cache to be read more often than updated (otherwise the cache
is a waste of time), it is a candidate for this optimization.

How do we get rid of read locks? Getting rid of read locks means that
writers may be changing the list underneath the readers. That is
actually quite simple: we can read a linked list while an element is
being added if the writer adds the element very carefully. For example,
adding ``new`` to a single linked list called ``list``::

            new->next = list->next;
            wmb();
            list->next = new;

The wmb() is a write memory barrier. It ensures that the
first operation (setting the new element's ``next`` pointer) is complete
and will be seen by all CPUs before the second operation (putting
the new element into the list) happens. This is important, since modern
compilers and modern CPUs can both reorder instructions unless told
otherwise: we want a reader to either not see the new element at all, or
see the new element with the ``next`` pointer correctly pointing at the
rest of the list.

Fortunately, there is a function to do this for standard
:c:type:`struct list_head <list_head>` lists:
list_add_rcu() (``include/linux/list.h``).

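In the cache example's terms, the hand-rolled publish above becomes a
single call; a sketch, assuming the element is fully initialized first
and that additions are still serialized by ``cache_lock``::

            /* list_add_rcu() contains the needed barrier: readers either
               miss the element entirely or see it fully set up. */
            obj->id = id;
            obj->popularity = 0;
            list_add_rcu(&obj->list, &cache);
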
Removing an element from the list is even simpler: we replace the
pointer to the old element with a pointer to its successor, and readers
will either see it, or skip over it.

::

            list->next = old->next;


There is list_del_rcu() (``include/linux/list.h``) which
does this (the normal version poisons the old object, which we don't
want).

The reader must also be careful: some CPUs can look at the ``next``
pointer to start reading the contents of the next element early, but
don't realize that the pre-fetched contents are wrong when the ``next``
pointer changes underneath them. Once again, there is a
list_for_each_entry_rcu() (``include/linux/list.h``)
to help you. Of course, writers can just use
list_for_each_entry(), since there cannot be two
simultaneous writers.

Our final dilemma is this: when can we actually destroy the removed
element? Remember, a reader might be stepping through this element in
the list right now: if we free this element and the ``next`` pointer
changes, the reader will jump off into garbage and crash. We need to
wait until we know that all the readers who were traversing the list
when we deleted the element are finished. We use
call_rcu() to register a callback which will actually
destroy the object once all pre-existing readers are finished.
Alternatively, synchronize_rcu() may be used to block
until all pre-existing readers are finished.

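When the callback would do nothing but free the memory, kfree_rcu() is
the usual shortcut; a sketch, assuming the object embeds a
``struct rcu_head`` called ``rcu`` (as the patch below adds) and that no
reference counting is involved::

            list_del_rcu(&obj->list);
            kfree_rcu(obj, rcu);    /* freed once all readers finish */
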
But how does Read Copy Update know when the readers are finished? The
method is this: firstly, the readers always traverse the list inside
rcu_read_lock()/rcu_read_unlock() pairs:
these simply disable preemption so the reader won't go to sleep while
reading the list.

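Putting the reader side together, a lock-free lookup in the cache
example's terms looks something like this sketch::

            rcu_read_lock();
            list_for_each_entry_rcu(i, &cache, list) {
                    if (i->id == id) {
                            /* Use i here, but don't sleep, and don't keep
                               the pointer after rcu_read_unlock() without
                               taking a reference. */
                            break;
                    }
            }
            rcu_read_unlock();
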
RCU then waits until every other CPU has slept at least once: since
readers cannot sleep, we know that any readers which were traversing the
list during the deletion are finished, and the callback is triggered.
The real Read Copy Update code is a little more optimized than this, but
this is the fundamental idea.

::

    --- cache.c.perobjectlock   2003-12-11 17:15:03.000000000 +1100
    +++ cache.c.rcupdate    2003-12-11 17:55:14.000000000 +1100
    @@ -1,15 +1,18 @@
     #include <linux/list.h>
     #include <linux/slab.h>
     #include <linux/string.h>
    +#include <linux/rcupdate.h>
     #include <linux/mutex.h>
     #include <asm/errno.h>

     struct object
     {
    -        /* These two protected by cache_lock. */
    +        /* This is protected by RCU */
             struct list_head list;
             int popularity;

    +        struct rcu_head rcu;
    +
             atomic_t refcnt;

             /* Doesn't change once created. */
    @@ -40,7 +43,7 @@
     {
             struct object *i;

    -        list_for_each_entry(i, &cache, list) {
    +        list_for_each_entry_rcu(i, &cache, list) {
                     if (i->id == id) {
                             i->popularity++;
                             return i;
    @@ -49,19 +52,25 @@
             return NULL;
     }

    +/* Final discard done once we know no readers are looking. */
    +static void cache_delete_rcu(void *arg)
    +{
    +        object_put(arg);
    +}
    +
     /* Must be holding cache_lock */
     static void __cache_delete(struct object *obj)
     {
             BUG_ON(!obj);
    -        list_del(&obj->list);
    -        object_put(obj);
    +        list_del_rcu(&obj->list);
             cache_num--;
    +        call_rcu(&obj->rcu, cache_delete_rcu);
     }

     /* Must be holding cache_lock */
     static void __cache_add(struct object *obj)
     {
    -        list_add(&obj->list, &cache);
    +        list_add_rcu(&obj->list, &cache);
             if (++cache_num > MAX_CACHE_SIZE) {
                     struct object *i, *outcast = NULL;
                     list_for_each_entry(i, &cache, list) {
    @@ -104,12 +114,11 @@
     struct object *cache_find(int id)
     {
             struct object *obj;
    -        unsigned long flags;

    -        spin_lock_irqsave(&cache_lock, flags);
    +        rcu_read_lock();
             obj = __cache_find(id);
             if (obj)
                     object_get(obj);
    -        spin_unlock_irqrestore(&cache_lock, flags);
    +        rcu_read_unlock();
             return obj;
     }

Note that the reader will alter the popularity member in
__cache_find(), and now it doesn't hold a lock. One
solution would be to make it an ``atomic_t``, but for this usage, we
don't really care about races: an approximate result is good enough, so
I didn't change it.

The result is that cache_find() requires no
synchronization with any other functions, so is almost as fast on SMP as
it would be on UP.

There is a further optimization possible here: remember our original
cache code, where there were no reference counts and the caller simply
held the lock whenever using the object? This is still possible: if you
hold the lock, no one can delete the object, so you don't need to get
and put the reference count.

Now, because the 'read lock' in RCU is simply disabling preemption, a
caller which always has preemption disabled between calling
cache_find() and object_put() does not
need to actually get and put the reference count: we could expose
__cache_find() by making it non-static, and such
callers could simply call that.

The benefit here is that the reference count is not written to: the
object is not altered in any way, which is much faster on SMP machines
due to caching.

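A sketch of such a caller, assuming __cache_find() were made non-static
as suggested (and keeping this document's premise that disabling
preemption is what the RCU read lock does here; ``use_object()`` is an
invented name)::

            /* With preemption disabled the object cannot be freed under
               us, so no reference count is taken. Must not sleep here. */
            preempt_disable();
            obj = __cache_find(id);
            if (obj)
                    use_object(obj);
            preempt_enable();
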
Per-CPU Data
------------

Another technique for avoiding locking which is used fairly widely is to
duplicate information for each CPU. For example, if you wanted to keep a
count of a common condition, you could use a spin lock and a single
counter. Nice and simple.

If that was too slow (it's usually not, but if you've got a really big
machine to test on and can show that it is), you could instead use a
counter for each CPU, then none of them need an exclusive lock. See
DEFINE_PER_CPU(), get_cpu_var() and
put_cpu_var() (``include/linux/percpu.h``).

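A minimal sketch (``foo_count`` is an invented name; this_cpu_inc() is
the usual modern way to bump such a counter)::

            static DEFINE_PER_CPU(unsigned long, foo_count);

            /* Increment this CPU's copy: no lock needed. */
            this_cpu_inc(foo_count);

            /* A total (only approximate, see below) has to walk every
               CPU's copy. */
            unsigned long total = 0;
            int cpu;

            for_each_possible_cpu(cpu)
                    total += per_cpu(foo_count, cpu);
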
Of particular use for simple per-cpu counters is the ``local_t`` type,
and the cpu_local_inc() and related functions, which are
more efficient than simple code on some architectures
(``include/asm/local.h``).

Note that there is no simple, reliable way of getting an exact value of
such a counter, without introducing more locks. This is not a problem
for some uses.

Data Which Is Mostly Used By An IRQ Handler
-------------------------------------------

If data is always accessed from within the same IRQ handler, you don't
need a lock at all: the kernel already guarantees that the irq handler
will not run simultaneously on multiple CPUs.

Manfred Spraul points out that you can still do this, even if the data
is very occasionally accessed in user context or softirqs/tasklets. The
irq handler doesn't use a lock, and all other accesses are done as so::

        mutex_lock(&lock);
        disable_irq(irq);
        ...
        enable_irq(irq);
        mutex_unlock(&lock);

The disable_irq() prevents the irq handler from running
(and waits for it to finish if it's currently running on other CPUs).
The mutex_lock() prevents any other accesses happening at the same time.
Naturally, this is slower than just a spin_lock_irq()
call, so it only makes sense if this type of access happens extremely
rarely.

What Functions Are Safe To Call From Interrupts?
================================================

Many functions in the kernel sleep (ie. call schedule()) directly or
indirectly: you can never call them while holding a spinlock, or with
preemption disabled. This also means you need to be in user context:
calling them from an interrupt is illegal.

Some Functions Which Sleep
--------------------------

The most common ones are listed below, but you usually have to read the
code to find out if other calls are safe. If everyone else who calls it
can sleep, you probably need to be able to sleep, too. In particular,
registration and deregistration functions usually expect to be called
from user context, and can sleep.

-  Accesses to userspace (see the sketch after this list):

   -  copy_from_user()

   -  copy_to_user()

   -  get_user()

   -  put_user()

-  kmalloc(GFP_KERNEL)

-  mutex_lock_interruptible() and
   mutex_lock()

   There is a mutex_trylock() which does not sleep.
   Still, it must not be used inside interrupt context since its
   implementation is not safe for that. mutex_unlock()
   will also never sleep. It cannot be used in interrupt context either
   since a mutex must be released by the same task that acquired it.

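To see why this matters, here is a sketch of a classic bug and its usual
fix (``foo_lock``, ``foo``, ``tmp`` and ``buf`` are invented names):
copy_to_user() may fault and sleep, which is not allowed while holding a
spinlock::

            /* BAD: copy_to_user() can sleep while faulting in the
               userspace page, but we hold a spinlock. */
            spin_lock(&foo_lock);
            err = copy_to_user(buf, &foo->data, sizeof(foo->data));
            spin_unlock(&foo_lock);

            /* Better: snapshot the data under the lock, then copy it to
               userspace with no lock held. */
            spin_lock(&foo_lock);
            tmp = foo->data;
            spin_unlock(&foo_lock);
            if (copy_to_user(buf, &tmp, sizeof(tmp)))
                    return -EFAULT;
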
Some Functions Which Don't Sleep
--------------------------------

Some functions are safe to call from any context, or holding almost any
lock.

-  printk()

-  kfree()

-  add_timer() and timer_delete()

Mutex API reference
===================

.. kernel-doc:: include/linux/mutex.h
   :internal:

.. kernel-doc:: kernel/locking/mutex.c
   :export:

Futex API reference
===================

.. kernel-doc:: kernel/futex/core.c
   :internal:

.. kernel-doc:: kernel/futex/futex.h
   :internal:

.. kernel-doc:: kernel/futex/pi.c
   :internal:

.. kernel-doc:: kernel/futex/requeue.c
   :internal:

.. kernel-doc:: kernel/futex/waitwake.c
   :internal:

Further reading
===============

-  ``Documentation/locking/spinlocks.rst``: Linus Torvalds' spinlocking
   tutorial in the kernel sources.

-  Unix Systems for Modern Architectures: Symmetric Multiprocessing and
   Caching for Kernel Programmers:

   Curt Schimmel's very good introduction to kernel level locking (not
   written for Linux, but nearly everything applies). The book is
   expensive, but really worth every penny to understand SMP locking.
   [ISBN: 0201633388]

Thanks
======

Thanks to Telsa Gwynne for DocBooking, neatening and adding style.

Thanks to Martin Pool, Philipp Rumpf, Stephen Rothwell, Paul Mackerras,
Ruedi Aschwanden, Alan Cox, Manfred Spraul, Tim Waugh, Pete Zaitcev,
James Morris, Robert Love, Paul McKenney, John Ashby for proofreading,
correcting, flaming, commenting.

Thanks to the cabal for having no influence on this document.

Glossary
========

preemption
  Prior to 2.5, or when ``CONFIG_PREEMPT`` is unset, processes in user
  context inside the kernel would not preempt each other (ie. you had that
  CPU until you gave it up, except for interrupts). With the addition of
  ``CONFIG_PREEMPT`` in 2.5.4, this changed: when in user context, higher
  priority tasks can "cut in": spinlocks were changed to disable
  preemption, even on UP.

bh
  Bottom Half: for historical reasons, functions with '_bh' in them often
  now refer to any software interrupt, e.g. spin_lock_bh()
  blocks any software interrupt on the current CPU. Bottom halves are
  deprecated, and will eventually be replaced by tasklets. Only one bottom
  half will be running at any time.

Hardware Interrupt / Hardware IRQ
  Hardware interrupt request. in_hardirq() returns true in a
  hardware interrupt handler.

Interrupt Context
  Not user context: processing a hardware irq or software irq. Indicated
  by the in_interrupt() macro returning true.

SMP
  Symmetric Multi-Processor: kernels compiled for multiple-CPU machines.
  (``CONFIG_SMP=y``).

Software Interrupt / softirq
  Software interrupt handler. in_hardirq() returns false,
  in_softirq() returns true. Tasklets and softirqs both
  fall into the category of 'software interrupts'.

  Strictly speaking a softirq is one of up to 32 enumerated software
  interrupts which can run on multiple CPUs at once. Sometimes used to
  refer to tasklets as well (ie. all software interrupts).

tasklet
  A dynamically-registrable software interrupt, which is guaranteed to
  only run on one CPU at a time.

timer
  A dynamically-registrable software interrupt, which is run at (or close
  to) a given time. When running, it is just like a tasklet (in fact, they
  are called from the ``TIMER_SOFTIRQ``).

UP
  Uni-Processor: Non-SMP. (``CONFIG_SMP=n``).

User Context
  The kernel executing on behalf of a particular process (ie. a system
  call or trap) or kernel thread. You can tell which process with the
  ``current`` macro. Not to be confused with userspace. Can be
  interrupted by software or hardware interrupts.

Userspace
  A process executing its own code outside the kernel.