==================================
Cache and TLB Flushing Under Linux
==================================

:Author: David S. Miller <davem@redhat.com>

This document describes the cache/tlb flushing interfaces called
by the Linux VM subsystem.  It enumerates each interface,
describes its intended purpose, and states what side effects are
expected after the interface is invoked.

The side effects described below are stated for a uniprocessor
implementation, and describe what is to happen on that single
processor.  The SMP cases are a simple extension: just extend the
definition so that the side effect for a particular interface occurs
on all processors in the system.  Don't let this scare you into
thinking SMP cache/tlb flushing must be inefficient; this is in
fact an area where many optimizations are possible.  For example,
if it can be proven that a user address space has never executed
on a cpu (see mm_cpumask()), one need not perform a flush
for this address space on that cpu.

First, the TLB flushing interfaces, since they are the simplest.  The
"TLB" is abstracted under Linux as something the cpu uses to cache
virtual-->physical address translations obtained from the software
page tables.  This means that if the software page tables change, it
is possible for stale translations to exist in this "TLB" cache.
Therefore, when software page table changes occur, the kernel will
invoke one of the following flush methods _after_ the page table
changes occur:

1) ``void flush_tlb_all(void)``

        The most severe flush of all.  After this interface runs,
        any previous page table modification whatsoever will be
        visible to the cpu.

        This is usually invoked when the kernel page tables are
        changed, since such translations are "global" in nature.
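
        As an illustration only, a software-managed TLB port might
        implement this by invalidating every TLB entry directly.  In
        the sketch below, NUM_TLB_ENTRIES and tlb_write_invalid_entry()
        are hypothetical names, not interfaces of any real port::

            /* Hedged sketch: invalidate each entry of a small,
             * software-managed TLB.  Both names used here are
             * made up for illustration. */
            void flush_tlb_all(void)
            {
                    int i;

                    for (i = 0; i < NUM_TLB_ENTRIES; i++)
                            tlb_write_invalid_entry(i);
            }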

2) ``void flush_tlb_mm(struct mm_struct *mm)``

        This interface flushes an entire user address space from
        the TLB.  After running, this interface must make sure that
        any previous page table modifications for the address space
        'mm' will be visible to the cpu.  That is, after running,
        there will be no entries in the TLB for 'mm'.

        This interface is used to handle whole address space
        page table operations such as what happens during
        fork and exec.
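
        One common strategy, sketched here without claiming it is any
        particular port's method, is to tag TLB entries with an
        address-space ID (ASID) and "flush" by handing 'mm' a fresh
        ASID so stale entries can never match again.  The
        allocate_new_asid() and load_mmu_asid() helpers and the
        mm->context layout are assumptions of the sketch::

            /* Hedged sketch: retire the old ASID instead of
             * walking the TLB entry by entry. */
            void flush_tlb_mm(struct mm_struct *mm)
            {
                    mm->context.asid = allocate_new_asid();
                    if (mm == current->active_mm)
                            load_mmu_asid(mm->context.asid);
            }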

3) ``void flush_tlb_range(struct vm_area_struct *vma,
   unsigned long start, unsigned long end)``

        Here we are flushing a specific range of (user) virtual
        address translations from the TLB.  After running, this
        interface must make sure that any previous page table
        modifications for the address space 'vma->vm_mm' in the range
        'start' to 'end-1' will be visible to the cpu.  That is, after
        running, there will be no entries in the TLB for 'vma->vm_mm'
        for virtual addresses in the range 'start' to 'end-1'.

        The "vma" is the backing store being used for the region.
        Primarily, this is used for munmap() type operations.

        The interface is provided in hopes that the port can find
        a suitably efficient method for removing multiple page
        sized translations from the TLB, instead of having the kernel
        call flush_tlb_page (see below) for each entry which may be
        modified.
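
        For instance, a port with no ranged-flush instruction might
        loop page by page and fall back to a full address-space flush
        for large ranges.  FLUSH_RANGE_LIMIT below is an assumed,
        arch-tuned constant, not a real kernel symbol::

            /* Hedged sketch: per-page loop with a cutoff past
             * which a whole-mm flush is cheaper. */
            void flush_tlb_range(struct vm_area_struct *vma,
                                 unsigned long start, unsigned long end)
            {
                    unsigned long addr;

                    if (end - start > FLUSH_RANGE_LIMIT) {
                            flush_tlb_mm(vma->vm_mm);
                            return;
                    }
                    for (addr = start; addr < end; addr += PAGE_SIZE)
                            flush_tlb_page(vma, addr);
            }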

4) ``void flush_tlb_page(struct vm_area_struct *vma, unsigned long addr)``

        This time we need to remove the PAGE_SIZE sized translation
        from the TLB.  The 'vma' is the backing structure used by
        Linux to keep track of mmap'd regions for a process; the
        address space is available via vma->vm_mm.  Also, one may
        test (vma->vm_flags & VM_EXEC) to see if this region is
        executable (and thus could be in the 'instruction TLB' in
        split-tlb type setups).

        After running, this interface must make sure that any previous
        page table modification for address space 'vma->vm_mm' for
        user virtual address 'addr' will be visible to the cpu.  That
        is, after running, there will be no entries in the TLB for
        'vma->vm_mm' for virtual address 'addr'.

        This is used primarily during fault processing.
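
        On a machine with split I/D TLBs, the VM_EXEC test mentioned
        above might be used as in this sketch; dtlb_flush_one() and
        itlb_flush_one() are hypothetical per-entry primitives::

            /* Hedged sketch: always flush the D-TLB entry, and
             * the I-TLB entry only for executable mappings. */
            void flush_tlb_page(struct vm_area_struct *vma,
                                unsigned long addr)
            {
                    dtlb_flush_one(vma->vm_mm, addr);
                    if (vma->vm_flags & VM_EXEC)
                            itlb_flush_one(vma->vm_mm, addr);
            }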

5) ``void update_mmu_cache_range(struct vm_fault *vmf,
   struct vm_area_struct *vma, unsigned long address, pte_t *ptep,
   unsigned int nr)``

        At the end of every page fault, this routine is invoked to tell
        the architecture specific code that translations now exist
        in the software page tables for address space "vma->vm_mm"
        at virtual address "address" for "nr" consecutive pages.

        This routine is also invoked in various other places which pass
        a NULL "vmf".

        A port may use this information in any way it so chooses.
        For example, it could use this event to pre-load TLB
        translations for software managed TLB configurations.
        The sparc64 port currently does this.
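
        As a hedged sketch of that idea, a software-managed TLB port
        could preload the translations so the faulting access does not
        immediately miss again; tlb_preload() is a hypothetical
        primitive, and sparc64's real implementation is more involved::

            /* Sketch: preload the 'nr' freshly installed PTEs. */
            void update_mmu_cache_range(struct vm_fault *vmf,
                            struct vm_area_struct *vma,
                            unsigned long address, pte_t *ptep,
                            unsigned int nr)
            {
                    unsigned int i;

                    for (i = 0; i < nr; i++)
                            tlb_preload(vma->vm_mm,
                                        address + i * PAGE_SIZE,
                                        ptep[i]);
            }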

Next, we have the cache flushing interfaces.  In general, when Linux
is changing an existing virtual-->physical mapping to a new value,
the sequence will be in one of the following forms::

        1) flush_cache_mm(mm);
           change_all_page_tables_of(mm);
           flush_tlb_mm(mm);

        2) flush_cache_range(vma, start, end);
           change_range_of_page_tables(mm, start, end);
           flush_tlb_range(vma, start, end);

        3) flush_cache_page(vma, addr, pfn);
           set_pte(pte_pointer, new_pte_val);
           flush_tlb_page(vma, addr);

The cache level flush will always be first, because this allows
us to properly handle systems whose caches are strict and require
a virtual-->physical translation to exist for a virtual address
when that virtual address is flushed from the cache.  The HyperSparc
cpu is one such cpu.

The cache flushing routines below need only deal with cache flushing
to the extent that it is necessary for a particular cpu.  Mostly,
these routines must be implemented for cpus which have virtually
indexed caches which must be flushed when virtual-->physical
translations are changed or removed.  So, for example, the physically
indexed physically tagged caches of IA32 processors have no need to
implement these interfaces since the caches are fully synchronized
and have no dependency on translation information.

Here are the routines, one by one:

1) ``void flush_cache_mm(struct mm_struct *mm)``

        This interface flushes an entire user address space from
        the caches.  That is, after running, there will be no cache
        lines associated with 'mm'.

        This interface is used to handle whole address space
        page table operations such as what happens during exit and exec.

2) ``void flush_cache_dup_mm(struct mm_struct *mm)``

        This interface flushes an entire user address space from
        the caches.  That is, after running, there will be no cache
        lines associated with 'mm'.

        This interface is used to handle whole address space
        page table operations such as what happens during fork.

        This interface is separate from flush_cache_mm to allow some
        optimizations for VIPT caches.

3) ``void flush_cache_range(struct vm_area_struct *vma,
   unsigned long start, unsigned long end)``

        Here we are flushing a specific range of (user) virtual
        addresses from the cache.  After running, there will be no
        entries in the cache for 'vma->vm_mm' for virtual addresses in
        the range 'start' to 'end-1'.

        The "vma" is the backing store being used for the region.
        Primarily, this is used for munmap() type operations.

        The interface is provided in hopes that the port can find
        a suitably efficient method for removing multiple page
        sized regions from the cache, instead of having the kernel
        call flush_cache_page (see below) for each entry which may be
        modified.
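
        As with flush_tlb_range(), one plausible shape is a per-page
        loop with a whole-cache fallback once the range exceeds the
        cache itself; CACHE_SIZE and cache_flush_one_page() are
        assumptions of this sketch only::

            /* Hedged sketch: beyond the cache size, flushing
             * everything is cheaper than flushing page by page. */
            void flush_cache_range(struct vm_area_struct *vma,
                                   unsigned long start, unsigned long end)
            {
                    unsigned long addr;

                    if (end - start >= CACHE_SIZE) {
                            flush_cache_mm(vma->vm_mm);
                            return;
                    }
                    for (addr = start; addr < end; addr += PAGE_SIZE)
                            cache_flush_one_page(vma, addr);
            }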

4) ``void flush_cache_page(struct vm_area_struct *vma, unsigned long addr, unsigned long pfn)``

        This time we need to remove a PAGE_SIZE sized range
        from the cache.  The 'vma' is the backing structure used by
        Linux to keep track of mmap'd regions for a process; the
        address space is available via vma->vm_mm.  Also, one may
        test (vma->vm_flags & VM_EXEC) to see if this region is
        executable (and thus could be in the 'instruction cache' in
        "Harvard" type cache layouts).

        The 'pfn' indicates the physical page frame (shift this value
        left by PAGE_SHIFT to get the physical address) that 'addr'
        translates to.  It is this mapping which should be removed from
        the cache.

        After running, there will be no entries in the cache for
        'vma->vm_mm' for virtual address 'addr' which translates
        to 'pfn'.

        This is used primarily during fault processing.
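
        The pfn-to-physical-address relationship mentioned above is
        just a shift, for example::

            phys_addr_t paddr = (phys_addr_t)pfn << PAGE_SHIFT;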

5) ``void flush_cache_kmaps(void)``

        This routine need only be implemented if the platform utilizes
        highmem.  It will be called right before all of the kmaps
        are invalidated.

        After running, there will be no entries in the cache for
        the kernel virtual address range PKMAP_ADDR(0) to
        PKMAP_ADDR(LAST_PKMAP).

        This routine should be implemented in asm/highmem.h.

6) ``void flush_cache_vmap(unsigned long start, unsigned long end)``
   ``void flush_cache_vunmap(unsigned long start, unsigned long end)``

        Here in these two interfaces we are flushing a specific range
        of (kernel) virtual addresses from the cache.  After running,
        there will be no entries in the cache for the kernel address
        space for virtual addresses in the range 'start' to 'end-1'.

        The first of these two routines is invoked after vmap_range()
        has installed the page table entries.  The second is invoked
        before vunmap_range() deletes the page table entries.

There exists another whole class of cpu cache issues which currently
require a whole different set of interfaces to handle properly.
The biggest problem is that of virtual aliasing in the data cache
of a processor.

Is your port susceptible to virtual aliasing in its D-cache?
Well, if your D-cache is virtually indexed, is larger in size than
PAGE_SIZE, and does not prevent multiple cache lines for the same
physical address from existing at once, you have this problem.

If your D-cache has this problem, first define asm/shmparam.h SHMLBA
properly; it should essentially be the size of your virtually
addressed D-cache (or if the size is variable, the largest possible
size).  This setting will force the SYSv IPC layer to only allow user
processes to mmap shared memory at addresses which are a multiple of
this value.

.. note::

  This does not fix shared mmaps; check out the sparc64 port for
  one way to solve this (in particular SPARC_FLAG_MMAPSHARED).
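
Two virtual mappings of the same physical page alias harmlessly in
such a D-cache only when they land on the same cache "color", i.e.
when they agree modulo the cache's virtually indexed extent.  A
sketch of the color computation, assuming a hypothetical 16KB
virtually indexed D-cache and 4KB pages (so four page colors)::

        #define SHMLBA  0x4000  /* assumed: virtually indexed extent */

        /* Which group of cache indexes a virtual address maps to.
         * The SHMLBA alignment rule makes the colors of all SYSv
         * shm mappings of the same segment agree. */
        static unsigned long cache_color(unsigned long vaddr)
        {
                return (vaddr & (SHMLBA - 1)) >> PAGE_SHIFT;
        }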

Next, you have to solve the D-cache aliasing issue for all
other cases.  Please keep in mind the fact that, for a given page
mapped into some user address space, there is always at least one more
mapping, that of the kernel in its linear mapping starting at
PAGE_OFFSET.  So immediately, once the first user maps a given
physical page into its address space, by implication the D-cache
aliasing problem has the potential to exist since the kernel already
maps this page at its virtual address.

  ``void copy_user_page(void *to, void *from, unsigned long addr, struct page *page)``
  ``void clear_user_page(void *to, unsigned long addr, struct page *page)``

        These two routines store data in user anonymous or COW
        pages.  They allow a port to efficiently avoid D-cache alias
        issues between userspace and the kernel.

        For example, a port may temporarily map 'from' and 'to' to
        kernel virtual addresses during the copy.  The virtual address
        for these two pages is chosen in such a way that the kernel
        load/store instructions happen to virtual addresses which are
        of the same "color" as the user mapping of the page.  Sparc64,
        for example, uses this technique.

        The 'addr' parameter tells the virtual address where the
        user will ultimately have this page mapped, and the 'page'
        parameter gives a pointer to the struct page of the target.

        If D-cache aliasing is not an issue, these two routines may
        simply call memcpy/memset directly and do nothing more.
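
        A hedged sketch of that trivial, alias-free case follows;
        ports with aliasing D-caches would instead map the pages at a
        matching color first, which is not shown::

            /* Sketch: no D-cache aliasing, so 'addr' and 'page'
             * can be ignored entirely. */
            void copy_user_page(void *to, void *from,
                                unsigned long addr, struct page *page)
            {
                    memcpy(to, from, PAGE_SIZE);
            }

            void clear_user_page(void *to, unsigned long addr,
                                 struct page *page)
            {
                    memset(to, 0, PAGE_SIZE);
            }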

  ``void flush_dcache_folio(struct folio *folio)``

        This routine must be called when:

          a) the kernel did write to a page that is in the page cache
             and/or in high memory
          b) the kernel is about to read from a page cache page and user space
             shared/writable mappings of this page potentially exist.  Note
             that {get,pin}_user_pages{_fast} already call flush_dcache_folio
             on any page found in the user address space and thus driver
             code rarely needs to take this into account.

        .. note::

              This routine need only be called for page cache pages
              which can potentially ever be mapped into the address
              space of a user process.  So for example, VFS layer code
              handling vfs symlinks in the page cache need not call
              this interface at all.

        The phrase "kernel writes to a page cache page" means, specifically,
        that the kernel executes store instructions that dirty data in that
        page at the kernel virtual mapping of that page.  It is important to
        flush here to handle D-cache aliasing, to make sure these kernel stores
        are visible to user space mappings of that page.

        The corollary case is just as important: if there are users which have
        shared+writable mappings of this file, we must make sure that kernel
        reads of these pages will see the most recent stores done by the user.

        If D-cache aliasing is not an issue, this routine may simply be defined
        as a nop on that architecture.

        There is a bit set aside in folio->flags (PG_arch_1) as "architecture
        private".  The kernel guarantees that, for pagecache pages, it will
        clear this bit when such a page first enters the pagecache.

        This allows these interfaces to be implemented much more
        efficiently.  It allows one to "defer" (perhaps indefinitely) the
        actual flush if there are currently no user processes mapping this
        page.  See sparc64's flush_dcache_folio and update_mmu_cache_range
        implementations for an example of how to go about doing this.

        The idea is, first at flush_dcache_folio() time, if
        folio_flush_mapping() returns a mapping, and mapping_mapped() on that
        mapping returns false, just mark the architecture private page
        flag bit.  Later, in update_mmu_cache_range(), a check is made
        of this flag bit, and if set the flush is done and the flag bit
        is cleared.

        .. important::

              If you defer the flush, it is often important that the
              actual flush occur on the same CPU that performed the
              stores which dirtied the page.  Again, see sparc64 for
              examples of how to deal with this.
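
        Pulling these pieces together, the deferral might look roughly
        like the sketch below.  PG_dcache_dirty is one way a port
        could name its use of the PG_arch_1 bit, and
        __flush_dcache_folio() is a hypothetical low-level flush::

            /* Hedged sketch of the deferred D-cache flush. */
            void flush_dcache_folio(struct folio *folio)
            {
                    struct address_space *mapping;

                    mapping = folio_flush_mapping(folio);
                    if (mapping && !mapping_mapped(mapping)) {
                            /* No user mappings yet: defer. */
                            set_bit(PG_dcache_dirty, &folio->flags);
                            return;
                    }
                    __flush_dcache_folio(folio);
            }

            /* ...and later, in update_mmu_cache_range(): */
            if (test_and_clear_bit(PG_dcache_dirty, &folio->flags))
                    __flush_dcache_folio(folio);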

  ``void copy_to_user_page(struct vm_area_struct *vma, struct page *page,
  unsigned long user_vaddr, void *dst, void *src, int len)``
  ``void copy_from_user_page(struct vm_area_struct *vma, struct page *page,
  unsigned long user_vaddr, void *dst, void *src, int len)``

        When the kernel needs to copy arbitrary data in and out
        of arbitrary user pages (e.g. for ptrace()) it will use
        these two routines.

        Any necessary cache flushing or other coherency operations
        that need to occur should happen here.  If the processor's
        instruction cache does not snoop cpu stores, it is very
        likely that you will need to flush the instruction cache
        for copy_to_user_page().
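
        A sketch in the spirit of such a generic fallback: a plain
        copy, plus I-cache maintenance on the executable path, reusing
        flush_icache_range() described below::

            /* Hedged sketch: copy, then keep the I-cache honest
             * for executable mappings. */
            void copy_to_user_page(struct vm_area_struct *vma,
                            struct page *page, unsigned long user_vaddr,
                            void *dst, void *src, int len)
            {
                    memcpy(dst, src, len);
                    if (vma->vm_flags & VM_EXEC)
                            flush_icache_range((unsigned long)dst,
                                               (unsigned long)dst + len);
            }

            void copy_from_user_page(struct vm_area_struct *vma,
                            struct page *page, unsigned long user_vaddr,
                            void *dst, void *src, int len)
            {
                    memcpy(dst, src, len);
            }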

  ``void flush_anon_page(struct vm_area_struct *vma, struct page *page,
  unsigned long vmaddr)``

        When the kernel needs to access the contents of an anonymous
        page, it calls this function (currently only
        get_user_pages()).  Note: flush_dcache_folio() deliberately
        doesn't work for an anonymous page.  The default
        implementation is a nop (and should remain so for all coherent
        architectures).  For incoherent architectures, it should flush
        the cache of the page at vmaddr.

  ``void flush_icache_range(unsigned long start, unsigned long end)``

        When the kernel stores into addresses that it will execute
        out of (e.g. when loading modules), this function is called.

        If the icache does not snoop stores then this routine will need
        to flush it.
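
        A typical caller-side pattern, sketched with hypothetical
        'code', 'insn_bytes' and 'insn_len' variables::

            /* Make freshly written instructions fetchable. */
            memcpy(code, insn_bytes, insn_len);
            flush_icache_range((unsigned long)code,
                               (unsigned long)code + insn_len);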

  ``void flush_icache_page(struct vm_area_struct *vma, struct page *page)``

        All the functionality of flush_icache_page can be implemented in
        flush_dcache_folio and update_mmu_cache_range. In the future, the hope
        is to remove this interface completely.

The final category of APIs is for I/O to deliberately aliased address
ranges inside the kernel.  Such aliases are set up by use of the
vmap/vmalloc API.  Since kernel I/O goes via physical pages, the I/O
subsystem assumes that the user mapping and kernel offset mapping are
the only aliases.  This isn't true for vmap aliases, so anything in
the kernel trying to do I/O to vmap areas must manually manage
coherency.  It must do this by flushing the vmap range before doing
I/O and invalidating it after the I/O returns.

  ``void flush_kernel_vmap_range(void *vaddr, int size)``

       flushes the kernel cache for a given virtual address range in
       the vmap area.  This is to make sure that any data the kernel
       modified in the vmap range is made visible to the physical
       page.  The design is to make this area safe to perform I/O on.
       Note that this API does *not* also flush the offset map alias
       of the area.

  ``void invalidate_kernel_vmap_range(void *vaddr, int size)``

       invalidates the cache for a given virtual address range in
       the vmap area.  This prevents the processor from making the
       cache stale by speculatively reading data while the I/O is
       occurring to the physical pages.  This is only necessary for
       data reads into the vmap area.
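
Taken together, a driver doing a device-to-memory transfer through a
vmap alias would bracket the I/O as sketched below; do_io_read() is a
stand-in for the real I/O submission, not an actual interface::

        flush_kernel_vmap_range(vaddr, size);      /* write back dirty lines */
        do_io_read(vaddr, size);                   /* device fills the pages */
        invalidate_kernel_vmap_range(vaddr, size); /* drop stale cached data */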
