TOMOYO Linux Cross Reference
Linux/arch/arm64/kernel/cpufeature.c (linux-6.11.5)

// SPDX-License-Identifier: GPL-2.0-only
/*
 * Contains CPU feature definitions
 *
 * Copyright (C) 2015 ARM Ltd.
 *
 * A note for the weary kernel hacker: the code here is confusing and hard to
 * follow! That's partly because it's solving a nasty problem, but also because
 * there's a little bit of over-abstraction that tends to obscure what's going
 * on behind a maze of helper functions and macros.
 *
 * The basic problem is that hardware folks have started gluing together CPUs
 * with distinct architectural features; in some cases even creating SoCs where
 * user-visible instructions are available only on a subset of the available
 * cores. We try to address this by snapshotting the feature registers of the
 * boot CPU and comparing these with the feature registers of each secondary
 * CPU when bringing them up. If there is a mismatch, then we update the
 * snapshot state to indicate the lowest-common denominator of the feature,
 * known as the "safe" value. This snapshot state can be queried to view the
 * "sanitised" value of a feature register.
 *
 * The sanitised register values are used to decide which capabilities we
 * have in the system. These may be in the form of traditional "hwcaps"
 * advertised to userspace or internal "cpucaps" which are used to configure
 * things like alternative patching and static keys. While a feature mismatch
 * may result in a TAINT_CPU_OUT_OF_SPEC kernel taint, a capability mismatch
 * may prevent a CPU from being onlined at all.
 *
 * Some implementation details worth remembering:
 *
 * - Mismatched features are *always* sanitised to a "safe" value, which
 *   usually indicates that the feature is not supported.
 *
 * - A mismatched feature marked with FTR_STRICT will cause a "SANITY CHECK"
 *   warning when onlining an offending CPU and the kernel will be tainted
 *   with TAINT_CPU_OUT_OF_SPEC.
 *
 * - Features marked as FTR_VISIBLE have their sanitised value visible to
 *   userspace. FTR_VISIBLE features in registers that are only visible
 *   to EL0 by trapping *must* have a corresponding HWCAP so that late
 *   onlining of CPUs cannot lead to features disappearing at runtime.
 *
 * - A "feature" is typically a 4-bit register field. A "capability" is the
 *   high-level description derived from the sanitised field value.
 *
 * - Read the Arm ARM (DDI 0487F.a) section D13.1.3 ("Principles of the ID
 *   scheme for fields in ID registers") to understand when feature fields
 *   may be signed or unsigned (FTR_SIGNED and FTR_UNSIGNED accordingly).
 *
 * - KVM exposes its own view of the feature registers to guest operating
 *   systems regardless of FTR_VISIBLE. This is typically driven from the
 *   sanitised register values to allow virtual CPUs to be migrated between
 *   arbitrary physical CPUs, but some features not present on the host are
 *   also advertised and emulated. Look at sys_reg_descs[] for the gory
 *   details.
 *
 * - If the arm64_ftr_bits[] for a register has a missing field, then this
 *   field is treated as STRICT RES0, including for id_aa64dfr0_el1.
 *   This is stronger than FTR_HIDDEN and can be used to hide features from
 *   KVM guests.
 */
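The sanitisation scheme described above can be sketched outside the kernel. This is an illustrative model only, not the kernel's implementation: the helpers `ftr_value` and `sanitise_lower_safe` are names invented here. For a LOWER_SAFE field the safe value is the lowest value any CPU reports, with signed (FTR_SIGNED) fields compared after sign-extension.

```c
/*
 * Illustrative sketch of LOWER_SAFE sanitisation -- not kernel code.
 * A 4-bit ID register field is compared across CPUs and the snapshot
 * keeps the lowest ("safe") value; signed fields compare after
 * sign-extension, so 0xF means -1 and loses to 0.
 */
#include <stdint.h>

/* Sign-extend a 4-bit field value when the field is signed. */
static int64_t ftr_value(uint64_t field, int is_signed)
{
        if (is_signed && (field & 0x8))
                return (int64_t)(field | ~(uint64_t)0xF);
        return (int64_t)field;
}

/* Keep the lower of the current snapshot and a new CPU's value. */
static uint64_t sanitise_lower_safe(uint64_t cur, uint64_t new_val,
                                    int is_signed)
{
        return ftr_value(new_val, is_signed) < ftr_value(cur, is_signed) ?
               new_val : cur;
}
```

For example, an unsigned field reported as 2 by one CPU and 1 by another sanitises to 1, while a signed field reporting 0xF (-1) sanitises below 0.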
 62                                                   
#define pr_fmt(fmt) "CPU features: " fmt

#include <linux/bsearch.h>
#include <linux/cpumask.h>
#include <linux/crash_dump.h>
#include <linux/kstrtox.h>
#include <linux/sort.h>
#include <linux/stop_machine.h>
#include <linux/sysfs.h>
#include <linux/types.h>
#include <linux/minmax.h>
#include <linux/mm.h>
#include <linux/cpu.h>
#include <linux/kasan.h>
#include <linux/percpu.h>

#include <asm/cpu.h>
#include <asm/cpufeature.h>
#include <asm/cpu_ops.h>
#include <asm/fpsimd.h>
#include <asm/hwcap.h>
#include <asm/insn.h>
#include <asm/kvm_host.h>
#include <asm/mmu_context.h>
#include <asm/mte.h>
#include <asm/processor.h>
#include <asm/smp.h>
#include <asm/sysreg.h>
#include <asm/traps.h>
#include <asm/vectors.h>
#include <asm/virt.h>

/* Kernel representation of AT_HWCAP and AT_HWCAP2 */
static DECLARE_BITMAP(elf_hwcap, MAX_CPU_FEATURES) __read_mostly;

#ifdef CONFIG_COMPAT
#define COMPAT_ELF_HWCAP_DEFAULT        \
                                (COMPAT_HWCAP_HALF|COMPAT_HWCAP_THUMB|\
                                 COMPAT_HWCAP_FAST_MULT|COMPAT_HWCAP_EDSP|\
                                 COMPAT_HWCAP_TLS|COMPAT_HWCAP_IDIV|\
                                 COMPAT_HWCAP_LPAE)
unsigned int compat_elf_hwcap __read_mostly = COMPAT_ELF_HWCAP_DEFAULT;
unsigned int compat_elf_hwcap2 __read_mostly;
#endif

DECLARE_BITMAP(system_cpucaps, ARM64_NCAPS);
EXPORT_SYMBOL(system_cpucaps);
static struct arm64_cpu_capabilities const __ro_after_init *cpucap_ptrs[ARM64_NCAPS];

DECLARE_BITMAP(boot_cpucaps, ARM64_NCAPS);

bool arm64_use_ng_mappings = false;
EXPORT_SYMBOL(arm64_use_ng_mappings);

DEFINE_PER_CPU_READ_MOSTLY(const char *, this_cpu_vector) = vectors;

/*
 * Permit PER_LINUX32 and execve() of 32-bit binaries even if not all CPUs
 * support it?
 */
static bool __read_mostly allow_mismatched_32bit_el0;

/*
 * Static branch enabled only if allow_mismatched_32bit_el0 is set and we have
 * seen at least one CPU capable of 32-bit EL0.
 */
DEFINE_STATIC_KEY_FALSE(arm64_mismatched_32bit_el0);

/*
 * Mask of CPUs supporting 32-bit EL0.
 * Only valid if arm64_mismatched_32bit_el0 is enabled.
 */
static cpumask_var_t cpu_32bit_el0_mask __cpumask_var_read_mostly;

void dump_cpu_features(void)
{
        /* file-wide pr_fmt adds "CPU features: " prefix */
        pr_emerg("0x%*pb\n", ARM64_NCAPS, &system_cpucaps);
}
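The `%*pb` printk extension used by dump_cpu_features() renders a bitmap as a hex string. A hedged userspace analogue of that formatting is sketched below; `bitmap_to_hex` is a name invented here, and the sketch is simplified (the kernel's output also inserts commas between 32-bit chunks for wide bitmaps).

```c
/*
 * Userspace sketch of the "%*pb" bitmap format used by
 * dump_cpu_features() -- not kernel code, and simplified.
 */
#include <stdio.h>

/* Render the low nbits of a bitmap (array of unsigned long) as hex. */
static void bitmap_to_hex(char *buf, const unsigned long *map, int nbits)
{
        int nibbles = (nbits + 3) / 4;
        int pos = 0;

        for (int i = nibbles - 1; i >= 0; i--) {
                int word = (i * 4) / (int)(8 * sizeof(unsigned long));
                int shift = (i * 4) % (int)(8 * sizeof(unsigned long));

                pos += sprintf(buf + pos, "%lx",
                               (map[word] >> shift) & 0xFUL);
        }
        buf[pos] = '\0';
}
```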
142                                                   
#define __ARM64_MAX_POSITIVE(reg, field)                                \
                ((reg##_##field##_SIGNED ?                              \
                  BIT(reg##_##field##_WIDTH - 1) :                      \
                  BIT(reg##_##field##_WIDTH)) - 1)

#define __ARM64_MIN_NEGATIVE(reg, field)  BIT(reg##_##field##_WIDTH - 1)

#define __ARM64_CPUID_FIELDS(reg, field, min_value, max_value)          \
                .sys_reg = SYS_##reg,                                   \
                .field_pos = reg##_##field##_SHIFT,                     \
                .field_width = reg##_##field##_WIDTH,                   \
                .sign = reg##_##field##_SIGNED,                         \
                .min_field_value = min_value,                           \
                .max_field_value = max_value,

/*
 * ARM64_CPUID_FIELDS() encodes a field with a fixed minimum value and
 * an implicit maximum that depends on the signedness of the field.
 *
 * An unsigned field will be capped at all ones, while a signed field
 * will be limited to the positive half only.
 */
#define ARM64_CPUID_FIELDS(reg, field, min_value)                       \
        __ARM64_CPUID_FIELDS(reg, field,                                \
                             SYS_FIELD_VALUE(reg, field, min_value),    \
                             __ARM64_MAX_POSITIVE(reg, field))

/*
 * ARM64_CPUID_FIELDS_NEG() encodes a field with a matching range going
 * from an implicit minimal value to max_value. This should be used when
 * matching a non-implemented property.
 */
#define ARM64_CPUID_FIELDS_NEG(reg, field, max_value)                   \
        __ARM64_CPUID_FIELDS(reg, field,                                \
                             __ARM64_MIN_NEGATIVE(reg, field),          \
                             SYS_FIELD_VALUE(reg, field, max_value))

#define __ARM64_FTR_BITS(SIGNED, VISIBLE, STRICT, TYPE, SHIFT, WIDTH, SAFE_VAL) \
        {                                               \
                .sign = SIGNED,                         \
                .visible = VISIBLE,                     \
                .strict = STRICT,                       \
                .type = TYPE,                           \
                .shift = SHIFT,                         \
                .width = WIDTH,                         \
                .safe_val = SAFE_VAL,                   \
        }

/* Define a feature with unsigned values */
#define ARM64_FTR_BITS(VISIBLE, STRICT, TYPE, SHIFT, WIDTH, SAFE_VAL) \
        __ARM64_FTR_BITS(FTR_UNSIGNED, VISIBLE, STRICT, TYPE, SHIFT, WIDTH, SAFE_VAL)

/* Define a feature with a signed value */
#define S_ARM64_FTR_BITS(VISIBLE, STRICT, TYPE, SHIFT, WIDTH, SAFE_VAL) \
        __ARM64_FTR_BITS(FTR_SIGNED, VISIBLE, STRICT, TYPE, SHIFT, WIDTH, SAFE_VAL)

#define ARM64_FTR_END                                   \
        {                                               \
                .width = 0,                             \
        }
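Each descriptor built by these macros locates a field with a (shift, width, sign) triple. How such a field is pulled out of a 64-bit ID register value can be sketched as follows; `extract_field` is a hypothetical helper, not the kernel's.

```c
/*
 * Sketch of extracting a (shift, width, sign) field, as described by
 * arm64_ftr_bits-style descriptors, from a 64-bit ID register value.
 * Hypothetical helper -- not kernel code.
 */
#include <stdint.h>

static int64_t extract_field(uint64_t reg, unsigned int shift,
                             unsigned int width, int is_signed)
{
        uint64_t mask = (width >= 64) ? ~(uint64_t)0
                                      : ((uint64_t)1 << width) - 1;
        uint64_t val = (reg >> shift) & mask;

        /* FTR_SIGNED fields sign-extend from their top bit. */
        if (is_signed && width < 64 && (val & ((uint64_t)1 << (width - 1))))
                val |= ~mask;
        return (int64_t)val;
}
```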
203                                                   
static void cpu_enable_cnp(struct arm64_cpu_capabilities const *cap);

static bool __system_matches_cap(unsigned int n);

/*
 * NOTE: Any changes to the visibility of features should be kept in
 * sync with the documentation of the CPU feature register ABI.
 */
static const struct arm64_ftr_bits ftr_id_aa64isar0[] = {
        ARM64_FTR_BITS(FTR_VISIBLE, FTR_STRICT, FTR_LOWER_SAFE, ID_AA64ISAR0_EL1_RNDR_SHIFT, 4, 0),
        ARM64_FTR_BITS(FTR_HIDDEN, FTR_STRICT, FTR_LOWER_SAFE, ID_AA64ISAR0_EL1_TLB_SHIFT, 4, 0),
        ARM64_FTR_BITS(FTR_VISIBLE, FTR_STRICT, FTR_LOWER_SAFE, ID_AA64ISAR0_EL1_TS_SHIFT, 4, 0),
        ARM64_FTR_BITS(FTR_VISIBLE, FTR_STRICT, FTR_LOWER_SAFE, ID_AA64ISAR0_EL1_FHM_SHIFT, 4, 0),
        ARM64_FTR_BITS(FTR_VISIBLE, FTR_STRICT, FTR_LOWER_SAFE, ID_AA64ISAR0_EL1_DP_SHIFT, 4, 0),
        ARM64_FTR_BITS(FTR_VISIBLE, FTR_STRICT, FTR_LOWER_SAFE, ID_AA64ISAR0_EL1_SM4_SHIFT, 4, 0),
        ARM64_FTR_BITS(FTR_VISIBLE, FTR_STRICT, FTR_LOWER_SAFE, ID_AA64ISAR0_EL1_SM3_SHIFT, 4, 0),
        ARM64_FTR_BITS(FTR_VISIBLE, FTR_STRICT, FTR_LOWER_SAFE, ID_AA64ISAR0_EL1_SHA3_SHIFT, 4, 0),
        ARM64_FTR_BITS(FTR_VISIBLE, FTR_STRICT, FTR_LOWER_SAFE, ID_AA64ISAR0_EL1_RDM_SHIFT, 4, 0),
        ARM64_FTR_BITS(FTR_VISIBLE, FTR_STRICT, FTR_LOWER_SAFE, ID_AA64ISAR0_EL1_ATOMIC_SHIFT, 4, 0),
        ARM64_FTR_BITS(FTR_VISIBLE, FTR_STRICT, FTR_LOWER_SAFE, ID_AA64ISAR0_EL1_CRC32_SHIFT, 4, 0),
        ARM64_FTR_BITS(FTR_VISIBLE, FTR_STRICT, FTR_LOWER_SAFE, ID_AA64ISAR0_EL1_SHA2_SHIFT, 4, 0),
        ARM64_FTR_BITS(FTR_VISIBLE, FTR_STRICT, FTR_LOWER_SAFE, ID_AA64ISAR0_EL1_SHA1_SHIFT, 4, 0),
        ARM64_FTR_BITS(FTR_VISIBLE, FTR_STRICT, FTR_LOWER_SAFE, ID_AA64ISAR0_EL1_AES_SHIFT, 4, 0),
        ARM64_FTR_END,
};
229                                                   
230 static const struct arm64_ftr_bits ftr_id_aa64    
231         ARM64_FTR_BITS(FTR_VISIBLE, FTR_STRICT    
232         ARM64_FTR_BITS(FTR_VISIBLE, FTR_STRICT    
233         ARM64_FTR_BITS(FTR_VISIBLE, FTR_STRICT    
234         ARM64_FTR_BITS(FTR_HIDDEN, FTR_STRICT,    
235         ARM64_FTR_BITS(FTR_VISIBLE, FTR_STRICT    
236         ARM64_FTR_BITS(FTR_VISIBLE, FTR_STRICT    
237         ARM64_FTR_BITS(FTR_VISIBLE_IF_IS_ENABL    
238                        FTR_STRICT, FTR_LOWER_S    
239         ARM64_FTR_BITS(FTR_VISIBLE_IF_IS_ENABL    
240                        FTR_STRICT, FTR_LOWER_S    
241         ARM64_FTR_BITS(FTR_VISIBLE, FTR_STRICT    
242         ARM64_FTR_BITS(FTR_VISIBLE, FTR_STRICT    
243         ARM64_FTR_BITS(FTR_VISIBLE, FTR_STRICT    
244         ARM64_FTR_BITS(FTR_VISIBLE_IF_IS_ENABL    
245                        FTR_STRICT, FTR_EXACT,     
246         ARM64_FTR_BITS(FTR_VISIBLE_IF_IS_ENABL    
247                        FTR_STRICT, FTR_EXACT,     
248         ARM64_FTR_BITS(FTR_VISIBLE, FTR_STRICT    
249         ARM64_FTR_END,                            
250 };                                                
251                                                   
252 static const struct arm64_ftr_bits ftr_id_aa64    
253         ARM64_FTR_BITS(FTR_VISIBLE, FTR_NONSTR    
254         ARM64_FTR_BITS(FTR_VISIBLE, FTR_NONSTR    
255         ARM64_FTR_BITS(FTR_VISIBLE, FTR_NONSTR    
256         ARM64_FTR_BITS(FTR_HIDDEN, FTR_STRICT,    
257         ARM64_FTR_BITS(FTR_VISIBLE, FTR_STRICT    
258         ARM64_FTR_BITS(FTR_VISIBLE, FTR_STRICT    
259         ARM64_FTR_BITS(FTR_VISIBLE_IF_IS_ENABL    
260                        FTR_STRICT, FTR_EXACT,     
261         ARM64_FTR_BITS(FTR_VISIBLE_IF_IS_ENABL    
262                        FTR_STRICT, FTR_LOWER_S    
263         ARM64_FTR_BITS(FTR_VISIBLE, FTR_NONSTR    
264         ARM64_FTR_BITS(FTR_VISIBLE, FTR_NONSTR    
265         ARM64_FTR_END,                            
266 };                                                
267                                                   
268 static const struct arm64_ftr_bits ftr_id_aa64    
269         ARM64_FTR_BITS(FTR_VISIBLE, FTR_NONSTR    
270         ARM64_FTR_END,                            
271 };                                                
272                                                   
273 static const struct arm64_ftr_bits ftr_id_aa64    
274         ARM64_FTR_BITS(FTR_HIDDEN, FTR_NONSTRI    
275         ARM64_FTR_BITS(FTR_HIDDEN, FTR_NONSTRI    
276         ARM64_FTR_BITS(FTR_VISIBLE, FTR_STRICT    
277         ARM64_FTR_BITS(FTR_HIDDEN, FTR_NONSTRI    
278         ARM64_FTR_BITS(FTR_HIDDEN, FTR_STRICT,    
279         ARM64_FTR_BITS(FTR_HIDDEN, FTR_NONSTRI    
280         ARM64_FTR_BITS(FTR_VISIBLE_IF_IS_ENABL    
281                                    FTR_STRICT,    
282         ARM64_FTR_BITS(FTR_HIDDEN, FTR_STRICT,    
283         ARM64_FTR_BITS(FTR_HIDDEN, FTR_STRICT,    
284         S_ARM64_FTR_BITS(FTR_VISIBLE, FTR_STRI    
285         S_ARM64_FTR_BITS(FTR_VISIBLE, FTR_STRI    
286         ARM64_FTR_BITS(FTR_HIDDEN, FTR_NONSTRI    
287         ARM64_FTR_BITS(FTR_HIDDEN, FTR_NONSTRI    
288         ARM64_FTR_BITS(FTR_HIDDEN, FTR_NONSTRI    
289         ARM64_FTR_BITS(FTR_HIDDEN, FTR_NONSTRI    
290         ARM64_FTR_END,                            
291 };                                                
292                                                   
293 static const struct arm64_ftr_bits ftr_id_aa64    
294         ARM64_FTR_BITS(FTR_VISIBLE_IF_IS_ENABL    
295                        FTR_STRICT, FTR_LOWER_S    
296         ARM64_FTR_BITS(FTR_HIDDEN, FTR_STRICT,    
297         ARM64_FTR_BITS(FTR_HIDDEN, FTR_STRICT,    
298         ARM64_FTR_BITS(FTR_VISIBLE_IF_IS_ENABL    
299                        FTR_STRICT, FTR_LOWER_S    
300         ARM64_FTR_BITS(FTR_VISIBLE, FTR_NONSTR    
301         ARM64_FTR_BITS(FTR_VISIBLE_IF_IS_ENABL    
302                                     FTR_STRICT    
303         ARM64_FTR_END,                            
304 };                                                
305                                                   
306 static const struct arm64_ftr_bits ftr_id_aa64    
307         ARM64_FTR_BITS(FTR_VISIBLE, FTR_STRICT    
308         ARM64_FTR_END,                            
309 };                                                
310                                                   
311 static const struct arm64_ftr_bits ftr_id_aa64    
312         ARM64_FTR_BITS(FTR_VISIBLE_IF_IS_ENABL    
313                        FTR_STRICT, FTR_LOWER_S    
314         ARM64_FTR_BITS(FTR_VISIBLE_IF_IS_ENABL    
315                        FTR_STRICT, FTR_LOWER_S    
316         ARM64_FTR_BITS(FTR_VISIBLE_IF_IS_ENABL    
317                        FTR_STRICT, FTR_LOWER_S    
318         ARM64_FTR_BITS(FTR_VISIBLE_IF_IS_ENABL    
319                        FTR_STRICT, FTR_LOWER_S    
320         ARM64_FTR_BITS(FTR_VISIBLE_IF_IS_ENABL    
321                        FTR_STRICT, FTR_LOWER_S    
322         ARM64_FTR_BITS(FTR_VISIBLE_IF_IS_ENABL    
323                        FTR_STRICT, FTR_LOWER_S    
324         ARM64_FTR_BITS(FTR_VISIBLE_IF_IS_ENABL    
325                        FTR_STRICT, FTR_LOWER_S    
326         ARM64_FTR_BITS(FTR_VISIBLE_IF_IS_ENABL    
327                        FTR_STRICT, FTR_LOWER_S    
328         ARM64_FTR_BITS(FTR_VISIBLE_IF_IS_ENABL    
329                        FTR_STRICT, FTR_LOWER_S    
330         ARM64_FTR_BITS(FTR_VISIBLE_IF_IS_ENABL    
331                        FTR_STRICT, FTR_LOWER_S    
332         ARM64_FTR_END,                            
333 };                                                
334                                                   
335 static const struct arm64_ftr_bits ftr_id_aa64    
336         ARM64_FTR_BITS(FTR_VISIBLE_IF_IS_ENABL    
337                        FTR_STRICT, FTR_EXACT,     
338         ARM64_FTR_BITS(FTR_VISIBLE_IF_IS_ENABL    
339                        FTR_STRICT, FTR_EXACT,     
340         ARM64_FTR_BITS(FTR_VISIBLE_IF_IS_ENABL    
341                        FTR_STRICT, FTR_EXACT,     
342         ARM64_FTR_BITS(FTR_VISIBLE_IF_IS_ENABL    
343                        FTR_STRICT, FTR_EXACT,     
344         ARM64_FTR_BITS(FTR_VISIBLE_IF_IS_ENABL    
345                        FTR_STRICT, FTR_EXACT,     
346         ARM64_FTR_BITS(FTR_VISIBLE_IF_IS_ENABL    
347                        FTR_STRICT, FTR_EXACT,     
348         ARM64_FTR_BITS(FTR_VISIBLE_IF_IS_ENABL    
349                        FTR_STRICT, FTR_EXACT,     
350         ARM64_FTR_BITS(FTR_VISIBLE_IF_IS_ENABL    
351                        FTR_STRICT, FTR_EXACT,     
352         ARM64_FTR_BITS(FTR_VISIBLE_IF_IS_ENABL    
353                        FTR_STRICT, FTR_EXACT,     
354         ARM64_FTR_BITS(FTR_VISIBLE_IF_IS_ENABL    
355                        FTR_STRICT, FTR_EXACT,     
356         ARM64_FTR_BITS(FTR_VISIBLE_IF_IS_ENABL    
357                        FTR_STRICT, FTR_EXACT,     
358         ARM64_FTR_BITS(FTR_VISIBLE_IF_IS_ENABL    
359                        FTR_STRICT, FTR_EXACT,     
360         ARM64_FTR_BITS(FTR_VISIBLE_IF_IS_ENABL    
361                        FTR_STRICT, FTR_EXACT,     
362         ARM64_FTR_BITS(FTR_VISIBLE_IF_IS_ENABL    
363                        FTR_STRICT, FTR_EXACT,     
364         ARM64_FTR_BITS(FTR_VISIBLE_IF_IS_ENABL    
365                        FTR_STRICT, FTR_EXACT,     
366         ARM64_FTR_BITS(FTR_VISIBLE_IF_IS_ENABL    
367                        FTR_STRICT, FTR_EXACT,     
368         ARM64_FTR_BITS(FTR_VISIBLE_IF_IS_ENABL    
369                        FTR_STRICT, FTR_EXACT,     
370         ARM64_FTR_BITS(FTR_VISIBLE_IF_IS_ENABL    
371                        FTR_STRICT, FTR_EXACT,     
372         ARM64_FTR_END,                            
373 };                                                
374                                                   
375 static const struct arm64_ftr_bits ftr_id_aa64    
376         ARM64_FTR_BITS(FTR_VISIBLE, FTR_STRICT    
377         ARM64_FTR_BITS(FTR_VISIBLE, FTR_STRICT    
378         ARM64_FTR_BITS(FTR_VISIBLE, FTR_STRICT    
379         ARM64_FTR_BITS(FTR_VISIBLE, FTR_STRICT    
380         ARM64_FTR_BITS(FTR_VISIBLE, FTR_STRICT    
381         ARM64_FTR_BITS(FTR_VISIBLE, FTR_STRICT    
382         ARM64_FTR_END,                            
383 };                                                
384                                                   
static const struct arm64_ftr_bits ftr_id_aa64mmfr0[] = {
        ARM64_FTR_BITS(FTR_VISIBLE, FTR_STRICT, FTR_LOWER_SAFE, ID_AA64MMFR0_EL1_ECV_SHIFT, 4, 0),
        ARM64_FTR_BITS(FTR_HIDDEN, FTR_STRICT, FTR_LOWER_SAFE, ID_AA64MMFR0_EL1_FGT_SHIFT, 4, 0),
        ARM64_FTR_BITS(FTR_HIDDEN, FTR_STRICT, FTR_LOWER_SAFE, ID_AA64MMFR0_EL1_EXS_SHIFT, 4, 0),
        /*
         * Page size not being supported at Stage-2 is not fatal. You
         * just give up KVM if PAGE_SIZE isn't supported there. Go fix
         * your favourite nesting hypervisor.
         *
         * There is a small corner case where the hypervisor explicitly
         * advertises a given granule size at Stage-2 (value 2) on some
         * vCPUs, and uses the fallback to Stage-1 (value 0) for other
         * vCPUs. Although this is not forbidden by the architecture, it
         * indicates that the hypervisor is being silly (or buggy).
         *
         * We make no effort to cope with this and pretend that if these
         * fields are inconsistent across vCPUs, then it isn't worth
         * trying to bring KVM up.
         */
        ARM64_FTR_BITS(FTR_HIDDEN, FTR_NONSTRICT, FTR_LOWER_SAFE, ID_AA64MMFR0_EL1_TGRAN4_2_SHIFT, 4, 1),
        ARM64_FTR_BITS(FTR_HIDDEN, FTR_NONSTRICT, FTR_LOWER_SAFE, ID_AA64MMFR0_EL1_TGRAN64_2_SHIFT, 4, 1),
        ARM64_FTR_BITS(FTR_HIDDEN, FTR_NONSTRICT, FTR_LOWER_SAFE, ID_AA64MMFR0_EL1_TGRAN16_2_SHIFT, 4, 1),
        /*
         * We already refuse to boot CPUs that don't support our configured
         * page size, so we can only detect mismatches for a page size other
         * than the one we're currently using. Unfortunately, SoCs like this
         * exist in the wild so, even though we don't like it, we'll have to go
         * along with it and treat them as non-strict.
         */
        S_ARM64_FTR_BITS(FTR_HIDDEN, FTR_NONSTRICT, FTR_EXACT, ID_AA64MMFR0_EL1_TGRAN4_SHIFT, 4, ID_AA64MMFR0_EL1_TGRAN4_NI),
        S_ARM64_FTR_BITS(FTR_HIDDEN, FTR_NONSTRICT, FTR_EXACT, ID_AA64MMFR0_EL1_TGRAN64_SHIFT, 4, ID_AA64MMFR0_EL1_TGRAN64_NI),
        ARM64_FTR_BITS(FTR_HIDDEN, FTR_NONSTRICT, FTR_EXACT, ID_AA64MMFR0_EL1_TGRAN16_SHIFT, 4, ID_AA64MMFR0_EL1_TGRAN16_NI),

        ARM64_FTR_BITS(FTR_HIDDEN, FTR_STRICT, FTR_LOWER_SAFE, ID_AA64MMFR0_EL1_BIGENDEL0_SHIFT, 4, 0),
        /* Linux shouldn't care about secure memory */
        ARM64_FTR_BITS(FTR_HIDDEN, FTR_NONSTRICT, FTR_LOWER_SAFE, ID_AA64MMFR0_EL1_SNSMEM_SHIFT, 4, 0),
        ARM64_FTR_BITS(FTR_HIDDEN, FTR_STRICT, FTR_LOWER_SAFE, ID_AA64MMFR0_EL1_BIGEND_SHIFT, 4, 0),
        ARM64_FTR_BITS(FTR_HIDDEN, FTR_STRICT, FTR_LOWER_SAFE, ID_AA64MMFR0_EL1_ASIDBITS_SHIFT, 4, 0),
        /*
         * Differing PARange is fine as long as all peripherals and memory are
         * mapped within the minimum PARange of all CPUs
         */
        ARM64_FTR_BITS(FTR_HIDDEN, FTR_NONSTRICT, FTR_LOWER_SAFE, ID_AA64MMFR0_EL1_PARANGE_SHIFT, 4, 0),
        ARM64_FTR_END,
};
430                                                   
431 static const struct arm64_ftr_bits ftr_id_aa64    
432         ARM64_FTR_BITS(FTR_HIDDEN, FTR_STRICT,    
433         ARM64_FTR_BITS(FTR_HIDDEN, FTR_NONSTRI    
434         ARM64_FTR_BITS(FTR_VISIBLE, FTR_STRICT    
435         ARM64_FTR_BITS(FTR_HIDDEN, FTR_STRICT,    
436         ARM64_FTR_BITS(FTR_HIDDEN, FTR_STRICT,    
437         ARM64_FTR_BITS(FTR_HIDDEN, FTR_STRICT,    
438         ARM64_FTR_BITS(FTR_HIDDEN, FTR_STRICT,    
439         ARM64_FTR_BITS(FTR_HIDDEN, FTR_STRICT,    
440         ARM64_FTR_BITS(FTR_HIDDEN, FTR_STRICT,    
441         ARM64_FTR_BITS(FTR_HIDDEN, FTR_STRICT,    
442         ARM64_FTR_BITS(FTR_HIDDEN, FTR_STRICT,    
443         ARM64_FTR_BITS(FTR_HIDDEN, FTR_STRICT,    
444         ARM64_FTR_BITS(FTR_HIDDEN, FTR_STRICT,    
445         ARM64_FTR_BITS(FTR_HIDDEN, FTR_STRICT,    
446         ARM64_FTR_END,                            
447 };                                                
448                                                   
449 static const struct arm64_ftr_bits ftr_id_aa64    
450         ARM64_FTR_BITS(FTR_HIDDEN, FTR_NONSTRI    
451         ARM64_FTR_BITS(FTR_HIDDEN, FTR_STRICT,    
452         ARM64_FTR_BITS(FTR_HIDDEN, FTR_STRICT,    
453         ARM64_FTR_BITS(FTR_HIDDEN, FTR_STRICT,    
454         ARM64_FTR_BITS(FTR_HIDDEN, FTR_STRICT,    
455         ARM64_FTR_BITS(FTR_HIDDEN, FTR_STRICT,    
456         ARM64_FTR_BITS(FTR_VISIBLE, FTR_STRICT    
457         ARM64_FTR_BITS(FTR_HIDDEN, FTR_STRICT,    
458         ARM64_FTR_BITS(FTR_HIDDEN, FTR_STRICT,    
459         ARM64_FTR_BITS(FTR_HIDDEN, FTR_STRICT,    
460         ARM64_FTR_BITS(FTR_HIDDEN, FTR_STRICT,    
461         ARM64_FTR_BITS(FTR_HIDDEN, FTR_NONSTRI    
462         ARM64_FTR_BITS(FTR_HIDDEN, FTR_STRICT,    
463         ARM64_FTR_BITS(FTR_HIDDEN, FTR_STRICT,    
464         ARM64_FTR_BITS(FTR_HIDDEN, FTR_STRICT,    
465         ARM64_FTR_END,                            
466 };                                                
467                                                   
468 static const struct arm64_ftr_bits ftr_id_aa64    
469         ARM64_FTR_BITS(FTR_HIDDEN, FTR_NONSTRI    
470         ARM64_FTR_BITS(FTR_HIDDEN, FTR_NONSTRI    
471         ARM64_FTR_END,                            
472 };                                                
473                                                   
474 static const struct arm64_ftr_bits ftr_id_aa64    
475         S_ARM64_FTR_BITS(FTR_HIDDEN, FTR_STRIC    
476         ARM64_FTR_END,                            
477 };                                                
478                                                   
static const struct arm64_ftr_bits ftr_ctr[] = {
        ARM64_FTR_BITS(FTR_VISIBLE, FTR_STRICT, FTR_EXACT, 31, 1, 1), /* RES1 */
        ARM64_FTR_BITS(FTR_VISIBLE, FTR_STRICT, FTR_LOWER_SAFE, CTR_EL0_DIC_SHIFT, 1, 1),
        ARM64_FTR_BITS(FTR_VISIBLE, FTR_STRICT, FTR_LOWER_SAFE, CTR_EL0_IDC_SHIFT, 1, 1),
        ARM64_FTR_BITS(FTR_VISIBLE, FTR_STRICT, FTR_HIGHER_OR_ZERO_SAFE, CTR_EL0_CWG_SHIFT, 4, 0),
        ARM64_FTR_BITS(FTR_VISIBLE, FTR_STRICT, FTR_HIGHER_OR_ZERO_SAFE, CTR_EL0_ERG_SHIFT, 4, 0),
        ARM64_FTR_BITS(FTR_VISIBLE, FTR_STRICT, FTR_LOWER_SAFE, CTR_EL0_DminLine_SHIFT, 4, 1),
        /*
         * Linux can handle differing I-cache policies. Userspace JITs will
         * make use of *minLine.
         * If we have differing I-cache policies, report it as the weakest - VIPT.
         */
        ARM64_FTR_BITS(FTR_VISIBLE, FTR_NONSTRICT, FTR_EXACT, CTR_EL0_L1Ip_SHIFT, 2, CTR_EL0_L1Ip_VIPT),	/* L1Ip */
        ARM64_FTR_BITS(FTR_VISIBLE, FTR_STRICT, FTR_LOWER_SAFE, CTR_EL0_IminLine_SHIFT, 4, 0),
        ARM64_FTR_END,
};

static struct arm64_ftr_override __ro_after_init no_override;

struct arm64_ftr_reg arm64_ftr_reg_ctrel0 = {
        .name           = "SYS_CTR_EL0",
        .ftr_bits       = ftr_ctr,
        .override       = &no_override,
};
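The *minLine fields mentioned in the comment above encode log2 of the smallest cache line in 4-byte words, so the line size in bytes is 4 shifted left by the field value; LOWER_SAFE sanitisation then means userspace sees the smallest line size of any CPU in the system. A sketch of that decoding (the helper `minline_to_bytes` is a name invented here, not kernel API):

```c
/*
 * Sketch: CTR_EL0.{I,D}minLine encode log2(words) of the smallest cache
 * line, where a word is 4 bytes, so the line size is 4 << field bytes.
 * Illustrative helper only -- not kernel code.
 */
static unsigned int minline_to_bytes(unsigned int minline_field)
{
        return 4U << minline_field;     /* field counts 4-byte words, log2 */
}
```

For instance, a DminLine value of 4 corresponds to a 64-byte data cache line.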
503                                                   
504 static const struct arm64_ftr_bits ftr_id_mmfr    
505         S_ARM64_FTR_BITS(FTR_HIDDEN, FTR_STRIC    
506         ARM64_FTR_BITS(FTR_HIDDEN, FTR_STRICT,    
507         ARM64_FTR_BITS(FTR_HIDDEN, FTR_NONSTRI    
508         ARM64_FTR_BITS(FTR_HIDDEN, FTR_STRICT,    
509         ARM64_FTR_BITS(FTR_HIDDEN, FTR_STRICT,    
510         S_ARM64_FTR_BITS(FTR_HIDDEN, FTR_STRIC    
511         ARM64_FTR_BITS(FTR_HIDDEN, FTR_STRICT,    
512         ARM64_FTR_BITS(FTR_HIDDEN, FTR_STRICT,    
513         ARM64_FTR_END,                            
514 };                                                
515                                                   
516 static const struct arm64_ftr_bits ftr_id_aa64    
517         S_ARM64_FTR_BITS(FTR_HIDDEN, FTR_STRIC    
518         ARM64_FTR_BITS(FTR_HIDDEN, FTR_NONSTRI    
519         ARM64_FTR_BITS(FTR_HIDDEN, FTR_STRICT,    
520         ARM64_FTR_BITS(FTR_HIDDEN, FTR_STRICT,    
521         ARM64_FTR_BITS(FTR_HIDDEN, FTR_STRICT,    
        /*
         * We can instantiate multiple PMU instances with different levels
         * of support.
         */
526         S_ARM64_FTR_BITS(FTR_HIDDEN, FTR_NONST    
527         ARM64_FTR_BITS(FTR_HIDDEN, FTR_STRICT,    
528         ARM64_FTR_END,                            
529 };                                                
530                                                   
531 static const struct arm64_ftr_bits ftr_mvfr0[]    
532         ARM64_FTR_BITS(FTR_HIDDEN, FTR_STRICT,    
533         ARM64_FTR_BITS(FTR_HIDDEN, FTR_STRICT,    
534         ARM64_FTR_BITS(FTR_HIDDEN, FTR_STRICT,    
535         ARM64_FTR_BITS(FTR_HIDDEN, FTR_STRICT,    
536         ARM64_FTR_BITS(FTR_HIDDEN, FTR_STRICT,    
537         ARM64_FTR_BITS(FTR_VISIBLE, FTR_STRICT    
538         ARM64_FTR_BITS(FTR_HIDDEN, FTR_STRICT,    
539         ARM64_FTR_BITS(FTR_HIDDEN, FTR_STRICT,    
540         ARM64_FTR_END,                            
541 };                                                
542                                                   
543 static const struct arm64_ftr_bits ftr_mvfr1[]    
544         ARM64_FTR_BITS(FTR_VISIBLE, FTR_STRICT    
545         ARM64_FTR_BITS(FTR_VISIBLE, FTR_STRICT    
546         ARM64_FTR_BITS(FTR_VISIBLE, FTR_STRICT    
547         ARM64_FTR_BITS(FTR_VISIBLE, FTR_STRICT    
548         ARM64_FTR_BITS(FTR_VISIBLE, FTR_STRICT    
549         ARM64_FTR_BITS(FTR_VISIBLE, FTR_STRICT    
550         ARM64_FTR_BITS(FTR_HIDDEN, FTR_STRICT,    
551         ARM64_FTR_BITS(FTR_HIDDEN, FTR_STRICT,    
552         ARM64_FTR_END,                            
553 };                                                
554                                                   
555 static const struct arm64_ftr_bits ftr_mvfr2[]    
556         ARM64_FTR_BITS(FTR_HIDDEN, FTR_STRICT,    
557         ARM64_FTR_BITS(FTR_HIDDEN, FTR_STRICT,    
558         ARM64_FTR_END,                            
559 };                                                
560                                                   
561 static const struct arm64_ftr_bits ftr_dczid[]    
562         ARM64_FTR_BITS(FTR_VISIBLE, FTR_STRICT    
563         ARM64_FTR_BITS(FTR_VISIBLE, FTR_STRICT    
564         ARM64_FTR_END,                            
565 };                                                
566                                                   
567 static const struct arm64_ftr_bits ftr_gmid[]     
568         ARM64_FTR_BITS(FTR_HIDDEN, FTR_STRICT,    
569         ARM64_FTR_END,                            
570 };                                                
571                                                   
572 static const struct arm64_ftr_bits ftr_id_isar    
573         ARM64_FTR_BITS(FTR_HIDDEN, FTR_STRICT,    
574         ARM64_FTR_BITS(FTR_HIDDEN, FTR_STRICT,    
575         ARM64_FTR_BITS(FTR_HIDDEN, FTR_STRICT,    
576         ARM64_FTR_BITS(FTR_HIDDEN, FTR_STRICT,    
577         ARM64_FTR_BITS(FTR_HIDDEN, FTR_STRICT,    
578         ARM64_FTR_BITS(FTR_HIDDEN, FTR_STRICT,    
579         ARM64_FTR_BITS(FTR_HIDDEN, FTR_STRICT,    
580         ARM64_FTR_END,                            
581 };                                                
582                                                   
583 static const struct arm64_ftr_bits ftr_id_isar    
584         ARM64_FTR_BITS(FTR_HIDDEN, FTR_STRICT,    
585         ARM64_FTR_BITS(FTR_VISIBLE, FTR_STRICT    
586         ARM64_FTR_BITS(FTR_VISIBLE, FTR_STRICT    
587         ARM64_FTR_BITS(FTR_VISIBLE, FTR_STRICT    
588         ARM64_FTR_BITS(FTR_VISIBLE, FTR_STRICT    
589         ARM64_FTR_BITS(FTR_HIDDEN, FTR_STRICT,    
590         ARM64_FTR_END,                            
591 };                                                
592                                                   
593 static const struct arm64_ftr_bits ftr_id_mmfr    
594         ARM64_FTR_BITS(FTR_HIDDEN, FTR_STRICT,    
595         ARM64_FTR_BITS(FTR_HIDDEN, FTR_STRICT,    
596         ARM64_FTR_BITS(FTR_HIDDEN, FTR_STRICT,    
597         ARM64_FTR_BITS(FTR_HIDDEN, FTR_STRICT,    
598         ARM64_FTR_BITS(FTR_HIDDEN, FTR_STRICT,    
599         ARM64_FTR_BITS(FTR_HIDDEN, FTR_STRICT,    
600         ARM64_FTR_BITS(FTR_HIDDEN, FTR_STRICT,    
601                                                   
602         /*                                        
603          * SpecSEI = 1 indicates that the PE might generate an SError on an
604          * external abort on a speculative read. It is safer to assume that
605          * an SError might be generated than that it will not be, so this is
606          * classified as FTR_HIGHER_SAFE.
607          */                                       
608         ARM64_FTR_BITS(FTR_HIDDEN, FTR_STRICT,    
609         ARM64_FTR_END,                            
610 };                                                
611                                                   
612 static const struct arm64_ftr_bits ftr_id_isar    
613         ARM64_FTR_BITS(FTR_HIDDEN, FTR_STRICT,    
614         ARM64_FTR_BITS(FTR_HIDDEN, FTR_STRICT,    
615         ARM64_FTR_BITS(FTR_HIDDEN, FTR_STRICT,    
616         ARM64_FTR_BITS(FTR_HIDDEN, FTR_STRICT,    
617         ARM64_FTR_BITS(FTR_HIDDEN, FTR_STRICT,    
618         ARM64_FTR_BITS(FTR_HIDDEN, FTR_STRICT,    
619         ARM64_FTR_BITS(FTR_HIDDEN, FTR_STRICT,    
620         ARM64_FTR_BITS(FTR_HIDDEN, FTR_STRICT,    
621         ARM64_FTR_END,                            
622 };                                                
623                                                   
624 static const struct arm64_ftr_bits ftr_id_mmfr    
625         ARM64_FTR_BITS(FTR_HIDDEN, FTR_STRICT,    
626         ARM64_FTR_END,                            
627 };                                                
628                                                   
629 static const struct arm64_ftr_bits ftr_id_isar    
630         ARM64_FTR_BITS(FTR_VISIBLE, FTR_STRICT    
631         ARM64_FTR_BITS(FTR_VISIBLE, FTR_STRICT    
632         ARM64_FTR_BITS(FTR_HIDDEN, FTR_STRICT,    
633         ARM64_FTR_BITS(FTR_VISIBLE, FTR_STRICT    
634         ARM64_FTR_BITS(FTR_VISIBLE, FTR_STRICT    
635         ARM64_FTR_BITS(FTR_VISIBLE, FTR_STRICT    
636         ARM64_FTR_BITS(FTR_HIDDEN, FTR_STRICT,    
637         ARM64_FTR_END,                            
638 };                                                
639                                                   
640 static const struct arm64_ftr_bits ftr_id_pfr0    
641         ARM64_FTR_BITS(FTR_HIDDEN, FTR_STRICT,    
642         ARM64_FTR_BITS(FTR_HIDDEN, FTR_NONSTRI    
643         ARM64_FTR_BITS(FTR_HIDDEN, FTR_STRICT,    
644         ARM64_FTR_BITS(FTR_HIDDEN, FTR_STRICT,    
645         ARM64_FTR_BITS(FTR_HIDDEN, FTR_STRICT,    
646         ARM64_FTR_BITS(FTR_HIDDEN, FTR_STRICT,    
647         ARM64_FTR_END,                            
648 };                                                
649                                                   
650 static const struct arm64_ftr_bits ftr_id_pfr1    
651         ARM64_FTR_BITS(FTR_HIDDEN, FTR_STRICT,    
652         ARM64_FTR_BITS(FTR_HIDDEN, FTR_STRICT,    
653         ARM64_FTR_BITS(FTR_HIDDEN, FTR_STRICT,    
654         ARM64_FTR_BITS(FTR_HIDDEN, FTR_STRICT,    
655         ARM64_FTR_BITS(FTR_HIDDEN, FTR_STRICT,    
656         ARM64_FTR_BITS(FTR_HIDDEN, FTR_STRICT,    
657         ARM64_FTR_BITS(FTR_HIDDEN, FTR_STRICT,    
658         ARM64_FTR_BITS(FTR_HIDDEN, FTR_STRICT,    
659         ARM64_FTR_END,                            
660 };                                                
661                                                   
662 static const struct arm64_ftr_bits ftr_id_pfr2    
663         ARM64_FTR_BITS(FTR_VISIBLE, FTR_NONSTR    
664         ARM64_FTR_BITS(FTR_HIDDEN, FTR_NONSTRI    
665         ARM64_FTR_END,                            
666 };                                                
667                                                   
668 static const struct arm64_ftr_bits ftr_id_dfr0    
669         /* [31:28] TraceFilt */                   
670         S_ARM64_FTR_BITS(FTR_HIDDEN, FTR_NONST    
671         ARM64_FTR_BITS(FTR_HIDDEN, FTR_STRICT,    
672         ARM64_FTR_BITS(FTR_HIDDEN, FTR_STRICT,    
673         ARM64_FTR_BITS(FTR_HIDDEN, FTR_STRICT,    
674         ARM64_FTR_BITS(FTR_HIDDEN, FTR_STRICT,    
675         ARM64_FTR_BITS(FTR_HIDDEN, FTR_STRICT,    
676         ARM64_FTR_BITS(FTR_HIDDEN, FTR_STRICT,    
677         ARM64_FTR_END,                            
678 };                                                
679                                                   
680 static const struct arm64_ftr_bits ftr_id_dfr1    
681         S_ARM64_FTR_BITS(FTR_HIDDEN, FTR_STRIC    
682         ARM64_FTR_END,                            
683 };                                                
684                                                   
685 /*                                                
686  * Common ftr bits for a 32bit register with all hidden, strict
687  * attributes, with 4bit feature fields and a default safe value of
688  * 0. Covers the following 32bit registers:       
689  * id_isar[1-3], id_mmfr[1-3]                     
690  */                                               
691 static const struct arm64_ftr_bits ftr_generic    
692         ARM64_FTR_BITS(FTR_HIDDEN, FTR_STRICT,    
693         ARM64_FTR_BITS(FTR_HIDDEN, FTR_STRICT,    
694         ARM64_FTR_BITS(FTR_HIDDEN, FTR_STRICT,    
695         ARM64_FTR_BITS(FTR_HIDDEN, FTR_STRICT,    
696         ARM64_FTR_BITS(FTR_HIDDEN, FTR_STRICT,    
697         ARM64_FTR_BITS(FTR_HIDDEN, FTR_STRICT,    
698         ARM64_FTR_BITS(FTR_HIDDEN, FTR_STRICT,    
699         ARM64_FTR_BITS(FTR_HIDDEN, FTR_STRICT,    
700         ARM64_FTR_END,                            
701 };                                                
702                                                   
703 /* Table for a single 32bit feature value */      
704 static const struct arm64_ftr_bits ftr_single3    
705         ARM64_FTR_BITS(FTR_HIDDEN, FTR_STRICT,    
706         ARM64_FTR_END,                            
707 };                                                
708                                                   
709 static const struct arm64_ftr_bits ftr_raz[] =    
710         ARM64_FTR_END,                            
711 };                                                
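The tables above mix ARM64_FTR_BITS and S_ARM64_FTR_BITS entries; the S_ prefix marks a field whose value is interpreted as signed, so that a value such as 0xF can compare below zero (commonly meaning "not implemented"). A minimal userspace sketch of the extraction, with hypothetical names and explicit shift/width arguments instead of the kernel's ftr_bits descriptor:

```c
#include <assert.h>
#include <stdint.h>

/*
 * Extract a 'width'-bit field at 'shift' from 'reg' (sketch of what
 * arm64_ftr_value() does, driven by the sign/shift/width of the entry).
 */
static int64_t ftr_value(int sign, unsigned int shift, unsigned int width,
			 uint64_t reg)
{
	uint64_t val = (reg >> shift) & ((1ULL << width) - 1);

	if (sign)	/* S_ARM64_FTR_BITS: sign-extend from bit (width - 1) */
		return ((int64_t)(val << (64 - width))) >> (64 - width);
	return (int64_t)val;
}
```

With a 4-bit field at shift 4, the raw value 0xF reads as 15 unsigned but as -1 signed, which is why signed fields sort "not implemented" below every implemented level.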
712                                                   
713 #define __ARM64_FTR_REG_OVERRIDE(id_str, id, t    
714                 .sys_id = id,                     
715                 .reg =  &(struct arm64_ftr_reg    
716                         .name = id_str,           
717                         .override = (ovr),        
718                         .ftr_bits = &((table)[    
719         }}                                        
720                                                   
721 #define ARM64_FTR_REG_OVERRIDE(id, table, ovr)    
722         __ARM64_FTR_REG_OVERRIDE(#id, id, tabl    
723                                                   
724 #define ARM64_FTR_REG(id, table)                  
725         __ARM64_FTR_REG_OVERRIDE(#id, id, tabl    
726                                                   
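ARM64_FTR_REG(id, table) stringifies the register name with #id and uses a C99 compound literal so each table entry carries a pointer to its own anonymous arm64_ftr_reg with static storage duration. A toy reconstruction of the pattern, with the struct fields pared down (names here are illustrative, not the kernel's):

```c
#include <assert.h>
#include <string.h>

/* Simplified stand-ins for struct arm64_ftr_reg / struct __ftr_reg_entry. */
struct ftr_reg {
	const char *name;
	int sys_id;
};

struct reg_entry {
	int sys_id;
	struct ftr_reg *reg;
};

/*
 * Same trick as __ARM64_FTR_REG_OVERRIDE: the compound literal at file
 * scope has static storage duration, so taking its address is a valid
 * static initializer, and #id provides the printable name for free.
 */
#define FTR_REG(id) \
	{ .sys_id = (id), .reg = &(struct ftr_reg){ .name = #id, .sys_id = (id) } }

enum { REG_A = 10, REG_B = 20 };

static struct reg_entry regs[] = {
	FTR_REG(REG_A),
	FTR_REG(REG_B),
};
```

The design choice matters for the override machinery: because each entry owns a distinct mutable struct behind a const table, per-register state (overrides, sanitised values) can be updated without the table itself being writable.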
727 struct arm64_ftr_override id_aa64mmfr0_overrid    
728 struct arm64_ftr_override id_aa64mmfr1_overrid    
729 struct arm64_ftr_override id_aa64mmfr2_overrid    
730 struct arm64_ftr_override id_aa64pfr0_override    
731 struct arm64_ftr_override id_aa64pfr1_override    
732 struct arm64_ftr_override id_aa64zfr0_override    
733 struct arm64_ftr_override id_aa64smfr0_overrid    
734 struct arm64_ftr_override id_aa64isar1_overrid    
735 struct arm64_ftr_override id_aa64isar2_overrid    
736                                                   
737 struct arm64_ftr_override arm64_sw_feature_ove    
738                                                   
739 static const struct __ftr_reg_entry {             
740         u32                     sys_id;           
741         struct arm64_ftr_reg    *reg;             
742 } arm64_ftr_regs[] = {                            
743                                                   
744         /* Op1 = 0, CRn = 0, CRm = 1 */           
745         ARM64_FTR_REG(SYS_ID_PFR0_EL1, ftr_id_    
746         ARM64_FTR_REG(SYS_ID_PFR1_EL1, ftr_id_    
747         ARM64_FTR_REG(SYS_ID_DFR0_EL1, ftr_id_    
748         ARM64_FTR_REG(SYS_ID_MMFR0_EL1, ftr_id    
749         ARM64_FTR_REG(SYS_ID_MMFR1_EL1, ftr_ge    
750         ARM64_FTR_REG(SYS_ID_MMFR2_EL1, ftr_ge    
751         ARM64_FTR_REG(SYS_ID_MMFR3_EL1, ftr_ge    
752                                                   
753         /* Op1 = 0, CRn = 0, CRm = 2 */           
754         ARM64_FTR_REG(SYS_ID_ISAR0_EL1, ftr_id    
755         ARM64_FTR_REG(SYS_ID_ISAR1_EL1, ftr_ge    
756         ARM64_FTR_REG(SYS_ID_ISAR2_EL1, ftr_ge    
757         ARM64_FTR_REG(SYS_ID_ISAR3_EL1, ftr_ge    
758         ARM64_FTR_REG(SYS_ID_ISAR4_EL1, ftr_id    
759         ARM64_FTR_REG(SYS_ID_ISAR5_EL1, ftr_id    
760         ARM64_FTR_REG(SYS_ID_MMFR4_EL1, ftr_id    
761         ARM64_FTR_REG(SYS_ID_ISAR6_EL1, ftr_id    
762                                                   
763         /* Op1 = 0, CRn = 0, CRm = 3 */           
764         ARM64_FTR_REG(SYS_MVFR0_EL1, ftr_mvfr0    
765         ARM64_FTR_REG(SYS_MVFR1_EL1, ftr_mvfr1    
766         ARM64_FTR_REG(SYS_MVFR2_EL1, ftr_mvfr2    
767         ARM64_FTR_REG(SYS_ID_PFR2_EL1, ftr_id_    
768         ARM64_FTR_REG(SYS_ID_DFR1_EL1, ftr_id_    
769         ARM64_FTR_REG(SYS_ID_MMFR5_EL1, ftr_id    
770                                                   
771         /* Op1 = 0, CRn = 0, CRm = 4 */           
772         ARM64_FTR_REG_OVERRIDE(SYS_ID_AA64PFR0    
773                                &id_aa64pfr0_ov    
774         ARM64_FTR_REG_OVERRIDE(SYS_ID_AA64PFR1    
775                                &id_aa64pfr1_ov    
776         ARM64_FTR_REG(SYS_ID_AA64PFR2_EL1, ftr    
777         ARM64_FTR_REG_OVERRIDE(SYS_ID_AA64ZFR0    
778                                &id_aa64zfr0_ov    
779         ARM64_FTR_REG_OVERRIDE(SYS_ID_AA64SMFR    
780                                &id_aa64smfr0_o    
781         ARM64_FTR_REG(SYS_ID_AA64FPFR0_EL1, ft    
782                                                   
783         /* Op1 = 0, CRn = 0, CRm = 5 */           
784         ARM64_FTR_REG(SYS_ID_AA64DFR0_EL1, ftr    
785         ARM64_FTR_REG(SYS_ID_AA64DFR1_EL1, ftr    
786                                                   
787         /* Op1 = 0, CRn = 0, CRm = 6 */           
788         ARM64_FTR_REG(SYS_ID_AA64ISAR0_EL1, ft    
789         ARM64_FTR_REG_OVERRIDE(SYS_ID_AA64ISAR    
790                                &id_aa64isar1_o    
791         ARM64_FTR_REG_OVERRIDE(SYS_ID_AA64ISAR    
792                                &id_aa64isar2_o    
793         ARM64_FTR_REG(SYS_ID_AA64ISAR3_EL1, ft    
794                                                   
795         /* Op1 = 0, CRn = 0, CRm = 7 */           
796         ARM64_FTR_REG_OVERRIDE(SYS_ID_AA64MMFR    
797                                &id_aa64mmfr0_o    
798         ARM64_FTR_REG_OVERRIDE(SYS_ID_AA64MMFR    
799                                &id_aa64mmfr1_o    
800         ARM64_FTR_REG_OVERRIDE(SYS_ID_AA64MMFR    
801                                &id_aa64mmfr2_o    
802         ARM64_FTR_REG(SYS_ID_AA64MMFR3_EL1, ft    
803         ARM64_FTR_REG(SYS_ID_AA64MMFR4_EL1, ft    
804                                                   
805         /* Op1 = 1, CRn = 0, CRm = 0 */           
806         ARM64_FTR_REG(SYS_GMID_EL1, ftr_gmid),    
807                                                   
808         /* Op1 = 3, CRn = 0, CRm = 0 */           
809         { SYS_CTR_EL0, &arm64_ftr_reg_ctrel0 }    
810         ARM64_FTR_REG(SYS_DCZID_EL0, ftr_dczid    
811                                                   
812         /* Op1 = 3, CRn = 14, CRm = 0 */          
813         ARM64_FTR_REG(SYS_CNTFRQ_EL0, ftr_sing    
814 };                                                
815                                                   
816 static int search_cmp_ftr_reg(const void *id, const void *regp)
817 {
818         return (int)(unsigned long)id - (int)((const struct __ftr_reg_entry *)regp)->sys_id;
819 }
820                                                   
821 /*                                                
822  * get_arm64_ftr_reg_nowarn - Looks up a feature register entry using
823  * its sys_reg() encoding. With the array arm64_ftr_regs sorted in
824  * ascending order of sys_id, we use binary search to find a matching
825  * entry.
826  *
827  * returns - Upon success, the matching ftr_reg entry for the given sys_id.
828  *         - NULL on failure. It is up to the caller to decide
829  *           the impact of a failure.
830  */                                               
831 static struct arm64_ftr_reg *get_arm64_ftr_reg_nowarn(u32 sys_id)
832 {
833         const struct __ftr_reg_entry *ret;
834 
835         ret = bsearch((const void *)(unsigned long)sys_id,
836                         arm64_ftr_regs,
837                         ARRAY_SIZE(arm64_ftr_regs),
838                         sizeof(arm64_ftr_regs[0]),
839                         search_cmp_ftr_reg);
840         if (ret)
841                 return ret->reg;
842         return NULL;
843 }                                                 
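get_arm64_ftr_reg_nowarn() smuggles the integer sys_id key through bsearch()'s pointer argument and lets the comparator unpack it again. A standalone sketch of the same lookup over a toy table (entries and IDs are made up for illustration):

```c
#include <assert.h>
#include <stdlib.h>
#include <string.h>

struct entry {
	unsigned int sys_id;
	const char *name;
};

/* Key-vs-element comparator, same shape as search_cmp_ftr_reg(). */
static int cmp_entry(const void *id, const void *ep)
{
	return (int)(unsigned long)id - (int)((const struct entry *)ep)->sys_id;
}

/* Must be sorted by ascending sys_id, the invariant sort_ftr_regs() checks. */
static const struct entry table[] = {
	{ 1, "ID_AA64PFR0" }, { 4, "ID_AA64ISAR0" }, { 9, "CTR" },
};

static const struct entry *lookup(unsigned int sys_id)
{
	return bsearch((const void *)(unsigned long)sys_id, table,
		       sizeof(table) / sizeof(table[0]), sizeof(table[0]),
		       cmp_entry);
}
```

Passing the key by value inside the pointer avoids taking the address of a temporary; the comparator must then treat its first argument as the key, not as an element.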
844                                                   
845 /*                                                
846  * get_arm64_ftr_reg - Looks up a feature register entry using
847  * its sys_reg() encoding. This calls get_arm64_ftr_reg_nowarn().
848  *
849  * returns - Upon success, the matching ftr_reg entry for the given sys_id.
850  *         - NULL on failure, but with a WARN_ON().
851  */                                               
852 struct arm64_ftr_reg *get_arm64_ftr_reg(u32 sys_id)
853 {                                                 
854         struct arm64_ftr_reg *reg;                
855                                                   
856         reg = get_arm64_ftr_reg_nowarn(sys_id);
857                                                   
858         /*                                        
859          * Requesting a non-existent register search is an error. Warn
860          * and let the caller handle it.          
861          */                                       
862         WARN_ON(!reg);                            
863         return reg;                               
864 }                                                 
865                                                   
866 static u64 arm64_ftr_set_value(const struct arm64_ftr_bits *ftrp, s64 reg,
867                                s64 ftr_val)       
868 {                                                 
869         u64 mask = arm64_ftr_mask(ftrp);          
870                                                   
871         reg &= ~mask;                             
872         reg |= (ftr_val << ftrp->shift) & mask;
873         return reg;                               
874 }                                                 
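arm64_ftr_set_value() is a plain read-modify-write of one bitfield: build the field mask from the descriptor's shift and width, clear the field, then OR in the shifted new value. Sketched below with explicit shift/width parameters in place of the ftr_bits descriptor (names are illustrative):

```c
#include <assert.h>
#include <stdint.h>

/* Mask covering a 'width'-bit field at 'shift' (cf. arm64_ftr_mask()). */
static uint64_t field_mask(unsigned int shift, unsigned int width)
{
	return ((1ULL << width) - 1) << shift;
}

/* Sketch of arm64_ftr_set_value(): clear the field, insert the new value. */
static uint64_t set_field(uint64_t reg, unsigned int shift, unsigned int width,
			  int64_t val)
{
	uint64_t mask = field_mask(shift, width);

	reg &= ~mask;
	reg |= ((uint64_t)val << shift) & mask;
	return reg;
}
```

Masking after the shift also silently truncates an over-wide value to the field, which is what the kernel version relies on when writing back sanitised field values.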
875                                                   
876 s64 arm64_ftr_safe_value(const struct arm64_ftr_bits *ftrp, s64 new,
877                                 s64 cur)          
878 {                                                 
879         s64 ret = 0;                              
880                                                   
881         switch (ftrp->type) {                     
882         case FTR_EXACT:                           
883                 ret = ftrp->safe_val;             
884                 break;                            
885         case FTR_LOWER_SAFE:                      
886                 ret = min(new, cur);              
887                 break;                            
888         case FTR_HIGHER_OR_ZERO_SAFE:             
889                 if (!cur || !new)                 
890                         break;                    
891                 fallthrough;                      
892         case FTR_HIGHER_SAFE:                     
893                 ret = max(new, cur);              
894                 break;                            
895         default:                                  
896                 BUG();                            
897         }                                         
898                                                   
899         return ret;                               
900 }                                                 
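The switch in arm64_ftr_safe_value() encodes the sanitisation policy per field type: FTR_EXACT falls back to a fixed safe value on any mismatch, FTR_LOWER_SAFE keeps the minimum, FTR_HIGHER_SAFE the maximum, and FTR_HIGHER_OR_ZERO_SAFE lets a zero ("unknown") win outright before falling through to the maximum. A userspace rendering of the same policy (the kernel BUG()s on an unknown type; this sketch just returns safe_val, and renames `new`/`cur` for neutrality):

```c
#include <assert.h>

enum ftr_type {
	FTR_EXACT,
	FTR_LOWER_SAFE,
	FTR_HIGHER_SAFE,
	FTR_HIGHER_OR_ZERO_SAFE,
};

/* Sketch of arm64_ftr_safe_value(): combine two CPUs' field values. */
static long safe_value(enum ftr_type type, long safe_val, long newval,
		       long curval)
{
	switch (type) {
	case FTR_EXACT:			/* any mismatch -> fixed safe value */
		return safe_val;
	case FTR_LOWER_SAFE:		/* fewer features is always safe */
		return newval < curval ? newval : curval;
	case FTR_HIGHER_OR_ZERO_SAFE:	/* zero means "unknown", wins outright */
		if (!curval || !newval)
			return 0;
		/* fall through */
	case FTR_HIGHER_SAFE:		/* e.g. SpecSEI: assume the worst */
		return newval > curval ? newval : curval;
	}
	return safe_val;
}
```

Note the asymmetry: LOWER_SAFE is the common case for capability fields, while HIGHER_SAFE exists for fields where a larger value describes worse behaviour that software must tolerate.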
901                                                   
902 static void __init sort_ftr_regs(void)            
903 {                                                 
904         unsigned int i;                           
905                                                   
906         for (i = 0; i < ARRAY_SIZE(arm64_ftr_r    
907                 const struct arm64_ftr_reg *ft    
908                 const struct arm64_ftr_bits *f    
909                 unsigned int j = 0;               
910                                                   
911                 /*                                
912                  * Features here must be sorted in descending order with
913                  * respect to their shift values and should not overlap
914                  */                               
915                 for (; ftr_bits->width != 0; f    
916                         unsigned int width = f    
917                         unsigned int shift = f    
918                         unsigned int prev_shif    
919                                                   
920                         WARN((shift  + width)     
921                                 "%s has invali    
922                                 ftr_reg->name,    
923                                                   
924                         /*                        
925                          * Skip the first feature. There is nothing to
926                          * compare against for now.
927                          */                       
928                         if (j == 0)               
929                                 continue;         
930                                                   
931                         prev_shift = ftr_reg->    
932                         WARN((shift + width) >    
933                                 "%s has featur    
934                                 ftr_reg->name,    
935                 }                                 
936                                                   
937                 /*                                
938                  * Skip the first register. There is nothing to
939                  * compare against for now.       
940                  */                               
941                 if (i == 0)                       
942                         continue;                 
943                 /*                                
944                  * Registers here must be sorted in ascending order with
945                  * respect to sys_id for the subsequent binary search
946                  * in get_arm64_ftr_reg() to work correctly.
947                  */                               
948                 BUG_ON(arm64_ftr_regs[i].sys_i    
949         }                                         
950 }                                                 
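Despite its name, sort_ftr_regs() only verifies ordering, it does not sort: feature fields must already be laid out in descending shift order without overlap, and registers in ascending sys_id order for the later bsearch(). The field-level invariant can be checked as below (a simplified stand-in: the kernel WARN()s per violation, this returns a single boolean):

```c
#include <assert.h>

struct field {
	unsigned int shift, width;
};

/*
 * Check the sort_ftr_regs() field invariant: descending shift order,
 * no overlap with the previous (higher) field, nothing past bit 63.
 */
static int fields_well_formed(const struct field *f, unsigned int n)
{
	unsigned int i;

	for (i = 0; i < n; i++) {
		if (f[i].shift + f[i].width > 64)
			return 0;
		if (i && f[i].shift + f[i].width > f[i - 1].shift)
			return 0;	/* overlaps the previous field */
	}
	return 1;
}

static int demo_ok(void)
{
	const struct field f[] = { { 28, 4 }, { 20, 4 }, { 0, 4 } };

	return fields_well_formed(f, 3);
}

static int demo_overlap(void)
{
	const struct field f[] = { { 4, 4 }, { 0, 8 } };	/* bits 4-7 collide */

	return fields_well_formed(f, 2);
}
```

Because "previous" means the previously listed (higher) field, the single comparison shift + width > prev_shift covers both the ordering and the overlap requirement at once.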
951                                                   
952 /*                                                
953  * Initialise the CPU feature register from the boot CPU's values.
954  * Also initialises the strict_mask for the register.
955  * Any bits that are not covered by an arm64_ftr_bits entry are considered
956  * RES0 for the system-wide value, and must strictly match.
957  */                                               
958 static void init_cpu_ftr_reg(u32 sys_reg, u64     
959 {                                                 
960         u64 val = 0;                              
961         u64 strict_mask = ~0x0ULL;                
962         u64 user_mask = 0;                        
963         u64 valid_mask = 0;                       
964                                                   
965         const struct arm64_ftr_bits *ftrp;        
966         struct arm64_ftr_reg *reg = get_arm64_    
967                                                   
968         if (!reg)                                 
969                 return;                           
970                                                   
971         for (ftrp = reg->ftr_bits; ftrp->width    
972                 u64 ftr_mask = arm64_ftr_mask(    
973                 s64 ftr_new = arm64_ftr_value(    
974                 s64 ftr_ovr = arm64_ftr_value(    
975                                                   
976                 if ((ftr_mask & reg->override-    
977                         s64 tmp = arm64_ftr_sa    
978                         char *str = NULL;         
979                                                   
980                         if (ftr_ovr != tmp) {     
981                                 /* Unsafe, rem    
982                                 reg->override-    
983                                 reg->override-    
984                                 tmp = ftr_ovr;    
985                                 str = "ignorin    
986                         } else if (ftr_new !=     
987                                 /* Override wa    
988                                 ftr_new = tmp;    
989                                 str = "forced"    
990                         } else if (ftr_ovr ==     
991                                 /* Override wa    
992                                 str = "already    
993                         }                         
994                                                   
995                         if (str)                  
996                                 pr_warn("%s[%d    
997                                         reg->n    
998                                         ftrp->    
999                                         ftrp->    
1000                                         tmp &    
1001                 } else if ((ftr_mask & reg->o    
1002                         reg->override->val &=    
1003                         pr_warn("%s[%d:%d]: i    
1004                                 reg->name,       
1005                                 ftrp->shift +    
1006                                 ftrp->shift);    
1007                 }                                
1008                                                  
1009                 val = arm64_ftr_set_value(ftr    
1010                                                  
1011                 valid_mask |= ftr_mask;          
1012                 if (!ftrp->strict)               
1013                         strict_mask &= ~ftr_m    
1014                 if (ftrp->visible)               
1015                         user_mask |= ftr_mask    
1016                 else                             
1017                         reg->user_val = arm64    
1018                                                  
1019                                                  
1020         }                                        
1021                                                  
1022         val &= valid_mask;                       
1023                                                  
1024         reg->sys_val = val;                      
1025         reg->strict_mask = strict_mask;          
1026         reg->user_mask = user_mask;              
1027 }                                                
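The net effect of the loop in init_cpu_ftr_reg() is three masks: valid_mask (bits described by any ftr_bits entry), strict_mask (bits that must match across CPUs; everything starts strict and non-strict fields are carved out, so undescribed RES0 bits stay strict), and user_mask (bits exposed to userspace). A reduced sketch with a flat field descriptor, override handling omitted and names hypothetical:

```c
#include <assert.h>
#include <stdint.h>

struct ftr_field {
	unsigned int shift, width;
	int strict, visible;
};

/* Sketch of the mask bookkeeping done by init_cpu_ftr_reg(). */
static void build_masks(const struct ftr_field *f, unsigned int n,
			uint64_t *valid, uint64_t *strict, uint64_t *user)
{
	unsigned int i;

	*valid = 0;
	*strict = ~0ULL;	/* every bit strict until proven otherwise */
	*user = 0;
	for (i = 0; i < n; i++) {
		uint64_t mask = ((1ULL << f[i].width) - 1) << f[i].shift;

		*valid |= mask;
		if (!f[i].strict)
			*strict &= ~mask;
		if (f[i].visible)
			*user |= mask;
	}
}

/* which: 0 = valid, 1 = strict, 2 = user */
static uint64_t demo_mask(int which)
{
	const struct ftr_field f[] = {
		{ 0, 4, 1, 1 },		/* strict, visible */
		{ 4, 4, 0, 0 },		/* non-strict, hidden */
	};
	uint64_t valid, strict, user;

	build_masks(f, 2, &valid, &strict, &user);
	return which == 0 ? valid : which == 1 ? strict : user;
}
```

Starting strict_mask at all-ones is the key subtlety: it is what makes undescribed bits "RES0 and must strictly match" without any explicit entry for them.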
1028                                                  
1029 extern const struct arm64_cpu_capabilities ar    
1030 static const struct arm64_cpu_capabilities ar    
1031                                                  
1032 static void __init                               
1033 init_cpucap_indirect_list_from_array(const st    
1034 {                                                
1035         for (; caps->matches; caps++) {          
1036                 if (WARN(caps->capability >=     
1037                         "Invalid capability %    
1038                         continue;                
1039                 if (WARN(cpucap_ptrs[caps->ca    
1040                         "Duplicate entry for     
1041                         caps->capability))       
1042                         continue;                
1043                 cpucap_ptrs[caps->capability]    
1044         }                                        
1045 }                                                
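init_cpucap_indirect_list_from_array() turns the flat capability arrays into a table indexed by capability number, WARN()ing and skipping out-of-range IDs and duplicates rather than failing. A sketch of that guard (the real loop terminates on a NULL ->matches pointer rather than an explicit count; names and sizes here are illustrative):

```c
#include <assert.h>
#include <stddef.h>

#define NR_CAPS 8

struct cap {
	int capability;
};

static const struct cap *cap_ptrs[NR_CAPS];

/*
 * Sketch of init_cpucap_indirect_list_from_array(): index each entry by
 * its capability number, skipping bad ones; returns how many were skipped
 * (the kernel WARN()s at each skip instead).
 */
static int register_caps(const struct cap *caps, int n)
{
	int i, skipped = 0;

	for (i = 0; i < n; i++) {
		if (caps[i].capability >= NR_CAPS ||
		    cap_ptrs[caps[i].capability]) {
			skipped++;
			continue;
		}
		cap_ptrs[caps[i].capability] = &caps[i];
	}
	return skipped;
}

static int demo_register(void)
{
	static const struct cap caps[] = { { 1 }, { 3 }, { 1 }, { 99 } };

	return register_caps(caps, 4);	/* one duplicate + one out of range */
}
```

Indexing by capability number is what later lets the boot path fetch a capability's descriptor in O(1) from just the cap constant.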
1046                                                  
1047 static void __init init_cpucap_indirect_list(    
1048 {                                                
1049         init_cpucap_indirect_list_from_array(    
1050         init_cpucap_indirect_list_from_array(    
1051 }                                                
1052                                                  
1053 static void __init setup_boot_cpu_capabilitie    
1054                                                  
1055 static void init_32bit_cpu_features(struct cp    
1056 {                                                
1057         init_cpu_ftr_reg(SYS_ID_DFR0_EL1, inf    
1058         init_cpu_ftr_reg(SYS_ID_DFR1_EL1, inf    
1059         init_cpu_ftr_reg(SYS_ID_ISAR0_EL1, in    
1060         init_cpu_ftr_reg(SYS_ID_ISAR1_EL1, in    
1061         init_cpu_ftr_reg(SYS_ID_ISAR2_EL1, in    
1062         init_cpu_ftr_reg(SYS_ID_ISAR3_EL1, in    
1063         init_cpu_ftr_reg(SYS_ID_ISAR4_EL1, in    
1064         init_cpu_ftr_reg(SYS_ID_ISAR5_EL1, in    
1065         init_cpu_ftr_reg(SYS_ID_ISAR6_EL1, in    
1066         init_cpu_ftr_reg(SYS_ID_MMFR0_EL1, in    
1067         init_cpu_ftr_reg(SYS_ID_MMFR1_EL1, in    
1068         init_cpu_ftr_reg(SYS_ID_MMFR2_EL1, in    
1069         init_cpu_ftr_reg(SYS_ID_MMFR3_EL1, in    
1070         init_cpu_ftr_reg(SYS_ID_MMFR4_EL1, in    
1071         init_cpu_ftr_reg(SYS_ID_MMFR5_EL1, in    
1072         init_cpu_ftr_reg(SYS_ID_PFR0_EL1, inf    
1073         init_cpu_ftr_reg(SYS_ID_PFR1_EL1, inf    
1074         init_cpu_ftr_reg(SYS_ID_PFR2_EL1, inf    
1075         init_cpu_ftr_reg(SYS_MVFR0_EL1, info-    
1076         init_cpu_ftr_reg(SYS_MVFR1_EL1, info-    
1077         init_cpu_ftr_reg(SYS_MVFR2_EL1, info-    
1078 }                                                
1079                                                  
1080 #ifdef CONFIG_ARM64_PSEUDO_NMI                   
1081 static bool enable_pseudo_nmi;                   
1082                                                  
1083 static int __init early_enable_pseudo_nmi(cha    
1084 {                                                
1085         return kstrtobool(p, &enable_pseudo_n    
1086 }                                                
1087 early_param("irqchip.gicv3_pseudo_nmi", early    
1088                                                  
1089 static __init void detect_system_supports_pse    
1090 {                                                
1091         struct device_node *np;                  
1092                                                  
1093         if (!enable_pseudo_nmi)                  
1094                 return;                          
1095                                                  
1096         /*                                       
1097          * Detect broken MediaTek firmware that doesn't properly save and
1098          * restore GIC priorities.               
1099          */                                      
1100         np = of_find_compatible_node(NULL, NU    
1101         if (np && of_property_read_bool(np, "    
1102                 pr_info("Pseudo-NMI disabled     
1103                 enable_pseudo_nmi = false;       
1104         }                                        
1105         of_node_put(np);                         
1106 }                                                
1107 #else /* CONFIG_ARM64_PSEUDO_NMI */              
1108 static inline void detect_system_supports_pse    
1109 #endif                                           
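early_enable_pseudo_nmi() simply feeds the value of the "irqchip.gicv3_pseudo_nmi" command-line parameter to kstrtobool(). A simplified userspace stand-in for that parse (the real kstrtobool() is more permissive, e.g. case-insensitive and matching on leading characters; this sketch handles only the common exact spellings):

```c
#include <assert.h>
#include <string.h>

/* Simplified kstrtobool()-style parse: 0 on success, -1 on junk. */
static int parse_bool(const char *s, int *res)
{
	if (!s)
		return -1;
	if (!strcmp(s, "1") || !strcmp(s, "y") || !strcmp(s, "on")) {
		*res = 1;
		return 0;
	}
	if (!strcmp(s, "0") || !strcmp(s, "n") || !strcmp(s, "off")) {
		*res = 0;
		return 0;
	}
	return -1;
}

/* Convenience wrapper: parsed value, or -1 when the string is invalid. */
static int demo_parse(const char *s)
{
	int v = -1;

	return parse_bool(s, &v) == 0 ? v : -1;
}
```

Returning the kstrtobool() result directly from the early_param handler, as the kernel does, means a malformed value is reported as a parse error instead of silently defaulting.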
1110                                                  
1111 void __init init_cpu_features(struct cpuinfo_    
1112 {                                                
1113         /* Before we start using the tables, make sure they are sorted */
1114         sort_ftr_regs();                         
1115                                                  
1116         init_cpu_ftr_reg(SYS_CTR_EL0, info->r    
1117         init_cpu_ftr_reg(SYS_DCZID_EL0, info-    
1118         init_cpu_ftr_reg(SYS_CNTFRQ_EL0, info    
1119         init_cpu_ftr_reg(SYS_ID_AA64DFR0_EL1,    
1120         init_cpu_ftr_reg(SYS_ID_AA64DFR1_EL1,    
1121         init_cpu_ftr_reg(SYS_ID_AA64ISAR0_EL1    
1122         init_cpu_ftr_reg(SYS_ID_AA64ISAR1_EL1    
1123         init_cpu_ftr_reg(SYS_ID_AA64ISAR2_EL1    
1124         init_cpu_ftr_reg(SYS_ID_AA64ISAR3_EL1    
1125         init_cpu_ftr_reg(SYS_ID_AA64MMFR0_EL1    
1126         init_cpu_ftr_reg(SYS_ID_AA64MMFR1_EL1    
1127         init_cpu_ftr_reg(SYS_ID_AA64MMFR2_EL1    
1128         init_cpu_ftr_reg(SYS_ID_AA64MMFR3_EL1    
1129         init_cpu_ftr_reg(SYS_ID_AA64MMFR4_EL1    
1130         init_cpu_ftr_reg(SYS_ID_AA64PFR0_EL1,    
1131         init_cpu_ftr_reg(SYS_ID_AA64PFR1_EL1,    
1132         init_cpu_ftr_reg(SYS_ID_AA64PFR2_EL1,    
1133         init_cpu_ftr_reg(SYS_ID_AA64ZFR0_EL1,    
1134         init_cpu_ftr_reg(SYS_ID_AA64SMFR0_EL1    
1135         init_cpu_ftr_reg(SYS_ID_AA64FPFR0_EL1    
1136                                                  
1137         if (id_aa64pfr0_32bit_el0(info->reg_i    
1138                 init_32bit_cpu_features(&info    
1139                                                  
1140         if (IS_ENABLED(CONFIG_ARM64_SVE) &&      
1141             id_aa64pfr0_sve(read_sanitised_ft    
1142                 unsigned long cpacr = cpacr_s    
1143                                                  
1144                 vec_init_vq_map(ARM64_VEC_SVE    
1145                                                  
1146                 cpacr_restore(cpacr);            
1147         }                                        
1148                                                  
1149         if (IS_ENABLED(CONFIG_ARM64_SME) &&      
1150             id_aa64pfr1_sme(read_sanitised_ft    
1151                 unsigned long cpacr = cpacr_s    
1152                                                  
1153                 /*                               
1154                  * We mask out SMPS since eve    
1155                  * supports priorities the ke    
1156                  * and we block access to the    
1157                  */                              
1158                 info->reg_smidr = read_cpuid(    
1159                 vec_init_vq_map(ARM64_VEC_SME    
1160                                                  
1161                 cpacr_restore(cpacr);            
1162         }                                        
1163                                                  
1164         if (id_aa64pfr1_mte(info->reg_id_aa64    
1165                 init_cpu_ftr_reg(SYS_GMID_EL1    
1166 }                                                
1167                                                  
static void update_cpu_ftr_reg(struct arm64_ftr_reg *reg, u64 new)
{
	const struct arm64_ftr_bits *ftrp;

	for (ftrp = reg->ftr_bits; ftrp->width; ftrp++) {
		s64 ftr_cur = arm64_ftr_value(ftrp, reg->sys_val);
		s64 ftr_new = arm64_ftr_value(ftrp, new);

		if (ftr_cur == ftr_new)
			continue;
		/* Find a safe value */
		ftr_new = arm64_ftr_safe_value(ftrp, ftr_new, ftr_cur);
		reg->sys_val = arm64_ftr_set_value(ftrp, reg->sys_val, ftr_new);
	}

}

static int check_update_ftr_reg(u32 sys_id, int cpu, u64 val, u64 boot)
{
	struct arm64_ftr_reg *regp = get_arm64_ftr_reg(sys_id);

	if (!regp)
		return 0;

	update_cpu_ftr_reg(regp, val);
	if ((boot & regp->strict_mask) == (val & regp->strict_mask))
		return 0;
	pr_warn("SANITY CHECK: Unexpected variation in %s. Boot CPU: %#016llx, CPU%d: %#016llx\n",
			regp->name, boot, cpu, val);
	return 1;
}

static void relax_cpu_ftr_reg(u32 sys_id, int field)
{
	const struct arm64_ftr_bits *ftrp;
	struct arm64_ftr_reg *regp = get_arm64_ftr_reg(sys_id);

	if (!regp)
		return;

	for (ftrp = regp->ftr_bits; ftrp->width; ftrp++) {
		if (ftrp->shift == field) {
			regp->strict_mask &= ~arm64_ftr_mask(ftrp);
			break;
		}
	}

	/* Bogus field? */
	WARN_ON(!ftrp->width);
}

static void lazy_init_32bit_cpu_features(struct cpuinfo_arm64 *info,
					 struct cpuinfo_arm64 *boot)
{
	static bool boot_cpu_32bit_regs_overridden = false;

	if (!allow_mismatched_32bit_el0 || boot_cpu_32bit_regs_overridden)
		return;

	if (id_aa64pfr0_32bit_el0(boot->reg_id_aa64pfr0))
		return;

	boot->aarch32 = info->aarch32;
	init_32bit_cpu_features(&boot->aarch32);
	boot_cpu_32bit_regs_overridden = true;
}

static int update_32bit_cpu_features(int cpu, struct cpuinfo_32bit *info,
				     struct cpuinfo_32bit *boot)
{
	int taint = 0;
	u64 pfr0 = read_sanitised_ftr_reg(SYS_ID_AA64PFR0_EL1);

	/*
	 * If we don't have AArch32 at EL1, then relax the strictness of the
	 * EL1-dependent register fields to avoid spurious sanity check fails.
	 */
	if (!id_aa64pfr0_32bit_el1(pfr0)) {
		relax_cpu_ftr_reg(SYS_ID_ISAR4_EL1, ID_ISAR4_EL1_SMC_SHIFT);
		relax_cpu_ftr_reg(SYS_ID_PFR1_EL1, ID_PFR1_EL1_Virt_frac_SHIFT);
		relax_cpu_ftr_reg(SYS_ID_PFR1_EL1, ID_PFR1_EL1_Sec_frac_SHIFT);
		relax_cpu_ftr_reg(SYS_ID_PFR1_EL1, ID_PFR1_EL1_Virtualization_SHIFT);
		relax_cpu_ftr_reg(SYS_ID_PFR1_EL1, ID_PFR1_EL1_Security_SHIFT);
		relax_cpu_ftr_reg(SYS_ID_PFR1_EL1, ID_PFR1_EL1_ProgMod_SHIFT);
	}

	taint |= check_update_ftr_reg(SYS_ID_DFR0_EL1, cpu,
				      info->reg_id_dfr0, boot->reg_id_dfr0);
	taint |= check_update_ftr_reg(SYS_ID_DFR1_EL1, cpu,
				      info->reg_id_dfr1, boot->reg_id_dfr1);
	taint |= check_update_ftr_reg(SYS_ID_ISAR0_EL1, cpu,
				      info->reg_id_isar0, boot->reg_id_isar0);
	taint |= check_update_ftr_reg(SYS_ID_ISAR1_EL1, cpu,
				      info->reg_id_isar1, boot->reg_id_isar1);
	taint |= check_update_ftr_reg(SYS_ID_ISAR2_EL1, cpu,
				      info->reg_id_isar2, boot->reg_id_isar2);
	taint |= check_update_ftr_reg(SYS_ID_ISAR3_EL1, cpu,
				      info->reg_id_isar3, boot->reg_id_isar3);
	taint |= check_update_ftr_reg(SYS_ID_ISAR4_EL1, cpu,
				      info->reg_id_isar4, boot->reg_id_isar4);
	taint |= check_update_ftr_reg(SYS_ID_ISAR5_EL1, cpu,
				      info->reg_id_isar5, boot->reg_id_isar5);
	taint |= check_update_ftr_reg(SYS_ID_ISAR6_EL1, cpu,
				      info->reg_id_isar6, boot->reg_id_isar6);

	/*
	 * Regardless of the value of the AuxReg field, the AIFSR, ADFSR, and
	 * ACTLR formats could differ across CPUs and therefore would have to
	 * be trapped for virtualization anyway.
	 */
	taint |= check_update_ftr_reg(SYS_ID_MMFR0_EL1, cpu,
				      info->reg_id_mmfr0, boot->reg_id_mmfr0);
	taint |= check_update_ftr_reg(SYS_ID_MMFR1_EL1, cpu,
				      info->reg_id_mmfr1, boot->reg_id_mmfr1);
	taint |= check_update_ftr_reg(SYS_ID_MMFR2_EL1, cpu,
				      info->reg_id_mmfr2, boot->reg_id_mmfr2);
	taint |= check_update_ftr_reg(SYS_ID_MMFR3_EL1, cpu,
				      info->reg_id_mmfr3, boot->reg_id_mmfr3);
	taint |= check_update_ftr_reg(SYS_ID_MMFR4_EL1, cpu,
				      info->reg_id_mmfr4, boot->reg_id_mmfr4);
	taint |= check_update_ftr_reg(SYS_ID_MMFR5_EL1, cpu,
				      info->reg_id_mmfr5, boot->reg_id_mmfr5);
	taint |= check_update_ftr_reg(SYS_ID_PFR0_EL1, cpu,
				      info->reg_id_pfr0, boot->reg_id_pfr0);
	taint |= check_update_ftr_reg(SYS_ID_PFR1_EL1, cpu,
				      info->reg_id_pfr1, boot->reg_id_pfr1);
	taint |= check_update_ftr_reg(SYS_ID_PFR2_EL1, cpu,
				      info->reg_id_pfr2, boot->reg_id_pfr2);
	taint |= check_update_ftr_reg(SYS_MVFR0_EL1, cpu,
				      info->reg_mvfr0, boot->reg_mvfr0);
	taint |= check_update_ftr_reg(SYS_MVFR1_EL1, cpu,
				      info->reg_mvfr1, boot->reg_mvfr1);
	taint |= check_update_ftr_reg(SYS_MVFR2_EL1, cpu,
				      info->reg_mvfr2, boot->reg_mvfr2);

	return taint;
}

/*
 * Update system wide CPU feature registers with the values from a
 * non-boot CPU. Also performs SANITY checks to make sure that there
 * aren't any insane variations from that of the boot CPU.
 */
void update_cpu_features(int cpu,
			 struct cpuinfo_arm64 *info,
			 struct cpuinfo_arm64 *boot)
{
	int taint = 0;

	/*
	 * The kernel can handle differing I-cache policies, but otherwise
	 * caches should look identical. Userspace JITs will make use of
	 * *minLine.
	 */
	taint |= check_update_ftr_reg(SYS_CTR_EL0, cpu,
				      info->reg_ctr, boot->reg_ctr);

	/*
	 * Userspace may perform DC ZVA instructions. Mismatched block sizes
	 * could result in too much or too little memory being zeroed if a
	 * process is preempted and migrated between CPUs.
	 */
	taint |= check_update_ftr_reg(SYS_DCZID_EL0, cpu,
				      info->reg_dczid, boot->reg_dczid);

	/* If different, timekeeping will be broken (especially with KVM) */
	taint |= check_update_ftr_reg(SYS_CNTFRQ_EL0, cpu,
				      info->reg_cntfrq, boot->reg_cntfrq);

	/*
	 * The kernel uses self-hosted debug features and expects CPUs to
	 * support identical debug features. We presently need CTX_CMPs, WRPs,
	 * and BRPs to be identical.
	 * ID_AA64DFR1 is currently RES0.
	 */
	taint |= check_update_ftr_reg(SYS_ID_AA64DFR0_EL1, cpu,
				      info->reg_id_aa64dfr0, boot->reg_id_aa64dfr0);
	taint |= check_update_ftr_reg(SYS_ID_AA64DFR1_EL1, cpu,
				      info->reg_id_aa64dfr1, boot->reg_id_aa64dfr1);
	/*
	 * Even in big.LITTLE, processors should be identical instruction-set
	 * wise.
	 */
	taint |= check_update_ftr_reg(SYS_ID_AA64ISAR0_EL1, cpu,
				      info->reg_id_aa64isar0, boot->reg_id_aa64isar0);
	taint |= check_update_ftr_reg(SYS_ID_AA64ISAR1_EL1, cpu,
				      info->reg_id_aa64isar1, boot->reg_id_aa64isar1);
	taint |= check_update_ftr_reg(SYS_ID_AA64ISAR2_EL1, cpu,
				      info->reg_id_aa64isar2, boot->reg_id_aa64isar2);
	taint |= check_update_ftr_reg(SYS_ID_AA64ISAR3_EL1, cpu,
				      info->reg_id_aa64isar3, boot->reg_id_aa64isar3);

	/*
	 * Differing PARange support is fine as long as all peripherals and
	 * memory are mapped within the minimum PARange of all CPUs.
	 * Linux should not care about secure memory.
	 */
	taint |= check_update_ftr_reg(SYS_ID_AA64MMFR0_EL1, cpu,
				      info->reg_id_aa64mmfr0, boot->reg_id_aa64mmfr0);
	taint |= check_update_ftr_reg(SYS_ID_AA64MMFR1_EL1, cpu,
				      info->reg_id_aa64mmfr1, boot->reg_id_aa64mmfr1);
	taint |= check_update_ftr_reg(SYS_ID_AA64MMFR2_EL1, cpu,
				      info->reg_id_aa64mmfr2, boot->reg_id_aa64mmfr2);
	taint |= check_update_ftr_reg(SYS_ID_AA64MMFR3_EL1, cpu,
				      info->reg_id_aa64mmfr3, boot->reg_id_aa64mmfr3);

	taint |= check_update_ftr_reg(SYS_ID_AA64PFR0_EL1, cpu,
				      info->reg_id_aa64pfr0, boot->reg_id_aa64pfr0);
	taint |= check_update_ftr_reg(SYS_ID_AA64PFR1_EL1, cpu,
				      info->reg_id_aa64pfr1, boot->reg_id_aa64pfr1);
	taint |= check_update_ftr_reg(SYS_ID_AA64PFR2_EL1, cpu,
				      info->reg_id_aa64pfr2, boot->reg_id_aa64pfr2);

	taint |= check_update_ftr_reg(SYS_ID_AA64ZFR0_EL1, cpu,
				      info->reg_id_aa64zfr0, boot->reg_id_aa64zfr0);

	taint |= check_update_ftr_reg(SYS_ID_AA64SMFR0_EL1, cpu,
				      info->reg_id_aa64smfr0, boot->reg_id_aa64smfr0);

	taint |= check_update_ftr_reg(SYS_ID_AA64FPFR0_EL1, cpu,
				      info->reg_id_aa64fpfr0, boot->reg_id_aa64fpfr0);

	/* Probe vector lengths */
	if (IS_ENABLED(CONFIG_ARM64_SVE) &&
	    id_aa64pfr0_sve(read_sanitised_ftr_reg(SYS_ID_AA64PFR0_EL1))) {
		if (!system_capabilities_finalized()) {
			unsigned long cpacr = cpacr_save_enable_kernel_sve();

			vec_update_vq_map(ARM64_VEC_SVE);

			cpacr_restore(cpacr);
		}
	}

	if (IS_ENABLED(CONFIG_ARM64_SME) &&
	    id_aa64pfr1_sme(read_sanitised_ftr_reg(SYS_ID_AA64PFR1_EL1))) {
		unsigned long cpacr = cpacr_save_enable_kernel_sme();

		/*
		 * We mask out SMPS since even if the hardware
		 * supports priorities the kernel does not at present
		 * and we block access to them.
		 */
		info->reg_smidr = read_cpuid(SMIDR_EL1) & ~SMIDR_EL1_SMPS;

		/* Probe vector lengths */
		if (!system_capabilities_finalized())
			vec_update_vq_map(ARM64_VEC_SME);

		cpacr_restore(cpacr);
	}

	/*
	 * The kernel uses the LDGM/STGM instructions and the number of tags
	 * they read/write depends on the GMID_EL1.BS field. Check that the
	 * value is the same on all CPUs.
	 */
	if (IS_ENABLED(CONFIG_ARM64_MTE) &&
	    id_aa64pfr1_mte(info->reg_id_aa64pfr1)) {
		taint |= check_update_ftr_reg(SYS_GMID_EL1, cpu,
					      info->reg_gmid, boot->reg_gmid);
	}

	/*
	 * If we don't have AArch32 at all then skip the checks entirely
	 * as the register values may be UNKNOWN and we're not going to be
	 * using them for anything.
	 *
	 * This relies on a sanitised view of the AArch64 ID registers
	 * (e.g. SYS_ID_AA64PFR0_EL1), so we call it last.
	 */
	if (id_aa64pfr0_32bit_el0(info->reg_id_aa64pfr0)) {
		lazy_init_32bit_cpu_features(info, boot);
		taint |= update_32bit_cpu_features(cpu, &info->aarch32,
						   &boot->aarch32);
	}

	/*
	 * Mismatched CPU features are a recipe for disaster. Don't even
	 * pretend to support them.
	 */
	if (taint) {
		pr_warn_once("Unsupported CPU feature variation detected.\n");
		add_taint(TAINT_CPU_OUT_OF_SPEC, LOCKDEP_STILL_OK);
	}
}

u64 read_sanitised_ftr_reg(u32 id)
{
	struct arm64_ftr_reg *regp = get_arm64_ftr_reg(id);

	if (!regp)
		return 0;
	return regp->sys_val;
}
EXPORT_SYMBOL_GPL(read_sanitised_ftr_reg);

#define read_sysreg_case(r)	\
	case r:		val = read_sysreg_s(r); break;

/*
 * __read_sysreg_by_encoding() - Used by a STARTING cpu before cpuinfo is populated.
 * Read the system register on the current CPU
 */
u64 __read_sysreg_by_encoding(u32 sys_id)
{
	struct arm64_ftr_reg *regp;
	u64 val;

	switch (sys_id) {
	read_sysreg_case(SYS_ID_PFR0_EL1);
	read_sysreg_case(SYS_ID_PFR1_EL1);
	read_sysreg_case(SYS_ID_PFR2_EL1);
	read_sysreg_case(SYS_ID_DFR0_EL1);
	read_sysreg_case(SYS_ID_DFR1_EL1);
	read_sysreg_case(SYS_ID_MMFR0_EL1);
	read_sysreg_case(SYS_ID_MMFR1_EL1);
	read_sysreg_case(SYS_ID_MMFR2_EL1);
	read_sysreg_case(SYS_ID_MMFR3_EL1);
	read_sysreg_case(SYS_ID_MMFR4_EL1);
	read_sysreg_case(SYS_ID_MMFR5_EL1);
	read_sysreg_case(SYS_ID_ISAR0_EL1);
	read_sysreg_case(SYS_ID_ISAR1_EL1);
	read_sysreg_case(SYS_ID_ISAR2_EL1);
	read_sysreg_case(SYS_ID_ISAR3_EL1);
	read_sysreg_case(SYS_ID_ISAR4_EL1);
	read_sysreg_case(SYS_ID_ISAR5_EL1);
	read_sysreg_case(SYS_ID_ISAR6_EL1);
	read_sysreg_case(SYS_MVFR0_EL1);
	read_sysreg_case(SYS_MVFR1_EL1);
	read_sysreg_case(SYS_MVFR2_EL1);

	read_sysreg_case(SYS_ID_AA64PFR0_EL1);
	read_sysreg_case(SYS_ID_AA64PFR1_EL1);
	read_sysreg_case(SYS_ID_AA64PFR2_EL1);
	read_sysreg_case(SYS_ID_AA64ZFR0_EL1);
	read_sysreg_case(SYS_ID_AA64SMFR0_EL1);
	read_sysreg_case(SYS_ID_AA64FPFR0_EL1);
	read_sysreg_case(SYS_ID_AA64DFR0_EL1);
	read_sysreg_case(SYS_ID_AA64DFR1_EL1);
	read_sysreg_case(SYS_ID_AA64MMFR0_EL1);
	read_sysreg_case(SYS_ID_AA64MMFR1_EL1);
	read_sysreg_case(SYS_ID_AA64MMFR2_EL1);
	read_sysreg_case(SYS_ID_AA64MMFR3_EL1);
	read_sysreg_case(SYS_ID_AA64MMFR4_EL1);
	read_sysreg_case(SYS_ID_AA64ISAR0_EL1);
	read_sysreg_case(SYS_ID_AA64ISAR1_EL1);
	read_sysreg_case(SYS_ID_AA64ISAR2_EL1);
	read_sysreg_case(SYS_ID_AA64ISAR3_EL1);

	read_sysreg_case(SYS_CNTFRQ_EL0);
	read_sysreg_case(SYS_CTR_EL0);
	read_sysreg_case(SYS_DCZID_EL0);

	default:
		BUG();
		return 0;
	}

	regp  = get_arm64_ftr_reg(sys_id);
	if (regp) {
		val &= ~regp->override->mask;
		val |= (regp->override->val & regp->override->mask);
	}

	return val;
}

#include <linux/irqchip/arm-gic-v3.h>

static bool
has_always(const struct arm64_cpu_capabilities *entry, int scope)
{
	return true;
}

static bool
feature_matches(u64 reg, const struct arm64_cpu_capabilities *entry)
{
	int val, min, max;
	u64 tmp;

	val = cpuid_feature_extract_field_width(reg, entry->field_pos,
						entry->field_width,
						entry->sign);

	tmp = entry->min_field_value;
	tmp <<= entry->field_pos;

	min = cpuid_feature_extract_field_width(tmp, entry->field_pos,
						entry->field_width,
						entry->sign);

	tmp = entry->max_field_value;
	tmp <<= entry->field_pos;

	max = cpuid_feature_extract_field_width(tmp, entry->field_pos,
						entry->field_width,
						entry->sign);

	return val >= min && val <= max;
}

static u64
read_scoped_sysreg(const struct arm64_cpu_capabilities *entry, int scope)
{
	WARN_ON(scope == SCOPE_LOCAL_CPU && preemptible());
	if (scope == SCOPE_SYSTEM)
		return read_sanitised_ftr_reg(entry->sys_reg);
	else
		return __read_sysreg_by_encoding(entry->sys_reg);
}

static bool
has_user_cpuid_feature(const struct arm64_cpu_capabilities *entry, int scope)
{
	int mask;
	struct arm64_ftr_reg *regp;
	u64 val = read_scoped_sysreg(entry, scope);

	regp = get_arm64_ftr_reg(entry->sys_reg);
	if (!regp)
		return false;

	mask = cpuid_feature_extract_unsigned_field_width(regp->user_mask,
							  entry->field_pos,
							  entry->field_width);
	if (!mask)
		return false;

	return feature_matches(val, entry);
}

static bool
has_cpuid_feature(const struct arm64_cpu_capabilities *entry, int scope)
{
	u64 val = read_scoped_sysreg(entry, scope);
	return feature_matches(val, entry);
}

const struct cpumask *system_32bit_el0_cpumask(void)
{
	if (!system_supports_32bit_el0())
		return cpu_none_mask;

	if (static_branch_unlikely(&arm64_mismatched_32bit_el0))
		return cpu_32bit_el0_mask;

	return cpu_possible_mask;
}

static int __init parse_32bit_el0_param(char *str)
{
	allow_mismatched_32bit_el0 = true;
	return 0;
}
early_param("allow_mismatched_32bit_el0", parse_32bit_el0_param);

static ssize_t aarch32_el0_show(struct device *dev,
				struct device_attribute *attr, char *buf)
{
	const struct cpumask *mask = system_32bit_el0_cpumask();

	return sysfs_emit(buf, "%*pbl\n", cpumask_pr_args(mask));
}
static const DEVICE_ATTR_RO(aarch32_el0);

static int __init aarch32_el0_sysfs_init(void)
{
	struct device *dev_root;
	int ret = 0;

	if (!allow_mismatched_32bit_el0)
		return 0;

	dev_root = bus_get_dev_root(&cpu_subsys);
	if (dev_root) {
		ret = device_create_file(dev_root, &dev_attr_aarch32_el0);
		put_device(dev_root);
	}
	return ret;
}
device_initcall(aarch32_el0_sysfs_init);

static bool has_32bit_el0(const struct arm64_cpu_capabilities *entry, int scope)
{
	if (!has_cpuid_feature(entry, scope))
		return allow_mismatched_32bit_el0;

	if (scope == SCOPE_SYSTEM)
		pr_info("detected: 32-bit EL0 Support\n");

	return true;
}

static bool has_useable_gicv3_cpuif(const struct arm64_cpu_capabilities *entry, int scope)
{
	bool has_sre;

	if (!has_cpuid_feature(entry, scope))
		return false;

	has_sre = gic_enable_sre();
	if (!has_sre)
		pr_warn_once("%s present but disabled by higher exception level\n",
			     entry->desc);

	return has_sre;
}

static bool has_cache_idc(const struct arm64_cpu_capabilities *entry,
			  int scope)
{
	u64 ctr;

	if (scope == SCOPE_SYSTEM)
		ctr = arm64_ftr_reg_ctrel0.sys_val;
	else
		ctr = read_cpuid_effective_cachetype();

	return ctr & BIT(CTR_EL0_IDC_SHIFT);
}

static void cpu_emulate_effective_ctr(const struct arm64_cpu_capabilities *__unused)
{
	/*
	 * If the CPU exposes raw CTR_EL0.IDC = 0, while effectively
	 * CTR_EL0.IDC = 1 (from CLIDR values), we need to trap accesses
	 * to the CTR_EL0 on this CPU and emulate it with the real/safe
	 * value.
	 */
	if (!(read_cpuid_cachetype() & BIT(CTR_EL0_IDC_SHIFT)))
		sysreg_clear_set(sctlr_el1, SCTLR_EL1_UCT, 0);
}

static bool has_cache_dic(const struct arm64_cpu_capabilities *entry,
			  int scope)
{
	u64 ctr;

	if (scope == SCOPE_SYSTEM)
		ctr = arm64_ftr_reg_ctrel0.sys_val;
	else
		ctr = read_cpuid_cachetype();

	return ctr & BIT(CTR_EL0_DIC_SHIFT);
}

static bool __maybe_unused
has_useable_cnp(const struct arm64_cpu_capabilities *entry, int scope)
{
	/*
	 * Kdump isn't guaranteed to power-off all secondary CPUs, CNP
	 * may share TLB entries with a CPU stuck in the crashed
	 * kernel.
	 */
	if (is_kdump_kernel())
		return false;

	if (cpus_have_cap(ARM64_WORKAROUND_NVIDIA_CARMEL_CNP))
		return false;

	return has_cpuid_feature(entry, scope);
}

static bool __meltdown_safe = true;
static int __kpti_forced; /* 0: not forced, >0: forced on, <0: forced off */

static bool unmap_kernel_at_el0(const struct arm64_cpu_capabilities *entry,
				int scope)
{
	/* List of CPUs that are not vulnerable and don't need KPTI */
	static const struct midr_range kpti_safe_list[] = {
		MIDR_ALL_VERSIONS(MIDR_CAVIUM_THUNDERX2),
		MIDR_ALL_VERSIONS(MIDR_BRCM_VULCAN),
		MIDR_ALL_VERSIONS(MIDR_BRAHMA_B53),
		MIDR_ALL_VERSIONS(MIDR_CORTEX_A35),
		MIDR_ALL_VERSIONS(MIDR_CORTEX_A53),
		MIDR_ALL_VERSIONS(MIDR_CORTEX_A55),
		MIDR_ALL_VERSIONS(MIDR_CORTEX_A510),
		MIDR_ALL_VERSIONS(MIDR_CORTEX_A520),
		MIDR_ALL_VERSIONS(MIDR_CORTEX_A715),
		MIDR_ALL_VERSIONS(MIDR_HISI_TSV110),
		MIDR_ALL_VERSIONS(MIDR_NVIDIA_CARMEL),
		MIDR_ALL_VERSIONS(MIDR_QCOM_KRYO),
		MIDR_ALL_VERSIONS(MIDR_QCOM_KRYO_2XX_SILVER),
		MIDR_ALL_VERSIONS(MIDR_QCOM_KRYO_3XX_SILVER),
		MIDR_ALL_VERSIONS(MIDR_QCOM_KRYO_4XX_SILVER),
		{ /* sentinel */ }
	};
	char const *str = "kpti command line option";
	bool meltdown_safe;

	meltdown_safe = is_midr_in_range_list(read_cpuid_id(), kpti_safe_list);

	/* Defer to CPU feature registers */
	if (has_cpuid_feature(entry, scope))
		meltdown_safe = true;

	if (!meltdown_safe)
		__meltdown_safe = false;

	/*
	 * For reasons that aren't entirely clear, enabling KPTI on Cavium
	 * ThunderX leads to apparent I-cache corruption of kernel text, which
	 * ends as well as you might imagine. Don't even try. We cannot rely
	 * on the cpus_have_*cap() helpers here to detect the erratum
	 * because cpucap detection order may change. However, since we know
	 * affected CPUs are always in a homogeneous configuration, it is
	 * safe to rely on this_cpu_has_cap() here.
	 */
	if (this_cpu_has_cap(ARM64_WORKAROUND_CAVIUM_27456)) {
		str = "ARM64_WORKAROUND_CAVIUM_27456";
		__kpti_forced = -1;
	}

	/* Useful for KASLR robustness */
	if (kaslr_enabled() && kaslr_requires_kpti()) {
		if (!__kpti_forced) {
			str = "KASLR";
			__kpti_forced = 1;
		}
	}

	if (cpu_mitigations_off() && !__kpti_forced) {
		str = "mitigations=off";
		__kpti_forced = -1;
	}

	if (!IS_ENABLED(CONFIG_UNMAP_KERNEL_AT_EL0)) {
		pr_info_once("kernel page table isolation disabled by kernel configuration\n");
		return false;
	}

	/* Forced? */
	if (__kpti_forced) {
		pr_info_once("kernel page table isolation forced %s by %s\n",
			     __kpti_forced > 0 ? "ON" : "OFF", str);
		return __kpti_forced > 0;
	}

	return !meltdown_safe;
}

static bool has_nv1(const struct arm64_cpu_capabilities *entry, int scope)
{
	/*
	 * Although the Apple M2 family appears to be capable of NV1, the
	 * PTW barfs on the nVHE EL2 S1 page table format. Pretend
	 * that it doesn't support NV1 at all.
	 */
	static const struct midr_range nv1_ni_list[] = {
		MIDR_ALL_VERSIONS(MIDR_APPLE_M2_BLIZZARD),
		MIDR_ALL_VERSIONS(MIDR_APPLE_M2_AVALANCHE),
		MIDR_ALL_VERSIONS(MIDR_APPLE_M2_BLIZZARD_PRO),
		MIDR_ALL_VERSIONS(MIDR_APPLE_M2_AVALANCHE_PRO),
		MIDR_ALL_VERSIONS(MIDR_APPLE_M2_BLIZZARD_MAX),
		MIDR_ALL_VERSIONS(MIDR_APPLE_M2_AVALANCHE_MAX),
		{}
	};

	return (__system_matches_cap(ARM64_HAS_NESTED_VIRT) &&
		!(has_cpuid_feature(entry, scope) ||
		  is_midr_in_range_list(read_cpuid_id(), nv1_ni_list)));
}

#if defined(ID_AA64MMFR0_EL1_TGRAN_LPA2) && defined(ID_AA64MMFR0_EL1_TGRAN_2_SUPPORTED_LPA2)
static bool has_lpa2_at_stage1(u64 mmfr0)
{
	unsigned int tgran;

	tgran = cpuid_feature_extract_unsigned_field(mmfr0,
					ID_AA64MMFR0_EL1_TGRAN_SHIFT);
	return tgran == ID_AA64MMFR0_EL1_TGRAN_LPA2;
}

static bool has_lpa2_at_stage2(u64 mmfr0)
{
	unsigned int tgran;

	tgran = cpuid_feature_extract_unsigned_field(mmfr0,
					ID_AA64MMFR0_EL1_TGRAN_2_SHIFT);
	return tgran == ID_AA64MMFR0_EL1_TGRAN_2_SUPPORTED_LPA2;
}

static bool has_lpa2(const struct arm64_cpu_capabilities *entry, int scope)
{
	u64 mmfr0;

	mmfr0 = read_sanitised_ftr_reg(SYS_ID_AA64MMFR0_EL1);
	return has_lpa2_at_stage1(mmfr0) && has_lpa2_at_stage2(mmfr0);
}
#else
static bool has_lpa2(const struct arm64_cpu_capabilities *entry, int scope)
{
	return false;
}
#endif

#ifdef CONFIG_UNMAP_KERNEL_AT_EL0
#define KPTI_NG_TEMP_VA		(-(1UL << PMD_SHIFT))

extern
void create_kpti_ng_temp_pgd(pgd_t *pgdir, phys_addr_t phys, unsigned long virt,
			     phys_addr_t size, pgprot_t prot,
			     phys_addr_t (*pgtable_alloc)(int), int flags);

static phys_addr_t __initdata kpti_ng_temp_alloc;

static phys_addr_t __init kpti_ng_pgd_alloc(int shift)
{
	kpti_ng_temp_alloc -= PAGE_SIZE;
	return kpti_ng_temp_alloc;
}

static int __init __kpti_install_ng_mappings(void *__unused)
{
	typedef void (kpti_remap_fn)(int, int, phys_addr_t, unsigned long);
	extern kpti_remap_fn idmap_kpti_install_ng_mappings;
	kpti_remap_fn *remap_fn;

	int cpu = smp_processor_id();
	int levels = CONFIG_PGTABLE_LEVELS;
	int order = order_base_2(levels);
	u64 kpti_ng_temp_pgd_pa = 0;
	pgd_t *kpti_ng_temp_pgd;
	u64 alloc = 0;

	if (levels == 5 && !pgtable_l5_enabled())
		levels = 4;
	else if (levels == 4 && !pgtable_l4_enabled())
		levels = 3;

	remap_fn = (void *)__pa_symbol(idmap_kpti_install_ng_mappings);

	if (!cpu) {
		alloc = __get_free_pages(GFP_ATOMIC | __GFP_ZERO, order);
		kpti_ng_temp_pgd = (pgd_t *)(alloc + (levels - 1) * PAGE_SIZE);
		kpti_ng_temp_alloc = kpti_ng_temp_pgd_pa = __pa(kpti_ng_temp_pgd);

		//
		// Create a minimal page table hierarchy that permits us to map
		// the swapper page tables temporarily as we traverse them.
		//
		// The physical pages are laid out as follows:
		//
		// +--------+-/-------+-/-------+-/-------+-/-------+
		// :  PTE[] : | PMD[] : | PUD[] : | P4D[] : | PGD[] :
		// +--------+-\-------+-\-------+-\-------+-\-------+
		//      ^
		// The first page is mapped into this hierarchy at a PMD_SHIFT
		// aligned virtual address, so that we can manipulate the PTE
		// level entries while the mapping is active. The first entry
		// covers the PTE[] page itself, the remaining entries are free
		// to be used as a ad-hoc fixmap.
		//
		create_kpti_ng_temp_pgd(kpti_ng_temp_pgd, __pa(alloc),
					KPTI_NG_TEMP_VA, PAGE_SIZE, PAGE_KERNEL,
					kpti_ng_pgd_alloc, 0);
	}

	cpu_install_idmap();
	remap_fn(cpu, num_online_cpus(), kpti_ng_temp_pgd_pa, KPTI_NG_TEMP_VA);
	cpu_uninstall_idmap();

	if (!cpu) {
		free_pages(alloc, order);
		arm64_use_ng_mappings = true;
	}

	return 0;
}

static void __init kpti_install_ng_mappings(void)
{
	/* Check whether KPTI is going to be used */
	if (!arm64_kernel_unmapped_at_el0())
		return;

	/*
	 * We don't need to rewrite the page-tables if either we've done
	 * it already or we have KASLR enabled and therefore have not
	 * created any global mappings at all.
	 */
	if (arm64_use_ng_mappings)
		return;

	stop_machine(__kpti_install_ng_mappings, NULL, cpu_online_mask);
}

#else
static inline void kpti_install_ng_mappings(void)
{
}
#endif	/* CONFIG_UNMAP_KERNEL_AT_EL0 */

static void cpu_enable_kpti(struct arm64_cpu_capabilities const *cap)
{
	if (__this_cpu_read(this_cpu_vector) == vectors) {
		const char *v = arm64_get_bp_hardening_vector(EL1_VECTOR_KPTI);

		__this_cpu_write(this_cpu_vector, v);
	}

}

static int __init parse_kpti(char *str)
{
	bool enabled;
	int ret = kstrtobool(str, &enabled);

	if (ret)
		return ret;

	__kpti_forced = enabled ? 1 : -1;
	return 0;
}
early_param("kpti", parse_kpti);

#ifdef CONFIG_ARM64_HW_AFDBM
static struct cpumask dbm_cpus __read_mostly;

static inline void __cpu_enable_hw_dbm(void)
{
	u64 tcr = read_sysreg(tcr_el1) | TCR_HD;

	write_sysreg(tcr, tcr_el1);
	isb();
	local_flush_tlb_all();
}

static bool cpu_has_broken_dbm(void)
{
	/* List of CPUs which have broken DBM support. */
	static const struct midr_range cpus[] = {
#ifdef CONFIG_ARM64_ERRATUM_1024718
		MIDR_ALL_VERSIONS(MIDR_CORTEX_A55),
		/* Kryo4xx Silver (rdpe => r1p0) */
		MIDR_REV(MIDR_QCOM_KRYO_4XX_SILVER, 0xd, 0xe),
#endif
#ifdef CONFIG_ARM64_ERRATUM_2051678
		MIDR_REV_RANGE(MIDR_CORTEX_A510, 0, 0, 2),
#endif
		{},
	};

	return is_midr_in_range_list(read_cpuid_id(), cpus);
}

static bool cpu_can_use_dbm(const struct arm64_cpu_capabilities *cap)
{
	return has_cpuid_feature(cap, SCOPE_LOCAL_CPU) &&
	       !cpu_has_broken_dbm();
}

static void cpu_enable_hw_dbm(struct arm64_cpu_capabilities const *cap)
{
	if (cpu_can_use_dbm(cap)) {
		__cpu_enable_hw_dbm();
		cpumask_set_cpu(smp_processor_id(), &dbm_cpus);
	}
}

static bool has_hw_dbm(const struct arm64_cpu_capabilities *cap,
		       int __unused)
{
	/*
	 * DBM is a non-conflicting feature. i.e, the kernel can safely
	 * run a mix of CPUs with and without the feature. So, we
	 * unconditionally enable the capability to allow any late CPU
	 * to use the feature. We only enable the control bits on the
	 * CPU, if it is supported.
	 */

	return true;
}

#endif

#ifdef CONFIG_ARM64_AMU_EXTN

/*
 * The "amu_cpus" cpumask only signals that the CPU implementation for the
 * flagged CPUs supports the Activity Monitors Unit (AMU) but does not provide
 * information regarding all the events that it supports. When a CPU bit is
 * set in the cpumask, the user of this feature can only rely on the presence
 * of the 4 fixed counters for that CPU. But this does not guarantee that the
 * counters are enabled or access to these counters is enabled by code
 * executed at higher exception levels (firmware).
 */
static struct cpumask amu_cpus __read_mostly;

bool cpu_has_amu_feat(int cpu)
{
	return cpumask_test_cpu(cpu, &amu_cpus);
}

int get_cpu_with_amu_feat(void)
{
	return cpumask_any(&amu_cpus);
}

static void cpu_amu_enable(struct arm64_cpu_capabilities const *cap)
{
	if (has_cpuid_feature(cap, SCOPE_LOCAL_CPU)) {
		cpumask_set_cpu(smp_processor_id(), &amu_cpus);

		/* 0 reference values signal broken/disabled counters */
		if (!this_cpu_has_cap(ARM64_WORKAROUND_2457168))
			update_freq_counters_refs();
	}
}

static bool has_amu(const struct arm64_cpu_capabilities *cap,
		    int __unused)
{
	/*
	 * The AMU extension is a non-conflicting feature: the kernel can
	 * safely run a mix of CPUs with and without support for the
	 * activity monitors extension. Therefore, unconditionally enable
	 * the capability to allow any late CPU to use the feature.
	 *
	 * With this feature unconditionally enabled, the cpu_enable()
	 * function will be called for all CPUs at boot time,
	 * including secondary and hotplugged CPUs, marking this feature as
	 * present on that respective CPU. The enable function will also
	 * print a detection message.
	 */

	return true;
}
#else
int get_cpu_with_amu_feat(void)
{
	return nr_cpu_ids;
}
#endif

static bool runs_at_el2(const struct arm64_cpu_capabilities *entry, int __unused)
{
	return is_kernel_in_hyp_mode();
}

static void cpu_copy_el2regs(const struct arm64_cpu_capabilities *__unused)
{
	/*
	 * Copy register values that aren't redirected by hardware.
	 *
	 * Before code patching, we only set tpidr_el1, all CPUs need to copy
	 * this value to tpidr_el2 before we patch the code. Once we've done
	 * that, freshly-onlined CPUs will set tpidr_el2, so we don't need to
	 * do anything here.
	 */
	if (!alternative_is_applied(ARM64_HAS_VIRT_HOST_EXTN))
		write_sysreg(read_sysreg(tpidr_el1), tpidr_el2);
}

static bool has_nested_virt_support(const struct arm64_cpu_capabilities *cap,
				    int scope)
{
	if (kvm_get_mode() != KVM_MODE_NV)
		return false;

	if (!has_cpuid_feature(cap, scope)) {
		pr_warn("unavailable: %s\n", cap->desc);
		return false;
	}

	return true;
}

static bool hvhe_possible(const struct arm64_cpu_capabilities *entry,
			  int __unused)
{
	return arm64_test_sw_feature_override(ARM64_SW_FEATURE_OVERRIDE_HVHE);
}

#ifdef CONFIG_ARM64_PAN
static void cpu_enable_pan(const struct arm64_cpu_capabilities *__unused)
{
	/*
	 * We modify PSTATE. This won't work from irq context as the PSTATE
	 * is discarded once we return from the exception.
	 */
	WARN_ON_ONCE(in_interrupt());

	sysreg_clear_set(sctlr_el1, SCTLR_EL1_SPAN, 0);
	set_pstate_pan(1);
}
#endif /* CONFIG_ARM64_PAN */

#ifdef CONFIG_ARM64_RAS_EXTN
static void cpu_clear_disr(const struct arm64_cpu_capabilities *__unused)
{
	/* Firmware may have left a deferred SError in this register. */
	write_sysreg_s(0, SYS_DISR_EL1);
}
#endif /* CONFIG_ARM64_RAS_EXTN */

#ifdef CONFIG_ARM64_PTR_AUTH
static bool has_address_auth_cpucap(const struct arm64_cpu_capabilities *entry, int scope)
{
	int boot_val, sec_val;

	/* We don't expect to be called with SCOPE_SYSTEM */
	WARN_ON(scope == SCOPE_SYSTEM);
	/*
	 * The ptr-auth feature levels are not intercompatible with lower
	 * levels. Hence we must match ptr-auth feature level of the secondary
	 * CPUs with that of the boot CPU. The level of boot cpu is fetched
	 * from the sanitised register whereas direct register read is done for
	 * the secondary CPUs.
	 * The sanitised feature state is guaranteed to match that of the
	 * boot CPU as a mismatched secondary CPU is parked before it gets
	 * a chance to update the state, with the capability.
	 */
	boot_val = cpuid_feature_extract_field(read_sanitised_ftr_reg(entry->sys_reg),
					       entry->field_pos, entry->sign);
	if (scope & SCOPE_BOOT_CPU)
		return boot_val >= entry->min_field_value;
	/* Now check for the secondary CPUs with SCOPE_LOCAL_CPU scope */
	sec_val = cpuid_feature_extract_field(__read_sysreg_by_encoding(entry->sys_reg),
					      entry->field_pos, entry->sign);
	return (sec_val >= entry->min_field_value) && (sec_val == boot_val);
}

static bool has_address_auth_metacap(const struct arm64_cpu_capabilities *entry,
				     int scope)
{
	bool api = has_address_auth_cpucap(cpucap_ptrs[ARM64_HAS_ADDRESS_AUTH_IMP_DEF], scope);
	bool apa = has_address_auth_cpucap(cpucap_ptrs[ARM64_HAS_ADDRESS_AUTH_ARCH_QARMA5], scope);
	bool apa3 = has_address_auth_cpucap(cpucap_ptrs[ARM64_HAS_ADDRESS_AUTH_ARCH_QARMA3], scope);

	return apa || apa3 || api;
}

static bool has_generic_auth(const struct arm64_cpu_capabilities *entry,
			     int __unused)
{
	bool gpi = __system_matches_cap(ARM64_HAS_GENERIC_AUTH_IMP_DEF);
	bool gpa = __system_matches_cap(ARM64_HAS_GENERIC_AUTH_ARCH_QARMA5);
	bool gpa3 = __system_matches_cap(ARM64_HAS_GENERIC_AUTH_ARCH_QARMA3);

	return gpa || gpa3 || gpi;
}
#endif /* CONFIG_ARM64_PTR_AUTH */

#ifdef CONFIG_ARM64_E0PD
static void cpu_enable_e0pd(struct arm64_cpu_capabilities const *cap)
{
	if (this_cpu_has_cap(ARM64_HAS_E0PD))
		sysreg_clear_set(tcr_el1, 0, TCR_E0PD1);
}
#endif /* CONFIG_ARM64_E0PD */

#ifdef CONFIG_ARM64_PSEUDO_NMI
static bool can_use_gic_priorities(const struct arm64_cpu_capabilities *entry,
				   int scope)
{
	/*
	 * ARM64_HAS_GIC_CPUIF_SYSREGS has a lower index, and is a boot CPU
	 * feature, so will be detected earlier.
	 */
	BUILD_BUG_ON(ARM64_HAS_GIC_PRIO_MASKING <= ARM64_HAS_GIC_CPUIF_SYSREGS);
	if (!cpus_have_cap(ARM64_HAS_GIC_CPUIF_SYSREGS))
		return false;

	return enable_pseudo_nmi;
}

static bool has_gic_prio_relaxed_sync(const struct arm64_cpu_capabilities *entry,
				      int scope)
{
	/*
	 * If we're not using priority masking then we won't be poking PMR_EL1,
	 * and there's no need to relax synchronization of writes to it, and
	 * ICC_CTLR_EL1 might not be accessible and we must avoid reads from
	 * that.
	 *
	 * ARM64_HAS_GIC_PRIO_MASKING has a lower index, and is a boot CPU
	 * feature, so will be detected earlier.
	 */
	BUILD_BUG_ON(ARM64_HAS_GIC_PRIO_RELAXED_SYNC <= ARM64_HAS_GIC_PRIO_MASKING);
	if (!cpus_have_cap(ARM64_HAS_GIC_PRIO_MASKING))
		return false;

	/*
	 * When Priority Mask Hint Enable (PMHE) == 0b0, PMR is not used as a
	 * hint for interrupt distribution, a DSB is not necessary when
	 * unmasking IRQs via PMR, and we can relax the barrier to a NOP.
	 *
	 * Linux itself doesn't use 1:N distribution, so has no need to
	 * set PMHE. The only reason to have it set is if EL3 requires it
	 * (and we can't change it).
	 */
	return (gic_read_ctlr() & ICC_CTLR_EL1_PMHE_MASK) == 0;
}
#endif

#ifdef CONFIG_ARM64_BTI
static void bti_enable(const struct arm64_cpu_capabilities *__unused)
{
	/*
	 * Use of X16/X17 for tail-calls and reuse of trampolines for jumping to
	 * function entry points using BR is a requirement for
	 * marking binaries with GNU_PROPERTY_AARCH64_FEATURE_1_BTI.
	 * So, be strict and forbid other BRs using other registers to
	 * jump onto a PACIxSP instruction:
	 */
	sysreg_clear_set(sctlr_el1, 0, SCTLR_EL1_BT0 | SCTLR_EL1_BT1);
	isb();
}
#endif /* CONFIG_ARM64_BTI */

#ifdef CONFIG_ARM64_MTE
static void cpu_enable_mte(struct arm64_cpu_capabilities const *cap)
{
	sysreg_clear_set(sctlr_el1, 0, SCTLR_ELx_ATA | SCTLR_EL1_ATA0);

	mte_cpu_setup();

	/*
	 * Clear the tags in the zero page. This needs to be done via the
	 * linear map which has the Tagged attribute.
	 */
	if (try_page_mte_tagging(ZERO_PAGE(0))) {
		mte_clear_page_tags(lm_alias(empty_zero_page));
		set_page_mte_tagged(ZERO_PAGE(0));
	}

	kasan_init_hw_tags_cpu();
}
#endif /* CONFIG_ARM64_MTE */

2302 static void user_feature_fixup(void)             
2303 {                                                
2304         if (cpus_have_cap(ARM64_WORKAROUND_26    
2305                 struct arm64_ftr_reg *regp;      
2306                                                  
2307                 regp = get_arm64_ftr_reg(SYS_    
2308                 if (regp)                        
2309                         regp->user_mask &= ~I    
2310         }                                        
2311                                                  
2312         if (cpus_have_cap(ARM64_WORKAROUND_SP    
2313                 struct arm64_ftr_reg *regp;      
2314                                                  
2315                 regp = get_arm64_ftr_reg(SYS_    
2316                 if (regp)                        
2317                         regp->user_mask &= ~I    
2318         }                                        
2319 }                                                

static void elf_hwcap_fixup(void)
{
#ifdef CONFIG_COMPAT
	if (cpus_have_cap(ARM64_WORKAROUND_1742098))
		compat_elf_hwcap2 &= ~COMPAT_HWCAP2_AES;
#endif /* CONFIG_COMPAT */
}

#ifdef CONFIG_KVM
static bool is_kvm_protected_mode(const struct arm64_cpu_capabilities *entry, int __unused)
{
	return kvm_get_mode() == KVM_MODE_PROTECTED;
}
#endif /* CONFIG_KVM */

static void cpu_trap_el0_impdef(const struct arm64_cpu_capabilities *__unused)
{
	sysreg_clear_set(sctlr_el1, 0, SCTLR_EL1_TIDCP);
}

static void cpu_enable_dit(const struct arm64_cpu_capabilities *__unused)
{
	set_pstate_dit(1);
}

static void cpu_enable_mops(const struct arm64_cpu_capabilities *__unused)
{
	sysreg_clear_set(sctlr_el1, 0, SCTLR_EL1_MSCEn);
}

/* Internal helper functions to match cpu capability type */
static bool
cpucap_late_cpu_optional(const struct arm64_cpu_capabilities *cap)
{
	return !!(cap->type & ARM64_CPUCAP_OPTIONAL_FOR_LATE_CPU);
}

static bool
cpucap_late_cpu_permitted(const struct arm64_cpu_capabilities *cap)
{
	return !!(cap->type & ARM64_CPUCAP_PERMITTED_FOR_LATE_CPU);
}

static bool
cpucap_panic_on_conflict(const struct arm64_cpu_capabilities *cap)
{
	return !!(cap->type & ARM64_CPUCAP_PANIC_ON_CONFLICT);
}

static const struct arm64_cpu_capabilities arm64_features[] = {
	{
		.capability = ARM64_ALWAYS_BOOT,
		.type = ARM64_CPUCAP_BOOT_CPU_FEATURE,
		.matches = has_always,
	},
	{
		.capability = ARM64_ALWAYS_SYSTEM,
		.type = ARM64_CPUCAP_SYSTEM_FEATURE,
		.matches = has_always,
	},
	{
		.desc = "GIC system register CPU interface",
		.capability = ARM64_HAS_GIC_CPUIF_SYSREGS,
		.type = ARM64_CPUCAP_STRICT_BOOT_CPU_FEATURE,
		.matches = has_useable_gicv3_cpuif,
		ARM64_CPUID_FIELDS(ID_AA64PFR0_EL1, GIC, IMP)
	},
	{
		.desc = "Enhanced Counter Virtualization",
		.capability = ARM64_HAS_ECV,
		.type = ARM64_CPUCAP_SYSTEM_FEATURE,
		.matches = has_cpuid_feature,
		ARM64_CPUID_FIELDS(ID_AA64MMFR0_EL1, ECV, IMP)
	},
	{
		.desc = "Enhanced Counter Virtualization (CNTPOFF)",
		.capability = ARM64_HAS_ECV_CNTPOFF,
		.type = ARM64_CPUCAP_SYSTEM_FEATURE,
		.matches = has_cpuid_feature,
		ARM64_CPUID_FIELDS(ID_AA64MMFR0_EL1, ECV, CNTPOFF)
	},
#ifdef CONFIG_ARM64_PAN
	{
		.desc = "Privileged Access Never",
		.capability = ARM64_HAS_PAN,
		.type = ARM64_CPUCAP_SYSTEM_FEATURE,
		.matches = has_cpuid_feature,
		.cpu_enable = cpu_enable_pan,
		ARM64_CPUID_FIELDS(ID_AA64MMFR1_EL1, PAN, IMP)
	},
#endif /* CONFIG_ARM64_PAN */
#ifdef CONFIG_ARM64_EPAN
	{
		.desc = "Enhanced Privileged Access Never",
		.capability = ARM64_HAS_EPAN,
		.type = ARM64_CPUCAP_SYSTEM_FEATURE,
		.matches = has_cpuid_feature,
		ARM64_CPUID_FIELDS(ID_AA64MMFR1_EL1, PAN, PAN3)
	},
#endif /* CONFIG_ARM64_EPAN */
#ifdef CONFIG_ARM64_LSE_ATOMICS
	{
		.desc = "LSE atomic instructions",
		.capability = ARM64_HAS_LSE_ATOMICS,
		.type = ARM64_CPUCAP_SYSTEM_FEATURE,
		.matches = has_cpuid_feature,
		ARM64_CPUID_FIELDS(ID_AA64ISAR0_EL1, ATOMIC, IMP)
	},
#endif /* CONFIG_ARM64_LSE_ATOMICS */
	{
		.desc = "Virtualization Host Extensions",
		.capability = ARM64_HAS_VIRT_HOST_EXTN,
		.type = ARM64_CPUCAP_STRICT_BOOT_CPU_FEATURE,
		.matches = runs_at_el2,
		.cpu_enable = cpu_copy_el2regs,
	},
	{
		.desc = "Nested Virtualization Support",
		.capability = ARM64_HAS_NESTED_VIRT,
		.type = ARM64_CPUCAP_SYSTEM_FEATURE,
		.matches = has_nested_virt_support,
		ARM64_CPUID_FIELDS(ID_AA64MMFR2_EL1, NV, NV2)
	},
	{
		.capability = ARM64_HAS_32BIT_EL0_DO_NOT_USE,
		.type = ARM64_CPUCAP_SYSTEM_FEATURE,
		.matches = has_32bit_el0,
		ARM64_CPUID_FIELDS(ID_AA64PFR0_EL1, EL0, AARCH32)
	},
#ifdef CONFIG_KVM
	{
		.desc = "32-bit EL1 Support",
		.capability = ARM64_HAS_32BIT_EL1,
		.type = ARM64_CPUCAP_SYSTEM_FEATURE,
		.matches = has_cpuid_feature,
		ARM64_CPUID_FIELDS(ID_AA64PFR0_EL1, EL1, AARCH32)
	},
	{
		.desc = "Protected KVM",
		.capability = ARM64_KVM_PROTECTED_MODE,
		.type = ARM64_CPUCAP_SYSTEM_FEATURE,
		.matches = is_kvm_protected_mode,
	},
	{
		.desc = "HCRX_EL2 register",
		.capability = ARM64_HAS_HCX,
		.type = ARM64_CPUCAP_STRICT_BOOT_CPU_FEATURE,
		.matches = has_cpuid_feature,
		ARM64_CPUID_FIELDS(ID_AA64MMFR1_EL1, HCX, IMP)
	},
#endif
	{
		.desc = "Kernel page table isolation (KPTI)",
		.capability = ARM64_UNMAP_KERNEL_AT_EL0,
		.type = ARM64_CPUCAP_BOOT_RESTRICTED_CPU_LOCAL_FEATURE,
		.cpu_enable = cpu_enable_kpti,
		.matches = unmap_kernel_at_el0,
		/*
		 * The ID feature fields below are used to indicate that
		 * the CPU doesn't need KPTI. See unmap_kernel_at_el0 for
		 * more details.
		 */
		ARM64_CPUID_FIELDS(ID_AA64PFR0_EL1, CSV3, IMP)
	},
	{
		.capability = ARM64_HAS_FPSIMD,
		.type = ARM64_CPUCAP_SYSTEM_FEATURE,
		.matches = has_cpuid_feature,
		.cpu_enable = cpu_enable_fpsimd,
		ARM64_CPUID_FIELDS(ID_AA64PFR0_EL1, FP, IMP)
	},
#ifdef CONFIG_ARM64_PMEM
	{
		.desc = "Data cache clean to Point of Persistence",
		.capability = ARM64_HAS_DCPOP,
		.type = ARM64_CPUCAP_SYSTEM_FEATURE,
		.matches = has_cpuid_feature,
		ARM64_CPUID_FIELDS(ID_AA64ISAR1_EL1, DPB, IMP)
	},
	{
		.desc = "Data cache clean to Point of Deep Persistence",
		.capability = ARM64_HAS_DCPODP,
		.type = ARM64_CPUCAP_SYSTEM_FEATURE,
		.matches = has_cpuid_feature,
		ARM64_CPUID_FIELDS(ID_AA64ISAR1_EL1, DPB, DPB2)
	},
#endif
#ifdef CONFIG_ARM64_SVE
	{
		.desc = "Scalable Vector Extension",
		.type = ARM64_CPUCAP_SYSTEM_FEATURE,
		.capability = ARM64_SVE,
		.cpu_enable = cpu_enable_sve,
		.matches = has_cpuid_feature,
		ARM64_CPUID_FIELDS(ID_AA64PFR0_EL1, SVE, IMP)
	},
#endif /* CONFIG_ARM64_SVE */
#ifdef CONFIG_ARM64_RAS_EXTN
	{
		.desc = "RAS Extension Support",
		.capability = ARM64_HAS_RAS_EXTN,
		.type = ARM64_CPUCAP_SYSTEM_FEATURE,
		.matches = has_cpuid_feature,
		.cpu_enable = cpu_clear_disr,
		ARM64_CPUID_FIELDS(ID_AA64PFR0_EL1, RAS, IMP)
	},
#endif /* CONFIG_ARM64_RAS_EXTN */
#ifdef CONFIG_ARM64_AMU_EXTN
	{
		.desc = "Activity Monitors Unit (AMU)",
		.capability = ARM64_HAS_AMU_EXTN,
		.type = ARM64_CPUCAP_WEAK_LOCAL_CPU_FEATURE,
		.matches = has_amu,
		.cpu_enable = cpu_amu_enable,
		.cpus = &amu_cpus,
		ARM64_CPUID_FIELDS(ID_AA64PFR0_EL1, AMU, IMP)
	},
#endif /* CONFIG_ARM64_AMU_EXTN */
	{
		.desc = "Data cache clean to the PoU not required for I/D coherence",
		.capability = ARM64_HAS_CACHE_IDC,
		.type = ARM64_CPUCAP_SYSTEM_FEATURE,
		.matches = has_cache_idc,
		.cpu_enable = cpu_emulate_effective_ctr,
	},
	{
		.desc = "Instruction cache invalidation not required for I/D coherence",
		.capability = ARM64_HAS_CACHE_DIC,
		.type = ARM64_CPUCAP_SYSTEM_FEATURE,
		.matches = has_cache_dic,
	},
	{
		.desc = "Stage-2 Force Write-Back",
		.type = ARM64_CPUCAP_SYSTEM_FEATURE,
		.capability = ARM64_HAS_STAGE2_FWB,
		.matches = has_cpuid_feature,
		ARM64_CPUID_FIELDS(ID_AA64MMFR2_EL1, FWB, IMP)
	},
	{
		.desc = "ARMv8.4 Translation Table Level",
		.type = ARM64_CPUCAP_SYSTEM_FEATURE,
		.capability = ARM64_HAS_ARMv8_4_TTL,
		.matches = has_cpuid_feature,
		ARM64_CPUID_FIELDS(ID_AA64MMFR2_EL1, TTL, IMP)
	},
	{
		.desc = "TLB range maintenance instructions",
		.capability = ARM64_HAS_TLB_RANGE,
		.type = ARM64_CPUCAP_SYSTEM_FEATURE,
		.matches = has_cpuid_feature,
		ARM64_CPUID_FIELDS(ID_AA64ISAR0_EL1, TLB, RANGE)
	},
#ifdef CONFIG_ARM64_HW_AFDBM
	{
		.desc = "Hardware dirty bit management",
		.type = ARM64_CPUCAP_WEAK_LOCAL_CPU_FEATURE,
		.capability = ARM64_HW_DBM,
		.matches = has_hw_dbm,
		.cpu_enable = cpu_enable_hw_dbm,
		.cpus = &dbm_cpus,
		ARM64_CPUID_FIELDS(ID_AA64MMFR1_EL1, HAFDBS, DBM)
	},
#endif
	{
		.desc = "CRC32 instructions",
		.capability = ARM64_HAS_CRC32,
		.type = ARM64_CPUCAP_SYSTEM_FEATURE,
		.matches = has_cpuid_feature,
		ARM64_CPUID_FIELDS(ID_AA64ISAR0_EL1, CRC32, IMP)
	},
	{
		.desc = "Speculative Store Bypassing Safe (SSBS)",
		.capability = ARM64_SSBS,
		.type = ARM64_CPUCAP_SYSTEM_FEATURE,
		.matches = has_cpuid_feature,
		ARM64_CPUID_FIELDS(ID_AA64PFR1_EL1, SSBS, IMP)
	},
#ifdef CONFIG_ARM64_CNP
	{
		.desc = "Common not Private translations",
		.capability = ARM64_HAS_CNP,
		.type = ARM64_CPUCAP_SYSTEM_FEATURE,
		.matches = has_useable_cnp,
		.cpu_enable = cpu_enable_cnp,
		ARM64_CPUID_FIELDS(ID_AA64MMFR2_EL1, CnP, IMP)
	},
#endif
	{
		.desc = "Speculation barrier (SB)",
		.capability = ARM64_HAS_SB,
		.type = ARM64_CPUCAP_SYSTEM_FEATURE,
		.matches = has_cpuid_feature,
		ARM64_CPUID_FIELDS(ID_AA64ISAR1_EL1, SB, IMP)
	},
#ifdef CONFIG_ARM64_PTR_AUTH
	{
		.desc = "Address authentication (architected QARMA5 algorithm)",
		.capability = ARM64_HAS_ADDRESS_AUTH_ARCH_QARMA5,
		.type = ARM64_CPUCAP_BOOT_CPU_FEATURE,
		.matches = has_address_auth_cpucap,
		ARM64_CPUID_FIELDS(ID_AA64ISAR1_EL1, APA, PAuth)
	},
	{
		.desc = "Address authentication (architected QARMA3 algorithm)",
		.capability = ARM64_HAS_ADDRESS_AUTH_ARCH_QARMA3,
		.type = ARM64_CPUCAP_BOOT_CPU_FEATURE,
		.matches = has_address_auth_cpucap,
		ARM64_CPUID_FIELDS(ID_AA64ISAR2_EL1, APA3, PAuth)
	},
	{
		.desc = "Address authentication (IMP DEF algorithm)",
		.capability = ARM64_HAS_ADDRESS_AUTH_IMP_DEF,
		.type = ARM64_CPUCAP_BOOT_CPU_FEATURE,
		.matches = has_address_auth_cpucap,
		ARM64_CPUID_FIELDS(ID_AA64ISAR1_EL1, API, PAuth)
	},
	{
		.capability = ARM64_HAS_ADDRESS_AUTH,
		.type = ARM64_CPUCAP_BOOT_CPU_FEATURE,
		.matches = has_address_auth_metacap,
	},
	{
		.desc = "Generic authentication (architected QARMA5 algorithm)",
		.capability = ARM64_HAS_GENERIC_AUTH_ARCH_QARMA5,
		.type = ARM64_CPUCAP_SYSTEM_FEATURE,
		.matches = has_cpuid_feature,
		ARM64_CPUID_FIELDS(ID_AA64ISAR1_EL1, GPA, IMP)
	},
	{
		.desc = "Generic authentication (architected QARMA3 algorithm)",
		.capability = ARM64_HAS_GENERIC_AUTH_ARCH_QARMA3,
		.type = ARM64_CPUCAP_SYSTEM_FEATURE,
		.matches = has_cpuid_feature,
		ARM64_CPUID_FIELDS(ID_AA64ISAR2_EL1, GPA3, IMP)
	},
	{
		.desc = "Generic authentication (IMP DEF algorithm)",
		.capability = ARM64_HAS_GENERIC_AUTH_IMP_DEF,
		.type = ARM64_CPUCAP_SYSTEM_FEATURE,
		.matches = has_cpuid_feature,
		ARM64_CPUID_FIELDS(ID_AA64ISAR1_EL1, GPI, IMP)
	},
	{
		.capability = ARM64_HAS_GENERIC_AUTH,
		.type = ARM64_CPUCAP_SYSTEM_FEATURE,
		.matches = has_generic_auth,
	},
#endif /* CONFIG_ARM64_PTR_AUTH */
#ifdef CONFIG_ARM64_PSEUDO_NMI
	{
		/*
		 * Depends on having GICv3
		 */
		.desc = "IRQ priority masking",
		.capability = ARM64_HAS_GIC_PRIO_MASKING,
		.type = ARM64_CPUCAP_STRICT_BOOT_CPU_FEATURE,
		.matches = can_use_gic_priorities,
	},
	{
		/*
		 * Depends on ARM64_HAS_GIC_PRIO_MASKING
		 */
		.capability = ARM64_HAS_GIC_PRIO_RELAXED_SYNC,
		.type = ARM64_CPUCAP_STRICT_BOOT_CPU_FEATURE,
		.matches = has_gic_prio_relaxed_sync,
	},
#endif
#ifdef CONFIG_ARM64_E0PD
	{
		.desc = "E0PD",
		.capability = ARM64_HAS_E0PD,
		.type = ARM64_CPUCAP_SYSTEM_FEATURE,
		.cpu_enable = cpu_enable_e0pd,
		.matches = has_cpuid_feature,
		ARM64_CPUID_FIELDS(ID_AA64MMFR2_EL1, E0PD, IMP)
	},
#endif
	{
		.desc = "Random Number Generator",
		.capability = ARM64_HAS_RNG,
		.type = ARM64_CPUCAP_SYSTEM_FEATURE,
		.matches = has_cpuid_feature,
		ARM64_CPUID_FIELDS(ID_AA64ISAR0_EL1, RNDR, IMP)
	},
#ifdef CONFIG_ARM64_BTI
	{
		.desc = "Branch Target Identification",
		.capability = ARM64_BTI,
#ifdef CONFIG_ARM64_BTI_KERNEL
		.type = ARM64_CPUCAP_STRICT_BOOT_CPU_FEATURE,
#else
		.type = ARM64_CPUCAP_SYSTEM_FEATURE,
#endif
		.matches = has_cpuid_feature,
		.cpu_enable = bti_enable,
		ARM64_CPUID_FIELDS(ID_AA64PFR1_EL1, BT, IMP)
	},
#endif
#ifdef CONFIG_ARM64_MTE
	{
		.desc = "Memory Tagging Extension",
		.capability = ARM64_MTE,
		.type = ARM64_CPUCAP_STRICT_BOOT_CPU_FEATURE,
		.matches = has_cpuid_feature,
		.cpu_enable = cpu_enable_mte,
		ARM64_CPUID_FIELDS(ID_AA64PFR1_EL1, MTE, MTE2)
	},
	{
		.desc = "Asymmetric MTE Tag Check Fault",
		.capability = ARM64_MTE_ASYMM,
		.type = ARM64_CPUCAP_BOOT_CPU_FEATURE,
		.matches = has_cpuid_feature,
		ARM64_CPUID_FIELDS(ID_AA64PFR1_EL1, MTE, MTE3)
	},
#endif /* CONFIG_ARM64_MTE */
	{
		.desc = "RCpc load-acquire (LDAPR)",
		.capability = ARM64_HAS_LDAPR,
		.type = ARM64_CPUCAP_SYSTEM_FEATURE,
		.matches = has_cpuid_feature,
		ARM64_CPUID_FIELDS(ID_AA64ISAR1_EL1, LRCPC, IMP)
	},
	{
		.desc = "Fine Grained Traps",
		.type = ARM64_CPUCAP_SYSTEM_FEATURE,
		.capability = ARM64_HAS_FGT,
		.matches = has_cpuid_feature,
		ARM64_CPUID_FIELDS(ID_AA64MMFR0_EL1, FGT, IMP)
	},
#ifdef CONFIG_ARM64_SME
	{
		.desc = "Scalable Matrix Extension",
		.type = ARM64_CPUCAP_SYSTEM_FEATURE,
		.capability = ARM64_SME,
		.matches = has_cpuid_feature,
		.cpu_enable = cpu_enable_sme,
		ARM64_CPUID_FIELDS(ID_AA64PFR1_EL1, SME, IMP)
	},
	/* FA64 should be sorted after the base SME capability */
	{
		.desc = "FA64",
		.type = ARM64_CPUCAP_SYSTEM_FEATURE,
		.capability = ARM64_SME_FA64,
		.matches = has_cpuid_feature,
		.cpu_enable = cpu_enable_fa64,
		ARM64_CPUID_FIELDS(ID_AA64SMFR0_EL1, FA64, IMP)
	},
	{
		.desc = "SME2",
		.type = ARM64_CPUCAP_SYSTEM_FEATURE,
		.capability = ARM64_SME2,
		.matches = has_cpuid_feature,
		.cpu_enable = cpu_enable_sme2,
		ARM64_CPUID_FIELDS(ID_AA64PFR1_EL1, SME, SME2)
	},
#endif /* CONFIG_ARM64_SME */
	{
		.desc = "WFx with timeout",
		.capability = ARM64_HAS_WFXT,
		.type = ARM64_CPUCAP_SYSTEM_FEATURE,
		.matches = has_cpuid_feature,
		ARM64_CPUID_FIELDS(ID_AA64ISAR2_EL1, WFxT, IMP)
	},
	{
		.desc = "Trap EL0 IMPLEMENTATION DEFINED functionality",
		.capability = ARM64_HAS_TIDCP1,
		.type = ARM64_CPUCAP_SYSTEM_FEATURE,
		.matches = has_cpuid_feature,
		.cpu_enable = cpu_trap_el0_impdef,
		ARM64_CPUID_FIELDS(ID_AA64MMFR1_EL1, TIDCP1, IMP)
	},
	{
		.desc = "Data independent timing control (DIT)",
		.capability = ARM64_HAS_DIT,
		.type = ARM64_CPUCAP_SYSTEM_FEATURE,
		.matches = has_cpuid_feature,
		.cpu_enable = cpu_enable_dit,
		ARM64_CPUID_FIELDS(ID_AA64PFR0_EL1, DIT, IMP)
	},
	{
		.desc = "Memory Copy and Memory Set instructions",
		.capability = ARM64_HAS_MOPS,
		.type = ARM64_CPUCAP_SYSTEM_FEATURE,
		.matches = has_cpuid_feature,
		.cpu_enable = cpu_enable_mops,
		ARM64_CPUID_FIELDS(ID_AA64ISAR2_EL1, MOPS, IMP)
	},
	{
		.capability = ARM64_HAS_TCR2,
		.type = ARM64_CPUCAP_SYSTEM_FEATURE,
		.matches = has_cpuid_feature,
		ARM64_CPUID_FIELDS(ID_AA64MMFR3_EL1, TCRX, IMP)
	},
	{
		.desc = "Stage-1 Permission Indirection Extension (S1PIE)",
		.capability = ARM64_HAS_S1PIE,
		.type = ARM64_CPUCAP_BOOT_CPU_FEATURE,
		.matches = has_cpuid_feature,
		ARM64_CPUID_FIELDS(ID_AA64MMFR3_EL1, S1PIE, IMP)
	},
	{
		.desc = "VHE for hypervisor only",
		.capability = ARM64_KVM_HVHE,
		.type = ARM64_CPUCAP_SYSTEM_FEATURE,
		.matches = hvhe_possible,
	},
	{
		.desc = "Enhanced Virtualization Traps",
		.capability = ARM64_HAS_EVT,
		.type = ARM64_CPUCAP_SYSTEM_FEATURE,
		.matches = has_cpuid_feature,
		ARM64_CPUID_FIELDS(ID_AA64MMFR2_EL1, EVT, IMP)
	},
	{
		.desc = "52-bit Virtual Addressing for KVM (LPA2)",
		.capability = ARM64_HAS_LPA2,
		.type = ARM64_CPUCAP_SYSTEM_FEATURE,
		.matches = has_lpa2,
	},
	{
		.desc = "FPMR",
		.type = ARM64_CPUCAP_SYSTEM_FEATURE,
		.capability = ARM64_HAS_FPMR,
		.matches = has_cpuid_feature,
		.cpu_enable = cpu_enable_fpmr,
		ARM64_CPUID_FIELDS(ID_AA64PFR2_EL1, FPMR, IMP)
	},
#ifdef CONFIG_ARM64_VA_BITS_52
	{
		.capability = ARM64_HAS_VA52,
		.type = ARM64_CPUCAP_BOOT_CPU_FEATURE,
		.matches = has_cpuid_feature,
#ifdef CONFIG_ARM64_64K_PAGES
		.desc = "52-bit Virtual Addressing (LVA)",
		ARM64_CPUID_FIELDS(ID_AA64MMFR2_EL1, VARange, 52)
#else
		.desc = "52-bit Virtual Addressing (LPA2)",
#ifdef CONFIG_ARM64_4K_PAGES
		ARM64_CPUID_FIELDS(ID_AA64MMFR0_EL1, TGRAN4, 52_BIT)
#else
		ARM64_CPUID_FIELDS(ID_AA64MMFR0_EL1, TGRAN16, 52_BIT)
#endif
#endif
	},
#endif
	{
		.desc = "NV1",
		.capability = ARM64_HAS_HCR_NV1,
		.type = ARM64_CPUCAP_SYSTEM_FEATURE,
		.matches = has_nv1,
		ARM64_CPUID_FIELDS_NEG(ID_AA64MMFR4_EL1, E2H0, NI_NV1)
	},
	{},
};

#define HWCAP_CPUID_MATCH(reg, field, min_value)			\
		.matches = has_user_cpuid_feature,			\
		ARM64_CPUID_FIELDS(reg, field, min_value)

#define __HWCAP_CAP(name, cap_type, cap)				\
		.desc = name,						\
		.type = ARM64_CPUCAP_SYSTEM_FEATURE,			\
		.hwcap_type = cap_type,					\
		.hwcap = cap,						\

#define HWCAP_CAP(reg, field, min_value, cap_type, cap)			\
	{								\
		__HWCAP_CAP(#cap, cap_type, cap)			\
		HWCAP_CPUID_MATCH(reg, field, min_value)		\
	}

#define HWCAP_MULTI_CAP(list, cap_type, cap)				\
	{								\
		__HWCAP_CAP(#cap, cap_type, cap)			\
		.matches = cpucap_multi_entry_cap_matches,		\
		.match_list = list,					\
	}

#define HWCAP_CAP_MATCH(match, cap_type, cap)				\
	{								\
		__HWCAP_CAP(#cap, cap_type, cap)			\
		.matches = match,					\
	}
#ifdef CONFIG_ARM64_PTR_AUTH
static const struct arm64_cpu_capabilities ptr_auth_hwcap_addr_matches[] = {
	{
		HWCAP_CPUID_MATCH(ID_AA64ISAR1_EL1, APA, PAuth)
	},
	{
		HWCAP_CPUID_MATCH(ID_AA64ISAR2_EL1, APA3, PAuth)
	},
	{
		HWCAP_CPUID_MATCH(ID_AA64ISAR1_EL1, API, PAuth)
	},
	{},
};

static const struct arm64_cpu_capabilities ptr_auth_hwcap_gen_matches[] = {
	{
		HWCAP_CPUID_MATCH(ID_AA64ISAR1_EL1, GPA, IMP)
	},
	{
		HWCAP_CPUID_MATCH(ID_AA64ISAR2_EL1, GPA3, IMP)
	},
	{
		HWCAP_CPUID_MATCH(ID_AA64ISAR1_EL1, GPI, IMP)
	},
	{},
};
#endif
static const struct arm64_cpu_capabilities arm64_elf_hwcaps[] = {
	HWCAP_CAP(ID_AA64ISAR0_EL1, AES, PMULL, CAP_HWCAP, KERNEL_HWCAP_PMULL),
	HWCAP_CAP(ID_AA64ISAR0_EL1, AES, AES, CAP_HWCAP, KERNEL_HWCAP_AES),
	HWCAP_CAP(ID_AA64ISAR0_EL1, SHA1, IMP, CAP_HWCAP, KERNEL_HWCAP_SHA1),
	HWCAP_CAP(ID_AA64ISAR0_EL1, SHA2, SHA256, CAP_HWCAP, KERNEL_HWCAP_SHA2),
	HWCAP_CAP(ID_AA64ISAR0_EL1, SHA2, SHA512, CAP_HWCAP, KERNEL_HWCAP_SHA512),
	HWCAP_CAP(ID_AA64ISAR0_EL1, CRC32, IMP, CAP_HWCAP, KERNEL_HWCAP_CRC32),
	HWCAP_CAP(ID_AA64ISAR0_EL1, ATOMIC, IMP, CAP_HWCAP, KERNEL_HWCAP_ATOMICS),
	HWCAP_CAP(ID_AA64ISAR0_EL1, ATOMIC, FEAT_LSE128, CAP_HWCAP, KERNEL_HWCAP_LSE128),
	HWCAP_CAP(ID_AA64ISAR0_EL1, RDM, IMP, CAP_HWCAP, KERNEL_HWCAP_ASIMDRDM),
	HWCAP_CAP(ID_AA64ISAR0_EL1, SHA3, IMP, CAP_HWCAP, KERNEL_HWCAP_SHA3),
	HWCAP_CAP(ID_AA64ISAR0_EL1, SM3, IMP, CAP_HWCAP, KERNEL_HWCAP_SM3),
	HWCAP_CAP(ID_AA64ISAR0_EL1, SM4, IMP, CAP_HWCAP, KERNEL_HWCAP_SM4),
	HWCAP_CAP(ID_AA64ISAR0_EL1, DP, IMP, CAP_HWCAP, KERNEL_HWCAP_ASIMDDP),
	HWCAP_CAP(ID_AA64ISAR0_EL1, FHM, IMP, CAP_HWCAP, KERNEL_HWCAP_ASIMDFHM),
	HWCAP_CAP(ID_AA64ISAR0_EL1, TS, FLAGM, CAP_HWCAP, KERNEL_HWCAP_FLAGM),
	HWCAP_CAP(ID_AA64ISAR0_EL1, TS, FLAGM2, CAP_HWCAP, KERNEL_HWCAP_FLAGM2),
	HWCAP_CAP(ID_AA64ISAR0_EL1, RNDR, IMP, CAP_HWCAP, KERNEL_HWCAP_RNG),
	HWCAP_CAP(ID_AA64PFR0_EL1, FP, IMP, CAP_HWCAP, KERNEL_HWCAP_FP),
	HWCAP_CAP(ID_AA64PFR0_EL1, FP, FP16, CAP_HWCAP, KERNEL_HWCAP_FPHP),
	HWCAP_CAP(ID_AA64PFR0_EL1, AdvSIMD, IMP, CAP_HWCAP, KERNEL_HWCAP_ASIMD),
	HWCAP_CAP(ID_AA64PFR0_EL1, AdvSIMD, FP16, CAP_HWCAP, KERNEL_HWCAP_ASIMDHP),
	HWCAP_CAP(ID_AA64PFR0_EL1, DIT, IMP, CAP_HWCAP, KERNEL_HWCAP_DIT),
	HWCAP_CAP(ID_AA64PFR2_EL1, FPMR, IMP, CAP_HWCAP, KERNEL_HWCAP_FPMR),
	HWCAP_CAP(ID_AA64ISAR1_EL1, DPB, IMP, CAP_HWCAP, KERNEL_HWCAP_DCPOP),
	HWCAP_CAP(ID_AA64ISAR1_EL1, DPB, DPB2, CAP_HWCAP, KERNEL_HWCAP_DCPODP),
	HWCAP_CAP(ID_AA64ISAR1_EL1, JSCVT, IMP, CAP_HWCAP, KERNEL_HWCAP_JSCVT),
	HWCAP_CAP(ID_AA64ISAR1_EL1, FCMA, IMP, CAP_HWCAP, KERNEL_HWCAP_FCMA),
	HWCAP_CAP(ID_AA64ISAR1_EL1, LRCPC, IMP, CAP_HWCAP, KERNEL_HWCAP_LRCPC),
	HWCAP_CAP(ID_AA64ISAR1_EL1, LRCPC, LRCPC2, CAP_HWCAP, KERNEL_HWCAP_ILRCPC),
	HWCAP_CAP(ID_AA64ISAR1_EL1, LRCPC, LRCPC3, CAP_HWCAP, KERNEL_HWCAP_LRCPC3),
	HWCAP_CAP(ID_AA64ISAR1_EL1, FRINTTS, IMP, CAP_HWCAP, KERNEL_HWCAP_FRINT),
	HWCAP_CAP(ID_AA64ISAR1_EL1, SB, IMP, CAP_HWCAP, KERNEL_HWCAP_SB),
	HWCAP_CAP(ID_AA64ISAR1_EL1, BF16, IMP, CAP_HWCAP, KERNEL_HWCAP_BF16),
	HWCAP_CAP(ID_AA64ISAR1_EL1, BF16, EBF16, CAP_HWCAP, KERNEL_HWCAP_EBF16),
	HWCAP_CAP(ID_AA64ISAR1_EL1, DGH, IMP, CAP_HWCAP, KERNEL_HWCAP_DGH),
	HWCAP_CAP(ID_AA64ISAR1_EL1, I8MM, IMP, CAP_HWCAP, KERNEL_HWCAP_I8MM),
	HWCAP_CAP(ID_AA64ISAR2_EL1, LUT, IMP, CAP_HWCAP, KERNEL_HWCAP_LUT),
	HWCAP_CAP(ID_AA64ISAR3_EL1, FAMINMAX, IMP, CAP_HWCAP, KERNEL_HWCAP_FAMINMAX),
	HWCAP_CAP(ID_AA64MMFR2_EL1, AT, IMP, CAP_HWCAP, KERNEL_HWCAP_USCAT),
#ifdef CONFIG_ARM64_SVE
	HWCAP_CAP(ID_AA64PFR0_EL1, SVE, IMP, CAP_HWCAP, KERNEL_HWCAP_SVE),
	HWCAP_CAP(ID_AA64ZFR0_EL1, SVEver, SVE2p1, CAP_HWCAP, KERNEL_HWCAP_SVE2P1),
	HWCAP_CAP(ID_AA64ZFR0_EL1, SVEver, SVE2, CAP_HWCAP, KERNEL_HWCAP_SVE2),
	HWCAP_CAP(ID_AA64ZFR0_EL1, AES, IMP, CAP_HWCAP, KERNEL_HWCAP_SVEAES),
	HWCAP_CAP(ID_AA64ZFR0_EL1, AES, PMULL128, CAP_HWCAP, KERNEL_HWCAP_SVEPMULL),
	HWCAP_CAP(ID_AA64ZFR0_EL1, BitPerm, IMP, CAP_HWCAP, KERNEL_HWCAP_SVEBITPERM),
	HWCAP_CAP(ID_AA64ZFR0_EL1, B16B16, IMP, CAP_HWCAP, KERNEL_HWCAP_SVE_B16B16),
	HWCAP_CAP(ID_AA64ZFR0_EL1, BF16, IMP, CAP_HWCAP, KERNEL_HWCAP_SVEBF16),
	HWCAP_CAP(ID_AA64ZFR0_EL1, BF16, EBF16, CAP_HWCAP, KERNEL_HWCAP_SVE_EBF16),
	HWCAP_CAP(ID_AA64ZFR0_EL1, SHA3, IMP, CAP_HWCAP, KERNEL_HWCAP_SVESHA3),
	HWCAP_CAP(ID_AA64ZFR0_EL1, SM4, IMP, CAP_HWCAP, KERNEL_HWCAP_SVESM4),
	HWCAP_CAP(ID_AA64ZFR0_EL1, I8MM, IMP, CAP_HWCAP, KERNEL_HWCAP_SVEI8MM),
	HWCAP_CAP(ID_AA64ZFR0_EL1, F32MM, IMP, CAP_HWCAP, KERNEL_HWCAP_SVEF32MM),
	HWCAP_CAP(ID_AA64ZFR0_EL1, F64MM, IMP, CAP_HWCAP, KERNEL_HWCAP_SVEF64MM),
#endif
	HWCAP_CAP(ID_AA64PFR1_EL1, SSBS, SSBS2, CAP_HWCAP, KERNEL_HWCAP_SSBS),
#ifdef CONFIG_ARM64_BTI
	HWCAP_CAP(ID_AA64PFR1_EL1, BT, IMP, CAP_HWCAP, KERNEL_HWCAP_BTI),
#endif
#ifdef CONFIG_ARM64_PTR_AUTH
	HWCAP_MULTI_CAP(ptr_auth_hwcap_addr_matches, CAP_HWCAP, KERNEL_HWCAP_PACA),
	HWCAP_MULTI_CAP(ptr_auth_hwcap_gen_matches, CAP_HWCAP, KERNEL_HWCAP_PACG),
#endif
#ifdef CONFIG_ARM64_MTE
	HWCAP_CAP(ID_AA64PFR1_EL1, MTE, MTE2, CAP_HWCAP, KERNEL_HWCAP_MTE),
	HWCAP_CAP(ID_AA64PFR1_EL1, MTE, MTE3, CAP_HWCAP, KERNEL_HWCAP_MTE3),
#endif /* CONFIG_ARM64_MTE */
	HWCAP_CAP(ID_AA64MMFR0_EL1, ECV, IMP, CAP_HWCAP, KERNEL_HWCAP_ECV),
	HWCAP_CAP(ID_AA64MMFR1_EL1, AFP, IMP, CAP_HWCAP, KERNEL_HWCAP_AFP),
	HWCAP_CAP(ID_AA64ISAR2_EL1, CSSC, IMP, CAP_HWCAP, KERNEL_HWCAP_CSSC),
	HWCAP_CAP(ID_AA64ISAR2_EL1, RPRFM, IMP, CAP_HWCAP, KERNEL_HWCAP_RPRFM),
	HWCAP_CAP(ID_AA64ISAR2_EL1, RPRES, IMP, CAP_HWCAP, KERNEL_HWCAP_RPRES),
	HWCAP_CAP(ID_AA64ISAR2_EL1, WFxT, IMP, CAP_HWCAP, KERNEL_HWCAP_WFXT),
	HWCAP_CAP(ID_AA64ISAR2_EL1, MOPS, IMP, CAP_HWCAP, KERNEL_HWCAP_MOPS),
	HWCAP_CAP(ID_AA64ISAR2_EL1, BC, IMP, CAP_HWCAP, KERNEL_HWCAP_HBC),
#ifdef CONFIG_ARM64_SME
	HWCAP_CAP(ID_AA64PFR1_EL1, SME, IMP, CAP_HWCAP, KERNEL_HWCAP_SME),
	HWCAP_CAP(ID_AA64SMFR0_EL1, FA64, IMP, CAP_HWCAP, KERNEL_HWCAP_SME_FA64),
	HWCAP_CAP(ID_AA64SMFR0_EL1, LUTv2, IMP, CAP_HWCAP, KERNEL_HWCAP_SME_LUTV2),
	HWCAP_CAP(ID_AA64SMFR0_EL1, SMEver, SME2p1, CAP_HWCAP, KERNEL_HWCAP_SME2P1),
	HWCAP_CAP(ID_AA64SMFR0_EL1, SMEver, SME2, CAP_HWCAP, KERNEL_HWCAP_SME2),
	HWCAP_CAP(ID_AA64SMFR0_EL1, I16I64, IMP, CAP_HWCAP, KERNEL_HWCAP_SME_I16I64),
	HWCAP_CAP(ID_AA64SMFR0_EL1, F64F64, IMP, CAP_HWCAP, KERNEL_HWCAP_SME_F64F64),
	HWCAP_CAP(ID_AA64SMFR0_EL1, I16I32, IMP, CAP_HWCAP, KERNEL_HWCAP_SME_I16I32),
	HWCAP_CAP(ID_AA64SMFR0_EL1, B16B16, IMP, CAP_HWCAP, KERNEL_HWCAP_SME_B16B16),
	HWCAP_CAP(ID_AA64SMFR0_EL1, F16F16, IMP, CAP_HWCAP, KERNEL_HWCAP_SME_F16F16),
	HWCAP_CAP(ID_AA64SMFR0_EL1, F8F16, IMP, CAP_HWCAP, KERNEL_HWCAP_SME_F8F16),
	HWCAP_CAP(ID_AA64SMFR0_EL1, F8F32, IMP, CAP_HWCAP, KERNEL_HWCAP_SME_F8F32),
	HWCAP_CAP(ID_AA64SMFR0_EL1, I8I32, IMP, CAP_HWCAP, KERNEL_HWCAP_SME_I8I32),
	HWCAP_CAP(ID_AA64SMFR0_EL1, F16F32, IMP, CAP_HWCAP, KERNEL_HWCAP_SME_F16F32),
	HWCAP_CAP(ID_AA64SMFR0_EL1, B16F32, IMP, CAP_HWCAP, KERNEL_HWCAP_SME_B16F32),
	HWCAP_CAP(ID_AA64SMFR0_EL1, BI32I32, IMP, CAP_HWCAP, KERNEL_HWCAP_SME_BI32I32),
	HWCAP_CAP(ID_AA64SMFR0_EL1, F32F32, IMP, CAP_HWCAP, KERNEL_HWCAP_SME_F32F32),
	HWCAP_CAP(ID_AA64SMFR0_EL1, SF8FMA, IMP, CAP_HWCAP, KERNEL_HWCAP_SME_SF8FMA),
	HWCAP_CAP(ID_AA64SMFR0_EL1, SF8DP4, IMP, CAP_HWCAP, KERNEL_HWCAP_SME_SF8DP4),
	HWCAP_CAP(ID_AA64SMFR0_EL1, SF8DP2, IMP, CAP_HWCAP, KERNEL_HWCAP_SME_SF8DP2),
#endif /* CONFIG_ARM64_SME */
	HWCAP_CAP(ID_AA64FPFR0_EL1, F8CVT, IMP, CAP_HWCAP, KERNEL_HWCAP_F8CVT),
	HWCAP_CAP(ID_AA64FPFR0_EL1, F8FMA, IMP, CAP_HWCAP, KERNEL_HWCAP_F8FMA),
	HWCAP_CAP(ID_AA64FPFR0_EL1, F8DP4, IMP, CAP_HWCAP, KERNEL_HWCAP_F8DP4),
	HWCAP_CAP(ID_AA64FPFR0_EL1, F8DP2, IMP, CAP_HWCAP, KERNEL_HWCAP_F8DP2),
	HWCAP_CAP(ID_AA64FPFR0_EL1, F8E4M3, IMP, CAP_HWCAP, KERNEL_HWCAP_F8E4M3),
	HWCAP_CAP(ID_AA64FPFR0_EL1, F8E5M2, IMP, CAP_HWCAP, KERNEL_HWCAP_F8E5M2),
	{},
};

#ifdef CONFIG_COMPAT
static bool compat_has_neon(const struct arm64_cpu_capabilities *cap, int scope)
{
	/*
	 * Check that all of MVFR1_EL1.{SIMDSP, SIMDInt, SIMDLS} are available,
	 * in line with that of arm32 as in vfp_init(). We make sure that the
	 * check is future proof, by making sure all the fields are non-zero.
	 */
	u32 mvfr1;

	WARN_ON(scope == SCOPE_LOCAL_CPU && preemptible());
	if (scope == SCOPE_SYSTEM)
		mvfr1 = read_sanitised_ftr_reg(SYS_MVFR1_EL1);
	else
		mvfr1 = read_sysreg_s(SYS_MVFR1_EL1);

	return cpuid_feature_extract_unsigned_field(mvfr1, MVFR1_EL1_SIMDSP_SHIFT) &&
		cpuid_feature_extract_unsigned_field(mvfr1, MVFR1_EL1_SIMDInt_SHIFT) &&
		cpuid_feature_extract_unsigned_field(mvfr1, MVFR1_EL1_SIMDLS_SHIFT);
}
#endif

static const struct arm64_cpu_capabilities compat_elf_hwcaps[] = {
#ifdef CONFIG_COMPAT
	HWCAP_CAP_MATCH(compat_has_neon, CAP_COMPAT_HWCAP, COMPAT_HWCAP_NEON),
	HWCAP_CAP(MVFR1_EL1, SIMDFMAC, IMP, CAP_COMPAT_HWCAP, COMPAT_HWCAP_VFPv4),
	/* Arm v8 mandates MVFR0.FPDP == {0, 2}. So, piggy back on this for the presence of VFP support */
	HWCAP_CAP(MVFR0_EL1, FPDP, VFPv3, CAP_COMPAT_HWCAP, COMPAT_HWCAP_VFP),
	HWCAP_CAP(MVFR0_EL1, FPDP, VFPv3, CAP_COMPAT_HWCAP, COMPAT_HWCAP_VFPv3),
	HWCAP_CAP(MVFR1_EL1, FPHP, FP16, CAP_COMPAT_HWCAP, COMPAT_HWCAP_FPHP),
	HWCAP_CAP(MVFR1_EL1, SIMDHP, SIMDHP_FLOAT, CAP_COMPAT_HWCAP, COMPAT_HWCAP_ASIMDHP),
	HWCAP_CAP(ID_ISAR5_EL1, AES, VMULL, CAP_COMPAT_HWCAP2, COMPAT_HWCAP2_PMULL),
	HWCAP_CAP(ID_ISAR5_EL1, AES, IMP, CAP_COMPAT_HWCAP2, COMPAT_HWCAP2_AES),
	HWCAP_CAP(ID_ISAR5_EL1, SHA1, IMP, CAP_COMPAT_HWCAP2, COMPAT_HWCAP2_SHA1),
	HWCAP_CAP(ID_ISAR5_EL1, SHA2, IMP, CAP_COMPAT_HWCAP2, COMPAT_HWCAP2_SHA2),
	HWCAP_CAP(ID_ISAR5_EL1, CRC32, IMP, CAP_COMPAT_HWCAP2, COMPAT_HWCAP2_CRC32),
	HWCAP_CAP(ID_ISAR6_EL1, DP, IMP, CAP_COMPAT_HWCAP, COMPAT_HWCAP_ASIMDDP),
	HWCAP_CAP(ID_ISAR6_EL1, FHM, IMP, CAP_COMPAT_HWCAP, COMPAT_HWCAP_ASIMDFHM),
	HWCAP_CAP(ID_ISAR6_EL1, SB, IMP, CAP_COMPAT_HWCAP2, COMPAT_HWCAP2_SB),
	HWCAP_CAP(ID_ISAR6_EL1, BF16, IMP, CAP_COMPAT_HWCAP, COMPAT_HWCAP_ASIMDBF16),
	HWCAP_CAP(ID_ISAR6_EL1, I8MM, IMP, CAP_COMPAT_HWCAP, COMPAT_HWCAP_ASIMDI8MM),
	HWCAP_CAP(ID_PFR2_EL1, SSBS, IMP, CAP_COMPAT_HWCAP2, COMPAT_HWCAP2_SSBS),
#endif
	{},
};

static void cap_set_elf_hwcap(const struct arm64_cpu_capabilities *cap)
{
	switch (cap->hwcap_type) {
	case CAP_HWCAP:
		cpu_set_feature(cap->hwcap);
		break;
#ifdef CONFIG_COMPAT
	case CAP_COMPAT_HWCAP:
		compat_elf_hwcap |= (u32)cap->hwcap;
		break;
	case CAP_COMPAT_HWCAP2:
		compat_elf_hwcap2 |= (u32)cap->hwcap;
		break;
#endif
	default:
		WARN_ON(1);
		break;
	}
}

/* Check if we have a particular HWCAP enabled */
static bool cpus_have_elf_hwcap(const struct arm64_cpu_capabilities *cap)
{
	bool rc;

	switch (cap->hwcap_type) {
	case CAP_HWCAP:
		rc = cpu_have_feature(cap->hwcap);
		break;
#ifdef CONFIG_COMPAT
	case CAP_COMPAT_HWCAP:
		rc = (compat_elf_hwcap & (u32)cap->hwcap) != 0;
		break;
	case CAP_COMPAT_HWCAP2:
		rc = (compat_elf_hwcap2 & (u32)cap->hwcap) != 0;
		break;
#endif
	default:
		WARN_ON(1);
		rc = false;
	}

	return rc;
}

static void setup_elf_hwcaps(const struct arm64_cpu_capabilities *hwcaps)
{
	/* We support emulation of accesses to CPU ID feature registers */
	cpu_set_named_feature(CPUID);
	for (; hwcaps->matches; hwcaps++)
		if (hwcaps->matches(hwcaps, cpucap_default_scope(hwcaps)))
			cap_set_elf_hwcap(hwcaps);
}

static void update_cpu_capabilities(u16 scope_mask)
{
	int i;
	const struct arm64_cpu_capabilities *caps;

	scope_mask &= ARM64_CPUCAP_SCOPE_MASK;
	for (i = 0; i < ARM64_NCAPS; i++) {
		caps = cpucap_ptrs[i];
		if (!caps || !(caps->type & scope_mask) ||
		    cpus_have_cap(caps->capability) ||
		    !caps->matches(caps, cpucap_default_scope(caps)))
			continue;

		if (caps->desc && !caps->cpus)
			pr_info("detected: %s\n", caps->desc);

		__set_bit(caps->capability, system_cpucaps);

		if ((scope_mask & SCOPE_BOOT_CPU) && (caps->type & SCOPE_BOOT_CPU))
			set_bit(caps->capability, boot_cpucaps);
	}
}

/*
 * Enable all the available capabilities on this CPU. The capabilities
 * with BOOT_CPU scope are handled separately and hence skipped here.
 */
static int cpu_enable_non_boot_scope_capabilities(void *__unused)
{
	int i;
	u16 non_boot_scope = SCOPE_ALL & ~SCOPE_BOOT_CPU;

	for_each_available_cap(i) {
		const struct arm64_cpu_capabilities *cap = cpucap_ptrs[i];

		if (WARN_ON(!cap))
			continue;

		if (!(cap->type & non_boot_scope))
			continue;

		if (cap->cpu_enable)
			cap->cpu_enable(cap);
	}
	return 0;
}

/*
 * Run through the enabled capabilities and enable() them on all active
 * CPUs.
 */
static void __init enable_cpu_capabilities(u16 scope_mask)
{
	int i;
	const struct arm64_cpu_capabilities *caps;
	bool boot_scope;

	scope_mask &= ARM64_CPUCAP_SCOPE_MASK;
	boot_scope = !!(scope_mask & SCOPE_BOOT_CPU);

	for (i = 0; i < ARM64_NCAPS; i++) {
		caps = cpucap_ptrs[i];
		if (!caps || !(caps->type & scope_mask) ||
		    !cpus_have_cap(caps->capability))
			continue;

		if (boot_scope && caps->cpu_enable)
			/*
			 * Capabilities with SCOPE_BOOT_CPU scope are finalised
			 * before any secondary CPU boots. Thus, each secondary
			 * will enable the capability as appropriate via
			 * check_local_cpu_capabilities(). The only exception is
			 * the boot CPU, for which the capability must be
			 * enabled here. This approach avoids costly
			 * stop_machine() calls for this case.
			 */
			caps->cpu_enable(caps);
	}

	/*
	 * For all non-boot scope capabilities, use stop_machine()
	 * as it schedules the work allowing us to modify PSTATE,
	 * instead of on_each_cpu() which uses an IPI, giving us a
	 * PSTATE that disappears when we return.
	 */
	if (!boot_scope)
		stop_machine(cpu_enable_non_boot_scope_capabilities,
			     NULL, cpu_online_mask);
}

/*
 * Run through the list of capabilities to check for conflicts.
 * If the system has already detected a capability, take necessary
 * action on this CPU.
 */
static void verify_local_cpu_caps(u16 scope_mask)
{
	int i;
	bool cpu_has_cap, system_has_cap;
	const struct arm64_cpu_capabilities *caps;

	scope_mask &= ARM64_CPUCAP_SCOPE_MASK;

	for (i = 0; i < ARM64_NCAPS; i++) {
		caps = cpucap_ptrs[i];
		if (!caps || !(caps->type & scope_mask))
			continue;

		cpu_has_cap = caps->matches(caps, SCOPE_LOCAL_CPU);
		system_has_cap = cpus_have_cap(caps->capability);

		if (system_has_cap) {
			/*
			 * Check if the new CPU misses an advertised feature,
			 * which is not safe to miss.
			 */
			if (!cpu_has_cap && !cpucap_late_cpu_optional(caps))
				break;
			/*
			 * We have to issue cpu_enable() irrespective of
			 * whether the CPU has it or not, as it is enabled
			 * system wide. It is up to the callback to take
			 * appropriate action on this CPU.
			 */
			if (caps->cpu_enable)
				caps->cpu_enable(caps);
		} else {
			/*
			 * Check if the CPU has this capability if it isn't
			 * safe to have when the system doesn't.
			 */
			if (cpu_has_cap && !cpucap_late_cpu_permitted(caps))
				break;
		}
	}

	if (i < ARM64_NCAPS) {
		pr_crit("CPU%d: Detected conflict for capability %d (%s), System: %d, CPU: %d\n",
			smp_processor_id(), caps->capability,
			caps->desc, system_has_cap, cpu_has_cap);

		if (cpucap_panic_on_conflict(caps))
			cpu_panic_kernel();
		else
			cpu_die_early();
	}
}

/*
 * Check for CPU features that are used in early boot
 * based on the Boot CPU value.
 */
static void check_early_cpu_features(void)
{
	verify_cpu_asid_bits();

	verify_local_cpu_caps(SCOPE_BOOT_CPU);
}

static void
__verify_local_elf_hwcaps(const struct arm64_cpu_capabilities *caps)
{

	for (; caps->matches; caps++)
		if (cpus_have_elf_hwcap(caps) && !caps->matches(caps, SCOPE_LOCAL_CPU)) {
			pr_crit("CPU%d: missing HWCAP: %s\n",
					smp_processor_id(), caps->desc);
			cpu_die_early();
		}
}

static void verify_local_elf_hwcaps(void)
{
	__verify_local_elf_hwcaps(arm64_elf_hwcaps);

	if (id_aa64pfr0_32bit_el0(read_cpuid(ID_AA64PFR0_EL1)))
		__verify_local_elf_hwcaps(compat_elf_hwcaps);
}

static void verify_sve_features(void)
{
	unsigned long cpacr = cpacr_save_enable_kernel_sve();

	if (vec_verify_vq_map(ARM64_VEC_SVE)) {
		pr_crit("CPU%d: SVE: vector length support mismatch\n",
			smp_processor_id());
		cpu_die_early();
	}

	cpacr_restore(cpacr);
}

static void verify_sme_features(void)
{
	unsigned long cpacr = cpacr_save_enable_kernel_sme();

	if (vec_verify_vq_map(ARM64_VEC_SME)) {
		pr_crit("CPU%d: SME: vector length support mismatch\n",
			smp_processor_id());
		cpu_die_early();
	}

	cpacr_restore(cpacr);
}

static void verify_hyp_capabilities(void)
{
	u64 safe_mmfr1, mmfr0, mmfr1;
	int parange, ipa_max;
	unsigned int safe_vmid_bits, vmid_bits;

	if (!IS_ENABLED(CONFIG_KVM))
		return;

	safe_mmfr1 = read_sanitised_ftr_reg(SYS_ID_AA64MMFR1_EL1);
	mmfr0 = read_cpuid(ID_AA64MMFR0_EL1);
	mmfr1 = read_cpuid(ID_AA64MMFR1_EL1);

	/* Verify VMID bits */
	safe_vmid_bits = get_vmid_bits(safe_mmfr1);
	vmid_bits = get_vmid_bits(mmfr1);
	if (vmid_bits < safe_vmid_bits) {
		pr_crit("CPU%d: VMID width mismatch\n", smp_processor_id());
		cpu_die_early();
	}

	/* Verify IPA range */
	parange = cpuid_feature_extract_unsigned_field(mmfr0,
				ID_AA64MMFR0_EL1_PARANGE_SHIFT);
	ipa_max = id_aa64mmfr0_parange_to_phys_shift(parange);
	if (ipa_max < get_kvm_ipa_limit()) {
		pr_crit("CPU%d: IPA range mismatch\n", smp_processor_id());
		cpu_die_early();
	}
}

/*
 * Run through the enabled system capabilities and enable() them on this CPU.
 * The capabilities were decided based on the available CPUs at the boot time.
 * Any new CPU should match the system wide status of the capability. However,
 * if a new CPU doesn't have a capability which the system has already enabled,
 * we cannot do anything to fix it up and it could cause unexpected failures.
 * So we park the CPU.
 */
static void verify_local_cpu_capabilities(void)
{
	/*
	 * The capabilities with SCOPE_BOOT_CPU are checked from
	 * check_early_cpu_features(), as they need to be verified
	 * on all secondary CPUs.
	 */
	verify_local_cpu_caps(SCOPE_ALL & ~SCOPE_BOOT_CPU);
	verify_local_elf_hwcaps();

	if (system_supports_sve())
		verify_sve_features();

	if (system_supports_sme())
		verify_sme_features();

	if (is_hyp_mode_available())
		verify_hyp_capabilities();
}

void check_local_cpu_capabilities(void)
{
	/*
	 * All secondary CPUs should conform to the early CPU features
	 * in use by the kernel based on the boot CPU.
	 */
	check_early_cpu_features();

	/*
	 * If we haven't finalised the system capabilities, this CPU gets
	 * a chance to update the errata workarounds and local features.
	 * Otherwise, this CPU should verify that it has all the system
	 * advertised capabilities.
	 */
	if (!system_capabilities_finalized())
		update_cpu_capabilities(SCOPE_LOCAL_CPU);
	else
		verify_local_cpu_capabilities();
}

bool this_cpu_has_cap(unsigned int n)
{
	if (!WARN_ON(preemptible()) && n < ARM64_NCAPS) {
		const struct arm64_cpu_capabilities *cap = cpucap_ptrs[n];

		if (cap)
			return cap->matches(cap, SCOPE_LOCAL_CPU);
	}

	return false;
}
EXPORT_SYMBOL_GPL(this_cpu_has_cap);

/*
 * This helper function is used in a narrow window when,
 * - The system wide safe registers are set with all the SMP CPUs and,
 * - The SYSTEM_FEATURE system_cpucaps may not have been set.
 */
static bool __maybe_unused __system_matches_cap(unsigned int n)
{
	if (n < ARM64_NCAPS) {
		const struct arm64_cpu_capabilities *cap = cpucap_ptrs[n];

		if (cap)
			return cap->matches(cap, SCOPE_SYSTEM);
	}
	return false;
}

void cpu_set_feature(unsigned int num)
{
	set_bit(num, elf_hwcap);
}

bool cpu_have_feature(unsigned int num)
{
	return test_bit(num, elf_hwcap);
}
EXPORT_SYMBOL_GPL(cpu_have_feature);

unsigned long cpu_get_elf_hwcap(void)
{
	/*
	 * We currently only populate the first 32 bits of AT_HWCAP. Please
	 * note that for userspace compatibility we guarantee that bits 62
	 * and 63 will always be returned as 0.
	 */
	return elf_hwcap[0];
}

unsigned long cpu_get_elf_hwcap2(void)
{
	return elf_hwcap[1];
}
3478                                                  
static void __init setup_boot_cpu_capabilities(void)
{
	/*
	 * The boot CPU's feature register values have been recorded. Detect
	 * boot cpucaps and local cpucaps for the boot CPU, then enable and
	 * patch alternatives for the available boot cpucaps.
	 */
	update_cpu_capabilities(SCOPE_BOOT_CPU | SCOPE_LOCAL_CPU);
	enable_cpu_capabilities(SCOPE_BOOT_CPU);
	apply_boot_alternatives();
}

void __init setup_boot_cpu_features(void)
{
	/*
	 * Initialize the indirect array of CPU capabilities pointers before we
	 * handle the boot CPU.
	 */
	init_cpucap_indirect_list();

	/*
	 * Detect broken pseudo-NMI. Must be called _before_ the call to
	 * setup_boot_cpu_capabilities() since it interacts with
	 * can_use_gic_priorities().
	 */
	detect_system_supports_pseudo_nmi();

	setup_boot_cpu_capabilities();
}

static void __init setup_system_capabilities(void)
{
	/*
	 * The system-wide safe feature register values have been finalized.
	 * Detect, enable, and patch alternatives for the available system
	 * cpucaps.
	 */
	update_cpu_capabilities(SCOPE_SYSTEM);
	enable_cpu_capabilities(SCOPE_ALL & ~SCOPE_BOOT_CPU);
	apply_alternatives_all();

	/*
	 * Log any cpucaps with a cpumask as these aren't logged by
	 * update_cpu_capabilities().
	 */
	for (int i = 0; i < ARM64_NCAPS; i++) {
		const struct arm64_cpu_capabilities *caps = cpucap_ptrs[i];

		if (caps && caps->cpus && caps->desc &&
			cpumask_any(caps->cpus) < nr_cpu_ids)
			pr_info("detected: %s on CPU%*pbl\n",
				caps->desc, cpumask_pr_args(caps->cpus));
	}

	/*
	 * TTBR0 PAN doesn't have its own cpucap, so log it manually.
	 */
	if (system_uses_ttbr0_pan())
		pr_info("emulated: Privileged Access Never (PAN) using TTBR0_EL1 switching\n");
}

void __init setup_system_features(void)
{
	setup_system_capabilities();

	kpti_install_ng_mappings();

	sve_setup();
	sme_setup();

	/*
	 * Check for sane CTR_EL0.CWG value.
	 */
	if (!cache_type_cwg())
		pr_warn("No Cache Writeback Granule information, assuming %d\n",
			ARCH_DMA_MINALIGN);
}

void __init setup_user_features(void)
{
	user_feature_fixup();

	setup_elf_hwcaps(arm64_elf_hwcaps);

	if (system_supports_32bit_el0()) {
		setup_elf_hwcaps(compat_elf_hwcaps);
		elf_hwcap_fixup();
	}

	minsigstksz_setup();
}

static int enable_mismatched_32bit_el0(unsigned int cpu)
{
	/*
	 * The first 32-bit-capable CPU we detected and so can no longer
	 * be offlined by userspace. -1 indicates we haven't yet onlined
	 * a 32-bit-capable CPU.
	 */
	static int lucky_winner = -1;

	struct cpuinfo_arm64 *info = &per_cpu(cpu_data, cpu);
	bool cpu_32bit = id_aa64pfr0_32bit_el0(info->reg_id_aa64pfr0);

	if (cpu_32bit) {
		cpumask_set_cpu(cpu, cpu_32bit_el0_mask);
		static_branch_enable_cpuslocked(&arm64_mismatched_32bit_el0);
	}

	if (cpumask_test_cpu(0, cpu_32bit_el0_mask) == cpu_32bit)
		return 0;

	if (lucky_winner >= 0)
		return 0;

	/*
	 * We've detected a mismatch. We need to keep one of our CPUs with
	 * 32-bit EL0 online so that is_cpu_allowed() doesn't end up rejecting
	 * every CPU in the system for a 32-bit task.
	 */
	lucky_winner = cpu_32bit ? cpu : cpumask_any_and(cpu_32bit_el0_mask,
							 cpu_active_mask);
	get_cpu_device(lucky_winner)->offline_disabled = true;
	setup_elf_hwcaps(compat_elf_hwcaps);
	elf_hwcap_fixup();
	pr_info("Asymmetric 32-bit EL0 support detected on CPU %u; CPU hot-unplug disabled on CPU %u\n",
		cpu, lucky_winner);
	return 0;
}

static int __init init_32bit_el0_mask(void)
{
	if (!allow_mismatched_32bit_el0)
		return 0;

	if (!zalloc_cpumask_var(&cpu_32bit_el0_mask, GFP_KERNEL))
		return -ENOMEM;

	return cpuhp_setup_state(CPUHP_AP_ONLINE_DYN,
				 "arm64/mismatched_32bit_el0:online",
				 enable_mismatched_32bit_el0, NULL);
}
subsys_initcall_sync(init_32bit_el0_mask);

static void __maybe_unused cpu_enable_cnp(struct arm64_cpu_capabilities const *cap)
{
	cpu_enable_swapper_cnp();
}

/*
 * We emulate only the following system register space.
 * Op0 = 0x3, CRn = 0x0, Op1 = 0x0, CRm = [0, 2 - 7]
 * See Table C5-6 System instruction encodings for System register accesses,
 * ARMv8 ARM(ARM DDI 0487A.f) for more details.
 */
static inline bool __attribute_const__ is_emulated(u32 id)
{
	return (sys_reg_Op0(id) == 0x3 &&
		sys_reg_CRn(id) == 0x0 &&
		sys_reg_Op1(id) == 0x0 &&
		(sys_reg_CRm(id) == 0 ||
		 ((sys_reg_CRm(id) >= 2) && (sys_reg_CRm(id) <= 7))));
}

/*
 * With CRm == 0, reg should be one of :
 * MIDR_EL1, MPIDR_EL1 or REVIDR_EL1.
 */
static inline int emulate_id_reg(u32 id, u64 *valp)
{
	switch (id) {
	case SYS_MIDR_EL1:
		*valp = read_cpuid_id();
		break;
	case SYS_MPIDR_EL1:
		*valp = SYS_MPIDR_SAFE_VAL;
		break;
	case SYS_REVIDR_EL1:
		/* IMPLEMENTATION DEFINED values are emulated with 0 */
		*valp = 0;
		break;
	default:
		return -EINVAL;
	}

	return 0;
}

static int emulate_sys_reg(u32 id, u64 *valp)
{
	struct arm64_ftr_reg *regp;

	if (!is_emulated(id))
		return -EINVAL;

	if (sys_reg_CRm(id) == 0)
		return emulate_id_reg(id, valp);

	regp = get_arm64_ftr_reg_nowarn(id);
	if (regp)
		*valp = arm64_ftr_reg_user_value(regp);
	else
		/*
		 * The untracked registers are either IMPLEMENTATION DEFINED
		 * (e.g, ID_AFR0_EL1) or reserved RAZ.
		 */
		*valp = 0;
	return 0;
}

int do_emulate_mrs(struct pt_regs *regs, u32 sys_reg, u32 rt)
{
	int rc;
	u64 val;

	rc = emulate_sys_reg(sys_reg, &val);
	if (!rc) {
		pt_regs_write_reg(regs, rt, val);
		arm64_skip_faulting_instruction(regs, AARCH64_INSN_SIZE);
	}
	return rc;
}

bool try_emulate_mrs(struct pt_regs *regs, u32 insn)
{
	u32 sys_reg, rt;

	if (compat_user_mode(regs) || !aarch64_insn_is_mrs(insn))
		return false;

	/*
	 * sys_reg values are defined as used in mrs/msr instruction.
	 * shift the imm value to get the encoding.
	 */
	sys_reg = (u32)aarch64_insn_decode_immediate(AARCH64_INSN_IMM_16, insn) << 5;
	rt = aarch64_insn_decode_register(AARCH64_INSN_REGTYPE_RT, insn);
	return do_emulate_mrs(regs, sys_reg, rt);
}

enum mitigation_state arm64_get_meltdown_state(void)
{
	if (__meltdown_safe)
		return SPECTRE_UNAFFECTED;

	if (arm64_kernel_unmapped_at_el0())
		return SPECTRE_MITIGATED;

	return SPECTRE_VULNERABLE;
}

ssize_t cpu_show_meltdown(struct device *dev, struct device_attribute *attr,
			  char *buf)
{
	switch (arm64_get_meltdown_state()) {
	case SPECTRE_UNAFFECTED:
		return sprintf(buf, "Not affected\n");

	case SPECTRE_MITIGATED:
		return sprintf(buf, "Mitigation: PTI\n");

	default:
		return sprintf(buf, "Vulnerable\n");
	}
}

