TOMOYO Linux Cross Reference
Linux/Documentation/admin-guide/sysctl/net.rst


Diff markup

Differences between /Documentation/admin-guide/sysctl/net.rst (Version linux-6.12-rc7) and /Documentation/admin-guide/sysctl/net.rst (Version linux-6.6.60)

The two versions differ only in the bpf_jit_enable architecture list, where linux-6.12-rc7 adds loongarch64 and arc, and in the optmem_max entry, where linux-6.12-rc7 documents the TCP tx zerocopy use and the 128 KB default. The linux-6.12-rc7 text follows.

================================
Documentation for /proc/sys/net/
================================

Copyright

Copyright (c) 1999

        - Terrehon Bowden <terrehon@pacbell.net>
        - Bodo Bauer <bb@ricochet.net>

Copyright (c) 2000

        - Jorge Nerin <comandante@zaralinux.com>

Copyright (c) 2009

        - Shen Feng <shen@cn.fujitsu.com>

For general info and legal blurb, please look in index.rst.

------------------------------------------------------------------------------

This file contains the documentation for the sysctl files in
/proc/sys/net.

The interface to the networking parts of the kernel is located in
/proc/sys/net. The following table shows all possible subdirectories. You may
see only some of them, depending on your kernel's configuration.


Table : Subdirectories in /proc/sys/net

 ========= =================== = ========== ===================
 Directory Content               Directory  Content
 ========= =================== = ========== ===================
 802       E802 protocol         mptcp      Multipath TCP
 appletalk Appletalk protocol    netfilter  Network Filter
 ax25      AX25                  netrom     NET/ROM
 bridge    Bridging              rose       X.25 PLP layer
 core      General parameter     tipc       TIPC
 ethernet  Ethernet protocol     unix       Unix domain sockets
 ipv4      IP version 4          x25        X.25 protocol
 ipv6      IP version 6
 ========= =================== = ========== ===================

1. /proc/sys/net/core - Network core options
============================================

bpf_jit_enable
--------------

This enables the BPF Just in Time (JIT) compiler. BPF is a flexible
and efficient infrastructure allowing bytecode to be executed at various
hook points. It is used in a number of Linux kernel subsystems such
as networking (e.g. XDP, tc), tracing (e.g. kprobes, uprobes, tracepoints)
and security (e.g. seccomp). LLVM has a BPF back end that can compile
restricted C into a sequence of BPF instructions. After program load
through bpf(2) and passing a verifier in the kernel, a JIT will then
translate these BPF proglets into native CPU instructions. There are
two flavors of JITs, the newer eBPF JIT currently supported on:

  - x86_64
  - x86_32
  - arm64
  - arm32
  - ppc64
  - ppc32
  - sparc64
  - mips64
  - s390x
  - riscv64
  - riscv32
  - loongarch64
  - arc

And the older cBPF JIT supported on the following archs:

  - mips
  - sparc

eBPF JITs are a superset of cBPF JITs, meaning the kernel will
migrate cBPF instructions into eBPF instructions and then JIT
compile them transparently. Older cBPF JITs can only translate
tcpdump filters, seccomp rules, etc, but not the eBPF programs
mentioned above that are loaded through bpf(2).

Values:

        - 0 - disable the JIT (default value)
        - 1 - enable the JIT
        - 2 - enable the JIT and ask the compiler to emit traces to the kernel log.

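A minimal shell sketch for inspecting and toggling the JIT (assuming the
standard sysctl(8) tool; on kernels built with CONFIG_BPF_JIT_ALWAYS_ON the
value is pinned to 1 and cannot be set back to 0)::

  sysctl net.core.bpf_jit_enable            # read the current value
  sysctl -w net.core.bpf_jit_enable=1       # enable the JIT
  sysctl -w net.core.bpf_jit_enable=2       # debugging only: also dump JIT traces to the kernel log
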
bpf_jit_harden
--------------

This enables hardening for the BPF JIT compiler. Hardening is supported
for eBPF JIT backends. Enabling hardening trades off performance, but can
mitigate JIT spraying.

Values:

        - 0 - disable JIT hardening (default value)
        - 1 - enable JIT hardening for unprivileged users only
        - 2 - enable JIT hardening for all users

where "privileged user" in this context means a process having
CAP_BPF or CAP_SYS_ADMIN in the root user name space.

bpf_jit_kallsyms
----------------

When the BPF JIT compiler is enabled, compiled images live at addresses
unknown to the kernel, meaning they neither show up in traces nor
in /proc/kallsyms. This enables export of these addresses, which can
be used for debugging/tracing. If bpf_jit_harden is enabled, this
feature is disabled.

Values:

        - 0 - disable JIT kallsyms export (default value)
        - 1 - enable JIT kallsyms export for privileged users only

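A short sketch of making JITed programs visible in kallsyms (assuming root
and at least one loaded BPF program; symbol names shown are illustrative)::

  sysctl -w net.core.bpf_jit_enable=1
  sysctl -w net.core.bpf_jit_kallsyms=1          # has no effect while bpf_jit_harden is enabled
  grep ' bpf_prog_' /proc/kallsyms | head        # JITed programs appear as bpf_prog_<tag>_<name>
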
bpf_jit_limit
-------------

This enforces a global limit for memory allocations to the BPF JIT
compiler in order to reject unprivileged JIT requests once it has
been surpassed. bpf_jit_limit contains the value of the global limit
in bytes.

dev_weight
----------

The maximum number of packets that the kernel can handle on a NAPI interrupt;
it is a per-CPU variable. For drivers that support LRO or GRO_HW, a hardware
aggregated packet is counted as one packet in this context.

Default: 64

dev_weight_rx_bias
------------------

RPS (e.g. RFS, aRFS) processing is competing with the registered NAPI poll function
of the driver for the per softirq cycle netdev_budget. This parameter influences
the proportion of the configured netdev_budget that is spent on RPS based packet
processing during RX softirq cycles. It is further meant for making current
dev_weight adaptable for asymmetric CPU needs on the RX/TX side of the network stack
(see dev_weight_tx_bias). It is effective on a per-CPU basis. Determination is based
on dev_weight and is calculated multiplicatively (dev_weight * dev_weight_rx_bias).

Default: 1

dev_weight_tx_bias
------------------

Scales the maximum number of packets that can be processed during a TX softirq cycle.
Effective on a per-CPU basis. Allows scaling of current dev_weight for asymmetric
net stack processing needs. Be careful to avoid making TX softirq processing a CPU hog.

Calculation is based on dev_weight (dev_weight * dev_weight_tx_bias).

Default: 1

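A quick sketch of how the effective per-CPU budgets are derived (the numbers
in the comments are the defaults and purely illustrative)::

  sysctl net.core.dev_weight net.core.dev_weight_rx_bias net.core.dev_weight_tx_bias
  # RX budget for RPS work per softirq cycle = dev_weight * dev_weight_rx_bias  (64 * 1 = 64)
  # TX budget per softirq cycle              = dev_weight * dev_weight_tx_bias  (64 * 1 = 64)
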
default_qdisc
-------------

The default queuing discipline to use for network devices. This allows
overriding the default of pfifo_fast with an alternative. Since the default
queuing discipline is created without additional parameters, it is best suited
to queuing disciplines that work well without configuration, like stochastic
fair queue (sfq), CoDel (codel) or fair queue CoDel (fq_codel). Don't use
queuing disciplines like Hierarchical Token Bucket or Deficit Round Robin
which require setting up classes and bandwidths. Note that physical multiqueue
interfaces still use mq as root qdisc, which in turn uses this default for its
leaves. Virtual devices (like e.g. lo or veth) ignore this setting and instead
default to noqueue.

Default: pfifo_fast

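A minimal sketch of switching the default (assuming iproute2's tc(8); the new
default only applies to qdiscs created afterwards, and eth0 is just an example
interface)::

  sysctl -w net.core.default_qdisc=fq_codel
  tc qdisc show dev eth0     # existing devices keep their current root qdisc until it is replaced
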
busy_read
---------

Low latency busy poll timeout for socket reads. (needs CONFIG_NET_RX_BUSY_POLL)
Approximate time in us to busy loop waiting for packets on the device queue.
This sets the default value of the SO_BUSY_POLL socket option.
Can be set or overridden per socket by setting socket option SO_BUSY_POLL,
which is the preferred method of enabling it. If you need to enable the feature
globally via sysctl, a value of 50 is recommended.

Will increase power usage.

Default: 0 (off)

busy_poll
---------

Low latency busy poll timeout for poll and select. (needs CONFIG_NET_RX_BUSY_POLL)
Approximate time in us to busy loop waiting for events.
Recommended value depends on the number of sockets you poll on.
For several sockets 50, for several hundreds 100.
For more than that you probably want to use epoll.
Note that only sockets with SO_BUSY_POLL set will be busy polled,
so you want to either selectively set SO_BUSY_POLL on those sockets or set
net.core.busy_read globally.

Will increase power usage.

Default: 0 (off)

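A minimal sketch of enabling busy polling globally from the shell (per-socket
SO_BUSY_POLL remains the preferred mechanism; 50 us follows the recommendation
above)::

  sysctl -w net.core.busy_read=50     # busy poll on blocking socket reads
  sysctl -w net.core.busy_poll=50     # busy poll in poll()/select()
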
mem_pcpu_rsv
------------

Per-cpu reserved forward alloc cache size in page units. Default 1MB per CPU.

rmem_default
------------

The default setting of the socket receive buffer in bytes.

rmem_max
--------

The maximum receive socket buffer size in bytes.

rps_default_mask
----------------

The default RPS CPU mask used on newly created network devices. An empty
mask means RPS disabled by default.

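A short sketch of setting a default mask and checking that a device created
afterwards picks it up (the value is a hexadecimal CPU bitmask; "f" selects
CPUs 0-3, and the veth device names are only examples)::

  sysctl -w net.core.rps_default_mask=f
  ip link add dev veth-demo type veth peer name veth-demo-p   # created after the change
  cat /sys/class/net/veth-demo/queues/rx-0/rps_cpus           # inherits the default mask
  ip link del dev veth-demo
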
tstamp_allow_data
-----------------

Allow processes to receive tx timestamps looped together with the original
packet contents. If disabled, transmit timestamp requests from unprivileged
processes are dropped unless socket option SOF_TIMESTAMPING_OPT_TSONLY is set.

Default: 1 (on)


wmem_default
------------

The default setting (in bytes) of the socket send buffer.

wmem_max
--------

The maximum send socket buffer size in bytes.

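A minimal sketch of raising the socket buffer ceilings (the sizes are only
examples; setsockopt(SO_RCVBUF/SO_SNDBUF) requests are clamped to
rmem_max/wmem_max)::

  sysctl -w net.core.rmem_max=8388608      # allow receive buffers up to 8 MiB
  sysctl -w net.core.wmem_max=8388608      # allow send buffers up to 8 MiB
  sysctl net.core.rmem_default net.core.wmem_default
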
message_burst and message_cost
------------------------------

These parameters are used to limit the warning messages written to the kernel
log from the networking code. They enforce a rate limit to make a
denial-of-service attack impossible. A higher message_cost factor results in
fewer messages being written. Message_burst controls when messages will
be dropped. The default settings limit warning messages to one every five
seconds.

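A quick sketch of inspecting or tuning the rate limit (the value written is
only an example; raising message_cost reduces the allowed message rate, while
raising message_burst permits larger bursts before messages are dropped)::

  sysctl net.core.message_cost net.core.message_burst
  sysctl -w net.core.message_cost=10
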
warnings
--------

This sysctl is now unused.

This was used to control console messages from the networking stack that
occur because of problems on the network like duplicate address or bad
checksums.

These messages are now emitted at KERN_DEBUG and can generally be enabled
and controlled by the dynamic_debug facility.

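If those messages are needed again, a sketch of turning them on through
dynamic_debug (assuming CONFIG_DYNAMIC_DEBUG and a mounted debugfs; the file
glob is only an example)::

  echo 'file net/ipv4/* +p' > /sys/kernel/debug/dynamic_debug/control
  echo 'file net/ipv4/* -p' > /sys/kernel/debug/dynamic_debug/control   # turn them back off
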
netdev_budget
-------------

Maximum number of packets taken from all interfaces in one polling cycle (NAPI
poll). In one polling cycle, interfaces which are registered for polling are
probed in a round-robin manner. Also, a polling cycle may not exceed
netdev_budget_usecs microseconds, even if netdev_budget has not been
exhausted.

netdev_budget_usecs
-------------------

Maximum number of microseconds in one NAPI polling cycle. Polling
will exit when either netdev_budget_usecs have elapsed during the
poll cycle or the number of packets processed reaches netdev_budget.

netdev_max_backlog
------------------

Maximum number of packets, queued on the INPUT side, when the interface
receives packets faster than the kernel can process them.

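A short sketch of checking whether these limits are being hit (the columns in
/proc/net/softnet_stat are per-CPU hex counters; the second column counts
packets dropped because the backlog was full and the third counts times the
budget ran out, i.e. time_squeeze)::

  sysctl net.core.netdev_budget net.core.netdev_budget_usecs net.core.netdev_max_backlog
  cat /proc/net/softnet_stat
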
netdev_rss_key
--------------

RSS (Receive Side Scaling) enabled drivers use a 40-byte host key that is
randomly generated.
Some user space might need to gather its content even if drivers do not
provide ethtool -x support yet.

::

  myhost:~# cat /proc/sys/net/core/netdev_rss_key
  84:50:f4:00:a8:15:d1:a7:e9:7f:1d:60:35:c7:47:25:42:97:74:ca:56:bb:b6:a1:d8: ... (52 bytes total)

File contains nul bytes if no driver ever called the netdev_rss_key_fill() function.

Note:
  /proc/sys/net/core/netdev_rss_key contains 52 bytes of key,
  but most drivers only use 40 bytes of it.

::

  myhost:~# ethtool -x eth0
  RX flow hash indirection table for eth0 with 8 RX ring(s):
      0:    0     1     2     3     4     5     6     7
  RSS hash key:
  84:50:f4:00:a8:15:d1:a7:e9:7f:1d:60:35:c7:47:25:42:97:74:ca:56:bb:b6:a1:d8:43:e3:c9:0c:fd:17:55:c2:3a:4d:69:ed:f1:42:89

netdev_tstamp_prequeue
----------------------

If set to 0, RX packet timestamps can be sampled after RPS processing, when
the target CPU processes packets. It might add some delay to the timestamps,
but permits distributing the load across several CPUs.

If set to 1 (default), timestamps are sampled as soon as possible, before
queueing.

netdev_unregister_timeout_secs
------------------------------

Unregister network device timeout in seconds.
This option controls the timeout (in seconds) used to issue a warning while
waiting for a network device refcount to drop to 0 during device
unregistration. A lower value may be useful during bisection to detect
a leaked reference faster. A larger value may be useful to prevent false
warnings on slow/loaded systems.
Default value is 10, minimum 1, maximum 3600.

skb_defer_max
-------------

Max size (in skbs) of the per-cpu list of skbs being freed
by the cpu which allocated them. Used by TCP stack so far.

Default: 64

optmem_max
----------

Maximum ancillary buffer size allowed per socket. Ancillary data is a sequence
of struct cmsghdr structures with appended data. TCP tx zerocopy also uses
optmem_max as a limit for its internal structures.

Default : 128 KB

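A minimal sketch of inspecting or raising the limit (the value written is only
an example for workloads that send large control-message payloads or use TCP
tx zerocopy heavily)::

  sysctl net.core.optmem_max
  sysctl -w net.core.optmem_max=262144
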
fb_tunnels_only_for_init_net
----------------------------

Controls if fallback tunnels (like tunl0, gre0, gretap0, erspan0,
sit0, ip6tnl0, ip6gre0) are automatically created. There are 3 possibilities:
(a) value = 0; respective fallback tunnels are created when the module is
loaded in every net namespace (backward compatible behavior).
(b) value = 1; [kcmd value: initns] respective fallback tunnels are
created only in the init net namespace and every other net namespace will
not have them.
(c) value = 2; [kcmd value: none] fallback tunnels are not created
when a module is loaded in any of the net namespaces. Setting the value to
"2" is pointless after boot if these modules are built-in, so there is
a kernel command-line option that can change this default. Please refer to
Documentation/admin-guide/kernel-parameters.txt for additional details.

Not creating fallback tunnels gives userspace control to create only what
is needed and avoids creating redundant devices.

Default : 0  (for compatibility reasons)

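A short sketch of verifying the behaviour (assuming iproute2 and that a tunnel
module such as sit is already loaded; the namespace name is arbitrary)::

  sysctl -w net.core.fb_tunnels_only_for_init_net=1
  ip netns add demo
  ip netns exec demo ip link show     # no fallback sit0/tunl0 devices in the new namespace
  ip netns del demo
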
devconf_inherit_init_net
------------------------

Controls if a new network namespace should inherit all current
settings under /proc/sys/net/{ipv4,ipv6}/conf/{all,default}/. By
default, we keep the current behavior: for IPv4 we inherit all current
settings from init_net and for IPv6 we reset all settings to default.

If set to 1, both IPv4 and IPv6 settings are forced to inherit from
current ones in init_net. If set to 2, both IPv4 and IPv6 settings are
forced to reset to their default values. If set to 3, both IPv4 and IPv6
settings are forced to inherit from current ones in the netns where this
new netns has been created.

Default : 0  (for compatibility reasons)

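A quick sketch of observing the inheritance (the IPv6 per-device default shown
inside the new namespace is only an example, and the namespace name is
arbitrary)::

  sysctl -w net.core.devconf_inherit_init_net=1
  sysctl -w net.ipv6.conf.default.disable_ipv6=1
  ip netns add demo
  ip netns exec demo sysctl net.ipv6.conf.default.disable_ipv6   # inherited from init_net
  ip netns del demo
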
txrehash
--------

Controls default hash rethink behaviour on a socket when the SO_TXREHASH option
is set to SOCK_TXREHASH_DEFAULT (i.e. not overridden by setsockopt).

If set to 1 (default), hash rethink is performed on the listening socket.
If set to 0, hash rethink is not performed.

gro_normal_batch
----------------

Maximum number of segments to batch up on output of GRO. When a packet
exits GRO, either as a coalesced superframe or as an original packet which
GRO has decided not to coalesce, it is placed on a per-NAPI list. This
list is then passed to the stack when the number of segments reaches the
gro_normal_batch limit.

high_order_alloc_disable
------------------------

By default the allocator for page frags tries to use high order pages (order-3
on x86). While the default behavior gives good results in most cases, some users
might have hit contention in page allocation/freeing. This was especially
true on older kernels (< 5.14) when high-order pages were not stored on per-cpu
lists. This allows opting in to order-0 allocations instead, but is now mostly of
historical importance.

Default: 0

2. /proc/sys/net/unix - Parameters for Unix domain sockets
----------------------------------------------------------

There is only one file in this directory.
unix_dgram_qlen limits the max number of datagrams queued in a Unix domain
socket's buffer. It will not take effect unless the PF_UNIX flag is specified.

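For reference, a sketch of reading and adjusting the limit (the proc entry
under /proc/sys/net/unix is named max_dgram_qlen; the value written is only an
example)::

  sysctl net.unix.max_dgram_qlen
  sysctl -w net.unix.max_dgram_qlen=512
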
3. /proc/sys/net/ipv4 - IPV4 settings
-------------------------------------
Please see: Documentation/networking/ip-sysctl.rst and
Documentation/admin-guide/sysctl/net.rst for descriptions of these entries.


4. Appletalk
------------

The /proc/sys/net/appletalk directory holds the Appletalk configuration data
when Appletalk is loaded. The configurable parameters are:

aarp-expiry-time
----------------

The amount of time we keep an ARP entry before expiring it. Used to age out
old hosts.

aarp-resolve-time
-----------------

The amount of time we will spend trying to resolve an Appletalk address.

aarp-retransmit-limit
---------------------

The number of times we will retransmit a query before giving up.

aarp-tick-time
--------------

Controls the rate at which expires are checked.

The directory /proc/net/appletalk holds the list of active Appletalk sockets
on a machine.

The fields indicate the DDP type, the local address (in network:node format),
the remote address, the size of the transmit pending queue, the size of the
received queue (bytes waiting for applications to read), the state and the uid
owning the socket.

/proc/net/atalk_iface lists all the interfaces configured for Appletalk. It
shows the name of the interface, its Appletalk address, the network range on
that address (or network number for phase 1 networks), and the status of the
interface.

/proc/net/atalk_route lists each known network route. It lists the target
(network) that the route leads to, the router (may be directly connected), the
route flags, and the device the route is using.

5. TIPC
-------

tipc_rmem
---------

The TIPC protocol now has a tunable for the receive memory, similar to
tcp_rmem - i.e. a vector of 3 INTEGERs: (min, default, max)

::

    # cat /proc/sys/net/tipc/tipc_rmem
    4252725 34021800        68043600
    #

The max value is set to CONN_OVERLOAD_LIMIT, and the default and min values
are scaled (shifted) versions of that same value. Note that the min value
is not at this point in time used in any meaningful way, but the triplet is
preserved in order to be consistent with things like tcp_rmem.

named_timeout
-------------

TIPC name table updates are distributed asynchronously in a cluster, without
any form of transaction handling. This means that different race scenarios are
possible. One such is that a name withdrawal sent out by one node and received
by another node may arrive after a second, overlapping name publication has
already been accepted from a third node, although the conflicting updates
originally may have been issued in the correct sequential order.
If named_timeout is nonzero, failed topology updates will be placed on a defer
queue until another event arrives that clears the error, or until the timeout
expires. Value is in milliseconds.
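
A minimal sketch of enabling the defer queue (the timeout value is only an
example)::

  sysctl net.tipc.named_timeout
  sysctl -w net.tipc.named_timeout=2000    # defer failed updates for up to 2 seconds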
                                                      
