TOMOYO Linux Cross Reference
Linux/Documentation/admin-guide/sysctl/net.rst


Diff markup

Differences between /Documentation/admin-guide/sysctl/net.rst (Version linux-6.12-rc7) and /Documentation/admin-guide/sysctl/net.rst (Version linux-6.9.12)


================================
Documentation for /proc/sys/net/
================================

Copyright

Copyright (c) 1999

        - Terrehon Bowden <terrehon@pacbell.net>
        - Bodo Bauer <bb@ricochet.net>

Copyright (c) 2000

        - Jorge Nerin <comandante@zaralinux.com>

Copyright (c) 2009

        - Shen Feng <shen@cn.fujitsu.com>

For general info and legal blurb, please look in index.rst.

------------------------------------------------------------------------------

This file contains the documentation for the sysctl files in
/proc/sys/net

The interface to the networking parts of the kernel is located in
/proc/sys/net. The following table shows all possible subdirectories. You may
see only some of them, depending on your kernel's configuration.


Table : Subdirectories in /proc/sys/net

 ========= =================== = ========== ===================
 Directory Content               Directory  Content
 ========= =================== = ========== ===================
 802       E802 protocol         mptcp      Multipath TCP
 appletalk Appletalk protocol    netfilter  Network Filter
 ax25      AX25                  netrom     NET/ROM
 bridge    Bridging              rose       X.25 PLP layer
 core      General parameter     tipc       TIPC
 ethernet  Ethernet protocol     unix       Unix domain sockets
 ipv4      IP version 4          x25        X.25 protocol
 ipv6      IP version 6
 ========= =================== = ========== ===================

1. /proc/sys/net/core - Network core options
============================================

bpf_jit_enable
--------------

This enables the BPF Just in Time (JIT) compiler. BPF is a flexible
and efficient infrastructure that allows bytecode to be executed at various
hook points. It is used in a number of Linux kernel subsystems such
as networking (e.g. XDP, tc), tracing (e.g. kprobes, uprobes, tracepoints)
and security (e.g. seccomp). LLVM has a BPF back end that can compile
restricted C into a sequence of BPF instructions. After program load
through bpf(2) and passing a verifier in the kernel, a JIT will then
translate these BPF proglets into native CPU instructions. There are
two flavors of JITs, the newer eBPF JIT currently supported on:

  - x86_64
  - x86_32
  - arm64
  - arm32
  - ppc64
  - ppc32
  - sparc64
  - mips64
  - s390x
  - riscv64
  - riscv32
  - loongarch64
  - arc (present in linux-6.12-rc7 only)

And the older cBPF JIT supported on the following archs:

  - mips
  - sparc

eBPF JITs are a superset of cBPF JITs, meaning the kernel will
migrate cBPF instructions into eBPF instructions and then JIT
compile them transparently. Older cBPF JITs can only translate
tcpdump filters, seccomp rules, etc., but not the previously mentioned
eBPF programs loaded through bpf(2).

Values:

        - 0 - disable the JIT (default value)
        - 1 - enable the JIT
        - 2 - enable the JIT and ask the compiler to emit traces on kernel log.

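As a minimal sketch of how these values are applied at run time (persisting
the setting, for example via /etc/sysctl.d/, is left to the administrator):

::

  # Enable the eBPF JIT for the running kernel
  sysctl -w net.core.bpf_jit_enable=1

  # Read back the current setting
  cat /proc/sys/net/core/bpf_jit_enable
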
bpf_jit_harden
--------------

This enables hardening for the BPF JIT compiler. It is supported by the
eBPF JIT backends. Enabling hardening trades off performance, but can
mitigate JIT spraying.

Values:

        - 0 - disable JIT hardening (default value)
        - 1 - enable JIT hardening for unprivileged users only
        - 2 - enable JIT hardening for all users

where "privileged user" in this context means a process having
CAP_BPF or CAP_SYS_ADMIN in the root user name space.

bpf_jit_kallsyms
----------------

When the BPF JIT compiler is enabled, the compiled images are at addresses
unknown to the kernel, meaning they neither show up in traces nor
in /proc/kallsyms. This enables export of these addresses, which can
be used for debugging/tracing. If bpf_jit_harden is enabled, this
feature is disabled.

Values :

        - 0 - disable JIT kallsyms export (default value)
        - 1 - enable JIT kallsyms export for privileged users only

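As an illustrative check (a sketch, not part of the documented interface),
programs JITed while both the JIT and the kallsyms export are enabled are
expected to appear in /proc/kallsyms under bpf_prog_ prefixed symbol names:

::

  # Export addresses of JITed BPF images (the JIT must be enabled too)
  sysctl -w net.core.bpf_jit_enable=1
  sysctl -w net.core.bpf_jit_kallsyms=1

  # List symbols of JITed BPF programs, if any are loaded (run as root)
  grep ' bpf_prog_' /proc/kallsyms
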
bpf_jit_limit
-------------

This enforces a global limit for memory allocations to the BPF JIT
compiler in order to reject unprivileged JIT requests once it has
been surpassed. bpf_jit_limit contains the value of the global limit
in bytes.

dev_weight
----------

The maximum number of packets that the kernel can handle on a NAPI interrupt;
it is a per-CPU variable. For drivers that support LRO or GRO_HW, a hardware
aggregated packet is counted as one packet in this context.

Default: 64

dev_weight_rx_bias
------------------

RPS (e.g. RFS, aRFS) processing is competing with the registered NAPI poll function
of the driver for the per softirq cycle netdev_budget. This parameter influences
the proportion of the configured netdev_budget that is spent on RPS based packet
processing during RX softirq cycles. It is further meant for making the current
dev_weight adaptable for asymmetric CPU needs on the RX/TX side of the network stack.
(see dev_weight_tx_bias) It is effective on a per CPU basis. Determination is based
on dev_weight and is calculated multiplicatively (dev_weight * dev_weight_rx_bias).

Default: 1

dev_weight_tx_bias
------------------

Scales the maximum number of packets that can be processed during a TX softirq cycle.
Effective on a per CPU basis. Allows scaling of the current dev_weight for asymmetric
net stack processing needs. Be careful to avoid making TX softirq processing a CPU hog.

Calculation is based on dev_weight (dev_weight * dev_weight_tx_bias).

Default: 1

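To make the multiplicative scaling concrete, the effective per-cycle RX and TX
packet quotas can be derived from the current settings; a small shell sketch
(no particular values assumed):

::

  # dev_weight scaled by the RX and TX bias factors
  rx=$(( $(sysctl -n net.core.dev_weight) * $(sysctl -n net.core.dev_weight_rx_bias) ))
  tx=$(( $(sysctl -n net.core.dev_weight) * $(sysctl -n net.core.dev_weight_tx_bias) ))
  echo "RX quota: $rx packets, TX quota: $tx packets"
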
default_qdisc
-------------

The default queuing discipline to use for network devices. This allows
overriding the default of pfifo_fast with an alternative. Since the default
queuing discipline is created without additional parameters, it is best suited
to queuing disciplines that work well without configuration, like stochastic
fair queue (sfq), CoDel (codel) or fair queue CoDel (fq_codel). Don't use
queuing disciplines like Hierarchical Token Bucket or Deficit Round Robin
which require setting up classes and bandwidths. Note that physical multiqueue
interfaces still use mq as root qdisc, which in turn uses this default for its
leaves. Virtual devices (like e.g. lo or veth) ignore this setting and instead
default to noqueue.

Default: pfifo_fast

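For example (a sketch only; eth0 is a placeholder interface name), the default
can be switched to fq_codel; it applies to qdiscs created afterwards and the
result can be inspected with iproute2:

::

  # Use fq_codel for qdiscs created from now on
  sysctl -w net.core.default_qdisc=fq_codel

  # Show the qdisc currently attached to a device
  tc qdisc show dev eth0
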
busy_read
---------

Low latency busy poll timeout for socket reads. (needs CONFIG_NET_RX_BUSY_POLL)
Approximate time in us to busy loop waiting for packets on the device queue.
This sets the default value of the SO_BUSY_POLL socket option.
Can be set or overridden per socket by setting socket option SO_BUSY_POLL,
which is the preferred method of enabling. If you need to enable the feature
globally via sysctl, a value of 50 is recommended.

Will increase power usage.

Default: 0 (off)

busy_poll
---------
Low latency busy poll timeout for poll and select. (needs CONFIG_NET_RX_BUSY_POLL)
Approximate time in us to busy loop waiting for events.
The recommended value depends on the number of sockets you poll on.
For several sockets 50, for several hundreds 100.
For more than that you probably want to use epoll.
Note that only sockets with SO_BUSY_POLL set will be busy polled,
so you want to either selectively set SO_BUSY_POLL on those sockets or set
net.core.busy_read globally.

Will increase power usage.

Default: 0 (off)

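If global enablement is wanted anyway, a sketch of the sysctl route using the
50 us value recommended above (per-socket SO_BUSY_POLL remains the preferred
method):

::

  # Busy poll up to 50 us on socket reads and on poll()/select()
  sysctl -w net.core.busy_read=50
  sysctl -w net.core.busy_poll=50
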
mem_pcpu_rsv
------------

Per-cpu reserved forward alloc cache size in page units. Default 1MB per CPU.

rmem_default
------------

The default setting of the socket receive buffer in bytes.

rmem_max
--------

The maximum receive socket buffer size in bytes.

rps_default_mask
----------------

The default RPS CPU mask used on newly created network devices. An empty
mask means RPS disabled by default.

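For illustration only, and assuming the usual hexadecimal cpumask notation for
this file, a default mask can be set so that devices created afterwards start
with RPS enabled on the chosen CPUs:

::

  # Steer receive processing of newly created devices to CPUs 0-3
  sysctl -w net.core.rps_default_mask=f

  # A zero mask restores the default of RPS disabled
  sysctl -w net.core.rps_default_mask=0
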
tstamp_allow_data
-----------------
Allow processes to receive tx timestamps looped together with the original
packet contents. If disabled, transmit timestamp requests from unprivileged
processes are dropped unless socket option SOF_TIMESTAMPING_OPT_TSONLY is set.

Default: 1 (on)

wmem_default
------------

The default setting (in bytes) of the socket send buffer.

wmem_max
--------

The maximum send socket buffer size in bytes.

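A common pairing is to raise both ceilings (rmem_max above and wmem_max here)
so that applications may request larger buffers via SO_RCVBUF/SO_SNDBUF; the
8 MiB figure below is purely illustrative:

::

  # Allow sockets to request receive/send buffers of up to 8 MiB
  sysctl -w net.core.rmem_max=8388608
  sysctl -w net.core.wmem_max=8388608
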
message_burst and message_cost
------------------------------

These parameters are used to limit the warning messages written to the kernel
log from the networking code. They enforce a rate limit to make a
denial-of-service attack impossible. A higher message_cost factor results in
fewer messages being written. Message_burst controls when messages will
be dropped. The default settings limit warning messages to one every five
seconds.

warnings
--------

This sysctl is now unused.

This was used to control console messages from the networking stack that
occurred because of problems on the network, such as duplicate addresses or
bad checksums.

These messages are now emitted at KERN_DEBUG and can generally be enabled
and controlled by the dynamic_debug facility.

netdev_budget
-------------

Maximum number of packets taken from all interfaces in one polling cycle (NAPI
poll). In one polling cycle interfaces which are registered to polling are
probed in a round-robin manner. Also, a polling cycle may not exceed
netdev_budget_usecs microseconds, even if netdev_budget has not been
exhausted.

netdev_budget_usecs
-------------------

Maximum number of microseconds in one NAPI polling cycle. Polling
will exit when either netdev_budget_usecs have elapsed during the
poll cycle or the number of packets processed reaches netdev_budget.

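Since the two limits bound the same polling loop, they are usually inspected
or adjusted together; the values below are purely illustrative:

::

  # Current per-cycle packet and time budgets
  sysctl net.core.netdev_budget net.core.netdev_budget_usecs

  # Example: permit more work per NAPI polling cycle
  sysctl -w net.core.netdev_budget=600
  sysctl -w net.core.netdev_budget_usecs=8000
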
netdev_max_backlog
------------------

Maximum number of packets, queued on the INPUT side, when the interface
receives packets faster than the kernel can process them.

netdev_rss_key
--------------

RSS (Receive Side Scaling) enabled drivers use a 40-byte host key that is
randomly generated.
Some user space might need to gather its content even if drivers do not
provide ethtool -x support yet.

::

  myhost:~# cat /proc/sys/net/core/netdev_rss_key
  84:50:f4:00:a8:15:d1:a7:e9:7f:1d:60:35:c7:47:25:42:97:74:ca:56:bb:b6:a1:d8: ... (52 bytes total)

File contains nul bytes if no driver ever called netdev_rss_key_fill() function.

Note:
  /proc/sys/net/core/netdev_rss_key contains 52 bytes of key,
  but most drivers only use 40 bytes of it.

::

  myhost:~# ethtool -x eth0
  RX flow hash indirection table for eth0 with 8 RX ring(s):
      0:    0     1     2     3     4     5     6     7
  RSS hash key:
  84:50:f4:00:a8:15:d1:a7:e9:7f:1d:60:35:c7:47:25:42:97:74:ca:56:bb:b6:a1:d8:43:e3:c9:0c:fd:17:55:c2:3a:4d:69:ed:f1:42:89

netdev_tstamp_prequeue
----------------------

If set to 0, RX packet timestamps can be sampled after RPS processing, when
the target CPU processes packets. It might add some delay to timestamps, but
permits distributing the load over several CPUs.

If set to 1 (default), timestamps are sampled as soon as possible, before
queueing.

netdev_unregister_timeout_secs
------------------------------

Unregister network device timeout in seconds.
This option controls the timeout (in seconds) used to issue a warning while
waiting for a network device refcount to drop to 0 during device
unregistration. A lower value may be useful during bisection to detect
a leaked reference faster. A larger value may be useful to prevent false
warnings on slow/loaded systems.
Default value is 10, minimum 1, maximum 3600.

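When bisecting a suspected refcount leak, the warning can be made to fire
sooner; the 2 seconds below is just an example within the documented
1-3600 range:

::

  # Warn after 2 seconds instead of the default 10
  sysctl -w net.core.netdev_unregister_timeout_secs=2
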
skb_defer_max
-------------

Max size (in skbs) of the per-cpu list of skbs being freed
by the cpu which allocated them. Used by TCP stack so far.

Default: 64

optmem_max
----------

Maximum ancillary buffer size allowed per socket. Ancillary data is a sequence
of struct cmsghdr structures with appended data. TCP tx zerocopy also uses
optmem_max as a limit for its internal structures.

Default : 128 KB

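Workloads sending large amounts of ancillary data, or making heavy use of TCP
tx zerocopy, may need a larger ceiling; the 512 KiB figure is illustrative
only:

::

  # Raise the per-socket ancillary buffer limit to 512 KiB
  sysctl -w net.core.optmem_max=524288
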
fb_tunnels_only_for_init_net
----------------------------

Controls if fallback tunnels (like tunl0, gre0, gretap0, erspan0,
sit0, ip6tnl0, ip6gre0) are automatically created. There are 3 possibilities:

(a) value = 0; respective fallback tunnels are created when the module is
    loaded in every net namespace (backward compatible behavior).
(b) value = 1; [kcmd value: initns] respective fallback tunnels are
    created only in the init net namespace and every other net namespace
    will not have them.
(c) value = 2; [kcmd value: none] fallback tunnels are not created
    when a module is loaded in any of the net namespaces. Setting value to
    "2" is pointless after boot if these modules are built-in, so there is
    a kernel command-line option that can change this default. Please refer to
    Documentation/admin-guide/kernel-parameters.txt for additional details.

Not creating fallback tunnels gives userspace control to create
only what is needed and avoids creating redundant devices.

Default : 0  (for compatibility reasons)

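A sketch of both routes; the run-time sysctl only affects tunnel modules
loaded afterwards, while built-in drivers need the boot-time option (the
fb_tunnels= parameter name is the one documented in kernel-parameters.txt):

::

  # At run time, before the tunnel modules are loaded:
  sysctl -w net.core.fb_tunnels_only_for_init_net=2

  # At boot, for built-in tunnel drivers, on the kernel command line:
  #   fb_tunnels=none
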
devconf_inherit_init_net
------------------------

Controls if a new network namespace should inherit all current
settings under /proc/sys/net/{ipv4,ipv6}/conf/{all,default}/. By
default, we keep the current behavior: for IPv4 we inherit all current
settings from init_net and for IPv6 we reset all settings to default.

If set to 1, both IPv4 and IPv6 settings are forced to inherit from
current ones in init_net. If set to 2, both IPv4 and IPv6 settings are
forced to reset to their default values. If set to 3, both IPv4 and IPv6
settings are forced to inherit from current ones in the netns where this
new netns has been created.

Default : 0  (for compatibility reasons)

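To take effect, the sysctl must be set before the namespace is created; the
namespace name below is a placeholder:

::

  # Have new namespaces inherit conf/ settings from the creating netns
  sysctl -w net.core.devconf_inherit_init_net=3

  # Namespaces created from now on start with the inherited settings
  ip netns add example-ns
  ip netns exec example-ns sysctl net.ipv6.conf.default.disable_ipv6
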
txrehash
--------

Controls the default hash rethink behaviour on a socket when the SO_TXREHASH
option is set to SOCK_TXREHASH_DEFAULT (i.e. not overridden by setsockopt).

If set to 1 (default), hash rethink is performed on the listening socket.
If set to 0, hash rethink is not performed.

gro_normal_batch
----------------

Maximum number of segments to batch up on output of GRO. When a packet
exits GRO, either as a coalesced superframe or as an original packet which
GRO has decided not to coalesce, it is placed on a per-NAPI list. This
list is then passed to the stack when the number of segments reaches the
gro_normal_batch limit.

high_order_alloc_disable
------------------------

By default the allocator for page frags tries to use high order pages (order-3
on x86). While the default behavior gives good results in most cases, some users
might have hit contention in page allocations/freeing. This was especially
true on older kernels (< 5.14) when high-order pages were not stored on per-cpu
lists. This allows opting in to order-0 allocations instead, but is now mostly of
historical importance.

Default: 0

2. /proc/sys/net/unix - Parameters for Unix domain sockets
----------------------------------------------------------

There is only one file in this directory.
unix_dgram_qlen limits the max number of datagrams queued in Unix domain
socket's buffer. It will not take effect unless PF_UNIX flag is specified.


3. /proc/sys/net/ipv4 - IPV4 settings
-------------------------------------
Please see: Documentation/networking/ip-sysctl.rst and
Documentation/admin-guide/sysctl/net.rst for descriptions of these entries.

4. Appletalk
------------

The /proc/sys/net/appletalk directory holds the Appletalk configuration data
when Appletalk is loaded. The configurable parameters are:

aarp-expiry-time
----------------

The amount of time we keep an ARP entry before expiring it. Used to age out
old hosts.

aarp-resolve-time
-----------------

The amount of time we will spend trying to resolve an Appletalk address.

aarp-retransmit-limit
---------------------

The number of times we will retransmit a query before giving up.

aarp-tick-time
--------------

Controls the rate at which expires are checked.

The directory /proc/net/appletalk holds the list of active Appletalk sockets
on a machine.

The fields indicate the DDP type, the local address (in network:node format),
the remote address, the size of the transmit pending queue, the size of the
received queue (bytes waiting for applications to read), the state and the uid
owning the socket.

/proc/net/atalk_iface lists all the interfaces configured for Appletalk. It
shows the name of the interface, its Appletalk address, the network range on
that address (or network number for phase 1 networks), and the status of the
interface.

/proc/net/atalk_route lists each known network route. It lists the target
(network) that the route leads to, the router (may be directly connected), the
route flags, and the device the route is using.

5. TIPC
-------

tipc_rmem
---------

The TIPC protocol now has a tunable for the receive memory, similar to the
tcp_rmem - i.e. a vector of 3 INTEGERs: (min, default, max)

::

    # cat /proc/sys/net/tipc/tipc_rmem
    4252725 34021800        68043600
    #

The max value is set to CONN_OVERLOAD_LIMIT, and the default and min values
are scaled (shifted) versions of that same value.  Note that the min value
is not at this point in time used in any meaningful way, but the triplet is
preserved in order to be consistent with things like tcp_rmem.

named_timeout
-------------

TIPC name table updates are distributed asynchronously in a cluster, without
any form of transaction handling. This means that different race scenarios are
possible. One such is that a name withdrawal sent out by one node and received
by another node may arrive after a second, overlapping name publication already
has been accepted from a third node, although the conflicting updates
originally may have been issued in the correct sequential order.
If named_timeout is nonzero, failed topology updates will be placed on a defer
queue until another event arrives that clears the error, or until the timeout
expires. Value is in milliseconds.
