
TOMOYO Linux Cross Reference
Linux/Documentation/admin-guide/sysctl/net.rst (linux-6.12-rc7)


================================
Documentation for /proc/sys/net/
================================

Copyright

Copyright (c) 1999

        - Terrehon Bowden <terrehon@pacbell.net>
        - Bodo Bauer <bb@ricochet.net>

Copyright (c) 2000

        - Jorge Nerin <comandante@zaralinux.com>

Copyright (c) 2009

        - Shen Feng <shen@cn.fujitsu.com>

For general info and legal blurb, please look in index.rst.

------------------------------------------------------------------------------

This file contains the documentation for the sysctl files in
/proc/sys/net

The interface  to  the  networking  parts  of  the  kernel  is  located  in
/proc/sys/net. The following table shows all possible subdirectories.  You may
see only some of them, depending on your kernel's configuration.


Table : Subdirectories in /proc/sys/net

 ========= =================== = ========== ===================
 Directory Content               Directory  Content
 ========= =================== = ========== ===================
 802       E802 protocol         mptcp      Multipath TCP
 appletalk Appletalk protocol    netfilter  Network Filter
 ax25      AX25                  netrom     NET/ROM
 bridge    Bridging              rose       X.25 PLP layer
 core      General parameter     tipc       TIPC
 ethernet  Ethernet protocol     unix       Unix domain sockets
 ipv4      IP version 4          x25        X.25 protocol
 ipv6      IP version 6
 ========= =================== = ========== ===================

1. /proc/sys/net/core - Network core options
============================================

bpf_jit_enable
--------------

This enables the BPF Just in Time (JIT) compiler. BPF is a flexible
and efficient infrastructure allowing execution of bytecode at various
hook points. It is used in a number of Linux kernel subsystems such
as networking (e.g. XDP, tc), tracing (e.g. kprobes, uprobes, tracepoints)
and security (e.g. seccomp). LLVM has a BPF back end that can compile
restricted C into a sequence of BPF instructions. After program load
through bpf(2) and passing a verifier in the kernel, a JIT will then
translate these BPF proglets into native CPU instructions. There are
two flavors of JITs, the newer eBPF JIT currently supported on:

  - x86_64
  - x86_32
  - arm64
  - arm32
  - ppc64
  - ppc32
  - sparc64
  - mips64
  - s390x
  - riscv64
  - riscv32
  - loongarch64
  - arc

And the older cBPF JIT supported on the following archs:

  - mips
  - sparc

eBPF JITs are a superset of cBPF JITs, meaning the kernel will
migrate cBPF instructions into eBPF instructions and then JIT
compile them transparently. Older cBPF JITs can only translate
tcpdump filters, seccomp rules, etc., but not the eBPF programs
loaded through bpf(2) mentioned above.

Values:

        - 0 - disable the JIT (default value)
        - 1 - enable the JIT
        - 2 - enable the JIT and ask the compiler to emit traces on kernel log.

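For example, the JIT can be switched on at run time and the setting read
back; the session below is illustrative and requires root privileges::

  # sysctl -w net.core.bpf_jit_enable=1
  net.core.bpf_jit_enable = 1
  # cat /proc/sys/net/core/bpf_jit_enable
  1
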
bpf_jit_harden
--------------

This enables hardening for the BPF JIT compiler. Supported are eBPF
JIT backends. Enabling hardening trades off performance, but can
mitigate JIT spraying.

Values:

        - 0 - disable JIT hardening (default value)
        - 1 - enable JIT hardening for unprivileged users only
        - 2 - enable JIT hardening for all users

where "privileged user" in this context means a process having
CAP_BPF or CAP_SYS_ADMIN in the root user namespace.

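For example, hardening can be restricted to JIT requests from unprivileged
users (illustrative session, requires root)::

  # sysctl -w net.core.bpf_jit_harden=1
  net.core.bpf_jit_harden = 1
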
bpf_jit_kallsyms
----------------

When BPF JIT compiler is enabled, then compiled images are unknown
addresses to the kernel, meaning they neither show up in traces nor
in /proc/kallsyms. This enables export of these addresses, which can
be used for debugging/tracing. If bpf_jit_harden is enabled, this
feature is disabled.

Values :

        - 0 - disable JIT kallsyms export (default value)
        - 1 - enable JIT kallsyms export for privileged users only

bpf_jit_limit
-------------

This enforces a global limit for memory allocations to the BPF JIT
compiler in order to reject unprivileged JIT requests once it has
been surpassed. bpf_jit_limit contains the value of the global limit
in bytes.

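The current limit can simply be read from procfs; the number shown below is
only an example and differs between systems and kernel versions::

  # cat /proc/sys/net/core/bpf_jit_limit
  264241152
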
dev_weight
----------

The maximum number of packets that the kernel can handle on a NAPI interrupt,
it's a per-CPU variable. For drivers that support LRO or GRO_HW, a hardware
aggregated packet is counted as one packet in this context.

Default: 64

dev_weight_rx_bias
------------------

RPS (e.g. RFS, aRFS) processing is competing with the registered NAPI poll function
of the driver for the per softirq cycle netdev_budget. This parameter influences
the proportion of the configured netdev_budget that is spent on RPS based packet
processing during RX softirq cycles. It is further meant for making current
dev_weight adaptable for asymmetric CPU needs on RX/TX side of the network stack.
(see dev_weight_tx_bias) It is effective on a per CPU basis. Determination is based
on dev_weight and is calculated multiplicative (dev_weight * dev_weight_rx_bias).

Default: 1

dev_weight_tx_bias
------------------

Scales the maximum number of packets that can be processed during a TX softirq cycle.
Effective on a per CPU basis. Allows scaling of current dev_weight for asymmetric
net stack processing needs. Be careful to avoid making TX softirq processing a CPU hog.

Calculation is based on dev_weight (dev_weight * dev_weight_tx_bias).

Default: 1

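As a worked example with illustrative values: with dev_weight = 64 and
dev_weight_rx_bias = 2, up to 64 * 2 = 128 packets may be spent on RPS based
processing per RX softirq cycle on a CPU, while dev_weight_tx_bias = 1 keeps
the TX quota at 64 * 1 = 64 packets::

  # sysctl -w net.core.dev_weight_rx_bias=2
  net.core.dev_weight_rx_bias = 2
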
default_qdisc
-------------

The default queuing discipline to use for network devices. This allows
overriding the default of pfifo_fast with an alternative. Since the default
queuing discipline is created without additional parameters, it is best suited
to queuing disciplines that work well without configuration, like stochastic
fair queue (sfq), CoDel (codel) or fair queue CoDel (fq_codel). Don't use
queuing disciplines like Hierarchical Token Bucket or Deficit Round Robin
which require setting up classes and bandwidths. Note that physical multiqueue
interfaces still use mq as root qdisc, which in turn uses this default for its
leaves. Virtual devices (like e.g. lo or veth) ignore this setting and instead
default to noqueue.

Default: pfifo_fast

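For example, fq_codel can be made the default and the result inspected on a
physical interface; the interface name eth0 is only an example, and the new
default only applies to qdiscs created after the change (e.g. when a device is
brought up or its qdisc is replaced)::

  # sysctl -w net.core.default_qdisc=fq_codel
  net.core.default_qdisc = fq_codel
  # tc qdisc show dev eth0
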
busy_read
---------

Low latency busy poll timeout for socket reads. (needs CONFIG_NET_RX_BUSY_POLL)
Approximate time in us to busy loop waiting for packets on the device queue.
This sets the default value of the SO_BUSY_POLL socket option.
Can be set or overridden per socket by setting socket option SO_BUSY_POLL,
which is the preferred method of enabling. If you need to enable the feature
globally via sysctl, a value of 50 is recommended.

Will increase power usage.

Default: 0 (off)

busy_poll
---------
Low latency busy poll timeout for poll and select. (needs CONFIG_NET_RX_BUSY_POLL)
Approximate time in us to busy loop waiting for events.
Recommended value depends on the number of sockets you poll on.
For several sockets 50, for several hundreds 100.
For more than that you probably want to use epoll.
Note that only sockets with SO_BUSY_POLL set will be busy polled,
so you want to either selectively set SO_BUSY_POLL on those sockets or set
the net.core.busy_read sysctl globally.

Will increase power usage.

Default: 0 (off)

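For example, both knobs can be set to 50 microseconds, the value suggested
above for a small number of sockets (illustrative session)::

  # sysctl -w net.core.busy_read=50
  net.core.busy_read = 50
  # sysctl -w net.core.busy_poll=50
  net.core.busy_poll = 50
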
mem_pcpu_rsv
------------

Per-cpu reserved forward alloc cache size in page units. Default 1MB per CPU.

rmem_default
------------

The default setting of the socket receive buffer in bytes.

rmem_max
--------

The maximum receive socket buffer size in bytes.

rps_default_mask
----------------

The default RPS CPU mask used on newly created network devices. An empty
mask means RPS disabled by default.

tstamp_allow_data
-----------------
Allow processes to receive tx timestamps looped together with the original
packet contents. If disabled, transmit timestamp requests from unprivileged
processes are dropped unless socket option SOF_TIMESTAMPING_OPT_TSONLY is set.

Default: 1 (on)


wmem_default
------------

The default setting (in bytes) of the socket send buffer.

wmem_max
--------

The maximum send socket buffer size in bytes.

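For example, the receive and send buffer ceilings can be raised to 4 MiB so
that applications may request larger buffers through SO_RCVBUF and SO_SNDBUF
(the values are illustrative)::

  # sysctl -w net.core.rmem_max=4194304
  net.core.rmem_max = 4194304
  # sysctl -w net.core.wmem_max=4194304
  net.core.wmem_max = 4194304
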
message_burst and message_cost
------------------------------

These parameters  are used to limit the warning messages written to the kernel
log from  the  networking  code.  They  enforce  a  rate  limit  to  make  a
denial-of-service attack  impossible. A higher message_cost factor results in
fewer messages that will be written. Message_burst controls when messages will
be dropped.  The  default  settings  limit  warning messages to one every five
seconds.

warnings
--------

This sysctl is now unused.

This was used to control console messages from the networking stack that
occur because of problems on the network like duplicate address or bad
checksums.

These messages are now emitted at KERN_DEBUG and can generally be enabled
and controlled by the dynamic_debug facility.

netdev_budget
-------------

Maximum number of packets taken from all interfaces in one polling cycle (NAPI
poll). In one polling cycle interfaces which are registered to polling are
probed in a round-robin manner. Also, a polling cycle may not exceed
netdev_budget_usecs microseconds, even if netdev_budget has not been
exhausted.

netdev_budget_usecs
-------------------

Maximum number of microseconds in one NAPI polling cycle. Polling
will exit when either netdev_budget_usecs have elapsed during the
poll cycle or the number of packets processed reaches netdev_budget.

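Both limits can be inspected together; the figures shown below are common
defaults and may differ depending on the kernel configuration (e.g. CONFIG_HZ)::

  # sysctl net.core.netdev_budget net.core.netdev_budget_usecs
  net.core.netdev_budget = 300
  net.core.netdev_budget_usecs = 2000
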
netdev_max_backlog
------------------

Maximum number of packets, queued on the INPUT side, when the interface
receives packets faster than kernel can process them.

netdev_rss_key
--------------

RSS (Receive Side Scaling) enabled drivers use a 40 byte host key that is
randomly generated.
Some user space might need to gather its content even if drivers do not
provide ethtool -x support yet.

::

  myhost:~# cat /proc/sys/net/core/netdev_rss_key
  84:50:f4:00:a8:15:d1:a7:e9:7f:1d:60:35:c7:47:25:42:97:74:ca:56:bb:b6:a1:d8: ... (52 bytes total)

File contains nul bytes if no driver ever called netdev_rss_key_fill() function.

Note:
  /proc/sys/net/core/netdev_rss_key contains 52 bytes of key,
  but most drivers only use 40 bytes of it.

::

  myhost:~# ethtool -x eth0
  RX flow hash indirection table for eth0 with 8 RX ring(s):
      0:    0     1     2     3     4     5     6     7
  RSS hash key:
  84:50:f4:00:a8:15:d1:a7:e9:7f:1d:60:35:c7:47:25:42:97:74:ca:56:bb:b6:a1:d8:43:e3:c9:0c:fd:17:55:c2:3a:4d:69:ed:f1:42:89

netdev_tstamp_prequeue
----------------------

If set to 0, RX packet timestamps can be sampled after RPS processing, when
the target CPU processes packets. It might give some delay on timestamps, but
permits distributing the load over several CPUs.

If set to 1 (default), timestamps are sampled as soon as possible, before
queueing.

netdev_unregister_timeout_secs
------------------------------

Unregister network device timeout in seconds.
This option controls the timeout (in seconds) used to issue a warning while
waiting for a network device refcount to drop to 0 during device
unregistration. A lower value may be useful during bisection to detect
a leaked reference faster. A larger value may be useful to prevent false
warnings on slow/loaded systems.
Default value is 10, minimum 1, maximum 3600.

skb_defer_max
-------------

Max size (in skbs) of the per-cpu list of skbs being freed
by the cpu which allocated them. Used by TCP stack so far.

Default: 64

optmem_max
----------

Maximum ancillary buffer size allowed per socket. Ancillary data is a sequence
of struct cmsghdr structures with appended data. TCP tx zerocopy also uses
optmem_max as a limit for its internal structures.

Default : 128 KB

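The current limit can be read back and, if an application needs to pass large
amounts of ancillary data (for instance many SCM_RIGHTS file descriptors),
raised; the numbers below are illustrative::

  # cat /proc/sys/net/core/optmem_max
  131072
  # sysctl -w net.core.optmem_max=262144
  net.core.optmem_max = 262144
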
fb_tunnels_only_for_init_net
----------------------------

Controls if fallback tunnels (like tunl0, gre0, gretap0, erspan0,
sit0, ip6tnl0, ip6gre0) are automatically created. There are 3 possibilities:
(a) value = 0; respective fallback tunnels are created when module is
loaded in every net namespace (backward compatible behavior).
(b) value = 1; [kcmd value: initns] respective fallback tunnels are
created only in init net namespace and every other net namespace will
not have them.
(c) value = 2; [kcmd value: none] fallback tunnels are not created
when a module is loaded in any of the net namespaces. Setting value to
"2" is pointless after boot if these modules are built-in, so there is
a kernel command-line option that can change this default. Please refer to
Documentation/admin-guide/kernel-parameters.txt for additional details.

Not creating fallback tunnels gives control to userspace to create
whatever is needed only and avoid creating devices which are redundant.

Default : 0  (for compatibility reasons)

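For example, the creation of fallback tunnels can be limited to the initial
network namespace at run time (illustrative session; the corresponding
boot-time behaviour is selected with the kernel command-line option mentioned
above)::

  # sysctl -w net.core.fb_tunnels_only_for_init_net=1
  net.core.fb_tunnels_only_for_init_net = 1
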
devconf_inherit_init_net
------------------------

Controls if a new network namespace should inherit all current
settings under /proc/sys/net/{ipv4,ipv6}/conf/{all,default}/. By
default, we keep the current behavior: for IPv4 we inherit all current
settings from init_net and for IPv6 we reset all settings to default.

If set to 1, both IPv4 and IPv6 settings are forced to inherit from
current ones in init_net. If set to 2, both IPv4 and IPv6 settings are
forced to reset to their default values. If set to 3, both IPv4 and IPv6
settings are forced to inherit from current ones in the netns where this
new netns has been created.

Default : 0  (for compatibility reasons)

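For example, to have new namespaces inherit the current init_net settings for
both protocols and then verify the effect in a freshly created namespace (the
namespace name "test" and the inspected setting are only examples)::

  # sysctl -w net.core.devconf_inherit_init_net=1
  net.core.devconf_inherit_init_net = 1
  # ip netns add test
  # ip netns exec test sysctl net.ipv4.conf.default.rp_filter
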
txrehash
--------

Controls default hash rethink behaviour on socket when SO_TXREHASH option is set
to SOCK_TXREHASH_DEFAULT (i. e. not overridden by setsockopt).

If set to 1 (default), hash rethink is performed on listening socket.
If set to 0, hash rethink is not performed.

gro_normal_batch
----------------

Maximum number of the segments to batch up on output of GRO. When a packet
exits GRO, either as a coalesced superframe or as an original packet which
GRO has decided not to coalesce, it is placed on a per-NAPI list. This
list is then passed to the stack when the number of segments reaches the
gro_normal_batch limit.

high_order_alloc_disable
------------------------

By default the allocator for page frags tries to use high order pages (order-3
on x86). While the default behavior gives good results in most cases, some users
might have hit a contention in page allocations/freeing. This was especially
true on older kernels (< 5.14) when high-order pages were not stored on per-cpu
lists. This allows to opt-in for order-0 allocation instead but is now mostly of
historical importance.

Default: 0

2. /proc/sys/net/unix - Parameters for Unix domain sockets
------------------------------------------------------------

There is only one file in this directory.
unix_dgram_qlen limits the max number of datagrams queued in Unix domain
socket's buffer. It will not take effect unless PF_UNIX flag is specified.


3. /proc/sys/net/ipv4 - IPV4 settings
-------------------------------------
Please see: Documentation/networking/ip-sysctl.rst and
Documentation/admin-guide/sysctl/net.rst for descriptions of these entries.


4. Appletalk
------------

The /proc/sys/net/appletalk  directory  holds the Appletalk configuration data
when Appletalk is loaded. The configurable parameters are:

aarp-expiry-time
----------------

The amount  of  time  we keep an ARP entry before expiring it. Used to age out
old hosts.

aarp-resolve-time
-----------------

The amount of time we will spend trying to resolve an Appletalk address.

aarp-retransmit-limit
---------------------

The number of times we will retransmit a query before giving up.

aarp-tick-time
--------------

Controls the rate at which expires are checked.

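When the appletalk module is loaded, these parameters can be listed and read
directly from procfs (illustrative session)::

  # ls /proc/sys/net/appletalk/
  aarp-expiry-time  aarp-resolve-time  aarp-retransmit-limit  aarp-tick-time
  # cat /proc/sys/net/appletalk/aarp-expiry-time
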
The directory  /proc/net/appletalk  holds the list of active Appletalk sockets
on a machine.

The fields  indicate  the DDP type, the local address (in network:node format)
the remote  address,  the  size of the transmit pending queue, the size of the
received queue  (bytes waiting for applications to read) the state and the uid
owning the socket.

/proc/net/atalk_iface lists  all  the  interfaces  configured for appletalk. It
shows the  name  of the interface, its Appletalk address, the network range on
that address  (or  network number for phase 1 networks), and the status of the
interface.

/proc/net/atalk_route lists  each  known  network  route.  It lists the target
(network) that the route leads to, the router (may be directly connected), the
route flags, and the device the route is using.

5. TIPC
-------

tipc_rmem
---------

The TIPC protocol now has a tunable for the receive memory, similar to the
tcp_rmem - i.e. a vector of 3 INTEGERs: (min, default, max)

::

    # cat /proc/sys/net/tipc/tipc_rmem
    4252725 34021800        68043600
    #

The max value is set to CONN_OVERLOAD_LIMIT, and the default and min values
are scaled (shifted) versions of that same value.  Note that the min value
is not at this point in time used in any meaningful way, but the triplet is
preserved in order to be consistent with things like tcp_rmem.

named_timeout
-------------

TIPC name table updates are distributed asynchronously in a cluster, without
any form of transaction handling. This means that different race scenarios are
possible. One such is that a name withdrawal sent out by one node and received
by another node may arrive after a second, overlapping name publication already
has been accepted from a third node, although the conflicting updates
originally may have been issued in the correct sequential order.
If named_timeout is nonzero, failed topology updates will be placed on a defer
queue until another event arrives that clears the error, or until the timeout
expires. Value is in milliseconds.
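
For example, deferred name-table updates can be retried for up to two seconds
(the value is illustrative and given in milliseconds)::

  # sysctl -w net.tipc.named_timeout=2000
  net.tipc.named_timeout = 2000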
                                                      
