================================
Documentation for /proc/sys/net/
================================

Copyright

Copyright (c) 1999

- Terrehon Bowden <terrehon@pacbell.net>
- Bodo Bauer <bb@ricochet.net>

Copyright (c) 2000

- Jorge Nerin <comandante@zaralinux.com>

Copyright (c) 2009

- Shen Feng <shen@cn.fujitsu.com>

For general info and legal blurb, please look in index.rst.

------------------------------------------------------------------------------

This file contains the documentation for the sysctl files in
/proc/sys/net

The interface to the networking parts of the kernel is located in
/proc/sys/net. The following table shows all possible subdirectories. You may
see only some of them, depending on your kernel's configuration.


Table : Subdirectories in /proc/sys/net

========= =================== = ========== ===================
Directory Content               Directory  Content
========= =================== = ========== ===================
802       E802 protocol         mptcp      Multipath TCP
appletalk Appletalk protocol    netfilter  Network Filter
ax25      AX25                  netrom     NET/ROM
bridge    Bridging              rose       X.25 PLP layer
core      General parameter     tipc       TIPC
ethernet  Ethernet protocol     unix       Unix domain sockets
ipv4      IP version 4          x25        X.25 protocol
ipv6      IP version 6
========= =================== = ========== ===================

1. /proc/sys/net/core - Network core options
============================================

bpf_jit_enable
--------------

This enables the BPF Just in Time (JIT) compiler. BPF is a flexible
and efficient infrastructure that allows executing bytecode at various
hook points. It is used in a number of Linux kernel subsystems such
as networking (e.g. XDP, tc), tracing (e.g. kprobes, uprobes, tracepoints)
and security (e.g. seccomp). LLVM has a BPF back end that can compile
restricted C into a sequence of BPF instructions. After program load
through bpf(2) and passing a verifier in the kernel, a JIT will then
translate these BPF proglets into native CPU instructions. There are
two flavors of JITs, the newer eBPF JIT currently supported on:

- x86_64
- x86_32
- arm64
- arm32
- ppc64
- ppc32
- sparc64
- mips64
- s390x
- riscv64
- riscv32
- loongarch64
- arc

And the older cBPF JIT supported on the following archs:

- mips
- sparc

eBPF JITs are a superset of cBPF JITs, meaning the kernel will
migrate cBPF instructions into eBPF instructions and then JIT
compile them transparently. Older cBPF JITs can only translate
tcpdump filters, seccomp rules, etc, but not the aforementioned eBPF
programs loaded through bpf(2).

Values:

- 0 - disable the JIT (default value)
- 1 - enable the JIT
- 2 - enable the JIT and ask the compiler to emit traces on kernel log.

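As a quick illustration (requires a kernel built with CONFIG_BPF_JIT; the
value may be pinned to 1 when CONFIG_BPF_JIT_ALWAYS_ON is set), the JIT can
be enabled at runtime either through this file or via sysctl::

  # sysctl -w net.core.bpf_jit_enable=1
  net.core.bpf_jit_enable = 1
  # cat /proc/sys/net/core/bpf_jit_enable
  1
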
bpf_jit_harden
--------------

This enables hardening for the BPF JIT compiler. Supported are eBPF
JIT backends. Enabling hardening trades off performance, but can
mitigate JIT spraying.

Values:

- 0 - disable JIT hardening (default value)
- 1 - enable JIT hardening for unprivileged users only
- 2 - enable JIT hardening for all users

where "privileged user" in this context means a process having
CAP_BPF or CAP_SYS_ADMIN in the root user name space.

bpf_jit_kallsyms
----------------

When the BPF JIT compiler is enabled, compiled images are at addresses
unknown to the kernel, meaning they show up neither in traces nor in
/proc/kallsyms. This enables export of these addresses, which can be
used for debugging/tracing. If bpf_jit_harden is enabled, this feature
is disabled.

Values:

- 0 - disable JIT kallsyms export (default value)
- 1 - enable JIT kallsyms export for privileged users only

bpf_jit_limit
-------------

This enforces a global limit for memory allocations to the BPF JIT
compiler in order to reject unprivileged JIT requests once it has
been surpassed. bpf_jit_limit contains the value of the global limit
in bytes.

dev_weight
----------

The maximum number of packets that the kernel can handle on a NAPI
interrupt; it is a per-CPU variable. For drivers that support LRO or
GRO_HW, a hardware aggregated packet is counted as one packet in this
context.

Default: 64

dev_weight_rx_bias
------------------

RPS (e.g. RFS, aRFS) processing competes with the registered NAPI poll
function of the driver for the per softirq cycle netdev_budget. This
parameter influences the proportion of the configured netdev_budget that
is spent on RPS based packet processing during RX softirq cycles. It is
further meant for making current dev_weight adaptable for asymmetric CPU
needs on the RX/TX side of the network stack (see dev_weight_tx_bias).
It is effective on a per-CPU basis. The value is derived from dev_weight
multiplicatively (dev_weight * dev_weight_rx_bias).

Default: 1

dev_weight_tx_bias
------------------

Scales the maximum number of packets that can be processed during a TX
softirq cycle. Effective on a per-CPU basis. Allows scaling of the current
dev_weight for asymmetric net stack processing needs. Be careful to avoid
making TX softirq processing a CPU hog.

Calculation is based on dev_weight (dev_weight * dev_weight_tx_bias).

Default: 1

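As a worked example of the multiplication above (values are purely
illustrative): with the default dev_weight of 64, setting
dev_weight_rx_bias to 2 gives the RPS/backlog path a quota of
64 * 2 = 128 packets per RX softirq cycle, and similarly on the TX side::

  # sysctl -w net.core.dev_weight_rx_bias=2
  net.core.dev_weight_rx_bias = 2
  # sysctl -w net.core.dev_weight_tx_bias=2
  net.core.dev_weight_tx_bias = 2
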
default_qdisc
-------------

The default queuing discipline to use for network devices. This allows
overriding the default of pfifo_fast with an alternative. Since the default
queuing discipline is created without additional parameters, it is best suited
to queuing disciplines that work well without configuration, like stochastic
fair queue (sfq), CoDel (codel) or fair queue CoDel (fq_codel). Don't use
queuing disciplines like Hierarchical Token Bucket or Deficit Round Robin
which require setting up classes and bandwidths. Note that physical multiqueue
interfaces still use mq as root qdisc, which in turn uses this default for its
leaves. Virtual devices (e.g. lo or veth) ignore this setting and instead
default to noqueue.

Default: pfifo_fast

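For example (eth0 is a placeholder interface name), the default can be
switched to fq_codel; since it only applies to qdiscs created afterwards,
an existing interface can be updated explicitly with tc::

  # sysctl -w net.core.default_qdisc=fq_codel
  net.core.default_qdisc = fq_codel
  # tc qdisc replace dev eth0 root fq_codel
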
busy_read
---------

Low latency busy poll timeout for socket reads. (needs CONFIG_NET_RX_BUSY_POLL)
Approximate time in us to busy loop waiting for packets on the device queue.
This sets the default value of the SO_BUSY_POLL socket option.
It can be set or overridden per socket via the SO_BUSY_POLL socket option,
which is the preferred method of enabling. If you need to enable the feature
globally via sysctl, a value of 50 is recommended.

Will increase power usage.

Default: 0 (off)

busy_poll
---------

Low latency busy poll timeout for poll and select. (needs CONFIG_NET_RX_BUSY_POLL)
Approximate time in us to busy loop waiting for events.
The recommended value depends on the number of sockets you poll on.
For several sockets use 50, for several hundred use 100.
For more than that you probably want to use epoll.
Note that only sockets with SO_BUSY_POLL set will be busy polled,
so you want to either selectively set SO_BUSY_POLL on those sockets or set
net.core.busy_read globally.

Will increase power usage.

Default: 0 (off)

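A minimal sketch of enabling busy polling globally, using the starting
values recommended above (both are in microseconds)::

  # sysctl -w net.core.busy_read=50
  net.core.busy_read = 50
  # sysctl -w net.core.busy_poll=50
  net.core.busy_poll = 50
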
mem_pcpu_rsv
------------

Per-cpu reserved forward alloc cache size in page units. Default 1MB per CPU.

rmem_default
------------

The default setting of the socket receive buffer in bytes.

rmem_max
--------

The maximum receive socket buffer size in bytes.

rps_default_mask
----------------

The default RPS CPU mask used on newly created network devices. An empty
mask means RPS disabled by default.

tstamp_allow_data
-----------------

Allow processes to receive tx timestamps looped together with the original
packet contents. If disabled, transmit timestamp requests from unprivileged
processes are dropped unless socket option SOF_TIMESTAMPING_OPT_TSONLY is set.

Default: 1 (on)


wmem_default
------------

The default setting (in bytes) of the socket send buffer.

wmem_max
--------

The maximum send socket buffer size in bytes.

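For example (4 MB is an illustrative value, not a recommendation), raising
the caps allows applications to request larger buffers via
setsockopt(SO_RCVBUF/SO_SNDBUF), which is bounded by rmem_max/wmem_max::

  # sysctl -w net.core.rmem_max=4194304
  net.core.rmem_max = 4194304
  # sysctl -w net.core.wmem_max=4194304
  net.core.wmem_max = 4194304
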
message_burst and message_cost
------------------------------

These parameters are used to limit the warning messages written to the kernel
log from the networking code. They enforce a rate limit to make a
denial-of-service attack impossible. A higher message_cost factor results in
fewer messages being written. Message_burst controls when messages will
be dropped. The default settings limit warning messages to one every five
seconds.

warnings
--------

This sysctl is now unused.

This was used to control console messages from the networking stack that
occurred because of problems on the network, like duplicate addresses or bad
checksums.

These messages are now emitted at KERN_DEBUG and can generally be enabled
and controlled by the dynamic_debug facility.

netdev_budget
-------------

Maximum number of packets taken from all interfaces in one polling cycle (NAPI
poll). In one polling cycle interfaces which are registered to polling are
probed in a round-robin manner. Also, a polling cycle may not exceed
netdev_budget_usecs microseconds, even if netdev_budget has not been
exhausted.

netdev_budget_usecs
-------------------

Maximum number of microseconds in one NAPI polling cycle. Polling
will exit when either netdev_budget_usecs have elapsed during the
poll cycle or the number of packets processed reaches netdev_budget.

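Before tuning either knob, it can help to check whether polling actually
runs out of budget: the third hexadecimal column of /proc/net/softnet_stat
(one row per CPU) counts how often an RX softirq stopped early because
netdev_budget or netdev_budget_usecs was exhausted. The values below are
illustrative only::

  # cat /proc/net/softnet_stat
  # sysctl -w net.core.netdev_budget=600
  net.core.netdev_budget = 600
  # sysctl -w net.core.netdev_budget_usecs=8000
  net.core.netdev_budget_usecs = 8000
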
netdev_max_backlog
------------------

Maximum number of packets, queued on the INPUT side, when the interface
receives packets faster than the kernel can process them.

netdev_rss_key
--------------

RSS (Receive Side Scaling) enabled drivers use a 40 bytes host key that is
randomly generated.
Some user space might need to gather its content even if drivers do not
provide ethtool -x support yet.

::

  myhost:~# cat /proc/sys/net/core/netdev_rss_key
  84:50:f4:00:a8:15:d1:a7:e9:7f:1d:60:35:c7:47:25:42:97:74:ca:56:bb:b6:a1:d8: ... (52 bytes total)

File contains nul bytes if no driver ever called netdev_rss_key_fill() function.

Note:
  /proc/sys/net/core/netdev_rss_key contains 52 bytes of key,
  but most drivers only use 40 bytes of it.

::

  myhost:~# ethtool -x eth0
  RX flow hash indirection table for eth0 with 8 RX ring(s):
      0:    0     1     2     3     4     5     6     7
  RSS hash key:
  84:50:f4:00:a8:15:d1:a7:e9:7f:1d:60:35:c7:47:25:42:97:74:ca:56:bb:b6:a1:d8:43:e3:c9:0c:fd:17:55:c2:3a:4d:69:ed:f1:42:89

netdev_tstamp_prequeue
----------------------

If set to 0, RX packet timestamps can be sampled after RPS processing, when
the target CPU processes packets. It might give some delay on timestamps, but
permits distributing the load over several cpus.

If set to 1 (default), timestamps are sampled as soon as possible, before
queueing.

netdev_unregister_timeout_secs
------------------------------

Unregister network device timeout in seconds.
This option controls the timeout (in seconds) used to issue a warning while
waiting for a network device refcount to drop to 0 during device
unregistration. A lower value may be useful during bisection to detect
a leaked reference faster. A larger value may be useful to prevent false
warnings on slow/loaded systems.
Default value is 10, minimum 1, maximum 3600.

skb_defer_max
-------------

Max size (in skbs) of the per-cpu list of skbs being freed
by the cpu which allocated them. Used by TCP stack so far.

Default: 64

optmem_max
----------

Maximum ancillary buffer size allowed per socket. Ancillary data is a sequence
of struct cmsghdr structures with appended data. TCP tx zerocopy also uses
optmem_max as a limit for its internal structures.

Default : 128 KB

fb_tunnels_only_for_init_net
----------------------------

Controls if fallback tunnels (like tunl0, gre0, gretap0, erspan0,
sit0, ip6tnl0, ip6gre0) are automatically created. There are 3 possibilities
(a) value = 0; respective fallback tunnels are created when module is
loaded in every net namespace (backward compatible behavior).
(b) value = 1; [kcmd value: initns] respective fallback tunnels are
created only in init net namespace and every other net namespace will
not have them.
(c) value = 2; [kcmd value: none] fallback tunnels are not created
when a module is loaded in any of the net namespaces. Setting value to
"2" is pointless after boot if these modules are built-in, so there is
a kernel command-line option that can change this default. Please refer to
Documentation/admin-guide/kernel-parameters.txt for additional details.

Not creating fallback tunnels gives control to userspace to create
only what is needed and avoid creating devices which are redundant.

Default : 0 (for compatibility reasons)

devconf_inherit_init_net
------------------------

Controls if a new network namespace should inherit all current
settings under /proc/sys/net/{ipv4,ipv6}/conf/{all,default}/. By
default, we keep the current behavior: for IPv4 we inherit all current
settings from init_net and for IPv6 we reset all settings to default.

If set to 1, both IPv4 and IPv6 settings are forced to inherit from
current ones in init_net. If set to 2, both IPv4 and IPv6 settings are
forced to reset to their default values. If set to 3, both IPv4 and IPv6
settings are forced to inherit from current ones in the netns where this
new netns has been created.

Default : 0 (for compatibility reasons)

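A hedged sketch of mode 1 ("example" is an arbitrary namespace name): with
inheritance forced, a new namespace starts from init_net's current values
instead of the kernel defaults::

  # sysctl -w net.core.devconf_inherit_init_net=1
  net.core.devconf_inherit_init_net = 1
  # sysctl -w net.ipv6.conf.default.forwarding=1
  net.ipv6.conf.default.forwarding = 1
  # ip netns add example
  # ip netns exec example sysctl net.ipv6.conf.default.forwarding
  net.ipv6.conf.default.forwarding = 1
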
txrehash
--------

Controls default hash rethink behaviour on socket when SO_TXREHASH option is
set to SOCK_TXREHASH_DEFAULT (i.e. not overridden by setsockopt).

If set to 1 (default), hash rethink is performed on listening socket.
If set to 0, hash rethink is not performed.

gro_normal_batch
----------------

Maximum number of the segments to batch up on output of GRO. When a packet
exits GRO, either as a coalesced superframe or as an original packet which
GRO has decided not to coalesce, it is placed on a per-NAPI list. This
list is then passed to the stack when the number of segments reaches the
gro_normal_batch limit.

high_order_alloc_disable
------------------------

By default the allocator for page frags tries to use high order pages (order-3
on x86). While the default behavior gives good results in most cases, some
users might have hit a contention in page allocations/freeing. This was
especially true on older kernels (< 5.14) when high-order pages were not
stored on per-cpu lists. This allows opting in to order-0 allocation instead
but is now mostly of historical importance.

Default: 0

2. /proc/sys/net/unix - Parameters for Unix domain sockets
----------------------------------------------------------

There is only one file in this directory.
unix_dgram_qlen limits the max number of datagrams queued in a Unix domain
socket's buffer. It only takes effect for PF_UNIX sockets.

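For example, assuming the file is exposed as max_dgram_qlen under
/proc/sys/net/unix/ (the name used by recent kernels; 128 is an
illustrative value)::

  # sysctl -w net.unix.max_dgram_qlen=128
  net.unix.max_dgram_qlen = 128
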
3. /proc/sys/net/ipv4 - IPV4 settings
-------------------------------------

Please see: Documentation/networking/ip-sysctl.rst and
Documentation/admin-guide/sysctl/net.rst for descriptions of these entries.


4. Appletalk
------------

The /proc/sys/net/appletalk directory holds the Appletalk configuration data
when Appletalk is loaded. The configurable parameters are:

aarp-expiry-time
----------------

The amount of time we keep an ARP entry before expiring it. Used to age out
old hosts.

aarp-resolve-time
-----------------

The amount of time we will spend trying to resolve an Appletalk address.

aarp-retransmit-limit
---------------------

The number of times we will retransmit a query before giving up.

aarp-tick-time
--------------

Controls the rate at which expires are checked.

The directory /proc/net/appletalk holds the list of active Appletalk sockets
on a machine.

The fields indicate the DDP type, the local address (in network:node format),
the remote address, the size of the transmit pending queue, the size of the
received queue (bytes waiting for applications to read), the state and the uid
owning the socket.

/proc/net/atalk_iface lists all the interfaces configured for appletalk. It
shows the name of the interface, its Appletalk address, the network range on
that address (or network number for phase 1 networks), and the status of the
interface.

/proc/net/atalk_route lists each known network route. It lists the target
(network) that the route leads to, the router (may be directly connected), the
route flags, and the device the route is using.

5. TIPC
-------

tipc_rmem
---------

The TIPC protocol now has a tunable for the receive memory, similar to the
tcp_rmem - i.e. a vector of 3 INTEGERs: (min, default, max)

::

  # cat /proc/sys/net/tipc/tipc_rmem
  4252725 34021800 68043600
  #

The max value is set to CONN_OVERLOAD_LIMIT, and the default and min values
are scaled (shifted) versions of that same value. Note that the min value
is not at this point in time used in any meaningful way, but the triplet is
preserved in order to be consistent with things like tcp_rmem.

named_timeout
-------------

TIPC name table updates are distributed asynchronously in a cluster, without
any form of transaction handling. This means that different race scenarios are
possible. One such is that a name withdrawal sent out by one node and received
by another node may arrive after a second, overlapping name publication already
has been accepted from a third node, although the conflicting updates
originally may have been issued in the correct sequential order.
If named_timeout is nonzero, failed topology updates will be placed on a defer
queue until another event arrives that clears the error, or until the timeout
expires. Value is in milliseconds.

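For example (one second is an illustrative choice, not a recommendation),
failed updates can be deferred like this::

  # sysctl -w net.tipc.named_timeout=1000
  net.tipc.named_timeout = 1000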