— receive buffer configuration
In general, there are two ways to control how large a TCP socket receive buffer can grow on Linux:
- You can set SO_RCVBUF via setsockopt() explicitly as the max receive buffer size on an individual TCP/UDP socket.
- Or you can leave it to the operating system and let it auto-tune the buffer dynamically, using the global tcp_rmem values as a hint.
- … both values are capped. Two /proc settings matter here:
  - /proc/sys/net/core/rmem_max is a global hard limit on all sockets (TCP and UDP). I see 256M on my system. Can you set it to 1GB? I'm not sure, but it's probably unaffected by the boolean window-scaling flag below.
  - /proc/sys/net/ipv4/tcp_rmem doesn't override SO_RCVBUF. The max value on the RTS system is again 256M. The kernel adjusts each TCP socket's receive buffer dynamically, at runtime.
The Linux tcp(7) manpage explains the relationship between these settings.
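For example, here is a minimal sketch of the explicit route (the 8 MiB request is an arbitrary illustration, not a recommendation). As I understand it, the kernel doubles the requested value to allow for bookkeeping overhead, caps the request at rmem_max for unprivileged processes, and stops auto-tuning a TCP socket once SO_RCVBUF has been set on it:

```c
/* Sketch: request a large TCP receive buffer, then read back what the kernel
 * actually granted. 8 MiB is an example value, not a recommendation. */
#include <stdio.h>
#include <sys/socket.h>
#include <unistd.h>

int main(void) {
    int fd = socket(AF_INET, SOCK_STREAM, 0);
    if (fd < 0) { perror("socket"); return 1; }

    int requested = 8 * 1024 * 1024;            /* 8 MiB, example value */
    if (setsockopt(fd, SOL_SOCKET, SO_RCVBUF,
                   &requested, sizeof(requested)) < 0)
        perror("setsockopt(SO_RCVBUF)");

    int granted = 0;
    socklen_t len = sizeof(granted);
    if (getsockopt(fd, SOL_SOCKET, SO_RCVBUF, &granted, &len) < 0)
        perror("getsockopt(SO_RCVBUF)");

    /* Expect roughly 2x the request, or less if rmem_max clamped it. */
    printf("requested %d bytes, kernel granted %d bytes\n", requested, granted);
    close(fd);
    return 0;
}
```

Requests above rmem_max are silently clamped, so reading the value back with getsockopt() is the only way to see what you actually got.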
Note that large TCP receive buffers are usually required for high-latency, high-bandwidth, high-volume connections (the buffer needs to cover the path's bandwidth-delay product). Low-latency systems should use smaller TCP buffers.
For high-volume multicast connections, you need large receive buffers to guard against data loss: a UDP sender doesn't obey flow control, so nothing stops it from overflowing the receiver.
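A sketch of such a receiver, using a hypothetical group 239.1.1.1 and port 5000 (made-up values) and a 64 MiB buffer request: the idea is simply to ask for the buffer before joining the group, so bursts have somewhere to land while the application catches up.

```c
/* Sketch: multicast receiver that requests a big SO_RCVBUF up front.
 * Group, port, and buffer size below are hypothetical example values. */
#include <stdio.h>
#include <string.h>
#include <sys/socket.h>
#include <netinet/in.h>
#include <arpa/inet.h>
#include <unistd.h>

int main(void) {
    int fd = socket(AF_INET, SOCK_DGRAM, 0);
    if (fd < 0) { perror("socket"); return 1; }

    int rcvbuf = 64 * 1024 * 1024;              /* 64 MiB, example value */
    if (setsockopt(fd, SOL_SOCKET, SO_RCVBUF, &rcvbuf, sizeof(rcvbuf)) < 0)
        perror("setsockopt(SO_RCVBUF)");

    struct sockaddr_in addr;
    memset(&addr, 0, sizeof(addr));
    addr.sin_family = AF_INET;
    addr.sin_addr.s_addr = htonl(INADDR_ANY);
    addr.sin_port = htons(5000);                /* hypothetical port */
    if (bind(fd, (struct sockaddr *)&addr, sizeof(addr)) < 0) {
        perror("bind"); return 1;
    }

    struct ip_mreq mreq;
    mreq.imr_multiaddr.s_addr = inet_addr("239.1.1.1"); /* hypothetical group */
    mreq.imr_interface.s_addr = htonl(INADDR_ANY);
    if (setsockopt(fd, IPPROTO_IP, IP_ADD_MEMBERSHIP, &mreq, sizeof(mreq)) < 0)
        perror("IP_ADD_MEMBERSHIP");

    char buf[65536];
    ssize_t n = recv(fd, buf, sizeof(buf), 0);  /* read one datagram as a smoke test */
    printf("received %zd bytes\n", n);
    close(fd);
    return 0;
}
```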
/proc/sys/net/ipv4/tcp_window_scaling is a boolean configuration, turned on by default. With window scaling on, the limit on the advertised window size (AWS) rises to about 1GB. If turned off, the AWS is constrained to the 16-bit window field in the TCP header, i.e. 65,535 bytes.
I think this flag affects the AWS and not the receive buffer size.
- If turned on, and the buffer is configured to grow beyond 64KB, then ACKs can advertise an AWS above 64KB.
- If turned off, then we probably (?) don't need a large buffer, since the AWS can only be 64KB or lower.
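The arithmetic behind the 64KB vs 1GB figures, as I understand it: the window field in the TCP header is 16 bits, and the window scale option (RFC 1323 / RFC 7323) allows a shift count of at most 14, so the largest advertisable window is 65,535 × 2^14 ≈ 1 GiB.

```c
/* Worked arithmetic: max advertised window with and without window scaling. */
#include <stdio.h>

int main(void) {
    long unscaled = 65535;            /* raw 16-bit window field */
    long scaled   = 65535L << 14;     /* max shift count allowed by the RFC */
    printf("max window without scaling: %ld bytes\n", unscaled);
    printf("max window with scaling:    %ld bytes (~1 GiB)\n", scaled);
    return 0;
}
```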