Ironically, scp/sftp caused me more bandwidth headaches than WireGuard/OpenVPN. I frequently saw cases where scp/sftp would get 10% or even less of the transfer speed of a plain HTTP(S) connection. Maybe it was due to packet loss, buffer sizes, or QoS/throttling, but I was never able to figure out a definitive solution.
In almost all cases, the reason is OpenSSH's silly limitation of buffer sizes [1].
It caps the amount of data that can be "in the cable" at any one time, and that amount needs to be larger the longer the cable, i.e. the higher the round-trip latency.
> The default SSH window size was 64 - 128 KB, which worked well for interactive sessions, but was severely limiting for bulk transfer in high bandwidth-delay product situations.
> OpenSSH later increased the default SSH window size to 2 MB in 2007.
2 MB is still far too little. It means that on a 100 ms connection you cannot exceed 160 Mbit/s, even if both machines have 10 Gbit/s links.
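To make the arithmetic explicit, here is a small illustrative Python sketch of the bandwidth-delay-product limit (the function names are mine; the 2 MB window and 100 ms round trip are the numbers from above):

    # Throughput ceiling imposed by a fixed in-flight window (bandwidth-delay product).
    # Numbers match the example above: 2 MB window, 100 ms round trip.

    def max_throughput_mbit(window_bytes: float, rtt_s: float) -> float:
        # At most `window_bytes` can be unacknowledged per round trip.
        return window_bytes / rtt_s * 8 / 1e6

    def required_window_mb(target_mbit: float, rtt_s: float) -> float:
        # Window needed to keep the pipe full at `target_mbit`.
        return target_mbit * 1e6 / 8 * rtt_s / 1e6

    rtt = 0.100                                      # 100 ms round trip
    print(max_throughput_mbit(2e6, rtt))             # 160.0 Mbit/s with a 2 MB window
    print(required_window_mb(10_000, rtt))           # 125.0 MB needed for 10 Gbit/s

The gap between the 2 MB default and the roughly 125 MB a 10 Gbit/s, 100 ms path would need is where the huge slowdown comes from.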
OpenSSH is one of the very few TCP programs with garbage throughput on high-latency links, because it layers its own flow-control window on top of TCP's. This is also what makes rsync slow when it runs over SSH.
In my opinion, this should really be fixed in OpenSSH upstream. SSH needs a per-channel window because it multiplexes several channels over one TCP connection, but I do not understand why that window isn't scaled automatically, the way the TCP window is for every other TCP program.
All the big megacorps and almost every other tech company in existence use SSH, yet nobody seems to care that it's artificially ~100x slower than necessary on high-latency links.