What could be reasons for my rsync, which is syncing two remote servers through ssh, to slow down over time like this? It keeps happening. How to check what is the bottleneck?
You aren’t giving us enough information to even speculate about an answer. Are these enterprise-grade servers in a datacenter? Are these homemade boxes with consumer or low-grade hardware that you’re calling servers? Are they in the same datacenter, or does the traffic go out over the Internet? What sits between the hops on the network? Is the latency consistent? What is the quality of both sides of the connection? Fiber? Wi-Fi? Mobile? Satellite?
Does it drop to nothing, or just settle into a constant slower speed? What have you tried to troubleshoot? Is it only rsync, or do other tests between the hosts show the same behavior?
Give us more and you might get some help. If these hosts are Linux, I would start with iperf to run a more scientific test, and report back with more info.
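A minimal sketch of that kind of test, assuming iperf3 is installed on both hosts (`receiver.example.com` is a placeholder for your actual receiving server):

```shell
# On the receiving host: start an iperf3 server (listens on port 5201).
iperf3 -s

# On the sending host: run a 60-second throughput test toward the receiver.
# receiver.example.com is a placeholder -- use the real hostname or IP.
iperf3 -c receiver.example.com -t 60

# If iperf3 holds a steady rate for the full minute while rsync degrades
# over time, the raw network path is probably fine and the bottleneck is
# more likely disk I/O or rsync itself.
```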
Bandwidth (disk and network) is just one metric. Could it be an increase in number of IOPS due to syncing several small files?
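One quick way to sanity-check the small-files theory is to count how much of the tree is tiny files, since per-file overhead (metadata lookups, seeks, IOPS) dominates when there are lots of them. A sketch, where `SRC` is a placeholder for the directory rsync reads from:

```shell
# Count files under 64 KiB vs. the total. A high ratio suggests the sync
# is limited by per-file overhead (IOPS), not raw bandwidth.
SRC=${SRC:-.}   # placeholder: point at your rsync source directory
small=$(find "$SRC" -type f -size -64k | wc -l)
total=$(find "$SRC" -type f | wc -l)
echo "files under 64 KiB: $small of $total"
```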
Yeah, this is what I thought too. Proliferation of small files.
It’s always the disk cache
At ~5GB per HOUR? I don’t think so
It’s the floppy disk cache
Use a VPN to check for a bottleneck. My ISP caps my downloads from Steam at 10 MB/s; with a shitty VPN I get 25+ MB/s.
Could be ISP throttling, at least that’s my experience with cross-country data transfer
If there's high latency, look at tuning your TCP window scaling settings.
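On a Linux host, checking the relevant knobs might look like this (the buffer values at the end are illustrative, not a recommendation for your setup):

```shell
# Window scaling should be enabled (1) -- without it, throughput on a
# high-latency link is capped by a 64 KiB window.
sysctl net.ipv4.tcp_window_scaling

# Inspect the receive/send buffer limits: "min default max" in bytes.
sysctl net.ipv4.tcp_rmem net.ipv4.tcp_wmem

# On a long fat pipe the max (third value) may need raising, e.g.:
# sysctl -w net.ipv4.tcp_rmem="4096 131072 33554432"
```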
I’ve had some luck establishing the bottleneck using strace on both the sender side and receiver side. This will show if the sending rsync is waiting on local reads or remote writes and if the receiving rsync is waiting on network reads or local writes.
This helps find the specific resources to check.
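Roughly, the strace approach above looks like this (`<PID>` is a placeholder for the rsync process ID you find with pgrep):

```shell
# Find the rsync processes on each side.
pgrep -a rsync

# Attach to one and time each syscall (-T). Long read() calls on a file
# descriptor for the disk mean waiting on local reads; long blocking on
# the socket (select/write) means waiting on the network or remote end.
strace -f -T -e trace=read,write,select -p <PID>

# Alternatively, -c prints a summary of time spent per syscall:
# strace -c -f -p <PID>
```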
Humans slow down over time. Computers slow down over time.
Since we don’t know any more details, I put my $.02 on a cheap plastic router that wants to get rebooted sometimes.