Why does SSH/SFTP fail for commands with large output?
We have an SFTP server that was working fine until we added a second ISP.
The connection to the SFTP server does not go through the new ISP; I
confirmed this with tracert. No change was made on the server either. But
since the new ISP was added, some users' SFTP or SSH connections hang or
time out if the executed command produces a large amount of output. Here's
the scenario:
- Ping to the server keeps getting replies the whole time, even while
SSH/SFTP is timing out.
- I can connect to the server; it asks for authentication and lets me log
in.
- If an ls of my root directory returns a small number of files or
folders, the listing displays fine.
- If the ls returns more than roughly 5 or 6 files or folders, the
session hangs and eventually times out.
- While this is happening, a ping to the server continues to return
replies every time.
- This doesn't happen to everyone; it seems to affect users who are in
another city.
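The pattern above (small replies succeed, large replies hang, ICMP ping keeps working) is a classic sign of large packets being dropped along the path, e.g. a path-MTU black hole. One way to check for that is a Don't-Fragment ping probe; a minimal sketch, using 127.0.0.1 as a stand-in for the real server address:

```shell
# Don't-Fragment ping probe (Linux syntax; the Windows equivalent is
# "ping -f -l 1472 <host>"). 127.0.0.1 is a placeholder -- use the
# SFTP server's address in practice. 1472 = 1500-byte Ethernet MTU
# minus 28 bytes of IP + ICMP headers.
ping -c 2 -M do -s 1472 127.0.0.1
# If this size fails toward the server but a smaller one (say -s 1400)
# succeeds, the path MTU is below 1500 and the ICMP "fragmentation
# needed" replies that path-MTU discovery relies on are probably being
# dropped somewhere on the new route.
```

Small ls listings fit in a single small packet and get through; a larger listing fills segments up to the full MSS, and if those oversized packets are silently dropped, the session stalls exactly as described.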
I tried different SFTP clients (FileZilla and WinSCP); both show the same
issue.
I ran Wireshark on my PC (which is outside our network and outside the
city). When SFTP/SSH times out, I see "Retransmission" and "Previous
segment not captured" errors, which leads me to believe there is packet
loss somewhere between the hops:

Expert Info (Note/Sequence): Retransmission (suspected)
Previous segment not captured (common at capture start)
Is SFTP/SSH really that sensitive to packet loss? Wouldn't TCP retransmit
and re-acknowledge to recover from lost packets? Is there anything in the
server settings I can tweak to make this work?
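If the probe above does point to a path-MTU black hole, one commonly suggested workaround (a sketch under that assumption, not confirmed for this setup) is to clamp the TCP MSS on the Linux router or firewall in front of the server, so segments never exceed the usable path MTU:

```shell
# Hypothetical workaround sketch, assuming a Linux router/firewall in
# front of the server; requires root and iptables.
# Clamp the MSS of forwarded TCP SYNs to the discovered path MTU:
iptables -t mangle -A FORWARD -p tcp --tcp-flags SYN,RST SYN \
         -j TCPMSS --clamp-mss-to-pmtu
# Alternatively, pin an explicit MTU on the route toward the affected
# clients (203.0.113.0/24 and 192.0.2.1 are placeholder addresses):
ip route change 203.0.113.0/24 via 192.0.2.1 mtu 1400
```

This would explain why retransmission alone doesn't help: TCP retransmits the same oversized segment, and it gets dropped again every time.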