
I am making an automatic remote SSH port forward request from my device every few minutes:

ssh -y -f -N -R port:localhost:22 user@server \
-o ExitOnForwardFailure=yes -o ServerAliveInterval=60 \
-o StrictHostKeyChecking=no

If the port forwarding is successful and I then reboot my device and reset the internet connection, I am unable to connect to it or establish a new port forward (this is normal and expected, as my IP has changed and the port on the server still appears to be occupied). I have to wait about 1-2 hours for the port to become free again, or I have to kill the process on the server.

I would like to run a script on my device before turning it off that would free the port on the server, so that the new port forward request after reboot succeeds.

To summarize: How can I remotely cancel port forwarding from the device that requested it, instead of killing the process on the server?

2 Answers


tl;dr

Use these sshd options on the server (in the sshd_config file):

ClientAliveCountMax 3
ClientAliveInterval 15

Client-side story

How can I remotely cancel port forwarding from the device that requested it, instead of killing the process on the server?

The simple answer is: just terminate the ssh on the client. Even if you force-kill it, the server should be notified that the connection is terminated (because it's the kernel that does the job).

…if the notification ever gets to the server, that is. I guess this is the problem: networking on your device somehow drops before ssh is terminated, and there is no way to notify the server that this particular SSH connection is no more.

Maybe you can redesign your client-side setup to ensure ssh terminates before anything happens to the network connection. I don't know the details of your client-side system, so I won't tell you exactly what you can do and how. Loose examples: systemd units and their dependencies, if applicable; or a wrapper over reboot and/or shutdown, as sketched below.
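
A minimal sketch of such a wrapper, assuming the tunnel is started with the command from your question (the pkill pattern and the path are placeholders you would adapt):

#!/bin/sh
# Hypothetical wrapper around reboot: terminate the tunnel's ssh client first,
# so the server is notified and frees the forwarded port, then reboot.
pkill -f 'ssh .*-R port:localhost:22'   # adjust the pattern to match your tunnel command
sleep 2                                 # give the TCP stack a moment to deliver the FIN
exec /sbin/reboot "$@"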


Server-side story

I assume the SSH server is standard sshd. Its default config (sshd_config) specifies

TCPKeepAlive yes

From man 5 sshd_config:

TCPKeepAlive

Specifies whether the system should send TCP keepalive messages to the other side. If they are sent, death of the connection or crash of one of the machines will be properly noticed. However, this means that connections will die if the route is down temporarily, and some people find it annoying. On the other hand, if TCP keepalives are not sent, sessions may hang indefinitely on the server, leaving ''ghost'' users and consuming server resources.

The default is yes (to send TCP keepalive messages), and the server will notice if the network goes down or the client host crashes. This avoids infinitely hanging sessions.

This mechanism is not specific to SSH. Explanation:

In Linux, TCP keepalive parameters are:

tcp_keepalive_intvl
tcp_keepalive_probes
tcp_keepalive_time

Their default values are:

tcp_keepalive_time = 7200 (seconds)
tcp_keepalive_intvl = 75 (seconds)
tcp_keepalive_probes = 9 (number of probes)

This means that the keepalive process waits for two hours (7200 seconds) of socket inactivity before sending the first keepalive probe, and then resends a probe every 75 seconds. If no ACK response is received for nine consecutive probes, the connection is marked as broken.

(Source).
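
With these defaults, a dead peer is therefore detected only after roughly 7200 + 9 × 75 = 7875 seconds, i.e. about 2 hours 11 minutes. You can check the values in effect on your server with, for example:

sysctl net.ipv4.tcp_keepalive_time net.ipv4.tcp_keepalive_intvl net.ipv4.tcp_keepalive_probes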

This explains why you "have to wait about 1-2 hours so that the port is free again".

Examples of how you can change these values (if you have the permissions):

  • temporarily

    echo 300 > /proc/sys/net/ipv4/tcp_keepalive_time
    

    or

    sysctl -w net.ipv4.tcp_keepalive_time=300
    
  • permanently, by editing the /etc/sysctl.conf file and adding:

    net.ipv4.tcp_keepalive_time=300
    

    then invoke sudo sysctl -p to apply the change.
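
For completeness, a sketch of /etc/sysctl.conf entries lowering all three parameters (the values are only examples):

# system-wide TCP keepalive tuning (example values)
net.ipv4.tcp_keepalive_time = 300
net.ipv4.tcp_keepalive_intvl = 30
net.ipv4.tcp_keepalive_probes = 5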

But these are system-wide settings; in general, any change will affect more than just sshd. That's why it's better to use an SSH-specific solution. Again from man 5 sshd_config:

ClientAliveCountMax

Sets the number of client alive messages (see below) which may be sent without sshd(8) receiving any messages back from the client. If this threshold is reached while client alive messages are being sent, sshd will disconnect the client, terminating the session. It is important to note that the use of client alive messages is very different from TCPKeepAlive. […] The client alive mechanism is valuable when the client or server depend on knowing when a connection has become inactive.

The default value is 3. If ClientAliveInterval (see below) is set to 15, and ClientAliveCountMax is left at the default, unresponsive SSH clients will be disconnected after approximately 45 seconds. This option applies to protocol version 2 only.

ClientAliveInterval

Sets a timeout interval in seconds after which if no data has been received from the client, sshd(8) will send a message through the encrypted channel to request a response from the client. The default is 0, indicating that these messages will not be sent to the client. This option applies to protocol version 2 only.

If you are able to reconfigure sshd on the server, this is in my opinion the most elegant way. Let the sshd_config contain lines like:

ClientAliveCountMax 3
ClientAliveInterval 15
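
A minimal way to apply this on a typical Linux server (the config file path and the service name may differ on your system):

# as root on the server: add the two ClientAlive* lines to /etc/ssh/sshd_config,
# then check the config and reload the daemon
sshd -t                 # test the configuration for syntax errors
systemctl reload sshd   # or: service ssh reload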

Notes:

  • The -o ServerAliveInterval=60 you use is a similar option for ssh; it lets the client detect a broken connection. It doesn't affect the server, though.
  • you may consider autossh on the client; a sketch follows below.
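
A sketch of an autossh invocation, assuming autossh is installed on the device (port 12345 is a placeholder):

# -M 0 disables autossh's separate monitor port; it then relies on the
# ServerAlive* options to detect a dead connection and restart ssh.
autossh -M 0 -f -N \
  -o ExitOnForwardFailure=yes \
  -o ServerAliveInterval=60 -o ServerAliveCountMax=3 \
  -R 12345:localhost:22 user@server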

Back to the client

Let's suppose you cannot reconfigure sshd on the server. I know you said

instead of killing the process on the server

but in this case killing it may be the best option. Since sshd forks a separate process to serve each connection, it's enough to kill just the right one without affecting the rest. To initiate the killing from the client, you should modify the way you invoke ssh. You said:

every few minutes:

ssh -f …

Instead of this, you may run a script similar to the following one, just once (e.g. via @reboot in crontab). It first tries to kill the previously saved PID (of sshd, on the server), then establishes a tunnel and saves its sshd PID. If the tunnel cannot be established or eventually gets terminated, the script sleeps for a while and loops.

#!/bin/sh
port=12345
address=user@server
while :; do
  # If a PID from a previous run was saved on the server, kill that sshd child
  # to free the forwarded port, then remove the stale PID file.
  ssh $address '[ -f ~/.tunnel.pid ] && kill `cat ~/.tunnel.pid` && rm ~/.tunnel.pid'
  # Establish the tunnel; the remote command saves the PID of the sshd child
  # serving it ($PPID of the remote shell), then sleeps to keep the tunnel open.
  ssh -o ExitOnForwardFailure=yes -R ${port}:localhost:22 $address 'echo $PPID > ~/.tunnel.pid; exec sleep infinity'
  # If the tunnel could not be established or got terminated, wait and retry.
  sleep 60
done

Notes:

  • This is a quick-and-dirty script, a proof of concept; portability was not my priority.
  • For clarity I've omitted some options you originally used.
  • If you run the script twice, the two instances will kill each other's tunnels over and over again.
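
For example, assuming the script is saved as /home/user/tunnel.sh (a hypothetical path) and made executable, the crontab entry could be:

# start the tunnel script once at boot (hypothetical path)
@reboot /home/user/tunnel.sh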
  • Thank you very much for your answer. I will terminate ssh on the client side before rebooting; I didn't know the server would be notified about that. Additionally, I will modify the sshd config on the server.
    – Jacek
    Commented Jan 9, 2018 at 8:16

Forwarding can be canceled using the ssh escape character (~ at the beginning of a line, by default); type ~? for help:

Supported escape sequences:
 ~.   - terminate connection (and any multiplexed sessions)
 ~B   - send a BREAK to the remote system
 ~C   - open a command line
 ~R   - request rekey
 ~V/v - decrease/increase verbosity (LogLevel)
 ~^Z  - suspend ssh
 ~#   - list forwarded connections
 ~&   - background ssh (when waiting for connections to terminate)
 ~?   - this message
 ~~   - send the escape character by typing it twice
(Note that escapes are only recognized immediately after newline.)

and ~# will list forwarded connections, but not really in a way suitable for killing them via a command:

The following connections are open:
  #3 client-session (t4 r0 i0/0 o0/0 fd 8/9 cc -1)

More useful is the command line opened via ~C; type help there to list its commands:

ssh> help
Commands:
      -L[bind_address:]port:host:hostport    Request local forward
      -R[bind_address:]port:host:hostport    Request remote forward
      -D[bind_address:]port                  Request dynamic forward
      -KL[bind_address:]port                 Cancel local forward
      -KR[bind_address:]port                 Cancel remote forward
      -KD[bind_address:]port                 Cancel dynamic forward

so in theory yours could be canceled by opening the command line with ~C and then issuing -KR followed by the forwarded port number, i.e. canceling the remote forward by its listening port (the first port in your -R port:localhost:22, not 22).
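
For example, assuming the tunnel was requested as -R 12345:localhost:22 from an interactive session (this will not work with -f -N, which leaves no terminal to type escapes into), you would press Enter and then type:

~C
ssh> -KR12345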

Whether this will solve your issue is unclear; ports usually take far less than 1-2 hours to clear TIME_WAIT or such...
