Speeding up SSH logins

SSH is great; it’s highly secure, and actually easier to use than insecure alternatives like rsh or Telnet. In fact, it’s so easy to integrate SSH with everything else you do that it’s commonplace to rely on it for all sorts of things. But oddly, that very ubiquity tends to reveal an unexpected problem when you try to use SSH for, say, accessing a revision-control system: merely connecting to the remote end and performing the handshaking necessary to set up the encrypted channel takes an appreciable amount of time.

So herewith instructions on how to eliminate that overhead.

Here’s the short version. First, add these lines to the bottom of your ~/.ssh/config:

Host *
    ControlMaster auto
    ControlPath /tmp/ssh_mux_%h_%p_%r
    ServerAliveInterval 60
    ServerAliveCountMax 60

Then install AutoSSH. Now, whenever you’re going to need to make several connections to a particular remote host, run this command:

autossh -f -M 0 -N remote-host

Now all subsequent SSH connections to remote-host should be screamingly fast. The End.


SSH has long had the ability to do port forwarding: you open an SSH connection to a remote host, the client listens for TCP connections locally, and it forwards each one to a chosen port on the remote side. The extra TCP connections are multiplexed over the same encrypted connection used by the single SSH client. That can be useful for things like accessing a firewalled HTTP proxy from the outside world, as long as you can SSH to a machine inside the firewall.
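As a sketch, with made-up host names (gateway.example.com is a machine you can SSH to inside the firewall; proxy.internal:3128 is the firewalled HTTP proxy):

```shell
# Listen on local port 8080 and tunnel each connection through the
# gateway to proxy.internal:3128. -N means "no remote command, just
# forward"; point your browser's proxy setting at localhost:8080.
ssh -N -L 8080:proxy.internal:3128 gateway.example.com
```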

The essential idea behind making SSH logins take less time is that you can tell SSH to do an equivalent thing for new logins. That is, you log in to a server once, and the SSH client listens for connections on a local (Unix-domain) socket; we can call that client the “master”. Subsequent “slave” invocations of SSH to the same remote host then contact the master and ask it to open a new terminal session to the remote host. Since this just means sending more packets over the existing SSH-over-TCP connection, on which authentication has already been done, the login is blindingly fast.
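You can see the effect by timing a trivial command twice (remote-host standing in for whatever machine you normally log in to, with the configuration from the short version in place):

```shell
# First invocation does the full TCP and crypto handshake and
# becomes the master:
time ssh remote-host true

# Second invocation piggybacks on the master's existing channel,
# so it should come back noticeably faster:
time ssh remote-host true
```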

So, how do you set this up?

First, you need version 3.9 or newer of the OpenSSH client. (That means the version that comes with Debian Etch will do, for example, but not the one in Debian Sarge.) Fortunately, connection sharing is implemented entirely in the client, so it works even when you’re contacting older versions of the server.
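You can check which client you have with:

```shell
# Prints something like "OpenSSH_7.9p1 ..." on standard error;
# anything from 3.9 onwards supports connection sharing.
ssh -V
```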

Next, add the following lines to your ~/.ssh/config file (in the short version above they sit at the bottom, under Host *, so that any host-specific settings earlier in the file still take precedence):

ControlMaster auto
ControlPath /tmp/ssh_mux_%h_%p_%r

The ControlPath directive is the crucial bit; that’s what triggers the connection sharing. Or rather, on its own it causes the SSH client to attempt to reuse an existing connection; if no such connection can be found, the client just creates a new, non-shared connection. A Unix-domain socket is created with the name you designate, with the %h, %p, and %r expandos replaced by the hostname, port, and login username of the remote connection.
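To see what socket name the ControlPath above would produce, you can mimic the expansion by hand (illustration only; SSH performs this substitution itself, and the host and user here are invented):

```shell
# Stand-ins for %h, %p and %r:
host=server.example.com
port=22
user=alice

# The socket the master would create for this connection:
printf '/tmp/ssh_mux_%s_%s_%s\n' "$host" "$port" "$user"
# → /tmp/ssh_mux_server.example.com_22_alice
```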

The ControlMaster directive fills in the blanks: setting it to auto enables opportunistic connection sharing — the first time you connect to a given host, the SSH connection you get will become a master. Then subsequent SSH client invocations to the same location will automatically act as slaves. (To be honest, I’m not sure why that isn’t the default; surely most people want that behaviour if they enable connection sharing with ControlPath.)

There are still other things to improve here. Suppose you SSH to a particular host twice, such that the second connection is a slave of the first. Now you try to close the first connection. Unfortunately, the first connection will hang, waiting for the second to exit.

That’s bad enough; worse still is that this doesn’t directly handle the revision-control scenario I mentioned above, because every time your Subversion-alike client connects to the server, it will create a fresh connection, which will then get discarded on exit.

The second problem could in principle be solved by keeping open a logged-in SSH connection to the machine with the appropriate server, but that doesn’t really help with the first problem: the whole issue is that you don’t want to be tying up a terminal window for each host you need to contact.

Enter AutoSSH. AutoSSH is a very simple program; all it does is execute SSH, and watch the connection. If SSH dies or the connection drops, it restarts it. So you can AutoSSH to your target server, and that will create a master SSH process which will sit in the background without getting in your way. Then every time you run SSH thereafter, you’ll automatically get a slave connection. Problem solved.

Well, almost. AutoSSH seems to be one of those programs that’s gradually evolved towards doing the right thing, while always retaining its old way of doing things. The most convenient way to use it, it seems to me, is to first add the following lines to your ~/.ssh/config, alongside the ControlMaster and ControlPath directives from earlier:

ServerAliveInterval 60
ServerAliveCountMax 60

The ServerAliveInterval tells SSH to send a keepalive message every 60 seconds while the connection is open; that both helps poor-quality NAT routers understand that the NAT table entry for your connection should be kept alive, and helps SSH detect when there’s a network problem between the client and server. The ServerAliveCountMax says that after 60 consecutive unanswered keepalive messages (that is, after an hour of silence) the connection should be dropped. At that point, AutoSSH will notice the SSH client exiting and invoke a fresh one. You can tweak those specific values if you want, but they seem to work well for me.

So having done that, you now need to run AutoSSH with several magic unbreak-me options:

autossh -f -M 0 -N remote-host

The -f option tells AutoSSH to drop to the background before running SSH; -M 0 disables AutoSSH’s built-in monitoring in favour of ServerAliveInterval; and the -N gets passed to the underlying SSH client, telling it to just wait for multiplexed connections without executing any remote command.
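If you do this a lot, you could wrap the invocation in a small shell function; mux is a name I’ve made up, and it leans on the ssh -O check facility described below:

```shell
# mux: hypothetical helper, not part of OpenSSH or AutoSSH.
# Starts a background master for the given host unless one is
# already running (ssh -O check exits zero when a live master
# socket exists for that host).
mux() {
    ssh -O check "$1" >/dev/null 2>&1 || autossh -f -M 0 -N "$1"
}

# Usage:
#   mux remote-host
```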

One last handy trick, while I’m here. If you want to check whether a master connection currently exists, just run ssh -O check remote-host. You get a message to standard error with the answer, which is also reflected in the process exit status in the obvious way. You can also do ssh -O exit remote-host, which is occasionally useful.
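Put together, a quick status check and teardown might look like this (remote-host again standing in for a real host):

```shell
# Exit status of ssh -O check tells you whether a master is up:
if ssh -O check remote-host 2>/dev/null; then
    echo "master connection is up"
else
    echo "no master connection"
fi

# Ask a running master to shut down:
ssh -O exit remote-host
```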