One container per connection
Socket activated sshd running in a docker container


Posted on 2016-08-13

I'm thinking about setting up a wargame-like scenario for work, and thought it would be nice if the users could ssh to some machines for different steps of the game. However, since the users aren't really trusted, I would prefer to only give them access to a container. Moreover, I don't want the different teams to be able to interact with each other on the machine itself, so I needed to spawn one container for every team.

This is when I realized that socket activation is exactly what I want: I just need to bundle it with Docker and all should be well.

Running sshd from docker

First, we need a Dockerfile:

FROM debian:stretch

RUN (\
        export DEBIAN_FRONTEND=noninteractive && \
        apt-get update  && \
        apt-get install -y --no-install-recommends \
            openssh-server \
        && \
        apt-get autoremove && \
        apt-get clean \
    )
RUN (\
    useradd --create-home user && \
    echo 'user:woop' | chpasswd && \
    mkdir /var/run/sshd \
)

CMD ["/usr/sbin/sshd", "-i"]
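The systemd service further down runs the image by name, so build it with that tag (assuming the Dockerfile sits in the current directory):

```shell
# build the image under the tag the service file expects
docker build -t sshd .
```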

Everything in this Dockerfile is there to get sshd running. The -i flag, however, is special: it tells sshd that it is being run from inetd(8), i.e. that the client connection arrives on stdin/stdout instead of a listening socket. This is what we need to get socket activation to work.
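You can try out this inetd(8) behaviour before writing any systemd units: systemd-socket-activate (named systemd-activate and living under /usr/lib/systemd/ on older systems) emulates a socket unit from the command line. A sketch, assuming the image was built with the tag sshd and port 2222 is free:

```shell
# --accept forks one child per connection; --inetd hands the accepted
# connection over on stdin/stdout, just like StandardInput=socket will
systemd-socket-activate --accept --inetd -l 2222 \
    docker run --rm -i sshd
# then, from another terminal: ssh -p 2222 user@localhost
```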

Systemd units

Now we need to define our systemd units. First we define a template unit (sshd@.service) that starts the container with sshd in it. This was heavily inspired by the sshd service file in Arch.

[Unit]
Description=sshd container

[Service]
ExecStart=-/usr/bin/docker run --rm -i sshd
StandardInput=socket
StandardError=syslog

To be honest, I'm not sure if the -i flag is required; however, it helped with debugging because it made the container behave nicely with ^C.

StandardInput=socket is what makes systemd run this service the way inetd(8) would have: the accepted connection is passed in as stdin and stdout.

We also need to set up the sshd.socket unit file.

[Unit]
Description=SSH socket

[Socket]
ListenStream=0.0.0.0:22
Accept=yes
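With both files in place, install and start them (a sketch; paths assumed, and note that a stock sshd already listening on port 22 of the host will conflict, so stop it or pick another port in ListenStream first):

```shell
# install the units and start listening
sudo cp sshd@.service sshd.socket /etc/systemd/system/
sudo systemctl daemon-reload
sudo systemctl start sshd.socket
```

To have the socket come up at boot, add an [Install] section with WantedBy=sockets.target to sshd.socket and run systemctl enable sshd.socket.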

Testing

With all of this up and running, we should be able to connect to the ssh server.

ssh \
    -o UserKnownHostsFile=~/tmp-known_hosts \
    -o ControlMaster=no \
    -o ControlPath=none \
    user@localhost

I use a custom known_hosts file because every time you rebuild the container it will generate a new host key, and I don't want to clutter my ordinary known_hosts file. Moreover, I had to disable ControlMaster, as otherwise ssh would simply reuse the existing connection to the container I had already connected to.

Now for the actual test. For brevity I have omitted the motd.

$ ssh -o UserKnownHostsFile=~/tmp-known_hosts -o ControlMaster=no -o ControlPath=none user@localhost
user@localhost's password: 
$ ls
$ touch asdf

and then, without closing that terminal:

$ ssh -o UserKnownHostsFile=~/tmp-known_hosts -o ControlMaster=no -o ControlPath=none user@localhost
user@localhost's password: 
$ ls
$

Success! The asdf file from the first session is nowhere to be seen, so each connection really did get its own container.
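Under the hood, each connection became its own instance of the template unit. While the sessions are open you can list them (with Accept=yes the instance names encode the local and remote address/port pairs, so yours will differ):

```shell
# list the per-connection service instances spawned by the socket
systemctl list-units 'sshd@*'
```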

Resources

Some resources I used to arrive at this solution: