The simplest Docker deployment
I’m sure, like a lot of people, I’ve ended up with a bunch of personal websites over the years, all hosted and deployed in a very manual way. In my case this was a $5-a-month DigitalOcean Droplet with a handful of sites hosted on it, all deployed manually by doing a `git pull` on the server. There were no development environments for any proper testing beforehand, so issues would generally just reveal themselves ‘in production’, and the site would no doubt end up broken while I fixed them. Not to mention, doing any updates on the server was not a very appealing prospect, as it might bring down some or all of the websites on there.
This is quite a contrast to the setups I’ve ended up using at work, with tools like Kubernetes, automatic deployments, rollbacks, etc. All quite complicated to set up, but they make the deployment process much more predictable, and make it painless to deploy frequently.
What I was interested in was a blend of the two options for personal use. Something still relatively simple, but also getting the benefits of Docker containers and the ease of deployment that brings.
That’s the setup I’m now using and I’ll run through it here.
Overview
I’ve put a repo up at `paulherron/docker-traefik-nginx-example`, but the gist is:
- Traefik running as a front-end router. It automatically gets a valid SSL certificate, picks up requests for the domain you specify and routes requests through to…
- Nginx as the webserver. HTML files, images etc. get served up from here.
- `docker-compose` is used to manage the containers - both locally and in production.
- ‘Production’ vs. ‘local development’ configs are managed in the `docker-compose` configs. Production values are declared in `docker-compose.yml`. Then a handful of overrides are declared in `docker-compose.override.yml`, which is a file that gets parsed automatically if you don’t specify otherwise.
- `docker-machine` allows a production server to be started and accessed.
- GitHub Packages as the Docker registry for storing and serving up our custom Docker image.
The aim is to have Traefik running at the front, and then to declare all other services via Docker labels only. In this simple example I’m just running a single website (i.e. a single Nginx instance) behind Traefik, but it’s possible to run as many websites as you like behind the one instance of Traefik, with each website a separate container and its Traefik config declared with just a few labels in the `docker-compose.yml` file.
This is quite a nice setup, as it’s possible to bring sites up and down behind Traefik with no downtime, and no need to restart servers or explicitly reload configs. It’s all just using labels in `docker-compose.yml` files.
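To make the overview concrete, here’s a pared-down sketch of what such a `docker-compose.yml` can look like. The router name and label values here are illustrative rather than copied from the repo, so treat this as the shape to follow rather than the exact file:

```yaml
version: "3"

services:
  traefik:
    image: traefik:v2.0
    command:
      # Tell Traefik to discover services from Docker labels
      - "--providers.docker=true"
      - "--entrypoints.web.address=:80"
      - "--entrypoints.websecure.address=:443"
    ports:
      - "80:80"
      - "443:443"
    volumes:
      # Traefik watches the Docker socket to spot labelled containers
      - /var/run/docker.sock:/var/run/docker.sock

  nginx:
    image: paulherron/docker-traefik-nginx-example
    labels:
      # These labels are all Traefik needs to start routing to this container
      - "traefik.http.routers.example.rule=Host(`example.paulherron.com`)"
      - "traefik.http.routers.example.tls=true"
```

Adding a second website behind the same Traefik is then just a matter of declaring another service with its own `Host(...)` label - there are no Traefik config files to edit.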
Local development
First, clone the repo:

```shell
git clone https://github.com/paulherron/docker-traefik-nginx-example.git
```
A note about URLs: in the GitHub example I’ve used `example.paulherron.com` as the domain. You might want to replace any mentions of this in the `.yml` files with a domain you control. So for example in `docker-compose.yml` you could replace `example.paulherron.com` with `example.yoursite.com`. And in `docker-compose.override.yml` you could replace `local.example.paulherron.com` with `local.example.yoursite.com`. The handy thing about making these changes in the `.yml` files is they’re just config changes - there’s no need to rebuild the Docker image after making them.
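For reference, the override file only needs to contain the values that differ locally. A sketch of the idea (the real file in the repo may differ):

```yaml
# docker-compose.override.yml - local-only tweaks, layered over docker-compose.yml
services:
  nginx:
    labels:
      # Swap the production domain for the local one
      - "traefik.http.routers.example.rule=Host(`local.example.paulherron.com`)"
```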
Now, fire up the containers locally:

```shell
cd docker-traefik-nginx-example
docker-compose up -d
```
You should then be able to see both containers running:

```shell
$ docker ps
CONTAINER ID   IMAGE                                     COMMAND                  CREATED         STATUS         PORTS                                      NAMES
c24cf3c2d31d   traefik:v2.0                              "/entrypoint.sh --lo…"   3 seconds ago   Up 2 seconds   0.0.0.0:80->80/tcp, 0.0.0.0:443->443/tcp   traefik
5a28a797ae46   paulherron/docker-traefik-nginx-example   "nginx"                  3 seconds ago   Up 2 seconds   8088/tcp                                   example_nginx
```
I’ve set the domain of the local testing site to be `local.example.paulherron.com`, so an entry in the hosts file is needed before viewing that:

```shell
sudo vim /etc/hosts
```

And then add `local.example.paulherron.com` (or whatever domain you specified in your `docker-compose.override.yml`) at the bottom:

```
127.0.0.1 local.example.paulherron.com
```
With that added, going to `local.example.paulherron.com` should show a ‘Hello world’ page. Don’t worry about seeing a warning about the self-signed certificate - that should only happen locally.
Notice how in serving this site up locally, the values of `docker-compose.yml` have been used, and then those from `docker-compose.override.yml` have been layered over the top. This is the default behaviour when doing `docker-compose up` - Docker will automatically merge the two configs together. (Running `docker-compose config` prints the merged result, which is handy for checking what will actually be applied.)
Provisioning a server
The server to deploy to could be hosted anywhere. Amazon AWS is a popular option, but I’m going to use `docker-machine` to create one in DigitalOcean:

```shell
docker-machine create --digitalocean-size "s-1vcpu-1gb" --driver digitalocean --digitalocean-region lon1 --digitalocean-access-token YOUR_ACCESS_TOKEN default
```
This should create a $5-a-month Droplet called `default`, which should appear in the DigitalOcean control panel almost immediately.
That’s the server we’ll call ‘production’ created. I’d of course recommend you take additional steps to secure any server that’ll be visible on the web - for example, adding some firewall rules in DigitalOcean to restrict incoming SSH connections to your own IP address. And if you want to see the example running, you’ll need to create an A record in your DNS pointing to the IP of the Droplet that just got created.
Now, open a second Terminal window, which will become our view onto this new production server, by doing:

```shell
your-second-terminal$ eval $(docker-machine env default)
```
This points Docker (in the current terminal only) to the new server you just created. (If you just run `docker-machine env default` you can see the actual details that are being used.)
To check this is working as expected, do the following to see what’s running:

```shell
your-second-terminal$ docker ps
```
If that command shows no containers running, that’s good. It means Docker is correctly pointing at the server we just started (rather than at the local host, where containers are already running).
Deploying
Deploying the code can be done from our second terminal window. Keep in mind that this terminal is still local (in the sense that we haven’t SSH’d anywhere), but we’ve set its Docker config to point to a remote server. So commands run in it work with the local filesystem, but enact any Docker commands on the remote server.
So we first need to get into the project directory, because that’s where the config files are. For example:

```shell
your-second-terminal$ cd ~/Projects/docker-traefik-nginx-example
```
Then doing the following should pull the Docker images onto the remote server:

```shell
your-second-terminal$ docker-compose -f docker-compose.yml pull
```
Then it’s possible to do `docker-compose up` on the remote server (passing `-d` so the containers run detached and keep running after you close the terminal):

```shell
your-second-terminal$ docker-compose -f docker-compose.yml up -d
```
Note how the `-f docker-compose.yml` option is always passed with these commands. This is telling Docker explicitly to use only the main `docker-compose.yml` file, and not the `docker-compose.override.yml`. This is our simple mechanism for differentiating between ‘production mode’ and ‘local development mode’.
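To save retyping the pull and up commands on each deploy, they can be wrapped in a small shell function - purely a convenience sketch, assuming the terminal has already been pointed at the server with `docker-machine env`:

```shell
# Deploy helper: pull the latest images and restart the containers,
# using only the production config. Run from the project directory, in
# a terminal where `eval $(docker-machine env default)` has been done.
deploy() {
  docker-compose -f docker-compose.yml pull &&
  docker-compose -f docker-compose.yml up -d
}
```

A deploy is then just a matter of running `deploy` from the project directory.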
So that’s it! You’ve deployed a website to a remote server using Docker.
Customising the example
If you wanted to take the example further you could of course start customising the code. As well as changing the code and pushing it to your own code repository, you’d also be looking to change the following in `docker-compose.yml` to something you control:

```yaml
nginx:
  ...
  image: paulherron/docker-traefik-nginx-example
```
This is the URL of the Docker image that gets pulled onto the server. In the above format it’s referring to a project on Docker Hub (the official Docker registry) - anyone can sign up and start creating repositories there. But you can also use a full URL here to make use of any registry that isn’t Docker Hub. A great alternative is using GitHub Packages, by declaring the image like this:
```yaml
nginx:
  ...
  image: docker.pkg.github.com/paulherron/simple-docker-example/example-nginx
```
Docker image URLs in GitHub Packages are declared in a format like `username/repo/package-name`, and this can work great if your code is already in GitHub. You’re pushing the code there anyway, and a short workflow file can get GitHub Actions to automatically build the image when you push code, and upload it to GitHub Packages. Something like this is what I use when I want the resulting image to end up in GitHub Packages.
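For reference, a minimal workflow along those lines might look like the sketch below. The branch name, action version, and image URL are assumptions to adapt to your own repo:

```yaml
# .github/workflows/build.yml
name: Build and push Docker image

on:
  push:
    branches: [master]

jobs:
  build:
    runs-on: ubuntu-latest
    steps:
      - uses: actions/checkout@v2
      # GITHUB_TOKEN is provided automatically to workflows in the repo
      - name: Log in to GitHub Packages
        run: echo "${{ secrets.GITHUB_TOKEN }}" | docker login docker.pkg.github.com -u ${{ github.actor }} --password-stdin
      - name: Build the image
        run: docker build -t docker.pkg.github.com/paulherron/simple-docker-example/example-nginx .
      - name: Push the image
        run: docker push docker.pkg.github.com/paulherron/simple-docker-example/example-nginx
```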