When using Docker containers locally, the usual approach is to expose a port and access the service via `localhost:port`. This works, but it is not a good solution. A few downsides are:
- You quickly lose track of the assigned ports, especially as the number of containers grows
- HTTPS is only available if the container has been configured accordingly - and even then only with a self-signed certificate that triggers a security warning in the web browser
- Password manager entries are harder to keep separate if all services run via `localhost` instead of a unique domain
A better way would be to use subdomains and valid TLS certificates. However, giving each service its own signed certificate from a local certificate authority would mean more administrative work before the service can be used, and you would also have to take care of the certificate authority and signing yourself.
To simplify this process, a reverse proxy like Caddy can be used.
Reverse Proxy
A reverse proxy is a server that sits in front of webservers and forwards requests from clients to these webservers. As a result, they often also take care of security-relevant components of communication such as TLS termination of HTTPS connections.
The use case in this context is similar: we use a reverse proxy to handle requests to `localhost`. Services in Docker containers no longer need to expose a port; communication between the proxy and the services runs over an internal Docker network.
Why Caddy?
Caddy is a modern web server offering a simplified configuration with automatic HTTPS. A reverse proxy block in a Caddyfile looks like this, for example:
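A minimal sketch of such a block - the domain and the upstream address are placeholders:

```caddyfile
example.com {
    reverse_proxy localhost:8080
}
```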
That’s it. Caddy automatically generates the TLS certificates via Let’s Encrypt and sets the usual headers of a reverse proxy, which makes it a simple but powerful alternative to the well-known solutions such as Nginx.
Caddy Module: Caddy-Docker-Proxy
A useful module for Docker services is Caddy-Docker-Proxy. It scans container metadata for labels that indicate that the service should be served by Caddy. From these labels it generates a Caddyfile with the corresponding entries, which makes manual management for Docker containers superfluous. Entries for services outside of Docker can still be managed via the Caddyfile.
Instructions on how to convert Caddyfile directives to labels can be found in the repository.
Example
This example spins up traefik/whoami and adds it to an existing Caddy proxy network. After creation, the container is reachable via `https://whoami.dev.internal`, secured with a TLS certificate signed by Caddy's internal root CA.
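One possible Compose file for this - the proxy network name `caddy` is an assumption, and the labels follow the Caddy-Docker-Proxy label syntax:

```yaml
services:
  whoami:
    image: traefik/whoami
    networks:
      - caddy
    labels:
      # site address for the generated Caddyfile entry
      caddy: whoami.dev.internal
      # {{upstreams 80}} expands to the container's internal address and port
      caddy.reverse_proxy: "{{upstreams 80}}"
      # sign the certificate with Caddy's internal root CA instead of Let's Encrypt
      caddy.tls: internal

networks:
  caddy:
    external: true
```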
Setup Caddy with Docker Compose
First, we create a proxy network. This network is created externally to ensure that a service can join it even if the Caddy stack is not running.
Info
With a shared proxy network, the services can communicate directly with each other. If you want to prevent this behavior, you should create a proxy network for each service.
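Assuming the network is simply called `caddy`:

```shell
docker network create caddy
```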
We then store the following stack in a file; the default name is `docker-compose.yml`. If a different name is used for the file, it must be specified explicitly when calling `docker compose`.
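A possible stack definition, based on the image from the Caddy-Docker-Proxy repository; the volume and network names are assumptions:

```yaml
services:
  caddy:
    image: lucaslorentz/caddy-docker-proxy:ci-alpine
    restart: unless-stopped
    ports:
      - "80:80"
      - "443:443"
    networks:
      - caddy
    volumes:
      # the proxy reads container labels via the Docker socket
      - /var/run/docker.sock:/var/run/docker.sock
      # persists certificates and the internal root CA
      - caddy_data:/data

networks:
  caddy:
    external: true

volumes:
  caddy_data:
```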
Finally, the stack can be started with `docker compose up -d`.
Trust Caddy’s Root CA
For your computer to trust the certificate issued by Caddy, it needs to trust the certificate chain. With a local Caddy installation, one could run `caddy trust` to install the root CA into the system's trust store.
With Docker, however, the container is isolated from the host system and has no direct access to it. The root certificate needs to be copied and trusted manually. Instructions for Linux, macOS and Windows can be found in the Caddy documentation.
For most Linux distributions, the commands are:
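A sketch for Debian- and Ubuntu-based systems, assuming the Caddy container is named `caddy`:

```shell
# copy the root certificate out of the Caddy container
docker cp caddy:/data/caddy/pki/authorities/local/root.crt caddy-root.crt
# add it to the system trust store
sudo cp caddy-root.crt /usr/local/share/ca-certificates/caddy-root.crt
sudo update-ca-certificates
```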
Arch Linux
The way local CA certificates are handled changed in 2014. The corresponding commands would be:
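Assuming the root certificate was copied out of the container as above (the file name `caddy-root.crt` is a placeholder), p11-kit's `trust` tool installs it:

```shell
sudo trust anchor --store caddy-root.crt
```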
Create a docker volume with Caddy’s Root CA
If a container needs to communicate with other services via Caddy and checks the validity of the certificate, it also needs to trust the certificate chain.
The following commands create a Docker volume named `caddy_root_ca` that contains only the root CA and can be mounted in other containers. There, only the trust store needs to be updated, which can be triggered either manually or by overriding `entrypoint` or `command`.
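A sketch, assuming Caddy's data volume is named `caddy_data`; a temporary Alpine container copies the root certificate into the new volume:

```shell
docker volume create caddy_root_ca
# copy only the root CA from Caddy's data volume into the new volume
docker run --rm \
  -v caddy_data:/data:ro \
  -v caddy_root_ca:/root_ca \
  alpine cp /data/caddy/pki/authorities/local/root.crt /root_ca/root.crt
```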
For the container to access the other service via Caddy, an alias needs to be set.
Shortened, incomplete example of a service that is accessible via `https://service.dev.internal` and can be accessed by another Docker service via Caddy:
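A sketch of such a stack; the images and mount path are assumptions, and the trust store update is triggered by overriding `command`:

```yaml
services:
  web:
    image: nginx  # hypothetical upstream service listening on port 80
    networks:
      - caddy
    labels:
      caddy: service.dev.internal
      caddy.reverse_proxy: "{{upstreams 80}}"
      caddy.tls: internal

  client:
    image: alpine  # hypothetical client that talks to web via Caddy
    networks:
      - caddy
    volumes:
      # mount the root CA where update-ca-certificates picks it up
      - caddy_root_ca:/usr/local/share/ca-certificates:ro
    # update the trust store before starting the actual process
    command: sh -c "apk add --no-cache ca-certificates && update-ca-certificates && exec sleep infinity"

networks:
  caddy:
    external: true

volumes:
  caddy_root_ca:
    external: true
```

For the alias mentioned above, the `caddy` service in the proxy stack additionally needs a network alias (`networks.caddy.aliases`, with `service.dev.internal` as an entry) so that the domain resolves to the proxy inside the Docker network.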
After running the trust store update, the `client` service can now communicate with `web` over a trusted HTTPS connection via Caddy.
Local domain
With our reverse proxy, an individual subdomain can be used for each service.
Instead of having to enter each entry manually in `/etc/hosts`, we are going to use a wildcard DNS entry for the subdomain `dev.internal`. Since `.internal` has been reserved by ICANN as a domain name for private application use, this top-level domain is safe to use: it is guaranteed that it will never be installed in the Internet's DNS.
With this entry, the domain itself and all its subdomains are resolved to `localhost`.
Info
Generally, any domain can be used. However, using an existing, globally routed domain can cause problems with name resolution and therefore accessibility. And even a TLD that is not yet registered in the Internet's DNS should only be used with caution as long as it has not been explicitly reserved like `.internal`.
Linux with NetworkManager
- Install dnsmasq
- Change DNS resolver of NetworkManager
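For example via a drop-in file (the exact path is an assumption):

```ini
# /etc/NetworkManager/conf.d/dns.conf
[main]
dns=dnsmasq
```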
- Add DNS entries
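A wildcard entry, since dnsmasq's `address` option matches both the domain and all of its subdomains:

```ini
# /etc/NetworkManager/dnsmasq.d/dev-internal.conf
address=/dev.internal/127.0.0.1
address=/dev.internal/::1
```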
- Reload NetworkManager
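With systemd, for example:

```shell
sudo systemctl reload NetworkManager
```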
macOS
- If not already done: Install Homebrew
- Install dnsmasq
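Via Homebrew:

```shell
brew install dnsmasq
```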
- Add DNS entries
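The same wildcard entries as on Linux, appended to Homebrew's dnsmasq configuration (path shown for Apple Silicon; on Intel Macs, Homebrew lives under `/usr/local`):

```ini
# /opt/homebrew/etc/dnsmasq.conf
address=/dev.internal/127.0.0.1
address=/dev.internal/::1
```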
- Enable autostart
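Started as a service with root privileges, since dnsmasq binds to port 53:

```shell
sudo brew services start dnsmasq
```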
- Add to resolvers
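macOS consults files in `/etc/resolver/` for per-domain resolvers; the following directs queries for `dev.internal` to the local dnsmasq:

```shell
sudo mkdir -p /etc/resolver
echo "nameserver 127.0.0.1" | sudo tee /etc/resolver/dev.internal
```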
Test the setup
First, check whether the domain resolution is working as intended. Open a terminal and use `dig <domain>` (Linux) or `dscacheutil -q host -a name <domain>` (macOS).
Use the command both for the domain itself (e.g. `dev.internal`) and for a subdomain of it (e.g. `a.dev.internal`). Both should return `127.0.0.1` for IPv4 and `::1` for IPv6.
You can then start the example service "whoami" mentioned above. After starting the container, call up the specified URL (e.g. https://whoami.dev.internal). It should be served over HTTPS with a valid certificate and no security warnings.