Documentation Revamp

Co-authored-by: jbdoumenjou <jb.doumenjou@gmail.com>
Gérald Croës 2019-02-26 05:50:07 -08:00 committed by Traefiker Bot
parent 848e45c22c
commit ac6b11037d
174 changed files with 5858 additions and 4002 deletions

# Clustering / High Availability on Docker Swarm with Consul
This guide explains how to use Traefik in high availability mode in a Docker Swarm and with Let's Encrypt.
Why do we need Traefik in cluster mode? Shouldn't running multiple instances work out of the box?
If you want to use Let's Encrypt with Traefik and share the configuration or TLS certificates between several Traefik instances, you need Traefik in cluster/HA mode.
OK, so could we just mount a shared volume used by all the instances? You can, but it will not work.
When you use Let's Encrypt, you need to store more than just the certificates.
When Traefik generates a new certificate, it configures a challenge, and once Let's Encrypt has verified the ownership of the domain, it pings back the challenge.
If the challenge is not known by the other Traefik instances, the validation will fail.
For more information about the challenge: [Automatic Certificate Management Environment (ACME)](https://github.com/ietf-wg-acme/acme/blob/master/draft-ietf-acme-acme.md#http-challenge)
## Prerequisites
You will need a working Docker Swarm cluster.
## Traefik configuration
In this guide, we will not use a TOML configuration file, but only command line flags.
With that, we can use the base image without mounting a configuration file or building a custom image.
What Traefik should do:
- Listen on ports 80 and 443
- Redirect HTTP traffic to HTTPS
- Generate an SSL certificate when a domain is added
- Listen to Docker Swarm events
### EntryPoints configuration
TL;DR:
```shell
$ traefik \
--entrypoints='Name:http Address::80 Redirect.EntryPoint:https' \
--entrypoints='Name:https Address::443 TLS' \
--defaultentrypoints=http,https
```
To listen on different ports, we need to create an entry point for each one.
The CLI syntax is `--entrypoints='Name:a_name Address:an_ip_or_empty:a_port options'`.
To redirect traffic from one entry point to another, use the `Redirect.EntryPoint:entrypoint_name` option.
Since we don't want to configure every service to listen on both http and https, we add a default entry point configuration: `--defaultentrypoints=http,https`.
### Let's Encrypt configuration
TL;DR:
```shell
$ traefik \
--acme \
--acme.storage=/etc/traefik/acme/acme.json \
--acme.entryPoint=https \
--acme.httpChallenge.entryPoint=http \
--acme.email=contact@mydomain.ca
```
Let's Encrypt needs 4 parameters: a TLS entry point to listen to, a non-TLS entry point to allow HTTP challenges, a storage location for the certificates, and an email address for the registration.
To enable Let's Encrypt support, you need to add the `--acme` flag.
Traefik also needs to know where to store the certificates; we can choose between a key in a Key-Value store or a file path: `--acme.storage=my/key` or `--acme.storage=/path/to/acme.json`.
The `acme.httpChallenge.entryPoint` flag enables the `HTTP-01` challenge and specifies the entry point to use during the challenges.
The entry point and email address are set with the `--acme.entryPoint` and `--acme.email` flags.
### Docker configuration
TL;DR:
```shell
$ traefik \
--docker \
--docker.swarmMode \
--docker.domain=mydomain.ca \
--docker.watch
```
To enable Docker and swarm-mode support, you need to add the `--docker` and `--docker.swarmMode` flags.
To watch Docker events, add `--docker.watch`.
### Full docker-compose file
```yaml
version: "3"
services:
traefik:
image: traefik:1.5
command:
- "--api"
- "--entrypoints=Name:http Address::80 Redirect.EntryPoint:https"
- "--entrypoints=Name:https Address::443 TLS"
- "--defaultentrypoints=http,https"
- "--acme"
- "--acme.storage=/etc/traefik/acme/acme.json"
- "--acme.entryPoint=https"
- "--acme.httpChallenge.entryPoint=http"
- "--acme.onHostRule=true"
- "--acme.onDemand=false"
- "--acme.email=contact@mydomain.ca"
- "--docker"
- "--docker.swarmMode"
- "--docker.domain=mydomain.ca"
- "--docker.watch"
volumes:
- /var/run/docker.sock:/var/run/docker.sock
networks:
- webgateway
- traefik
ports:
- target: 80
published: 80
mode: host
- target: 443
published: 443
mode: host
- target: 8080
published: 8080
mode: host
deploy:
mode: global
placement:
constraints:
- node.role == manager
update_config:
parallelism: 1
delay: 10s
restart_policy:
condition: on-failure
networks:
webgateway:
driver: overlay
external: true
traefik:
driver: overlay
```
## Migrate configuration to Consul
We created a special Traefik command to help configure your Key-Value store from a Traefik TOML configuration file and/or CLI flags.
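A one-shot invocation could look like the following sketch (the configuration file path and the Consul endpoint are placeholders to adapt to your setup):
```shell
$ traefik storeconfig \
    -c /etc/traefik/traefik.toml \
    --consul \
    --consul.endpoint=consul:8500 \
    --consul.prefix=traefik
```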
## Deploy a Traefik cluster
The best way we found is to use an initializer service.
This service pushes the configuration to Consul via the `storeconfig` sub-command.
This service will retry until it finishes without error, because Consul may not be ready when the service first tries to push the configuration.
The initializer in a docker-compose file will be:
```yaml
traefik_init:
image: traefik:1.5
command:
- "storeconfig"
- "--api"
[...]
- "--consul"
- "--consul.endpoint=consul:8500"
- "--consul.prefix=traefik"
networks:
- traefik
deploy:
restart_policy:
condition: on-failure
depends_on:
- consul
```
And now, the Traefik part will only have the Consul configuration.
```yaml
traefik:
image: traefik:1.5
depends_on:
- traefik_init
- consul
command:
- "--consul"
- "--consul.endpoint=consul:8500"
- "--consul.prefix=traefik"
[...]
```
!!! note
For Traefik <1.5.0 add `acme.storage=traefik/acme/account` because Traefik does not read it from Consul.
If you need to change the configuration, update the initializer service and re-deploy it.
The new configuration will be stored in Consul, and you need to restart the Traefik node: `docker service update --force traefik_traefik`.
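As a sketch of that update cycle (assuming the stack was deployed under the name `traefik`, which matches the `traefik_traefik` service name used above):
```shell
# Re-deploy so the updated traefik_init service pushes the new configuration to Consul
$ docker stack deploy -c docker-compose.yml traefik
# Restart the Traefik nodes so they reload the configuration from Consul
$ docker service update --force traefik_traefik
```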
## Full docker-compose file
```yaml
version: "3.4"
services:
traefik_init:
image: traefik:1.5
command:
- "storeconfig"
- "--api"
- "--entrypoints=Name:http Address::80 Redirect.EntryPoint:https"
- "--entrypoints=Name:https Address::443 TLS"
- "--defaultentrypoints=http,https"
- "--acme"
- "--acme.storage=traefik/acme/account"
- "--acme.entryPoint=https"
- "--acme.httpChallenge.entryPoint=http"
- "--acme.onHostRule=true"
- "--acme.onDemand=false"
- "--acme.email=foobar@example.com"
- "--docker"
- "--docker.swarmMode"
- "--docker.domain=example.com"
- "--docker.watch"
- "--consul"
- "--consul.endpoint=consul:8500"
- "--consul.prefix=traefik"
networks:
- traefik
deploy:
restart_policy:
condition: on-failure
depends_on:
- consul
traefik:
image: traefik:1.5
depends_on:
- traefik_init
- consul
command:
- "--consul"
- "--consul.endpoint=consul:8500"
- "--consul.prefix=traefik"
volumes:
- /var/run/docker.sock:/var/run/docker.sock
networks:
- webgateway
- traefik
ports:
- target: 80
published: 80
mode: host
- target: 443
published: 443
mode: host
- target: 8080
published: 8080
mode: host
deploy:
mode: global
placement:
constraints:
- node.role == manager
update_config:
parallelism: 1
delay: 10s
restart_policy:
condition: on-failure
consul:
image: consul
command: agent -server -bootstrap-expect=1
volumes:
- consul-data:/consul/data
environment:
- CONSUL_LOCAL_CONFIG={"datacenter":"us_east2","server":true}
- CONSUL_BIND_INTERFACE=eth0
- CONSUL_CLIENT_INTERFACE=eth0
deploy:
replicas: 1
placement:
constraints:
- node.role == manager
restart_policy:
condition: on-failure
networks:
- traefik
networks:
webgateway:
driver: overlay
external: true
traefik:
driver: overlay
volumes:
consul-data:
driver: [not local]
```

# Clustering / High Availability (beta)
This guide explains how to use Traefik in high availability mode.
In order to deploy and configure multiple Traefik instances, without copying the same configuration file on each instance, we will use a distributed Key-Value store.
## Prerequisites
You will need a working KV store cluster.
_(Currently, we recommend [Consul](https://consul.io).)_
## File configuration to KV store migration
We created a special Traefik command to help configure your Key-Value store from a Traefik TOML configuration file.
Please refer to [this section](/user-guide/kv-config/#store-configuration-in-key-value-store) to get more details.
## Deploy a Traefik cluster
Once your Traefik configuration is uploaded on your KV store, you can start each Traefik instance.
A Traefik cluster is based on a manager/worker model.
When starting, Traefik will elect a manager.
If this instance fails, another manager will be automatically elected.
## Traefik cluster and Let's Encrypt
**In cluster mode, ACME certificates have to be stored in [a KV Store entry](/configuration/acme/#as-a-key-value-store-entry).**
Thanks to the Traefik cluster mode algorithm (based on [the Raft Consensus Algorithm](https://raft.github.io/)), only one instance will contact Let's Encrypt to solve the challenges.
The other instances will get the ACME certificates from the KV Store entry.

# Let's Encrypt & Docker
In this use case, we want to use Traefik as a _layer-7_ load balancer with SSL termination for a set of micro-services used to run a web application.
We also want to automatically _discover any services_ on the Docker host and let Traefik reconfigure itself automatically when containers get created (or shut down) so HTTP traffic can be routed accordingly.
In addition, we want to use Let's Encrypt to automatically generate and renew SSL certificates per hostname.
## Setting Up
In order for this to work, you'll need a server with a public IP address, with Docker and docker-compose installed on it.
In this example, we're using the fictitious domain _my-awesome-app.org_.
In real life, you'll want to use your own domain and have the DNS configured accordingly so that the hostname records you want to use point to the aforementioned public IP address.
## Networking
Docker containers can only communicate with each other over TCP when they share at least one network.
This makes sense from a networking point of view: under the hood, Docker creates iptables rules so containers can't reach other containers _unless you explicitly allow it_.
In this example, we're going to use a single network called `web` where all containers that handle HTTP traffic (including Traefik) will reside.
On the Docker host, run the following command:
```shell
docker network create web
```
Now, let's create a directory on the server where we will configure the rest of Traefik:
```shell
mkdir -p /opt/traefik
```
Within this directory, we're going to create 3 empty files:
```shell
touch /opt/traefik/docker-compose.yml
touch /opt/traefik/acme.json && chmod 600 /opt/traefik/acme.json
touch /opt/traefik/traefik.toml
```
The `docker-compose.yml` file will provide us with a simple, consistent, and, more importantly, deterministic way to create Traefik.
The contents of the file are as follows:
```yaml
version: '2'
services:
traefik:
image: traefik:1.5.4
restart: always
ports:
- 80:80
- 443:443
networks:
- web
volumes:
- /var/run/docker.sock:/var/run/docker.sock
- /opt/traefik/traefik.toml:/traefik.toml
- /opt/traefik/acme.json:/acme.json
container_name: traefik
networks:
web:
external: true
```
As you can see, we're mounting the `traefik.toml` file as well as the (empty) `acme.json` file in the container.
We're also mounting the `/var/run/docker.sock` Docker socket in the container, so Traefik can listen to Docker events and reconfigure its own internal configuration when containers are created (or shut down).
We're also making sure the container is automatically restarted by the Docker engine in case of problems (or if the server is rebooted).
We're publishing the default HTTP ports `80` and `443` on the host, and making sure the container is placed within the `web` network we've created earlier on.
Finally, we're giving this container a static name: `traefik`.
Let's take a look at a simple `traefik.toml` configuration as well before we'll create the Traefik container:
```toml
debug = false
logLevel = "ERROR"
defaultEntryPoints = ["https","http"]
[entryPoints]
[entryPoints.http]
address = ":80"
[entryPoints.http.redirect]
entryPoint = "https"
[entryPoints.https]
address = ":443"
[entryPoints.https.tls]
[retry]
[docker]
endpoint = "unix:///var/run/docker.sock"
domain = "my-awesome-app.org"
watch = true
exposedByDefault = false
[acme]
email = "your-email-here@my-awesome-app.org"
storage = "acme.json"
entryPoint = "https"
onHostRule = true
[acme.httpChallenge]
entryPoint = "http"
```
This is the minimum configuration required to do the following:
- Log `ERROR`-level messages (or more severe) to the console, but silence `DEBUG`-level messages
- Check for new versions of Traefik periodically
- Create two entry points, namely an `HTTP` endpoint on port `80`, and an `HTTPS` endpoint on port `443` where all incoming traffic on port `80` will immediately get redirected to `HTTPS`.
- Enable the Docker provider and listen for container events on the Docker unix socket we've mounted earlier. However, **new containers will not be exposed by Traefik by default; we'll get into this in a bit!**
- Enable automatic request and configuration of SSL certificates using Let's Encrypt.
These certificates will be stored in the `acme.json` file, which you can back-up yourself and store off-premises.
Alright, let's boot the container. From the `/opt/traefik` directory, run `docker-compose up -d`, which will create and start the Traefik container.
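If you want to verify that everything came up correctly, the usual docker-compose commands can be used, as in the following sketch:
```shell
cd /opt/traefik
docker-compose up -d
docker-compose ps              # the traefik container should be listed as "Up"
docker-compose logs -f traefik # watch the logs for the ACME registration
```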
## Exposing Web Services to the Outside World
Now that we've fully configured and started Traefik, it's time to get our applications running!
Let's take a simple example of a micro-service project consisting of various services, where some will be exposed to the outside world and some will not.
The `docker-compose.yml` of our project looks like this:
```yaml
version: "2.1"
services:
app:
image: my-docker-registry.com/my-awesome-app/app:latest
depends_on:
db:
condition: service_healthy
redis:
condition: service_healthy
restart: always
networks:
- web
- default
expose:
- "9000"
labels:
- "traefik.docker.network=web"
- "traefik.enable=true"
- "traefik.basic.frontend.rule=Host:app.my-awesome-app.org"
- "traefik.basic.port=9000"
- "traefik.basic.protocol=http"
- "traefik.admin.frontend.rule=Host:admin-app.my-awesome-app.org"
- "traefik.admin.protocol=https"
- "traefik.admin.port=9443"
db:
image: my-docker-registry.com/back-end/5.7
restart: always
redis:
image: my-docker-registry.com/back-end/redis:4-alpine
restart: always
events:
image: my-docker-registry.com/my-awesome-app/events:latest
depends_on:
db:
condition: service_healthy
redis:
condition: service_healthy
restart: always
networks:
- web
- default
expose:
- "3000"
labels:
- "traefik.backend=my-awesome-app-events"
- "traefik.docker.network=web"
- "traefik.frontend.rule=Host:events.my-awesome-app.org"
- "traefik.enable=true"
- "traefik.port=3000"
networks:
web:
external: true
```
Here, we can see a set of services with two applications that we're actually exposing to the outside world.
Notice how there isn't a single container that has any published ports to the host -- everything is routed through Docker networks.
Also, only the containers that we want traffic to get routed to are attached to the `web` network we created at the start of this document.
Since the `traefik` container we've created and started earlier is also attached to this network, HTTP requests can now get routed to these containers.
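As a quick, purely illustrative check (using the fictitious domain of this guide), you can confirm from any machine that resolves the DNS records that requests are routed and terminated with a valid certificate:
```shell
curl -I https://app.my-awesome-app.org
curl -I https://events.my-awesome-app.org
```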
### Labels
As mentioned earlier, we don't want containers exposed automatically by Traefik.
The reason behind this is simple: we want to have control over this process ourselves.
Thanks to Docker labels, we can tell Traefik how to create its internal routing configuration.
Let's take a look at the labels themselves for the `app` service, which is an HTTP web service listening on port 9000:
```yaml
- "traefik.docker.network=web"
- "traefik.enable=true"
- "traefik.basic.frontend.rule=Host:app.my-awesome-app.org"
- "traefik.basic.port=9000"
- "traefik.basic.protocol=http"
- "traefik.admin.frontend.rule=Host:admin-app.my-awesome-app.org"
- "traefik.admin.protocol=https"
- "traefik.admin.port=9443"
```
We use both `container labels` and `service labels`.
#### Container labels
First, we specify the `backend` name which corresponds to the actual service we're routing **to**.
We also tell Traefik to use the `web` network to route HTTP traffic to this container.
With the `traefik.enable` label, we tell Traefik to include this container in its internal configuration.
With the `frontend.rule` label, we tell Traefik that we want to route to this container if the incoming HTTP request contains the `Host` `app.my-awesome-app.org`.
Essentially, this is the actual rule used for Layer-7 load balancing.
Last but not least, we tell Traefik to route **to** port `9000`, since that is the TCP port the container actually listens on.
### Service labels
`Service labels` allow managing many routes for the same container.
When both `container labels` and `service labels` are defined, `container labels` are just used as default values for missing `service labels`, but no frontend/backend will be defined from these labels alone.
In other words, the `traefik.frontend.rule` and `traefik.port` labels described above are only used to complete the information set in `service labels` when the container's frontends/backends are created.
In the example, two service names are defined: `basic` and `admin`.
They allow creating two frontends and two backends.
- `basic` has only one `service label`: `traefik.basic.protocol`.
Traefik will use the values set in `traefik.frontend.rule` and `traefik.port` to create the `basic` frontend and backend.
The frontend listens to incoming HTTP requests containing the `Host` `app.my-awesome-app.org` and forwards them over `HTTP` to port `9000` of the backend.
- `admin` has all the `service labels` needed to create the `admin` frontend and backend (`traefik.admin.frontend.rule`, `traefik.admin.protocol`, `traefik.admin.port`).
Traefik will create a frontend that listens to incoming HTTP requests containing the `Host` `admin-app.my-awesome-app.org` and forwards them over `HTTPS` to port `9443` of the backend.
#### Gotchas and tips
- Always specify the correct port where the container expects HTTP traffic, using the `traefik.port` label.
If a container exposes multiple ports, Traefik may forward traffic to the wrong port.
Even if a container only exposes one port, you should always write your configuration defensively and explicitly.
- Should you choose to enable the `exposedByDefault` flag in the `traefik.toml` configuration, be aware that all containers placed in the same network as Traefik will automatically be reachable from the outside world, for anyone and everyone to see.
Usually, this is a bad idea.
- With the `traefik.frontend.auth.basic` label, Traefik can provide an HTTP basic-auth challenge for the endpoints you add the label to (see the sketch after this list).
- Traefik has built-in support to automatically export [Prometheus](https://prometheus.io) metrics.
- Traefik supports websockets out of the box. In the example above, the `events` service could be a NodeJS-based application which allows clients to connect using the WebSocket protocol.
Since HTTPS is enforced in our example, these websockets are automatically secured as well (WSS).
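To illustrate the basic-auth tip, here is a hedged sketch: the user/hash pair is only a placeholder (generate your own with `htpasswd`), and the `$` characters must be doubled to `$$` inside a docker-compose file:
```yaml
- "traefik.frontend.auth.basic=test:$$apr1$$H6uskkkW$$IgXLP6ewTrSuBkTrqE8wj/"
```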
### Final thoughts
Using Traefik as a Layer-7 load balancer in combination with both Docker and Let's Encrypt provides you with an extremely flexible, powerful and self-configuring solution for your projects.
With Let's Encrypt, your endpoints are automatically secured with production-ready SSL certificates that are renewed automatically as well.

# Examples
Here you will find some Traefik configuration examples.
## HTTP only
```toml
defaultEntryPoints = ["http"]
[entryPoints]
[entryPoints.http]
address = ":80"
```
## HTTP + HTTPS (with SNI)
```toml
defaultEntryPoints = ["http", "https"]
[entryPoints]
[entryPoints.http]
address = ":80"
[entryPoints.https]
address = ":443"
[entryPoints.https.tls]
[[entryPoints.https.tls.certificates]]
certFile = "integration/fixtures/https/snitest.com.cert"
keyFile = "integration/fixtures/https/snitest.com.key"
[[entryPoints.https.tls.certificates]]
certFile = "integration/fixtures/https/snitest.org.cert"
keyFile = "integration/fixtures/https/snitest.org.key"
```
Note that we can either give the path to a certificate file or the file content itself directly ([like in this TOML example](/user-guide/kv-config/#upload-the-configuration-in-the-key-value-store)).
## HTTP redirect on HTTPS
```toml
defaultEntryPoints = ["http", "https"]
[entryPoints]
[entryPoints.http]
address = ":80"
[entryPoints.http.redirect]
entryPoint = "https"
[entryPoints.https]
address = ":443"
[entryPoints.https.tls]
[[entryPoints.https.tls.certificates]]
certFile = "examples/traefik.crt"
keyFile = "examples/traefik.key"
```
!!! note
Please note that `regex` and `replacement` do not have to be set in the `redirect` structure if an entrypoint is defined for the redirection (they will not be used in this case)
## Let's Encrypt support
### Basic example with HTTP challenge
```toml
[entryPoints]
[entryPoints.http]
address = ":80"
[entryPoints.https]
address = ":443"
[entryPoints.https.tls]
[acme]
email = "test@traefik.io"
storage = "acme.json"
caServer = "https://acme-staging-v02.api.letsencrypt.org/directory"
entryPoint = "https"
[acme.httpChallenge]
entryPoint = "http"
[[acme.domains]]
main = "local1.com"
sans = ["test1.local1.com", "test2.local1.com"]
[[acme.domains]]
main = "local2.com"
sans = ["test1.local2.com", "test2x.local2.com"]
[[acme.domains]]
main = "local3.com"
[[acme.domains]]
main = "local4.com"
```
This configuration allows generating Let's Encrypt certificates (thanks to `HTTP-01` challenge) for the four domains `local[1-4].com` with described SANs.
Traefik generates these certificates when it starts, and it needs to be restarted if new domains are added.
### onHostRule option (with HTTP challenge)
```toml
[entryPoints]
[entryPoints.http]
address = ":80"
[entryPoints.https]
address = ":443"
[entryPoints.https.tls]
[acme]
email = "test@traefik.io"
storage = "acme.json"
onHostRule = true
caServer = "https://acme-staging-v02.api.letsencrypt.org/directory"
entryPoint = "https"
[acme.httpChallenge]
entryPoint = "http"
[[acme.domains]]
main = "local1.com"
sans = ["test1.local1.com", "test2.local1.com"]
[[acme.domains]]
main = "local2.com"
sans = ["test1.local2.com", "test2x.local2.com"]
[[acme.domains]]
main = "local3.com"
[[acme.domains]]
main = "local4.com"
```
This configuration allows generating Let's Encrypt certificates (thanks to `HTTP-01` challenge) for the four domains `local[1-4].com`.
Traefik generates these certificates when it starts.
If a backend is added with an `onHost` rule, Traefik will automatically generate the Let's Encrypt certificate for the new domain (for frontends wired to the `acme.entryPoint`).
### OnDemand option (with HTTP challenge)
```toml
[entryPoints]
[entryPoints.http]
address = ":80"
[entryPoints.https]
address = ":443"
[entryPoints.https.tls]
[acme]
email = "test@traefik.io"
storage = "acme.json"
onDemand = true
caServer = "https://acme-staging-v02.api.letsencrypt.org/directory"
entryPoint = "https"
[acme.httpChallenge]
entryPoint = "http"
```
This configuration allows generating a Let's Encrypt certificate (thanks to `HTTP-01` challenge) during the first HTTPS request on a new domain.
!!! note
This option simplifies the configuration, but:
* TLS handshakes will be slow when requesting a hostname certificate for the first time, which can be exploited for denial-of-service attacks.
* Let's Encrypt enforces rate limits: https://letsencrypt.org/docs/rate-limits
That's why it's better to use the `onHostRule` option if possible.
### DNS challenge
```toml
[entryPoints]
[entryPoints.https]
address = ":443"
[entryPoints.https.tls]
[acme]
email = "test@traefik.io"
storage = "acme.json"
caServer = "https://acme-staging-v02.api.letsencrypt.org/directory"
entryPoint = "https"
[acme.dnsChallenge]
provider = "digitalocean" # DNS Provider name (cloudflare, OVH, gandi...)
delayBeforeCheck = 0
[[acme.domains]]
main = "local1.com"
sans = ["test1.local1.com", "test2.local1.com"]
[[acme.domains]]
main = "local2.com"
sans = ["test1.local2.com", "test2x.local2.com"]
[[acme.domains]]
main = "local3.com"
[[acme.domains]]
main = "local4.com"
```
The DNS challenge requires environment variables in order to run.
These variables have to be set on the machine/container that hosts Traefik.
They are described [in this section](/configuration/acme/#provider).
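As a sketch for the `digitalocean` provider used above, the credentials would be exported before launching Traefik; the exact variable name for each provider is listed in the section linked above, so treat the one below as an assumption to verify there:
```shell
export DO_AUTH_TOKEN=your-digitalocean-api-token
traefik -c traefik.toml
```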
### DNS challenge with wildcard domains
```toml
[entryPoints]
[entryPoints.https]
address = ":443"
[entryPoints.https.tls]
[acme]
email = "test@traefik.io"
storage = "acme.json"
caServer = "https://acme-staging-v02.api.letsencrypt.org/directory"
entryPoint = "https"
[acme.dnsChallenge]
provider = "digitalocean" # DNS Provider name (cloudflare, OVH, gandi...)
delayBeforeCheck = 0
[[acme.domains]]
main = "*.local1.com"
[[acme.domains]]
main = "local2.com"
sans = ["test1.local2.com", "test2x.local2.com"]
[[acme.domains]]
main = "*.local3.com"
[[acme.domains]]
main = "*.local4.com"
```
The DNS challenge requires environment variables in order to run.
These variables have to be set on the machine/container that hosts Traefik.
They are described [in this section](/configuration/acme/#provider).
More information about wildcard certificates is available [in this section](/configuration/acme/#wildcard-domains).
### onHostRule option and provided certificates (with HTTP challenge)
```toml
[entryPoints]
[entryPoints.http]
address = ":80"
[entryPoints.https]
address = ":443"
[entryPoints.https.tls]
[[entryPoints.https.tls.certificates]]
certFile = "examples/traefik.crt"
keyFile = "examples/traefik.key"
[acme]
email = "test@traefik.io"
storage = "acme.json"
onHostRule = true
caServer = "http://172.18.0.1:4000/directory"
entryPoint = "https"
[acme.httpChallenge]
entryPoint = "http"
```
Traefik will only try to generate a Let's Encrypt certificate (thanks to the `HTTP-01` challenge) if the domain is not covered by the provided certificates.
### Cluster mode
#### Prerequisites
Before you use Let's Encrypt in a Traefik cluster, take a look at [the key-value store explanations](/user-guide/kv-config) and more precisely at [this section](/user-guide/kv-config/#store-configuration-in-key-value-store), which describes how to migrate from local ACME storage *(the acme.json file)* to a key-value store configuration.
#### Configuration
```toml
[entryPoints]
[entryPoints.http]
address = ":80"
[entryPoints.https]
address = ":443"
[entryPoints.https.tls]
[acme]
email = "test@traefik.io"
storage = "traefik/acme/account"
caServer = "http://172.18.0.1:4000/directory"
entryPoint = "https"
[acme.httpChallenge]
entryPoint = "http"
[[acme.domains]]
main = "local1.com"
sans = ["test1.local1.com", "test2.local1.com"]
[[acme.domains]]
main = "local2.com"
sans = ["test1.local2.com", "test2x.local2.com"]
[[acme.domains]]
main = "local3.com"
[[acme.domains]]
main = "local4.com"
[consul]
endpoint = "127.0.0.1:8500"
watch = true
prefix = "traefik"
```
This configuration allows using the key `traefik/acme/account` to get/set the Let's Encrypt certificates content.
The `consul` provider contains the configuration.
!!! note
It's possible to use other key-value store providers as described [here](/user-guide/kv-config/#key-value-store-configuration).
## Override entrypoints in frontends
```toml
[frontends]
[frontends.frontend1]
backend = "backend2"
[frontends.frontend1.routes.test_1]
rule = "Host:test.localhost"
[frontends.frontend2]
backend = "backend1"
passHostHeader = true
entrypoints = ["https"] # overrides defaultEntryPoints
[frontends.frontend2.routes.test_1]
rule = "Host:{subdomain:[a-z]+}.localhost"
[frontends.frontend3]
entrypoints = ["http", "https"] # overrides defaultEntryPoints
backend = "backend2"
rule = "Path:/test"
```
## Override the Traefik HTTP server idleTimeout and/or throttle configurations from re-loading too quickly
```toml
providersThrottleDuration = "5s"
[respondingTimeouts]
idleTimeout = "360s"
```
## Using labels in docker-compose.yml
Pay attention to the **labels** section:
```
home:
image: abiosoft/caddy:0.10.14
networks:
- ntw_front
volumes:
- ./www/home/srv/:/srv/
deploy:
mode: replicated
replicas: 2
#placement:
# constraints: [node.role==manager]
restart_policy:
condition: on-failure
max_attempts: 5
resources:
limits:
cpus: '0.20'
memory: 9M
reservations:
cpus: '0.05'
memory: 9M
labels:
- "traefik.frontend.rule=PathPrefixStrip:/"
- "traefik.backend=home"
- "traefik.port=2015"
- "traefik.weight=10"
- "traefik.enable=true"
- "traefik.passHostHeader=true"
- "traefik.docker.network=ntw_front"
- "traefik.frontend.entryPoints=http"
- "traefik.backend.loadbalancer.swarm=true"
- "traefik.backend.loadbalancer.method=drr"
```
Something more tricky using `regex`.
In this case, a slash is added to `siteexample.io/portainer` and the request is redirected to `siteexample.io/portainer/`. For more details: https://github.com/containous/traefik/issues/563
The doubled `$$` signs are escaped variables handled by the docker compose file ([documentation](https://docs.docker.com/compose/compose-file/#variable-substitution)).
```
portainer:
image: portainer/portainer:1.16.5
networks:
- ntw_front
volumes:
- /var/run/docker.sock:/var/run/docker.sock
deploy:
mode: replicated
replicas: 1
placement:
constraints: [node.role==manager]
restart_policy:
condition: on-failure
max_attempts: 5
resources:
limits:
cpus: '0.33'
memory: 20M
reservations:
cpus: '0.05'
memory: 10M
labels:
- "traefik.frontend.rule=PathPrefixStrip:/portainer"
- "traefik.backend=portainer"
- "traefik.port=9000"
- "traefik.weight=10"
- "traefik.enable=true"
- "traefik.passHostHeader=true"
- "traefik.docker.network=ntw_front"
- "traefik.frontend.entryPoints=http"
- "traefik.backend.loadbalancer.swarm=true"
- "traefik.backend.loadbalancer.method=drr"
# https://github.com/containous/traefik/issues/563#issuecomment-421360934
- "traefik.frontend.redirect.regex=^(.*)/portainer$$"
- "traefik.frontend.redirect.replacement=$$1/portainer/"
- "traefik.frontend.rule=PathPrefix:/portainer;ReplacePathRegex: ^/portainer/(.*) /$$1"
```

# gRPC examples
## With HTTP (h2c)
This section explains how to use Traefik as a reverse proxy for a gRPC application.
### Traefik configuration
We configure our Traefik instance to route requests to the gRPC backend over h2c:
```toml
defaultEntryPoints = ["https"]
[entryPoints]
[entryPoints.http]
address = ":80"
[entryPoints.http]
[api]
[file]
[backends]
[backends.backend1]
[backends.backend1.servers.server1]
# Access on backend with h2c
url = "h2c://backend.local:8080"
[frontends]
[frontends.frontend1]
backend = "backend1"
[frontends.frontend1.routes.test_1]
rule = "Host:frontend.local"
```
!!! warning
For label-based providers, you will have to specify the `traefik.protocol=h2c` label (see the sketch below).
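For example, with the Docker provider, the labels on the backend container might look like the following sketch (the port value is an assumption matching the `backend.local:8080` URL above):
```yaml
- "traefik.protocol=h2c"
- "traefik.port=8080"
```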
### Conclusion
We don't need any specific configuration to use gRPC in Traefik; we just need to use the `h2c` protocol, or use HTTPS communications to have HTTP/2 with the backend.
## With HTTPS
This section explains how to use Traefik as a reverse proxy for a gRPC application with self-signed certificates.
![gRPC architecture](/img/grpc.svg)
### gRPC Server certificate
In order to secure the gRPC server, we generate a self-signed certificate for the backend URL:
```bash
openssl req -x509 -nodes -days 365 -newkey rsa:2048 -keyout ./backend.key -out ./backend.cert
```
That will prompt for information; the important answer is:
```
Common Name (e.g. server FQDN or YOUR name) []: backend.local
```
### gRPC Client certificate
Generate your self-signed certificate for the frontend URL:
```bash
openssl req -x509 -nodes -days 365 -newkey rsa:2048 -keyout ./frontend.key -out ./frontend.cert
```
with
```
Common Name (e.g. server FQDN or YOUR name) []: frontend.local
```
### Traefik configuration
At last, we configure our Traefik instance to use both self-signed certificates.
```toml
defaultEntryPoints = ["https"]
# For secure connection on backend.local
rootCAs = [ "./backend.cert" ]
[entryPoints]
[entryPoints.https]
address = ":4443"
[entryPoints.https.tls]
# For secure connection on frontend.local
[[entryPoints.https.tls.certificates]]
certFile = "./frontend.cert"
keyFile = "./frontend.key"
[api]
[file]
[backends]
[backends.backend1]
[backends.backend1.servers.server1]
# Access on backend with HTTPS
url = "https://backend.local:8080"
[frontends]
[frontends.frontend1]
backend = "backend1"
[frontends.frontend1.routes.test_1]
rule = "Host:frontend.local"
```
!!! warning
With some backends, the server URLs use the IP, so you may need to configure `insecureSkipVerify` instead of the `rootCAs` option to activate HTTPS without hostname verification.
### A gRPC example in go (modify for https)
We use the gRPC greeter example in [grpc-go](https://github.com/grpc/grpc-go/tree/master/examples/helloworld)
!!! warning
In order to use this gRPC example, we need to modify it to use HTTPS.
So we modify the "gRPC server example" to use our own self-signed certificate:
```go
// ...
// Read cert and key file
BackendCert, _ := ioutil.ReadFile("./backend.cert")
BackendKey, _ := ioutil.ReadFile("./backend.key")
// Generate Certificate struct
cert, err := tls.X509KeyPair(BackendCert, BackendKey)
if err != nil {
log.Fatalf("failed to parse certificate: %v", err)
}
// Create credentials
creds := credentials.NewServerTLSFromCert(&cert)
// Use Credentials in gRPC server options
serverOption := grpc.Creds(creds)
var s *grpc.Server = grpc.NewServer(serverOption)
defer s.Stop()
pb.RegisterGreeterServer(s, &server{})
err = s.Serve(lis)
// ...
```
Next, we will modify the gRPC client to use our Traefik self-signed certificate:
```go
// ...
// Read cert file
FrontendCert, _ := ioutil.ReadFile("./frontend.cert")
// Create CertPool
roots := x509.NewCertPool()
roots.AppendCertsFromPEM(FrontendCert)
// Create credentials
credsClient := credentials.NewClientTLSFromCert(roots, "")
// Dial with specific Transport (with credentials)
conn, err := grpc.Dial("frontend.local:4443", grpc.WithTransportCredentials(credsClient))
if err != nil {
log.Fatalf("did not connect: %v", err)
}
defer conn.Close()
client := pb.NewGreeterClient(conn)
name := "World"
r, err := client.SayHello(context.Background(), &pb.HelloRequest{Name: name})
// ...
```

# Key-value store configuration
Both [static global configuration](/user-guide/kv-config/#static-configuration-in-key-value-store) and [dynamic](/user-guide/kv-config/#dynamic-configuration-in-key-value-store) configuration can be stored in a Key-value store.
This section explains how to launch Traefik using a configuration loaded from a Key-value store.
Traefik supports several Key-value stores:
- [Consul](https://consul.io)
- [etcd](https://coreos.com/etcd/)
- [ZooKeeper](https://zookeeper.apache.org/)
- [boltdb](https://github.com/boltdb/bolt)
## Static configuration in Key-value store
We will see the steps to set it up with an easy example.
!!! note
We could do the same with any other Key-value Store.
### docker-compose file for Consul
The Traefik global configuration will be retrieved from a [Consul](https://consul.io) store.
First we have to launch Consul in a container.
The [docker-compose file](https://docs.docker.com/compose/compose-file/) allows us to launch Consul and four instances of the trivial app [containous/whoami](https://github.com/containous/whoami):
```yaml
consul:
image: progrium/consul
command: -server -bootstrap -log-level debug -ui-dir /ui
ports:
- "8400:8400"
- "8500:8500"
- "8600:53/udp"
expose:
- "8300"
- "8301"
- "8301/udp"
- "8302"
- "8302/udp"
whoami1:
image: containous/whoami
whoami2:
image: containous/whoami
whoami3:
image: containous/whoami
whoami4:
image: containous/whoami
```
### Upload the configuration in the Key-value store
We should now fill the store with the Traefik global configuration.
To do that, we can send the Key-value pairs via [curl commands](https://www.consul.io/intro/getting-started/kv.html) or via the [Web UI](https://www.consul.io/intro/getting-started/ui.html).
Fortunately, Traefik allows automation of this process using the `storeconfig` subcommand.
Please refer to the [store Traefik configuration](/user-guide/kv-config/#store-configuration-in-key-value-store) section to get documentation on it.
Here is the toml configuration we would like to store in the Key-value Store:
```toml
logLevel = "DEBUG"
defaultEntryPoints = ["http", "https"]
[entryPoints]
[entryPoints.api]
address = ":8081"
[entryPoints.http]
address = ":80"
[entryPoints.https]
address = ":443"
[entryPoints.https.tls]
[[entryPoints.https.tls.certificates]]
certFile = "integration/fixtures/https/snitest.com.cert"
keyFile = "integration/fixtures/https/snitest.com.key"
[[entryPoints.https.tls.certificates]]
certFile = """-----BEGIN CERTIFICATE-----
<cert file content>
-----END CERTIFICATE-----"""
keyFile = """-----BEGIN PRIVATE KEY-----
<key file content>
-----END PRIVATE KEY-----"""
[entryPoints.other-https]
address = ":4443"
[entryPoints.other-https.tls]
[consul]
endpoint = "127.0.0.1:8500"
watch = true
prefix = "traefik"
[api]
entrypoint = "api"
```
And there, the same global configuration in the Key-value Store (using `prefix = "traefik"`):
| Key | Value |
|-----------------------------------------------------------|---------------------------------------------------------------|
| `/traefik/loglevel` | `DEBUG` |
| `/traefik/defaultentrypoints/0` | `http` |
| `/traefik/defaultentrypoints/1` | `https` |
| `/traefik/entrypoints/api/address` | `:8081` |
| `/traefik/entrypoints/http/address` | `:80` |
| `/traefik/entrypoints/https/address` | `:443` |
| `/traefik/entrypoints/https/tls/certificates/0/certfile` | `integration/fixtures/https/snitest.com.cert` |
| `/traefik/entrypoints/https/tls/certificates/0/keyfile` | `integration/fixtures/https/snitest.com.key` |
| `/traefik/entrypoints/https/tls/certificates/1/certfile` | `--BEGIN CERTIFICATE--<cert file content>--END CERTIFICATE--` |
| `/traefik/entrypoints/https/tls/certificates/1/keyfile` | `--BEGIN PRIVATE KEY--<key file content>--END PRIVATE KEY--` |
| `/traefik/entrypoints/other-https/address` | `:4443` |
| `/traefik/consul/endpoint` | `127.0.0.1:8500` |
| `/traefik/consul/watch` | `true` |
| `/traefik/consul/prefix` | `traefik` |
| `/traefik/api/entrypoint` | `api` |
In case you are setting key values manually:
- Remember to specify the indexes (`0`,`1`, `2`, ... ) under prefixes `/traefik/defaultentrypoints/` and `/traefik/entrypoints/https/tls/certificates/` in order to match the global configuration structure.
- Be careful to give the correct IP address and port on the key `/traefik/consul/endpoint`.
Note that we can either give the path to a certificate file or the file content itself directly.
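For instance, with a local Consul agent, individual keys can be written through the Consul KV HTTP API; the commands below are illustrative and reuse two of the pairs from the table above:
```shell
curl -X PUT -d 'DEBUG' http://127.0.0.1:8500/v1/kv/traefik/loglevel
curl -X PUT -d ':80' http://127.0.0.1:8500/v1/kv/traefik/entrypoints/http/address
```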
### Launch Traefik
We will now launch Traefik in a container.
We use CLI flags to set up the connection between Traefik and Consul.
All the rest of the global configuration is stored in Consul.
Here is the [docker-compose file](https://docs.docker.com/compose/compose-file/):
```yaml
traefik:
image: traefik
command: --consul --consul.endpoint=127.0.0.1:8500
ports:
- "80:80"
- "8080:8080"
```
!!! warning
Be careful to give the correct IP address and port in the flag `--consul.endpoint`.
### Consul ACL Token support
To specify a Consul ACL token for Traefik, we have to set a System Environment variable named `CONSUL_HTTP_TOKEN` prior to starting Traefik.
This variable must be initialized with the ACL token value.
If Traefik is launched into a Docker container, the variable `CONSUL_HTTP_TOKEN` can be initialized with the `-e` Docker option: `-e "CONSUL_HTTP_TOKEN=[consul-acl-token-value]"`
If a Consul ACL is used to restrict Traefik read/write access, one of the following configurations is needed.
- HCL format:
```
key "traefik" {
policy = "write"
},
session "" {
policy = "write"
}
```
- JSON format:
```json
{
"key": {
"traefik": {
"policy": "write"
}
},
"session": {
"": {
"policy": "write"
}
}
}
```
### TLS support
To connect to a Consul endpoint using SSL, simply specify `https://` in the `consul.endpoint` property
- `--consul.endpoint=https://[consul-host]:[consul-ssl-port]`
### TLS support with client certificates
So far, only [Consul](https://consul.io) and [etcd](https://coreos.com/etcd/) support TLS connections with client certificates.
To set it up, we should enable [consul security](https://www.consul.io/docs/internals/security.html) (or [etcd security](https://coreos.com/etcd/docs/latest/security.html)).
Then, we have to provide the CA, cert, and key to Traefik using the `consul` flags:
- `--consul.tls`
- `--consul.tls.ca=path/to/the/file`
- `--consul.tls.cert=path/to/the/file`
- `--consul.tls.key=path/to/the/file`
Or the etcd flags:
- `--etcd.tls`
- `--etcd.tls.ca=path/to/the/file`
- `--etcd.tls.cert=path/to/the/file`
- `--etcd.tls.key=path/to/the/file`
!!! note
We can give the file content itself directly (instead of the path to the certificate) in a TOML file configuration.
Remember the command `traefik --help` to display the updated list of flags.
## Dynamic configuration in Key-value store
Following our example, we will provide backend/frontend rules and HTTPS certificates to Traefik.
!!! note
This section is independent of the way Traefik got its static configuration.
It means that the static configuration can either come from the same Key-value store or from any other source.
### Key-value storage structure
Here is the toml configuration we would like to store in the Key-value store:
```toml
[file]
# rules
[backends]
[backends.backend1]
[backends.backend1.circuitbreaker]
expression = "NetworkErrorRatio() > 0.5"
[backends.backend1.servers.server1]
url = "http://172.17.0.2:80"
weight = 10
[backends.backend1.servers.server2]
url = "http://172.17.0.3:80"
weight = 1
[backends.backend2]
[backends.backend2.maxconn]
amount = 10
extractorfunc = "request.host"
[backends.backend2.LoadBalancer]
method = "drr"
[backends.backend2.servers.server1]
url = "http://172.17.0.4:80"
weight = 1
[backends.backend2.servers.server2]
url = "http://172.17.0.5:80"
weight = 2
[frontends]
[frontends.frontend1]
backend = "backend2"
[frontends.frontend1.routes.test_1]
rule = "Host:test.localhost"
[frontends.frontend2]
backend = "backend1"
passHostHeader = true
priority = 10
[frontends.frontend2.auth.basic]
users = [
"test:$apr1$H6uskkkW$IgXLP6ewTrSuBkTrqE8wj/",
"test2:$apr1$d9hr9HBB$4HxwgUir3HP4EsggP/QNo0",
]
entrypoints = ["https"] # overrides defaultEntryPoints
[frontends.frontend2.routes.test_1]
rule = "Host:{subdomain:[a-z]+}.localhost"
[frontends.frontend3]
entrypoints = ["http", "https"] # overrides defaultEntryPoints
backend = "backend2"
rule = "Path:/test"
[[tls]]
[tls.certificate]
certFile = "path/to/your.cert"
keyFile = "path/to/your.key"
[[tls]]
entryPoints = ["https","other-https"]
[tls.certificate]
certFile = """-----BEGIN CERTIFICATE-----
<cert file content>
-----END CERTIFICATE-----"""
keyFile = """-----BEGIN CERTIFICATE-----
<key file content>
-----END CERTIFICATE-----"""
```
And there, the same dynamic configuration in a KV Store (using `prefix = "traefik"`):
- backend 1
| Key | Value |
|--------------------------------------------------------|-----------------------------|
| `/traefik/backends/backend1/circuitbreaker/expression` | `NetworkErrorRatio() > 0.5` |
| `/traefik/backends/backend1/servers/server1/url` | `http://172.17.0.2:80` |
| `/traefik/backends/backend1/servers/server1/weight` | `10` |
| `/traefik/backends/backend1/servers/server2/url` | `http://172.17.0.3:80` |
| `/traefik/backends/backend1/servers/server2/weight` | `1` |
| `/traefik/backends/backend1/servers/server2/tags` | `api,helloworld` |
- backend 2
| Key | Value |
|-----------------------------------------------------|------------------------|
| `/traefik/backends/backend2/maxconn/amount` | `10` |
| `/traefik/backends/backend2/maxconn/extractorfunc` | `request.host` |
| `/traefik/backends/backend2/loadbalancer/method` | `drr` |
| `/traefik/backends/backend2/servers/server1/url` | `http://172.17.0.4:80` |
| `/traefik/backends/backend2/servers/server1/weight` | `1` |
| `/traefik/backends/backend2/servers/server2/url` | `http://172.17.0.5:80` |
| `/traefik/backends/backend2/servers/server2/weight` | `2` |
| `/traefik/backends/backend2/servers/server2/tags` | `web` |
- frontend 1
| Key | Value |
|---------------------------------------------------|-----------------------|
| `/traefik/frontends/frontend1/backend` | `backend2` |
| `/traefik/frontends/frontend1/routes/test_1/rule` | `Host:test.localhost` |
- frontend 2
| Key | Value |
|----------------------------------------------------|-----------------------------------------------|
| `/traefik/frontends/frontend2/backend` | `backend1` |
| `/traefik/frontends/frontend2/passhostheader` | `true` |
| `/traefik/frontends/frontend2/priority` | `10` |
| `/traefik/frontends/frontend2/auth/basic/users/0` | `test:$apr1$H6uskkkW$IgXLP6ewTrSuBkTrqE8wj/` |
| `/traefik/frontends/frontend2/auth/basic/users/1` | `test2:$apr1$d9hr9HBB$4HxwgUir3HP4EsggP/QNo0` |
| `/traefik/frontends/frontend2/entrypoints` | `http,https` |
| `/traefik/frontends/frontend2/routes/test_2/rule` | `PathPrefix:/test` |
- certificate 1
| Key | Value |
|---------------------------------------|--------------------|
| `/traefik/tls/1/certificate/certfile` | `path/to/your.cert`|
| `/traefik/tls/1/certificate/keyfile` | `path/to/your.key` |
!!! note
As `/traefik/tls/1/entrypoints` is not defined, the certificate will be attached to all `defaultEntryPoints` with a TLS configuration (in the example, the entryPoint `https`)
- certificate 2
| Key | Value |
|---------------------------------------|-----------------------|
| `/traefik/tls/2/entrypoints` | `https,other-https` |
| `/traefik/tls/2/certificate/certfile` | `<cert file content>` |
| `/traefik/tls/2/certificate/keyfile` | `<key file content>` |
### Atomic configuration changes
Traefik can watch the backends/frontends configuration changes and generate its configuration automatically.
!!! note
Only backends/frontends rules are dynamic, the rest of the Traefik configuration stays static.
The [Etcd](https://github.com/coreos/etcd/issues/860) and [Consul](https://github.com/hashicorp/consul/issues/886) backends do not support updating multiple keys atomically.
As a result, it may be possible for Traefik to read an intermediate configuration state despite judicious use of the `--providersThrottleDuration` flag.
To solve this problem, Traefik supports a special key called `/traefik/alias`.
If set, Traefik uses the value as an alternative key prefix.
Given the key structure below, Traefik will use `http://172.17.0.2:80` as its only backend (frontend keys have been omitted for brevity).
| Key | Value |
|-------------------------------------------------------------------------|-----------------------------|
| `/traefik/alias` | `/traefik_configurations/1` |
| `/traefik_configurations/1/backends/backend1/servers/server1/url` | `http://172.17.0.2:80` |
| `/traefik_configurations/1/backends/backend1/servers/server1/weight` | `10` |
When an atomic configuration change is required, you may write a new configuration at an alternative prefix.
Here, although the `/traefik_configurations/2/...` keys have been set, the old configuration is still active because the `/traefik/alias` key still points to `/traefik_configurations/1`:
| Key | Value |
|-------------------------------------------------------------------------|-----------------------------|
| `/traefik/alias` | `/traefik_configurations/1` |
| `/traefik_configurations/1/backends/backend1/servers/server1/url` | `http://172.17.0.2:80` |
| `/traefik_configurations/1/backends/backend1/servers/server1/weight` | `10` |
| `/traefik_configurations/2/backends/backend1/servers/server1/url` | `http://172.17.0.2:80` |
| `/traefik_configurations/2/backends/backend1/servers/server1/weight` | `5` |
| `/traefik_configurations/2/backends/backend1/servers/server2/url` | `http://172.17.0.3:80` |
| `/traefik_configurations/2/backends/backend1/servers/server2/weight` | `5` |
Once the `/traefik/alias` key is updated, the new `/traefik_configurations/2` configuration becomes active atomically.
Here, we have a 50% balance between the `http://172.17.0.3:80` and the `http://172.17.0.4:80` hosts while no traffic is sent to the `172.17.0.2:80` host:
| Key | Value |
|-------------------------------------------------------------------------|-----------------------------|
| `/traefik/alias` | `/traefik_configurations/2` |
| `/traefik_configurations/1/backends/backend1/servers/server1/url` | `http://172.17.0.2:80` |
| `/traefik_configurations/1/backends/backend1/servers/server1/weight` | `10` |
| `/traefik_configurations/2/backends/backend1/servers/server1/url` | `http://172.17.0.3:80` |
| `/traefik_configurations/2/backends/backend1/servers/server1/weight` | `5` |
| `/traefik_configurations/2/backends/backend1/servers/server2/url` | `http://172.17.0.4:80` |
| `/traefik_configurations/2/backends/backend1/servers/server2/weight` | `5` |
!!! note
Traefik *will not watch for key changes in the `/traefik_configurations` prefix*. It will only watch for changes in the `/traefik/alias` key.
Further, if the `/traefik/alias` key is set, all other configuration with the `/traefik/backends` or `/traefik/frontends` prefix is ignored.
## Store configuration in Key-value store
!!! note
Don't forget to [set up the connection between Traefik and the Key-value store](/user-guide/kv-config/#launch-traefik).
The static Traefik configuration in a key-value store can be automatically created and updated, using the [`storeconfig` subcommand](/basics/#commands).
```bash
traefik storeconfig [flags] ...
```
This command is here only to automate the [process which uploads the configuration into the Key-value store](/user-guide/kv-config/#upload-the-configuration-in-the-key-value-store).
Traefik will not start, but the [static configuration](/basics/#static-traefik-configuration) will be uploaded into the Key-value store.
If you configured ACME (Let's Encrypt), your registration account and your certificates will also be uploaded.
If you configured a file provider `[file]`, all your dynamic configuration (backends, frontends...) will be uploaded to the Key-value store.
To upload your ACME certificates to the KV store, get your Traefik TOML file and add the new `storage` option in the `acme` section:
```toml
[acme]
email = "test@traefik.io"
storage = "traefik/acme/account" # the key where to store your certificates in the KV store
storageFile = "acme.json" # your old certificates store
```
Call `traefik storeconfig` to upload your config in the KV store.
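A concrete invocation could look like this sketch (the file path and the Consul endpoint are placeholders):
```bash
traefik storeconfig \
  -c traefik.toml \
  --consul \
  --consul.endpoint=127.0.0.1:8500 \
  --consul.prefix=traefik
```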
Then remove the line `storageFile = "acme.json"` from your TOML config file.
That's it!
![GIF Magica](https://i.giphy.com/ujUdrdpX7Ok5W.gif)

# Marathon
This guide explains how to integrate Marathon and operate the cluster in a reliable way from Traefik's standpoint.
## Host detection
Marathon offers multiple ways to run (Docker-containerized) applications, the most popular ones being
- BRIDGE-networked containers with dynamic high ports exposed
- HOST-networked containers with host machine ports
- containers with dedicated IP addresses ([IP-per-task](https://mesosphere.github.io/marathon/docs/ip-per-task.html)).
Traefik tries to detect the configured mode and route traffic to the right IP addresses. It is possible to force using task hosts with the `forceTaskHostname` option.
Given the complexity of the subject, it is possible that the heuristic fails.
Apart from filing an issue and waiting for the feature request / bug report to get addressed, one workaround for such situations is to customize the Marathon template file to the individual needs.
!!! note
This does _not_ require rebuilding Traefik but only to point the `filename` configuration parameter to a customized version of the `marathon.tmpl` file on Traefik startup.
## Port detection
Traefik also attempts to determine the right port (which is a [non-trivial matter in Marathon](https://mesosphere.github.io/marathon/docs/ports.html)).
Following is the order by which Traefik tries to identify the port (the first one that yields a positive result will be used):
1. An arbitrary port specified through the `traefik.port` label.
1. The task port (possibly indexed through the `traefik.portIndex` label, otherwise the first one).
1. The port from the application's `portDefinitions` field (possibly indexed through the `traefik.portIndex` label, otherwise the first one).
1. The port from the application's `ipAddressPerTask` field (possibly indexed through the `traefik.portIndex` label, otherwise the first one).
## Applications with multiple ports
Some Marathon applications may expose multiple ports. Traefik supports creating one so-called _segment_ per port using [segment labels](/configuration/backends/marathon#applications-with-multiple-ports-segment-labels).
For instance, assume that a Marathon application exposes a web API on port 80 and an admin interface on port 8080. It would then be possible to make each service available by specifying the following Marathon labels:
```
traefik.web.port=80
```
```
traefik.admin.port=8080
```
(Note that the service names `web` and `admin` can be chosen arbitrarily.)
Technically, Traefik will create one pair of frontend and backend configurations for each service.
## Achieving high availability
### Scenarios
There are three scenarios where the availability of a Marathon application could be impaired along with the risk of losing or failing requests:
- During the startup phase when Traefik already routes requests to the backend even though it has not completed its bootstrapping process yet.
- During the shutdown phase when Traefik still routes requests to the backend while the backend is already terminating.
- During a failure of the application when Traefik has not yet identified the backend as being erroneous.
The first two scenarios are common with every rolling upgrade of an application (i.e. a new version release or configuration update).
The following sub-sections describe how to resolve or mitigate each scenario.
#### Startup
It is possible to define [readiness checks](https://mesosphere.github.io/marathon/docs/readiness-checks.html) (available since Marathon version 1.1) per application and have Marathon take these into account during the startup phase.
The idea is that each application provides an HTTP endpoint that Marathon queries periodically during an ongoing deployment in order to mark the associated readiness check result as successful if and only if the endpoint returns a response within the configured HTTP code range.
As long as the check keeps failing, Marathon will not proceed with the deployment (within the configured upgrade strategy bounds).
Beginning with version 1.4, Traefik respects readiness check results if the Traefik option is set and checks are configured on the applications accordingly.
!!! note
Due to the way readiness check results are currently exposed by the Marathon API, ready tasks may be taken into rotation with a small delay.
It is on the order of one readiness check timeout interval (as configured on the application specification) and guarantees that non-ready tasks do not receive traffic prematurely.
If readiness checks are not possible, a current mitigation strategy is to enable [retries](/configuration/commons#retry-configuration) and make sure that a sufficient number of healthy application tasks exist so that one retry will likely hit one of those.
Apart from its probabilistic nature, the workaround comes at the price of increased latency.
#### Shutdown
It is possible to install a [termination handler](https://mesosphere.github.io/marathon/docs/health-checks.html) (available since Marathon version 1.3) with each application whose responsibility it is to delay the shutdown process long enough until the backend has been taken out of load-balancing rotation with reasonable confidence (i.e., Traefik has received an update from the Marathon event bus, recomputes the available Marathon backends, and applies the new configuration).
Specifically, each termination handler should install a signal handler listening for a SIGTERM signal and implement the following steps on signal reception (a minimal sketch follows the list):
1. Disable Keep-Alive HTTP connections.
1. Keep accepting HTTP requests for a certain period of time.
1. Stop accepting new connections.
1. Finish serving any in-flight requests.
1. Shut down.
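A minimal Go sketch of such a termination handler inside a plain HTTP backend application could look like this (the port, delay, and timeout values are assumptions; this code belongs to the backend application, not to Traefik):
```go
package main

import (
	"context"
	"net/http"
	"os"
	"os/signal"
	"syscall"
	"time"
)

func main() {
	srv := &http.Server{Addr: ":8080", Handler: http.DefaultServeMux}

	done := make(chan struct{})
	go func() {
		sigs := make(chan os.Signal, 1)
		signal.Notify(sigs, syscall.SIGTERM)
		<-sigs

		// Step 1: disable Keep-Alive so clients stop reusing idle connections.
		srv.SetKeepAlivesEnabled(false)

		// Step 2: keep serving requests until Traefik has removed this task
		// from its rotation (the delay is discussed below).
		time.Sleep(10 * time.Second)

		// Steps 3-5: stop accepting new connections, finish in-flight requests, shut down.
		ctx, cancel := context.WithTimeout(context.Background(), 30*time.Second)
		defer cancel()
		srv.Shutdown(ctx)
		close(done)
	}()

	srv.ListenAndServe() // returns once Shutdown has been called
	<-done               // wait until in-flight requests have been served
}
```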
Traefik already ignores Marathon tasks whose state does not match `TASK_RUNNING`; since terminating tasks transition into the `TASK_KILLING` and eventually `TASK_KILLED` state, there is nothing further that needs to be done on Traefik's end.
How long HTTP requests should continue to be accepted in step 2 depends on how long Traefik needs to receive and process the Marathon configuration update.
Under regular operational conditions, it should be on the order of seconds, with 10 seconds possibly being a good default value.
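To make these steps more tangible, here is a minimal, hypothetical wrapper script around a backend process (the binary path and the 10-second grace period are assumptions; step 1, disabling keep-alive, has to happen inside the application itself):

```shell
#!/bin/sh
# Hypothetical entrypoint wrapper sketching the termination sequence:
# keep serving during a grace period so Traefik can pick up the Marathon
# event and take this task out of rotation before the backend stops.

/usr/local/bin/my-backend &   # placeholder backend binary
BACKEND_PID=$!

graceful_shutdown() {
    # Step 2: keep accepting requests while Traefik recomputes its backends.
    sleep 10
    # Steps 3-5: stop accepting new connections, finish in-flight requests,
    # then shut down (delegated to the backend's own SIGTERM handling).
    kill -TERM "$BACKEND_PID"
    wait "$BACKEND_PID"
}

trap graceful_shutdown TERM
wait "$BACKEND_PID"
```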
Again, configuring Traefik to do retries (as discussed in the previous section) can serve as a decent workaround strategy.
Paired with termination handlers, retries would cover those cases where either the termination sequence or Traefik cannot complete its part of the orchestration process in time.
#### Failure
An application failure always happens unexpectedly, and hence, it is very difficult or even impossible to rule out its adverse effects categorically.
Failure reasons vary broadly, ranging from unacceptable slowness to a task crash or a network split.
There are two mitigation efforts:
1. Configure [Marathon health checks](https://mesosphere.github.io/marathon/docs/health-checks.html) on each application.
1. Configure Traefik health checks (possibly via the `traefik.backend.healthcheck.*` labels) and make sure they probe with proper frequency.
The Marathon health check makes sure that applications, once deemed dysfunctional, are rescheduled to different slaves.
However, it might take a while for the health check to trigger and for the follow-up processes to complete.
For that reason, the Traefik health check provides an additional check that responds more rapidly and does not require a configuration reload to happen.
Additionally, it protects from cases that the Marathon health check may not be able to cover, such as a network split.
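To make the second mitigation concrete, here is a hedged and heavily abbreviated sketch of a Marathon application definition that combines both checks; the application id, image, `/health` path, Marathon endpoint, and timing values are illustrative assumptions:

```shell
# Hypothetical Marathon app definition: the Marathon health check gets
# broken tasks rescheduled, while the Traefik labels take them out of
# rotation quickly without requiring a configuration reload.
cat > my-app.json <<'EOF'
{
  "id": "/my-app",
  "instances": 3,
  "container": {
    "type": "DOCKER",
    "docker": {
      "image": "my-org/my-backend:latest",
      "network": "BRIDGE",
      "portMappings": [ { "containerPort": 80, "hostPort": 0 } ]
    }
  },
  "labels": {
    "traefik.port": "80",
    "traefik.backend.healthcheck.path": "/health",
    "traefik.backend.healthcheck.interval": "10s"
  },
  "healthChecks": [
    {
      "protocol": "HTTP",
      "path": "/health",
      "gracePeriodSeconds": 30,
      "intervalSeconds": 10,
      "timeoutSeconds": 5,
      "maxConsecutiveFailures": 3
    }
  ]
}
EOF

# Submit the definition to the Marathon API (example endpoint).
curl -X POST -H "Content-Type: application/json" \
  -d @my-app.json http://marathon.mesos:8080/v2/apps
```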
### (Non-)Alternatives
There are a few frequently requested alternatives of varying quality.
The remainder of this section explores them along with their benefit/cost trade-offs.
#### Reusing Marathon health checks
It may seem obvious to reuse the Marathon health checks as a signal to Traefik whether an application should be taken into load-balancing rotation or not.
Apart from the increased latency a failing health check may incur, a major problem with this is that Marathon does not persist the health check results.
Consequently, if a master re-election occurs in the Marathon cluster, all health check results will revert to the _unknown_ state, effectively causing all applications inside the cluster to become unavailable and leading to a complete cluster failure.
Re-elections do not only happen during regular maintenance work (often requiring rolling upgrades of the Marathon nodes) but also when the Marathon leader fails spontaneously.
As such, there is no way to handle this situation deterministically.
Finally, Marathon health checks are not mandatory (the default is to use the task state as reported by Mesos), so requiring them for Traefik would raise the entry barrier for Marathon users.
Traefik used to use the health check results as a strict requirement but moved away from it as [users reported the dramatic consequences](https://github.com/containous/traefik/issues/653).
#### Draining
Another common approach is to let a proxy drain backends that are supposed to shut down.
That is, once a backend is about to shut down, Traefik would stop forwarding new requests to it.
On the plus side, this would not require any modifications to the application in question.
However, implementing this fully within Traefik seems like a non-trivial undertaking.
Additionally, the approach is less flexible compared to a custom termination handler since only the latter allows for the implementation of custom termination sequences that go beyond simple request draining (e.g., persisting a snapshot state to disk prior to terminating).
The feature is currently not implemented; a request for draining in general is at [issue 41](https://github.com/containous/traefik/issues/41).


@ -0,0 +1,332 @@
# Docker Swarm (mode) cluster
This section explains how to create a multi-host docker cluster with swarm mode using [docker-machine](https://docs.docker.com/machine) and how to deploy Traefik on it.
The cluster consists of:
- 3 servers
- 1 manager
- 2 workers
- 1 [overlay](https://docs.docker.com/network/overlay/) network (multi-host networking)
## Prerequisites
1. You will need to install [docker-machine](https://docs.docker.com/machine/)
2. You will need the latest [VirtualBox](https://www.virtualbox.org/wiki/Downloads)
## Cluster provisioning
First, let's create all the required nodes.
It's a shorter version of the [swarm tutorial](https://docs.docker.com/engine/swarm/swarm-tutorial/).
```shell
docker-machine create -d virtualbox manager
docker-machine create -d virtualbox worker1
docker-machine create -d virtualbox worker2
```
Then, let's set up the cluster, in order:
1. initialize the cluster
1. get the token for the other hosts to join
1. on both workers, join the cluster with the token
```shell
docker-machine ssh manager "docker swarm init \
--listen-addr $(docker-machine ip manager) \
--advertise-addr $(docker-machine ip manager)"
export worker_token=$(docker-machine ssh manager "docker swarm \
join-token worker -q")
docker-machine ssh worker1 "docker swarm join \
--token=${worker_token} \
--listen-addr $(docker-machine ip worker1) \
--advertise-addr $(docker-machine ip worker1) \
$(docker-machine ip manager)"
docker-machine ssh worker2 "docker swarm join \
--token=${worker_token} \
--listen-addr $(docker-machine ip worker2) \
--advertise-addr $(docker-machine ip worker2) \
$(docker-machine ip manager)"
```
Let's validate the cluster is up and running.
```shell
docker-machine ssh manager docker node ls
```
```
ID HOSTNAME STATUS AVAILABILITY MANAGER STATUS
013v16l1sbuwjqcn7ucbu4jwt worker1 Ready Active
8buzkquycd17jqjber0mo2gn8 worker2 Ready Active
fnpj8ozfc85zvahx2r540xfcf * manager Ready Active Leader
```
Finally, let's create a network for Traefik to use.
```shell
docker-machine ssh manager "docker network create --driver=overlay traefik-net"
```
## Deploy Traefik
Let's deploy Traefik as a docker service in our cluster.
The only requirement for Traefik to work with swarm mode is that it needs to run on a manager node - we are going to use a [constraint](https://docs.docker.com/engine/reference/commandline/service_create/#specify-service-constraints---constraint) for that.
```shell
docker-machine ssh manager "docker service create \
--name traefik \
--constraint=node.role==manager \
--publish 80:80 --publish 8080:8080 \
--mount type=bind,source=/var/run/docker.sock,target=/var/run/docker.sock \
--network traefik-net \
traefik \
--docker \
--docker.swarmMode \
--docker.domain=traefik \
--docker.watch \
--api"
```
Let's explain this command:
| Option | Description |
|-----------------------------------------------------------------------------|------------------------------------------------------------------------------------------------|
| `--publish 80:80 --publish 8080:8080` | we publish port `80` and `8080` on the cluster. |
| `--constraint=node.role==manager` | we ask docker to schedule Traefik on a manager node. |
| `--mount type=bind,source=/var/run/docker.sock,target=/var/run/docker.sock` | we bind mount the docker socket where Traefik is scheduled to be able to speak to the daemon. |
| `--network traefik-net` | we attach the Traefik service (and thus the underlying container) to the `traefik-net` network. |
| `--docker`                                                                   | enable the docker provider; `--docker.swarmMode` additionally enables swarm-mode support.       |
| `--api` | activate the webUI on port 8080 |
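Once the service is running, you can quickly verify that the API answers on the published port; since port `8080` is published on the swarm routing mesh, any node's IP works:

```shell
curl -s http://$(docker-machine ip manager):8080/api/providers
```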
## Deploy your apps
We can now deploy our app on the cluster, here [whoami](https://github.com/containous/whoami), a simple web server in Go.
We start 2 services on the `traefik-net` network.
```shell
docker-machine ssh manager "docker service create \
--name whoami0 \
--label traefik.port=80 \
--network traefik-net \
containous/whoami"
docker-machine ssh manager "docker service create \
--name whoami1 \
--label traefik.port=80 \
    --network traefik-net \
    --label traefik.backend.loadbalancer.stickiness=true \
containous/whoami"
```
!!! note
We set `whoami1` to use sticky sessions (`--label traefik.backend.loadbalancer.stickiness=true`).
We'll demonstrate that later.
!!! note
If using `docker stack deploy`, there is [a specific way that the labels must be defined in the docker-compose file](https://github.com/containous/traefik/issues/994#issuecomment-269095109).
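For reference, when using `docker stack deploy`, a minimal hypothetical stack file would carry the Traefik labels under `deploy.labels` (the file name, stack name, and values below are illustrative):

```shell
# Hypothetical stack file: with `docker stack deploy`, Traefik reads
# service labels, which must be declared under `deploy.labels`.
cat > whoami-stack.yml <<'EOF'
version: "3"
services:
  whoami0:
    image: containous/whoami
    networks:
      - traefik-net
    deploy:
      labels:
        traefik.port: "80"
networks:
  traefik-net:
    external: true
EOF

# Point the local docker client at the manager's engine, then deploy.
eval $(docker-machine env manager)
docker stack deploy -c whoami-stack.yml whoami-stack
```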
Check that everything is scheduled and started:
```shell
docker-machine ssh manager "docker service ls"
```
```
ID NAME MODE REPLICAS IMAGE PORTS
moq3dq4xqv6t traefik replicated 1/1 traefik:latest *:80->80/tcp,*:8080->8080/tcp
ysil6oto1wim whoami0 replicated 1/1 containous/whoami:latest
z9re2mnl34k4 whoami1 replicated 1/1 containous/whoami:latest
```
## Access to your apps through Traefik
```shell
curl -H Host:whoami0.traefik http://$(docker-machine ip manager)
```
```yaml
Hostname: 5b0b3d148359
IP: 127.0.0.1
IP: 10.0.0.8
IP: 10.0.0.4
IP: 172.18.0.5
GET / HTTP/1.1
Host: whoami0.traefik
User-Agent: curl/7.55.1
Accept: */*
Accept-Encoding: gzip
X-Forwarded-For: 10.255.0.2
X-Forwarded-Host: whoami0.traefik
X-Forwarded-Proto: http
X-Forwarded-Server: 77fc29c69fe4
```
```shell
curl -H Host:whoami1.traefik http://$(docker-machine ip manager)
```
```yaml
Hostname: 3633163970f6
IP: 127.0.0.1
IP: 10.0.0.14
IP: 10.0.0.6
IP: 172.18.0.5
GET / HTTP/1.1
Host: whoami1.traefik
User-Agent: curl/7.55.1
Accept: */*
Accept-Encoding: gzip
X-Forwarded-For: 10.255.0.2
X-Forwarded-Host: whoami1.traefik
X-Forwarded-Proto: http
X-Forwarded-Server: 77fc29c69fe4
```
!!! note
As Traefik is published, you can access it from any machine and not only the manager.
```shell
curl -H Host:whoami0.traefik http://$(docker-machine ip worker1)
```
```yaml
Hostname: 5b0b3d148359
IP: 127.0.0.1
IP: 10.0.0.8
IP: 10.0.0.4
IP: 172.18.0.5
GET / HTTP/1.1
Host: whoami0.traefik
User-Agent: curl/7.55.1
Accept: */*
Accept-Encoding: gzip
X-Forwarded-For: 10.255.0.3
X-Forwarded-Host: whoami0.traefik
X-Forwarded-Proto: http
X-Forwarded-Server: 77fc29c69fe4
```
```shell
curl -H Host:whoami1.traefik http://$(docker-machine ip worker2)
```
```yaml
Hostname: 3633163970f6
IP: 127.0.0.1
IP: 10.0.0.14
IP: 10.0.0.6
IP: 172.18.0.5
GET / HTTP/1.1
Host: whoami1.traefik
User-Agent: curl/7.55.1
Accept: */*
Accept-Encoding: gzip
X-Forwarded-For: 10.255.0.4
X-Forwarded-Host: whoami1.traefik
X-Forwarded-Proto: http
X-Forwarded-Server: 77fc29c69fe4
```
## Scale both services
```shell
docker-machine ssh manager "docker service scale whoami0=5"
docker-machine ssh manager "docker service scale whoami1=5"
```
Check that we now have 5 replicas of each `whoami` service:
```shell
docker-machine ssh manager "docker service ls"
```
```
ID NAME MODE REPLICAS IMAGE PORTS
moq3dq4xqv6t traefik replicated 1/1 traefik:latest *:80->80/tcp,*:8080->8080/tcp
ysil6oto1wim whoami0 replicated 5/5 containous/whoami:latest
z9re2mnl34k4 whoami1 replicated 5/5 containous/whoami:latest
```
## Access to your `whoami0` service through Traefik multiple times
Repeat the following command multiple times and note that the Hostname changes each time as Traefik load balances each request against the 5 tasks:
```shell
curl -H Host:whoami0.traefik http://$(docker-machine ip manager)
```
```yaml
Hostname: f3138d15b567
IP: 127.0.0.1
IP: 10.0.0.5
IP: 10.0.0.4
IP: 172.18.0.3
GET / HTTP/1.1
Host: whoami0.traefik
User-Agent: curl/7.55.1
Accept: */*
Accept-Encoding: gzip
X-Forwarded-For: 10.255.0.2
X-Forwarded-Host: whoami0.traefik
X-Forwarded-Proto: http
X-Forwarded-Server: 77fc29c69fe4
```
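To see the round-robin distribution at a glance, you can repeat the request in a small loop and keep only the reported hostname:

```shell
# Each request should be answered by a different whoami0 task.
for i in $(seq 1 10); do
  curl -s -H Host:whoami0.traefik http://$(docker-machine ip manager) | grep Hostname
done
```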
Do the same against `whoami1`:
```shell
curl -c cookies.txt -H Host:whoami1.traefik http://$(docker-machine ip manager)
```
```yaml
Hostname: 348e2f7bf432
IP: 127.0.0.1
IP: 10.0.0.15
IP: 10.0.0.6
IP: 172.18.0.6
GET / HTTP/1.1
Host: whoami1.traefik
User-Agent: curl/7.55.1
Accept: */*
Accept-Encoding: gzip
X-Forwarded-For: 10.255.0.2
X-Forwarded-Host: whoami1.traefik
X-Forwarded-Proto: http
X-Forwarded-Server: 77fc29c69fe4
```
Because sticky sessions rely on cookies, we used the `-c cookies.txt` option to store the cookie in a file.
The cookie contains the IP of the container to which the session sticks:
```shell
cat ./cookies.txt
```
```
# Netscape HTTP Cookie File
# https://curl.haxx.se/docs/http-cookies.html
# This file was generated by libcurl! Edit at your own risk.
whoami1.traefik FALSE / FALSE 0 _TRAEFIK_BACKEND http://10.0.0.15:80
```
If you load the cookies file (`-b cookies.txt`) for the next request, you will see that stickiness is maintained:
```shell
curl -b cookies.txt -H Host:whoami1.traefik http://$(docker-machine ip manager)
```
```yaml
Hostname: 348e2f7bf432
IP: 127.0.0.1
IP: 10.0.0.15
IP: 10.0.0.6
IP: 172.18.0.6
GET / HTTP/1.1
Host: whoami1.traefik
User-Agent: curl/7.55.1
Accept: */*
Accept-Encoding: gzip
Cookie: _TRAEFIK_BACKEND=http://10.0.0.15:80
X-Forwarded-For: 10.255.0.2
X-Forwarded-Host: whoami1.traefik
X-Forwarded-Proto: http
X-Forwarded-Server: 77fc29c69fe4
```
![GIF Magica](https://i.giphy.com/ujUdrdpX7Ok5W.gif)


@ -0,0 +1,181 @@
# Swarm cluster
This section explains how to create a multi-host [swarm](https://docs.docker.com/swarm) cluster using [docker-machine](https://docs.docker.com/machine/) and how to deploy Traefik on it.
The cluster consists of:
- 2 servers
- 1 swarm master
- 2 swarm nodes
- 1 [overlay](https://docs.docker.com/network/overlay/) network (multi-host networking)
## Prerequisites
1. You need to install [docker-machine](https://docs.docker.com/machine/)
2. You need the latest [VirtualBox](https://www.virtualbox.org/wiki/Downloads)
## Cluster provisioning
We first follow [this guide](https://docs.docker.com/engine/userguide/networking/get-started-overlay/) to create the cluster.
### Create machine `mh-keystore`
This machine is the service registry of our cluster.
```shell
docker-machine create -d virtualbox mh-keystore
```
Then we install the service registry [Consul](https://consul.io) on this machine:
```shell
eval "$(docker-machine env mh-keystore)"
docker run -d \
-p "8500:8500" \
-h "consul" \
progrium/consul -server -bootstrap
```
### Create machine `mhs-demo0`
This machine is the swarm master and also runs a swarm agent.
```shell
docker-machine create -d virtualbox \
--swarm --swarm-master \
--swarm-discovery="consul://$(docker-machine ip mh-keystore):8500" \
--engine-opt="cluster-store=consul://$(docker-machine ip mh-keystore):8500" \
--engine-opt="cluster-advertise=eth1:2376" \
mhs-demo0
```
### Create machine `mhs-demo1`
This machine has a swarm agent on it.
```shell
docker-machine create -d virtualbox \
--swarm \
--swarm-discovery="consul://$(docker-machine ip mh-keystore):8500" \
--engine-opt="cluster-store=consul://$(docker-machine ip mh-keystore):8500" \
--engine-opt="cluster-advertise=eth1:2376" \
mhs-demo1
```
### Create the overlay network
Create the overlay network on the swarm master:
```shell
eval $(docker-machine env --swarm mhs-demo0)
docker network create --driver overlay --subnet=10.0.9.0/24 my-net
```
## Deploy Traefik
Deploy Traefik:
```shell
docker $(docker-machine config mhs-demo0) run \
-d \
-p 80:80 -p 8080:8080 \
--net=my-net \
-v /var/lib/boot2docker/:/ssl \
traefik \
-l DEBUG \
-c /dev/null \
--docker \
--docker.domain=traefik \
--docker.endpoint=tcp://$(docker-machine ip mhs-demo0):2376 \
--docker.tls \
--docker.tls.ca=/ssl/ca.pem \
--docker.tls.cert=/ssl/server.pem \
--docker.tls.key=/ssl/server-key.pem \
--docker.tls.insecureSkipVerify \
--docker.watch \
--api
```
Let's explain this command:
| Option | Description |
|-------------------------------------------|---------------------------------------------------------------|
| `-p 80:80 -p 8080:8080` | we bind ports 80 and 8080 |
| `--net=my-net` | run the container on the network my-net |
| `-v /var/lib/boot2docker/:/ssl` | mount the ssl keys generated by docker-machine |
| `-c /dev/null` | empty config file |
| `--docker` | enable docker provider |
| `--docker.endpoint=tcp://$(docker-machine ip mhs-demo0):2376` | connect to the docker daemon of the swarm master |
| `--docker.tls` | enable TLS using the docker-machine keys |
| `--api` | activate the webUI on port 8080 |
## Deploy your apps
We can now deploy our app on the cluster, here [whoami](https://github.com/containous/whoami), a simple web server written in Go, on the network `my-net`:
```shell
eval $(docker-machine env --swarm mhs-demo0)
docker run -d --name=whoami0 --net=my-net --env="constraint:node==mhs-demo0" containous/whoami
docker run -d --name=whoami1 --net=my-net --env="constraint:node==mhs-demo1" containous/whoami
```
Check that everything is started:
```shell
docker ps
```
```
CONTAINER ID IMAGE COMMAND CREATED STATUS PORTS NAMES
ba2c21488299 containous/whoami "/whoamI" 8 seconds ago Up 9 seconds 80/tcp mhs-demo1/whoami1
8147a7746e7a containous/whoami "/whoamI" 19 seconds ago Up 20 seconds 80/tcp mhs-demo0/whoami0
8fbc39271b4c traefik "/traefik -l DEBUG -c" 36 seconds ago Up 37 seconds 192.168.99.101:80->80/tcp, 192.168.99.101:8080->8080/tcp mhs-demo0/serene_bhabha
```
## Access to your apps through Traefik
```shell
curl -H Host:whoami0.traefik http://$(docker-machine ip mhs-demo0)
```
```yaml
Hostname: 8147a7746e7a
IP: 127.0.0.1
IP: ::1
IP: 10.0.9.3
IP: fe80::42:aff:fe00:903
IP: 172.18.0.3
IP: fe80::42:acff:fe12:3
GET / HTTP/1.1
Host: 10.0.9.3:80
User-Agent: curl/7.35.0
Accept: */*
Accept-Encoding: gzip
X-Forwarded-For: 192.168.99.1
X-Forwarded-Host: 10.0.9.3:80
X-Forwarded-Proto: http
X-Forwarded-Server: 8fbc39271b4c
```
```shell
curl -H Host:whoami1.traefik http://$(docker-machine ip mhs-demo0)
```
```yaml
Hostname: ba2c21488299
IP: 127.0.0.1
IP: ::1
IP: 10.0.9.4
IP: fe80::42:aff:fe00:904
IP: 172.18.0.2
IP: fe80::42:acff:fe12:2
GET / HTTP/1.1
Host: 10.0.9.4:80
User-Agent: curl/7.35.0
Accept: */*
Accept-Encoding: gzip
X-Forwarded-For: 192.168.99.1
X-Forwarded-Host: 10.0.9.4:80
X-Forwarded-Proto: http
X-Forwarded-Server: 8fbc39271b4c
```
![GIF Magica](https://i.giphy.com/ujUdrdpX7Ok5W.gif)