Enhance documentation readability.
This commit is contained in: parent 6d28c52f59, commit c7c9349b00
35 changed files with 1044 additions and 577 deletions
|
@ -1,21 +1,25 @@
|
|||
# Clustering / High Availability (beta)
|
||||
|
||||
This guide explains how to use Træfik in high availability mode.
|
||||
|
||||
In order to deploy and configure multiple Træfik instances without copying the same configuration file to each instance, we will use a distributed Key-Value store.
|
||||
|
||||
## Prerequisites
|
||||
|
||||
You will need a working KV store cluster.
|
||||
_(Currently, we recommend [Consul](https://consul.io).)_
|
||||
|
||||
## File configuration to KV store migration
|
||||
|
||||
We created a special Træfik command to help you configure your Key-Value store from a Træfik TOML configuration file.
|
||||
|
||||
Please refer to [this section](/user-guide/kv-config/#store-configuration-in-key-value-store) to get more details.
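A hedged sketch of that migration command, assuming Consul as the store and the flag names used later in this documentation:

```shell
# Upload an existing TOML configuration into the KV store (flag names assumed):
traefik storeconfig \
  --configFile=traefik.toml \
  --consul --consul.endpoint=127.0.0.1:8500
```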
|
||||
|
||||
## Deploy a Træfik cluster
|
||||
|
||||
Once your Træfik configuration is uploaded to your KV store, you can start each Træfik instance.
|
||||
|
||||
A Træfik cluster is based on a manager/worker model.
|
||||
|
||||
When starting, Træfik will elect a manager.
|
||||
If this instance fails, another manager will be automatically elected.
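A minimal sketch of what starting each instance might look like, assuming Consul as the KV store and a reachable endpoint; the same command is run on every node and the instances elect a manager among themselves:

```shell
# Run on every node of the cluster (the endpoint is an assumption for this example):
traefik --consul --consul.endpoint=consul:8500
```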
|
||||
|
||||
|
|
|
@ -1,19 +1,24 @@
|
|||
# Docker & Traefik
|
||||
|
||||
In this use case, we want to use Traefik as a _layer-7_ load balancer with SSL termination for a set of micro-services used to run a web application.
|
||||
|
||||
We also want to automatically _discover any services_ on the Docker host and let Traefik reconfigure itself automatically when containers get created (or shut down) so HTTP traffic can be routed accordingly.
|
||||
|
||||
In addition, we want to use Let's Encrypt to automatically generate and renew SSL certificates per hostname.
|
||||
|
||||
## Setting Up
|
||||
|
||||
In order for this to work, you'll need a server with a public IP address, with Docker installed on it.
|
||||
|
||||
In this example, we're using the fictitious domain _my-awesome-app.org_.
|
||||
|
||||
In real life, you'll want to use your own domain and have the DNS configured accordingly, so that the hostname records you want to use point to the aforementioned public IP address.
|
||||
|
||||
## Networking
|
||||
|
||||
Docker containers can only communicate with each other over TCP when they share at least one network.
|
||||
This makes sense from a network topology point of view, since under the hood Docker creates iptables rules so containers can't reach other containers _unless you explicitly allow it_.
|
||||
|
||||
In this example, we're going to use a single network called `web`, in which all containers that handle HTTP traffic (including Traefik) will reside.
|
||||
|
||||
On the Docker host, run the following command:
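The command itself is elided from this diff hunk; it is presumably the standard network-creation step:

```shell
# Create the shared network that Traefik and the exposed containers will join:
docker network create web
```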
|
||||
|
@ -37,6 +42,7 @@ touch /opt/traefik/traefik.toml
|
|||
```
|
||||
|
||||
The `docker-compose.yml` file will provide us with a simple, consistent and, more importantly, deterministic way to create Traefik.
|
||||
|
||||
The contents of the file are as follows:
|
||||
|
||||
```yaml
|
||||
|
@ -62,10 +68,10 @@ networks:
|
|||
external: true
|
||||
```
|
||||
|
||||
As you can see, we're mounting the `traefik.toml` file as well as the (empty) `acme.json` file in the container.
|
||||
We're also mounting the `/var/run/docker.sock` Docker socket in the container, so Traefik can listen to Docker events and reconfigure its own internal configuration when containers are created (or shut down).
|
||||
|
||||
We're also making sure the container is automatically restarted by the Docker engine in case of problems (or if the server is rebooted).
|
||||
We're publishing the default HTTP and HTTPS ports `80` and `443` on the host, and making sure the container is placed within the `web` network we created earlier.
|
||||
|
||||
Finally, we're giving this container a static name called `traefik`.
|
||||
|
||||
Let's also take a look at a simple `traefik.toml` configuration before we create the Traefik container:
|
||||
|
@ -106,7 +112,8 @@ This is the minimum configuration required to do the following:
|
|||
- Check for new versions of Traefik periodically
|
||||
- Create two entry points, namely an `HTTP` endpoint on port `80`, and an `HTTPS` endpoint on port `443` where all incoming traffic on port `80` will immediately get redirected to `HTTPS`.
|
||||
- Enable the Docker configuration backend and listen for container events on the Docker unix socket we've mounted earlier. However, **new containers will not be exposed by Traefik by default, we'll get into this in a bit!**
|
||||
|
||||
- Enable automatic request and configuration of SSL certificates using Let's Encrypt.
|
||||
These certificates will be stored in the `acme.json` file, which you can back up yourself and store off-premises.
|
||||
|
||||
Alright, let's boot the container. From the `/opt/traefik` directory, run `docker-compose up -d`, which will create and start the Traefik container.
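For completeness, a minimal sketch of that step; the compose service name `traefik` is an assumption here:

```shell
cd /opt/traefik
docker-compose up -d
# Optionally follow the logs to confirm Traefik started cleanly:
docker-compose logs -f traefik
```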
|
||||
|
||||
|
@ -114,7 +121,9 @@ Alright, let's boot the container. From the `/opt/traefik` directory, run `docke
|
|||
|
||||
Now that we've fully configured and started Traefik, it's time to get our applications running!
|
||||
|
||||
|
||||
Let's take a simple example of a micro-service project consisting of various services, where some will be exposed to the outside world and some will not.
|
||||
|
||||
The `docker-compose.yml` of our project looks like this:
|
||||
|
||||
```yaml
|
||||
version: "2.1"
|
||||
|
@ -173,16 +182,19 @@ networks:
|
|||
external: true
|
||||
```
|
||||
|
||||
Here, we can see a set of services with two applications that we're actually exposing to the outside world.
|
||||
Notice how there isn't a single container that has any published ports to the host -- everything is routed through Docker networks.
|
||||
|
||||
Also, only the containers that we want traffic to get routed to are attached to the `web` network we created at the start of this document.
|
||||
|
||||
Since the `traefik` container we've created and started earlier is also attached to this network, HTTP requests can now get routed to these containers.
|
||||
|
||||
### Labels
|
||||
|
||||
As mentioned earlier, we don't want containers exposed automatically by Traefik.
|
||||
|
||||
The reason behind this is simple: we want to have control over this process ourselves.
|
||||
Thanks to Docker labels, we can tell Traefik how to create its internal routing configuration.
|
||||
|
||||
Let's take a look at the labels themselves for the `app` service, which is an HTTP web service listening on port 9000:
|
||||
|
||||
```yaml
|
||||
|
@ -194,14 +206,17 @@ Let's take a look at the labels themselves for the `app` service, which is a HTT
|
|||
```
|
||||
|
||||
First, we specify the `backend` name which corresponds to the actual service we're routing **to**.
|
||||
|
||||
|
||||
We also tell Traefik to use the `web` network to route HTTP traffic to this container.
|
||||
With the `frontend.rule` label, we tell Traefik that we want to route to this container if the incoming HTTP request contains the `Host` `app.my-awesome-app.org`.
|
||||
Essentially, this is the actual rule used for Layer-7 load balancing.
|
||||
With the `traefik.enable` label, we tell Traefik to include this container in its internal configuration.
|
||||
|
||||
Last but not least, we tell Traefik to route **to** port `9000`, since that is the TCP/IP port the container actually listens on.
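The compose file carrying these labels is elided from this diff, but the equivalent `docker run` form may make the mapping clearer; the image name below is hypothetical and the label keys are the Traefik v1 labels discussed above:

```shell
# Hypothetical container started with the labels described in this section:
docker run -d --network web \
  --label traefik.backend=app \
  --label traefik.docker.network=web \
  --label "traefik.frontend.rule=Host:app.my-awesome-app.org" \
  --label traefik.enable=true \
  --label traefik.port=9000 \
  my-awesome-app-image
```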
|
||||
|
||||
#### Gotchas and tips
|
||||
|
||||
|
||||
- Always specify the correct port where the container expects HTTP traffic, using the `traefik.port` label.
|
||||
If a container exposes multiple ports, Traefik may forward traffic to the wrong port.
|
||||
Even if a container only exposes one port, you should always write configuration defensively and explicitly.
|
||||
- Should you choose to enable the `exposedbydefault` flag in the `traefik.toml` configuration, be aware that all containers placed in the same network as Traefik will automatically be reachable from the outside world, for anyone and everyone to see.
|
||||
|
@ -213,5 +228,6 @@ Finally but not unimportantly, we tell Traefik to route **to** port `9000`, sinc
|
|||
|
||||
### Final thoughts
|
||||
|
||||
|
||||
Using Traefik as a Layer-7 load balancer in combination with both Docker and Let's Encrypt provides you with an extremely flexible, powerful and self-configuring solution for your projects.
|
||||
|
||||
With Let's Encrypt, your endpoints are automatically secured with production-ready SSL certificates that are renewed automatically as well.
|
||||
|
|
|
@ -22,11 +22,11 @@ defaultEntryPoints = ["http", "https"]
|
|||
address = ":443"
|
||||
[entryPoints.https.tls]
|
||||
[[entryPoints.https.tls.certificates]]
|
||||
CertFile = "integration/fixtures/https/snitest.com.cert"
|
||||
KeyFile = "integration/fixtures/https/snitest.com.key"
|
||||
certFile = "integration/fixtures/https/snitest.com.cert"
|
||||
keyFile = "integration/fixtures/https/snitest.com.key"
|
||||
[[entryPoints.https.tls.certificates]]
|
||||
CertFile = "integration/fixtures/https/snitest.org.cert"
|
||||
KeyFile = "integration/fixtures/https/snitest.org.key"
|
||||
certFile = "integration/fixtures/https/snitest.org.cert"
|
||||
keyFile = "integration/fixtures/https/snitest.org.key"
|
||||
```
|
||||
Note that we can either give the path to a certificate file or directly the file content itself ([like in this TOML example](/user-guide/kv-config/#upload-the-configuration-in-the-key-value-store)).
|
||||
|
||||
|
@ -43,8 +43,8 @@ defaultEntryPoints = ["http", "https"]
|
|||
address = ":443"
|
||||
[entryPoints.https.tls]
|
||||
[[entryPoints.https.tls.certificates]]
|
||||
CertFile = "examples/traefik.crt"
|
||||
KeyFile = "examples/traefik.key"
|
||||
certFile = "examples/traefik.crt"
|
||||
keyFile = "examples/traefik.key"
|
||||
```
|
||||
|
||||
## Let's Encrypt support
|
||||
|
@ -76,6 +76,7 @@ entryPoint = "https"
|
|||
```
|
||||
|
||||
This configuration allows generating Let's Encrypt certificates for the four domains `local[1-4].com` with described SANs.
|
||||
|
||||
Traefik generates these certificates when it starts, and it needs to be restarted if new domains are added.
|
||||
|
||||
### OnHostRule option
|
||||
|
@ -106,6 +107,7 @@ entryPoint = "https"
|
|||
```
|
||||
|
||||
This configuration allows generating Let's Encrypt certificates for the four domains `local[1-4].com`.
|
||||
|
||||
Traefik generates these certificates when it starts.
|
||||
|
||||
If a backend is added with an `onHost` rule, Traefik will automatically generate the Let's Encrypt certificate for the new domain.
|
||||
|
@ -121,10 +123,9 @@ If a backend is added with a `onHost` rule, Traefik will automatically generate
|
|||
[acme]
|
||||
email = "test@traefik.io"
|
||||
storage = "acme.json"
|
||||
OnDemand = true
|
||||
onDemand = true
|
||||
caServer = "http://172.18.0.1:4000/directory"
|
||||
entryPoint = "https"
|
||||
|
||||
```
|
||||
|
||||
This configuration allows generating a Let's Encrypt certificate during the first HTTPS request on a new domain.
|
||||
|
@ -166,8 +167,10 @@ entryPoint = "https"
|
|||
main = "local4.com"
|
||||
```
|
||||
|
||||
|
||||
The DNS challenge needs environment variables in order to be executed.
|
||||
These variables have to be set on the machine/container that hosts Traefik.
|
||||
|
||||
These variables are described [in this section](/configuration/acme/#dnsprovider).
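A hypothetical example for the Route 53 provider; the exact variable names depend on the DNS provider you configure, so treat this purely as an illustration:

```shell
# Provider credentials must be visible to the Traefik process (values are placeholders):
export AWS_ACCESS_KEY_ID="..."
export AWS_SECRET_ACCESS_KEY="..."
export AWS_REGION="us-east-1"
traefik --configFile=traefik.toml
```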
|
||||
|
||||
### OnHostRule option and provided certificates
|
||||
|
||||
|
@ -177,8 +180,8 @@ These variables has described [in this section](toml/#acme-lets-encrypt-configur
|
|||
address = ":443"
|
||||
[entryPoints.https.tls]
|
||||
[[entryPoints.https.tls.certificates]]
|
||||
CertFile = "examples/traefik.crt"
|
||||
KeyFile = "examples/traefik.key"
|
||||
certFile = "examples/traefik.crt"
|
||||
keyFile = "examples/traefik.key"
|
||||
|
||||
[acme]
|
||||
email = "test@traefik.io"
|
||||
|
@ -226,7 +229,6 @@ entryPoint = "https"
|
|||
endpoint = "127.0.0.1:8500"
|
||||
watch = true
|
||||
prefix = "traefik"
|
||||
|
||||
```
|
||||
|
||||
This configuration allows using the key `traefik/acme/account` to get/set the Let's Encrypt certificates content.
|
||||
|
@ -277,7 +279,7 @@ defaultEntryPoints = ["http"]
|
|||
## Pass Authenticated user to application via headers
|
||||
|
||||
When providing an authentication method as described above, it is possible to pass the user to the application
|
||||
|
||||
via a configurable header value.
|
||||
|
||||
```toml
|
||||
defaultEntryPoints = ["http"]
|
||||
|
@ -293,6 +295,8 @@ defaultEntryPoints = ["http"]
|
|||
## Override the Traefik HTTP server IdleTimeout and/or throttle configurations from re-loading too quickly
|
||||
|
||||
```toml
|
||||
IdleTimeout = "360s"
|
||||
ProvidersThrottleDuration = "5s"
|
||||
providersThrottleDuration = "5s"
|
||||
|
||||
[respondingTimeouts]
|
||||
idleTimeout = "360s"
|
||||
```
|
||||
|
|
|
@ -1,6 +1,7 @@
|
|||
# Kubernetes Ingress Controller
|
||||
|
||||
This guide explains how to use Træfik as an Ingress controller in a Kubernetes cluster.
|
||||
|
||||
If you are not familiar with Ingresses in Kubernetes, you might want to read the [Kubernetes user guide](https://kubernetes.io/docs/concepts/services-networking/ingress/).
|
||||
|
||||
The config files used in this guide can be found in the [examples directory](https://github.com/containous/traefik/tree/master/examples/k8s).
|
||||
|
@ -72,9 +73,10 @@ kubectl apply -f https://raw.githubusercontent.com/containous/traefik/master/exa
|
|||
|
||||
It is possible to use Træfik with a [Deployment](https://kubernetes.io/docs/concepts/workloads/controllers/deployment/) or a [DaemonSet](https://kubernetes.io/docs/concepts/workloads/controllers/daemonset/) object,
|
||||
where both options have their own pros and cons:
|
||||
|
||||
|
||||
- The scalability is much better when using a Deployment, because you will have a Single-Pod-per-Node model when using the DaemonSet.
|
||||
- It is possible to exclusively run a Service on a dedicated set of machines using taints and tolerations with a DaemonSet.
|
||||
- On the other hand, the DaemonSet allows you to access any Node directly on ports 80 and 443, whereas with a Deployment you have to set up a [Service](https://kubernetes.io/docs/concepts/services-networking/service/) object.
|
||||
|
||||
The Deployment object looks like this:
|
||||
|
||||
|
@ -131,7 +133,8 @@ spec:
|
|||
```
|
||||
[examples/k8s/traefik-deployment.yaml](https://github.com/containous/traefik/tree/master/examples/k8s/traefik-deployment.yaml)
|
||||
|
||||
|
||||
!!! note
|
||||
The Service will expose two NodePorts which allow access to the ingress and the web interface.
|
||||
|
||||
The DaemonSet object does not look much different:
|
||||
|
||||
|
@ -198,20 +201,20 @@ spec:
|
|||
To deploy Træfik to your cluster start by submitting one of the YAML files to the cluster with `kubectl`:
|
||||
|
||||
```shell
|
||||
$ kubectl apply -f https://raw.githubusercontent.com/containous/traefik/master/examples/k8s/traefik-deployment.yaml
|
||||
kubectl apply -f https://raw.githubusercontent.com/containous/traefik/master/examples/k8s/traefik-deployment.yaml
|
||||
```
|
||||
|
||||
```shell
|
||||
$ kubectl apply -f https://raw.githubusercontent.com/containous/traefik/master/examples/k8s/traefik-ds.yaml
|
||||
kubectl apply -f https://raw.githubusercontent.com/containous/traefik/master/examples/k8s/traefik-ds.yaml
|
||||
```
|
||||
|
||||
|
||||
|
||||
There are some significant differences between using Deployments and DaemonSets:
|
||||
|
||||
- The Deployment has easier up and down scaling possibilities.
|
||||
It can implement full pod lifecycle and supports rolling updates from Kubernetes 1.2.
|
||||
At least one Pod is needed to run the Deployment.
|
||||
- The DaemonSet automatically scales to all nodes that meet a specific selector and guarantees to fill nodes one at a time.
|
||||
Rolling updates are fully supported from Kubernetes 1.7 for DaemonSets as well.
|
||||
|
||||
### Check the Pods
|
||||
|
||||
|
@ -220,8 +223,10 @@ Now lets check if our command was successful.
|
|||
Start by listing the pods in the `kube-system` namespace:
|
||||
|
||||
```shell
|
||||
$ kubectl --namespace=kube-system get pods
|
||||
kubectl --namespace=kube-system get pods
|
||||
```
|
||||
|
||||
```
|
||||
NAME READY STATUS RESTARTS AGE
|
||||
kube-addon-manager-minikubevm 1/1 Running 0 4h
|
||||
kubernetes-dashboard-s8krj 1/1 Running 0 4h
|
||||
|
@ -231,14 +236,17 @@ traefik-ingress-controller-678226159-eqseo 1/1 Running 0 7m
|
|||
You should see that after submitting the Deployment or DaemonSet to Kubernetes it has launched a Pod, and it is now running.
|
||||
_It might take a few moments for Kubernetes to pull the Træfik image and start the container._
|
||||
|
||||
|
||||
!!! note
|
||||
You could also check the deployment with the Kubernetes dashboard, run
|
||||
`minikube dashboard` to open it in your browser, then choose the `kube-system`
|
||||
namespace from the menu at the top right of the screen.
|
||||
|
||||
You should now be able to access Træfik on port 80 of your Minikube instance when using the DaemonSet:
|
||||
|
||||
```sh
|
||||
curl $(minikube ip)
|
||||
```
|
||||
```
|
||||
404 page not found
|
||||
```
|
||||
|
||||
|
@ -246,20 +254,24 @@ If you decided to use the deployment, then you need to target the correct NodePo
|
|||
|
||||
```sh
|
||||
curl $(minikube ip):<NODEPORT>
|
||||
```
|
||||
```
|
||||
404 page not found
|
||||
```
|
||||
|
||||
|
||||
!!! note
|
||||
We expect to see a 404 response here as we haven't yet given Træfik any configuration.
|
||||
|
||||
## Deploy Træfik using Helm Chart
|
||||
|
||||
Instead of installing Træfik via your own object definitions, you can also use the Træfik Helm chart.
|
||||
|
||||
This allows more complex configuration via Kubernetes [ConfigMap](https://kubernetes.io/docs/tasks/configure-pod-container/configmap/) and enables TLS certificates.
|
||||
|
||||
Install the Træfik chart by running:
|
||||
|
||||
```shell
|
||||
$ helm install stable/traefik
|
||||
helm install stable/traefik
|
||||
```
|
||||
|
||||
For more information, check out [the doc](https://github.com/kubernetes/charts/tree/master/stable/traefik).
|
||||
|
@ -305,9 +317,8 @@ kubectl apply -f https://raw.githubusercontent.com/containous/traefik/master/exa
|
|||
|
||||
Now let's set up an entry in our /etc/hosts file to route `traefik-ui.minikube` to our cluster.
|
||||
|
||||
|
||||
In production you would want to set up real DNS entries.
|
||||
You can get the IP address of your Minikube instance by running `minikube ip`.
|
||||
|
||||
```shell
|
||||
echo "$(minikube ip) traefik-ui.minikube" | sudo tee -a /etc/hosts
|
||||
|
@ -474,8 +485,8 @@ spec:
|
|||
task: wensleydale
|
||||
```
|
||||
|
||||
|
||||
!!! note
|
||||
We also set a [circuit breaker expression](/basics/#backends) for one of the backends by setting the `traefik.backend.circuitbreaker` annotation on the service.
|
||||
|
||||
|
||||
[examples/k8s/cheese-services.yaml](https://github.com/containous/traefik/tree/master/examples/k8s/cheese-services.yaml)
|
||||
|
@ -519,13 +530,15 @@ spec:
|
|||
```
|
||||
[examples/k8s/cheese-ingress.yaml](https://github.com/containous/traefik/tree/master/examples/k8s/cheese-ingress.yaml)
|
||||
|
||||
|
||||
!!! note
|
||||
We list each hostname and add a backend service.
|
||||
|
||||
```shell
|
||||
kubectl apply -f https://raw.githubusercontent.com/containous/traefik/master/examples/k8s/cheese-ingress.yaml
|
||||
```
|
||||
|
||||
|
||||
Now visit the [Træfik dashboard](http://traefik-ui.minikube/) and you should see a frontend for each host.
|
||||
You should also see a backend listing for each service, with a server set up for each pod.
|
||||
|
||||
If you edit your `/etc/hosts` again you should be able to access the cheese websites in your browser.
|
||||
|
||||
|
@ -543,7 +556,6 @@ Now lets suppose that our fictional client has decided that while they are super
|
|||
|
||||
No problem, we say; why don't we reconfigure the sites to host all three under one domain?
|
||||
|
||||
|
||||
```yaml
|
||||
apiVersion: extensions/v1beta1
|
||||
kind: Ingress
|
||||
|
@ -572,9 +584,8 @@ spec:
|
|||
```
|
||||
[examples/k8s/cheeses-ingress.yaml](https://github.com/containous/traefik/tree/master/examples/k8s/cheeses-ingress.yaml)
|
||||
|
||||
|
||||
!!! note
|
||||
We are configuring Træfik to strip the prefix from the URL path with the `traefik.frontend.rule.type` annotation, so that we can use the containers from the previous example without modification.
|
||||
|
||||
```shell
|
||||
kubectl apply -f https://raw.githubusercontent.com/containous/traefik/master/examples/k8s/cheeses-ingress.yaml
|
||||
|
@ -632,18 +643,20 @@ spec:
|
|||
## Forwarding to ExternalNames
|
||||
|
||||
When specifying an [ExternalName](https://kubernetes.io/docs/concepts/services-networking/service/#services-without-selectors),
|
||||
Træfik will forward requests to the given host accordingly and use HTTPS when the Service port matches 443.
|
||||
|
||||
This still requires setting up a proper port mapping on the Service from the Ingress port to the (external) Service port.
|
||||
|
||||
## Disable passing the Host header
|
||||
|
||||
By default Træfik will pass the incoming Host header on to the upstream resource.
|
||||
|
||||
There are times, however, when you may not want this to be the case.
|
||||
For example, if your service is of the `ExternalName` type.
|
||||
|
||||
### Disable entirely
|
||||
|
||||
Add the following to your toml config:
|
||||
|
||||
```toml
|
||||
disablePassHostHeaders = true
|
||||
```
|
||||
|
@ -653,6 +666,7 @@ disablePassHostHeaders = true
|
|||
To disable passing the Host header per ingress resource set the `traefik.frontend.passHostHeader` annotation on your ingress to `false`.
|
||||
|
||||
Here is an example ingress definition:
|
||||
|
||||
```yaml
|
||||
apiVersion: extensions/v1beta1
|
||||
kind: Ingress
|
||||
|
@ -673,6 +687,7 @@ spec:
|
|||
```
|
||||
|
||||
And an example service definition:
|
||||
|
||||
```yaml
|
||||
apiVersion: v1
|
||||
kind: Service
|
||||
|
@ -696,6 +711,7 @@ If you were to visit `example.com/static` the request would then be passed onto
|
|||
## Excluding an ingress from Træfik
|
||||
|
||||
You can control which ingress Træfik cares about by using the `kubernetes.io/ingress.class` annotation.
|
||||
|
||||
By default, if the annotation is not set at all, Træfik will include the ingress.
|
||||
If the annotation is set to anything other than `traefik` or a blank string, Træfik will ignore it.
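For instance, a hedged sketch of opting an existing ingress out of Træfik (the ingress name is hypothetical, and any non-`traefik` value works):

```shell
# Mark the ingress for a different controller so Træfik ignores it:
kubectl annotate ingress my-ingress kubernetes.io/ingress.class=nginx
```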
|
||||
|
||||
|
|
|
@ -14,13 +14,16 @@ Træfik supports several Key-value stores:
|
|||
## Static configuration in Key-value store
|
||||
|
||||
We will see the steps to set it up with an easy example.
|
||||
|
||||
|
||||
|
||||
!!! note
|
||||
We could do the same with any other Key-value Store.
|
||||
|
||||
### docker-compose file for Consul
|
||||
|
||||
The Træfik global configuration will be fetched from a [Consul](https://consul.io) store.
|
||||
|
||||
First we have to launch Consul in a container.
|
||||
|
||||
The [docker-compose file](https://docs.docker.com/compose/compose-file/) allows us to launch Consul and four instances of the trivial app [emilevauge/whoamI](https://github.com/emilevauge/whoamI):
|
||||
|
||||
```yaml
|
||||
|
@ -51,12 +54,12 @@ whoami4:
|
|||
image: emilevauge/whoami
|
||||
```
|
||||
|
||||
## Upload the configuration in the Key-value store
|
||||
### Upload the configuration in the Key-value store
|
||||
|
||||
We should now fill the store with the Træfik global configuration, as we do with a [TOML file configuration](/toml).
|
||||
|
||||
To do that, we can send the Key-value pairs via [curl commands](https://www.consul.io/intro/getting-started/kv.html) or via the [Web UI](https://www.consul.io/intro/getting-started/ui.html).
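For example, a minimal sketch of setting one key through the Consul HTTP API (a local agent is assumed, and the key layout simply mirrors the TOML structure shown below):

```shell
# Store the HTTPS entrypoint address as a single key/value pair:
curl -X PUT -d ':443' http://127.0.0.1:8500/v1/kv/traefik/entrypoints/https/address
```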
|
||||
|
||||
Fortunately, Træfik allows automation of this process using the `storeconfig` subcommand.
|
||||
|
||||
Please refer to the [store Træfik configuration](/user-guide/kv-config/#store-configuration-in-key-value-store) section to get documentation on it.
|
||||
|
||||
Here is the TOML configuration we would like to store in the Key-value Store:
|
||||
|
@ -83,7 +86,6 @@ defaultEntryPoints = ["http", "https"]
|
|||
<key file content>
|
||||
-----END CERTIFICATE-----"""
|
||||
|
||||
|
||||
[consul]
|
||||
endpoint = "127.0.0.1:8500"
|
||||
watch = true
|
||||
|
@ -118,9 +120,10 @@ In case you are setting key values manually:
|
|||
|
||||
Note that we can either give path to certificate file or directly the file content itself.
|
||||
|
||||
## Launch Træfik
|
||||
### Launch Træfik
|
||||
|
||||
We will now launch Træfik in a container.
|
||||
|
||||
We use CLI flags to set up the connection between Træfik and Consul.
|
||||
All the rest of the global configuration is stored in Consul.
|
||||
|
||||
|
@ -138,21 +141,23 @@ traefik:
|
|||
!!! warning
|
||||
Be careful to give the correct IP address and port in the flag `--consul.endpoint`.
|
||||
|
||||
## Consul ACL Token support
|
||||
### Consul ACL Token support
|
||||
|
||||
|
||||
To specify a Consul ACL token for Traefik, we have to set a System Environment variable named `CONSUL_HTTP_TOKEN` prior to starting Traefik.
|
||||
This variable must be initialized with the ACL token value.
|
||||
|
||||
If Traefik is launched in a Docker container, the variable `CONSUL_HTTP_TOKEN` can be initialized with the `-e` Docker option: `-e "CONSUL_HTTP_TOKEN=[consul-acl-token-value]"`
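A hedged sketch of both approaches (the token value is a placeholder and the endpoints are assumptions):

```shell
# Export the token before starting Traefik directly:
export CONSUL_HTTP_TOKEN="my-acl-token"
traefik --consul --consul.endpoint=127.0.0.1:8500

# Or pass it to a containerized Traefik with the -e Docker option:
docker run -e "CONSUL_HTTP_TOKEN=my-acl-token" traefik --consul --consul.endpoint=consul:8500
```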
|
||||
|
||||
## TLS support
|
||||
### TLS support
|
||||
|
||||
To connect to a Consul endpoint using SSL, simply specify `https://` in the `consul.endpoint` property:
|
||||
|
||||
- `--consul.endpoint=https://[consul-host]:[consul-ssl-port]`
|
||||
|
||||
## TLS support with client certificates
|
||||
### TLS support with client certificates
|
||||
|
||||
So far, only [Consul](https://consul.io) and [etcd](https://coreos.com/etcd/) support TLS connections with client certificates.
|
||||
|
||||
To set it up, we should enable [consul security](https://www.consul.io/docs/internals/security.html) (or [etcd security](https://coreos.com/etcd/docs/latest/security.html)).
|
||||
|
||||
Then, we have to provide CA, Cert and Key to Træfik using `consul` flags:
|
||||
|
@ -169,18 +174,20 @@ Or etcd flags :
|
|||
- `--etcd.tls.cert=path/to/the/file`
|
||||
- `--etcd.tls.key=path/to/the/file`
|
||||
|
||||
|
||||
!!! note
|
||||
We can also give the file content itself directly (instead of the path to the certificate) in a TOML file configuration.
|
||||
|
||||
Remember the command `traefik --help` to display the updated list of flags.
|
||||
|
||||
# Dynamic configuration in Key-value store
|
||||
## Dynamic configuration in Key-value store
|
||||
|
||||
Following our example, we will provide backends/frontends rules to Træfik.
|
||||
|
||||
|
||||
!!! note
|
||||
This section is independent of the way Træfik got its static configuration.
|
||||
It means that the static configuration can either come from the same Key-value store or from any other source.
|
||||
|
||||
## Key-value storage structure
|
||||
### Key-value storage structure
|
||||
|
||||
Here is the TOML configuration we would like to store in the store:
|
||||
|
||||
|
@ -272,14 +279,15 @@ And there, the same dynamic configuration in a KV Store (using `prefix = "traefi
|
|||
| `/traefik/frontends/frontend2/entrypoints` | `http,https` |
|
||||
| `/traefik/frontends/frontend2/routes/test_2/rule` | `PathPrefix:/test` |
|
||||
|
||||
## Atomic configuration changes
|
||||
### Atomic configuration changes
|
||||
|
||||
Træfik can watch the backends/frontends configuration changes and generate its configuration automatically.
|
||||
|
||||
|
||||
!!! note
|
||||
Only the backends/frontends rules are dynamic; the rest of the Træfik configuration stays static.
|
||||
|
||||
|
||||
The [Etcd](https://github.com/coreos/etcd/issues/860) and [Consul](https://github.com/hashicorp/consul/issues/886) backends do not support updating multiple keys atomically.
|
||||
As a result, it may be possible for Træfik to read an intermediate configuration state despite judicious use of the `--providersThrottleDuration` flag.
|
||||
To solve this problem, Træfik supports a special key called `/traefik/alias`.
|
||||
If set, Træfik uses the value as an alternative key prefix.
|
||||
|
||||
|
@ -292,6 +300,7 @@ Given the key structure below, Træfik will use the `http://172.17.0.2:80` as it
|
|||
| `/traefik_configurations/1/backends/backend1/servers/server1/weight` | `10` |
|
||||
|
||||
When an atomic configuration change is required, you may write a new configuration at an alternative prefix.
|
||||
|
||||
Here, although the `/traefik_configurations/2/...` keys have been set, the old configuration is still active because the `/traefik/alias` key still points to `/traefik_configurations/1`:
|
||||
|
||||
| Key | Value |
|
||||
|
@ -305,6 +314,7 @@ Here, although the `/traefik_configurations/2/...` keys have been set, the old c
|
|||
| `/traefik_configurations/2/backends/backend1/servers/server2/weight` | `5` |
|
||||
|
||||
Once the `/traefik/alias` key is updated, the new `/traefik_configurations/2` configuration becomes active atomically.
|
||||
|
||||
Here, we have a 50% balance between the `http://172.17.0.3:80` and the `http://172.17.0.4:80` hosts while no traffic is sent to the `172.17.0.2:80` host:
|
||||
|
||||
| Key | Value |
|
||||
|
@ -317,22 +327,25 @@ Here, we have a 50% balance between the `http://172.17.0.3:80` and the `http://1
|
|||
| `/traefik_configurations/2/backends/backend1/servers/server2/url` | `http://172.17.0.4:80` |
|
||||
| `/traefik_configurations/2/backends/backend1/servers/server2/weight` | `5` |
|
||||
|
||||
|
||||
!!! note
|
||||
Træfik *will not watch for key changes in the `/traefik_configurations` prefix*. It will only watch for changes in the `/traefik/alias`.
|
||||
Further, if the `/traefik/alias` key is set, all other configuration keys with the `/traefik/backends` or `/traefik/frontends` prefix are ignored.
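In practice, switching the active configuration then only requires updating the alias key; a minimal sketch with the Consul CLI (any KV client works the same way):

```shell
# Point the alias at the new configuration tree to activate it atomically:
consul kv put traefik/alias /traefik_configurations/2
```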
|
||||
|
||||
# Store configuration in Key-value store
|
||||
## Store configuration in Key-value store
|
||||
|
||||
!!! note
|
||||
Don't forget to [set up the connection between Træfik and the Key-value store](/user-guide/kv-config/#launch-trfk).
|
||||
|
||||
|
||||
The static Træfik configuration in a key-value store can be automatically created and updated, using the [`storeconfig` subcommand](/basics/#commands).
|
||||
|
||||
```bash
|
||||
traefik storeconfig [flags] ...
|
||||
```
|
||||
This command is here only to automate the [process which uploads the configuration into the Key-value store](/user-guide/kv-config/#upload-the-configuration-in-the-key-value-store).
|
||||
|
||||
Træfik will not start but the [static configuration](/basics/#static-trfk-configuration) will be uploaded into the Key-value store.
|
||||
If you configured ACME (Let's Encrypt), your registration account and your certificates will also be uploaded.
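After running `storeconfig`, you can verify what was uploaded; a small sketch assuming Consul is the store:

```shell
# List every key under the traefik prefix to confirm the upload:
consul kv get -recurse traefik/
```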
|
||||
|
||||
|
||||
To upload your ACME certificates to the KV store, get your Traefik TOML file and add the new `storage` option in the `acme` section:
|
||||
|
||||
```toml
|
||||
[acme]
|
||||
|
|
|
@ -2,7 +2,7 @@
|
|||
|
||||
This guide explains how to integrate Marathon and operate the cluster in a reliable way from Traefik's standpoint.
|
||||
|
||||
# Host detection
|
||||
## Host detection
|
||||
|
||||
Marathon offers multiple ways to run (Docker-containerized) applications, the most popular ones being
|
||||
|
||||
|
@ -14,9 +14,11 @@ Traefik tries to detect the configured mode and route traffic to the right IP ad
|
|||
|
||||
Given the complexity of the subject, it is possible that the heuristic fails.
|
||||
Apart from filing an issue and waiting for the feature request / bug report to get addressed, one workaround for such situations is to customize the Marathon template file to the individual needs.
|
||||
|
||||
!!! note
|
||||
This does _not_ require rebuilding Traefik; it only requires pointing the `filename` configuration parameter to a customized version of the `marathon.tmpl` file on Traefik startup.
|
||||
|
||||
## Port detection
|
||||
|
||||
Traefik also attempts to determine the right port (which is a [non-trivial matter in Marathon](https://mesosphere.github.io/marathon/docs/ports.html)).
|
||||
Following is the order in which Traefik tries to identify the port (the first one that yields a positive result will be used):
|
||||
|
@ -26,9 +28,9 @@ Following is the order by which Traefik tries to identify the port (the first on
|
|||
1. The port from the application's `portDefinitions` field (possibly indexed through the `traefik.portIndex` label, otherwise the first one).
|
||||
1. The port from the application's `ipAddressPerTask` field (possibly indexed through the `traefik.portIndex` label, otherwise the first one).
|
||||
|
||||
# Achieving high availability
|
||||
## Achieving high availability
|
||||
|
||||
## Scenarios
|
||||
### Scenarios
|
||||
|
||||
There are three scenarios where the availability of a Marathon application could be impaired along with the risk of losing or failing requests:
|
||||
|
||||
|
@ -36,27 +38,29 @@ There are three scenarios where the availability of a Marathon application could
|
|||
- During the shutdown phase when Traefik still routes requests to the backend while the backend is already terminating.
|
||||
- During a failure of the application when Traefik has not yet identified the backend as being erroneous.
|
||||
|
||||
The first two scenarios are common with every rolling upgrade of an application (i.e., a new version release or configuration update).
|
||||
|
||||
|
||||
The following sub-sections describe how to resolve or mitigate each scenario.
|
||||
|
||||
### Startup
|
||||
#### Startup
|
||||
|
||||
It is possible to define [readiness checks](https://mesosphere.github.io/marathon/docs/readiness-checks.html) (available since Marathon version 1.1) per application and have Marathon take these into account during the startup phase.
|
||||
|
||||
|
||||
The idea is that each application provides an HTTP endpoint that Marathon queries periodically during an ongoing deployment in order to mark the associated readiness check result as successful if and only if the endpoint returns a response within the configured HTTP code range.
|
||||
As long as the check keeps failing, Marathon will not proceed with the deployment (within the configured upgrade strategy bounds).
|
||||
|
||||
Beginning with version 1.4, Traefik respects readiness check results if the Traefik option is set and checks are configured on the applications accordingly.
|
||||
|
||||
|
||||
!!! note
|
||||
Due to the way readiness check results are currently exposed by the Marathon API, ready tasks may be taken into rotation with a small delay.
|
||||
It is on the order of one readiness check timeout interval (as configured on the application specification) and guarantees that non-ready tasks do not receive traffic prematurely.
|
||||
|
||||
If readiness checks are not possible, a current mitigation strategy is to enable [retries](/configuration/commons#retry-configuration) and make sure that a sufficient number of healthy application tasks exist so that one retry will likely hit one of those.
|
||||
Apart from its probabilistic nature, the workaround comes at the price of increased latency.
|
||||
|
||||
### Shutdown
|
||||
#### Shutdown
|
||||
|
||||
|
||||
It is possible to install a [termination handler](https://mesosphere.github.io/marathon/docs/health-checks.html) (available since Marathon version 1.3) with each application whose responsibility it is to delay the shutdown process long enough until the backend has been taken out of load-balancing rotation with reasonable confidence (i.e., Traefik has received an update from the Marathon event bus, recomputes the available Marathon backends, and applies the new configuration).
|
||||
Specifically, each termination handler should install a signal handler listening for a SIGTERM signal and implement the following steps on signal reception:
|
||||
|
||||
1. Disable Keep-Alive HTTP connections.
|
||||
|
@ -70,12 +74,13 @@ Traefik already ignores Marathon tasks whose state does not match `TASK_RUNNING`
|
|||
How long HTTP requests should continue to be accepted in step 2 depends on how long Traefik needs to receive and process the Marathon configuration update.
|
||||
Under regular operational conditions, it should be on the order of seconds, with 10 seconds possibly being a good default value.
|
||||
|
||||
|
||||
Again, configuring Traefik to do retries (as discussed in the previous section) can serve as a decent workaround strategy.
|
||||
Paired with termination handlers, they would cover for those cases where either the termination sequence or Traefik cannot complete their part of the orchestration process in time.
|
||||
|
||||
### Failure
|
||||
#### Failure
|
||||
|
||||
A failing application always happens unexpectedly, and hence, it is very difficult or even impossible to rule out the adverse effects categorically.
|
||||
|
||||
Failure reasons vary broadly and could range from unacceptable slowness to a task crash or a network split.
|
||||
|
||||
There are two mitigation efforts:
|
||||
|
@ -85,19 +90,22 @@ There are two mitigaton efforts:
|
|||
|
||||
The Marathon health check makes sure that applications once deemed dysfunctional are being rescheduled to different slaves.
|
||||
However, they might take a while to get triggered and the follow-up processes to complete.
|
||||
|
||||
For that reason, the Traefik health check provides an additional check that responds more rapidly and does not require a configuration reload to happen.
|
||||
Additionally, it protects from cases that the Marathon health check may not be able to cover, such as a network split.
|
||||
|
||||
## (Non-)Alternatives
|
||||
### (Non-)Alternatives
|
||||
|
||||
|
||||
There are a few alternatives of varying quality that are frequently asked for.
|
||||
|
||||
|
||||
The remaining section is going to explore them along with a benefit/cost trade-off.
|
||||
|
||||
#### Reusing Marathon health checks
|
||||
|
||||
It may seem obvious to reuse the Marathon health checks as a signal to Traefik whether an application should be taken into load-balancing rotation or not.
|
||||
|
||||
Apart from the increased latency a failing health check may have, a major problem with this is that Marathon does not persist the health check results.
|
||||
Consequently, if a master re-election occurs in the Marathon clusters, all health check results will revert to the _unknown_ state, effectively causing all applications inside the cluster to become unavailable and leading to a complete cluster failure.
|
||||
|
||||
Re-elections do not only happen during regular maintenance work (often requiring rolling upgrades of the Marathon nodes) but also when the Marathon leader fails spontaneously.
|
||||
As such, there is no way to handle this situation deterministically.
|
||||
|
||||
|
@ -106,11 +114,14 @@ Finally, Marathon health checks are not mandatory (the default is to use the tas
|
|||
Traefik used to use the health check results as a strict requirement but moved away from it as [users reported the dramatic consequences](https://github.com/containous/traefik/issues/653).
|
||||
If health check results are known to exist, however, they will be used to signal task availability.
|
||||
|
||||
### Draining
|
||||
#### Draining
|
||||
|
||||
|
||||
Another common approach is to let a proxy drain backends that are supposed to shut down.
|
||||
That is, once a backend is supposed to shut down, Traefik would stop forwarding requests.
|
||||
|
||||
On the plus side, this would not require any modifications to the application in question.
|
||||
However, implementing this fully within Traefik seems like a non-trivial undertaking.
|
||||
|
||||
|
||||
Additionally, the approach is less flexible compared to a custom termination handler since only the latter allows for the implementation of custom termination sequences that go beyond simple request draining (e.g., persisting a snapshot state to disk prior to terminating).
|
||||
|
||||
The feature is currently not implemented; a request for draining in general is at [issue 41](https://github.com/containous/traefik/issues/41).
|
||||
|
|
|
@ -17,8 +17,8 @@ The cluster consists of:
|
|||
|
||||
## Cluster provisioning
|
||||
|
||||
|
||||
First, let's create all the required nodes.
|
||||
It's a shorter version of the [swarm tutorial](https://docs.docker.com/engine/swarm/swarm-tutorial/).
|
||||
|
||||
```shell
|
||||
docker-machine create -d virtualbox manager
|
||||
|
@ -29,8 +29,8 @@ docker-machine create -d virtualbox worker2
|
|||
Then, let's set up the cluster, in order:
|
||||
|
||||
1. initialize the cluster
|
||||
|
||||
1. get the token for the other hosts to join
|
||||
1. on both workers, join the cluster with the token
|
||||
|
||||
```shell
|
||||
docker-machine ssh manager "docker swarm init \
|
||||
|
@ -94,17 +94,19 @@ docker-machine ssh manager "docker service create \
|
|||
|
||||
Let's explain this command:
|
||||
|
||||
- `--publish 80:80 --publish 8080:8080`: we publish port `80` and `8080` on the cluster.
|
||||
- `--constraint=node.role==manager`: we ask docker to schedule Træfik on a manager node.
|
||||
- `--mount type=bind,source=/var/run/docker.sock,target=/var/run/docker.sock`:
|
||||
we bind mount the docker socket where Træfik is scheduled to be able to speak to the daemon.
|
||||
- `--network traefik-net`: we attach the Træfik service (and thus the underlying container) to the `traefik-net` network.
|
||||
- `--docker`: enable docker backend, and `--docker.swarmmode` to enable the swarm mode on Træfik.
|
||||
- `--web`: activate the webUI on port 8080
|
||||
| Option | Description |
|
||||
|-----------------------------------------------------------------------------|------------------------------------------------------------------------------------------------|
|
||||
| `--publish 80:80 --publish 8080:8080` | we publish port `80` and `8080` on the cluster. |
|
||||
| `--constraint=node.role==manager` | we ask docker to schedule Træfik on a manager node. |
|
||||
| `--mount type=bind,source=/var/run/docker.sock,target=/var/run/docker.sock` | we bind mount the docker socket where Træfik is scheduled to be able to speak to the daemon. |
|
||||
| `--network traefik-net` | we attach the Træfik service (and thus the underlying container) to the `traefik-net` network. |
|
||||
| `--docker` | enable docker backend, and `--docker.swarmmode` to enable the swarm mode on Træfik. |
|
||||
| `--web` | activate the webUI on port 8080 |
|
||||
|
||||
## Deploy your apps
|
||||
|
||||
|
||||
We can now deploy our app on the cluster, here [whoami](https://github.com/emilevauge/whoami), a simple web server in Go.
|
||||
We start 2 services, on the `traefik-net` network.
|
||||
|
||||
```shell
|
||||
docker-machine ssh manager "docker service create \
|
||||
|
@ -121,9 +123,12 @@ docker-machine ssh manager "docker service create \
|
|||
emilevauge/whoami"
|
||||
```
|
||||
|
||||
|
||||
!!! note
|
||||
We set whoami1 to use sticky sessions (`--label traefik.backend.loadbalancer.sticky=true`).
|
||||
We'll demonstrate that later.
|
||||
|
||||
|
||||
!!! note
|
||||
If using `docker stack deploy`, there is [a specific way that the labels must be defined in the docker-compose file](https://github.com/containous/traefik/issues/994#issuecomment-269095109).
|
||||
|
||||
Check that everything is scheduled and started:
|
||||
|
||||
|
@ -182,7 +187,8 @@ X-Forwarded-Proto: http
|
|||
X-Forwarded-Server: 8fbc39271b4c
|
||||
```
|
||||
|
||||
|
||||
!!! note
|
||||
As Træfik is published, you can access it from any machine and not only the manager.
|
||||
|
||||
```shell
|
||||
curl -H Host:whoami0.traefik http://$(docker-machine ip worker1)
|
||||
|
@ -231,11 +237,9 @@ X-Forwarded-Server: 8fbc39271b4c
|
|||
|
||||
```shell
|
||||
docker-machine ssh manager "docker service scale whoami0=5"
|
||||
|
||||
docker-machine ssh manager "docker service scale whoami1=5"
|
||||
```
|
||||
|
||||
|
||||
Check that we now have 5 replicas of each `whoami` service:
|
||||
|
||||
```shell
|
||||
|
@ -298,7 +302,9 @@ X-Forwarded-Host: 10.0.9.4:80
|
|||
X-Forwarded-Proto: http
|
||||
X-Forwarded-Server: 8fbc39271b4c
|
||||
```
|
||||
|
||||
|
||||
Wait, I thought we added the sticky flag to `whoami1`?
|
||||
Traefik relies on a cookie to maintain stickiness, so you'll need to test this with a browser.
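If you prefer the command line, a cookie jar lets `curl` reuse the sticky-session cookie between requests; a small sketch reusing the curl pattern from above:

```shell
# The first request stores the cookie; the second replays it and should hit the same backend:
curl -c /tmp/cookies.txt -H Host:whoami1.traefik http://$(docker-machine ip manager)
curl -b /tmp/cookies.txt -H Host:whoami1.traefik http://$(docker-machine ip manager)
```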
|
||||
|
||||
First you need to add `whoami1.traefik` to your hosts file:
|
||||
|
||||
|
|
|
@ -1,6 +1,7 @@
|
|||
# Swarm cluster
|
||||
|
||||
This section explains how to create a multi-host [swarm](https://docs.docker.com/swarm) cluster using [docker-machine](https://docs.docker.com/machine/) and how to deploy Træfik on it.
|
||||
|
||||
The cluster consists of:
|
||||
|
||||
- 2 servers
|
||||
|
@ -97,14 +98,17 @@ docker $(docker-machine config mhs-demo0) run \
|
|||
|
||||
Let's explain this command:
|
||||
|
||||
- `-p 80:80 -p 8080:8080`: we bind ports 80 and 8080
|
||||
- `--net=my-net`: run the container on the network my-net
|
||||
- `-v /var/lib/boot2docker/:/ssl`: mount the ssl keys generated by docker-machine
|
||||
- `-c /dev/null`: empty config file
|
||||
- `--docker`: enable docker backend
|
||||
- `--docker.endpoint=tcp://172.18.0.1:3376`: connect to the swarm master using the docker_gwbridge network
|
||||
- `--docker.tls`: enable TLS using the docker-machine keys
|
||||
- `--web`: activate the webUI on port 8080
|
||||
| Option | Description |
|
||||
|-------------------------------------------|---------------------------------------------------------------|
|
||||
| `-p 80:80 -p 8080:8080` | we bind ports 80 and 8080 |
|
||||
| `--net=my-net` | run the container on the network my-net |
|
||||
| `-v /var/lib/boot2docker/:/ssl` | mount the ssl keys generated by docker-machine |
|
||||
| `-c /dev/null` | empty config file |
|
||||
| `--docker` | enable docker backend |
|
||||
| `--docker.endpoint=tcp://172.18.0.1:3376` | connect to the swarm master using the docker_gwbridge network |
|
||||
| `--docker.tls` | enable TLS using the docker-machine keys |
|
||||
| `--web` | activate the webUI on port 8080 |
|
||||
|
||||
|
||||
## Deploy your apps
|
||||
|
||||
|
|