Merge tag 'v1.7.4' into master

Fernandez Ludovic 2018-10-30 12:34:00 +01:00
commit d3ae88f108
154 changed files with 4356 additions and 1285 deletions

# Clustering / High Availability on Docker Swarm with Consul
This guide explains how to use Traefik in high availability mode in a Docker Swarm and with Let's Encrypt.
Why do we need Traefik in cluster mode? Shouldn't running multiple instances work out of the box?
If you want to use Let's Encrypt with Traefik and share configuration or TLS certificates between many Traefik instances, you need to run Traefik in cluster/HA mode.
OK, could we just mount a shared volume used by all the instances? You could, but it will not work.
When you use Let's Encrypt, you need to store more than just certificates.
When Traefik generates a new certificate, it configures a challenge, and once Let's Encrypt has verified the ownership of the domain, it pings back the challenge.
If the challenge is not known by the other Traefik instances, the validation will fail.
For more information about the challenge: [Automatic Certificate Management Environment (ACME)](https://github.com/ietf-wg-acme/acme/blob/master/draft-ietf-acme-acme.md#http-challenge)
You will need a working Docker Swarm cluster.
## Traefik configuration
In this guide, we will not use a TOML configuration file, but only command line flags.
With that, we can use the base image without mounting a configuration file or building a custom image.
What Traefik should do:
- Listen on ports 80 and 443
- Redirect HTTP traffic to HTTPS
To enable Let's Encrypt support, you need to add the `--acme` flag.
Now, Traefik needs to know where to store the certificates; we can choose between a key in a Key-Value store or a file path: `--acme.storage=my/key` or `--acme.storage=/path/to/acme.json`.
The `acme.httpChallenge.entryPoint` flag enables the `HTTP-01` challenge and specifies the entryPoint to use during the challenges.
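Put together, the ACME part of the Traefik service's `command` could look like the following sketch (the e-mail address, entrypoint definitions and Consul storage key are placeholders, not the exact values used later in this guide):

```yaml
command:
  # entrypoints: listen on 80 and 443, redirect HTTP to HTTPS
  - "--defaultentrypoints=http,https"
  - "--entryPoints=Name:http Address::80 Redirect.EntryPoint:https"
  - "--entryPoints=Name:https Address::443 TLS"
  # Let's Encrypt: store certificates in the Key-Value store
  - "--acme"
  - "--acme.email=you@example.com"
  - "--acme.entryPoint=https"
  - "--acme.httpChallenge.entryPoint=http"
  - "--acme.storage=traefik/acme/account"
```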
## Migrate configuration to Consul
We created a special Traefik command to help configure your Key-Value store from a Traefik TOML configuration file and/or CLI flags.
## Deploy a Traefik cluster
The best way we found is to have an initializer service.
This service will push the config to Consul via the `storeconfig` sub-command.
The initializer is just another service in the docker-compose file: it runs the `storeconfig` sub-command and depends on Consul.
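A minimal sketch of such an initializer service (the image tag, Consul endpoint and extra flags are assumptions):

```yaml
traefik_init:
  image: traefik:1.7               # image tag is an assumption
  command:
    - "storeconfig"                # upload the static configuration and exit
    - "--acme"
    - "--acme.storage=traefik/acme/account"
    - "--consul"
    - "--consul.endpoint=consul:8500"
  depends_on:
    - consul
```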
And now, the Traefik part will only have the Consul configuration.
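For instance, a sketch of that service (published ports, socket mount and image tag are assumptions, not the guide's exact values):

```yaml
traefik:
  image: traefik:1.7               # image tag is an assumption
  command:
    - "--consul"                   # the whole static configuration is read from Consul
    - "--consul.endpoint=consul:8500"
  ports:
    - "80:80"
    - "443:443"
  volumes:
    - /var/run/docker.sock:/var/run/docker.sock
  depends_on:
    - traefik_init
    - consul
```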
!!! note
For Traefik <1.5.0, add `acme.storage=traefik/acme/account` because Traefik does not read it from Consul.
If you need to change the configuration, update the initializer service and re-deploy it.
The new configuration will be stored in Consul, and you need to restart the Traefik node: `docker service update --force traefik_traefik`.
## Full docker-compose file

# Clustering / High Availability (beta)
This guide explains how to use Traefik in high availability mode.
In order to deploy and configure multiple Traefik instances, without copying the same configuration file on each instance, we will use a distributed Key-Value store.
## Prerequisites
_(Currently, we recommend [Consul](https://consul.io).)_
## File configuration to KV store migration
We created a special Traefik command to help configure your Key-Value store from a Traefik TOML configuration file.
Please refer to [this section](/user-guide/kv-config/#store-configuration-in-key-value-store) to get more details.
## Deploy a Traefik cluster
Once your Traefik configuration is uploaded on your KV store, you can start each Traefik instance.
A Traefik cluster is based on a manager/worker model.
When starting, Traefik will elect a manager.
If this instance fails, another manager will be automatically elected.
## Traefik cluster and Let's Encrypt
**In cluster mode, ACME certificates have to be stored in [a KV Store entry](/configuration/acme/#as-a-key-value-store-entry).**
Thanks to the Traefik cluster mode algorithm (based on [the Raft Consensus Algorithm](https://raft.github.io/)), only one instance will contact Let's Encrypt to solve the challenges.
The other instances will get the ACME certificates from the KV Store entry.

# Let's Encrypt & Docker
In this use case, we want to use Traefik as a _layer-7_ load balancer with SSL termination for a set of micro-services used to run a web application.
We also want to automatically _discover any services_ on the Docker host and let Traefik reconfigure itself automatically when containers get created (or shut down) so HTTP traffic can be routed accordingly.
In addition, we want to use Let's Encrypt to automatically generate and renew SSL certificates per hostname.
In real life, you'll want to use your own domain and have the DNS configured accordingly.
Docker containers can only communicate with each other over TCP when they share at least one network.
This makes sense from a topological point of view in the context of networking, since Docker under the hood creates IPTable rules so containers can't reach other containers _unless you'd want to_.
In this example, we're going to use a single network called `web` where all containers that are handling HTTP traffic (including Traefik) will reside.
On the Docker host, run the following command:
```shell
docker network create web
```
Now, let's create a directory on the server where we will configure the rest of Traefik:
```shell
mkdir -p /opt/traefik
touch /opt/traefik/acme.json && chmod 600 /opt/traefik/acme.json
touch /opt/traefik/traefik.toml
```
The `docker-compose.yml` file will provide us with a simple, consistent and, more importantly, deterministic way to create Traefik.
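A minimal sketch of such a `docker-compose.yml` (the image tag and in-container mount paths are assumptions):

```yaml
version: '3'

services:
  traefik:
    image: traefik:1.7                 # image tag is an assumption
    restart: always                    # restart on problems or after a reboot
    container_name: traefik
    ports:
      - "80:80"                        # HTTP
      - "443:443"                      # HTTPS
    networks:
      - web
    volumes:
      - /var/run/docker.sock:/var/run/docker.sock
      - /opt/traefik/traefik.toml:/traefik.toml
      - /opt/traefik/acme.json:/acme.json

networks:
  web:
    external: true
```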
As you can see, we're mounting the `traefik.toml` file as well as the (empty) `acme.json` file in the container.
We're also mounting the `/var/run/docker.sock` Docker socket in the container, so Traefik can listen to Docker events and reconfigure its own internal configuration when containers are created (or shut down).
Also, we're making sure the container is automatically restarted by the Docker engine in case of problems (or if the server is rebooted).
We're publishing the default HTTP ports `80` and `443` on the host, and making sure the container is placed within the `web` network we've created earlier on.
Finally, we're giving this container a static name called `traefik`.
Let's take a look at a simple `traefik.toml` configuration as well, before we create the Traefik container.
This is the minimum configuration required to do the following:
- Log `ERROR`-level messages (or more severe) to the console, but silence `DEBUG`-level messages
- Check for new versions of Traefik periodically
- Create two entry points, namely an `HTTP` endpoint on port `80`, and an `HTTPS` endpoint on port `443` where all incoming traffic on port `80` will immediately get redirected to `HTTPS`.
- Enable the Docker provider and listen for container events on the Docker unix socket we've mounted earlier. However, **new containers will not be exposed by Traefik by default; we'll get into this in a bit!**
- Enable automatic request and configuration of SSL certificates using Let's Encrypt.
These certificates will be stored in the `acme.json` file, which you can back up yourself and store off-premises.
Alright, let's boot the container. From the `/opt/traefik` directory, run `docker-compose up -d` which will create and start the Traefik container.
## Exposing Web Services to the Outside World
Now that we've fully configured and started Traefik, it's time to get our applications running!
Let's take a simple example of a micro-service project consisting of various services, where some will be exposed to the outside world and some will not.
### Labels
As mentioned earlier, we don't want containers exposed automatically by Traefik.
The reason behind this is simple: we want to have control over this process ourselves.
Thanks to Docker labels, we can tell Traefik how to create its internal routing configuration.
Let's take a look at the labels themselves for the `app` service, which is an HTTP web service listening on port 9000.
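A compose service carrying the labels described below could look like this sketch (the image name is hypothetical):

```yaml
app:
  image: my-awesome-app              # hypothetical image
  networks:
    - web
  labels:
    - "traefik.backend=app"
    - "traefik.docker.network=web"
    - "traefik.enable=true"
    - "traefik.frontend.rule=Host:app.my-awesome-app.org"
    - "traefik.port=9000"
    - "traefik.basic.protocol=http"
    - "traefik.admin.frontend.rule=Host:admin-app.my-awesome-app.org"
    - "traefik.admin.protocol=https"
    - "traefik.admin.port=9443"
```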
We use both `container labels` and `service labels`.
First, we specify the `backend` name which corresponds to the actual service we're routing **to**.
We also tell Traefik to use the `web` network to route HTTP traffic to this container.
With the `traefik.enable` label, we tell Traefik to include this container in its internal configuration.
With the `frontend.rule` label, we tell Traefik that we want to route to this container if the incoming HTTP request contains the `Host` `app.my-awesome-app.org`.
Essentially, this is the actual rule used for Layer-7 load balancing.
Finally, and importantly, we tell Traefik to route **to** port `9000`, since that is the TCP/IP port the container actually listens on.
### Service labels
In the example, two service names are defined: `basic` and `admin`.
They allow creating two frontends and two backends.
- `basic` has only one `service label`: `traefik.basic.protocol`.
Traefik will use values set in `traefik.frontend.rule` and `traefik.port` to create the `basic` frontend and backend.
The frontend listens for incoming HTTP requests which contain the `Host` `app.my-awesome-app.org` and forwards them over `HTTP` to port `9000` of the backend.
- `admin` has all the `service labels` needed to create the `admin` frontend and backend (`traefik.admin.frontend.rule`, `traefik.admin.protocol`, `traefik.admin.port`).
Traefik will create a frontend that listens for incoming HTTP requests which contain the `Host` `admin-app.my-awesome-app.org` and forwards them over `HTTPS` to port `9443` of the backend.
#### Gotchas and tips
- Always specify the correct port where the container expects HTTP traffic using the `traefik.port` label.
If a container exposes multiple ports, Traefik may forward traffic to the wrong port.
Even if a container only exposes one port, you should always write configuration defensively and explicitly.
- Should you choose to enable the `exposedByDefault` flag in the `traefik.toml` configuration, be aware that all containers that are placed in the same network as Traefik will automatically be reachable from the outside world, for everyone to see.
Usually, this is a bad idea.
- With the `traefik.frontend.auth.basic` label, it's possible for Traefik to provide an HTTP basic-auth challenge for the endpoints you provide the label for.
- Traefik has built-in support to automatically export [Prometheus](https://prometheus.io) metrics.
- Traefik supports websockets out of the box. In the example above, the `events` service could be a NodeJS-based application which allows clients to connect using the websocket protocol.
Thanks to the fact that HTTPS in our example is enforced, these websockets are automatically secure as well (WSS).
### Final thoughts
Using Traefik as a Layer-7 load balancer in combination with both Docker and Let's Encrypt provides you with an extremely flexible, powerful and self-configuring solution for your projects.
With Let's Encrypt, your endpoints are automatically secured with production-ready SSL certificates that are renewed automatically as well.

# Examples
You will find here some configuration examples of Traefik.
## HTTP only
This configuration allows generating Let's Encrypt certificates (thanks to the `HTTP-01` challenge) for the four domains `local[1-4].com` with the described SANs.
Traefik generates these certificates when it starts, and it needs to be restarted if new domains are added.
### onHostRule option (with HTTP challenge)
This configuration allows generating Let's Encrypt certificates (thanks to the `HTTP-01` challenge) for the four domains `local[1-4].com`.
Traefik generates these certificates when it starts.
If a backend is added with an `onHost` rule, Traefik will automatically generate the Let's Encrypt certificate for the new domain (for frontends wired on the `acme.entryPoint`).
### OnDemand option (with HTTP challenge)
The DNS challenge needs environment variables to be executed.
These variables have to be set on the machine/container that hosts Traefik.
These variables are described [in this section](/configuration/acme/#provider).
The DNS challenge needs environment variables to be executed.
These variables have to be set on the machine/container that hosts Traefik.
These variables are described [in this section](/configuration/acme/#provider).
Traefik will only try to generate a Let's Encrypt certificate (thanks to the `HTTP-01` challenge) if the domain cannot be matched by the provided certificates.
### Cluster mode

This section explains how to use Traefik as a reverse proxy for a gRPC application.
### Traefik configuration
Finally, we configure our Traefik instance to use both self-signed certificates.
### Conclusion
We don't need any specific configuration to use gRPC in Traefik; we just need to use the `h2c` protocol, or use HTTPS communications to have HTTP/2 with the backend.
## With HTTPS
### Traefik configuration
Finally, we configure our Traefik instance to use both self-signed certificates.
Next, we will modify the gRPC client to use our Traefik self-signed certificate.

# Kubernetes Ingress Controller
This guide explains how to use Traefik as an Ingress controller for a Kubernetes cluster.
If you are not familiar with Ingresses in Kubernetes, you might want to read the [Kubernetes user guide](https://kubernetes.io/docs/concepts/services-networking/ingress/)
The config files used in this guide can be found in the [examples directory](https://github.com/containous/traefik/tree/master/examples/k8s).
Kubernetes introduces [Role Based Access Control (RBAC)](https://kubernetes.io/docs/reference/access-authn-authz/rbac/) in 1.6+ to allow fine-grained control of Kubernetes resources and API.
If your cluster is configured with RBAC, you will need to authorize Traefik to use the Kubernetes API. There are two ways to set up the proper permissions: via namespace-specific RoleBindings or a single, global ClusterRoleBinding.
RoleBindings per namespace restrict the granted permissions to only those namespaces that Traefik is watching, thereby following the principle of least privilege. This is the preferred approach if Traefik is not supposed to watch all namespaces, and the set of namespaces does not change dynamically. Otherwise, a single ClusterRoleBinding must be employed.
!!! note
RoleBindings per namespace are available in Traefik 1.5 and later. Please use ClusterRoleBindings for older versions.
For the sake of simplicity, this guide will use a ClusterRoleBinding:
```shell
kubectl apply -f https://raw.githubusercontent.com/containous/traefik/master/examples/k8s/traefik-rbac.yaml
```
For namespaced restrictions, one RoleBinding is required per watched namespace along with a corresponding configuration of Traefik's `kubernetes.namespaces` parameter.
## Deploy Traefik using a Deployment or DaemonSet
It is possible to use Traefik with a [Deployment](https://kubernetes.io/docs/concepts/workloads/controllers/deployment/) or a [DaemonSet](https://kubernetes.io/docs/concepts/workloads/controllers/daemonset/) object,
whereas both options have their own pros and cons:
- The scalability can be much better when using a Deployment, because you will have a Single-Pod-per-Node model when using a DaemonSet, whereas you may need fewer replicas based on your environment when using a Deployment.
!!! note
This will create a DaemonSet that uses privileged ports 80/8080 on the host. This may not work on all providers, but illustrates the static (non-NodePort) hostPort binding. The `traefik-ingress-service` can still be used inside the cluster to access the DaemonSet pods.
To deploy Traefik to your cluster start by submitting one of the YAML files to the cluster with `kubectl`:
```shell
kubectl apply -f https://raw.githubusercontent.com/containous/traefik/master/examples/k8s/traefik-deployment.yaml
```
You should see that after submitting the Deployment or DaemonSet to Kubernetes it has launched a Pod, and it is now running.
_It might take a few moments for Kubernetes to pull the Traefik image and start the container._
!!! note
You could also check the deployment with the Kubernetes dashboard, run
`minikube dashboard` to open it in your browser, then choose the `kube-system`
namespace from the menu at the top right of the screen.
You should now be able to access Traefik on port 80 of your Minikube instance when using the DaemonSet:
```shell
curl $(minikube ip)
```
!!! note
We expect to see a 404 response here as we haven't yet given Traefik any configuration.
All further examples below assume a DaemonSet installation. Deployment users will need to append the NodePort when constructing requests.
## Deploy Traefik using Helm Chart
!!! note
The Helm Chart is maintained by the community, not the Traefik project maintainers.
Instead of installing Traefik via Kubernetes object directly, you can also use the Traefik Helm chart.
Install the Traefik chart by:
```shell
helm install stable/traefik
```
Install the Traefik chart using a values.yaml file:
```shell
helm install --values values.yaml stable/traefik
```

For more information, check out the chart's documentation.
## Submitting an Ingress to the Cluster
Let's start by creating a Service and an Ingress that will expose the [Traefik Web UI](https://github.com/containous/traefik#web-ui).
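A sketch of these two objects (the namespace, object names, selector labels and ports are assumptions; adapt them to your Traefik deployment):

```yaml
apiVersion: v1
kind: Service
metadata:
  name: traefik-web-ui              # name is an assumption
  namespace: kube-system            # namespace is an assumption
spec:
  selector:
    k8s-app: traefik-ingress-lb     # must match the labels of your Traefik pods
  ports:
    - name: web
      port: 80
      targetPort: 8080              # port of the Traefik web UI
---
apiVersion: extensions/v1beta1
kind: Ingress
metadata:
  name: traefik-web-ui
  namespace: kube-system
spec:
  rules:
    - host: traefik-ui.minikube
      http:
        paths:
          - path: /
            backend:
              serviceName: traefik-web-ui
              servicePort: web
```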
You can get the IP address of your minikube instance by running `minikube ip`:

```shell
echo "$(minikube ip) traefik-ui.minikube" | sudo tee -a /etc/hosts
```
We should now be able to visit [traefik-ui.minikube](http://traefik-ui.minikube) in the browser and view the Traefik web UI.
### Add a TLS Certificate to the Ingress
## Basic Authentication
It's possible to protect access to Traefik through basic authentication. (See the [Kubernetes Ingress](/configuration/backends/kubernetes) configuration page for syntactical details and restrictions.)
### Creating the Secret
```shell
kubectl apply -f https://raw.githubusercontent.com/containous/traefik/master/examples/k8s/cheese-ingress.yaml
```

Now visit the [Traefik dashboard](http://traefik-ui.minikube/) and you should see a frontend for each host, along with a backend listing for each service, with a server set up for each pod.
If you edit your `/etc/hosts` again you should be able to access the cheese websites in your browser.
[examples/k8s/cheeses-ingress.yaml](https://github.com/containous/traefik/tree/master/examples/k8s/cheeses-ingress.yaml)
!!! note
We are configuring Traefik to strip the prefix from the URL path with the `traefik.frontend.rule.type` annotation so that we can use the containers from the previous example without modification.
```shell
kubectl apply -f https://raw.githubusercontent.com/containous/traefik/master/examples/k8s/cheeses-ingress.yaml
```

You should now be able to visit the websites in your browser.
## Multiple Ingress Definitions for the Same Host (or Host+Path)
Traefik will merge multiple Ingress definitions for the same host/path pair into one definition.
Let's say the number of cheese services is growing.
It is now time to move the cheese services to a dedicated cheese namespace to simplify the management of cheese and non-cheese services.
Traefik will now look for cheddar service endpoints (ports on healthy pods) in both the cheese and the default namespace.
Deploying cheddar into the cheese namespace and afterwards shutting down cheddar in the default namespace is enough to migrate the traffic.
## Forwarding to ExternalNames
When specifying an [ExternalName](https://kubernetes.io/docs/concepts/services-networking/service/#services-without-selectors),
Traefik will forward requests to the given host accordingly and use HTTPS when the Service port matches 443.
This still requires setting up a proper port mapping on the Service from the Ingress port to the (external) Service port.
## Disable passing the Host Header
By default Traefik will pass the incoming Host header to the upstream resource.
However, there are times when you may not want this to be the case, for example if your service is of the ExternalName type.
## Partitioning the Ingress object space
By default, Traefik processes every Ingress object it observes. At times, however, it may be desirable to ignore certain objects. The following sub-sections describe common use cases and how they can be handled with Traefik.
### Between Traefik and other Ingress controller implementations
Sometimes Traefik runs alongside other Ingress controller implementations. One such example is when both Traefik and a cloud provider Ingress controller are active.
The `kubernetes.io/ingress.class` annotation can be attached to any Ingress object in order to control whether Traefik should handle it.
If the annotation is missing, contains an empty value, or the value `traefik`, then the Traefik controller will take responsibility and process the associated Ingress object.
It is also possible to set the `ingressClass` option in Traefik to a particular value. Traefik will only process matching Ingress objects.
For instance, setting the option to `traefik-internal` causes Traefik to process Ingress objects with the same `kubernetes.io/ingress.class` annotation value, ignoring all other objects (including those with a `traefik` value, empty value, and missing annotation).
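For example, an Ingress that should only be handled by Traefik can be annotated like this (object name, host and backend are illustrative):

```yaml
apiVersion: extensions/v1beta1
kind: Ingress
metadata:
  name: my-app                            # illustrative name
  annotations:
    kubernetes.io/ingress.class: traefik  # handled by the Traefik controller
spec:
  rules:
    - host: app.example.org               # illustrative host
      http:
        paths:
          - backend:
              serviceName: my-app         # illustrative backend service
              servicePort: 80
```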
!!! note
Letting multiple ingress controllers handle the same ingress objects can lead to unintended behavior.
It is recommended to prefix all ingressClass values with `traefik` to avoid unintended collisions with other ingress implementations.
### Between multiple Traefik Deployments
Sometimes multiple Traefik Deployments are supposed to run concurrently.
For instance, it is conceivable to have one Deployment deal with internal and another one with external traffic.
For such cases, it is advisable to classify Ingress objects through a label and configure the `labelSelector` option per each Traefik Deployment accordingly.
To stick with the internal/external example above, all Ingress objects meant for internal traffic could receive a `traffic-type: internal` label while objects designated for external traffic receive a `traffic-type: external` label.
The label selectors on the Traefik Deployments would then be `traffic-type=internal` and `traffic-type=external`, respectively.
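Continuing that example, an Ingress meant for the internal Traefik Deployment would simply carry the label (all names are illustrative), while that Deployment's Traefik instance is configured with a matching `labelSelector` such as `traffic-type=internal`:

```yaml
apiVersion: extensions/v1beta1
kind: Ingress
metadata:
  name: internal-dashboard                 # illustrative name
  labels:
    traffic-type: internal
spec:
  rules:
    - host: dashboard.internal.example.org # illustrative host
      http:
        paths:
          - backend:
              serviceName: dashboard       # illustrative backend service
              servicePort: 80
```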
## Traffic Splitting
It is possible to split Ingress traffic in a fine-grained manner between multiple deployments using _service weights_.
One canonical use case is canary releases where a deployment representing a newer release is to receive an initially small but ever-increasing fraction of the requests over time.
The way this can be done in Traefik is to specify a percentage of requests that should go into each deployment.
For instance, say that an application `my-app` runs in version 1.
A newer version 2 is about to be released, but confidence in the robustness and reliability of new version running in production can only be gained gradually.
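A sketch of such an annotated Ingress (host, ports and object names are illustrative):

```yaml
apiVersion: extensions/v1beta1
kind: Ingress
metadata:
  name: my-app                             # illustrative name
  annotations:
    traefik.ingress.kubernetes.io/service-weights: |
      my-app: 99%
      my-app-canary: 1%
spec:
  rules:
    - host: my-app.example.org             # illustrative host
      http:
        paths:
          - path: /
            backend:
              serviceName: my-app
              servicePort: 80              # illustrative port
          - path: /
            backend:
              serviceName: my-app-canary
              servicePort: 80
```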
Take note of the `traefik.ingress.kubernetes.io/service-weights` annotation: it specifies the distribution of requests among the referenced backend services, `my-app` and `my-app-canary`.
With this definition, Traefik will route 99% of the requests to the pods backed by the `my-app` deployment, and 1% to those backed by `my-app-canary`.
With this definition, Traefik will route 99% of the requests to the pods backed by the `my-app` deployment, and 1% to those backed by `my-app-canary`.
Over time, the ratio may slowly shift towards the canary deployment until it is deemed to replace the previous main application, in steps such as 5%/95%, 10%/90%, 50%/50%, and finally 100%/0%.
A few conditions must hold for service weights to be applied correctly.
The examples shown deliberately do not specify any resource limitations.
In a production environment, however, it is important to set proper bounds, especially with regards to CPU:
- too strict and Traefik will be throttled while serving requests (as Kubernetes imposes hard quotas)
- too loose and Traefik may waste resources not available for other containers
When in doubt, you should measure your resource needs, and adjust requests and limits accordingly.

Both [static global configuration](/user-guide/kv-config/#static-configuration-in-key-value-store) and [dynamic](/user-guide/kv-config/#dynamic-configuration-in-key-value-store) configuration can be stored in a Key-value store.
This section explains how to launch Traefik using a configuration loaded from a Key-value store.
Traefik supports several Key-value stores:
- [Consul](https://consul.io)
- [etcd](https://coreos.com/etcd/)
We will see the steps to set it up with an easy example.
### docker-compose file for Consul
The Traefik global configuration will be retrieved from a [Consul](https://consul.io) store.
First we have to launch Consul in a container.
The [docker-compose file](https://docs.docker.com/compose/compose-file/) allows us to launch Consul and four instances of the trivial app [containous/whoami](https://github.com/containous/whoami):
```yaml
consul:
  # ... (image, command, ports and other expose entries omitted)
  expose:
    - "8302/udp"

whoami1:
  image: containous/whoami

whoami2:
  image: containous/whoami

whoami3:
  image: containous/whoami

whoami4:
  image: containous/whoami
```
### Upload the configuration in the Key-value store
We should now fill the store with the Traefik global configuration.
To do that, we can send the Key-value pairs via [curl commands](https://www.consul.io/intro/getting-started/kv.html) or via the [Web UI](https://www.consul.io/intro/getting-started/ui.html).
Fortunately, Traefik allows automation of this process using the `storeconfig` subcommand.
Please refer to the [store Traefik configuration](/user-guide/kv-config/#store-configuration-in-key-value-store) section to get documentation on it.
The configuration we would like to store in the Key-value Store is a standard static Traefik TOML configuration.
In case you are setting key values manually, note that we can either give the path to a certificate file or directly the file content itself.
### Launch Traefik
We will now launch Traefik in a container.
We use CLI flags to set up the connection between Traefik and Consul.
All the rest of the global configuration is stored in Consul.
Here is the [docker-compose file](https://docs.docker.com/compose/compose-file/) for Traefik itself.
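A sketch of its Traefik service (image tag, published ports and key prefix are assumptions):

```yaml
traefik:
  image: traefik:1.7                       # image tag is an assumption
  command:
    - "--consul"
    - "--consul.endpoint=consul:8500"
    - "--consul.prefix=traefik"
  ports:
    - "80:80"
    - "8080:8080"
```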
The `CONSUL_HTTP_TOKEN` environment variable must be initialized with the ACL token value. If Traefik is launched into a Docker container, the variable can be initialized with the `-e` Docker option: `-e "CONSUL_HTTP_TOKEN=[consul-acl-token-value]"`
If a Consul ACL is used to restrict Traefik read/write access, a corresponding ACL configuration (for example in HCL format) is needed.
So far, only [Consul](https://consul.io) and [etcd](https://coreos.com/etcd/) support TLS connections.
To set it up, we should enable [consul security](https://www.consul.io/docs/internals/security.html) (or [etcd security](https://coreos.com/etcd/docs/latest/security.html)).
Then, we have to provide the CA, Cert and Key to Traefik using the `consul` flags:
- `--consul.tls`
- `--consul.tls.ca=path/to/the/file`
Remember the command `traefik --help` to display the updated list of flags.
## Dynamic configuration in Key-value store
Following our example, we will provide backends/frontends rules and HTTPS certificates to Traefik.
!!! note
This section is independent of the way Traefik got its static configuration.
It means that the static configuration can either come from the same Key-value store or from any other source.
### Key-value storage structure
### Atomic configuration changes
Traefik can watch the backends/frontends configuration changes and generate its configuration automatically.
!!! note
Only the backends/frontends rules are dynamic; the rest of the Traefik configuration stays static.
The [Etcd](https://github.com/coreos/etcd/issues/860) and [Consul](https://github.com/hashicorp/consul/issues/886) backends do not support updating multiple keys atomically.
As a result, it may be possible for Traefik to read an intermediate configuration state despite judicious use of the `--providersThrottleDuration` flag.
To solve this problem, Traefik supports a special key called `/traefik/alias`.
If set, Traefik uses the value as an alternative key prefix.
Given the key structure below, Traefik will use the `http://172.17.0.2:80` as its only backend (frontend keys have been omitted for brevity).
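A minimal layout consistent with that description (the configuration name `1` is illustrative) would be:

| Key                                                                | Value                       |
|--------------------------------------------------------------------|-----------------------------|
| `/traefik/alias`                                                   | `/traefik_configurations/1` |
| `/traefik_configurations/1/backends/backend1/servers/server1/url`  | `http://172.17.0.2:80`      |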
!!! note
Traefik *will not watch for key changes in the `/traefik_configurations` prefix*. It will only watch for changes in the `/traefik/alias`.
Further, if the `/traefik/alias` key is set, all other configuration keys with the `/traefik/backends` or `/traefik/frontends` prefix are ignored.
## Store configuration in Key-value store
!!! note
Don't forget to [set up the connection between Traefik and the Key-value store](/user-guide/kv-config/#launch-traefik).
The static Traefik configuration in a key-value store can be automatically created and updated, using the [`storeconfig` subcommand](/basics/#commands).
```bash
traefik storeconfig [flags] ...
```
This command is here only to automate the [process which uploads the configuration into the Key-value store](/user-guide/kv-config/#upload-the-configuration-in-the-key-value-store).
Traefik will not start but the [static configuration](/basics/#static-traefik-configuration) will be uploaded into the Key-value store.
If you configured ACME (Let's Encrypt), your registration account and your certificates will also be uploaded.

# Docker Swarm (mode) cluster
This section explains how to create a multi-host docker cluster with swarm mode using [docker-machine](https://docs.docker.com/machine) and how to deploy Traefik on it.
The cluster consists of a manager node and worker nodes created with `docker-machine`.
Finally, let's create a network for Traefik to use.
```shell
docker-machine ssh manager "docker network create --driver=overlay traefik-net"
```
## Deploy Traefik
Let's deploy Traefik as a docker service in our cluster.
The only requirement for Traefik to work with swarm mode is that it needs to run on a manager node - we are going to use a [constraint](https://docs.docker.com/engine/reference/commandline/service_create/#/specify-service-constraints-constraint) for that.
Traefik is deployed with `docker service create`, run on the manager via `docker-machine ssh manager`. Let's explain the main options of this command:
| Option | Description |
|-----------------------------------------------------------------------------|------------------------------------------------------------------------------------------------|
| `--publish 80:80 --publish 8080:8080` | we publish port `80` and `8080` on the cluster. |
| `--constraint=node.role==manager` | we ask docker to schedule Traefik on a manager node. |
| `--mount type=bind,source=/var/run/docker.sock,target=/var/run/docker.sock` | we bind mount the docker socket where Traefik is scheduled to be able to speak to the daemon. |
| `--network traefik-net` | we attach the Traefik service (and thus the underlying container) to the `traefik-net` network. |
| `--docker` | enable docker provider, and `--docker.swarmMode` to enable the swarm mode on Traefik. |
| `--api` | activate the webUI on port 8080 |
## Deploy your apps
We can now deploy our app on the cluster, here [whoami](https://github.com/containous/whoami), a simple web server in Go.
We start 2 services, on the `traefik-net` network.
```shell
docker-machine ssh manager "docker service create \
    --name whoami0 \
    --label traefik.port=80 \
    --network traefik-net \
    containous/whoami"

docker-machine ssh manager "docker service create \
    --name whoami1 \
    --label traefik.port=80 \
    --network traefik-net \
    containous/whoami"
```
```shell
docker-machine ssh manager "docker service ls"
```

```
ID NAME MODE REPLICAS IMAGE PORTS
moq3dq4xqv6t traefik replicated 1/1 traefik:latest *:80->80/tcp,*:8080->8080/tcp
ysil6oto1wim whoami0 replicated 1/1 containous/whoami:latest
z9re2mnl34k4 whoami1 replicated 1/1 containous/whoami:latest
```
## Access to your apps through Traefik
```shell
curl -H Host:whoami0.traefik http://$(docker-machine ip manager)
```
!!! note
As Traefik is published, you can access it from any machine and not only the manager.
```shell
curl -H Host:whoami0.traefik http://$(docker-machine ip worker1)
```

```shell
docker-machine ssh manager "docker service ls"
```

```
ID NAME MODE REPLICAS IMAGE PORTS
moq3dq4xqv6t traefik replicated 1/1 traefik:latest *:80->80/tcp,*:8080->8080/tcp
ysil6oto1wim whoami0 replicated 5/5 containous/whoami:latest
z9re2mnl34k4 whoami1 replicated 5/5 containous/whoami:latest
```
## Access to your `whoami0` through Traefik multiple times

Repeat the `curl` command above multiple times and note that the Hostname changes each time, as Traefik load balances each request against the 5 tasks.

# Swarm cluster
This section explains how to create a multi-host [swarm](https://docs.docker.com/swarm) cluster using [docker-machine](https://docs.docker.com/machine/) and how to deploy Traefik on it.
The cluster consists of `docker-machine` provisioned hosts (such as `mhs-demo0` and `mhs-demo1`).

```shell
eval $(docker-machine env --swarm mhs-demo0)
docker network create --driver overlay --subnet=10.0.9.0/24 my-net
```
## Deploy Traefik
Deploy Traefik on the Swarm master with `docker $(docker-machine config mhs-demo0) run ...`, publishing ports 80 and 8080 as shown in the `docker ps` output below.
## Deploy your apps
We can now deploy our app on the cluster, here [whoami](https://github.com/containous/whoami), a simple web server in Go, on the network `my-net`:
```shell
eval $(docker-machine env --swarm mhs-demo0)
docker run -d --name=whoami0 --net=my-net --env="constraint:node==mhs-demo0" containous/whoami
docker run -d --name=whoami1 --net=my-net --env="constraint:node==mhs-demo1" containous/whoami
```
Check that everything is started:
```shell
docker ps
```
```
CONTAINER ID IMAGE COMMAND CREATED STATUS PORTS NAMES
ba2c21488299 containous/whoami "/whoamI" 8 seconds ago Up 9 seconds 80/tcp mhs-demo1/whoami1
8147a7746e7a containous/whoami "/whoamI" 19 seconds ago Up 20 seconds 80/tcp mhs-demo0/whoami0
8fbc39271b4c traefik "/traefik -l DEBUG -c" 36 seconds ago Up 37 seconds 192.168.99.101:80->80/tcp, 192.168.99.101:8080->8080/tcp mhs-demo0/serene_bhabha
```
## Access to your apps through Traefik
```shell
curl -H Host:whoami0.traefik http://$(docker-machine ip mhs-demo0)
```