doc: structural review

- user-guide review.
- add DataDog and StatsD configuration.
- sync sample.toml and doc.
- split entry points doc.
- Deprecated.
This commit is contained in:
Fernandez Ludovic 2017-08-26 12:12:44 +02:00 committed by Traefiker
parent 24862402e5
commit 7c2ba62b56
32 changed files with 848 additions and 695 deletions

View file

@ -16,5 +16,6 @@ Please refer to [this section](/user-guide/kv-config/#store-configuration-in-key
Once your Træfik configuration is uploaded to your KV store, you can start each Træfik instance.
A Træfik cluster is based on a manager/worker model.
When starting, Træfik will elect a manager. If this instance fails, another manager will be automatically elected.
When starting, Træfik will elect a manager.
If this instance fails, another manager will be automatically elected.

View file

@ -1,4 +1,3 @@
# Examples
Here you will find some configuration examples for Træfik.

View file

@ -1,11 +1,20 @@
# Docker & Traefik
In this use case, we want to use Traefik as a *layer-7* load balancer with SSL termination for a set of microservices used to run a webapplication. We also want to automatically *discover any services* on the Docker host and let Traefik reconfigure itself automatically when containers get created (or shut down) so HTTP traffic can be routed accordingly. In addition, we want to use Let's Encrypt to automatically generate and renew SSL certificates per hostname.
## Setting up
In order for this to work, you'll need a server with a public IP address, with Docker installed on it. In this example, we're using the fictitious domain *my-awesome-app.org*. In real-life, you'll want to use your own domain and have the DNS configured accordingly so the hostname records you'll want to use point to the aforementioned public IP address.
In this use case, we want to use Traefik as a _layer-7_ load balancer with SSL termination for a set of micro-services used to run a web application.
We also want to automatically _discover any services_ on the Docker host and let Traefik reconfigure itself automatically when containers get created (or shut down) so HTTP traffic can be routed accordingly.
In addition, we want to use Let's Encrypt to automatically generate and renew SSL certificates per hostname.
## Setting Up
In order for this to work, you'll need a server with a public IP address, with Docker installed on it.
In this example, we're using the fictitious domain _my-awesome-app.org_.
In real life, you'll want to use your own domain and have the DNS configured accordingly so the hostname records you'll want to use point to the aforementioned public IP address.
## Networking
Docker containers can only communicate with each other over TCP when they share at least one network. This makes sense from a topological point of view in the context of networking, since Docker under the hood creates IPTable rules so containers can't reach other containers *unless you'd want to*. In this example, we're going to use a single network called `web` where all containers that are handling HTTP traffic (including Traefik) will reside in.
Docker containers can only communicate with each other over TCP when they share at least one network.
This makes sense from a topological point of view in the context of networking, since under the hood Docker creates iptables rules so containers can't reach other containers _unless you'd want to_.
In this example, we're going to use a single network called `web` where all containers that are handling HTTP traffic (including Traefik) will reside.
On the Docker host, run the following command:
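The command itself lies outside this hunk; for context, creating such a network is a one-liner (a sketch, assuming the network is simply named `web` as above):

```shell
# Create the shared network that Traefik and the web-facing containers will join.
docker network create web
```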
@ -23,7 +32,8 @@ $ touch /opt/traefik/acme.json && chmod 600 /opt/traefik/acme.json
$ touch /opt/traefik/traefik.toml
```
The `docker-compose.yml` file will provide us with a simple, consistent and more importantly, a deterministic way to create Traefik. The contents of the file is as follows:
The `docker-compose.yml` file will provide us with a simple, consistent and more importantly, a deterministic way to create Traefik.
The contents of the file are as follows:
```yaml
version: '2'
@ -48,8 +58,11 @@ networks:
external: true
```
As you can see, we're mounting the `traefik.toml` file as well as the (empty) `acme.json` file in the container. Also, we're mounting the `/var/run/docker.sock` Docker socket in the container as well, so Traefik can listen to Docker events and reconfigure it's own internal configuration when containers are created (or shut down).
Also, we're making sure the container is automatically restarted by the Docker engine in case of problems (or: if the server is rebooted). We're publishing the default HTTP ports `80` and `443` on the host, and making sure the container is placed within the `web` network we've created earlier on. Finally, we're giving this container a static name called `traefik`.
As you can see, we're mounting the `traefik.toml` file as well as the (empty) `acme.json` file in the container.
We're also mounting the `/var/run/docker.sock` Docker socket in the container, so Traefik can listen to Docker events and reconfigure its own internal configuration when containers are created (or shut down).
We also make sure the container is automatically restarted by the Docker engine in case of problems (or if the server is rebooted).
We're publishing the default HTTP ports `80` and `443` on the host, and making sure the container is placed within the `web` network we created earlier.
Finally, we're giving this container the static name `traefik`.
Let's take a look at a simple `traefik.toml` configuration as well, before we create the Traefik container:
@ -87,16 +100,17 @@ This is the minimum configuration required to do the following:
- Log `ERROR`-level messages (or more severe) to the console, but silence `DEBUG`-level messages
- Check for new versions of Traefik periodically
- Create two entrypoints, namely an `HTTP` endpoint on port `80`, and an `HTTPS` endpoint on port `443` where all incoming traffic on port `80` will immediately get redirected to `HTTPS`.
- Create two entry points, namely an `HTTP` endpoint on port `80`, and an `HTTPS` endpoint on port `443` where all incoming traffic on port `80` will immediately get redirected to `HTTPS`.
- Enable the Docker configuration backend and listen for container events on the Docker unix socket we've mounted earlier. However, **new containers will not be exposed by Traefik by default; we'll get into this in a bit!**
- Enable automatic request and configuration of SSL certificates using Let's Encrypt. These certificates will be stored in the `acme.json` file, which you can back up yourself and store off-premises.
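For context, here is a minimal sketch of a `traefik.toml` covering the points above (paraphrased Traefik 1.x syntax; the exact file used by the guide lies outside this hunk, and the e-mail address is a placeholder):

```toml
logLevel = "ERROR"
checkNewVersion = true
defaultEntryPoints = ["http", "https"]

[entryPoints]
  [entryPoints.http]
  address = ":80"
    [entryPoints.http.redirect]
    entryPoint = "https"
  [entryPoints.https]
  address = ":443"
    [entryPoints.https.tls]

# Docker backend: watch the mounted socket, but only expose containers
# that opt in via labels.
[docker]
endpoint = "unix:///var/run/docker.sock"
watch = true
exposedbydefault = false

# Let's Encrypt: certificates are stored in the mounted acme.json file.
[acme]
email = "you@my-awesome-app.org"
storage = "acme.json"
entryPoint = "https"
onHostRule = true
```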
Alright, let's boot the container. From the `/opt/traefik` directory, run `docker-compose up -d`, which will create and start the Traefik container.
## Exposing your webservices to the outside world
## Exposing Web Services to the Outside World
Now that we've fully configured and started Traefik, it's time to get our applications running!
Let's take a simple example of a microservice project consisting of various services, where some will be exposed to the outside world and some will not. The `docker-compose.yml` of our project looks like this:
Let's take a simple example of a micro-service project consisting of various services, where some will be exposed to the outside world and some will not. The `docker-compose.yml` of our project looks like this:
```yaml
version: "2.1"
@ -155,10 +169,17 @@ networks:
external: true
```
Here, we can see a set of services with two applications that we're actually exposing to the outside world. Notice how there isn't a single container that has any published ports to the host -- everything is routed through Docker networks. Also, only the containers that we want traffic to get routed to are attached to the `web` network we created at the start of this document. Since the `traefik` container we've created and started earlier is also attached to this network, HTTP requests can now get routed to these containers.
Here, we can see a set of services with two applications that we're actually exposing to the outside world.
Notice how there isn't a single container that has any published ports to the host -- everything is routed through Docker networks.
Also, only the containers that we want traffic to get routed to are attached to the `web` network we created at the start of this document.
Since the `traefik` container we've created and started earlier is also attached to this network, HTTP requests can now get routed to these containers.
### Labels
As mentioned earlier, we don't want containers exposed automatically by Traefik. The reason behind this is simple: we want to have control over this process ourselves. Thanks to Docker labels, we can tell Traefik how to create it's internal routing configuration. Let's take a look at the labels themselves for the `app` service, which is a HTTP webservice listing on port 9000:
As mentioned earlier, we don't want containers exposed automatically by Traefik.
The reason behind this is simple: we want to have control over this process ourselves.
Thanks to Docker labels, we can tell Traefik how to create its internal routing configuration.
Let's take a look at the labels themselves for the `app` service, which is an HTTP web service listening on port 9000:
```yaml
- "traefik.backend=my-awesome-app-app"
@ -168,14 +189,25 @@ As mentioned earlier, we don't want containers exposed automatically by Traefik.
- "traefik.port=9000"
```
First, we specify the `backend` name which corresponds to the actual service we're routing **to**. We also tell Traefik to use the `web` network to route HTTP traffic to this container. With the `frontend.rule` label, we tell Traefik that we want to route to this container if the incoming HTTP request contains the `Host` `app.my-awesome-app.org`. Essentially, this is the actual rule used for Layer-7 load balancing. With the `traefik.enable` label, we tell Traefik to include this container in it's internal configuration. Finally but not unimportantly, we tell Traefik to route **to** port `9000`, since that is the actual TCP/IP port the container actually listens on.
First, we specify the `backend` name which corresponds to the actual service we're routing **to**.
We also tell Traefik to use the `web` network to route HTTP traffic to this container. With the `frontend.rule` label, we tell Traefik that we want to route to this container if the incoming HTTP request contains the `Host` `app.my-awesome-app.org`.
Essentially, this is the actual rule used for Layer-7 load balancing.
With the `traefik.enable` label, we tell Traefik to include this container in its internal configuration.
Last but not least, we tell Traefik to route **to** port `9000`, since that is the TCP port the container actually listens on.
#### Gotchas and tips
- Always specify the correct port where the container expects HTTP traffic using `traefik.port` label. If a container exposes multiple ports, Traefik may forward traffic to the wrong port. Even if a container only exposes one port, you should always write configuration defensively and explicitly.
- Should you choose to enable the `exposedbydefault` flag in the `traefik.toml` configuration, be aware that all containers that are placed in the same network as Traefik will automatically be reachable from the outside world, for everyone and everyone to see. Usually, this is a bad idea.
- Always specify the correct port where the container expects HTTP traffic using the `traefik.port` label.
If a container exposes multiple ports, Traefik may forward traffic to the wrong port.
Even if a container only exposes one port, you should always write configuration defensively and explicitly.
- Should you choose to enable the `exposedbydefault` flag in the `traefik.toml` configuration, be aware that all containers that are placed in the same network as Traefik will automatically be reachable from the outside world, for anyone and everyone to see.
Usually, this is a bad idea.
- With the `traefik.frontend.auth.basic` label, it's possible for Traefik to provide an HTTP basic-auth challenge for the endpoints you provide the label for (see the sketch after this list).
- Traefik has built-in support to automatically export [Prometheus](https://prometheus.io) metrics
- Traefik supports websockets out of the box. In the example above, the `events`-service could be a NodeJS-based application which allows clients to connect using websocket protocol. Thanks to the fact that HTTPS in our example is enforced, these websockets are automatically secure as well (WSS)
- Traefik supports websockets out of the box. In the example above, the `events`-service could be a NodeJS-based application which allows clients to connect using the websocket protocol.
Thanks to the fact that HTTPS in our example is enforced, these websockets are automatically secure as well (WSS)
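As an illustration of the `traefik.frontend.auth.basic` label mentioned above, a compose-file snippet might look like this (a sketch; the user and htpasswd-style hash are placeholders, and `$` has to be doubled inside compose files):

```yaml
labels:
  # user "admin" with an htpasswd-generated hash (placeholder value)
  - "traefik.frontend.auth.basic=admin:$$apr1$$ruca84Hq$$mbjdMZBAG.KWn7vfN/SNK/"
```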
### Final thoughts
Using Traefik as a Layer-7 load balancer in combination with both Docker and Let's Encrypt provides you with an extremely flexible, performant and self-configuring solution for your projects. With Let's Encrypt, your endpoints are automatically secured with production-ready SSL certificates that are renewed automatically as well.
Using Traefik as a Layer-7 load balancer in combination with both Docker and Let's Encrypt provides you with an extremely flexible, performant and self-configuring solution for your projects.
With Let's Encrypt, your endpoints are automatically secured with production-ready SSL certificates that are renewed automatically as well.

View file

@ -14,11 +14,9 @@ on your machine, as it is the quickest way to get a local Kubernetes cluster set
### Role Based Access Control configuration (Kubernetes 1.6+ only)
Kubernetes introduces [Role Based Access Control (RBAC)](https://kubernetes.io/docs/admin/authorization/rbac/) in 1.6+ to allow fine-grained control
of Kubernetes resources and api.
Kubernetes introduces [Role Based Access Control (RBAC)](https://kubernetes.io/docs/admin/authorization/rbac/) in 1.6+ to allow fine-grained control of Kubernetes resources and APIs.
If your cluster is configured with RBAC, you may need to authorize Træfik to use the
Kubernetes API using ClusterRole and ClusterRoleBinding resources:
If your cluster is configured with RBAC, you may need to authorize Træfik to use the Kubernetes API using ClusterRole and ClusterRoleBinding resources:
_Note: your cluster may have suitable ClusterRoles already set up, but the following should work everywhere._
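The exact manifest is applied from the repository in the next step; as a rough sketch, the ClusterRole and ClusterRoleBinding look like this:

```yaml
---
kind: ClusterRole
apiVersion: rbac.authorization.k8s.io/v1beta1
metadata:
  name: traefik-ingress-controller
rules:
  - apiGroups: [""]
    resources: ["services", "endpoints", "secrets"]
    verbs: ["get", "list", "watch"]
  - apiGroups: ["extensions"]
    resources: ["ingresses"]
    verbs: ["get", "list", "watch"]
---
kind: ClusterRoleBinding
apiVersion: rbac.authorization.k8s.io/v1beta1
metadata:
  name: traefik-ingress-controller
roleRef:
  apiGroup: rbac.authorization.k8s.io
  kind: ClusterRole
  name: traefik-ingress-controller
subjects:
  - kind: ServiceAccount
    name: traefik-ingress-controller
    namespace: kube-system
```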
@ -71,20 +69,15 @@ kubectl apply -f https://raw.githubusercontent.com/containous/traefik/master/exa
## Deploy Træfik using a Deployment or DaemonSet
It is possible to use Træfik with a
[Deployment](https://kubernetes.io/docs/concepts/workloads/controllers/deployment/)
or a [DaemonSet](https://kubernetes.io/docs/concepts/workloads/controllers/daemonset/)
object, whereas both options have their own pros and cons: The scalability is much better when
using a Deployment, because you will have a Single-Pod-per-Node model when using
the DeaemonSet. It is possible to exclusively run a Service on a dedicated
set of machines using taints and tolerations with a DaemonSet. On the other hand the
DaemonSet allows you to access any Node directly on Port 80 and 443, where you have to setup a
[Service](https://kubernetes.io/docs/concepts/services-networking/service/) object
with a Deployment.
It is possible to use Træfik with a [Deployment](https://kubernetes.io/docs/concepts/workloads/controllers/deployment/) or a [DaemonSet](https://kubernetes.io/docs/concepts/workloads/controllers/daemonset/) object,
whereas both options have their own pros and cons:
The scalability is much better when using a Deployment, because you will have a Single-Pod-per-Node model when using the DaemonSet.
It is possible to exclusively run a Service on a dedicated set of machines using taints and tolerations with a DaemonSet.
On the other hand, the DaemonSet allows you to access any Node directly on ports 80 and 443, whereas with a Deployment you have to set up a [Service](https://kubernetes.io/docs/concepts/services-networking/service/) object.
The Deployment object looks like this:
```yaml
```yml
---
apiVersion: v1
kind: ServiceAccount
@ -207,10 +200,11 @@ $ kubectl apply -f https://raw.githubusercontent.com/containous/traefik/master/e
$ kubectl apply -f https://raw.githubusercontent.com/containous/traefik/master/examples/k8s/traefik-ds.yaml
```
There are some significant differences between using Deployments and DaemonSets. The Deployment has easier
up and down scaling possibilities. It can implement full pod lifecycle and supports rolling updates from
Kubernetes 1.2. At least one Pod is needed to run the Deployment. The DaemonSet automatically scales to all nodes that
meets a specific selector and guarantees to fill nodes one at a time. Rolling updates are fully supported from Kubernetes 1.7 for DaemonSets as well.
There are some significant differences between using Deployments and DaemonSets.
The Deployment has easier up- and down-scaling possibilities. It can implement the full pod lifecycle and supports rolling updates from Kubernetes 1.2.
At least one Pod is needed to run the Deployment.
The DaemonSet automatically scales to all nodes that meet a specific selector and guarantees to fill nodes one at a time.
Rolling updates are fully supported from Kubernetes 1.7 for DaemonSets as well.
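For instance, scaling the Deployment up or down is a single command (a sketch, assuming the `traefik-ingress-controller` Deployment name used by the example manifests):

```shell
kubectl --namespace=kube-system scale deployment traefik-ingress-controller --replicas=3
```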
@ -229,24 +223,21 @@ kubernetes-dashboard-s8krj 1/1 Running 0 4h
traefik-ingress-controller-678226159-eqseo 1/1 Running 0 7m
```
You should see that after submitting the Deployment or DaemonSet to Kubernetes it has launched
a Pod, and it is now running. _It might take a few moments for kubernetes to pull
the Træfik image and start the container._
You should see that after submitting the Deployment or DaemonSet to Kubernetes it has launched a Pod, and it is now running.
_It might take a few moments for Kubernetes to pull the Træfik image and start the container._
> You could also check the deployment with the Kubernetes dashboard, run
> `minikube dashboard` to open it in your browser, then choose the `kube-system`
> namespace from the menu at the top right of the screen.
You should now be able to access Træfik on port 80 of your Minikube instance when using
the DaemonSet:
You should now be able to access Træfik on port 80 of your Minikube instance when using the DaemonSet:
```sh
curl $(minikube ip)
404 page not found
```
If you decided to use the deployment, then you need to target the correct NodePort, which can
be seen then you execute `kubectl get services --namespace=kube-system`.
If you decided to use the deployment, then you need to target the correct NodePort, which can be seen when you execute `kubectl get services --namespace=kube-system`.
```sh
curl $(minikube ip):<NODEPORT>
@ -257,10 +248,8 @@ curl $(minikube ip):<NODEPORT>
## Deploy Træfik using Helm Chart
Instead of installing Træfik via an own object, you can also use the Træfik Helm chart. This
allows more complex configuration via Kubernetes
[ConfigMap](https://kubernetes.io/docs/tasks/configure-pod-container/configmap/) and enabled
TLS certificates.
Instead of installing Træfik via a dedicated object, you can also use the Træfik Helm chart.
This allows more complex configuration via a Kubernetes [ConfigMap](https://kubernetes.io/docs/tasks/configure-pod-container/configmap/) and enables TLS certificates.
Install the Træfik chart by running:
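The install command itself lies outside this hunk; it looks roughly like this (a sketch using the `stable` chart repository):

```shell
helm install stable/traefik
```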
@ -272,8 +261,7 @@ For more information, check out [the doc](https://github.com/kubernetes/charts/t
## Submitting an Ingress to the cluster
Lets start by creating a Service and an Ingress that will expose the
[Træfik Web UI](https://github.com/containous/traefik#web-ui).
Let's start by creating a Service and an Ingress that will expose the [Træfik Web UI](https://github.com/containous/traefik#web-ui).
```yaml
apiVersion: v1
@ -310,8 +298,7 @@ spec:
kubectl apply -f https://raw.githubusercontent.com/containous/traefik/master/examples/k8s/ui.yaml
```
Now lets setup an entry in our /etc/hosts file to route `traefik-ui.minikube`
to our cluster.
Now let's set up an entry in our `/etc/hosts` file to route `traefik-ui.minikube` to our cluster.
> In production you would want to set up real DNS entries.
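The exact command is not part of this hunk; it follows the same pattern used later for the cheese hostnames, roughly:

```shell
echo "$(minikube ip) traefik-ui.minikube" | sudo tee -a /etc/hosts
```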
@ -325,8 +312,7 @@ We should now be able to visit [traefik-ui.minikube](http://traefik-ui.minikube)
## Name based routing
In this example we are going to setup websites for 3 of the United Kingdoms
best loved cheeses, Cheddar, Stilton and Wensleydale.
In this example we are going to set up websites for 3 of the United Kingdom's best-loved cheeses: Cheddar, Stilton and Wensleydale.
First, let's start by launching the 3 pods for the cheese websites.
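The deployment manifests are applied from the repository, following the same pattern as the other `kubectl apply` calls in this guide (the file name below is illustrative):

```shell
kubectl apply -f https://raw.githubusercontent.com/containous/traefik/master/examples/k8s/cheese-deployments.yaml
```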
@ -534,12 +520,9 @@ spec:
kubectl apply -f https://raw.githubusercontent.com/containous/traefik/master/examples/k8s/cheese-ingress.yaml
```
Now visit the [Træfik dashboard](http://traefik-ui.minikube/) and you should
see a frontend for each host. Along with a backend listing for each service
with a Server set up for each pod.
Now visit the [Træfik dashboard](http://traefik-ui.minikube/) and you should see a frontend for each host, along with a backend listing for each service with a server set up for each pod.
If you edit your `/etc/hosts` again you should be able to access the cheese
websites in your browser.
If you edit your `/etc/hosts` again you should be able to access the cheese websites in your browser.
```shell
echo "$(minikube ip) stilton.minikube cheddar.minikube wensleydale.minikube" | sudo tee -a /etc/hosts
@ -551,9 +534,7 @@ echo "$(minikube ip) stilton.minikube cheddar.minikube wensleydale.minikube" | s
## Path based routing
Now lets suppose that our fictional client has decided that while they are
super happy about our cheesy web design, when they asked for 3 websites
they had not really bargained on having to buy 3 domain names.
Now let's suppose that our fictional client has decided that, while they are super happy about our cheesy web design, when they asked for 3 websites they had not really bargained on having to buy 3 domain names.
No problem, we say, why don't we reconfigure the sites to host all 3 under one domain.
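A sketch of what such a path-based Ingress could look like (the host, the service names and the `PathPrefixStrip` annotation illustrate the approach and are not the exact manifest from the guide):

```yaml
apiVersion: extensions/v1beta1
kind: Ingress
metadata:
  name: cheeses
  annotations:
    traefik.frontend.rule.type: PathPrefixStrip
spec:
  rules:
  - host: cheeses.minikube
    http:
      paths:
      - path: /stilton
        backend:
          serviceName: stilton
          servicePort: http
      - path: /cheddar
        backend:
          serviceName: cheddar
          servicePort: http
      - path: /wensleydale
        backend:
          serviceName: wensleydale
          servicePort: http
```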
@ -646,7 +627,8 @@ spec:
## Forwarding to ExternalNames
When specifying an [ExternalName](https://kubernetes.io/docs/concepts/services-networking/service/#services-without-selectors),
Træfik will forward requests to the given host accordingly and use HTTPS when the Service port matches 443. This still requires setting up a proper port mapping on the Service from the Ingress port to the (external) Service port.
Træfik will forward requests to the given host accordingly and use HTTPS when the Service port matches 443.
This still requires setting up a proper port mapping on the Service from the Ingress port to the (external) Service port.
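For illustration, a Service of type `ExternalName` with the port mapping described above could be sketched like this (the names and the port are hypothetical):

```yaml
kind: Service
apiVersion: v1
metadata:
  name: static-files
spec:
  type: ExternalName
  externalName: static.otherdomain.com
  ports:
  - name: https
    # The Ingress references this port; a Service port of 443 makes Træfik use HTTPS upstream.
    port: 443
```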
## Disable passing the Host header
@ -663,8 +645,7 @@ disablePassHostHeaders = true
### Disable per ingress
To disable passing the Host header per ingress resource set the `traefik.frontend.passHostHeader`
annotation on your ingress to `false`.
To disable passing the Host header per ingress resource, set the `traefik.frontend.passHostHeader` annotation on your ingress to `false`.
Here is an example ingress definition:
```yaml
@ -700,9 +681,7 @@ spec:
externalName: static.otherdomain.com
```
If you were to visit example.com/static the request would then be passed onto
static.otherdomain.com/static and static.otherdomain.com would receive the
request with the Host header being static.otherdomain.com.
If you were to visit `example.com/static`, the request would then be passed on to `static.otherdomain.com/static`, and `static.otherdomain.com` would receive the request with the Host header being `static.otherdomain.com`.
Note: The per ingress annotation overrides whatever the global value is set to.
So you could set `disablePassHostHeaders` to `true` in your toml file and then enable passing

View file

@ -1,4 +1,3 @@
# Key-value store configuration
Both [static global configuration](/user-guide/kv-config/#static-configuration-in-key-value-store) and [dynamic](/user-guide/kv-config/#dynamic-configuration-in-key-value-store) configuration can be stored in a Key-value store.
@ -12,7 +11,7 @@ Træfik supports several Key-value stores:
- [ZooKeeper](https://zookeeper.apache.org/)
- [boltdb](https://github.com/boltdb/bolt)
# Static configuration in Key-value store
## Static configuration in Key-value store
We will see the steps to set it up with an easy example.
Note that we could do the same with any other Key-value Store.
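For example, a single TOML setting maps onto one key/value pair in the store; with Consul's CLI that could look like this (the `traefik/` prefix follows Træfik's KV layout, the value is illustrative):

```shell
# traefik.toml:  logLevel = "ERROR"
# becomes the key "traefik/loglevel" in the KV store:
consul kv put traefik/loglevel ERROR
```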

View file

@ -1,4 +1,3 @@
# Marathon
This guide explains how to integrate Marathon and operate the cluster in a reliable way from Traefik's standpoint.

View file

@ -73,11 +73,9 @@ docker-machine ssh manager "docker network create --driver=overlay traefik-net"
## Deploy Træfik
Let's deploy Træfik as a docker service in our cluster. The only
requirement for Træfik to work with swarm mode is that it needs to run
on a manager node — we are going to use a
[constraint](https://docs.docker.com/engine/reference/commandline/service_create/#/specify-service-constraints-constraint) for
that.
Let's deploy Træfik as a docker service in our cluster.
The only requirement for Træfik to work with swarm mode is that it needs to run on a manager node — we are going to use a
[constraint](https://docs.docker.com/engine/reference/commandline/service_create/#/specify-service-constraints-constraint) for that.
```shell
docker-machine ssh manager "docker service create \
@ -96,24 +94,17 @@ docker-machine ssh manager "docker service create \
Let's explain this command:
- `--publish 80:80 --publish 8080:8080`: we publish port `80` and
`8080` on the cluster.
- `--constraint=node.role==manager`: we ask docker to schedule Træfik
on a manager node.
- `--publish 80:80 --publish 8080:8080`: we publish port `80` and `8080` on the cluster.
- `--constraint=node.role==manager`: we ask docker to schedule Træfik on a manager node.
- `--mount type=bind,source=/var/run/docker.sock,target=/var/run/docker.sock`:
we bind mount the docker socket where Træfik is scheduled to be able
to speak to the daemon.
- `--network traefik-net`: we attach the Træfik service (and thus
the underlying container) to the `traefik-net` network.
- `--docker`: enable docker backend, and `--docker.swarmmode` to
enable the swarm mode on Træfik.
we bind mount the docker socket where Træfik is scheduled to be able to speak to the daemon.
- `--network traefik-net`: we attach the Træfik service (and thus the underlying container) to the `traefik-net` network.
- `--docker`: enable docker backend, and `--docker.swarmmode` to enable the swarm mode on Træfik.
- `--web`: activate the web UI on port `8080`
## Deploy your apps
We can now deploy our app on the cluster,
here [whoami](https://github.com/emilevauge/whoami), a simple web
server in Go. We start 2 services, on the `traefik-net` network.
We can now deploy our app on the cluster, here [whoami](https://github.com/emilevauge/whoami), a simple web server in Go. We start 2 services on the `traefik-net` network.
```shell
docker-machine ssh manager "docker service create \
@ -184,8 +175,7 @@ X-Forwarded-Proto: http
X-Forwarded-Server: 8fbc39271b4c
```
Note that as Træfik is published, you can access it from any machine
and not only the manager.
Note that as Træfik is published, you can access it from any machine and not only the manager.
```shell
curl -H Host:whoami0.traefik http://$(docker-machine ip worker1)
@ -245,7 +235,8 @@ dtpl249tfghc traefik 1/1 traefik --docker --docker.swarmmode
```
## Access to your whoami0 through Træfik multiple times.
Repeat the following command multiple times and note that the Hostname changes each time as Traefik load balances each request against the 5 tasks.
Repeat the following command multiple times and note that the Hostname changes each time as Traefik load balances each request against the 5 tasks:
```shell
curl -H Host:whoami0.traefik http://$(docker-machine ip manager)
Hostname: 8147a7746e7a
@ -266,7 +257,8 @@ X-Forwarded-Proto: http
X-Forwarded-Server: 8fbc39271b4c
```
Do the same against whoami1.
Do the same against whoami1:
```shell
curl -H Host:whoami1.traefik http://$(docker-machine ip manager)
Hostname: ba2c21488299
@ -289,6 +281,7 @@ X-Forwarded-Server: 8fbc39271b4c
Wait, I thought we added the sticky flag to whoami1? Traefik relies on a cookie to maintain stickiness, so you'll need to test this with a browser.
First you need to add whoami1.traefik to your hosts file:
```sh
if [ -n "$(grep whoami1.traefik /etc/hosts)" ];
then