~~PDF:~~
Version - **2024.01**

Last update : ~~LASTMOD~~

======DOF307======
=====Module Contents=====

  * **DOF307**
  * Module Contents
  * LAB #1 - The API Server
    * 1.1 - Connection Refused
    * 1.2 - System Pod Logs
  * LAB #2 - Nodes
    * 2.1 - The NotReady Status
  * LAB #3 - Pods
    * 3.1 - The ErrImagePull Error
    * 3.2 - The CrashLoopBackOff Error
  * LAB #4 - Containers
    * 4.1 - The exec Command
  * LAB #5 - The Network
    * 5.1 - kube-proxy and DNS
    * 5.2 - The netshoot Container
=====LAB #1 - The API Server=====

====1.1 - Connection Refused====

When it is not possible to connect to the K8s API server, you get an error such as:

<code>
trainee@kubemaster:~$ kubectl get nodes
The connection to the server localhost:8080 was refused - did you specify the right host or port?
</code>

As a general rule, this error is caused by one of the three situations covered in the subsections below.
===The kubelet Service===

Check that the kubelet service is enabled and running:

<code>
trainee@kubemaster:~$ su -
Mot de passe : fenestros

root@kubemaster:~# systemctl status kubelet
● kubelet.service - kubelet: The Kubernetes Node Agent
     Loaded: loaded (/lib/systemd/system/kubelet.service; enabled; vendor preset: enabled)
    Drop-In: /etc/systemd/system/kubelet.service.d
             └─10-kubeadm.conf
     Active: active (running) since ...
       Docs: https://kubernetes.io/docs/home/
   Main PID: 550 (kubelet)
      Tasks: 17 (limit: 4915)
        CPU: 4h 16min 54.676s
...

Warning: Journal has been rotated since unit was started. Log output is incomplete or unavailable.
lines 1-14/14 (END)
[q]
</code>
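The check above can be scripted. The sketch below is a minimal, hedged helper: it takes the state string that `systemctl is-active kubelet` would report ("active", "inactive", "failed" are standard systemd values) and decides whether the service needs to be started; the function name is ours, not part of any tool.

```shell
#!/bin/sh
# Hedged sketch: given the state string reported by `systemctl is-active kubelet`,
# decide whether the service needs attention.
kubelet_needs_start() {
  case "$1" in
    active) return 1 ;;   # running: nothing to do
    *)      return 0 ;;   # inactive, failed, unknown: start it
  esac
}

# On a real node you would feed it live data, e.g.:
#   kubelet_needs_start "$(systemctl is-active kubelet)" && systemctl enable --now kubelet
kubelet_needs_start "inactive" && echo "kubelet must be started"
kubelet_needs_start "active"   || echo "kubelet is running"
```

Note that this only automates the decision; starting the service still requires root on the node.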
===The KUBECONFIG Variable===

If you use the root account to interact with K8s, check that the **KUBECONFIG** variable is set:

<code>
root@kubemaster:~# echo $KUBECONFIG
/etc/kubernetes/admin.conf
</code>
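kubectl's configuration lookup can be sketched as a tiny function: the **KUBECONFIG** variable wins when it is set, otherwise **$HOME/.kube/config** is used. This is a simplified model (the real variable may hold a colon-separated list of files), shown here only to make the fallback order explicit.

```shell
#!/bin/sh
# Hedged sketch of kubectl's lookup order: KUBECONFIG when set,
# otherwise $HOME/.kube/config. (Simplification: KUBECONFIG may
# actually contain a list of paths.)
kubeconfig_path() {
  if [ -n "$KUBECONFIG" ]; then
    printf '%s\n' "$KUBECONFIG"
  else
    printf '%s\n' "$HOME/.kube/config"
  fi
}

# As root on this cluster, KUBECONFIG points at admin.conf:
KUBECONFIG=/etc/kubernetes/admin.conf
kubeconfig_path
```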
===The $HOME/.kube/config File===

If you use a non-root account to interact with K8s, check that the **$HOME/.kube/config** file exists, is readable by that account, and points to the right server:

<code>
root@kubemaster:~# exit
déconnexion
trainee@kubemaster:~$

trainee@kubemaster:~$ cat $HOME/.kube/config
apiVersion: v1
clusters:
- cluster:
    certificate-authority-data: ...
    server: https://192.168.56.2:6443
  name: kubernetes
contexts:
- context:
    cluster: kubernetes
    user: kubernetes-admin
  name: kubernetes-admin@kubernetes
current-context: kubernetes-admin@kubernetes
kind: Config
preferences: {}
users:
- name: kubernetes-admin
  user:
    client-certificate-data: ...
    client-key-data: ...
</code>

<code>
trainee@kubemaster:~$ ls -l $HOME/.kube/config
-rw------- 1 trainee trainee ...

trainee@kubemaster:~$ su -
Mot de passe : fenestros
root@kubemaster:~#
</code>
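Giving a non-root user such a file is the standard kubeadm post-install step, written below as a small helper so the paths can be substituted; the function name and argument order are ours. It reproduces the restrictive permissions shown in the listing above (`-rw-------`).

```shell
#!/bin/sh
# Hedged sketch of the standard kubeadm step: copy the admin kubeconfig
# into a user's $HOME/.kube/config and restrict its permissions.
# Arguments: source config file, target home directory.
setup_kubeconfig() {
  src=$1
  home=$2
  mkdir -p "$home/.kube"
  cp "$src" "$home/.kube/config"
  chmod 600 "$home/.kube/config"
}

# Real-world usage (as root) would be:
#   setup_kubeconfig /etc/kubernetes/admin.conf /home/trainee
#   chown trainee:trainee /home/trainee/.kube/config
```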
====1.2 - System Pod Logs====

If, at this stage, you have not found the cause of the problem, consult the logs of the system pods under **/var/log/pods** :

<code>
root@kubemaster:~# ls -l /var/log/pods
total 28
drwxr-xr-x 6 root root 4096 sept.  4 09:44 kube-system_calico-node-dc7hd_3fe340ed-6df4-4252-9e4e-8c244453176a
drwxr-xr-x 3 root root 4096 sept.  4 13:00 kube-system_coredns-565d847f94-tqd8z_d96f42ed-ebd4-4eb9-8c89-2d80b81ef9cf
drwxr-xr-x 3 root root 4096 sept.  4 12:36 kube-system_etcd-kubemaster.ittraining.loc_ddbb10499877103d862e5ce637b18ab1
drwxr-xr-x 3 root root 4096 sept.  4 12:36 kube-system_kube-apiserver-kubemaster.ittraining.loc_ec70600cac9ca8c8ea9545f1a42f82e5
drwxr-xr-x 3 root root 4096 sept.  4 12:36 kube-system_kube-controller-manager-kubemaster.ittraining.loc_0e3dcf54223b4398765d21e9e6aaebc6
drwxr-xr-x 3 root root 4096 sept.  4 12:31 kube-system_kube-proxy-x7fpc_80673937-ff21-4dba-a821-fb3b0b1541a4
drwxr-xr-x 3 root root 4096 sept.  4 12:36 kube-system_kube-scheduler-kubemaster.ittraining.loc_c3485d2a42b90757729a745cd8ee5f7d

root@kubemaster:~# ls -l /var/log/pods/kube-system_kube-apiserver-kubemaster.ittraining.loc_ec70600cac9ca8c8ea9545f1a42f82e5
total 4
drwxr-xr-x 2 root root 4096 sept. 16 09:31 kube-apiserver

root@kubemaster:~# ls -l /var/log/pods/kube-system_kube-apiserver-kubemaster.ittraining.loc_ec70600cac9ca8c8ea9545f1a42f82e5/kube-apiserver
total 2420
-rw-r----- 1 root root 1009731 sept. 16 08:19 0.log
-rw-r----- 1 root root 1460156 sept. 28 12:22 1.log

root@kubemaster:~# tail /var/log/pods/kube-system_kube-apiserver-kubemaster.ittraining.loc_ec70600cac9ca8c8ea9545f1a42f82e5/kube-apiserver/1.log
2022-09-28T11:...
2022-09-28T11:...
2022-09-28T11:...
2022-09-28T11:...
2022-09-28T12:...
2022-09-28T12:...
2022-09-28T12:...
2022-09-28T12:...
2022-09-28T12:...
2022-09-28T12:...
</code>
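These log files can be large, so it helps to filter them for error-level lines. The helper below is a hedged sketch — the function name is ours, and the example path in the comment matches the kube-apiserver directory listed above; `grep -h` suppresses the file names in the output.

```shell
#!/bin/sh
# Hedged helper: keep only the last 20 error-level lines from one or
# more pod log files passed as arguments.
show_errors() {
  grep -hi 'error' "$@" | tail -n 20
}

# Usage on the master (path as listed above):
#   show_errors /var/log/pods/kube-system_kube-apiserver-*/kube-apiserver/*.log
```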
Note that once the API server becomes functional again, you can check the state of the system pods directly:

<code>
root@kubemaster:~# kubectl get pods -n kube-system
NAME                                                READY   STATUS    ...
calico-kube-controllers-6799f5f4b4-2tgpq            ...
calico-node-5htrc                                   ...
calico-node-dc7hd                                   ...
calico-node-qk5kt                                   ...
coredns-565d847f94-kkpbp                            ...
coredns-565d847f94-tqd8z                            ...
etcd-kubemaster.ittraining.loc                      ...
kube-apiserver-kubemaster.ittraining.loc            ...
kube-controller-manager-kubemaster.ittraining.loc   ...
kube-proxy-ggmt6                                    ...
kube-proxy-x5j2r                                    ...
kube-proxy-x7fpc                                    ...
kube-scheduler-kubemaster.ittraining.loc            ...
metrics-server-5dbb5ff5bd-vh5fz                     ...
</code>

The API server log now contains ordinary informational and trace entries rather than errors:

<code>
root@kubemaster:~# tail /var/log/pods/kube-system_kube-apiserver-kubemaster.ittraining.loc_ec70600cac9ca8c8ea9545f1a42f82e5/kube-apiserver/1.log
Trace[1595276047]: [564.497826ms] [564.497826ms] END
I0928 09:22:18.405784 ...
Trace[1267846829]: ---"..."
Trace[1267846829]: [505.988424ms] [505.988424ms] END
I0928 10:...
I0928 10:...
Trace[338168453]: [768.168206ms] [768.168206ms] END
I0928 10:...
Trace[238339745]: ---"..."
Trace[238339745]: ...
</code>
=====LAB #2 - Nodes=====

====2.1 - The NotReady Status====

When a cluster node shows a problem, look at the **Conditions** section in the output of the **kubectl describe node** command:

<code>
root@kubemaster:~# kubectl describe node kubenode1.ittraining.loc
...
Conditions:
  Type                 Status  LastHeartbeatTime                 ...
  ----                 ------  -----------------                 ...
  NetworkUnavailable   False   ...
  MemoryPressure       False   ...
  DiskPressure         False   ...
  PIDPressure          False   ...
  Ready                True    Wed, 28 Sep 2022 09:17:21 +0200   ...
...
</code>
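Rather than reading the whole **Conditions** table, you can extract just the **Ready** status. The awk one-liner below is a hedged sketch (the function name is ours); it relies on the "Type Status ..." row layout shown above.

```shell
#!/bin/sh
# Hedged helper: read `kubectl describe node` output on stdin and print
# the Status column of the Ready condition row.
ready_status() {
  awk '$1 == "Ready" { print $2; exit }'
}

# Usage: kubectl describe node kubenode1.ittraining.loc | ready_status
```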
As a general rule, the **NotReady** status indicates that the kubelet service is stopped on the node concerned. To demonstrate this, connect to **kubenode1** and stop the service:

<code>
root@kubemaster:~# ssh -l trainee 192.168.56.3
trainee@192.168.56.3's password: trainee
Linux kubenode1.ittraining.loc 4.9.0-19-amd64 #1 SMP Debian 4.9.320-2 (2022-06-30) x86_64

The programs included with the Debian GNU/Linux system are free software;
the exact distribution terms for each program are described in the
individual files in /usr/share/doc/*/copyright.

Debian GNU/Linux comes with ABSOLUTELY NO WARRANTY, to the extent
permitted by applicable law.
Last login: Fri Sep 16 18:07:39 2022 from 192.168.56.2
trainee@kubenode1:~$ su -
Mot de passe : fenestros

root@kubenode1:~# systemctl stop kubelet

root@kubenode1:~# systemctl disable kubelet
Removed /etc/systemd/system/multi-user.target.wants/kubelet.service.

root@kubenode1:~# exit
déconnexion
trainee@kubenode1:~$ exit
déconnexion
Connection to 192.168.56.3 closed.
</code>

Back on **kubemaster**, the node now reports the **NotReady** status:

<code>
root@kubemaster:~# kubectl get nodes
NAME                        STATUS     ROLES           ...
kubemaster.ittraining.loc   Ready      control-plane   ...
kubenode1.ittraining.loc    NotReady   <none>          ...
kubenode2.ittraining.loc    Ready      <none>          ...
</code>
By re-enabling and restarting the kubelet service, the node returns to the **Ready** status:

<code>
root@kubemaster:~# ssh -l trainee 192.168.56.3
trainee@192.168.56.3's password: trainee
Linux kubenode1.ittraining.loc 4.9.0-19-amd64 #1 SMP Debian 4.9.320-2 (2022-06-30) x86_64

The programs included with the Debian GNU/Linux system are free software;
the exact distribution terms for each program are described in the
individual files in /usr/share/doc/*/copyright.

Debian GNU/Linux comes with ABSOLUTELY NO WARRANTY, to the extent
permitted by applicable law.
Last login: Wed Sep 28 09:20:14 2022 from 192.168.56.2
trainee@kubenode1:~$ su -
Mot de passe : fenestros

root@kubenode1:~# systemctl enable kubelet
Created symlink /etc/systemd/system/multi-user.target.wants/kubelet.service → /lib/systemd/system/kubelet.service.

root@kubenode1:~# systemctl start kubelet

root@kubenode1:~# systemctl status kubelet
● kubelet.service - kubelet: The Kubernetes Node Agent
     Loaded: loaded (/lib/systemd/system/kubelet.service; enabled; vendor preset: enabled)
    Drop-In: /etc/systemd/system/kubelet.service.d
             └─10-kubeadm.conf
     Active: active (running) since ...
       Docs: https://kubernetes.io/docs/home/
   Main PID: 5996 (kubelet)
      Tasks: 18 (limit: 4915)
        CPU: 555ms

sept. 28 09:54:51 kubenode1.ittraining.loc kubelet[5996]: ...
sept. 28 09:54:52 kubenode1.ittraining.loc kubelet[5996]: ...
sept. 28 09:54:57 kubenode1.ittraining.loc kubelet[5996]: ...

root@kubenode1:~# exit
déconnexion
trainee@kubenode1:~$ exit
déconnexion
Connection to 192.168.56.3 closed.
</code>

<code>
root@kubemaster:~# kubectl get nodes
NAME                        STATUS   ROLES           ...
kubemaster.ittraining.loc   Ready    control-plane   ...
kubenode1.ittraining.loc    Ready    <none>          ...
kubenode2.ittraining.loc    Ready    <none>          ...
</code>
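After restarting kubelet, the node can take a little while to report **Ready**, so a small retry loop beats re-running `kubectl get nodes` by hand. The sketch below is generic and hedged: `wait_until` is our own helper name, taking a number of attempts followed by the command to retry.

```shell
#!/bin/sh
# Hedged sketch: retry a command up to N times, one second apart,
# returning success as soon as the command succeeds.
wait_until() {
  tries=$1
  shift
  while [ "$tries" -gt 0 ]; do
    "$@" && return 0
    tries=$((tries - 1))
    sleep 1
  done
  return 1
}

# Usage on the master:
#   wait_until 30 sh -c "kubectl get node kubenode1.ittraining.loc | grep -q ' Ready'"
```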
=====LAB #3 - Pods=====

When a pod in the cluster shows a problem, look at the **Events** section in the output of the **kubectl describe pod** command.

====3.1 - The ErrImagePull Error====

Start by creating the file **deployment-postgresql.yaml** :

<code>
root@kubemaster:~# vi deployment-postgresql.yaml
root@kubemaster:~# cat deployment-postgresql.yaml
apiVersion: apps/v1
kind: Deployment
metadata:
  name: postgresql
  labels:
    app: postgresql
spec:
  replicas: 1
  selector:
    matchLabels:
      app: postgresql
  template:
    metadata:
      labels:
        app: postgresql
    spec:
      containers:
      - image: bitnami/postgresql:...
        imagePullPolicy: IfNotPresent
        name: postgresql
</code>

Then deploy the application:

<code>
root@kubemaster:~# kubectl apply -f deployment-postgresql.yaml
deployment.apps/postgresql created
</code>
Checking the state of the pods, you will see that the new pod is in an image-pull error state:

<code>
root@kubemaster:~# kubectl get pods
NAME                          READY   STATUS         ...
postgresql-6778f6569c-x84xd   0/1     ErrImagePull   ...
sharedvolume                  ...
volumepod                     ...
</code>
Consult the **Events** section of the output of the **describe** command:

<code>
root@kubemaster:~# kubectl describe pod postgresql-6778f6569c-x84xd | tail
                             node.kubernetes.io/unreachable:NoExecute op=Exists for 300s
Events:
  Type     Reason     Age   From     Message
  ----     ------     ----  ----     -------
  Normal   Scheduled  ...
  Normal   Pulling    ...
  Warning  Failed     ...
  Warning  Failed     ...
  Normal   BackOff    ...
  Warning  Failed     ...
</code>
As you can see, there are three warnings of type **Failed** :

<file>
...
</file>

The first of the three warnings tells us clearly that there is a problem with the tag of the image specified in the **deployment-postgresql.yaml** file.

So change the tag in this file to **10.13.0** :
<code>
root@kubemaster:~# vi deployment-postgresql.yaml
root@kubemaster:~# cat deployment-postgresql.yaml
apiVersion: apps/v1
kind: Deployment
metadata:
  name: postgresql
  labels:
    app: postgresql
spec:
  replicas: 1
  selector:
    matchLabels:
      app: postgresql
  template:
    metadata:
      labels:
        app: postgresql
    spec:
      containers:
      - image: bitnami/postgresql:10.13.0
        imagePullPolicy: IfNotPresent
        name: postgresql
</code>
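As an alternative to editing the manifest with vi, the tag can be rewritten with sed. This is a hedged sketch: the function name is ours, and the pattern assumes the manifest contains a single `bitnami/postgresql` image line as shown above.

```shell
#!/bin/sh
# Hedged helper: rewrite the bitnami/postgresql image tag in a manifest.
# Arguments: manifest file, new tag.
set_tag() {
  file=$1
  tag=$2
  sed -i "s|bitnami/postgresql:[^[:space:]]*|bitnami/postgresql:$tag|" "$file"
}

# Usage: set_tag deployment-postgresql.yaml 10.13.0
```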
Apply the new configuration:

<code>
root@kubemaster:~# kubectl apply -f deployment-postgresql.yaml
deployment.apps/postgresql configured
</code>
====3.2 - The CrashLoopBackOff Error====

Checking the state of the pods again, you will see that the new pod is now in the **CrashLoopBackOff** state:

<code>
root@kubemaster:~# kubectl get pods
NAME                          READY   STATUS             ...
postgresql-6668d5d6b5-swr9g   0/1     CrashLoopBackOff   ...
postgresql-6778f6569c-x84xd   ...
sharedvolume                  ...
volumepod                     ...
</code>

Consult the **Events** section of the output of the **describe** command to see what is happening with this second pod:

<code>
root@kubemaster:~# kubectl describe pod postgresql-6668d5d6b5-swr9g | tail
Events:
  Type     Reason     Age   From     Message
  ----     ------     ----  ----     -------
  Normal   Scheduled  ...
  Normal   Pulled     ...
  Normal   Created    ...
  Normal   Started    ...
  Normal   Pulled     ...
  Warning  BackOff    ...
</code>
This time, the **Events** section shows that the image was pulled successfully but that the container keeps crashing and is being restarted by Kubernetes.

To obtain more information, consult the logs of the pod with the **kubectl logs** command:

<code>
root@kubemaster:~# kubectl logs postgresql-6668d5d6b5-swr9g
postgresql 08:43:48.60
postgresql 08:43:48.60 Welcome to the Bitnami postgresql container
postgresql 08:43:48.60 Subscribe to project updates by watching https://...
postgresql 08:43:48.60 Submit issues and feature requests at https://...
postgresql 08:43:48.60
postgresql 08:43:48.62 INFO  ==> ** Starting PostgreSQL setup **
postgresql 08:43:48.63 INFO  ==> Validating settings in POSTGRESQL_* env vars..
postgresql 08:43:48.63 ERROR ==> The POSTGRESQL_PASSWORD environment variable is empty or not set. Set the environment variable ALLOW_EMPTY_PASSWORD=yes to allow the container to be started with blank passwords. This is recommended only for development.
postgresql 08:43:48.63 ERROR ==> The POSTGRESQL_PASSWORD environment variable is empty or not set. Set the environment variable ALLOW_EMPTY_PASSWORD=yes to allow the container to be started with blank passwords. This is recommended only for development.
</code>

The tail of the command output identifies the problem clearly:

<file>
...
postgresql 08:43:48.63 ERROR ==> The POSTGRESQL_PASSWORD environment variable is empty or not set. Set the environment variable ALLOW_EMPTY_PASSWORD=yes to allow the container to be started with blank passwords. This is recommended only for development.
</file>

Therefore, update the **deployment-postgresql.yaml** file:

<file>
root@kubemaster:~# vi deployment-postgresql.yaml
root@kubemaster:~# cat deployment-postgresql.yaml
apiVersion: apps/v1
kind: Deployment
metadata:
  name: postgresql
  labels:
    app: postgresql
spec:
  replicas: 1
  selector:
    matchLabels:
      app: postgresql
  template:
    metadata:
      labels:
        app: postgresql
    spec:
      containers:
      - image: bitnami/postgresql
        name: postgresql
        env:
        - name: POSTGRESQL_PASSWORD
          value: "..."
</file>

Apply the new configuration:

<file>
root@kubemaster:~# kubectl apply -f deployment-postgresql.yaml
deployment.apps/postgresql configured
</file>
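The Deployment above passes the password as a plain-text environment value, which is acceptable for a lab. In production the usual pattern is to store it in a Secret and reference it with `secretKeyRef`. A minimal sketch, assuming a hypothetical Secret named `postgresql-secret` with a key `password` (both names are illustrative, not part of the lab):

```yaml
# Hypothetical Secret; name and key are illustrative only.
apiVersion: v1
kind: Secret
metadata:
  name: postgresql-secret
type: Opaque
stringData:
  password: changeme
---
# In the Deployment's container spec, the literal value would then be
# replaced by a reference to the Secret:
#        env:
#        - name: POSTGRESQL_PASSWORD
#          valueFrom:
#            secretKeyRef:
#              name: postgresql-secret
#              key: password
```

After `kubectl apply`, the container still sees `POSTGRESQL_PASSWORD`, but the value no longer appears in the Deployment manifest.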

Check the state of the pods and of the Deployment:

<file>
root@kubemaster:~# kubectl get pods
NAME                          READY   STATUS    RESTARTS   AGE
postgresql-6f885d8957-tnlbb   1/1     Running   0          ...
sharedvolume                  ...
volumepod                     ...

root@kubemaster:~# kubectl get deployments
NAME         READY   UP-TO-DATE   AVAILABLE   AGE
postgresql   1/1     1            1           ...
</file>

Now inspect the logs of the new pod:

<file>
root@kubemaster:~# kubectl logs postgresql-6f885d8957-tnlbb
postgresql 08:48:35.14 
postgresql 08:48:35.14 Welcome to the Bitnami postgresql container
postgresql 08:48:35.14 Subscribe to project updates by watching https://...
postgresql 08:48:35.14 Submit issues and feature requests at https://...
postgresql 08:48:35.15 
postgresql 08:48:35.16 INFO  ==> ** Starting PostgreSQL setup **
postgresql 08:48:35.17 INFO  ==> Validating settings in POSTGRESQL_* env vars..
postgresql 08:48:35.18 INFO  ==> Loading custom pre-init scripts...
postgresql 08:48:35.18 INFO  ==> Initializing PostgreSQL database...
postgresql 08:48:35.20 INFO  ==> pg_hba.conf file not detected. Generating it...
postgresql 08:48:35.20 INFO  ==> Generating local authentication configuration
postgresql 08:48:47.94 INFO  ==> Starting PostgreSQL in background...
postgresql 08:48:48.36 INFO  ==> Changing password of postgres
postgresql 08:48:48.39 INFO  ==> Configuring replication parameters
postgresql 08:48:48.46 INFO  ==> Configuring fsync
postgresql 08:48:48.47 INFO  ==> Loading custom scripts...
postgresql 08:48:48.47 INFO  ==> Enabling remote connections
postgresql 08:48:48.48 INFO  ==> Stopping PostgreSQL...
postgresql 08:48:49.49 INFO  ==> ** PostgreSQL setup finished! **

postgresql 08:48:49.50 INFO  ==> ** Starting PostgreSQL **
2022-09-28 08:48:49.633 GMT [1] LOG:  listening on IPv4 address "0.0.0.0", port 5432
2022-09-28 08:48:49.633 GMT [1] LOG:  listening on IPv6 address "::", port 5432
2022-09-28 08:48:49.699 GMT [1] LOG:  listening on Unix socket "/tmp/.s.PGSQL.5432"
2022-09-28 08:48:49.817 GMT [106] LOG:  database system was shut down at 2022-09-28 08:48:48 GMT
2022-09-28 08:48:...
^C
</file>
<WRAP center round important>
**Important**: Note the absence of the previous error.
</WRAP>
=====LAB #4 - Containers=====

====4.1 - The exec Command====

The **exec** command can be used to run a command inside a container, for example to display a configuration file:

<file>
root@kubemaster:~# kubectl exec postgresql-6f885d8957-tnlbb -- more .../postgresql.conf
# -----------------------------
# PostgreSQL configuration file
# -----------------------------
#
# This file consists of lines of the form:
#
#   name = value
#
# (The "=" is optional.)  Whitespace may be used.  Comments are introduced with
# "#" anywhere on a line.  The complete list of parameter names and allowed
# values can be found in the PostgreSQL documentation.
#
# The commented-out settings shown in this file represent the default values.
# Re-commenting a setting is NOT sufficient to revert it to the default value;
# you need to reload the server.
#
# This file is read on server startup and when the server receives a SIGHUP
# signal.  If you edit the file on a running system, you have to SIGHUP the
# server for the changes to take effect, run "pg_ctl reload", or execute
# "SELECT pg_reload_conf()".  Some parameters, which are marked below,
# require a server shutdown and restart to take effect.
#
# Any parameter can also be given as a command-line option to the server, e.g.,
# "postgres -c log_connections=on".  Some parameters can be changed at run time
# with the "SET" SQL command.
#
# Memory units:  kB = kilobytes
#                MB = megabytes
#                GB = gigabytes
#                TB = terabytes
#

#------------------------------------------------------------------------------
# FILE LOCATIONS
#------------------------------------------------------------------------------

# The default values of these variables are driven from the -D command-line
# option or PGDATA environment variable, represented here as ConfigDir.

#data_directory = 'ConfigDir'           # use data in another directory
                                        # (change requires restart)
#hba_file = 'ConfigDir/pg_hba.conf'     # host-based authentication file
                                        # (change requires restart)
#ident_file = 'ConfigDir/pg_ident.conf' # ident configuration file
                                        # (change requires restart)

# If external_pid_file is not explicitly set, no extra PID file is written.
#external_pid_file = ''                 # write an extra PID file
                                        # (change requires restart)

#------------------------------------------------------------------------------
# CONNECTIONS AND AUTHENTICATION
#------------------------------------------------------------------------------

--More--
</file>

Lastly, open an interactive shell inside the container:

<file>
root@kubemaster:~# kubectl exec --stdin --tty postgresql-6f885d8957-tnlbb -- /bin/bash
I have no name!@postgresql-6f885d8957-tnlbb:/$ exit
exit
root@kubemaster:~#
</file>

=====LAB #5 - The Network=====

====5.1 - kube-proxy and DNS====

Use the **kubectl get pods** command to list the pods in the **kube-system** namespace:

<file>
root@kubemaster:~# kubectl get pods -n kube-system
NAME                                                READY   STATUS    RESTARTS   AGE
calico-kube-controllers-6799f5f4b4-2tgpq            1/1     Running   ...
calico-node-5htrc                                   1/1     Running   ...
calico-node-dc7hd                                   1/1     Running   ...
calico-node-qk5kt                                   1/1     Running   ...
coredns-565d847f94-kkpbp                            1/1     Running   ...
coredns-565d847f94-tqd8z                            1/1     Running   ...
etcd-kubemaster.ittraining.loc                      1/1     Running   ...
kube-apiserver-kubemaster.ittraining.loc            1/1     Running   ...
kube-controller-manager-kubemaster.ittraining.loc   1/1     Running   ...
kube-proxy-ggmt6                                    1/1     Running   ...
kube-proxy-x5j2r                                    1/1     Running   ...
kube-proxy-x7fpc                                    1/1     Running   ...
kube-scheduler-kubemaster.ittraining.loc            1/1     Running   ...
metrics-server-5dbb5ff5bd-vh5fz                     1/1     Running   ...
</file>
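When scanning a long `kubectl get pods` listing, it can help to filter out everything that is in the expected `Running` state. A sketch on a captured snippet (the pod names and statuses below are invented for illustration; no cluster is needed to run it):

```shell
# Captured output snippet (illustrative data, not from a real cluster)
snippet='NAME                 READY   STATUS             RESTARTS   AGE
kube-proxy-ggmt6     1/1     Running            0          10d
coredns-565d847f94   0/1     CrashLoopBackOff   7          10d'

# Print the name of every pod whose STATUS column is not "Running"
not_running=$(printf '%s\n' "$snippet" | awk 'NR>1 && $3!="Running" {print $1}')
echo "$not_running"
```

Against a live cluster, the same `awk` filter would simply be fed by `kubectl get pods -n kube-system`.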

Look for errors in the logs of the **kube-proxy** and **coredns** pods:

<file>
root@kubemaster:~# kubectl logs kube-proxy-ggmt6 -n kube-system
I0916 07:15:19.552523 ...
...
Trace[210170851]: ...
</file>

<file>
root@kubemaster:~# kubectl logs coredns-565d847f94-kkpbp -n kube-system
[INFO] plugin/kubernetes: waiting for Kubernetes API before starting server
[INFO] plugin/kubernetes: waiting for Kubernetes API before starting server
.:53
[INFO] plugin/reload: Running configuration SHA512 = 591cf328cccc12bc490481273e738df59329c62c0b729d94e8b61db9961c2fa5f046dd37f1cf888b953814040d180f52594972691cd6ff41be96639138a43908
CoreDNS-1.9.3
linux/amd64, go1.18.2, 45b0a11
</file>
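kube-proxy, like the other klog-based components, prefixes every log line with a severity letter: `I` (info), `W` (warning), `E` (error) or `F` (fatal). Filtering on that first character is a quick way to surface problems in otherwise verbose logs. A local sketch on sample lines (the messages themselves are invented for illustration):

```shell
# Sample klog-style lines (invented messages, real line format)
logs='I0916 07:15:19.552523       1 server.go:50] version v1.25.0
W0916 07:15:20.100000       1 proxier.go:220] hypothetical warning
E0916 07:15:21.200000       1 proxier.go:400] hypothetical error'

# Keep only Warning, Error and Fatal lines
problems=$(printf '%s\n' "$logs" | grep -E '^[WEF]')
echo "$problems"
```

On a live cluster you would pipe `kubectl logs <pod> -n kube-system` into the same `grep`.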

====5.2 - The netshoot Container====

If at this stage you still have not found the source of the problem, you can use the **netshoot** troubleshooting container.

Create the file describing the **nginx-netshoot** pod and the **service-netshoot** service:

<file>
root@kubemaster:~# vi nginx-netshoot.yaml
root@kubemaster:~# cat nginx-netshoot.yaml
apiVersion: v1
kind: Pod
metadata:
  name: nginx-netshoot
  labels:
    app: nginx-netshoot
spec:
  containers:
  - name: nginx
    image: nginx:...
---
apiVersion: v1
kind: Service
metadata:
  name: service-netshoot
spec:
  type: ClusterIP
  selector:
    app: nginx-netshoot
  ports:
  - protocol: TCP
    port: 80
    targetPort: 80
</file>

Create the two resources:

<file>
root@kubemaster:~# kubectl create -f nginx-netshoot.yaml
pod/nginx-netshoot created
service/service-netshoot created
</file>

Check that the service is running:

<file>
root@kubemaster:~# kubectl get services
NAME               TYPE        CLUSTER-IP      EXTERNAL-IP   PORT(S)   AGE
kubernetes         ClusterIP   10.96.0.1       <none>        443/TCP   ...
service-netshoot   ClusterIP   10.107.115.28   <none>        80/TCP    ...
</file>

Now create the **netshoot.yaml** file:

<file>
root@kubemaster:~# vi netshoot.yaml
root@kubemaster:~# cat netshoot.yaml
apiVersion: v1
kind: Pod
metadata:
  name: netshoot
spec:
  containers:
  - name: netshoot
    image: nicolaka/netshoot
    command: ["sleep", "infinity"]
</file>

Create the pod:

<file>
root@kubemaster:~# kubectl create -f netshoot.yaml
pod/netshoot created
</file>

Check that the pod's status is **READY**:

<file>
root@kubemaster:~# kubectl get pods
NAME                          READY   STATUS    RESTARTS   AGE
netshoot                      1/1     Running   ...
nginx-netshoot                1/1     Running   ...
postgresql-6f885d8957-tnlbb   1/1     Running   ...
sharedvolume                  ...
troubleshooting               ...
volumepod                     ...
</file>

Enter the **netshoot** container:

<file>
root@kubemaster:~# kubectl exec --stdin --tty netshoot -- /bin/bash
bash-5.1#
</file>

Test that the **service-netshoot** service responds:

<file>
bash-5.1# curl service-netshoot
<!DOCTYPE html>
<html>
<head>
<title>Welcome to nginx!</title>
<style>
body {
    width: 35em;
    margin: 0 auto;
    font-family: Tahoma, Verdana, Arial, sans-serif;
}
</style>
</head>
<body>
<h1>Welcome to nginx!</h1>
<p>If you see this page, the nginx web server is successfully installed and
working. Further configuration is required.</p>

<p>For online documentation and support please refer to
<a href="http://nginx.org/">nginx.org</a>.<br/>
Commercial support is available at
<a href="http://nginx.com/">nginx.com</a>.</p>

<p><em>Thank you for using nginx.</em></p>
</body>
</html>
bash-5.1#
</file>

Lastly, use the **nslookup** command to check name resolution of the service:

<file>
bash-5.1# nslookup service-netshoot
Server:     10.96.0.10
Address:    10.96.0.10#53

Name:   service-netshoot.default.svc.cluster.local
Address: 10.107.115.28
</file>
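nslookup returns the fully qualified name because cluster DNS names follow a fixed pattern: `<service>.<namespace>.svc.<cluster-domain>`. A sketch of how the name seen above is composed (`cluster.local` is the default cluster domain — an assumption, since it is configurable per cluster):

```shell
# Compose the DNS name of a Service the way coredns exposes it
service=service-netshoot
namespace=default
cluster_domain=cluster.local   # default; configurable per cluster
fqdn="${service}.${namespace}.svc.${cluster_domain}"
echo "$fqdn"
```

Pods in the same namespace can use the short name `service-netshoot`; the search domains in the pod's `/etc/resolv.conf` fill in the rest.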
<WRAP center round important>
**Important**: For more information about the **netshoot** image, see the project's documentation.
</WRAP>
| ---- | ---- | ||
Copyright © 2024 Hugh Norris