CKA Mock Exam Killer.sh | Question 12 | Deployment on all Nodes
Use context: kubectl config use-context k8s-c1-H
Use Namespace project-tiger for the following. Create a Deployment named deploy-important with label id=very-important (the Pods should also have this label) and 3 replicas. It should contain two containers, the first named container1 with image nginx:1.17.6-alpine and the second one named container2 with image kubernetes/pause.
There should only ever be one Pod of that Deployment running on each worker node. We have two worker nodes: cluster1-node1 and cluster1-node2. Because the Deployment has three replicas, the result should be one Pod running on each of the two nodes. The third Pod won't be scheduled unless a new worker node is added.
In a way we kind of simulate the behaviour of a DaemonSet here, but using a Deployment and a fixed number of replicas.
Solution
There are two ways to solve this: one uses podAntiAffinity, the other uses topologySpreadConstraints.
PodAntiAffinity
k -n project-tiger create deployment deploy-important \
  --image=nginx:1.17.6-alpine --dry-run=client -o yaml > 12.yaml

vim 12.yaml
# 12.yaml
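After editing, 12.yaml could look roughly like the sketch below (one possible manifest, not the only valid one): the generated skeleton is extended with the id=very-important labels, the second container, replicas: 3, and a required podAntiAffinity rule keyed on kubernetes.io/hostname so that no two Pods carrying that label can be scheduled onto the same node.

```yaml
apiVersion: apps/v1
kind: Deployment
metadata:
  labels:
    id: very-important            # label on the Deployment itself
  name: deploy-important
  namespace: project-tiger
spec:
  replicas: 3
  selector:
    matchLabels:
      id: very-important
  template:
    metadata:
      labels:
        id: very-important        # the Pods carry the label as well
    spec:
      containers:
      - name: container1
        image: nginx:1.17.6-alpine
      - name: container2
        image: kubernetes/pause
      affinity:
        podAntiAffinity:
          requiredDuringSchedulingIgnoredDuringExecution:
          - labelSelector:
              matchExpressions:
              - key: id
                operator: In
                values:
                - very-important
            topologyKey: kubernetes.io/hostname   # at most one matching Pod per node
```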
TopologySpreadConstraints
# 12.yaml
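Alternatively, 12.yaml could express the same placement with topologySpreadConstraints instead of an affinity block; a sketch of that variant is below. With maxSkew: 1 and whenUnsatisfiable: DoNotSchedule the scheduler spreads the matching Pods one per node and leaves the replica it cannot place Pending.

```yaml
apiVersion: apps/v1
kind: Deployment
metadata:
  labels:
    id: very-important
  name: deploy-important
  namespace: project-tiger
spec:
  replicas: 3
  selector:
    matchLabels:
      id: very-important
  template:
    metadata:
      labels:
        id: very-important
    spec:
      containers:
      - name: container1
        image: nginx:1.17.6-alpine
      - name: container2
        image: kubernetes/pause
      topologySpreadConstraints:
      - maxSkew: 1
        topologyKey: kubernetes.io/hostname    # spread across individual nodes
        whenUnsatisfiable: DoNotSchedule       # stay Pending instead of violating the constraint
        labelSelector:
          matchLabels:
            id: very-important
```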
Create the Deployment from the manifest:
k create -f 12.yaml
Checking the Deployment status shows 2/3 replicas ready; one Pod stays unscheduled:
k -n project-tiger get deploy -l id=very-important
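The output should look roughly like this (the AGE value is just a placeholder):

```
NAME               READY   UP-TO-DATE   AVAILABLE   AGE
deploy-important   2/3     3            2           2m
```

Listing the Pods with k -n project-tiger get pod -o wide -l id=very-important should then show one Pod running on cluster1-node1, one on cluster1-node2, and the third one Pending.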



