Kubernetes Cluster Setup
Prepare the Linux virtual machines (a K8S cluster starts at three nodes; a fourth VM hosts the image registry), running CentOS 7.4. Each VM gets 2 CPUs and 2 GB of RAM (the minimum K8S requires), uses a bridged NIC, and has a static IP:
| Hostname | IP address | Role |
| --- | --- | --- |
| master1 | ens32: 192.168.2.1 | K8S master node / etcd node |
| node1 | ens32: 192.168.2.2 | K8S worker node |
| node2 | ens32: 192.168.2.3 | K8S worker node |
| harbor | ens32: 192.168.2.4 | Docker image registry node |
1. System environment initialization

```shell
# set the hostname on each machine, then log out and back in
hostnamectl set-hostname master1
hostnamectl set-hostname node1
hostnamectl set-hostname node2
hostnamectl set-hostname harbor
logout

timedatectl set-timezone Asia/Shanghai
timedatectl set-local-rtc 0
systemctl restart rsyslog
systemctl restart crond

wget -O /etc/yum.repos.d/CentOS-Base.repo https://mirrors.aliyun.com/repo/Centos-7.repo
yum install -y epel-release conntrack ntpdate ntp ipvsadm ipset iptables-services \
  iptables curl sysstat libseccomp wget unzip net-tools git yum-utils jq \
  device-mapper-persistent-data lvm2
ntpdate ntp1.aliyun.com

cat <<EOF >> /etc/hosts
192.168.2.1 master1
192.168.2.2 node1
192.168.2.3 node2
192.168.2.4 hub.lemon.com
199.232.68.133 raw.githubusercontent.com
EOF

systemctl stop firewalld
systemctl disable firewalld
systemctl start iptables
systemctl enable iptables
iptables -F && service iptables save

swapoff -a
sed -i '/ swap / s/^\(.*\)$/#\1/g' /etc/fstab

setenforce 0
sed -i 's/^SELINUX=enforcing$/SELINUX=permissive/' /etc/selinux/config

systemctl stop postfix && systemctl disable postfix

mkdir /var/log/journal
mkdir /etc/systemd/journald.conf.d
cat > /etc/systemd/journald.conf.d/99-prophet.conf <<EOF
[Journal]
# persist logs to disk
Storage=persistent
# compress historical logs
Compress=yes
SyncIntervalSec=5m
RateLimitInterval=30s
RateLimitBurst=1000
# cap total disk usage at 10G
SystemMaxUse=10G
# cap each log file at 200M
SystemMaxFileSize=200M
# keep logs for 2 weeks
MaxRetentionSec=2week
# do not forward logs to syslog
ForwardToSyslog=no
EOF
systemctl restart systemd-journald
```
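The `/etc/hosts` entries above must be identical on all four machines, so it helps to generate them from one list instead of typing them per node. A minimal sketch (the `HOSTS_FILE` variable is a stand-in so it can be tried against a scratch file first; on a real node it would point at `/etc/hosts`):

```shell
# Sketch: append the cluster's name-resolution entries to a hosts file,
# skipping entries that are already present so the script is re-runnable.
HOSTS_FILE="${HOSTS_FILE:-./hosts.snippet}"

# one "IP hostname" pair per line, taken from the table above
while read -r ip name; do
  grep -q " $name\$" "$HOSTS_FILE" 2>/dev/null && continue
  printf '%s %s\n' "$ip" "$name" >> "$HOSTS_FILE"
done <<'EOF'
192.168.2.1 master1
192.168.2.2 node1
192.168.2.3 node2
192.168.2.4 hub.lemon.com
EOF

cat "$HOSTS_FILE"
```

Running it twice appends nothing the second time, which makes it safe to include in a provisioning script.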
2. Upgrade and tune the system kernel
The 3.10.x kernel that ships with CentOS 7.x has bugs that make Docker and Kubernetes unstable. For example, newer Docker releases (1.13 and later) enable the kernel memory accounting feature, which the 3.10 kernel supports only experimentally and which cannot be turned off; under pressure, such as frequent container starts and stops, this causes cgroup memory leaks. There is also a network-device reference-count leak that produces errors like: "kernel:unregister_netdevice: waiting for eth0 to become free. Usage count = 1".
```shell
rpm -Uvh http://www.elrepo.org/elrepo-release-7.0-3.el7.elrepo.noarch.rpm
yum --enablerepo=elrepo-kernel install -y kernel-lt
cat /boot/grub2/grub.cfg | grep menuentry
grub2-editenv list
yum update -y
grub2-set-default "CentOS Linux (4.4.236-1.el7.elrepo.x86_64) 7 (Core)" && reboot
uname -r

modprobe br_netfilter
cat > /etc/sysconfig/modules/ipvs.modules <<EOF
#!/bin/bash
modprobe -- ip_vs
modprobe -- ip_vs_rr
modprobe -- ip_vs_wrr
modprobe -- ip_vs_sh
modprobe -- nf_conntrack_ipv4
EOF
chmod 755 /etc/sysconfig/modules/ipvs.modules
bash /etc/sysconfig/modules/ipvs.modules
lsmod | grep -e ip_vs -e nf_conntrack_ipv4
chmod a+x /etc/rc.d/rc.local
echo 'bash /etc/sysconfig/modules/ipvs.modules' >> /etc/rc.local

cat > /etc/sysctl.d/kubernetes.conf <<EOF
# disable IPv6
net.ipv6.conf.all.disable_ipv6=1
# let bridged traffic pass through iptables
net.bridge.bridge-nf-call-iptables=1
net.bridge.bridge-nf-call-ip6tables=1
# enable IP forwarding
net.ipv4.ip_forward=1
net.ipv4.tcp_tw_recycle=0
# avoid swap; only allow it when the system is about to OOM
vm.swappiness=0
# do not check whether physical memory is sufficient
vm.overcommit_memory=1
# do not panic on OOM
vm.panic_on_oom=0
fs.inotify.max_user_watches=1048576
fs.inotify.max_user_instances=8192
# maximum number of open file handles
fs.file-max=52706963
fs.nr_open=52706963
net.netfilter.nf_conntrack_max=2310720
vm.dirty_bytes=15728640
EOF
sysctl -p /etc/sysctl.d/kubernetes.conf
```
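Whether the IPVS modules actually loaded is worth checking mechanically rather than by eyeballing `lsmod`. A small sketch; it reads `lsmod`-style output from stdin, so on a node it would be invoked as `lsmod | check_modules`:

```shell
# Sketch: verify that every module required for kube-proxy's IPVS mode
# appears in lsmod output supplied on stdin.
required="ip_vs ip_vs_rr ip_vs_wrr ip_vs_sh nf_conntrack_ipv4"

check_modules() {
  loaded="$(cat)"      # lsmod output
  missing=""
  for m in $required; do
    # column 1 of lsmod is the module name
    echo "$loaded" | awk '{print $1}' | grep -qx "$m" || missing="$missing $m"
  done
  if [ -n "$missing" ]; then
    echo "missing:$missing"
    return 1
  fi
  echo "all ipvs modules loaded"
}
```

A non-zero exit status with the list of missing modules makes this easy to wire into a provisioning check.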
3. Install Docker, kubeadm, and kubectl

```shell
cd /etc/yum.repos.d/
wget https://mirrors.aliyun.com/docker-ce/linux/centos/docker-ce.repo
cat <<EOF >> kubernetes.repo
[kubernetes]
name=Kubernetes
baseurl=https://mirrors.aliyun.com/kubernetes/yum/repos/kubernetes-el7-x86_64/
enabled=1
gpgcheck=0
EOF
yum clean all && yum makecache && cd

yum -y install docker-ce-18.09.6
mkdir -p /etc/docker
cat > /etc/docker/daemon.json <<EOF
{
  "registry-mirrors": ["https://p8hkkij9.mirror.aliyuncs.com"],
  "exec-opts": ["native.cgroupdriver=systemd"]
}
EOF
systemctl start docker
systemctl enable docker

yum -y install kubeadm-1.15.1 kubectl-1.15.1 kubelet-1.15.1
systemctl enable kubelet
```
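The `native.cgroupdriver=systemd` line matters: kubelet and Docker must agree on the cgroup driver, or the kubelet refuses to start. A quick sanity check, sketched as a function over a daemon.json path so it can be exercised without touching `/etc/docker`:

```shell
# Sketch: confirm a daemon.json pins Docker to the systemd cgroup
# driver, matching the driver kubeadm configures for the kubelet here.
cgroup_driver_ok() {
  grep -q '"native.cgroupdriver=systemd"' "$1"
}

# On a real node: cgroup_driver_ok /etc/docker/daemon.json && echo ok
```

Running this on every node before `kubeadm init`/`kubeadm join` catches the most common "kubelet keeps restarting" misconfiguration early.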
4. Initialize the master node

```shell
cat image-k8s-v1_15_1.sh
images=(
    kube-apiserver:v1.15.1
    kube-controller-manager:v1.15.1
    kube-scheduler:v1.15.1
    kube-proxy:v1.15.1
    pause:3.1
    etcd:3.3.10
    coredns:1.3.1
)
for imageName in ${images[@]} ; do
    docker pull registry.cn-hangzhou.aliyuncs.com/google_containers/$imageName
    docker tag registry.cn-hangzhou.aliyuncs.com/google_containers/$imageName k8s.gcr.io/$imageName
    docker rmi registry.cn-hangzhou.aliyuncs.com/google_containers/$imageName
done

bash image-k8s-v1_15_1.sh
docker images
REPOSITORY                           TAG       IMAGE ID       CREATED         SIZE
k8s.gcr.io/kube-apiserver            v1.15.1   68c3eb07bfc3   14 months ago   207MB
k8s.gcr.io/kube-controller-manager   v1.15.1   d75082f1d121   14 months ago   159MB
k8s.gcr.io/kube-scheduler            v1.15.1   b0b3c4c404da   14 months ago   81.1MB
k8s.gcr.io/kube-proxy                v1.15.1   89a062da739d   14 months ago   82.4MB
k8s.gcr.io/coredns                   1.3.1     eb516548c180   20 months ago   40.3MB
k8s.gcr.io/etcd                      3.3.10    2c4adeb21b4f   21 months ago   258MB
k8s.gcr.io/pause                     3.1       da86e6ba6ca1   2 years ago     742kB

[root@master1 ~]
[root@master1 ~]
apiVersion: kubeadm.k8s.io/v1beta2
bootstrapTokens:
- groups:
  - system:bootstrappers:kubeadm:default-node-token
  token: abcdef.0123456789abcdef
  ttl: 24h0m0s
  usages:
  - signing
  - authentication
kind: InitConfiguration
localAPIEndpoint:
  advertiseAddress: 192.168.2.1
  bindPort: 6443
nodeRegistration:
  criSocket: /var/run/dockershim.sock
  name: master1
  taints:
  - effect: NoSchedule
    key: node-role.kubernetes.io/master
---
apiServer:
  timeoutForControlPlane: 4m0s
apiVersion: kubeadm.k8s.io/v1beta2
certificatesDir: /etc/kubernetes/pki
clusterName: kubernetes
controllerManager: {}
dns:
  type: CoreDNS
etcd:
  local:
    dataDir: /var/lib/etcd
imageRepository: k8s.gcr.io
kind: ClusterConfiguration
kubernetesVersion: v1.15.1
networking:
  dnsDomain: cluster.local
  podSubnet: "10.244.0.0/16"
  serviceSubnet: 10.96.0.0/12
scheduler: {}
---
apiVersion: kubeproxy.config.k8s.io/v1alpha1
kind: KubeProxyConfiguration
featureGates:
  SupportIPVSProxyMode: true
mode: ipvs

[root@master1 ~]
```
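One easy mistake with this config: `podSubnet` must match the `Network` value in flannel's kube-flannel.yml (10.244.0.0/16 in flannel's default manifest), or pod routing silently breaks. A sketch that pulls the subnet out of a kubeadm config file so the two can be compared; the file name is just an example:

```shell
# Sketch: extract podSubnet from a kubeadm config file so it can be
# checked against the flannel network before running kubeadm init.
pod_subnet() {
  # strip surrounding quotes and whitespace from the value
  awk -F: '/podSubnet/ {gsub(/[" ]/, "", $2); print $2; exit}' "$1"
}

# Example check:
#   [ "$(pod_subnet kubeadm-config.yaml)" = "10.244.0.0/16" ] || echo "subnet mismatch"
```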
5. Join the master node and the worker nodes

```shell
[root@master1 ~]
Your Kubernetes control-plane has initialized successfully!

To start using your cluster, you need to run the following as a regular user:

  mkdir -p $HOME/.kube
  sudo cp -i /etc/kubernetes/admin.conf $HOME/.kube/config
  sudo chown $(id -u):$(id -g) $HOME/.kube/config

You should now deploy a pod network to the cluster.
Run "kubectl apply -f [podnetwork].yaml" with one of the options listed at:
  https://kubernetes.io/docs/concepts/cluster-administration/addons/

Then you can join any number of worker nodes by running the following on each as root:

kubeadm join 192.168.2.1:6443 --token abcdef.0123456789abcdef \
    --discovery-token-ca-cert-hash sha256:4dc6790b3232bc4fd7f80e614d38e21b15cfd3d318f09b596b68336375b381e4

systemctl status kubelet | grep running
   Active: active (running) since Sun 2020-09-13 01:53:32 CST; 6min ago

[root@master1 ~]
NAME      STATUS     ROLES    AGE     VERSION
master1   NotReady   master   11m     v1.15.1
node1     NotReady   <none>   2m13s   v1.15.1
node2     NotReady   <none>   2m10s   v1.15.1
```
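The join token above expires after `ttl: 24h0m0s`, so a node added later needs a fresh token (`kubeadm token create --print-join-command` produces one in a single step). The discovery hash, however, never has to be copied from the init output: it can be recomputed from the CA certificate. A sketch of that recipe (on the master, the input would be `/etc/kubernetes/pki/ca.crt`):

```shell
# Sketch: recompute the --discovery-token-ca-cert-hash value from a CA
# certificate: SHA-256 over the DER-encoded public key.
ca_cert_hash() {
  openssl x509 -pubkey -noout -in "$1" \
    | openssl pkey -pubin -outform der 2>/dev/null \
    | openssl dgst -sha256 -hex \
    | sed 's/^.* //'        # keep only the hex digest
}

# Usage: kubeadm join <endpoint> --token <token> \
#          --discovery-token-ca-cert-hash sha256:$(ca_cert_hash /etc/kubernetes/pki/ca.crt)
```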
All nodes report NotReady because no pod network exists yet; deploying flannel next provides the flat container network.
6. Deploy the flannel CNI network

```shell
kubectl apply -f https://raw.githubusercontent.com/coreos/flannel/master/Documentation/kube-flannel.yml

[root@master1 ~]
NAME      STATUS   ROLES    AGE     VERSION
master1   Ready    master   18m     v1.15.1
node1     Ready    <none>   8m38s   v1.15.1
node2     Ready    <none>   8m35s   v1.15.1
```
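Rather than re-running `kubectl get nodes` by hand until flannel settles, the readiness check can be automated. A sketch: the function parses `kubectl get nodes` output passed on stdin, so it can be tested without a cluster; on the master it would be used as `kubectl get nodes | not_ready_count`:

```shell
# Sketch: count the nodes whose STATUS column is not "Ready" in
# `kubectl get nodes` output read from stdin (the header row is skipped).
not_ready_count() {
  awk 'NR > 1 && $2 != "Ready" {n++} END {print n+0}'
}

# Example polling loop (on a real cluster):
#   while [ "$(kubectl get nodes | not_ready_count)" -ne 0 ]; do sleep 5; done
```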
7. Check cluster status and the related containers

```shell
[root@master1 ~]
NAME      STATUS   ROLES    AGE   VERSION   INTERNAL-IP   EXTERNAL-IP   OS-IMAGE                KERNEL-VERSION                CONTAINER-RUNTIME
master1   Ready    master   20m   v1.15.1   192.168.2.1   <none>        CentOS Linux 7 (Core)   4.4.236-1.el7.elrepo.x86_64   docker://18.9.6
node1     Ready    <none>   10m   v1.15.1   192.168.2.2   <none>        CentOS Linux 7 (Core)   4.4.236-1.el7.elrepo.x86_64   docker://18.9.6
node2     Ready    <none>   10m   v1.15.1   192.168.2.3   <none>        CentOS Linux 7 (Core)   4.4.236-1.el7.elrepo.x86_64   docker://18.9.6

[root@master1 ~]
NAMESPACE     NAME                              READY   STATUS    RESTARTS   AGE    IP            NODE      NOMINATED NODE   READINESS GATES
kube-system   coredns-5c98db65d4-57czz          1/1     Running   4          136d   10.244.0.9    master1   <none>           <none>
kube-system   coredns-5c98db65d4-s9fsm          1/1     Running   4          136d   10.244.0.8    master1   <none>           <none>
kube-system   etcd-master1                      1/1     Running   3          136d   192.168.2.1   master1   <none>           <none>
kube-system   kube-apiserver-master1            1/1     Running   3          136d   192.168.2.1   master1   <none>           <none>
kube-system   kube-controller-manager-master1   1/1     Running   3          136d   192.168.2.1   master1   <none>           <none>
kube-system   kube-flannel-ds-amd64-6qw52       1/1     Running   20         136d   192.168.2.3   node2     <none>           <none>
kube-system   kube-flannel-ds-amd64-hj7wr       1/1     Running   1          136d   192.168.2.1   master1   <none>           <none>
kube-system   kube-flannel-ds-amd64-wc96r       1/1     Running   20         136d   192.168.2.2   node1     <none>           <none>
kube-system   kube-proxy-5xcpl                  1/1     Running   5          136d   192.168.2.3   node2     <none>           <none>
kube-system   kube-proxy-lwjps                  1/1     Running   4          136d   192.168.2.1   master1   <none>           <none>
kube-system   kube-proxy-r77pb                  1/1     Running   4          136d   192.168.2.2   node1     <none>           <none>
kube-system   kube-scheduler-master1            1/1     Running   3          136d   192.168.2.1   master1   <none>           <none>
```
8. Set up and configure the Harbor private registry
Installing Harbor requires Docker and docker-compose. The system initialization, kernel upgrade and tuning, and Docker installation steps described above apply to the harbor host as well and are not repeated here.
```shell
cat /etc/docker/daemon.json
{
  "registry-mirrors": ["https://p8hkkij9.mirror.aliyuncs.com"],
  "exec-opts": ["native.cgroupdriver=systemd"],
  "insecure-registries": ["https://hub.lemon.com"]
}
[root@harbor ~]
[root@harbor ~]
[root@harbor ~]
[root@harbor ~]
docker-compose version 1.23.2, build 1110ad01
[root@harbor ~]
/usr/bin/openssl
[root@harbor ~]
[root@harbor ssl]
```
```shell
[root@harbor ssl]
Signature ok
subject=/C=CN/ST=Beijing/L=Beijing/O=lemon/OU=hub/CN=hub.lemon.com
Getting CA Private Key
```
```shell
[root@harbor ssl]
[root@harbor ssl]
total 4.0K
-rw-r--r-- 1 root root 1.9K Sep 13 02:23 hub.lemon.com.crt
[root@harbor ssl]
[root@harbor ssl]
[root@harbor ssl]
[root@harbor ssl]
[root@harbor ssl]
```
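A self-signed CA plus a server certificate for hub.lemon.com can be produced with a sequence like the following. The server subject matches the `subject=` line shown above; the CA common name, key size, and 10-year validity are assumed values, and step 3 is what prints the "Signature ok ... Getting CA Private Key" output captured earlier:

```shell
# Sketch: create a private CA and sign a server certificate for
# hub.lemon.com. CA CN, key size, and validity are assumptions.
SERVER_SUBJ="/C=CN/ST=Beijing/L=Beijing/O=lemon/OU=hub/CN=hub.lemon.com"
CA_SUBJ="/C=CN/ST=Beijing/L=Beijing/O=lemon/OU=hub/CN=lemon-ca"

mkdir -p ssl
# 1. CA key and self-signed CA certificate
openssl genrsa -out ssl/ca.key 2048
openssl req -x509 -new -nodes -key ssl/ca.key -subj "$CA_SUBJ" -days 3650 -out ssl/ca.crt
# 2. server key and certificate signing request
openssl genrsa -out ssl/hub.lemon.com.key 2048
openssl req -new -key ssl/hub.lemon.com.key -subj "$SERVER_SUBJ" -out ssl/hub.lemon.com.csr
# 3. sign the CSR with the CA (prints "Signature ok ... Getting CA Private Key")
openssl x509 -req -in ssl/hub.lemon.com.csr -CA ssl/ca.crt -CAkey ssl/ca.key \
  -CAcreateserial -out ssl/hub.lemon.com.crt -days 3650
```

The resulting hub.lemon.com.crt and hub.lemon.com.key are the files Harbor's https section points at in harbor.yml.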
```shell
[root@harbor ~]
[root@harbor ~]
[root@harbor ~]
```
```shell
[root@harbor harbor]
[root@harbor harbor]
```
```shell
[root@harbor harbor]
[root@harbor harbor]
REPOSITORY         TAG      IMAGE ID       CREATED         SIZE
goharbor/prepare   v1.9.0   aa594772c1e8   12 months ago   147MB

# 1. stop Harbor
[root@harbor ~]
# 2. start Harbor
[root@harbor ~]
```
```shell
[root@harbor ~]
docker-compose -f /usr/local/harbor/docker-compose.yml up -d
END
[root@harbor ~]
[root@harbor harbor]
Username: admin
Password: Harbor12345
WARNING! Your password will be stored unencrypted in /root/.docker/config.json.
Configure a credential helper to remove this warning. See
https://docs.docker.com/engine/reference/commandline/login/

Login Succeeded
```

The Harbor web UI is then reachable at https://hub.lemon.com/.
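The `docker-compose ... up -d` line above is being appended via a heredoc so Harbor restarts after a reboot. A sketch of that autostart step; `RC_FILE` is a stand-in variable so the snippet can be tried safely, while on the harbor host it would be `/etc/rc.d/rc.local` (made executable, as in section 2):

```shell
# Sketch: append a Harbor autostart line to an rc.local-style file.
RC_FILE="${RC_FILE:-./rc.local.test}"

cat >> "$RC_FILE" <<'END'
# start Harbor after boot (assumes the Docker daemon is already running)
docker-compose -f /usr/local/harbor/docker-compose.yml up -d
END

chmod a+x "$RC_FILE"
cat "$RC_FILE"
```

A systemd unit with `After=docker.service` would be the more robust alternative, but the rc.local approach matches what this walkthrough does elsewhere.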
```shell
[root@master1 ~]
Username: admin
Password:
WARNING! Your password will be stored unencrypted in /root/.docker/config.json.
Configure a credential helper to remove this warning. See
https://docs.docker.com/engine/reference/commandline/login/

Login Succeeded
[root@master1 ~]
[root@master1 ~]
[root@master1 ~]
Untagged: httpd:latest
Untagged: httpd@sha256:0fce91cc167634ede639701b7dd1d8093f4ad2f2d9d0d5a8f4be2eaef8a570fb
```
```shell
[root@master1 ~]
The push refers to repository [hub.lemon.com/library/httpd]
f1fee4547086: Pushed
a6a46c0268b1: Pushed
951b1be5cf2d: Pushed
d37da03a9458: Pushed
07cab4339852: Pushed
v1: digest: sha256:8c9bc11ca46ffd0b6b8a00e30aa670abef6c0d5d308e318a5cb8cf9e23931649 size: 1367
```
```shell
[root@master1 ~]
Untagged: hub.lemon.com/library/httpd:v1
Untagged: hub.lemon.com/library/httpd@sha256:8c9bc11ca46ffd0b6b8a00e30aa670abef6c0d5d308e318a5cb8cf9e23931649
Deleted: sha256:6d82971d37d087be9917ab2015a4dc807569c736d3f2017c0821ddc4ed126617
Deleted: sha256:59a897aaa844713f078ea9234bd61b0f4885598a9ffb1267b4c59983813abb52
Deleted: sha256:6942605f2c5a8ba622491e369f2585daafe749a645835f5abb4fb9d11803664d
Deleted: sha256:0f44970c8ecb7e1107f45ff7d5a7f7f3799a9821dce5cd30c51f2f7641339665
Deleted: sha256:97635989e45ed57deef09cd09be52d008a073f2e1e045a1ba91956fbc2db2961
Deleted: sha256:07cab433985205f29909739f511777a810f4a9aff486355b71308bb654cdc868

[root@master1 ~]
kubectl run --generator=deployment/apps.v1 is DEPRECATED and will be removed in a future version. Use kubectl run --generator=run-pod/v1 or kubectl create instead.
deployment.apps/httpd-01 created

[root@master1 ~]
NAME       READY   UP-TO-DATE   AVAILABLE   AGE
httpd-01   1/1     1            1           21s

[root@master1 ~]
NAME                  DESIRED   CURRENT   READY   AGE
httpd-01-6c9fbcfb65   1         1         1       47s

[root@master1 ~]
NAME                        READY   STATUS    RESTARTS   AGE
httpd-01-6c9fbcfb65-jvmc8   1/1     Running   0          58s

[root@master1 ~]
NAME                        READY   STATUS    RESTARTS   AGE   IP           NODE    NOMINATED NODE   READINESS GATES
httpd-01-6c9fbcfb65-jvmc8   1/1     Running   0          69s   10.244.1.2   node1   <none>           <none>

[root@master1 ~]
bin  build  cgi-bin  conf  error  htdocs  icons  include  logs  modules
[root@master1 ~]
<html><body><h1>It works!</h1></body></html>

# clean up exited containers
docker rm -v $(docker ps -qa -f status=exited)
```
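The deprecation warning above is worth heeding: on 1.15 `kubectl run` still creates a Deployment, but later kubectl versions create a bare Pod instead. The forward-compatible equivalent is `kubectl create deployment` plus a scale step. A sketch; `create_deploy_cmds` is a hypothetical helper (not a kubectl feature) that only emits the replacement commands, with names mirroring the httpd-01 example:

```shell
# Sketch: emit the non-deprecated equivalent of a `kubectl run`
# that used --generator=deployment/apps.v1. Hypothetical helper.
create_deploy_cmds() {
  name="$1" image="$2" replicas="$3"
  echo "kubectl create deployment $name --image=$image"
  echo "kubectl scale deployment $name --replicas=$replicas"
}
```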
9. Basic K8S usage

```shell
[root@master1 ~]
NAME                        READY   STATUS    RESTARTS   AGE   IP           NODE    NOMINATED NODE   READINESS GATES
httpd-01-6c9fbcfb65-jvmc8   1/1     Running   1          12h   10.244.1.3   node1   <none>           <none>

kubectl -n default delete pod httpd-01-6c9fbcfb65-jvmc8
[root@master1 ~]
NAME                        READY   STATUS    RESTARTS   AGE   IP           NODE    NOMINATED NODE   READINESS GATES
httpd-01-6c9fbcfb65-9hlhv   1/1     Running   1          12h   10.244.1.3   node1   <none>           <none>

[root@master1 ~]
deployment.extensions/httpd-01 scaled
[root@master1 ~]
NAME       READY   UP-TO-DATE   AVAILABLE   AGE
httpd-01   3/3     3            3           12h
[root@master1 ~]
NAME                        READY   STATUS    RESTARTS   AGE
httpd-01-6c9fbcfb65-9hlhv   1/1     Running   1          12h
httpd-01-6c9fbcfb65-hjhjn   1/1     Running   0          51s
httpd-01-6c9fbcfb65-x52hh   1/1     Running   0          51s
[root@master1 ~]
NAME                        READY   STATUS    RESTARTS   AGE    IP           NODE    NOMINATED NODE   READINESS GATES
httpd-01-6c9fbcfb65-9hlhv   1/1     Running   1          12h    10.244.1.3   node1   <none>           <none>
httpd-01-6c9fbcfb65-hjhjn   1/1     Running   0          2m8s   10.244.2.3   node2   <none>           <none>
httpd-01-6c9fbcfb65-x52hh   1/1     Running   0          2m8s   10.244.2.2   node2   <none>           <none>

[root@master1 ~]
kubectl expose deployment nginx --port=80 --target-port=8000
[root@master1 ~]
NAME       READY   UP-TO-DATE   AVAILABLE   AGE
httpd-01   3/3     3            3           13h
[root@master1 ~]
service/httpd-01 exposed
[root@master1 ~]
NAME         TYPE        CLUSTER-IP      EXTERNAL-IP   PORT(S)   AGE   SELECTOR
httpd-01     ClusterIP   10.99.191.162   <none>        88/TCP    30s   run=httpd-01
kubernetes   ClusterIP   10.96.0.1       <none>        443/TCP   13h   <none>
[root@master1 ~]
NAME                        READY   STATUS    RESTARTS   AGE   IP           NODE    NOMINATED NODE   READINESS GATES
httpd-01-6c9fbcfb65-9hlhv   1/1     Running   1          13h   10.244.1.3   node1   <none>           <none>
httpd-01-6c9fbcfb65-hjhjn   1/1     Running   0          23m   10.244.2.3   node2   <none>           <none>
httpd-01-6c9fbcfb65-x52hh   1/1     Running   0          23m   10.244.2.2   node2   <none>           <none>

[root@master1 ~]
root@httpd-01-6c9fbcfb65-9hlhv:/usr/local/apache2
[root@master1 ~]
root@httpd-01-6c9fbcfb65-hjhjn:/usr/local/apache2
[root@master1 ~]
root@httpd-01-6c9fbcfb65-x52hh:/usr/local/apache2

[root@master1 ~]
node2-10.244.2.3
[root@master1 ~]
node2-10.244.2.2
[root@master1 ~]
node1-10.244.1.3
[root@master1 ~]
node2-10.244.2.3

[root@master1 ~]
IP Virtual Server version 1.2.1 (size=4096)
Prot LocalAddress:Port Scheduler Flags
  -> RemoteAddress:Port           Forward Weight ActiveConn InActConn
TCP  10.96.0.1:443 rr
  -> 192.168.2.1:6443             Masq    1      3          0
TCP  10.96.0.10:53 rr
  -> 10.244.0.6:53                Masq    1      0          0
  -> 10.244.0.7:53                Masq    1      0          0
TCP  10.96.0.10:9153 rr
  -> 10.244.0.6:9153              Masq    1      0          0
  -> 10.244.0.7:9153              Masq    1      0          0
TCP  10.99.191.162:88 rr
  -> 10.244.1.3:80                Masq    1      0          1
  -> 10.244.2.2:80                Masq    1      0          1
  -> 10.244.2.3:80                Masq    1      0          2
UDP  10.96.0.10:53 rr
  -> 10.244.0.6:53                Masq    1      0          0
  -> 10.244.0.7:53                Masq    1      0          0

[root@master1 ~]
apiVersion: v1
kind: Service
metadata:
  creationTimestamp: "2020-09-13T07:50:56Z"
  labels:
    run: httpd-01
  name: httpd-01
  namespace: default
  resourceVersion: "21567"
  selfLink: /api/v1/namespaces/default/services/httpd-01
  uid: 624c99c1-b27f-4990-aa5b-d381e0a37755
spec:
  clusterIP: 10.99.191.162
  ports:
  - port: 88
    protocol: TCP
    targetPort: 80
  selector:
    run: httpd-01
  sessionAffinity: None
  type: NodePort
status:
  loadBalancer: {}

[root@master1 ~]
NAME         TYPE        CLUSTER-IP      EXTERNAL-IP   PORT(S)        AGE   SELECTOR
httpd-01     NodePort    10.99.191.162   <none>        88:32552/TCP   28m   run=httpd-01
kubernetes   ClusterIP   10.96.0.1       <none>        443/TCP        14h   <none>

[root@master1 ~]
tcp6       0      0 :::32552                :::*                    LISTEN      1945/kube-proxy
[root@node1 ~]
tcp6       0      0 :::32552                :::*                    LISTEN      1568/kube-proxy
[root@node2 ~]
tcp6       0      0 :::32552                :::*                    LISTEN      1601/kube-proxy
```
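The `88:32552/TCP` in the PORT(S) column packs two numbers: 88 is the ClusterIP port and 32552 the NodePort that kube-proxy listens on on every node, as the netstat output confirms. A sketch that pulls the NodePort out of `kubectl get svc` output; it reads the output from stdin so it is testable offline, and on the cluster it would be used as `kubectl get svc | node_port httpd-01`:

```shell
# Sketch: extract the NodePort for a named NodePort-type service from
# `kubectl get svc` output supplied on stdin.
node_port() {
  # PORT(S) is column 5 and looks like 88:32552/TCP for a NodePort service
  awk -v svc="$1" '$1 == svc { split($5, p, "[:/]"); print p[2] }'
}

# The service is then reachable on any node, e.g.:
#   curl http://192.168.2.2:$(kubectl get svc | node_port httpd-01)/
```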
At this point the entire Kubernetes cluster has been built.