
Kubernetes Cluster Setup (Part 2)


Prepare four Linux virtual machines: three for the K8S cluster (three nodes is the minimum starting point) plus one for the image registry. The OS is CentOS 7.4, and each VM gets 2 CPUs and 2 GB of RAM (the minimum configuration K8S requires). Networking uses bridged adapters with static IPs.

Hostname  IP address           Role
Master    ens32:192.168.2.1    K8S master node / etcd node
Node1     ens32:192.168.2.2    K8S worker node
Node2     ens32:192.168.2.3    K8S worker node
harbor    ens32:192.168.2.4    Docker image registry node
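The static-IP configuration itself is not shown below; here is a minimal sketch for master1, assuming the ens32 interface name from the table. The gateway and DNS values are placeholders, not from the original environment:

# /etc/sysconfig/network-scripts/ifcfg-ens32 (example for master1; adjust IPADDR per host)
TYPE=Ethernet
BOOTPROTO=static
NAME=ens32
DEVICE=ens32
ONBOOT=yes
IPADDR=192.168.2.1
PREFIX=24
GATEWAY=192.168.2.254
DNS1=223.5.5.5

# Apply the change
systemctl restart network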

1. System environment initialization

# Set the hostname (run the matching command on its own host)
hostnamectl set-hostname master1
hostnamectl set-hostname node1
hostnamectl set-hostname node2
hostnamectl set-hostname harbor
logout

# Set the system time zone to China/Shanghai
timedatectl set-timezone Asia/Shanghai

# Keep the hardware clock (RTC) in UTC
timedatectl set-local-rtc 0

# Restart services that depend on the system time
systemctl restart rsyslog
systemctl restart crond


# Download the Aliyun yum repo
wget -O /etc/yum.repos.d/CentOS-Base.repo https://mirrors.aliyun.com/repo/Centos-7.repo


# Install the required dependencies
yum install -y epel-release conntrack ntpdate ntp ipvsadm ipset iptables-services iptables curl sysstat libseccomp wget unzip net-tools git yum-utils jq device-mapper-persistent-data lvm2


# Sync the time
ntpdate ntp1.aliyun.com


# Add hosts entries
cat <<EOF>> /etc/hosts
192.168.2.1 master1
192.168.2.2 node1
192.168.2.3 node2
192.168.2.4 hub.lemon.com
199.232.68.133 raw.githubusercontent.com
EOF
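# Optional sanity check (not in the original steps): confirm the new entries resolve
getent hosts master1 node1 node2 hub.lemon.com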

# Switch the firewall to iptables and flush it to empty rules
systemctl stop firewalld
systemctl disable firewalld
systemctl start iptables
systemctl enable iptables
iptables -F && service iptables save


# Disable swap and SELinux
swapoff -a
sed -i '/ swap / s/^\(.*\)$/#\1/g' /etc/fstab

setenforce 0
sed -i 's/^SELINUX=enforcing$/SELINUX=permissive/' /etc/selinux/config

# Stop services the system doesn't need
systemctl stop postfix && systemctl disable postfix


# Configure rsyslogd and systemd journald
# Directory for persistent log storage
mkdir /var/log/journal
mkdir /etc/systemd/journald.conf.d

cat > /etc/systemd/journald.conf.d/99-prophet.conf <<EOF
[Journal]
# Persist logs to disk
Storage=persistent

# Compress rotated logs
Compress=yes

SyncIntervalSec=5m
RateLimitInterval=30s
RateLimitBurst=1000

# Maximum disk usage: 10G
SystemMaxUse=10G

# Maximum size of a single log file: 200M
SystemMaxFileSize=200M

# Keep logs for 2 weeks
MaxRetentionSec=2week

# Don't forward logs to syslog
ForwardToSyslog=no
EOF

systemctl restart systemd-journald

2. Upgrade and tune the system kernel

The stock 3.10.x kernel in CentOS 7.x has bugs that make Docker and Kubernetes unstable. For example:
Recent Docker releases (1.13 and later) enable the kernel memory accounting feature that the 3.10 kernel only supports experimentally (and it cannot be turned off); under pressure, such as a node frequently starting and stopping containers, this causes cgroup memory leaks. There is also a network device reference count leak that produces errors like "kernel:unregister_netdevice: waiting for eth0 to become free. Usage count = 1".
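A quick way to check whether a node is already hitting the reference-count leak (a diagnostic sketch, not part of the original procedure):

# Look for the netdevice leak message in the kernel log
dmesg | grep -i unregister_netdevice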

# Solution: upgrade the kernel to 4.4.x or later

# Add the kernel repo (ELRepo)
rpm -Uvh http://www.elrepo.org/elrepo-release-7.0-3.el7.elrepo.noarch.rpm

# Install the latest long-term (kernel-lt) kernel
yum --enablerepo=elrepo-kernel install -y kernel-lt

# List the available kernels
cat /boot/grub2/grub.cfg |grep menuentry

# Show the current default boot entry
grub2-editenv list

# Update the rest of the system packages
yum update -y

# Set the new kernel as the default boot entry & reboot
grub2-set-default "CentOS Linux (4.4.236-1.el7.elrepo.x86_64) 7 (Core)" && reboot

# Verify the running kernel after the reboot
uname -r


# Prerequisites for enabling IPVS in kube-proxy
modprobe br_netfilter

cat > /etc/sysconfig/modules/ipvs.modules <<EOF
#!/bin/bash
modprobe -- ip_vs
modprobe -- ip_vs_rr
modprobe -- ip_vs_wrr
modprobe -- ip_vs_sh
modprobe -- nf_conntrack_ipv4
EOF

chmod 755 /etc/sysconfig/modules/ipvs.modules

bash /etc/sysconfig/modules/ipvs.modules

lsmod | grep -e ip_vs -e nf_conntrack_ipv4

# Load the modules at boot
chmod a+x /etc/rc.d/rc.local
echo 'bash /etc/sysconfig/modules/ipvs.modules' >> /etc/rc.local


# Tune kernel parameters
cat > /etc/sysctl.d/kubernetes.conf <<EOF
# Disable IPv6
net.ipv6.conf.all.disable_ipv6=1

# Make bridged traffic traverse iptables/ip6tables
net.bridge.bridge-nf-call-iptables=1
net.bridge.bridge-nf-call-ip6tables=1

# Enable IP forwarding
net.ipv4.ip_forward=1

net.ipv4.tcp_tw_recycle=0

# Avoid swap; it is only used when the system would otherwise OOM
vm.swappiness=0

# Don't check whether enough physical memory is available (always overcommit)
vm.overcommit_memory=1

# Don't panic on OOM (let the OOM killer handle it)
vm.panic_on_oom=0

fs.inotify.max_user_watches=1048576
fs.inotify.max_user_instances=8192

# Maximum number of open file handles (system-wide)
fs.file-max=52706963

# Maximum number of files a process can open
fs.nr_open=52706963

# conntrack table size and dirty-page writeback limit
net.netfilter.nf_conntrack_max=2310720
vm.dirty_bytes=15728640
EOF

# Apply now (files under /etc/sysctl.d/ are also loaded automatically at boot)
sysctl -p /etc/sysctl.d/kubernetes.conf
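# Optional check (not in the original steps): confirm the key values took effect
sysctl net.ipv4.ip_forward net.bridge.bridge-nf-call-iptables vm.swappiness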

3. Install Docker, kubeadm, and kubectl

# Download the Aliyun Docker CE, CentOS 7, and Kubernetes repos
cd /etc/yum.repos.d/

wget https://mirrors.aliyun.com/docker-ce/linux/centos/docker-ce.repo

cat <<EOF>> kubernetes.repo
[kubernetes]
name=Kubernetes
baseurl=https://mirrors.aliyun.com/kubernetes/yum/repos/kubernetes-el7-x86_64/
enabled=1
gpgcheck=0
EOF

yum clean all && yum makecache && cd


# Install Docker (remember to point Docker at a registry mirror)
yum -y install docker-ce-18.09.6

# Create the /etc/docker directory
mkdir -p /etc/docker

# Write the daemon configuration file
cat > /etc/docker/daemon.json <<EOF
{
"registry-mirrors": ["https://p8hkkij9.mirror.aliyuncs.com"],
"exec-opts": ["native.cgroupdriver=systemd"]
}
EOF

# Start Docker
systemctl start docker
systemctl enable docker


# Install kubeadm (master/worker setup)
# Normally the master gets kubeadm and kubectl, while the nodes only strictly need kubelet (plus kubeadm to run the join); installing all three everywhere does no harm.
yum -y install kubeadm-1.15.1 kubectl-1.15.1 kubelet-1.15.1

# Enable kubelet at boot
systemctl enable kubelet
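# Optional check (not in the original steps): make sure every node has matching versions
kubeadm version -o short
kubelet --version
kubectl version --client --short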

4. Initialize the master node

# Pre-pull the k8s images on every k8s node
cat image-k8s-v1_15_1.sh
#!/bin/bash
images=(
kube-apiserver:v1.15.1
kube-controller-manager:v1.15.1
kube-scheduler:v1.15.1
kube-proxy:v1.15.1
pause:3.1
etcd:3.3.10
coredns:1.3.1
)

for imageName in "${images[@]}"; do
    docker pull registry.cn-hangzhou.aliyuncs.com/google_containers/$imageName
    docker tag registry.cn-hangzhou.aliyuncs.com/google_containers/$imageName k8s.gcr.io/$imageName
    docker rmi registry.cn-hangzhou.aliyuncs.com/google_containers/$imageName
done

# Run the script
bash image-k8s-v1_15_1.sh

# Check that all the images are in place
docker images
REPOSITORY TAG IMAGE ID CREATED SIZE
k8s.gcr.io/kube-apiserver v1.15.1 68c3eb07bfc3 14 months ago 207MB
k8s.gcr.io/kube-controller-manager v1.15.1 d75082f1d121 14 months ago 159MB
k8s.gcr.io/kube-scheduler v1.15.1 b0b3c4c404da 14 months ago 81.1MB
k8s.gcr.io/kube-proxy v1.15.1 89a062da739d 14 months ago 82.4MB
k8s.gcr.io/coredns 1.3.1 eb516548c180 20 months ago 40.3MB
k8s.gcr.io/etcd 3.3.10 2c4adeb21b4f 21 months ago 258MB
k8s.gcr.io/pause 3.1 da86e6ba6ca1 2 years ago 742kB


# Dump kubeadm's default init configuration as a template, then edit it
[root@master1 ~]# kubeadm config print init-defaults > kubeadm-config.yaml

[root@master1 ~]# cat kubeadm-config.yaml
apiVersion: kubeadm.k8s.io/v1beta2
bootstrapTokens:
- groups:
  - system:bootstrappers:kubeadm:default-node-token
  token: abcdef.0123456789abcdef
  ttl: 24h0m0s
  usages:
  - signing
  - authentication
kind: InitConfiguration
localAPIEndpoint:
  # Fill in master1's IP address
  advertiseAddress: 192.168.2.1
  bindPort: 6443
nodeRegistration:
  criSocket: /var/run/dockershim.sock
  name: master1
  taints:
  - effect: NoSchedule
    key: node-role.kubernetes.io/master
---
apiServer:
  timeoutForControlPlane: 4m0s
apiVersion: kubeadm.k8s.io/v1beta2
certificatesDir: /etc/kubernetes/pki
clusterName: kubernetes
controllerManager: {}
dns:
  type: CoreDNS
etcd:
  local:
    dataDir: /var/lib/etcd
imageRepository: k8s.gcr.io
kind: ClusterConfiguration
# The k8s version to initialize with
kubernetesVersion: v1.15.1
networking:
  dnsDomain: cluster.local
  # flannel will provide the flat pod network later, and flannel's default
  # subnet is 10.244.0.0/16; setting the pod subnet to match avoids having
  # to change it afterwards
  podSubnet: "10.244.0.0/16"
  serviceSubnet: 10.96.0.0/12
scheduler: {}
# The section below switches kube-proxy from the default iptables mode to IPVS
---
apiVersion: kubeproxy.config.k8s.io/v1alpha1
kind: KubeProxyConfiguration
featureGates:
  SupportIPVSProxyMode: true
mode: ipvs


# Initialize the k8s cluster
# --experimental-upload-certs: lets master nodes that join later fetch the certificates
# automatically; note: in versions above v1.15 the flag is renamed to --upload-certs
[root@master1 ~]# kubeadm init --config=kubeadm-config.yaml --experimental-upload-certs | tee kubeadm-init.log

5. Join additional master nodes and the worker nodes

[root@master1 ~]# cat kubeadm-init.log
Your Kubernetes control-plane has initialized successfully!

To start using your cluster, you need to run the following as a regular user:

mkdir -p $HOME/.kube
sudo cp -i /etc/kubernetes/admin.conf $HOME/.kube/config
sudo chown $(id -u):$(id -g) $HOME/.kube/config

You should now deploy a pod network to the cluster.
Run "kubectl apply -f [podnetwork].yaml" with one of the options listed at:
https://kubernetes.io/docs/concepts/cluster-administration/addons/

Then you can join any number of worker nodes by running the following on each as root:

kubeadm join 192.168.2.1:6443 --token abcdef.0123456789abcdef \
--discovery-token-ca-cert-hash sha256:4dc6790b3232bc4fd7f80e614d38e21b15cfd3d318f09b596b68336375b381e4


# Check that the kubelet service is running
systemctl status kubelet | grep running
Active: active (running) since Sun 2020-09-13 01:53:32 CST; 6min ago


# You'll notice, however, that the nodes are NotReady
[root@master1 ~]# kubectl get node
NAME STATUS ROLES AGE VERSION
master1 NotReady master 11m v1.15.1
node1 NotReady <none> 2m13s v1.15.1
node2 NotReady <none> 2m10s v1.15.1
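# The bootstrap token above expires after 24h (the ttl set in kubeadm-config.yaml).
# To join more nodes later, print a fresh join command on the master:
kubeadm token create --print-join-command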

This is why flannel is deployed next: it provides the flat pod network.

6. Deploy the CNI - flannel network

# Deploy the flannel overlay network
kubectl apply -f https://raw.githubusercontent.com/coreos/flannel/master/Documentation/kube-flannel.yml
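# Optional: watch the flannel daemonset pods come up on every node
# (the app=flannel label comes from the upstream manifest):
kubectl -n kube-system get pods -l app=flannel -o wide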

# Check the node status again
[root@master1 ~]# kubectl get node
NAME STATUS ROLES AGE VERSION
master1 Ready master 18m v1.15.1
node1 Ready <none> 8m38s v1.15.1
node2 Ready <none> 8m35s v1.15.1

7. Check the cluster status and the related containers

[root@master1 ~]# kubectl get nodes -o wide
NAME STATUS ROLES AGE VERSION INTERNAL-IP EXTERNAL-IP OS-IMAGE KERNEL-VERSION CONTAINER-RUNTIME
master1 Ready master 20m v1.15.1 192.168.2.1 <none> CentOS Linux 7 (Core) 4.4.236-1.el7.elrepo.x86_64 docker://18.9.6
node1 Ready <none> 10m v1.15.1 192.168.2.2 <none> CentOS Linux 7 (Core) 4.4.236-1.el7.elrepo.x86_64 docker://18.9.6
node2 Ready <none> 10m v1.15.1 192.168.2.3 <none> CentOS Linux 7 (Core) 4.4.236-1.el7.elrepo.x86_64 docker://18.9.6

[root@master1 ~]# kubectl get pod --all-namespaces -o wide
NAMESPACE NAME READY STATUS RESTARTS AGE IP NODE NOMINATED NODE READINESS GATES
kube-system coredns-5c98db65d4-57czz 1/1 Running 4 136d 10.244.0.9 master1 <none> <none>
kube-system coredns-5c98db65d4-s9fsm 1/1 Running 4 136d 10.244.0.8 master1 <none> <none>
kube-system etcd-master1 1/1 Running 3 136d 192.168.2.1 master1 <none> <none>
kube-system kube-apiserver-master1 1/1 Running 3 136d 192.168.2.1 master1 <none> <none>
kube-system kube-controller-manager-master1 1/1 Running 3 136d 192.168.2.1 master1 <none> <none>
kube-system kube-flannel-ds-amd64-6qw52 1/1 Running 20 136d 192.168.2.3 node2 <none> <none>
kube-system kube-flannel-ds-amd64-hj7wr 1/1 Running 1 136d 192.168.2.1 master1 <none> <none>
kube-system kube-flannel-ds-amd64-wc96r 1/1 Running 20 136d 192.168.2.2 node1 <none> <none>
kube-system kube-proxy-5xcpl 1/1 Running 5 136d 192.168.2.3 node2 <none> <none>
kube-system kube-proxy-lwjps 1/1 Running 4 136d 192.168.2.1 master1 <none> <none>
kube-system kube-proxy-r77pb 1/1 Running 4 136d 192.168.2.2 node1 <none> <none>
kube-system kube-scheduler-master1 1/1 Running 3 136d 192.168.2.1 master1 <none> <none>

8. Set up and configure the Harbor private registry

Installing Harbor requires Docker and docker-compose first; the system initialization, kernel upgrade and tuning, and Docker installation steps above are not repeated here.

# On every docker node, add the trust setting below to daemon.json
cat /etc/docker/daemon.json
{
"registry-mirrors": ["https://p8hkkij9.mirror.aliyuncs.com"],
"exec-opts": ["native.cgroupdriver=systemd"],
"insecure-registries": ["https://hub.lemon.com"]
}
# Don't rush to restart Docker yet; wait until the certificate has been issued, then restart


# Install docker-compose on the Harbor node
[root@harbor ~]# curl -L https://github.com/docker/compose/releases/download/1.14.0/docker-compose-`uname -s`-`uname -m` -o /usr/local/bin/docker-compose
[root@harbor ~]# chmod a+x /usr/local/bin/docker-compose
[root@harbor ~]# docker-compose --version
docker-compose version 1.23.2, build 1110ad01


# Install the Harbor private hub: create a CA certificate
[root@harbor ~]# which openssl
/usr/bin/openssl
[root@harbor ~]# mkdir -p /data/ssl && cd /data/ssl/
[root@harbor ssl]# openssl req -newkey rsa:4096 -nodes -sha256 -keyout ca.key -x509 -days 365 -out ca.crt

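The interactive prompts ask for the certificate's subject fields. A non-interactive sketch passes them inline; the field values below are taken from the subject printed when the host certificate is signed later, so adjust them to your environment:

[root@harbor ssl]# openssl req -newkey rsa:4096 -nodes -sha256 -keyout ca.key -x509 -days 365 -out ca.crt -subj "/C=CN/ST=Beijing/L=Beijing/O=lemon/OU=hub/CN=hub.lemon.com"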

# Create the certificate signing request
[root@harbor ssl]# openssl req -newkey rsa:4096 -nodes -sha256 -keyout hub.lemon.com.key -out hub.lemon.com.csr

# Sign the registry host certificate with the CA
[root@harbor ssl]# openssl x509 -req -days 365 -in hub.lemon.com.csr -CA ca.crt -CAkey ca.key -CAcreateserial -out hub.lemon.com.crt
Signature ok
subject=/C=CN/ST=Beijing/L=Beijing/O=lemon/OU=hub/CN=hub.lemon.com
Getting CA Private Key
# View the generated certificates

# Trust the self-signed domain certificate
# Linux does not trust self-signed CA certificates by default, so the certificate must be added to the system trust store
# Add the self-signed certificate to the system
[root@harbor ssl]# cp hub.lemon.com.crt /etc/pki/ca-trust/source/anchors/
[root@harbor ssl]# ls -lh /etc/pki/ca-trust/source/anchors/
total 4.0K
-rw-r--r-- 1 root root 1.9K Sep 13 02:23 hub.lemon.com.crt

# Make the new CA trust take effect immediately
[root@harbor ssl]# update-ca-trust enable
[root@harbor ssl]# update-ca-trust extract

# If Docker is already running, it must be restarted. If you restart Docker after Harbor has been installed, Harbor may become unreachable; in that case, delete Harbor's containers and images and run the install again
[root@harbor ssl]# systemctl restart docker

# Create Harbor's certificate directory
[root@harbor ssl]# mkdir -p /usr/local/harbor/ssh

# Copy the domain certificate and key to Harbor's install path
[root@harbor ssl]# cp hub.lemon.com.crt hub.lemon.com.key /usr/local/harbor/ssh/

# Install and test: upload the Harbor installer tarball and extract it to the right path

[root@harbor ~]# tar xf harbor-offline-installer-v1.9.0.tgz
[root@harbor ~]# mv harbor/* /usr/local/harbor/
[root@harbor ~]# cd /usr/local/harbor/ && ls


# Back up the Harbor config & edit it
[root@harbor harbor]# cp harbor.yml harbor.yml.bak
[root@harbor harbor]# vim harbor.yml
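The edit itself isn't shown here; the fields that matter for this setup (a sketch of the relevant harbor.yml entries, using the hostname and the certificate paths prepared above) look roughly like:

# harbor.yml (sketch: only the relevant fields)
hostname: hub.lemon.com

https:
  port: 443
  certificate: /usr/local/harbor/ssh/hub.lemon.com.crt
  private_key: /usr/local/harbor/ssh/hub.lemon.com.key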

# Generate the config & install (network access required)
# Pull the images Harbor needs
[root@harbor harbor]# ./prepare
[root@harbor harbor]# docker images
REPOSITORY TAG IMAGE ID CREATED SIZE
goharbor/prepare v1.9.0 aa594772c1e8 12 months ago 147MB

# Start Harbor
[root@harbor harbor]# ./install.sh --with-notary --with-clair --with-chartmuseum

# Harbor's log files are stored under /var/log/harbor/
# Harbor is orchestrated with docker-compose, so if you need to change harbor.yml you can
# restart Harbor with docker-compose. Without config changes, restart with:
# docker-compose start | stop | restart
# 1. Stop Harbor
[root@harbor ~]# docker-compose -f /usr/local/harbor/docker-compose.yml down

# 2. Start Harbor
[root@harbor ~]# docker-compose -f /usr/local/harbor/docker-compose.yml up -d

# Run Harbor at boot
[root@harbor ~]# cat <<END>> /etc/rc.local
docker-compose -f /usr/local/harbor/docker-compose.yml up -d
END

[root@harbor ~]# chmod u+x /etc/rc.d/rc.local

# Log in to the Harbor registry
[root@harbor harbor]# docker login https://hub.lemon.com
Username: admin
Password: Harbor12345
WARNING! Your password will be stored unencrypted in /root/.docker/config.json.
Configure a credential helper to remove this warning. See
https://docs.docker.com/engine/reference/commandline/login/#credentials-store

Login Succeeded

# Test access from a client browser (the client needs the hosts entry)
https://hub.lemon.com/


# From any node, test that docker can use the Harbor registry
[root@master1 ~]# docker login https://hub.lemon.com
Username: admin
Password:
WARNING! Your password will be stored unencrypted in /root/.docker/config.json.
Configure a credential helper to remove this warning. See
https://docs.docker.com/engine/reference/commandline/login/#credentials-store

Login Succeeded

[root@master1 ~]# docker pull httpd
[root@master1 ~]# docker tag httpd:latest hub.lemon.com/library/httpd:v1    # tag the image
[root@master1 ~]# docker rmi httpd:latest
Untagged: httpd:latest
Untagged: httpd@sha256:0fce91cc167634ede639701b7dd1d8093f4ad2f2d9d0d5a8f4be2eaef8a570fb

[root@master1 ~]# docker push hub.lemon.com/library/httpd:v1    # push to the Harbor registry
The push refers to repository [hub.lemon.com/library/httpd]
f1fee4547086: Pushed
a6a46c0268b1: Pushed
951b1be5cf2d: Pushed
d37da03a9458: Pushed
07cab4339852: Pushed
v1: digest: sha256:8c9bc11ca46ffd0b6b8a00e30aa670abef6c0d5d308e318a5cb8cf9e23931649 size: 1367
# Check the result in the browser


# Have Kubernetes pull an image from the Harbor registry to create a pod
# Before that, delete the locally tagged image
[root@master1 ~]# docker rmi hub.lemon.com/library/httpd:v1
Untagged: hub.lemon.com/library/httpd:v1
Untagged: hub.lemon.com/library/httpd@sha256:8c9bc11ca46ffd0b6b8a00e30aa670abef6c0d5d308e318a5cb8cf9e23931649
Deleted: sha256:6d82971d37d087be9917ab2015a4dc807569c736d3f2017c0821ddc4ed126617
Deleted: sha256:59a897aaa844713f078ea9234bd61b0f4885598a9ffb1267b4c59983813abb52
Deleted: sha256:6942605f2c5a8ba622491e369f2585daafe749a645835f5abb4fb9d11803664d
Deleted: sha256:0f44970c8ecb7e1107f45ff7d5a7f7f3799a9821dce5cd30c51f2f7641339665
Deleted: sha256:97635989e45ed57deef09cd09be52d008a073f2e1e045a1ba91956fbc2db2961
Deleted: sha256:07cab433985205f29909739f511777a810f4a9aff486355b71308bb654cdc868

# Start a Pod from the image in the Harbor registry
[root@master1 ~]# kubectl run httpd-01 --image=hub.lemon.com/library/httpd:v1 --port=80 --replicas=1
kubectl run --generator=deployment/apps.v1 is DEPRECATED and will be removed in a future version. Use kubectl run --generator=run-pod/v1 or kubectl create instead.
deployment.apps/httpd-01 created

# List all deployments
[root@master1 ~]# kubectl get deployment
NAME READY UP-TO-DATE AVAILABLE AGE
httpd-01 1/1 1 1 21s

# List all ReplicaSets (rs)
[root@master1 ~]# kubectl get rs
NAME DESIRED CURRENT READY AGE
httpd-01-6c9fbcfb65 1 1 1 47s

# List all pods
[root@master1 ~]# kubectl get pod
NAME READY STATUS RESTARTS AGE
httpd-01-6c9fbcfb65-jvmc8 1/1 Running 0 58s
[root@master1 ~]# kubectl get pod -o wide
NAME READY STATUS RESTARTS AGE IP NODE NOMINATED NODE READINESS GATES
httpd-01-6c9fbcfb65-jvmc8 1/1 Running 0 69s 10.244.1.2 node1 <none> <none>

# Run a command in the container interactively
[root@master1 ~]# kubectl exec -it httpd-01-6c9fbcfb65-9hlhv ls
bin build cgi-bin conf error htdocs icons include logs modules

# Access this pod's IP
[root@master1 ~]# curl 10.244.1.2
<html><body><h1>It works!</h1></body></html>

# Remove all exited containers
docker rm -v $(docker ps -qa -f status=exited)

9. Basic usage of K8S

# View the pods' details
[root@master1 ~]# kubectl -n default get pod -o wide
NAME READY STATUS RESTARTS AGE IP NODE NOMINATED NODE READINESS GATES
httpd-01-6c9fbcfb65-jvmc8 1/1 Running 1 12h 10.244.1.3 node1 <none> <none>

# Test whether k8s reschedules this pod after it is deleted
kubectl -n default delete pod httpd-01-6c9fbcfb65-jvmc8

# Verify: as soon as k8s sees the actual replica count no longer matches the desired value, it immediately starts a new pod to satisfy it
[root@master1 ~]# kubectl -n default get pod -o wide
NAME READY STATUS RESTARTS AGE IP NODE NOMINATED NODE READINESS GATES
httpd-01-6c9fbcfb65-9hlhv 1/1 Running 1 12h 10.244.1.3 node1 <none> <none>


# In production, one pod replica is no longer enough; scale out
[root@master1 ~]# kubectl -n default scale --replicas=3 deployment/httpd-01
deployment.extensions/httpd-01 scaled

[root@master1 ~]# kubectl get deployment
NAME READY UP-TO-DATE AVAILABLE AGE
httpd-01 3/3 3 3 12h

[root@master1 ~]# kubectl get pod
NAME READY STATUS RESTARTS AGE
httpd-01-6c9fbcfb65-9hlhv 1/1 Running 1 12h
httpd-01-6c9fbcfb65-hjhjn 1/1 Running 0 51s
httpd-01-6c9fbcfb65-x52hh 1/1 Running 0 51s

[root@master1 ~]# kubectl get pod -o wide
NAME READY STATUS RESTARTS AGE IP NODE NOMINATED NODE READINESS GATES
httpd-01-6c9fbcfb65-9hlhv 1/1 Running 1 12h 10.244.1.3 node1 <none> <none>
httpd-01-6c9fbcfb65-hjhjn 1/1 Running 0 2m8s 10.244.2.3 node2 <none> <none>
httpd-01-6c9fbcfb65-x52hh 1/1 Running 0 2m8s 10.244.2.2 node2 <none> <none>
# As you can see, the scale-out succeeded. Yes, it's that simple.


# But this raises a new question: there are now three containers with the same port but different IPs; how does the outside world access the pods?
# Answer: with a Service (SVC)
[root@master1 ~]# kubectl expose --help|grep -A 1 'Create a service for an nginx'
# Create a service for an nginx deployment, which serves on port 80 and connects to the containers on port 8000.
kubectl expose deployment nginx --port=80 --target-port=8000

# Check the deployment name
[root@master1 ~]# kubectl get deployment
NAME READY UP-TO-DATE AVAILABLE AGE
httpd-01 3/3 3 3 13h

# Create the svc
[root@master1 ~]# kubectl expose deployment httpd-01 --port=88 --target-port=80
service/httpd-01 exposed

# Check the svc address
[root@master1 ~]# kubectl get svc -o wide
NAME TYPE CLUSTER-IP EXTERNAL-IP PORT(S) AGE SELECTOR
httpd-01 ClusterIP 10.99.191.162 <none> 88/TCP 30s run=httpd-01
kubernetes ClusterIP 10.96.0.1 <none> 443/TCP 13h <none>

# To make the load balancing easy to verify, change each container's web page before testing
[root@master1 ~]# kubectl get pod -o wide
NAME READY STATUS RESTARTS AGE IP NODE NOMINATED NODE READINESS GATES
httpd-01-6c9fbcfb65-9hlhv 1/1 Running 1 13h 10.244.1.3 node1 <none> <none>
httpd-01-6c9fbcfb65-hjhjn 1/1 Running 0 23m 10.244.2.3 node2 <none> <none>
httpd-01-6c9fbcfb65-x52hh 1/1 Running 0 23m 10.244.2.2 node2 <none> <none>

[root@master1 ~]# kubectl exec -it httpd-01-6c9fbcfb65-9hlhv bash
root@httpd-01-6c9fbcfb65-9hlhv:/usr/local/apache2# echo 'node1-10.244.1.3' > htdocs/index.html

[root@master1 ~]# kubectl exec -it httpd-01-6c9fbcfb65-hjhjn bash
root@httpd-01-6c9fbcfb65-hjhjn:/usr/local/apache2# echo 'node2-10.244.2.3' > htdocs/index.html

[root@master1 ~]# kubectl exec -it httpd-01-6c9fbcfb65-x52hh bash
root@httpd-01-6c9fbcfb65-x52hh:/usr/local/apache2# echo 'node2-10.244.2.2' > htdocs/index.html

# Access the SVC; requests are load-balanced across the pod replicas
[root@master1 ~]# curl 10.99.191.162:88
node2-10.244.2.3
[root@master1 ~]# curl 10.99.191.162:88
node2-10.244.2.2
[root@master1 ~]# curl 10.99.191.162:88
node1-10.244.1.3
[root@master1 ~]# curl 10.99.191.162:88
node2-10.244.2.3

# How it works: check the ipvsadm rules
[root@master1 ~]# ipvsadm -Ln
IP Virtual Server version 1.2.1 (size=4096)
Prot LocalAddress:Port Scheduler Flags
-> RemoteAddress:Port Forward Weight ActiveConn InActConn
TCP 10.96.0.1:443 rr
-> 192.168.2.1:6443 Masq 1 3 0
TCP 10.96.0.10:53 rr
-> 10.244.0.6:53 Masq 1 0 0
-> 10.244.0.7:53 Masq 1 0 0
TCP 10.96.0.10:9153 rr
-> 10.244.0.6:9153 Masq 1 0 0
-> 10.244.0.7:9153 Masq 1 0 0
# This is the SVC we just created; in practice it is just a forwarding rule
TCP 10.99.191.162:88 rr
-> 10.244.1.3:80 Masq 1 0 1
-> 10.244.2.2:80 Masq 1 0 1
-> 10.244.2.3:80 Masq 1 0 2
UDP 10.96.0.10:53 rr
-> 10.244.0.6:53 Masq 1 0 0
-> 10.244.0.7:53 Masq 1 0 0


# Everything so far is only reachable from inside the cluster. To expose the service externally, change the svc type from the default ClusterIP, which is cluster-internal and not exposed, to NodePort.
[root@master1 ~]# kubectl edit svc httpd-01
apiVersion: v1
kind: Service
metadata:
  creationTimestamp: "2020-09-13T07:50:56Z"
  labels:
    run: httpd-01
  name: httpd-01
  namespace: default
  resourceVersion: "21567"
  selfLink: /api/v1/namespaces/default/services/httpd-01
  uid: 624c99c1-b27f-4990-aa5b-d381e0a37755
spec:
  clusterIP: 10.99.191.162
  ports:
  - port: 88
    protocol: TCP
    targetPort: 80
  selector:
    run: httpd-01
  sessionAffinity: None
  type: NodePort
status:
  loadBalancer: {}
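# Equivalently, instead of the interactive edit, the type can be switched with a
# one-line patch (a standard kubectl subcommand; same end result):
[root@master1 ~]# kubectl patch svc httpd-01 -p '{"spec":{"type":"NodePort"}}'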

# Check this svc's type again
[root@master1 ~]# kubectl get svc -o wide
NAME TYPE CLUSTER-IP EXTERNAL-IP PORT(S) AGE SELECTOR
httpd-01 NodePort 10.99.191.162 <none> 88:32552/TCP 28m run=httpd-01
kubernetes ClusterIP 10.96.0.1 <none> 443/TCP 14h <none>

# Note that on top of port 88 it opened a random port, 32552, to the outside world, and every k8s node now listens on that port
[root@master1 ~]# netstat -antpu | grep 32552
tcp6 0 0 :::32552 :::* LISTEN 1945/kube-proxy

[root@node1 ~]# netstat -antpu | grep 32552
tcp6 0 0 :::32552 :::* LISTEN 1568/kube-proxy

[root@node2 ~]# netstat -antpu | grep 32552
tcp6 0 0 :::32552 :::* LISTEN 1601/kube-proxy

# Access from outside the cluster
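# e.g. from any machine that can reach the node IPs (a hypothetical client-side check):
curl http://192.168.2.1:32552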




With this, the Kubernetes cluster setup is complete.
