Binary Deployment of a Kubernetes Cluster: Reference Guide (v1.15.0)

I. Basic Concepts

1. Concepts

Kubernetes (commonly written "k8s") is an open-source container cluster management system from Google. Its design goal is to provide a platform for automated deployment, scaling, and operation of application containers across clusters of hosts.

Kubernetes typically works together with the Docker container engine, coordinating multiple host machines that run Docker containers.

2. Features

a. Automated container deployment
b. Automated scaling of container workloads up and down
c. Load balancing between containers
d. Fast updates and fast rollbacks

3. Component Overview

3.1 Master node components

The master node runs four main components: api-server, scheduler, controller-manager, and etcd. (In the lab environment used in this guide, the etcd members are actually placed on the node hosts; see the table in part II.)

APIServer: the APIServer exposes the RESTful Kubernetes API and is the unified entry point for administrative commands. Every create, read, update, or delete operation on a resource goes through the APIServer, which processes it and then persists it to etcd.

scheduler: the scheduler's job is narrowly defined: assign Pods to suitable Nodes. Treated as a black box, its input is a Pod plus a list of candidate Nodes, and its output is a binding of that Pod to one Node, i.e., the decision to run the Pod there. Kubernetes ships with a default scheduling algorithm, but it also exposes an interface so users can plug in scheduling algorithms of their own.

controller-manager: if the APIServer handles the "front office", the controller manager runs the "back office". Each resource type generally has a corresponding controller, and the controller manager is what manages these controllers. For example, when we create a Pod through the APIServer, the APIServer's task ends once the Pod has been created; the ongoing duty of keeping the Pod's actual state in line with our desired state falls to the controller manager.

etcd: etcd is a highly available key-value store. Kubernetes uses it to persist the state of every resource, which is what backs the RESTful API.

3.2 Node components

Each Node runs three main modules: kubelet, kube-proxy, and a container runtime.

runtime: the container runtime environment; at the time of this version, Kubernetes supports the Docker and rkt runtimes.

kube-proxy: this module implements service discovery and reverse proxying in Kubernetes. On the proxying side, kube-proxy forwards TCP and UDP connections, by default distributing client traffic across the group of backend Pods behind a Service using round robin. On the discovery side, kube-proxy uses etcd's watch mechanism to track changes to Service and Endpoint objects in the cluster and maintains a Service-to-Endpoint mapping, so changes to backend Pod IPs stay invisible to callers. kube-proxy also supports session affinity.

kubelet: the kubelet is the master's agent on each Node and the most important module there. It maintains and manages all containers on that Node, except for containers not created through Kubernetes. In essence, it is responsible for driving each Pod's actual running state toward the desired state.

3.3 Pod

A Pod is the smallest unit Kubernetes schedules. Each Pod runs one or more closely related application containers, and these containers share the IP and volumes of a special pause container. This hard-to-kill pause container serves as the Pod's root container, and its status represents the status of the whole container group. Once created, a Pod is stored in etcd, then scheduled by the master onto a Node and instantiated by that Node's kubelet.

Each Pod is assigned its own Pod IP, and Pod IP + ContainerPort together form an Endpoint.
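
To make these ideas concrete, here is a minimal, hypothetical Pod manifest; every name and image in it is illustrative and not part of the deployment below. The two containers share the Pod's network namespace (and therefore the Pod IP) as well as a volume:

apiVersion: v1
kind: Pod
metadata:
  name: demo-pod                  # illustrative name
spec:
  volumes:
  - name: shared-data             # volume shared by both containers
    emptyDir: {}
  containers:
  - name: web
    image: nginx:1.17             # placeholder image
    ports:
    - containerPort: 80           # Pod IP + this port = one Endpoint
    volumeMounts:
    - name: shared-data
      mountPath: /usr/share/nginx/html
  - name: sidecar
    image: busybox:1.28           # placeholder image
    command: ["sh", "-c", "while true; do date > /pod-data/index.html; sleep 5; done"]
    volumeMounts:
    - name: shared-data
      mountPath: /pod-data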

3.4 Service

A Service exposes an application. Pods have life cycles and their own IP addresses; as Pods are created and destroyed, something has to keep clients aware of those changes. That is the role of the Service: a Service, defined in YAML or JSON, is a logical grouping of Pods selected by some policy. Just as importantly, the Service is what exposes the Pods' individual IPs to the network.
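
As a matching sketch (again with illustrative names), the Service below groups all Pods labeled app: web behind one stable virtual IP, regardless of how the individual Pod IPs change:

apiVersion: v1
kind: Service
metadata:
  name: web-svc                   # illustrative name
spec:
  selector:
    app: web                      # groups Pods carrying this label
  ports:
  - protocol: TCP
    port: 80                      # port exposed by the Service
    targetPort: 80                # containerPort on the backing Pods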

II. Installation and Deployment

There are several ways to deploy Kubernetes; this article uses the binary method.

1. Environment

Hostname     IP               Installed packages                                         OS version
k8s-master   192.168.248.65   kube-apiserver, kube-controller-manager, kube-scheduler    Red Hat Enterprise Linux Server release 7.3
k8s-node1    192.168.248.66   etcd, kubelet, kube-proxy, flannel, docker                 Red Hat Enterprise Linux Server release 7.3
k8s-node2    192.168.248.67   etcd, kubelet, kube-proxy, flannel, docker                 Red Hat Enterprise Linux Server release 7.3
k8s-node3    192.168.248.68   etcd, kubelet, kube-proxy, flannel, docker                 Red Hat Enterprise Linux Server release 7.3

Software versions and download links

Versions:

kubernetes v1.15.0
etcd v3.3.10
flannel v0.11.0

Download links:

Kubernetes changelog: https://github.com/kubernetes/kubernetes/blob/master/CHANGELOG-1.15.md#v1150

Server binaries: https://dl.k8s.io/v1.15.0/kubernetes-server-linux-amd64.tar.gz

Node binaries: https://dl.k8s.io/v1.15.0/kubernetes-node-linux-amd64.tar.gz

etcd releases: https://github.com/etcd-io/etcd/releases

flannel releases: https://github.com/coreos/flannel/releases

2. Server initialization

Synchronize system time

# ntpdate time1.aliyun.com
# echo "*/5 * * * * /usr/sbin/ntpdate -s time1.aliyun.com" > /var/spool/cron/root

Set the hostname (run the matching command on the corresponding host)

# hostnamectl --static set-hostname k8s-master
# hostnamectl --static set-hostname k8s-node1
# hostnamectl --static set-hostname k8s-node2
# hostnamectl --static set-hostname k8s-node3

Add hosts entries

[root@k8s-master ~]# cat /etc/hosts
192.168.248.65 k8s-master
192.168.248.66 k8s-node1
192.168.248.67 k8s-node2
192.168.248.68 k8s-node3

Stop and disable firewalld and SELinux

# systemctl stop firewalld
# systemctl disable firewalld
# setenforce 0
# vim /etc/sysconfig/selinux
SELINUX=disabled

Disable swap

# swapoff -a && sysctl -w vm.swappiness=0
# sed -i '/ swap / s/^\(.*\)$/#\1/g' /etc/fstab

Set kernel parameters

# cat /etc/sysctl.d/kubernetes.conf
net.ipv4.ip_forward=1
net.ipv4.tcp_tw_recycle=0
vm.swappiness=0
vm.overcommit_memory=1
vm.panic_on_oom=0
fs.inotify.max_user_watches=89100
fs.file-max=52706963
fs.nr_open=52706963
net.ipv6.conf.all.disable_ipv6=1
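
The file above only declares the parameters; load them into the running kernel without a reboot:

# sysctl -p /etc/sysctl.d/kubernetes.conf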

3. Kubernetes cluster installation

Install docker-ce on all node hosts

# wget -P /etc/yum.repos.d/ https://mirrors.aliyun.com/docker-ce/linux/centos/docker-ce.repo
# yum makecache
# yum install docker-ce-18.06.2.ce-3.el7 -y
# systemctl start docker && systemctl enable docker

Create installation directories

# mkdir /data/{install,ssl_config} -pv
# mkdir /data/ssl_config/{etcd,kubernetes} -pv
# mkdir /cloud/k8s/etcd/{bin,cfg,ssl} -pv
# mkdir /cloud/k8s/kubernetes/{bin,cfg,ssl} -pv

Add environment variables

vim /etc/profile
######Kubernetes########
export PATH=$PATH:/cloud/k8s/etcd/bin/:/cloud/k8s/kubernetes/bin/
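
Reload the profile so the extended PATH takes effect in the current shell:

# source /etc/profile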

4. Create SSL certificates

Download the certificate tooling

[root@k8s-master ~]# wget -P /usr/local/bin/ https://pkg.cfssl.org/R1.2/cfssl_linux-amd64
[root@k8s-master ~]# wget -P /usr/local/bin/ https://pkg.cfssl.org/R1.2/cfssljson_linux-amd64
[root@k8s-master ~]# wget -P /usr/local/bin/ https://pkg.cfssl.org/R1.2/cfssl-certinfo_linux-amd64
[root@k8s-master ~]# mv /usr/local/bin/cfssl_linux-amd64 /usr/local/bin/cfssl
[root@k8s-master ~]# mv /usr/local/bin/cfssljson_linux-amd64 /usr/local/bin/cfssljson
[root@k8s-master ~]# mv /usr/local/bin/cfssl-certinfo_linux-amd64 /usr/local/bin/cfssl-certinfo
[root@k8s-master ~]# chmod +x /usr/local/bin/*

Create the etcd certificates

# etcd CA signing config
[root@k8s-master etcd]# pwd
/data/ssl_config/etcd
[root@k8s-master etcd]# cat ca-config.json
{
  "signing": {
    "default": {
      "expiry": "87600h"
    },
    "profiles": {
      "www": {
        "expiry": "87600h",
        "usages": [
          "signing",
          "key encipherment",
          "server auth",
          "client auth"
        ]
      }
    }
  }
}
# etcd CA CSR
[root@k8s-master etcd]# cat ca-csr.json
{
    "CN": "etcd CA",
    "key": {
        "algo": "rsa",
        "size": 2048
    },
    "names": [
        {
            "C": "CN",
            "L": "Beijing",
            "ST": "Beijing"
        }
    ]
}
# etcd server certificate CSR
[root@k8s-master etcd]# cat server-csr.json
{
    "CN": "etcd",
    "hosts": [
    "k8s-node3",
    "k8s-node2",
    "k8s-node1",
    "192.168.248.66",
    "192.168.248.67",
    "192.168.248.68"
    ],
    "key": {
        "algo": "rsa",
        "size": 2048
    },
    "names": [
        {
            "C": "CN",
            "L": "Beijing",
            "ST": "Beijing"
        }
    ]
}
# Generate the etcd CA certificate and key
# cfssl gencert -initca ca-csr.json | cfssljson -bare ca -
# Generate the etcd server certificate and key
# cfssl gencert -ca=ca.pem -ca-key=ca-key.pem -config=ca-config.json -profile=www server-csr.json | cfssljson -bare server
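
The cfssl-certinfo tool downloaded earlier can be used to sanity-check the result, for example to confirm that all three node names and IPs made it into the server certificate:

[root@k8s-master etcd]# cfssl-certinfo -cert server.pem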

Create the Kubernetes certificates

# Kubernetes CA signing config
[root@k8s-master kubernetes]# pwd
/data/ssl_config/kubernetes
[root@k8s-master kubernetes]# cat ca-config.json
{
  "signing": {
    "default": {
      "expiry": "87600h"
    },
    "profiles": {
      "kubernetes": {
        "expiry": "87600h",
        "usages": [
          "signing",
          "key encipherment",
          "server auth",
          "client auth"
        ]
      }
    }
  }
}
# Kubernetes CA CSR
[root@k8s-master kubernetes]# cat ca-csr.json
{
    "CN": "kubernetes",
    "key": {
        "algo": "rsa",
        "size": 2048
    },
    "names": [
        {
            "C": "CN",
            "L": "Beijing",
            "ST": "Beijing",
            "O": "k8s",
            "OU": "System"
        }
    ]
}
# API server certificate CSR
[root@k8s-master kubernetes]# cat server-csr.json
{
    "CN": "kubernetes",
    "hosts": [
      "10.0.0.1",
      "127.0.0.1",
      "192.168.248.65",
      "k8s-master",
      "kubernetes",
      "kubernetes.default",
      "kubernetes.default.svc",
      "kubernetes.default.svc.cluster",
      "kubernetes.default.svc.cluster.local"
    ],
    "key": {
        "algo": "rsa",
        "size": 2048
    },
    "names": [
        {
            "C": "CN",
            "L": "Beijing",
            "ST": "Beijing",
            "O": "k8s",
            "OU": "System"
        }
    ]
}
# kube-proxy certificate CSR
[root@k8s-master kubernetes]# cat kube-proxy-csr.json
{
  "CN": "system:kube-proxy",
  "hosts": [],
  "key": {
    "algo": "rsa",
    "size": 2048
  },
  "names": [
    {
      "C": "CN",
      "L": "Beijing",
      "ST": "Beijing",
      "O": "k8s",
      "OU": "System"
    }
  ]
}
# Generate the CA certificate
# cfssl gencert -initca ca-csr.json | cfssljson -bare ca -
# Generate the api-server certificate
# cfssl gencert -ca=ca.pem -ca-key=ca-key.pem -config=ca-config.json -profile=kubernetes server-csr.json | cfssljson -bare server
# Generate the kube-proxy certificate
# cfssl gencert -ca=ca.pem -ca-key=ca-key.pem -config=ca-config.json -profile=kubernetes kube-proxy-csr.json | cfssljson -bare kube-proxy
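
As an optional check (plain openssl, not part of the original steps), confirm that the API server certificate carries the expected Subject Alternative Names:

[root@k8s-master kubernetes]# openssl x509 -in server.pem -noout -text | grep -A1 "Subject Alternative Name"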

5. Deploy the etcd cluster (on all node hosts)

Unpack and install the etcd binaries

# tar -xvf etcd-v3.3.10-linux-amd64.tar.gz
# cp etcd-v3.3.10-linux-amd64/{etcd,etcdctl} /cloud/k8s/etcd/bin/

Write the etcd configuration file on each node. Note that ETCD_NAME must be unique per member and match its entry in ETCD_INITIAL_CLUSTER (etcd01/etcd02/etcd03 below).

[root@k8s-node1 ~]# cat /cloud/k8s/etcd/cfg/etcd
#[Member]
ETCD_NAME="etcd01"
ETCD_DATA_DIR="/var/lib/etcd/default.etcd"
ETCD_LISTEN_PEER_URLS="https://192.168.248.66:2380"
ETCD_LISTEN_CLIENT_URLS="https://192.168.248.66:2379"
#[Clustering]
ETCD_INITIAL_ADVERTISE_PEER_URLS="https://192.168.248.66:2380"
ETCD_ADVERTISE_CLIENT_URLS="https://192.168.248.66:2379"
ETCD_INITIAL_CLUSTER="etcd01=https://192.168.248.66:2380,etcd02=https://192.168.248.67:2380,etcd03=https://192.168.248.68:2380"
ETCD_INITIAL_CLUSTER_TOKEN="etcd-cluster"
ETCD_INITIAL_CLUSTER_STATE="new"

[root@k8s-node2 ~]# cat /cloud/k8s/etcd/cfg/etcd
#[Member]
ETCD_NAME="etcd02"
ETCD_DATA_DIR="/var/lib/etcd/default.etcd"
ETCD_LISTEN_PEER_URLS="https://192.168.248.67:2380"
ETCD_LISTEN_CLIENT_URLS="https://192.168.248.67:2379"
#[Clustering]
ETCD_INITIAL_ADVERTISE_PEER_URLS="https://192.168.248.67:2380"
ETCD_ADVERTISE_CLIENT_URLS="https://192.168.248.67:2379"
ETCD_INITIAL_CLUSTER="etcd01=https://192.168.248.66:2380,etcd02=https://192.168.248.67:2380,etcd03=https://192.168.248.68:2380"
ETCD_INITIAL_CLUSTER_TOKEN="etcd-cluster"
ETCD_INITIAL_CLUSTER_STATE="new"

[root@k8s-node3 ~]# cat /cloud/k8s/etcd/cfg/etcd
#[Member]
ETCD_NAME="etcd03"
ETCD_DATA_DIR="/var/lib/etcd/default.etcd"
ETCD_LISTEN_PEER_URLS="https://192.168.248.68:2380"
ETCD_LISTEN_CLIENT_URLS="https://192.168.248.68:2379"
#[Clustering]
ETCD_INITIAL_ADVERTISE_PEER_URLS="https://192.168.248.68:2380"
ETCD_ADVERTISE_CLIENT_URLS="https://192.168.248.68:2379"
ETCD_INITIAL_CLUSTER="etcd01=https://192.168.248.66:2380,etcd02=https://192.168.248.67:2380,etcd03=https://192.168.248.68:2380"
ETCD_INITIAL_CLUSTER_TOKEN="etcd-cluster"
ETCD_INITIAL_CLUSTER_STATE="new"

Create the etcd systemd unit

[root@k8s-node1 ~]# cat /usr/lib/systemd/system/etcd.service
[Unit]
Description=Etcd Server
After=network.target
After=network-online.target
Wants=network-online.target

[Service]
Type=notify
EnvironmentFile=/cloud/k8s/etcd/cfg/etcd
ExecStart=/cloud/k8s/etcd/bin/etcd \
--name=${ETCD_NAME} \
--data-dir=${ETCD_DATA_DIR} \
--listen-peer-urls=${ETCD_LISTEN_PEER_URLS} \
--listen-client-urls=${ETCD_LISTEN_CLIENT_URLS},http://127.0.0.1:2379 \
--advertise-client-urls=${ETCD_ADVERTISE_CLIENT_URLS} \
--initial-advertise-peer-urls=${ETCD_INITIAL_ADVERTISE_PEER_URLS} \
--initial-cluster=${ETCD_INITIAL_CLUSTER} \
--initial-cluster-token=${ETCD_INITIAL_CLUSTER_TOKEN} \
--initial-cluster-state=new \
--cert-file=/cloud/k8s/etcd/ssl/server.pem \
--key-file=/cloud/k8s/etcd/ssl/server-key.pem \
--peer-cert-file=/cloud/k8s/etcd/ssl/server.pem \
--peer-key-file=/cloud/k8s/etcd/ssl/server-key.pem \
--trusted-ca-file=/cloud/k8s/etcd/ssl/ca.pem \
--peer-trusted-ca-file=/cloud/k8s/etcd/ssl/ca.pem
Restart=on-failure
LimitNOFILE=65536

[Install]
WantedBy=multi-user.target

Copy the generated etcd certificates to all node hosts

[root@k8s-master etcd]# pwd
/data/ssl_config/etcd
[root@k8s-master etcd]# scp *.pem k8s-node1:/cloud/k8s/etcd/ssl/
[root@k8s-master etcd]# scp *.pem k8s-node2:/cloud/k8s/etcd/ssl/
[root@k8s-master etcd]# scp *.pem k8s-node3:/cloud/k8s/etcd/ssl/

Start the etcd cluster

systemctl daemon-reload
systemctl enable etcd
systemctl start etcd

Check the cluster status (run on any one node)

[root@k8s-node1 ssl]# etcdctl --ca-file=/cloud/k8s/etcd/ssl/ca.pem --cert-file=/cloud/k8s/etcd/ssl/server.pem --key-file=/cloud/k8s/etcd/ssl/server-key.pem --endpoints="https://192.168.248.66:2379,https://192.168.248.67:2379,https://192.168.248.68:2379" cluster-health
member 2830381866015ef6 is healthy: got healthy result from https://192.168.248.67:2379
member 355a96308320dc2a is healthy: got healthy result from https://192.168.248.66:2379
member a9a44d5d05a31ce0 is healthy: got healthy result from https://192.168.248.68:2379
cluster is healthy
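
The health check above uses etcdctl's v2 API, which flannel also depends on. etcd 3.3 ships the v3 API as well; an equivalent probe with the v3 flag names looks like this:

# ETCDCTL_API=3 etcdctl \
--cacert=/cloud/k8s/etcd/ssl/ca.pem \
--cert=/cloud/k8s/etcd/ssl/server.pem \
--key=/cloud/k8s/etcd/ssl/server-key.pem \
--endpoints="https://192.168.248.66:2379,https://192.168.248.67:2379,https://192.168.248.68:2379" \
endpoint health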

6. Deploy the flannel network (all node hosts)

Write the Pod network segment into etcd (run on any one node)

etcdctl --ca-file=/cloud/k8s/etcd/ssl/ca.pem \
--cert-file=/cloud/k8s/etcd/ssl/server.pem \
--key-file=/cloud/k8s/etcd/ssl/server-key.pem \
--endpoints="https://192.168.248.66:2379,https://192.168.248.67:2379,https://192.168.248.68:2379" \
set /coreos.com/network/config '{ "Network": "172.18.0.0/16", "Backend": {"Type": "vxlan"}}'

View the network configuration written to etcd

# etcdctl --ca-file=/cloud/k8s/etcd/ssl/ca.pem \
--cert-file=/cloud/k8s/etcd/ssl/server.pem \
--key-file=/cloud/k8s/etcd/ssl/server-key.pem \
--endpoints="https://192.168.248.66:2379,https://192.168.248.67:2379,https://192.168.248.68:2379" \
get /coreos.com/network/config

[root@k8s-node1 ssl]# etcdctl --ca-file=/cloud/k8s/etcd/ssl/ca.pem \
> --cert-file=/cloud/k8s/etcd/ssl/server.pem \
> --key-file=/cloud/k8s/etcd/ssl/server-key.pem \
> --endpoints="https://192.168.248.66:2379,https://192.168.248.67:2379,https://192.168.248.68:2379" \
> ls /coreos.com/network/subnets
/coreos.com/network/subnets/172.18.95.0-24
/coreos.com/network/subnets/172.18.22.0-24
/coreos.com/network/subnets/172.18.54.0-24

Unpack and install the flannel plugin

# tar xf flannel-v0.11.0-linux-amd64.tar.gz
# mv flanneld mk-docker-opts.sh /cloud/k8s/kubernetes/bin/

Configure flannel

[root@k8s-node1 cfg]# cat /cloud/k8s/kubernetes/cfg/flanneld
FLANNEL_OPTIONS="--etcd-endpoints=https://192.168.248.66:2379,https://192.168.248.67:2379,https://192.168.248.68:2379 -etcd-cafile=/cloud/k8s/etcd/ssl/ca.pem -etcd-certfile=/cloud/k8s/etcd/ssl/server.pem -etcd-keyfile=/cloud/k8s/etcd/ssl/server-key.pem"

Create the flanneld systemd unit

[root@k8s-node1 cfg]# cat /usr/lib/systemd/system/flanneld.service
[Unit]
Description=Flanneld overlay address etcd agent
After=network-online.target network.target
Before=docker.service

[Service]
Type=notify
EnvironmentFile=/cloud/k8s/kubernetes/cfg/flanneld
ExecStart=/cloud/k8s/kubernetes/bin/flanneld --ip-masq $FLANNEL_OPTIONS
ExecStartPost=/cloud/k8s/kubernetes/bin/mk-docker-opts.sh -k DOCKER_NETWORK_OPTIONS -d /run/flannel/subnet.env
Restart=on-failure

[Install]
WantedBy=multi-user.target

Configure Docker to start with the flannel subnet

[root@k8s-node1 cfg]# cat /usr/lib/systemd/system/docker.service
[Unit]
Description=Docker Application Container Engine
Documentation=https://docs.docker.com
After=network-online.target firewalld.service
Wants=network-online.target

[Service]
Type=notify
EnvironmentFile=/run/flannel/subnet.env
# the default is not to use systemd for cgroups because the delegate issues still
# exists and systemd currently does not support the cgroup feature set required
# for containers run by docker
ExecStart=/usr/bin/dockerd $DOCKER_NETWORK_OPTIONS
ExecReload=/bin/kill -s HUP $MAINPID
# Having non-zero Limit*s causes performance problems due to accounting overhead
# in the kernel. We recommend using cgroups to do container-local accounting.
LimitNOFILE=infinity
LimitNPROC=infinity
LimitCORE=infinity
# Uncomment TasksMax if your systemd version supports it.
# Only systemd 226 and above support this version.
#TasksMax=infinity
TimeoutStartSec=0
# set delegate yes so that systemd does not reset the cgroups of docker containers
Delegate=yes
# kill only the docker process, not all processes in the cgroup
KillMode=process
# restart the docker process if it exits prematurely
Restart=on-failure
StartLimitBurst=3
StartLimitInterval=60s

[Install]
WantedBy=multi-user.target

Start the services

systemctl daemon-reload
systemctl start flanneld
systemctl enable flanneld
systemctl restart docker

Verify the flannel network configuration

Ping each node's docker0 address from the other nodes; if the pings succeed, the flannel network plugin is working.
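
The same verification can be done from the shell; the exact addresses depend on the subnets flannel leased above, so the ping target below is only an example taken from the subnet list printed earlier:

[root@k8s-node1 ~]# cat /run/flannel/subnet.env   # subnet handed to Docker on this node
[root@k8s-node1 ~]# ip addr show flannel.1        # vxlan interface
[root@k8s-node1 ~]# ip addr show docker0          # should sit inside the leased /24
[root@k8s-node1 ~]# ping -c 3 172.18.95.1         # e.g. another node's docker0 address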

7. Deploy the master components

Unpack the master binaries

# tar xf kubernetes-server-linux-amd64.tar.gz
# cp kubernetes/server/bin/{kube-scheduler,kube-apiserver,kube-controller-manager,kubectl} /cloud/k8s/kubernetes/bin/

Install the Kubernetes certificates

# cp /data/ssl_config/kubernetes/*.pem /cloud/k8s/kubernetes/ssl/

Deploy the kube-apiserver component

Create the TLS bootstrapping token

[root@k8s-master cfg]# head -c 16 /dev/urandom | od -An -t x | tr -d ' '   # generate a random token
[root@k8s-master cfg]# pwd
/cloud/k8s/kubernetes/cfg
[root@k8s-master cfg]# cat token.csv
a081e7ba91d597006cbdacfa8ee114ac,kubelet-bootstrap,10001,"system:kubelet-bootstrap"
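
A small sketch that ties the two steps together, generating a fresh token and writing token.csv in one pass (the resulting token will differ from the value shown above, and the apiserver configuration below must reference whatever ends up in this file):

BOOTSTRAP_TOKEN=$(head -c 16 /dev/urandom | od -An -t x | tr -d ' ')
cat > /cloud/k8s/kubernetes/cfg/token.csv <<EOF
${BOOTSTRAP_TOKEN},kubelet-bootstrap,10001,"system:kubelet-bootstrap"
EOF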

Create the apiserver configuration file

[root@k8s-master cfg]# cat kube-apiserver
KUBE_APISERVER_OPTS="--logtostderr=true \
--v=4 \
--etcd-servers=https://192.168.248.66:2379,https://192.168.248.67:2379,https://192.168.248.68:2379 \
--bind-address=192.168.248.65 \
--secure-port=6443 \
--advertise-address=192.168.248.65 \
--allow-privileged=true \
--service-cluster-ip-range=10.0.0.0/24 \
--admission-control=NamespaceLifecycle,LimitRanger,SecurityContextDeny,ServiceAccount,ResourceQuota,NodeRestriction \
--authorization-mode=RBAC,Node \
--enable-bootstrap-token-auth \
--token-auth-file=/cloud/k8s/kubernetes/cfg/token.csv \
--service-node-port-range=30000-50000 \
--tls-cert-file=/cloud/k8s/kubernetes/ssl/server.pem \
--tls-private-key-file=/cloud/k8s/kubernetes/ssl/server-key.pem \
--client-ca-file=/cloud/k8s/kubernetes/ssl/ca.pem \
--service-account-key-file=/cloud/k8s/kubernetes/ssl/ca-key.pem \
--etcd-cafile=/cloud/k8s/etcd/ssl/ca.pem \
--etcd-certfile=/cloud/k8s/etcd/ssl/server.pem \
--etcd-keyfile=/cloud/k8s/etcd/ssl/server-key.pem"

Create the kube-apiserver systemd unit

[root@k8s-master cfg]# cat /usr/lib/systemd/system/kube-apiserver.service
[Unit]
Description=Kubernetes API Server
Documentation=https://github.com/kubernetes/kubernetes

[Service]
EnvironmentFile=-/cloud/k8s/kubernetes/cfg/kube-apiserver
ExecStart=/cloud/k8s/kubernetes/bin/kube-apiserver $KUBE_APISERVER_OPTS
Restart=on-failure

[Install]
WantedBy=multi-user.target

Start the kube-apiserver service

[root@k8s-master cfg]# systemctl daemon-reload
[root@k8s-master cfg]# systemctl enable kube-apiserver
[root@k8s-master cfg]# systemctl start kube-apiserver
[root@k8s-master cfg]# ps -ef |grep kube-apiserver
root       1050      1  4 09:02 ?        00:25:21 /cloud/k8s/kubernetes/bin/kube-apiserver --logtostderr=true --v=4 --etcd-servers=https://192.168.248.66:2379,https://192.168.248.67:2379,https://192.168.248.68:2379 --bind-address=192.168.248.65 --secure-port=6443 --advertise-address=192.168.248.65 --allow-privileged=true --service-cluster-ip-range=10.0.0.0/24 --admission-control=NamespaceLifecycle,LimitRanger,SecurityContextDeny,ServiceAccount,ResourceQuota,NodeRestriction --authorization-mode=RBAC,Node --enable-bootstrap-token-auth --token-auth-file=/cloud/k8s/kubernetes/cfg/token.csv --service-node-port-range=30000-50000 --tls-cert-file=/cloud/k8s/kubernetes/ssl/server.pem --tls-private-key-file=/cloud/k8s/kubernetes/ssl/server-key.pem --client-ca-file=/cloud/k8s/kubernetes/ssl/ca.pem --service-account-key-file=/cloud/k8s/kubernetes/ssl/ca-key.pem --etcd-cafile=/cloud/k8s/etcd/ssl/ca.pem --etcd-certfile=/cloud/k8s/etcd/ssl/server.pem --etcd-keyfile=/cloud/k8s/etcd/ssl/server-key.pem
root       1888   1083  0 18:15 pts/0    00:00:00 grep --color=auto kube-apiserver

Deploy the kube-scheduler component

Create the kube-scheduler configuration file

[root@k8s-master cfg]# cat /cloud/k8s/kubernetes/cfg/kube-scheduler
KUBE_SCHEDULER_OPTS="--logtostderr=true --v=4 --master=127.0.0.1:8080 --leader-elect"

Create the kube-scheduler systemd unit

[root@k8s-master cfg]# cat /usr/lib/systemd/system/kube-scheduler.service
[Unit]
Description=Kubernetes Scheduler
Documentation=https://github.com/kubernetes/kubernetes

[Service]
EnvironmentFile=-/cloud/k8s/kubernetes/cfg/kube-scheduler
ExecStart=/cloud/k8s/kubernetes/bin/kube-scheduler $KUBE_SCHEDULER_OPTS
Restart=on-failure

[Install]
WantedBy=multi-user.target

Start the kube-scheduler service

[root@k8s-master cfg]# systemctl daemon-reload
[root@k8s-master cfg]# systemctl enable kube-scheduler.service
[root@k8s-master cfg]# systemctl start kube-scheduler.service
[root@k8s-master cfg]# ps -ef |grep kube-scheduler
root       1716      1  0 16:12 ?        00:00:19 /cloud/k8s/kubernetes/bin/kube-scheduler --logtostderr=true --v=4 --master=127.0.0.1:8080 --leader-elect
root       1897   1083  0 18:21 pts/0    00:00:00 grep --color=auto kube-scheduler

Deploy the kube-controller-manager component

Create the kube-controller-manager configuration file

[root@k8s-master cfg]# cat /cloud/k8s/kubernetes/cfg/kube-controller-manager
KUBE_CONTROLLER_MANAGER_OPTS="--logtostderr=true \
--v=4 \
--master=127.0.0.1:8080 \
--leader-elect=true \
--address=127.0.0.1 \
--service-cluster-ip-range=10.0.0.0/24 \
--cluster-name=kubernetes \
--cluster-signing-cert-file=/cloud/k8s/kubernetes/ssl/ca.pem \
--cluster-signing-key-file=/cloud/k8s/kubernetes/ssl/ca-key.pem \
--root-ca-file=/cloud/k8s/kubernetes/ssl/ca.pem \
--service-account-private-key-file=/cloud/k8s/kubernetes/ssl/ca-key.pem"

Create the kube-controller-manager systemd unit

[root@k8s-master cfg]# cat /usr/lib/systemd/system/kube-controller-manager.service
[Unit]
Description=Kubernetes Controller Manager
Documentation=https://github.com/kubernetes/kubernetes

[Service]
EnvironmentFile=/cloud/k8s/kubernetes/cfg/kube-controller-manager
ExecStart=/cloud/k8s/kubernetes/bin/kube-controller-manager $KUBE_CONTROLLER_MANAGER_OPTS
Restart=on-failure

[Install]
WantedBy=multi-user.target

Start the kube-controller-manager service

[root@k8s-master cfg]# systemctl daemon-reload
[root@k8s-master cfg]# systemctl enable kube-controller-manager
[root@k8s-master cfg]# systemctl start kube-controller-manager
[root@k8s-master cfg]# ps -ef |grep kube-controller-manager
root       1709      1  2 16:12 ?        00:03:11 /cloud/k8s/kubernetes/bin/kube-controller-manager --logtostderr=true --v=4 --master=127.0.0.1:8080 --leader-elect=true --address=127.0.0.1 --service-cluster-ip-range=10.0.0.0/24 --cluster-name=kubernetes --cluster-signing-cert-file=/cloud/k8s/kubernetes/ssl/ca.pem --cluster-signing-key-file=/cloud/k8s/kubernetes/ssl/ca-key.pem --root-ca-file=/cloud/k8s/kubernetes/ssl/ca.pem --service-account-private-key-file=/cloud/k8s/kubernetes/ssl/ca-key.pem
root       1907   1083  0 18:29 pts/0    00:00:00 grep --color=auto kube-controller-manager

Check the cluster status

[root@k8s-master cfg]# kubectl get cs
NAME                 STATUS    MESSAGE             ERROR
controller-manager   Healthy   ok
scheduler            Healthy   ok
etcd-0               Healthy   {"health":"true"}
etcd-2               Healthy   {"health":"true"}
etcd-1               Healthy   {"health":"true"}

8. Deploy the node components (on all node hosts)

Unpack the node binaries

[root@k8s-node1 install]# tar xf kubernetes-node-linux-amd64.tar.gz
[root@k8s-node1 install]# cp kubernetes/node/bin/{kubelet,kube-proxy} /cloud/k8s/kubernetes/bin/

Create the kubelet bootstrap.kubeconfig file

[root@k8s-master kubernetes]# cat environment.sh
# Create the kubelet bootstrapping kubeconfig
BOOTSTRAP_TOKEN=a081e7ba91d597006cbdacfa8ee114ac
KUBE_APISERVER="https://192.168.248.65:6443"
# Set cluster parameters
kubectl config set-cluster kubernetes \
  --certificate-authority=./ca.pem \
  --embed-certs=true \
  --server=${KUBE_APISERVER} \
  --kubeconfig=bootstrap.kubeconfig
# Set client authentication parameters
kubectl config set-credentials kubelet-bootstrap \
  --token=${BOOTSTRAP_TOKEN} \
  --kubeconfig=bootstrap.kubeconfig
# Set context parameters
kubectl config set-context default \
  --cluster=kubernetes \
  --user=kubelet-bootstrap \
  --kubeconfig=bootstrap.kubeconfig
# Set the default context
kubectl config use-context default --kubeconfig=bootstrap.kubeconfig

Run environment.sh to generate bootstrap.kubeconfig.

Create the kubelet.kubeconfig file

[root@k8s-master kubernetes]# cat envkubelet.kubeconfig.sh
# Create the kubelet kubeconfig
BOOTSTRAP_TOKEN=a081e7ba91d597006cbdacfa8ee114ac
KUBE_APISERVER="https://192.168.248.65:6443"
# Set cluster parameters
kubectl config set-cluster kubernetes \
  --certificate-authority=./ca.pem \
  --embed-certs=true \
  --server=${KUBE_APISERVER} \
  --kubeconfig=kubelet.kubeconfig
# Set client authentication parameters
kubectl config set-credentials kubelet \
  --token=${BOOTSTRAP_TOKEN} \
  --kubeconfig=kubelet.kubeconfig
# Set context parameters
kubectl config set-context default \
  --cluster=kubernetes \
  --user=kubelet \
  --kubeconfig=kubelet.kubeconfig
# Set the default context
kubectl config use-context default --kubeconfig=kubelet.kubeconfig

Run envkubelet.kubeconfig.sh to generate kubelet.kubeconfig.

Create the kube-proxy.kubeconfig file

[root@k8s-master kubernetes]# cat env_proxy.sh
# Create the kube-proxy kubeconfig
BOOTSTRAP_TOKEN=a081e7ba91d597006cbdacfa8ee114ac
KUBE_APISERVER="https://192.168.248.65:6443"
kubectl config set-cluster kubernetes \
  --certificate-authority=./ca.pem \
  --embed-certs=true \
  --server=${KUBE_APISERVER} \
  --kubeconfig=kube-proxy.kubeconfig
kubectl config set-credentials kube-proxy \
  --client-certificate=./kube-proxy.pem \
  --client-key=./kube-proxy-key.pem \
  --embed-certs=true \
  --kubeconfig=kube-proxy.kubeconfig
kubectl config set-context default \
  --cluster=kubernetes \
  --user=kube-proxy \
  --kubeconfig=kube-proxy.kubeconfig
kubectl config use-context default --kubeconfig=kube-proxy.kubeconfig

Run env_proxy.sh to generate kube-proxy.kubeconfig.

Copy the generated kubeconfig files to all node hosts

[root@k8s-master kubernetes]# scp bootstrap.kubeconfig kubelet.kubeconfig kube-proxy.kubeconfig k8s-node1:/cloud/k8s/kubernetes/cfg/
[root@k8s-master kubernetes]# scp bootstrap.kubeconfig kubelet.kubeconfig kube-proxy.kubeconfig k8s-node2:/cloud/k8s/kubernetes/cfg/
[root@k8s-master kubernetes]# scp bootstrap.kubeconfig kubelet.kubeconfig kube-proxy.kubeconfig k8s-node3:/cloud/k8s/kubernetes/cfg/

Create the kubelet parameter template file on all node hosts

[root@k8s-node1 cfg]# cat kubelet.config
kind: KubeletConfiguration
apiVersion: kubelet.config.k8s.io/v1beta1
address: 192.168.248.66
port: 10250
readOnlyPort: 10255
cgroupDriver: cgroupfs
clusterDNS: ["10.0.0.2"]
clusterDomain: cluster.local.
failSwapOn: false
authentication:
  anonymous:
    enabled: true

[root@k8s-node2 cfg]# cat kubelet.config
kind: KubeletConfiguration
apiVersion: kubelet.config.k8s.io/v1beta1
address: 192.168.248.67
port: 10250
readOnlyPort: 10255
cgroupDriver: cgroupfs
clusterDNS: ["10.0.0.2"]
clusterDomain: cluster.local.
failSwapOn: false
authentication:
  anonymous:
    enabled: true

[root@k8s-node3 cfg]# cat kubelet.config
kind: KubeletConfiguration
apiVersion: kubelet.config.k8s.io/v1beta1
address: 192.168.248.68
port: 10250
readOnlyPort: 10255
cgroupDriver: cgroupfs
clusterDNS: ["10.0.0.2"]
clusterDomain: cluster.local.
failSwapOn: false
authentication:
  anonymous:
    enabled: true

Create the kubelet configuration file

[root@k8s-node1 cfg]# cat /cloud/k8s/kubernetes/cfg/kubelet
KUBELET_OPTS="--logtostderr=true \
--v=4 \
--hostname-override=k8s-node1 \
--kubeconfig=/cloud/k8s/kubernetes/cfg/kubelet.kubeconfig \
--bootstrap-kubeconfig=/cloud/k8s/kubernetes/cfg/bootstrap.kubeconfig \
--config=/cloud/k8s/kubernetes/cfg/kubelet.config \
--cert-dir=/cloud/k8s/kubernetes/ssl \
--pod-infra-container-image=registry.cn-hangzhou.aliyuncs.com/google-containers/pause-amd64:3.0"

[root@k8s-node2 cfg]# cat /cloud/k8s/kubernetes/cfg/kubelet
KUBELET_OPTS="--logtostderr=true \
--v=4 \
--hostname-override=k8s-node2 \
--kubeconfig=/cloud/k8s/kubernetes/cfg/kubelet.kubeconfig \
--bootstrap-kubeconfig=/cloud/k8s/kubernetes/cfg/bootstrap.kubeconfig \
--config=/cloud/k8s/kubernetes/cfg/kubelet.config \
--cert-dir=/cloud/k8s/kubernetes/ssl \
--pod-infra-container-image=registry.cn-hangzhou.aliyuncs.com/google-containers/pause-amd64:3.0"

[root@k8s-node3 cfg]# cat /cloud/k8s/kubernetes/cfg/kubelet
KUBELET_OPTS="--logtostderr=true \
--v=4 \
--hostname-override=k8s-node3 \
--kubeconfig=/cloud/k8s/kubernetes/cfg/kubelet.kubeconfig \
--bootstrap-kubeconfig=/cloud/k8s/kubernetes/cfg/bootstrap.kubeconfig \
--config=/cloud/k8s/kubernetes/cfg/kubelet.config \
--cert-dir=/cloud/k8s/kubernetes/ssl \
--pod-infra-container-image=registry.cn-hangzhou.aliyuncs.com/google-containers/pause-amd64:3.0"

Create the kubelet systemd unit

[root@k8s-node1 cfg]# cat /usr/lib/systemd/system/kubelet.service
[Unit]
Description=Kubernetes Kubelet
After=docker.service
Requires=docker.service

[Service]
EnvironmentFile=/cloud/k8s/kubernetes/cfg/kubelet
ExecStart=/cloud/k8s/kubernetes/bin/kubelet $KUBELET_OPTS
Restart=on-failure
KillMode=process

[Install]
WantedBy=multi-user.target

Bind the kubelet-bootstrap user to the system cluster role (without this binding, the kubelet will fail to start)

kubectl create clusterrolebinding kubelet-bootstrap \
  --clusterrole=system:node-bootstrapper \
  --user=kubelet-bootstrap

Start the kubelet service (all node hosts)

[root@k8s-node1 cfg]# systemctl daemon-reload
[root@k8s-node1 cfg]# systemctl enable kubelet
[root@k8s-node1 cfg]# systemctl start kubelet
[root@k8s-node1 cfg]# ps -ef |grep kubelet
root       3306      1  2 09:02 ?        00:14:47 /cloud/k8s/kubernetes/bin/kubelet --logtostderr=true --v=4 --hostname-override=k8s-node1 --kubeconfig=/cloud/k8s/kubernetes/cfg/kubelet.kubeconfig --bootstrap-kubeconfig=/cloud/k8s/kubernetes/cfg/bootstrap.kubeconfig --config=/cloud/k8s/kubernetes/cfg/kubelet.config --cert-dir=/cloud/k8s/kubernetes/ssl --pod-infra-container-image=registry.cn-hangzhou.aliyuncs.com/google-containers/pause-amd64:3.0
root      87181  12020  0 19:22 pts/0    00:00:00 grep --color=auto kubelet

Approve the kubelet CSR requests on the master node

kubectl get csr
kubectl certificate approve $NAME

A node has joined once its CSR condition changes to Approved,Issued.
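
With several nodes bootstrapping at once, approving CSRs one by one gets tedious. A sketch that approves everything currently pending, assuming the CONDITION column is the last field of the kubectl get csr output:

kubectl get csr --no-headers | awk '$NF == "Pending" {print $1}' | xargs -r kubectl certificate approve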

Check the cluster status and the nodes

[root@k8s-master kubernetes]# kubectl get cs,node
NAME                                 STATUS    MESSAGE             ERROR
componentstatus/controller-manager   Healthy   ok
componentstatus/scheduler            Healthy   ok
componentstatus/etcd-2               Healthy   {"health":"true"}
componentstatus/etcd-0               Healthy   {"health":"true"}
componentstatus/etcd-1               Healthy   {"health":"true"}

NAME             STATUS   ROLES    AGE    VERSION
node/k8s-node1   Ready    <none>   4d2h   v1.15.0
node/k8s-node2   Ready    <none>   4d2h   v1.15.0
node/k8s-node3   Ready    <none>   4d2h   v1.15.0

Deploy the kube-proxy component on the nodes

[root@k8s-node1 cfg]# cat /cloud/k8s/kubernetes/cfg/kube-proxy
KUBE_PROXY_OPTS="--logtostderr=true \
--v=4 \
--hostname-override=k8s-node1 \
--cluster-cidr=10.0.0.0/24 \
--kubeconfig=/cloud/k8s/kubernetes/cfg/kube-proxy.kubeconfig"

Create the kube-proxy systemd unit

[root@k8s-node1 cfg]# cat /usr/lib/systemd/system/kube-proxy.service
[Unit]
Description=Kubernetes Proxy
After=network.target

[Service]
EnvironmentFile=/cloud/k8s/kubernetes/cfg/kube-proxy
ExecStart=/cloud/k8s/kubernetes/bin/kube-proxy $KUBE_PROXY_OPTS
Restart=on-failure

[Install]
WantedBy=multi-user.target

Start the kube-proxy service

[root@k8s-node1 cfg]# systemctl daemon-reload
[root@k8s-node1 cfg]# systemctl enable kube-proxy
[root@k8s-node1 cfg]# systemctl start kube-proxy
[root@k8s-node1 cfg]# ps -ef |grep kube-proxy
root        966      1  0 09:02 ?        00:01:20 /cloud/k8s/kubernetes/bin/kube-proxy --logtostderr=true --v=4 --hostname-override=k8s-node1 --cluster-cidr=10.0.0.0/24 --kubeconfig=/cloud/k8s/kubernetes/cfg/kube-proxy.kubeconfig
root      87093  12020  0 19:22 pts/0    00:00:00 grep --color=auto kube-proxy
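
In v1.15 kube-proxy defaults to iptables mode, materializing each Service as NAT rules. A quick, optional way to confirm that it is programming rules is to list its KUBE-SERVICES chain:

[root@k8s-node1 cfg]# iptables -t nat -nL KUBE-SERVICES | head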

Deploy the CoreDNS component

[root@k8s-master ~]# cat coredns.yaml
# Warning: This is a file generated from the base underscore template file: coredns.yaml.base
apiVersion: v1
kind: ServiceAccount
metadata:
  name: coredns
  namespace: kube-system
  labels:
    kubernetes.io/cluster-service: "true"
    addonmanager.kubernetes.io/mode: Reconcile
---
apiVersion: rbac.authorization.k8s.io/v1
kind: ClusterRole
metadata:
  labels:
    kubernetes.io/bootstrapping: rbac-defaults
    addonmanager.kubernetes.io/mode: Reconcile
  name: system:coredns
rules:
- apiGroups:
  - ""
  resources:
  - endpoints
  - services
  - pods
  - namespaces
  verbs:
  - list
  - watch
---
apiVersion: rbac.authorization.k8s.io/v1
kind: ClusterRoleBinding
metadata:
  annotations:
    rbac.authorization.kubernetes.io/autoupdate: "true"
  labels:
    kubernetes.io/bootstrapping: rbac-defaults
    addonmanager.kubernetes.io/mode: EnsureExists
  name: system:coredns
roleRef:
  apiGroup: rbac.authorization.k8s.io
  kind: ClusterRole
  name: system:coredns
subjects:
- kind: ServiceAccount
  name: coredns
  namespace: kube-system
---
apiVersion: v1
kind: ConfigMap
metadata:
  name: coredns
  namespace: kube-system
  labels:
    addonmanager.kubernetes.io/mode: EnsureExists
data:
  Corefile: |
    .:53 {
        errors
        health
        kubernetes cluster.local in-addr.arpa ip6.arpa {
            pods insecure
            upstream
            fallthrough in-addr.arpa ip6.arpa
        }
        prometheus :9153
        proxy . /etc/resolv.conf
        cache 30
        loop
        reload
        loadbalance
    }
---
apiVersion: extensions/v1beta1
kind: Deployment
metadata:
  name: coredns
  namespace: kube-system
  labels:
    k8s-app: kube-dns
    kubernetes.io/cluster-service: "true"
    addonmanager.kubernetes.io/mode: Reconcile
    kubernetes.io/name: "CoreDNS"
spec:
  # replicas: not specified here:
  # 1. In order to make Addon Manager do not reconcile this replicas parameter.
  # 2. Default is 1.
  # 3. Will be tuned in real time if DNS horizontal auto-scaling is turned on.
  replicas: 3
  strategy:
    type: RollingUpdate
    rollingUpdate:
      maxUnavailable: 1
  selector:
    matchLabels:
      k8s-app: kube-dns
  template:
    metadata:
      labels:
        k8s-app: kube-dns
      annotations:
        seccomp.security.alpha.kubernetes.io/pod: 'docker/default'
    spec:
      serviceAccountName: coredns
      tolerations:
        - key: node-role.kubernetes.io/master
          effect: NoSchedule
        - key: "CriticalAddonsOnly"
          operator: "Exists"
      containers:
      - name: coredns
        image: coredns/coredns:1.3.1
        imagePullPolicy: IfNotPresent
        resources:
          limits:
            memory: 170Mi
          requests:
            cpu: 100m
            memory: 70Mi
        args: [ "-conf", "/etc/coredns/Corefile" ]
        volumeMounts:
        - name: config-volume
          mountPath: /etc/coredns
          readOnly: true
        ports:
        - containerPort: 53
          name: dns
          protocol: UDP
        - containerPort: 53
          name: dns-tcp
          protocol: TCP
        - containerPort: 9153
          name: metrics
          protocol: TCP
        livenessProbe:
          httpGet:
            path: /health
            port: 8080
            scheme: HTTP
          initialDelaySeconds: 60
          timeoutSeconds: 5
          successThreshold: 1
          failureThreshold: 5
        securityContext:
          allowPrivilegeEscalation: false
          capabilities:
            add:
            - NET_BIND_SERVICE
            drop:
            - all
          readOnlyRootFilesystem: true
      dnsPolicy: Default
      volumes:
        - name: config-volume
          configMap:
            name: coredns
            items:
            - key: Corefile
              path: Corefile
---
apiVersion: v1
kind: Service
metadata:
  name: kube-dns
  namespace: kube-system
  annotations:
    prometheus.io/port: "9153"
    prometheus.io/scrape: "true"
  labels:
    k8s-app: kube-dns
    kubernetes.io/cluster-service: "true"
    addonmanager.kubernetes.io/mode: Reconcile
    kubernetes.io/name: "CoreDNS"
spec:
  selector:
    k8s-app: kube-dns
  clusterIP: 10.0.0.2
  ports:
  - name: dns
    port: 53
    protocol: UDP
  - name: dns-tcp
    port: 53
    protocol: TCP

[root@k8s-master ~]# kubectl apply -f coredns.yaml
serviceaccount/coredns unchanged
clusterrole.rbac.authorization.k8s.io/system:coredns unchanged
clusterrolebinding.rbac.authorization.k8s.io/system:coredns unchanged
configmap/coredns unchanged
deployment.extensions/coredns unchanged
service/kube-dns unchanged
[root@k8s-master ~]# kubectl get deployment -n kube-system
NAME      READY   UP-TO-DATE   AVAILABLE   AGE
coredns   3/3     3            3           33h
[root@k8s-master ~]# kubectl get deployment -n kube-system -o wide
NAME      READY   UP-TO-DATE   AVAILABLE   AGE   CONTAINERS   IMAGES                  SELECTOR
coredns   3/3     3            3           33h   coredns      coredns/coredns:1.3.1   k8s-app=kube-dns
[root@k8s-master ~]# kubectl get pod -n kube-system -o wide
NAME                      READY   STATUS    RESTARTS   AGE   IP            NODE        NOMINATED NODE   READINESS GATES
coredns-b49c586cf-nwzv6   1/1     Running   1          33h   172.18.54.3   k8s-node3   <none>           <none>
coredns-b49c586cf-qv5b9   1/1     Running   1          33h   172.18.22.3   k8s-node1   <none>           <none>
coredns-b49c586cf-rcqhc   1/1     Running   1          33h   172.18.95.2   k8s-node2   <none>           <none>
[root@k8s-master ~]# kubectl get svc -n kube-system -o wide
NAME       TYPE        CLUSTER-IP   EXTERNAL-IP   PORT(S)         AGE   SELECTOR
kube-dns   ClusterIP   10.0.0.2     <none>        53/UDP,53/TCP   33h   k8s-app=kube-dns
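
As a final smoke test (assuming the nodes can pull busybox:1.28 from the internet), resolve the kubernetes Service through the new cluster DNS; the lookup should go through 10.0.0.2 and return the kubernetes Service cluster IP (10.0.0.1):

[root@k8s-master ~]# kubectl run -it --rm dns-test --image=busybox:1.28 --restart=Never -- nslookup kubernetes.default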

This completes the bare-bones deployment of Kubernetes v1.15.0.
