What does k8s mean? Deploying a k8s cluster with kubeadm (k8s deployment) | PetaExpress


What does k8s mean?

Kubernetes, abbreviated as K8s, is an open-source system for managing containerized applications across multiple hosts on a cloud platform. Its goal is to make deploying containerized applications simple and efficient (powerful), and it provides mechanisms for application deployment, scheduling, updating, and maintenance.

In Kubernetes, we can create multiple containers, each running one application instance, and then use the built-in load-balancing policy to manage, discover, and access this group of instances, without the operations team having to do any complex manual configuration.

Features of Kubernetes (K8s):

Portable: supports public cloud, private cloud, hybrid cloud, and multi-cloud

Extensible: modular, pluggable, mountable, composable

Automated: automatic deployment, automatic restart, automatic replication, automatic scaling

Deploying a k8s cluster with kubeadm (k8s deployment):

Three ways to deploy Kubernetes (k8s)

minikube

Minikube is a tool for quickly running a single-node Kubernetes cluster locally; it is intended only for users trying out Kubernetes or doing day-to-day development.

kubeadm

Kubeadm is a tool that provides the kubeadm init and kubeadm join commands for quickly deploying a Kubernetes cluster.

Binary packages

Recommended: download the official release binaries and manually deploy each component to form a Kubernetes cluster.

Preparing the environment for installing kubeadm

Perform the following operations on all three nodes.

2.2.1 Environment requirements

OS: CentOS 7.4+

Hardware requirements: CPU >= 2 cores, memory >= 2 GB

Node roles

IP           Role         Installed software
10.0.0.100   k8s-master   kube-apiserver, kube-scheduler, kube-controller-manager, docker, flannel, kubelet
10.0.0.101   k8s-node01   kubelet, kube-proxy, docker, flannel
10.0.0.102   k8s-node02   kubelet, kube-proxy, docker, flannel

1. Disable the firewall and SELinux

systemctl stop firewalld && systemctl disable firewalld

sed -i 's/^SELINUX=.*/SELINUX=disabled/' /etc/selinux/config && setenforce 0

2. Disable the swap partition

swapoff -a # temporary

sed -i '/ swap / s/^\(.*\)$/#\1/g' /etc/fstab # permanent
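As a quick check (not part of the original steps), the Swap row of free -m should now show 0:

free -m # the Swap row should report 0 total / 0 used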

3. Set the hostname on 10.0.0.100, 10.0.0.101, and 10.0.0.102 respectively, and configure the hosts file

hostnamectl set-hostname k8s-master

hostnamectl set-hostname k8s-node01

hostnamectl set-hostname k8s-node02

4. Run the following on all hosts to add the hosts entries

cat >> /etc/hosts << EOF

10.0.0.100 k8s-master

10.0.0.101 k8s-node01

10.0.0.102 k8s-node02

EOF

5. Kernel tuning: pass bridged IPv4 traffic to the iptables chains

cat > /etc/sysctl.d/k8s.conf << EOF

net.bridge.bridge-nf-call-ip6tables = 1

net.bridge.bridge-nf-call-iptables = 1

EOF

sysctl --system
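If sysctl reports that the net.bridge.bridge-nf-call-* keys cannot be found, the br_netfilter kernel module is probably not loaded yet; a small fix (an assumption about your environment, not part of the original steps):

modprobe br_netfilter # load the bridge netfilter module
sysctl --system # re-apply the sysctl settings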

6. Set the system time zone and synchronize with a time server

yum install -y ntpdate

ntpdate time.windows.com
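The step title also mentions setting the time zone; the original does not show a command for it, but one common way (assuming the Asia/Shanghai time zone is wanted) would be:

timedatectl set-timezone Asia/Shanghai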

2.2.4 Install Docker

wget https://mirrors.aliyun.com/docker-ce/linux/centos/docker-ce.repo -O /etc/yum.repos.d/docker-ce.repo

yum -y install docker-ce-18.06.1.ce-3.el7

systemctl enable docker && systemctl start docker

docker --version

Docker version 18.06.1-ce, build e68fc7a

2.2.5 Add the Kubernetes YUM repository

cat > /etc/yum.repos.d/kubernetes.repo << EOF

[kubernetes]

name=Kubernetes

baseurl=https://mirrors.aliyun.com/kubernetes/yum/repos/kubernetes-el7-x86_64

enabled=1

gpgcheck=0

repo_gpgcheck=0

gpgkey=https://mirrors.aliyun.com/kubernetes/yum/doc/yum-key.gpg https://mirrors.aliyun.com/kubernetes/yum/doc/rpm-package-key.gpg

EOF

2.2.6 Install kubeadm, kubelet, and kubectl

This step must be performed on all hosts.

yum install -y kubelet-1.15.0 kubeadm-1.15.0 kubectl-1.15.0

systemctl enable kubelet
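To confirm which versions were installed (a quick check, not in the original steps):

kubeadm version
kubelet --version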

If the installation fails, clear the yum cache and then re-run the install command:

yum clean all

If that still does not work, rebuild the cache and install the latest versions directly:

yum makecache fast

yum install -y kubelet kubeadm kubectl

2.3 Deploy the Kubernetes Master

Run this only on the Master node; change the apiserver advertise address to your own master's address.

[root@k8s-master ~]# kubeadm init \
  --apiserver-advertise-address=10.0.0.100 \
  --image-repository registry.aliyuncs.com/google_containers \
  --kubernetes-version v1.15.0 \
  --service-cidr=10.1.0.0/16 \
  --pod-network-cidr=10.244.0.0/16

Because the default image registry k8s.gcr.io is not reachable from within China, the Alibaba Cloud mirror registry is specified here.
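Optionally, you can pre-pull the control-plane images first to verify that the mirror registry is reachable; a sketch using the same repository and version as above (not part of the original steps):

kubeadm config images pull --image-repository registry.aliyuncs.com/google_containers --kubernetes-version v1.15.0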

Output like the following indicates success:

Your Kubernetes control-plane has initialized successfully!

To start using your cluster, you need to run the following as a regular user:

mkdir -p $HOME/.kube

sudo cp -i /etc/kubernetes/admin.conf $HOME/.kube/config

sudo chown $(id -u):$(id -g) $HOME/.kube/config

You should now deploy a pod network to the cluster.

Run "kubectl apply -f [podnetwork].yaml" with one of the options listed at:

https://kubernetes.io/docs/concepts/cluster-administration/addons/

Then you can join any number of worker nodes by running the following on each as root:

kubeadm join 10.0.0.100:6443 --token fylid0.3udi31k2aw9zvjtc \
    --discovery-token-ca-cert-hash sha256:3fbb4b58eccff32668473b99cc3b0c64964f1363c93d7c6a8f502d43d34718d3

Follow the instructions from the output:

[root@k8s-master ~]# mkdir -p $HOME/.kube

[root@k8s-master ~]# sudo cp -i /etc/kubernetes/admin.conf $HOME/.kube/config

[root@k8s-master ~]# sudo chown $(id -u):$(id -g) $HOME/.kube/config

Regenerate a new token (skip this on the first run)

The default token is valid for 24 hours; once it expires it can no longer be used. If more nodes need to join the cluster later, resolve this as follows:

Generate a new token:

kubeadm token create

[root@k8s-master ~]# kubeadm token create

0w3a92.ijgba9ia0e3scicg

[root@k8s-master ~]# kubeadm token list

TOKEN TTL EXPIRES USAGES DESCRIPTION EXTRA GROUPS

0w3a92.ijgba9ia0e3scicg 23h 2019-09-08T22:02:40+08:00 authentication,signing system:bootstrappers:kubeadm:default-node-token

t0ehj8.k4ef3gq0icr3etl0 22h 2019-09-08T20:58:34+08:00 authentication,signing The default bootstrap token generated by 'kubeadm init'. system:bootstrappers:kubeadm:default-node-token

[root@k8s-master ~]#

Get the sha256 hash of the CA certificate:

[root@k8s-master ~]# openssl x509 -pubkey -in /etc/kubernetes/pki/ca.crt | openssl rsa -pubin -outform der 2>/dev/null | openssl dgst -sha256 -hex | sed 's/^.* //'

ce07a7f5b259961884c55e3ff8784b1eda6f8b5931e6fa2ab0b30b6a4234c09a
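As an alternative to assembling the token and hash by hand (not shown in the original), kubeadm can print a ready-to-use join command directly:

kubeadm token create --print-join-command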

Join a node to the cluster:

[root@k8s-node01 ~]# kubeadm join --token aa78f6.8b4cafc8ed26c34f --discovery-token-ca-cert-hash sha256:0fd95a9bc67a7bf0ef42da968a0d55d92e52898ec37c971bd77ee501d845b538 10.0.0.100:6443 --skip-preflight-checks

2.4 Join the Kubernetes Nodes

Run this on both Node machines.

Use kubeadm join to register the Node with the Master.

The kubeadm join command line was already generated by kubeadm init above.

[root@k8s-node01 ~]# kubeadm join 10.0.0.100:6443 --token fylid0.3udi31k2aw9zvjtc \
    --discovery-token-ca-cert-hash sha256:3fbb4b58eccff32668473b99cc3b0c64964f1363c93d7c6a8f502d43d34718d3

Output like the following indicates success:

[preflight] Running pre-flight checks

[WARNING IsDockerSystemdCheck]: detected "cgroupfs" as the Docker cgroup driver. The recommended driver is "systemd". Please follow the guide at https://kubernetes.io/docs/setup/cri/

[preflight] Reading configuration from the cluster...

[preflight] FYI: You can look at this config file with 'kubectl -n kube-system get cm kubeadm-config -oyaml'

[kubelet-start] Downloading configuration for the kubelet from the "kubelet-config-1.15" ConfigMap in the kube-system namespace

[kubelet-start] Writing kubelet configuration to file "/var/lib/kubelet/config.yaml"

[kubelet-start] Writing kubelet environment file with flags to file "/var/lib/kubelet/kubeadm-flags.env"

[kubelet-start] Activating the kubelet service

[kubelet-start] Waiting for the kubelet to perform the TLS Bootstrap...

This node has joined the cluster:

  • Certificate signing request was sent to apiserver and a response was received.
  • The Kubelet was informed of the new secure connection details.

Run 'kubectl get nodes' on the control-plane to see this node join the cluster.

2.5 Install the network plugin

Run this only on the Master node.

[root@k8s-master ~]# wget https://raw.githubusercontent.com/coreos/flannel/master/Documentation/kube-flannel.yml

Modify the image address (the default image may not be pullable; make sure you can reach the quay.io registry, otherwise change it as follows):

[root@k8s-master ~]# vim kube-flannel.yml

Edit the file and replace the flannel image in both places where it appears with the image shown below (the exact line numbers vary with the manifest version); after the change, the following check confirms it:

[root@k8s-master ~]# cat -n kube-flannel.yml|grep lizhenliang/flannel:v0.11.0-amd64

172 image: lizhenliang/flannel:v0.11.0-amd64

186 image: lizhenliang/flannel:v0.11.0-amd64
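If you prefer not to edit by hand, a sed one-liner can do the same replacement; a sketch, assuming the manifest references quay.io/coreos/flannel:v0.11.0-amd64 as the original image:

sed -i 's#quay.io/coreos/flannel:v0.11.0-amd64#lizhenliang/flannel:v0.11.0-amd64#g' kube-flannel.yml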

[root@k8s-master ~]# kubectl apply -f kube-flannel.yml

[root@k8s-master ~]# kubectl get pods -n kube-system

NAME READY STATUS RESTARTS AGE

coredns-bccdc95cf-b9pz7 1/1 Running 0 10m

coredns-bccdc95cf-dfb58 1/1 Running 0 10m

etcd-k8s-master 1/1 Running 0 9m47s

kube-apiserver-k8s-master 1/1 Running 0 9m49s

kube-controller-manager-k8s-master 1/1 Running 0 9m46s

kube-flannel-ds-amd64-5lqjb 1/1 Running 0 2m6s

kube-flannel-ds-amd64-jvvpx 0/1 Init:0/1 0 2m6s

kube-proxy-hjwzg 1/1 Running 0 10m

kube-proxy-rxm2g 1/1 Running 0 6m51s

kube-scheduler-k8s-master 1/1 Running 0 9m41s

2.6 Check the cluster node status

Check the cluster's node status. After the network plugin has been installed, continue with the later steps only when the output looks like the following and all nodes are Ready.

[root@k8s-master ~]# kubectl get nodes

NAME STATUS ROLES AGE VERSION

k8s-master Ready master 37m v1.15.0

k8s-node01 Ready <none> 5m22s v1.15.0

k8s-node02 Ready <none> 5m18s v1.15.0

[root@k8s-node01 ~]# kubectl get pod -n kube-system

NAME READY STATUS RESTARTS AGE

coredns-bccdc95cf-6pdgv 1/1 Running 0 80m

coredns-bccdc95cf-f845x 1/1 Running 0 80m

etcd-k8s-master 1/1 Running 0 80m

kube-apiserver-k8s-master 1/1 Running 0 79m

kube-controller-manager-k8s-master 1/1 Running 0 80m

kube-flannel-ds-amd64-chpz8 1/1 Running 0 70m

kube-flannel-ds-amd64-jx56v 1/1 Running 0 70m

kube-flannel-ds-amd64-tsgvv 1/1 Running 0 70m

kube-proxy-d5b7l 1/1 Running 0 80m

kube-proxy-f7v46 1/1 Running 0 75m

kube-proxy-wqhsj 1/1 Running 0 78m

kube-scheduler-k8s-master 1/1 Running 0 80m

kubernetes-dashboard-8499f49758-6f6ct 1/1 Running 0 42m

Only when every pod shows 1/1 can the subsequent steps succeed. If the flannel pods do not, check the network and redo the following:

kubectl delete -f kube-flannel.yml

then re-download the manifest with wget, modify the image address again, and run:

kubectl apply -f kube-flannel.yml
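Put together, the redo sequence looks like this (a sketch; the sed assumes the same original image name as above):

kubectl delete -f kube-flannel.yml
wget https://raw.githubusercontent.com/coreos/flannel/master/Documentation/kube-flannel.yml
sed -i 's#quay.io/coreos/flannel:v0.11.0-amd64#lizhenliang/flannel:v0.11.0-amd64#g' kube-flannel.yml
kubectl apply -f kube-flannel.yml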

2.7 Create an nginx deployment

Create a pod in the Kubernetes cluster, expose a port, and verify that it can be accessed normally:

[root@k8s-master ~]# kubectl create deployment nginx --image=nginx

deployment.apps/nginx created

[root@k8s-master ~]# kubectl expose deployment nginx --port=80 --type=NodePort

service/nginx exposed

[root@k8s-master ~]# kubectl get pods,svc

NAME READY STATUS RESTARTS AGE

pod/nginx-554b9c67f9-wf5lm 1/1 Running 0 24s

NAME TYPE CLUSTER-IP EXTERNAL-IP PORT(S) AGE

service/kubernetes ClusterIP 10.1.0.1 <none> 443/TCP 39m

service/nginx NodePort 10.1.224.251 <none> 80:32039/TCP 9s

Access address: http://NodeIP:Port, in this example: http://10.0.0.101:32039
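A quick check from the command line (using the NodePort from this example):

curl -I http://10.0.0.101:32039 # expect an HTTP 200 response from nginx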

2.8 Deploy the Dashboard

[root@k8s-master ~]# wget https://raw.githubusercontent.com/kubernetes/dashboard/v1.10.1/src/deploy/recommended/kubernetes-dashboard.yaml

[root@k8s-master ~]# vim kubernetes-dashboard.yaml

Changes to make:

109   spec:
110     containers:
111     - name: kubernetes-dashboard
112       image: lizhenliang/kubernetes-dashboard-amd64:v1.10.1   # change this line

......

157   spec:
158     type: NodePort        # add this line
159     ports:
160       - port: 443
161         targetPort: 8443
162         nodePort: 30001   # add this line
163     selector:
164       k8s-app: kubernetes-dashboard

[root@k8s-master ~]# kubectl apply -f kubernetes-dashboard.yaml

[root@k8s-master ~]# kubectl get pod -n kube-system

[root@k8s-master ~]# kubectl get pods,svc -n kube-system

Access the following address in Firefox (Google Chrome will not open it because the certificate is not trusted): https://10.0.0.101:30001

Create a service account and bind it to the default cluster-admin cluster role:

[root@k8s-master ~]# kubectl create serviceaccount dashboard-admin -n kube-system

[root@k8s-master ~]# kubectl create clusterrolebinding dashboard-admin --clusterrole=cluster-admin --serviceaccount=kube-system:dashboard-admin

[root@k8s-master ~]# kubectl describe secrets -n kube-system $(kubectl -n kube-system get secret | awk '/dashboard-admin/{print $1}')
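The describe output contains a token: field; that value is what you paste into the Dashboard's token login. If you only want the raw token, a one-liner sketch (assuming the auto-created dashboard-admin token secret) is:

kubectl -n kube-system get secret $(kubectl -n kube-system get secret | awk '/dashboard-admin/{print $1}') -o jsonpath='{.data.token}' | base64 -d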

2.9 Fix access from other browsers

[root@k8s-master ~]# cd /etc/kubernetes/pki/

[root@k8s-master pki]# mkdir ui

[root@k8s-master pki]# cp apiserver.crt ui/

[root@k8s-master pki]# cp apiserver.key ui/

[root@k8s-master pki]# cd ui/

[root@k8s-master ui]# mv apiserver.crt dashboard.pem

[root@k8s-master ui]# mv apiserver.key dashboard-key.pem

[root@k8s-master ui]# kubectl delete secret kubernetes-dashboard-certs -n kube-system

[root@k8s-master ui]# kubectl create secret generic kubernetes-dashboard-certs --from-file=./ -n kube-system

[root@k8s-master]# vim kubernetes-dashboard.yaml

# Go back to the directory containing this yaml file and edit it there

Add the following two certificate lines under args in the dashboard container section of kubernetes-dashboard.yaml:

          - --tls-key-file=dashboard-key.pem
          - --tls-cert-file=dashboard.pem

[root@k8s-master ~]# kubectl apply -f kubernetes-dashboard.yaml

[root@k8s-master ~]# kubectl create serviceaccount dashboard-admin -n kube-system

serviceaccount/dashboard-admin created

[root@k8s-master ~]# kubectl create clusterrolebinding dashboard-admin --clusterrole=cluster-admin \
    --serviceaccount=kube-system:dashboard-admin

[root@k8s-master ~]# kubectl describe secrets -n kube-system $(kubectl -n kube-system get secret | awk '/dashboard-admin/{print $1}')

PetaExpress:https://www.petaexpress.com/products/d4a363ee47f6fe2d.html

PetaExpress offers a free cloud server trial; anyone interested can sign up at: https://www.petaexpress.com/free
