
A Quick Study of K8s

Tuesday, July 12, 2022



Now that I'm done with k8s, I'm going to pause for a few days.

I've learned a lot recently and need some time to digest it all...

Kubernetes: Automated Container Deployment

Production-grade container orchestration: arranging, scheduling, and managing containers toward a defined goal.

A container orchestration engine from Google, released in 2014 and written in Go.

Go is an open-source programming language Google released in 2009.

The K8s administrator certification is the CKA.

The official docs are very detailed and even available in Chinese: https://kubernetes.io/zh-cn/docs/home/


1. Overall architecture

users      -> master ------------------------------------------> nodes

                -> API Server                                              -> kubelet

                -> Scheduler                                               -> kube-proxy

                -> Cluster State Store (etcd)                              -> Pod (Container Runtime)

                -> Controller Manager Server

The master is the k8s cluster's control node: it schedules and manages the cluster and accepts operation requests from users outside the cluster.

Nodes are the cluster's worker nodes (Worker Nodes); they run the user's application containers.


2. Ways to set up an environment

1) minikube

2) kind

3) kubeadm

4) binary packages from GitHub

5) yum

6) third-party packaging tools

7) managed k8s on a public cloud such as Alibaba Cloud


3. Deploying k8s with kubeadm

Environment requirements: CentOS 7+, 2 GB RAM and 2 cores, network connectivity within the cluster, and swap disabled.

CentOS wouldn't install on Apple M1; finally got Ubuntu Server on...

1) Install Docker

There are docs: https://docs.docker.com/engine/install/ubuntu/

senrsl@senrsl:~$ sudo apt-get remove docker docker-engine docker.io containerd runc

senrsl@senrsl:~$ sudo apt-get update

senrsl@senrsl:~$ sudo apt-get install \
>     ca-certificates \
>     curl \
>     gnupg \
>     lsb-release

senrsl@senrsl:~$ sudo mkdir -p /etc/apt/keyrings
senrsl@senrsl:~$ curl -fsSL https://download.docker.com/linux/ubuntu/gpg | sudo gpg --dearmor -o /etc/apt/keyrings/docker.gpg
senrsl@senrsl:~$ echo \
>   "deb [arch=$(dpkg --print-architecture) signed-by=/etc/apt/keyrings/docker.gpg] https://download.docker.com/linux/ubuntu \
>   $(lsb_release -cs) stable" | sudo tee /etc/apt/sources.list.d/docker.list > /dev/null

senrsl@senrsl:~$ sudo apt-get update

senrsl@senrsl:~$ sudo apt-get install docker-ce docker-ce-cli containerd.io docker-compose-plugin

senrsl@senrsl:~$ docker --version
Docker version 20.10.17, build 100c701
senrsl@senrsl:~$

Docker is in place...

2) Turn off the firewall

senrsl@senrsl:~$ sudo ufw status

Status: inactive
senrsl@senrsl:~$

It was off by default.

3) Switch to root

senrsl@senrsl:~/k8s$ sudo su -
root@senrsl:~# pwd
/root
root@senrsl:~# exit
logout
senrsl@senrsl:~/k8s$ sudo su
root@senrsl:/home/senrsl/k8s# docker images
REPOSITORY   TAG       IMAGE ID   CREATED   SIZE
root@senrsl:/home/senrsl/k8s#

4) Clone another VM

Two VMs: one master, one node.

The earlier one is the master, 172.16.5.128.

The clone came up with the same IP, of all things... disconnected the network bridge, rebooted, then reconnected the bridge.

The newly cloned node is 172.16.5.129.

5) Prepare the environment for kubeadm

Google runs billions of containers every week; Kubernetes is designed around the same principles and can scale without growing your ops team.

Again there are docs: https://kubernetes.io/zh-cn/docs/setup/production-environment/tools/kubeadm/install-kubeadm/

Change the hostname; hostnames must not repeat:
root@senrsl:/home/senrsl/k8s# hostnamectl set-hostname senrslk8snode
root@senrsl:/home/senrsl/k8s# hostnamectl
   Static hostname: senrslk8snode
         Icon name: computer-vm
           Chassis: vm
        Machine ID: b6d041a2c6b04f0390bb4963c095b713
           Boot ID: 0830859f3e95471a905640d592904655
    Virtualization: vmware
  Operating System: Ubuntu 20.04.4 LTS
            Kernel: Linux 5.4.0-100-generic
      Architecture: arm64
root@senrsl:/home/senrsl/k8s#

6) Let iptables see bridged traffic

root@senrsl:/home/senrsl# cat <<EOF | sudo tee /etc/modules-load.d/k8s.conf
> br_netfilter
> EOF
br_netfilter
root@senrsl:/home/senrsl# cat <<EOF | sudo tee /etc/sysctl.d/k8s.conf
> net.bridge.bridge-nf-call-ip6tables = 1
> net.bridge.bridge-nf-call-iptables = 1
> EOF
net.bridge.bridge-nf-call-ip6tables = 1
net.bridge.bridge-nf-call-iptables = 1
root@senrsl:/home/senrsl# sysctl --system
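
The modules-load.d file only takes effect on the next boot; in the current session the br_netfilter module may not be loaded yet. A quick manual load and check (assuming nothing has loaded it already):

root@senrsl:/home/senrsl# modprobe br_netfilter
root@senrsl:/home/senrsl# lsmod | grep br_netfilter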

7) Install the kubeadm tools

root@senrsl:/home/senrsl# apt-get install -y apt-transport-https ca-certificates curl

root@senrsl:/home/senrsl#  curl -fsSLo /usr/share/keyrings/kubernetes-archive-keyring.gpg https://packages.cloud.google.com/apt/doc/apt-key.gpg
root@senrsl:/home/senrsl# echo "deb [signed-by=/usr/share/keyrings/kubernetes-archive-keyring.gpg] https://apt.kubernetes.io/ kubernetes-xenial main" | sudo tee /etc/apt/sources.list.d/kubernetes.list
root@senrsl:/home/senrsl# apt-get update

root@senrsl:/home/senrsl# apt-get install -y kubelet kubeadm kubectl

root@senrsl:/home/senrsl/k8s#  apt-mark hold kubelet kubeadm kubectl

8)reboot

After the reboot the IPs collided again; VMware on Apple M1 really is much less stable than the Intel version...

I cloned the VM too early; this is the point where it should have been cloned...

9) Verify the installation

senrsl@senrsl:~$ kubelet --version
Kubernetes v1.24.2
senrsl@senrsl:~$

Main components:

kubelet: runs on every cluster node and is responsible for starting Pods and containers;

kubeadm: the tool for initializing the cluster;

kubectl: the k8s command-line tool, used to deploy and manage applications.


10) Deploy the k8s master (control-plane) node

Everything up to this point runs on all machines; from here on, the steps differ per machine.

Run on the master machine:

root@senrsl:/home/senrsl/k8s# kubeadm init --kubernetes-version=1.24.2 --apiserver-advertise-address=172.16.5.128  --image-repository registry.aliyuncs.com/google_containers --service-cidr=10.96.0.0/12 --pod-network-cidr=10.244.0.0/16  --ignore-preflight-errors=Swap

...success.

Then you can join any number of worker nodes by running the following on each as root:

kubeadm join 172.16.5.128:6443 --token s9fwin.wyrz6fsm8swlvir0 \
        --discovery-token-ca-cert-hash sha256:7348dccbb862c71a41e6364b9db2f7d9012663ff87b171d61c7f8718b9b192b2
root@senrsl:/home/senrsl/k8s#

All kinds of errors along the way.

Status from runtime service failed" err="rpc error: code = Unimplemented desc = unknown service runtime.v1alpha2.RuntimeService

(The containerd package's generated config disables the CRI plugin by default, which causes this; deleting the config and restarting containerd, as below, re-enables CRI.)

root@senrsl:/home/senrsl/k8s# rm /etc/containerd/config.toml
root@senrsl:/home/senrsl/k8s# systemctl restart containerd

This error is likely caused by:
        - The kubelet is not running
        - The kubelet is unhealthy due to a misconfiguration of the node in some way (required cgroups disabled)

root@senrsl:/home/senrsl/k8s# systemctl status kubelet
● kubelet.service - kubelet: The Kubernetes Node Agent
     Loaded: loaded (/lib/systemd/system/kubelet.service; enabled; vendor preset: enabled)
    Drop-In: /etc/systemd/system/kubelet.service.d
             └─10-kubeadm.conf
     Active: activating (auto-restart) (Result: exit-code) since Mon 2022-07-11 07:39:59 UTC; 9s ago
       Docs: https://kubernetes.io/docs/home/
    Process: 5982 ExecStart=/usr/bin/kubelet $KUBELET_KUBECONFIG_ARGS $KUBELET_CONFIG_ARGS $KUBELET_KUBEADM_ARGS $KUBELET_EXTRA_ARGS (code=exit>
   Main PID: 5982 (code=exited, status=1/FAILURE)

root@senrsl:/home/senrsl/k8s#

Disable swap:

root@senrsl:/home/senrsl/k8s# swapoff -a
root@senrsl:/home/senrsl/k8s# vi /etc/fstab  # comment out the swap entry (the last line)
root@senrsl:/home/senrsl/k8s#
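
The same edit can be done non-interactively; a sketch, assuming the swap entry is the only fstab line containing "swap":

root@senrsl:/home/senrsl/k8s# sed -i '/swap/ s/^/#/' /etc/fstab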

FileAvailable--etc-kubernetes-manifests-kube-scheduler.yaml]: /etc/kubernetes/manifests/kube-scheduler.yaml already exists

root@senrsl:/home/senrsl/k8s# kubeadm reset

Finally it worked...

Continue on the master:

root@senrsl:/home/senrsl/k8s# mkdir -p $HOME/.kube
root@senrsl:/home/senrsl/k8s# sudo cp -i /etc/kubernetes/admin.conf $HOME/.kube/config
root@senrsl:/home/senrsl/k8s# sudo chown $(id -u):$(id -g) $HOME/.kube/config
root@senrsl:/home/senrsl/k8s# kubectl get nodes
NAME     STATUS     ROLES           AGE     VERSION
senrsl   NotReady   control-plane   3m32s   v1.24.2
root@senrsl:/home/senrsl/k8s#

11) Join the k8s node

Likewise, swap has to be turned off first.

root@senrslk8snode:/home/senrsl/k8s# kubeadm join 172.16.5.128:6443 --token s9fwin.wyrz6fsm8swlvir0 \
>         --discovery-token-ca-cert-hash sha256:7348dccbb862c71a41e6364b9db2f7d9012663ff87b171d61c7f8718b9b192b2
[preflight] Running pre-flight checks
[preflight] Reading configuration from the cluster...
[preflight] FYI: You can look at this config file with 'kubectl -n kube-system get cm kubeadm-config -o yaml'
[kubelet-start] Writing kubelet configuration to file "/var/lib/kubelet/config.yaml"
[kubelet-start] Writing kubelet environment file with flags to file "/var/lib/kubelet/kubeadm-flags.env"
[kubelet-start] Starting the kubelet
[kubelet-start] Waiting for the kubelet to perform the TLS Bootstrap...

This node has joined the cluster:
* Certificate signing request was sent to apiserver and a response was received.
* The Kubelet was informed of the new secure connection details.

Run 'kubectl get nodes' on the control-plane to see this node join the cluster.

root@senrslk8snode:/home/senrsl/k8s#

Then the node shows up on the master...

root@senrsl:/home/senrsl/k8s# kubectl get nodes
NAME            STATUS     ROLES           AGE     VERSION
senrsl          NotReady   control-plane   7m52s   v1.24.2
senrslk8snode   NotReady   <none>          38s     v1.24.2
root@senrsl:/home/senrsl/k8s#
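
A side note: bootstrap tokens expire (24h by default), so when joining a node later, a fresh join command can be printed on the master with:

root@senrsl:/home/senrsl/k8s# kubeadm token create --print-join-command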

12) Deploy the network plugin from the master

root@senrsl:/home/senrsl/k8s# curl -O https://raw.githubusercontent.com/coreos/flannel/master/Documentation/kube-flannel.yml
root@senrsl:/home/senrsl/k8s# kubectl apply -f kube-flannel.yml

13) Wait for ready
root@senrsl:/home/senrsl/k8s# kubectl get nodes
NAME            STATUS     ROLES           AGE     VERSION
senrsl          NotReady   control-plane   10m     v1.24.2
senrslk8snode   NotReady   <none>

Wait for the status to change from NotReady to Ready.

It's quick, about two minutes.

root@senrsl:/home/senrsl/k8s# kubectl get nodes
NAME            STATUS   ROLES           AGE     VERSION
senrsl          Ready    control-plane   11m     v1.24.2
senrslk8snode   Ready    <none>          4m45s   v1.24.2
root@senrsl:/home/senrsl/k8s#

14) Check the pods

root@senrsl:/home/senrsl/k8s# kubectl get pods -n kube-system
NAME                             READY   STATUS    RESTARTS   AGE
coredns-74586cf9b6-5w4m4         1/1     Running   0          12m
coredns-74586cf9b6-6k9h9         1/1     Running   0          12m
etcd-senrsl                      1/1     Running   1          12m
kube-apiserver-senrsl            1/1     Running   1          12m
kube-controller-manager-senrsl   1/1     Running   1          12m
kube-proxy-nspjc                 1/1     Running   0          12m
kube-proxy-vrdkb                 1/1     Running   0          5m29s
kube-scheduler-senrsl            1/1     Running   1          12m
root@senrsl:/home/senrsl/k8s#

Four o'clock; off to buy groceries and take a COVID test...

4. Steps to deploy a containerized app on k8s

1) Build an image with a Dockerfile, or pull one from a registry;

2) Manage the Pod through a controller; the image starts as a container, and the container lives inside a Pod;

3) Expose the application so the outside world can access it.

5. Deploying Nginx in a Pod

root@senrsl:/home/senrsl/k8s# kubectl create deployment nginx --image=nginx
deployment.apps/nginx created
root@senrsl:/home/senrsl/k8s# kubectl expose deploy nginx --port=80 --type=NodePort
service/nginx exposed

Inspection commands:

root@senrsl:/home/senrsl/k8s# kubectl get pod,svc
NAME                        READY   STATUS    RESTARTS   AGE
pod/nginx-8f458dc5b-ml78k   1/1     Running   0          43s

NAME                 TYPE        CLUSTER-IP     EXTERNAL-IP   PORT(S)        AGE
service/kubernetes   ClusterIP   10.96.0.1      <none>        443/TCP        4h19m
service/nginx        NodePort    10.96.126.74   <none>        80:30811/TCP   15s
root@senrsl:/home/senrsl/k8s# kubectl get node
NAME            STATUS   ROLES           AGE     VERSION
senrsl          Ready    control-plane   4h20m   v1.24.2
senrslk8snode   Ready    <none>          4h13m   v1.24.2
root@senrsl:/home/senrsl/k8s# kubectl get service
NAME         TYPE        CLUSTER-IP     EXTERNAL-IP   PORT(S)        AGE
kubernetes   ClusterIP   10.96.0.1      <none>        443/TCP        4h20m
nginx        NodePort    10.96.126.74   <none>        80:30811/TCP   68s
root@senrsl:/home/senrsl/k8s# kubectl get deployment
NAME    READY   UP-TO-DATE   AVAILABLE   AGE
nginx   1/1     1            1           116s
root@senrsl:/home/senrsl/k8s# kubectl get pod
NAME                    READY   STATUS    RESTARTS   AGE
nginx-8f458dc5b-ml78k   1/1     Running   0          2m3s

Access:

http://172.16.5.128:30811/

http://172.16.5.129:30811/

128 is the master, where the commands above were run; 129 is the node. The node picks it up automatically: a NodePort service opens the same port on every node.

6. Deploying Tomcat in a Pod

root@senrsl:/home/senrsl/k8s# kubectl create deployment tomcat --image=tomcat
deployment.apps/tomcat created
root@senrsl:/home/senrsl/k8s# kubectl expose deployment tomcat --port=8080 --type=NodePort
service/tomcat exposed
root@senrsl:/home/senrsl/k8s# kubectl get pod,svc
NAME                         READY   STATUS              RESTARTS   AGE
pod/nginx-8f458dc5b-ml78k    1/1     Running             0          10m
pod/tomcat-84c98b5bb-m629j   0/1     ContainerCreating   0          32s

NAME                 TYPE        CLUSTER-IP      EXTERNAL-IP   PORT(S)          AGE
service/kubernetes   ClusterIP   10.96.0.1       <none>        443/TCP          4h29m
service/nginx        NodePort    10.96.126.74    <none>        80:30811/TCP     10m
service/tomcat       NodePort    10.104.63.100   <none>        8080:30239/TCP   11s
root@senrsl:/home/senrsl/k8s#

Access:

http://172.16.5.128:30239/

http://172.16.5.129:30239/

You need to wait a bit after the commands run; deployment takes time.

7. How the pieces relate

master controls the Node -> service -> deployment controller -> pod -> docker

8. Deploying a project

1) Deploy via commands

kubectl create deployment <name> --image=<image> --dry-run=client -o yaml

-o can be yaml or json

kubectl create deployment <name> --image=<image> --dry-run=client -o yaml > deploy.yaml

saves it to a local yaml file.

2) Deploy via yml

kubectl apply -f deploy.yaml

is equivalent to

kubectl create deployment <name> --image=<image>

except that the yml file needs a setting to use the local image; the default is to pull from a remote registry:

containers:

    imagePullPolicy: Never  # use the locally built image, never pull


After deploying, expose the service the same way and access it.

The expose step can also be dry-run with --dry-run=client -o yaml to generate a file, as sketched below.
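
For example (a sketch; <name> is a placeholder):

kubectl expose deployment <name> --port=80 --type=NodePort --dry-run=client -o yaml > service.yaml
kubectl apply -f service.yaml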

9. Deploying the dashboard

Source: https://github.com/kubernetes/dashboard

Deploy it the yml way:

root@senrsl:/home/senrsl/k8s# kubectl apply -f https://raw.githubusercontent.com/kubernetes/dashboard/v2.6.0/aio/deploy/recommended.yaml
namespace/kubernetes-dashboard created
serviceaccount/kubernetes-dashboard created
service/kubernetes-dashboard created
secret/kubernetes-dashboard-certs created
secret/kubernetes-dashboard-csrf created
secret/kubernetes-dashboard-key-holder created
configmap/kubernetes-dashboard-settings created
role.rbac.authorization.k8s.io/kubernetes-dashboard created
clusterrole.rbac.authorization.k8s.io/kubernetes-dashboard created
rolebinding.rbac.authorization.k8s.io/kubernetes-dashboard created
clusterrolebinding.rbac.authorization.k8s.io/kubernetes-dashboard created
deployment.apps/kubernetes-dashboard created
service/dashboard-metrics-scraper created
deployment.apps/dashboard-metrics-scraper created
root@senrsl:/home/senrsl/k8s#

Confirm the deployment succeeded:

root@senrsl:/home/senrsl/k8s# kubectl get pod -n kubernetes-dashboard
NAME                                        READY   STATUS    RESTARTS   AGE
dashboard-metrics-scraper-8c47d4b5d-mt2l8   1/1     Running   0          3m20s
kubernetes-dashboard-5676d8b865-rvm9v       1/1     Running   0          3m20s
root@senrsl:/home/senrsl/k8s#

Then it turns out this address is only reachable from inside the VM, not from outside...

recommended.yaml needs a change:

spec:
  type: NodePort
  ports:
    - port: 443
      targetPort: 8443
      nodePort: 30001
  selector:
    k8s-app: kubernetes-dashboard

Then re-apply:

root@senrsl:/home/senrsl/k8s# vi recommended.yaml
root@senrsl:/home/senrsl/k8s# kubectl apply -f recommended.yaml

Checking again, port 30001 is now exposed:

root@senrsl:/home/senrsl/k8s# kubectl get pod,svc  -n kubernetes-dashboard
NAME                                            READY   STATUS    RESTARTS   AGE
pod/dashboard-metrics-scraper-8c47d4b5d-mt2l8   1/1     Running   0          13m
pod/kubernetes-dashboard-5676d8b865-rvm9v       1/1     Running   0          13m

NAME                                TYPE        CLUSTER-IP       EXTERNAL-IP   PORT(S)         AGE
service/dashboard-metrics-scraper   ClusterIP   10.100.134.163   <none>        8000/TCP        13m
service/kubernetes-dashboard        NodePort    10.103.234.165   <none>        443:30001/TCP   13m
root@senrsl:/home/senrsl/k8s#

Access https://172.16.5.128:30001/

It asks for a token.

Generate a token:

root@senrsl:/home/senrsl/k8s# kubectl create serviceaccount dashboard-admin -n kube-system
serviceaccount/dashboard-admin created
root@senrsl:/home/senrsl/k8s# kubectl create clusterrolebinding dashboard-admin --clusterrole=cluster-admin --serviceaccount=kube-system:dashboard-admin
clusterrolebinding.rbac.authorization.k8s.io/dashboard-admin created
root@senrsl:/home/senrsl/k8s# kubectl describe secrets -n kube-system $(kubectl -n kube-system get secret | awk '/dashboard-admin/{print $1}')

That ran, but no token came out.

It looks like newer versions changed this (ServiceAccount token secrets are no longer auto-generated); the token is created explicitly, with an expiry:

root@senrsl:/home/senrsl/k8s# kubectl create token dashboard-admin -n kube-system --duration=87600h
eyJhbGciOiJSUzI1NiIsImtpZCI6IkJncDBwdTJWQURiRUl0YzdfMk1rWUhvT1pvd1FHMjg4VG9Vdl9peW9QUkUifQ.eyJhdWQiOlsiaHR0cHM6Ly9rdWJlcm5ldGVzLmRlZmF1bHQuc3ZjLmNsdXN0ZXIubG9jYWwiXSwiZXhwIjoxOTcyOTA3NzMxLCJpYXQiOjE2NTc1NDc3MzEsImlzcyI6Imh0dHBzOi8va3ViZXJuZXRlcy5kZWZhdWx0LnN2Yy5jbHVzdGVyLmxvY2FsIiwia3ViZXJuZXRlcy5pbyI6eyJuYW1lc3BhY2UiOiJrdWJlLXN5c3RlbSIsInNlcnZpY2VhY2NvdW50Ijp7Im5hbWUiOiJkYXNoYm9hcmQtYWRtaW4iLCJ1aWQiOiI3M2M2M2QxYS1jYmVlLTQ5MDctOGE3Yy03MmMzYTQyNGIwYTAifX0sIm5iZiI6MTY1NzU0NzczMSwic3ViIjoic3lzdGVtOnNlcnZpY2VhY2NvdW50Omt1YmUtc3lzdGVtOmRhc2hib2FyZC1hZG1pbiJ9.HCP1KeHFTeDh9Fw8Qg6kcaw0ycvTGajt6zfie0ZvyQcLhkzuvaed4FGN_tKwS5ydNwGs7ZCvsnVv3UjABUi3k3ssUECbs0FMNwWXpFgai4jCoKqT2hvgJ4ZenoIA5TZgSJHsDtSWrX5hOUSqpTFHyz2a23lFfcOdo79pt1WmofOaA4b2vO8zLRpKJNwiokAxFbGJ1SJHZynsHpQycKXQ8KtCINXUaYCQr5XtP0Pv_7glwiv3QNSB7E9X55XHdkjl01goHHZMx4V9YI55xiTNZL2tEsl3CF0IxsgfuyTE_PPcidRMjNAL03hd0yh52Xr9GPa45ziHcSILuwIXh_MXrQ
root@senrsl:/home/senrsl/k8s#

Copy the token and log in to the dashboard.

(screenshot 1)

10. Harbor

A private Docker registry.

11. Ways to expose an application

1) NodePort: each node opens a port above 30000.

    Each port can serve only one service, only ports 30000-32767 are available, and if the IP changes everything has to be updated by hand.


The three kinds of ports in the yaml file (see the sketch below):

nodePort: the port external machines can reach.

targetPort: the port inside the container, matching the port exposed when the image was built.

port: the port services use to reach each other.
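
As a sketch, all three appear in a Service spec like this (values are illustrative, echoing the nginx example above):

apiVersion: v1
kind: Service
metadata:
  name: nginx
spec:
  type: NodePort
  selector:
    app: nginx
  ports:
    - port: 80         # service-to-service port inside the cluster
      targetPort: 80   # port inside the container
      nodePort: 30811  # port opened on every node for external access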


2) LoadBalancer

A load balancer, deeply coupled to the cloud platform.

3) Ingress

Acts like a gateway: user -> ingress -> multiple services -> multiple pods.

Ingress is an API resource object in a k8s cluster, essentially a cluster gateway. It must be installed separately.

12. Ingress Nginx

https://kubernetes.io/zh-cn/docs/concepts/services-networking/ingress/

https://github.com/kubernetes/ingress-nginx

1) Confirm the environment

root@senrsl:/home/senrsl/k8s# kubectl delete service nginx
service "nginx" deleted
root@senrsl:/home/senrsl/k8s# kubectl delete deployment nginx
deployment.apps "nginx" deleted
root@senrsl:/home/senrsl/k8s# kubectl get pods
NAME                     READY   STATUS    RESTARTS   AGE
tomcat-84c98b5bb-m629j   1/1     Running   0          126m
root@senrsl:/home/senrsl/k8s# kubectl get node
NAME            STATUS   ROLES           AGE     VERSION
senrsl          Ready    control-plane   6h36m   v1.24.2
senrslk8snode   Ready    <none>          6h28m   v1.24.2
root@senrsl:/home/senrsl/k8s# kubectl get service
NAME         TYPE        CLUSTER-IP      EXTERNAL-IP   PORT(S)          AGE
kubernetes   ClusterIP   10.96.0.1       <none>        443/TCP          6h36m
tomcat       NodePort    10.104.63.100   <none>        8080:30239/TCP   126m
root@senrsl:/home/senrsl/k8s# kubectl get deploy
NAME     READY   UP-TO-DATE   AVAILABLE   AGE
tomcat   1/1     1            1           127m
root@senrsl:/home/senrsl/k8s# kubectl get pod
NAME                     READY   STATUS    RESTARTS   AGE
tomcat-84c98b5bb-m629j   1/1     Running   0          127m
root@senrsl:/home/senrsl/k8s#

2) Install nginx

root@senrsl:/home/senrsl/k8s# kubectl create deployment nginx --image=nginx
deployment.apps/nginx created
root@senrsl:/home/senrsl/k8s# kubectl expose deployment nginx --port=80 --target-port=80 --type=NodePort
service/nginx exposed
root@senrsl:/home/senrsl/k8s# 

3) Install nginx ingress

https://kubernetes.github.io/ingress-nginx/deploy/

root@senrsl:/home/senrsl/k8s# kubectl apply -f https://raw.githubusercontent.com/kubernetes/ingress-nginx/controller-v1.2.1/deploy/static/provider/cloud/deploy.yaml


Addendum

https://raw.githubusercontent.com/kubernetes/ingress-nginx/controller-v1.2.1/deploy/static/provider/cloud/deploy.yaml

needs a modification:

find wait-shutdown; the key above it is containers, and above that is the Pod spec.

Add hostNetwork: true inside that spec, as sketched below.
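
Roughly, the edited part of deploy.yaml ends up like this (only the relevant lines shown; the container name follows the upstream manifest):

    spec:
      hostNetwork: true          # added: bind the controller to the node's own network
      containers:
        - name: controller
          ...
          lifecycle:
            preStop:
              exec:
                command:
                  - /wait-shutdown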

Delete and reinstall.

Delete the services:

root@senrsl:/home/senrsl/k8s# kubectl get service -n ingress-nginx
NAME                                 TYPE           CLUSTER-IP      EXTERNAL-IP   PORT(S)                      AGE
ingress-nginx-controller             LoadBalancer   10.99.134.145   <pending>     80:30663/TCP,443:30307/TCP   31m
ingress-nginx-controller-admission   ClusterIP      10.110.30.48    <none>        443/TCP                      31m
root@senrsl:/home/senrsl/k8s# kubectl delete service ingress-nginx-controller -n ingress-nginx
service "ingress-nginx-controller" deleted
root@senrsl:/home/senrsl/k8s# kubectl delete service ingress-nginx-controller-admission -n ingress-nginx
service "ingress-nginx-controller-admission" deleted
root@senrsl:/home/senrsl/k8s# kubectl get service -n ingress-nginx
No resources found in ingress-nginx namespace.
root@senrsl:/home/senrsl/k8s#

Delete the deployment:

root@senrsl:/home/senrsl/k8s# kubectl get deploy -n ingress-nginx
NAME                       READY   UP-TO-DATE   AVAILABLE   AGE
ingress-nginx-controller   1/1     1            1           29m
root@senrsl:/home/senrsl/k8s# kubectl delete deploy ingress-nginx-controller -n ingress-nginx
deployment.apps "ingress-nginx-controller" deleted
root@senrsl:/home/senrsl/k8s#

Delete the pods:

root@senrsl:/home/senrsl/k8s# kubectl get pod -n ingress-nginx
NAME                                   READY   STATUS      RESTARTS   AGE
ingress-nginx-admission-create-jntdr   0/1     Completed   0          32m
ingress-nginx-admission-patch-dmzlh    0/1     Completed   0          32m
root@senrsl:/home/senrsl/k8s# kubectl delete pod ingress-nginx-admission-create-jntdr -n ingress-nginx
pod "ingress-nginx-admission-create-jntdr" deleted
root@senrsl:/home/senrsl/k8s# kubectl delete pod ingress-nginx-admission-patch-dmzlh -n ingress-nginx
pod "ingress-nginx-admission-patch-dmzlh" deleted
root@senrsl:/home/senrsl/k8s# kubectl get pod -n ingress-nginx
No resources found in ingress-nginx namespace.
root@senrsl:/home/senrsl/k8s#


4) Verify the installation

root@senrsl:/home/senrsl/k8s# kubectl get service -n ingress-nginx
NAME                                 TYPE           CLUSTER-IP      EXTERNAL-IP   PORT(S)                      AGE
ingress-nginx-controller             LoadBalancer   10.99.134.145   <pending>     80:30663/TCP,443:30307/TCP   55s
ingress-nginx-controller-admission   ClusterIP      10.110.30.48    <none>        443/TCP                      55s
root@senrsl:/home/senrsl/k8s# kubectl get deploy  -n ingress-nginx
NAME                       READY   UP-TO-DATE   AVAILABLE   AGE
ingress-nginx-controller   0/1     1            0           73s
root@senrsl:/home/senrsl/k8s# kubectl get pod  -n ingress-nginx
NAME                                        READY   STATUS              RESTARTS   AGE
ingress-nginx-admission-create-jntdr        0/1     ContainerCreating   0          82s
ingress-nginx-admission-patch-dmzlh         0/1     ContainerCreating   0          82s
ingress-nginx-controller-778667bc4b-qbrvw   0/1     ContainerCreating   0          82s
root@senrsl:/home/senrsl/k8s#

This time it clearly takes much longer...

Re-check the environment:

root@senrsl:/home/senrsl/k8s# kubectl get service -n ingress-nginx
NAME                                 TYPE           CLUSTER-IP      EXTERNAL-IP   PORT(S)                      AGE
ingress-nginx-controller             LoadBalancer   10.111.24.240   <pending>     80:30665/TCP,443:31988/TCP   25s
ingress-nginx-controller-admission   ClusterIP      10.109.227.12   <none>        443/TCP                      25s
root@senrsl:/home/senrsl/k8s# kubectl get deploy  -n ingress-nginx
NAME                       READY   UP-TO-DATE   AVAILABLE   AGE
ingress-nginx-controller   1/1     1            1           40s
root@senrsl:/home/senrsl/k8s# kubectl get pod  -n ingress-nginx
NAME                                       READY   STATUS    RESTARTS   AGE
ingress-nginx-controller-7ffc958f5-5nn4n   1/1     Running   0          46s
root@senrsl:/home/senrsl/k8s# kubectl get pod  -n ingress-nginx
NAME                                       READY   STATUS    RESTARTS   AGE
ingress-nginx-controller-7ffc958f5-5nn4n   1/1     Running   0          69s
root@senrsl:/home/senrsl/k8s#


5) Apply the ingress rule

root@senrsl:/home/senrsl/k8s# kubectl apply -f ingress_nginx_rule.yml
ingress.networking.k8s.io/k8s-ingress created
root@senrsl:/home/senrsl/k8s# kubectl get ing
NAME          CLASS    HOSTS         ADDRESS   PORTS   AGE
k8s-ingress   <none>   www.abc.com             80      6s
root@senrsl:/home/senrsl/k8s#


The yml file:

apiVersion: networking.k8s.io/v1
kind: Ingress
metadata:
  name: k8s-ingress
spec:
  rules:
  - host: www.abc.com
    http:
      paths:
      - pathType: Prefix
        path: /
        backend:
          service:
            name: nginx
            port:
              number: 80

You still have to edit the local hosts file...

It looks like the nginx service's port 80 got bound to that host name...

Then get ingress ought to show the actual address it points at...

Docs: https://kubernetes.github.io/ingress-nginx/user-guide/basic-usage/
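
A quick test without touching /etc/hosts is to send the Host header by hand; a sketch, where <node-ip> is the IP of the node running the controller pod (find it with kubectl get pod -n ingress-nginx -o wide):

curl -H "Host: www.abc.com" http://<node-ip>/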

13. Deployment workflow

1) Package the project as a jar or war;

2) Build the project image (Dockerfile);

3) Deploy the image with k8s (commands or yml);

4) Expose the service (NodePort as a stopgap, Ingress for a unified entry point).


14. Writing a simple Spring Cloud Alibaba project

A provider, a consumer, and a gateway.

Install Nacos.

Following the Spring Cloud Alibaba version compatibility notes, I chose:

Spring Cloud Alibaba 2.2.8.RELEASE

SpringCloud Hoxton.SR12

SpringBoot 2.3.12.RELEASE

Sentinel 1.8.4

Nacos 2.1.0

RocketMQ 4.9.3

Seata 1.5.1

1) Install Nacos

Download 2.1.0: https://github.com/alibaba/nacos/releases

Unpack and run:

root@senrsl:/home/senrsl/nacos/nacos/bin# java -version
openjdk version "1.8.0_312"
OpenJDK Runtime Environment (build 1.8.0_312-8u312-b07-0ubuntu1~20.04-b07)
OpenJDK 64-Bit Server VM (build 25.312-b07, mixed mode)
root@senrsl:/home/senrsl/nacos/nacos/bin# ./startup.sh -m standalone
/usr/lib/jvm/java-8-openjdk-arm64/bin/java -Djava.ext.dirs=/usr/lib/jvm/java-8-openjdk-arm64/jre/lib/ext:/usr/lib/jvm/java-8-openjdk-arm64/lib/ext  -Xms512m -Xmx512m -Xmn256m -Dnacos.standalone=true -Dnacos.member.list= -Xloggc:/home/senrsl/nacos/nacos/logs/nacos_gc.log -verbose:gc -XX:+PrintGCDetails -XX:+PrintGCDateStamps -XX:+PrintGCTimeStamps -XX:+UseGCLogFileRotation -XX:NumberOfGCLogFiles=10 -XX:GCLogFileSize=100M -Dloader.path=/home/senrsl/nacos/nacos/plugins/health,/home/senrsl/nacos/nacos/plugins/cmdb,/home/senrsl/nacos/nacos/plugins/selector -Dnacos.home=/home/senrsl/nacos/nacos -jar /home/senrsl/nacos/nacos/target/nacos-server.jar  --spring.config.additional-location=file:/home/senrsl/nacos/nacos/conf/ --logging.config=/home/senrsl/nacos/nacos/conf/nacos-logback.xml --server.max-http-header-size=524288
nacos is starting with standalone
nacos is starting,you can check the /home/senrsl/nacos/nacos/logs/start.out
root@senrsl:/home/senrsl/nacos/nacos/bin#


Access http://172.16.5.128:8848/nacos/index.html


2) The content provider

Request URI does not contain a valid hostname

With Ribbon in play, the service name must not contain underscores... ugh.

controller

@RefreshScope
@RestController
public class EchoController {

//    @Value("${spring.application.name}")
    @Value("${user.name}")  //这俩是配置在了nacos配置列表里
    private String name;

//    @Value("${server.port}")
    @Value("${user.age}")
    private int age;


    @RequestMapping("/echo")
    public String echo() {
        return "读取到的信息 " + name + "  " + age;
    }

}

bootstrap.yml

# note: Chinese comments in this file tended to end up garbled (encoding)
spring:
  application:
    name: test-provider
  cloud:
    nacos:
      # config center
      config:
        server-addr: 172.16.5.128:8848
      # service discovery
      discovery:
        server-addr: 172.16.5.128:8848

3)消费者consumer

controller

@Slf4j
@RefreshScope  // when a value changes in the Nacos config center, it is picked up and refreshed automatically
@RestController
public class EchoController {

    @Value("${user.name}")
    private String name;

    @Value("${user.age}")
    private int age;

    @Value("${provider.server.name}")
    private String serviceName;
    @Autowired
    private RestTemplate restTemplate;


    // the Feign way (feign 4-2)
    @Autowired
    private EchoServiceFeignClient echoServiceFeign;

    @RequestMapping("/echo")
    public String echo() {
        log.info("读取到的 配置 {}  and {} and {}", name, age, serviceName);
        String resultAll = "本服务获取到的 " + name + "   " + age + "  " + serviceName + "  ";

        //restTemplate
        String url = "http://" + serviceName + "/echo";
//        String url = "http://test-provider/echo"; // service names must not contain underscores...
        String result = restTemplate.getForObject(url, String.class);
        log.info("rest template is {}", result);


        //feign 4-3
        String result2 = echoServiceFeign.echo();

        resultAll = resultAll + " \n restTemp方式: " + result + " \n feign方式:" + result2;
        return resultAll;
    }

}

feign client

@FeignClient("test-provider")
public interface EchoServiceFeignClient {

    // alternatively write /echo here, but then @FeignClient needs the full URL, e.g. @FeignClient(name = "test-provider", url = "http://localhost:8080/test-provider")
    @GetMapping("/echo")
    String echo();

}

Startup class:

@SpringBootApplication
@EnableFeignClients //feign 4-4
public class TestConsumerApplication {

    public static void main(String[] args) {
        SpringApplication.run(TestConsumerApplication.class, args);
    }


    @Bean
    @LoadBalanced
    public RestTemplate getRestTemplate() {
        return new RestTemplate();
    }

}

bootstrap.yml

spring:
  application:
    name: test-consumer
  cloud:
    nacos:
      # config center
      config:
        server-addr: 172.16.5.128:8848
      # service discovery
      discovery:
        server-addr: 172.16.5.128:8848
server:
  port: 8090

4) The gateway

application.yml

server:
  port: 80

spring:
  application:
    name: test-gateway
  cloud:
    nacos:
      discovery:
        server-addr: 172.16.5.128:8848
    gateway:
      discovery:
        locator:
          enabled: true # discover services through the gateway
      routes:
        - id: route111
          uri: lb://test-consumer
          predicates: # matching conditions
            - Path=/**
            #- After=2020-08-15T22:35:25.335+08:00[Asia/Shanghai]
            #- Before=2020-09-15T17:35:25.335+08:00[Asia/Shanghai]
            #- Between=2020-08-13T17:35:25.335+08:00[Asia/Shanghai], 2020-08-14T17:35:25.335+08:00[Asia/Shanghai]
            #- Cookie=token, 123456
            #- Header=X-Request-Id, \d+
            #- Query=token
            #- Token=123456
            #- AccessTime=06:00, 18:00
          filters:
            - AddRequestHeader=X-Request-red, blue
            - AddRequestParameter=color, red


Build jars for all three, ready to deploy to k8s...


15. Deploying this project by hand

In practice this is hooked up to tooling for automated deployment.

K8s is best suited to stateless services.

Stateful services like Nacos and MySQL are usually installed directly on machines rather than run through k8s.

Write a Dockerfile for each piece and deploy them to k8s.

Put an Ingress on the gateway as the unified entry point.

Write Dockerfile_provider:

FROM dcjz/jdk:1.8u202v5
# ARG JAR_FILE
ADD test_provider-0.0.1-SNAPSHOT.jar /
# EXPOSE 8001 9015
ENTRYPOINT ["java", "-jar","/test_provider-0.0.1-SNAPSHOT.jar"]

Build the image:

root@senrsl:/home/senrsl/k8s# docker build -t test-provider -f Dockerfile_provider .
root@senrsl:/home/senrsl/k8s# docker images
REPOSITORY      TAG         IMAGE ID       CREATED          SIZE
test-provider   latest      b9900bb4f3e0   20 seconds ago   489MB
dcjz/jdk        1.8u202v5   cf99cba415ec   4 days ago       439MB
root@senrsl:/home/senrsl/k8s#

Deploy with k8s.

Generate the yml file:

root@senrsl:/home/senrsl/k8s# kubectl create deployment test-provider --image=test-provider --dry-run=client -o yaml > test-provider.yaml

Set the image to come from the local store:

    spec:
      containers:
      - image: test-provider
        name: test-provider
        imagePullPolicy: Never

Apply the yml file:

root@senrsl:/home/senrsl/k8s# kubectl apply -f test-provider.yaml

The deployment status shows ErrImageNeverPull.

The worker node needs the image too... otherwise, when the Pod lands on the worker, the image can't be found there and the Pod reports exactly this error.

Strictly speaking, having it only on the node should be enough, since that's where the Pod runs.

Later on a proper image registry will be needed...
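
Until then, one way to move the image over is export-and-load (a sketch; paths are illustrative, and whether loading into Docker is enough depends on which image store the kubelet's runtime reads — see error 4 below):

root@senrsl:/home/senrsl/k8s# docker save test-provider:latest -o test-provider.tar
root@senrsl:/home/senrsl/k8s# scp test-provider.tar senrsl@172.16.5.129:/home/senrsl/k8s/
root@senrslk8snode:/home/senrsl/k8s# docker load -i test-provider.tar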

The provider doesn't need to be reachable from outside, so there's no need to expose a port for it.

The consumer and gateway go through the same steps.

Expose the ports.


Error 1:

The VMware Ubuntu VM wouldn't boot after a restart; after some digging, an apt upgrade had apparently updated the kernel.

It hangs at "exiting boot services and installing virtual address map".

At boot, pick the boot option for the older 5.4.0-100 kernel and it starts fine...

Error 2:

The connection to the server localhost:8080 was refused - did you specify the right host or port?

Because the node has no kubectl config; it never did...

Check the status:

root@senrsl:/home/senrsl/k8s# kubectl describe pod test-provider-6c6f7d6cd9-l8xc6
Name:         test-provider-6c6f7d6cd9-l8xc6
Namespace:    default
Priority:     0
Node:         senrslk8snode/172.16.5.129
Start Time:   Tue, 12 Jul 2022 14:15:30 +0000
Labels:       app=test-provider
              pod-template-hash=6c6f7d6cd9
Annotations:  <none>
Status:       Pending
IP:           10.244.1.21
IPs:
  IP:           10.244.1.21
Controlled By:  ReplicaSet/test-provider-6c6f7d6cd9
Containers:
  test-provider:
    Container ID:  
    Image:          test-provider
    Image ID:      
    Port:           <none>
    Host Port:      <none>
    State:          Waiting
      Reason:       ErrImageNeverPull
    Ready:          False
    Restart Count:  0
    Environment:    <none>
    Mounts:
      /var/run/secrets/kubernetes.io/serviceaccount from kube-api-access-lxz64 (ro)
Conditions:
  Type              Status
  Initialized       True
  Ready             False
  ContainersReady   False
  PodScheduled      True
Volumes:
  kube-api-access-lxz64:
    Type:                    Projected (a volume that contains injected data from multiple sources)
    TokenExpirationSeconds:  3607
    ConfigMapName:           kube-root-ca.crt
    ConfigMapOptional:       <nil>
    DownwardAPI:             true
QoS Class:                   BestEffort
Node-Selectors:              <none>
Tolerations:                 node.kubernetes.io/not-ready:NoExecute op=Exists for 300s
                             node.kubernetes.io/unreachable:NoExecute op=Exists for 300s
Events:
  Type     Reason             Age               From               Message
  ----     ------             ----              ----               -------
  Warning  FailedScheduling   2m35s             default-scheduler  0/2 nodes are available: 1 node(s) had untolerated taint {node-role.kubernetes.io/master: }, 1 node(s) had untolerated taint {node.kubernetes.io/disk-pressure: }. preemption: 0/2 nodes are available: 2 Preemption is not helpful for scheduling.
  Normal   Scheduled          61s               default-scheduler  Successfully assigned default/test-provider-6c6f7d6cd9-l8xc6 to senrslk8snode
  Warning  ErrImageNeverPull  6s (x6 over 61s)  kubelet            Container image "test-provider" is not present with pull policy of Never
  Warning  Failed             6s (x6 over 61s)  kubelet            Error: ErrImageNeverPull
root@senrsl:/home/senrsl/k8s#

Error 3:

Pods and deploys all stopped being Ready.

0/2 nodes are available: 1 node(s) had untolerated taint {node-role.kubernetes.io/master: }, 1 node(s) had untolerated taint {node.kubernetes.io/disk-pressure: }. preemption: 0/2 nodes are available: 2 Preemption is not helpful for scheduling.

Check whether the node is tainted:

root@senrsl:/home/senrsl/k8s# kubectl describe node senrslk8snode | grep Taint
Taints:             <none>
root@senrsl:/home/senrsl/k8s# 

After much fiddling, it changed again:

root@senrsl:/home/senrsl/k8s# kubectl describe node senrslk8snode | grep Taint
Taints:             node.kubernetes.io/disk-pressure:NoSchedule
root@senrsl:/home/senrsl/k8s#

There are three taint effects:

NoSchedule: Pods will never be scheduled onto the node
PreferNoSchedule: scheduling is avoided where possible (but can still happen)
NoExecute: nothing new is scheduled, and Pods already on the node are evicted

Strange, it comes and goes...
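
The flapping actually fits: the kubelet adds and removes node.kubernetes.io/disk-pressure on its own as ephemeral storage crosses the eviction threshold, which matches the FreeDiskSpaceFailed events below. So the real fix is freeing disk on the node; a sketch (the --prune flag assumes a reasonably recent crictl):

root@senrslk8snode:/home/senrsl# df -h /
root@senrslk8snode:/home/senrsl# crictl rmi --prune   # remove unused images from the runtime's store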


Check the node status:

root@senrsl:/home/senrsl/k8s# kubectl describe node senrslk8snode
Name:               senrslk8snode
Roles:              <none>
Labels:             beta.kubernetes.io/arch=arm64
                    beta.kubernetes.io/os=linux
                    kubernetes.io/arch=arm64
                    kubernetes.io/hostname=senrslk8snode
                    kubernetes.io/os=linux
Annotations:        flannel.alpha.coreos.com/backend-data: {"VNI":1,"VtepMAC":"2a:0e:c7:c6:d6:4d"}
                    flannel.alpha.coreos.com/backend-type: vxlan
                    flannel.alpha.coreos.com/kube-subnet-manager: true
                    flannel.alpha.coreos.com/public-ip: 172.16.5.129
                    kubeadm.alpha.kubernetes.io/cri-socket: unix:///var/run/containerd/containerd.sock
                    node.alpha.kubernetes.io/ttl: 0
                    volumes.kubernetes.io/controller-managed-attach-detach: true
CreationTimestamp:  Mon, 11 Jul 2022 07:53:55 +0000
Taints:             <none>
Unschedulable:      false
Lease:
  HolderIdentity:  senrslk8snode
  AcquireTime:     <unset>
  RenewTime:       Tue, 12 Jul 2022 14:33:08 +0000
Conditions:
  Type                 Status  LastHeartbeatTime                 LastTransitionTime                Reason                       Message
  ----                 ------  -----------------                 ------------------                ------                       -------
  NetworkUnavailable   False   Tue, 12 Jul 2022 21:56:22 +0000   Tue, 12 Jul 2022 21:56:22 +0000   FlannelIsUp                  Flannel is running on this node
  MemoryPressure       False   Tue, 12 Jul 2022 14:33:13 +0000   Tue, 12 Jul 2022 13:56:27 +0000   KubeletHasSufficientMemory   kubelet has sufficient memory available
  DiskPressure         False   Tue, 12 Jul 2022 14:33:13 +0000   Tue, 12 Jul 2022 14:28:50 +0000   KubeletHasNoDiskPressure     kubelet has no disk pressure
  PIDPressure          False   Tue, 12 Jul 2022 14:33:13 +0000   Tue, 12 Jul 2022 13:56:27 +0000   KubeletHasSufficientPID      kubelet has sufficient PID available
  Ready                True    Tue, 12 Jul 2022 14:33:13 +0000   Tue, 12 Jul 2022 13:56:27 +0000   KubeletReady                 kubelet is posting ready status. AppArmor enabled
Addresses:
  InternalIP:  172.16.5.129
  Hostname:    senrslk8snode
Capacity:
  cpu:                2
  ephemeral-storage:  10255636Ki
  hugepages-1Gi:      0
  hugepages-2Mi:      0
  hugepages-32Mi:     0
  hugepages-64Ki:     0
  memory:             4013104Ki
  pods:               110
Allocatable:
  cpu:                2
  ephemeral-storage:  9451594122
  hugepages-1Gi:      0
  hugepages-2Mi:      0
  hugepages-32Mi:     0
  hugepages-64Ki:     0
  memory:             3910704Ki
  pods:               110
System Info:
  Machine ID:                 b6d041a2c6b04f0390bb4963c095b713
  System UUID:                68b84d56-8698-0d32-dd9b-8bc1f03e5072
  Boot ID:                    0981e7e5-1bdc-4f59-a858-1463bbc6b333
  Kernel Version:             5.4.0-100-generic
  OS Image:                   Ubuntu 20.04.4 LTS
  Operating System:           linux
  Architecture:               arm64
  Container Runtime Version:  containerd://1.6.6
  Kubelet Version:            v1.24.2
  Kube-Proxy Version:         v1.24.2
PodCIDR:                      10.244.1.0/24
PodCIDRs:                     10.244.1.0/24
Non-terminated Pods:          (8 in total)
  Namespace                   Name                                         CPU Requests  CPU Limits  Memory Requests  Memory Limits  Age
  ---------                   ----                                         ------------  ----------  ---------------  -------------  ---
  default                     nginx-8f458dc5b-8b68r                        0 (0%)        0 (0%)      0 (0%)           0 (0%)         9m56s
  default                     test-provider-6c6f7d6cd9-jpwnk               0 (0%)        0 (0%)      0 (0%)           0 (0%)         9m5s
  default                     tomcat-84c98b5bb-s4xdc                       0 (0%)        0 (0%)      0 (0%)           0 (0%)         10m
  ingress-nginx               ingress-nginx-controller-7ffc958f5-mpsss     100m (5%)     0 (0%)      90Mi (2%)        0 (0%)         9m57s
  kube-flannel                kube-flannel-ds-8zm8g                        100m (5%)     100m (5%)   50Mi (1%)        50Mi (1%)      30h
  kube-system                 kube-proxy-vrdkb                             0 (0%)        0 (0%)      0 (0%)           0 (0%)         30h
  kubernetes-dashboard        dashboard-metrics-scraper-8c47d4b5d-h29sb    0 (0%)        0 (0%)      0 (0%)           0 (0%)         23m
  kubernetes-dashboard        kubernetes-dashboard-5676d8b865-8wp7b        0 (0%)        0 (0%)      0 (0%)           0 (0%)         9m54s
Allocated resources:
  (Total limits may be over 100 percent, i.e., overcommitted.)
  Resource           Requests    Limits
  --------           --------    ------
  cpu                200m (10%)  100m (5%)
  memory             140Mi (3%)  50Mi (1%)
  ephemeral-storage  0 (0%)      0 (0%)
  hugepages-1Gi      0 (0%)      0 (0%)
  hugepages-2Mi      0 (0%)      0 (0%)
  hugepages-32Mi     0 (0%)      0 (0%)
  hugepages-64Ki     0 (0%)      0 (0%)
Events:
  Type     Reason                   Age                            From             Message
  ----     ------                   ----                           ----             -------
  Normal   Starting                 <invalid>                      kube-proxy      
  Normal   NodeHasDiskPressure      56m                            kubelet          Node senrslk8snode status is now: NodeHasDiskPressure
  Warning  FreeDiskSpaceFailed      52m                            kubelet          failed to garbage collect required amount of images. Wanted to free 633982156 bytes, but freed 0 bytes
  Warning  EvictionThresholdMet     52m (x6 over 56m)              kubelet          Attempting to reclaim ephemeral-storage
  Normal   NodeNotReady             47m                            node-controller  Node senrslk8snode status is now: NodeNotReady
  Warning  EvictionThresholdMet     31m                            kubelet          Attempting to reclaim ephemeral-storage
  Warning  FreeDiskSpaceFailed      22m                            kubelet          failed to garbage collect required amount of images. Wanted to free 552373452 bytes, but freed 72942770 bytes
  Normal   NodeHasNoDiskPressure    11m (x15 over <invalid>)       kubelet          Node senrslk8snode status is now: NodeHasNoDiskPressure
  Warning  FreeDiskSpaceFailed      7m46s                          kubelet          failed to garbage collect required amount of images. Wanted to free 476703948 bytes, but freed 357013411 bytes
  Warning  InvalidDiskCapacity      <invalid>                      kubelet          invalid capacity 0 on image filesystem
  Normal   NodeAllocatableEnforced  <invalid>                      kubelet          Updated Node Allocatable limit across pods
  Normal   Starting                 <invalid>                      kubelet          Starting kubelet.
  Normal   NodeHasSufficientPID     <invalid> (x7 over <invalid>)  kubelet          Node senrslk8snode status is now: NodeHasSufficientPID
  Normal   NodeHasSufficientMemory  <invalid> (x8 over <invalid>)  kubelet          Node senrslk8snode status is now: NodeHasSufficientMemory
root@senrsl:/home/senrsl/k8s#

Error 4:

It inexplicably turned into ErrImageNeverPull again.

Attempt:

On the worker:

root@senrslk8snode:/home/senrsl/k8s# docker build -t test-provider:v1 -f Dockerfile_provider .

On the master:

root@senrsl:/home/senrsl/k8s# kubectl create deployment test-provider --image=test-provider:v1 --dry-run=client -o yaml > test-provider.yaml
root@senrsl:/home/senrsl/k8s# vi test-provider.yaml

Change imagePullPolicy to IfNotPresent.

Then apply.

It became ErrImagePull, and a while later ImagePullBackOff.

Image pull policies

The list below covers the values imagePullPolicy can take and their effects:

IfNotPresent
The image is pulled only if it is not already present locally.
Always
Every time the kubelet launches a container, it queries the registry to resolve the name to an image digest. If the kubelet has an image with that exact digest cached locally it uses the cache; otherwise it pulls the image by the resolved digest and uses it to start the container.
Never
The kubelet does not try to fetch the image. If the image is already present locally somehow, the kubelet attempts to start the container; otherwise startup fails. See pre-pulled images for more details.


Damn it, no idea why...
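
A likely explanation, in hindsight: this cluster's runtime is containerd (Container Runtime Version: containerd://1.6.6 above), and since 1.24 the kubelet no longer talks to Docker at all, so images created with docker build sit in Docker's store where the kubelet can't see them. A sketch of importing the image into containerd's k8s.io namespace on the node:

root@senrslk8snode:/home/senrsl/k8s# docker save test-provider:v1 -o test-provider-v1.tar
root@senrslk8snode:/home/senrsl/k8s# ctr -n k8s.io images import test-provider-v1.tar
root@senrslk8snode:/home/senrsl/k8s# crictl images | grep test-provider   # confirm the kubelet-side store sees it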


16. Architecture and components

https://kubernetes.io/zh-cn/docs/concepts/overview/components/

Forget it, I've had plenty...

17. Scaling dynamically

Edit the deployment yml directly:

spec -> replicas: defaults to 1, i.e. one container deployed; change the value to deploy N containers.

After editing, just kubectl apply -f deploy.yml to re-apply the deployment.
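
The same can be done without touching the file; a sketch using the nginx deployment from earlier:

root@senrsl:/home/senrsl/k8s# kubectl scale deployment nginx --replicas=3
root@senrsl:/home/senrsl/k8s# kubectl get pods -l app=nginx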

2022-07-12 23:08:54

--
senRsl
2022-07-08 18:12:03
