Install Kubernetes

https://segmentfault.com/a/1190000044830019

System Configuration

Disable SELinux

sed -ri 's/SELINUX=enforcing/SELINUX=disabled/' /etc/selinux/config
setenforce 0 && getenforce

Enable cgroup v2

grubby --update-kernel=ALL --args=systemd.unified_cgroup_hierarchy=1
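
The grubby change only takes effect after a reboot. A quick check that cgroup v2 is actually mounted (prints cgroup2fs when it is):

stat -fc %T /sys/fs/cgroup/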

Enable IP forwarding

vi /etc/sysctl.conf

net.ipv4.ip_forward = 1
net.bridge.bridge-nf-call-ip6tables = 1
net.bridge.bridge-nf-call-iptables = 1

sysctl --system
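
A quick check that the values took effect; note that the two bridge keys only exist once br_netfilter is loaded, which is done in the kernel-module step below:

sysctl net.ipv4.ip_forward
sysctl net.bridge.bridge-nf-call-iptables net.bridge.bridge-nf-call-ip6tables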

Stop firewalld

systemctl stop firewalld.service
systemctl disable firewalld.service

Disable the swap partition

swapoff -a && sysctl -w vm.swappiness=0
sed -ri '/^[^#]*swap/s@^@#@' /etc/fstab

Install ipvsadm

yum install ipvsadm ipset sysstat conntrack libseccomp -y

Load kernel modules

modprobe -- ip_vs
modprobe -- ip_vs_rr
modprobe -- ip_vs_wrr
modprobe -- ip_vs_sh
modprobe -- nf_conntrack
modprobe -- br_netfilter
modprobe -- iscsi_tcp
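
To confirm the modules are loaded:

lsmod | grep -E 'ip_vs|nf_conntrack|br_netfilter|iscsi_tcp'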

Configure kubernetes.conf so the modules are loaded on boot; add the following:

vim /etc/modules-load.d/kubernetes.conf

overlay
ip_vs
ip_vs_rr
ip_vs_wrr
ip_vs_sh
nf_conntrack
br_netfilter
iscsi_tcp

Enable the service so it takes effect:

systemctl enable --now systemd-modules-load.service

Install containerd

https://github.com/containerd/containerd/blob/main/docs/getting-started.md

Download the containerd.tar.gz archive from https://github.com/containerd/containerd/releases , verify its sha256sum, and extract it under /usr/local:

tar Cxzvf /usr/local containerd-1.6.2-linux-amd64.tar.gz

Download containerd.service

wget https://raw.githubusercontent.com/containerd/containerd/main/containerd.service -O /usr/lib/systemd/system/containerd.service
systemctl daemon-reload
systemctl enable --now containerd
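
A quick sanity check that containerd is running (ctr ships in the same archive):

systemctl is-active containerd
ctr version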

Installing runc

Download the runc binary from https://github.com/opencontainers/runc/releases , verify its sha256sum, and install it as /usr/local/sbin/runc:

install -m 755 runc.amd64 /usr/local/sbin/runc

Installing CNI plugins

Download the cni-plugins-<OS>-<ARCH>-<VERSION>.tgz archive from https://github.com/containernetworking/plugins/releases , verify its sha256sum, and extract it under /opt/cni/bin:

mkdir -p /opt/cni/bin  
tar Cxzvf /opt/cni/bin cni-plugins-linux-amd64-v1.1.1.tgz

Configure /etc/containerd/config.toml. Set the cgroup driver to systemd:

SystemdCgroup = true
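
If the file does not exist yet, a common approach is to generate the default configuration first and then edit it; in containerd 1.6.x the SystemdCgroup switch sits under the runc options of the CRI plugin:

mkdir -p /etc/containerd
containerd config default > /etc/containerd/config.toml
# then set, under [plugins."io.containerd.grpc.v1.cri".containerd.runtimes.runc.options]:
#   SystemdCgroup = true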

# Local registry configuration in /etc/containerd/config.toml; assume the domain is xx.xx.com. The registry used here is Harbor.
  [plugins."io.containerd.grpc.v1.cri".registry.mirrors]
    [plugins."io.containerd.grpc.v1.cri".registry.mirrors."xx.xx.com"]
      endpoint = ["https://xx.xx.com"]
    [plugins."io.containerd.grpc.v1.cri".registry.mirrors."docker.io"]
      endpoint = ["https://xx.xx.com/v2/docker.io"]
    [plugins."io.containerd.grpc.v1.cri".registry.mirrors."quay.io"]
      endpoint = ["https://xx.xx.com/v2/quay.io"]
    [plugins."io.containerd.grpc.v1.cri".registry.mirrors."registry.k8s.io"]
      endpoint = ["https://xx.xx.com/v2/registry.k8s.io"]

Install Kubeadm

https://kubernetes.io/zh-cn/docs/setup/production-environment/tools/kubeadm/install-kubeadm

Configure the yum repo

cat <<EOF | sudo tee /etc/yum.repos.d/kubernetes.repo
[kubernetes]
name=Kubernetes
baseurl=https://pkgs.k8s.io/core:/stable:/v1.31/rpm/
enabled=1
gpgcheck=1
gpgkey=https://pkgs.k8s.io/core:/stable:/v1.31/rpm/repodata/repomd.xml.key
exclude=kubelet kubeadm kubectl cri-tools kubernetes-cni
EOF

Install kubelet, kubeadm, and kubectl, and enable kubelet so it starts automatically on boot:

sudo yum install -y kubelet kubeadm kubectl --disableexcludes=kubernetes
sudo systemctl enable --now kubelet
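
A quick check of the installed versions (the kubelet will keep restarting until kubeadm init gives it a configuration, which is expected at this point):

kubeadm version
kubectl version --client
kubelet --version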

Configure disks (if needed)

# Partition the disk
fdisk /dev/xxx
# Print the partition table
p
# Create a new partition
n
# Primary partition
p
# Write the changes and exit
w

# Create the filesystem
mkfs.ext4 /dev/xxx1

# Look up the partition UUID
blkid

# Edit /etc/fstab; $uuid and $mount_path are the partition UUID and the mount path
UUID=$uuid       $mount_path     ext4    defaults 1 2
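
After editing /etc/fstab, create the mount point and mount everything to verify the entry (same placeholders as above):

mkdir -p $mount_path
mount -a
df -h $mount_path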

etcd data disk

/var/lib/etcd

Longhorn data disk

/var/lib/longhorn

Init Cluster Master

Create the file kubeadm.yaml on the first node:

apiVersion: kubeadm.k8s.io/v1beta3
bootstrapTokens:
  - groups:
      - system:bootstrappers:kubeadm:default-node-token
    token: abcdef.0123456789abcdef
    ttl: 24h0m0s
    usages:
      - signing
      - authentication
kind: InitConfiguration
localAPIEndpoint:
  advertiseAddress: xx.xx.xx.xx # master ip
  bindPort: 6443
nodeRegistration:
  criSocket: unix:///var/run/containerd/containerd.sock
  imagePullPolicy: IfNotPresent
  name: node1
  taints: null
---
apiServer:
  timeoutForControlPlane: 4m0s
  certSANs:
    - localhost
    - node1
    - node2
    - node3
apiVersion: kubeadm.k8s.io/v1beta3
certificatesDir: /etc/kubernetes/pki
clusterName: kubernetes
controllerManager: {}
dns:
  imageRepository: registry.k8s.io/coredns
etcd:
  local:
    dataDir: /var/lib/etcd
    peerCertSANs:
      - localhost
      - node1
      - node2
      - node3
    serverCertSANs:
      - localhost
      - node1
      - node2
      - node3
imageRepository: registry.k8s.io
kind: ClusterConfiguration
kubernetesVersion: 1.30.0
networking:
  dnsDomain: cluster.local
  serviceSubnet: 10.96.0.0/12
scheduler: {}
controlPlaneEndpoint: kubemaster

Create the cluster:

kubeadm init --config kubeadm.yaml
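
When kubeadm init finishes it prints the follow-up steps; the usual ones for using kubectl as an admin on that node are:

mkdir -p $HOME/.kube
cp -i /etc/kubernetes/admin.conf $HOME/.kube/config
chown $(id -u):$(id -g) $HOME/.kube/config
kubectl get nodes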

Join Cluster

Run the following on the other nodes:

kubeadm join kubemaster:6443 --token xxxxxxxxxxxxxxxxxx --discovery-token-ca-cert-hash sha256:xxxxxxxxxxxxxxxxxxx --control-plane --certificate-key xxxxxxxxxxxxxxxx
etcdctl --cacert="ca.crt" --cert="healthcheck-client.crt" --key="healthcheck-client.key" member promote etcd_member_id

If the token has expired, the following commands can be used:

# Get the certificate key
kubeadm init phase upload-certs --upload-certs
# Get the --token and --discovery-token-ca-cert-hash
kubeadm token create --print-join-command

Sometimes this file also needs to be copied to the new node:

/etc/cni/net.d/10-calico.conflist

Install Calico

curl -O https://docs.projectcalico.org/v3.7/manifests/calico.yaml

If applying the CRDs fails with an error that the content is too large, use:

kubectl replace --force -f calico.yaml

If you use a private image registry, modify the image paths in the manifest first:

kubectl apply -f calico.yaml
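
After applying, the calico-node DaemonSet should become Ready on every node; a quick check assuming the default labels from the manifest:

kubectl -n kube-system get pods -l k8s-app=calico-node -o wide
kubectl get nodes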

Install ingress-nginx

kubectl apply -f https://raw.githubusercontent.com/kubernetes/ingress-nginx/refs/heads/release-1.11/deploy/static/provider/cloud/deploy.yaml

Find the node the controller is running on; later, add the domain -> controller-node mapping in /etc/hosts:

kubectl -n ingress-nginx get pods -o wide
NAME                                        READY   STATUS      RESTARTS   AGE   IP             NODE   NOMINATED NODE   READINESS GATES
ingress-nginx-admission-create-xd4tb        0/1     Completed   0          87m   10.1.219.48    k8s3   <none>           <none>
ingress-nginx-admission-patch-xxbhk         0/1     Completed   0          87m   10.1.166.235   k8s1   <none>           <none>
ingress-nginx-controller-775fcf4864-w5htg   1/1     Running     0          87m   10.1.219.49    k8s3   <none>           <none>

Check the LoadBalancer ports (here 80:32100); for subsequent HTTP access, connect directly to port 32100 on the controller's host:

kubectl -n ingress-nginx get service
NAME                                 TYPE           CLUSTER-IP      EXTERNAL-IP   PORT(S)                      AGE
ingress-nginx-controller             LoadBalancer   10.99.42.227    <pending>     80:32100/TCP,443:31476/TCP   88m
ingress-nginx-controller-admission   ClusterIP      10.96.161.135   <none>        443/TCP                      88m

Install Longhorn

wget https://raw.githubusercontent.com/longhorn/longhorn/v1.7.x/deploy/longhorn.yaml

kubectl apply -f longhorn.yaml
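
Note that Longhorn needs an iSCSI initiator on every node (the iscsi_tcp module was already loaded in the kernel-module step); on a yum-based system that is typically:

yum install -y iscsi-initiator-utils
systemctl enable --now iscsid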

Install Prometheus and Grafana

https://github.com/prometheus-operator/kube-prometheus/tree/release-0.14/manifests

kubectl apply -f manifests/setup -f manifests

If Grafana needs to be reachable from multiple nodes, the NetworkPolicy has to be dealt with; here I simply deleted it:

kubectl delete -f kube-prometheus/manifests/grafana-networkPolicy.yaml
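
With the NetworkPolicy removed, one simple way to reach Grafana from another machine is a port-forward bound to all interfaces, assuming the kube-prometheus defaults (namespace monitoring, service grafana, port 3000):

kubectl -n monitoring port-forward --address 0.0.0.0 svc/grafana 3000:3000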

OpenResty: once it is configured, accessing kubemaster is proxied to any one of the apiservers.

stream {
    log_format detailed '$remote_addr - [$time_local] '
                        'sni: "$ssl_preread_server_name" '
                        'status: $status '
                        'sent: $bytes_sent received: $bytes_received '
                        'duration: $session_time '
                        'upstream: "$upstream_addr"';

    access_log /usr/local/openresty/nginx/logs/stream-access.log detailed;
    
    upstream ingress {
        server xx.xx.xx.82:443 max_fails=2 fail_timeout=30s;
        server xx.xx.xx.83:443 max_fails=2 fail_timeout=30s;
        server xx.xx.xx.84:443 max_fails=2 fail_timeout=30s;
    }
    map $ssl_preread_server_name $backend {
        default ingress;
    }
    server {
        listen 443;
        ssl_preread on;
        proxy_pass $backend;
        proxy_timeout 600s;
    }
    
    upstream kubernetes {
        server xx.xx.xx.82:6443 max_fails=2 fail_timeout=30s;
        server xx.xx.xx.83:6443 max_fails=2 fail_timeout=30s;
        server xx.xx.xx.84:6443 max_fails=2 fail_timeout=30s;
    }
    map $ssl_preread_server_name $kubemaster {
        kubemaster kubernetes;
    }
    server {
        listen 6443;
        ssl_preread on;
        proxy_pass $kubemaster;
        proxy_timeout 600s;
    }
}

http {
    upstream ingress {
        server xx.xx.xx.82:80 max_fails=2 fail_timeout=30s;
        server xx.xx.xx.83:80 max_fails=2 fail_timeout=30s;
        server xx.xx.xx.84:80 max_fails=2 fail_timeout=30s;
    }

    server {
        listen 80;
        server_name _;

        location / {
            proxy_pass http://ingress;
            proxy_set_header Host $host;
        }
    }
}
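
For this to work, kubemaster must resolve to the OpenResty host (e.g. via /etc/hosts) on every machine that talks to the apiserver. After editing the configuration, test and reload OpenResty; the paths assume the default OpenResty layout used by the log path above:

/usr/local/openresty/nginx/sbin/nginx -t
/usr/local/openresty/nginx/sbin/nginx -s reload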