k8s Cluster Deployment

1. Role Assignment

Role        IP           Installed Components
k8s-master  10.0.0.170   kube-apiserver, kube-controller-manager, kube-scheduler, etcd
k8s-node0   10.0.0.113   kubelet, kube-proxy, docker, flannel, etcd
k8s-node1   10.0.0.56    kubelet, kube-proxy, docker, flannel, etcd

2. Environment Preparation

#System update (CentOS/RHEL)
sudo yum install -y epel-release; sudo yum update -y
#System update (Debian/Ubuntu; epel-release is CentOS-only, so just update and upgrade)
sudo apt-get update; sudo apt-get upgrade -y
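A few extra preparation steps are usually needed for a binary install like this one; the original omits them, so treat these as assumptions to adapt per distro (swap could also stay on here, since the kubelet.config in section 8.1 sets failSwapOn: false):

#Disable swap (kubelet refuses to start with swap enabled by default)
sudo swapoff -a
sudo sed -i '/ swap / s/^/#/' /etc/fstab
#Put SELinux in permissive mode and stop the firewall for this walkthrough
sudo setenforce 0
sudo systemctl stop firewalld; sudo systemctl disable firewalld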

3. Download the Components

Link: https://pan.baidu.com/s/1cOUJyQ8tuT_lT4a1ofY1Jw  Extraction code: 6n31 (contains the etcd v3.3.10, flannel v0.10.0, and kubernetes server/node tarballs used below)

4. Install cfssl

wget https://pkg.cfssl.org/R1.2/cfssl_linux-amd64
wget https://pkg.cfssl.org/R1.2/cfssljson_linux-amd64
wget https://pkg.cfssl.org/R1.2/cfssl-certinfo_linux-amd64
chmod +x cfssl_linux-amd64 cfssljson_linux-amd64 cfssl-certinfo_linux-amd64
sudo mv cfssl_linux-amd64 /usr/local/bin/cfssl
sudo mv cfssljson_linux-amd64 /usr/local/bin/cfssljson
sudo mv cfssl-certinfo_linux-amd64 /usr/local/bin/cfssl-certinfo
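A quick sanity check that the binaries landed on the PATH:

cfssl version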

5. Deploy the etcd Cluster

etcd is a critical component of a Kubernetes cluster: it stores all of the cluster's network configuration and object state information.

5.1 Generate etcd Certificates

  • etcd CA config
cat << EOF | tee ca-config.json
{
  "signing": {
    "default": {
      "expiry": "87600h"
    },
    "profiles": {
      "etcd": {
         "expiry": "87600h",
         "usages": [
            "signing",
            "key encipherment",
            "server auth",
            "client auth"
        ]
      }
    }
  }
}
EOF
  • etcd CA certificate
cat << EOF | tee ca-csr.json
{
    "CN": "etcd CA",
    "key": {
        "algo": "rsa",
        "size": 2048
    },
    "names": [
        {
            "C": "CN",
            "L": "Beijing",
            "ST": "Beijing"
        }
    ]
}
EOF
  • etcd server certificate
cat << EOF | tee server-csr.json
{
    "CN": "etcd",
    "hosts": [
    "10.0.0.170",
    "10.0.0.113",
    "10.0.0.56"
    ],
    "key": {
        "algo": "rsa",
        "size": 2048
    },
    "names": [
        {
            "C": "CN",
            "L": "Beijing",
            "ST": "Beijing"
        }
    ]
}
EOF

Generate the certificates:

cfssl gencert -initca ca-csr.json | cfssljson -bare ca
cfssl gencert -ca=ca.pem -ca-key=ca-key.pem -config=ca-config.json -profile=etcd server-csr.json | cfssljson -bare server

[centos@jiliguo ssl]$ ls
ca-config.json  ca.csr  ca-csr.json  ca-key.pem  ca.pem  server.csr  server-csr.json  server-key.pem  server.pem
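Before copying the certificates around, it is worth confirming the SANs and expiry on the server certificate with the cfssl-certinfo binary installed above:

cfssl-certinfo -cert server.pem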

5.2 Install etcd

  • Extract
tar -xvf etcd-v3.3.10-linux-amd64.tar.gz
cd etcd-v3.3.10-linux-amd64/
sudo mkdir -p /k8s/etcd/{bin,cfg,ssl}   #create the target layout first
cp etcd etcdctl /k8s/etcd/bin/
  • Create the etcd configuration file (this one is etcd01 on the master; a per-node sketch follows this block)
vim /k8s/etcd/cfg/etcd.conf
#[Member]
ETCD_NAME="etcd01"
ETCD_DATA_DIR="/data1/etcd"
ETCD_LISTEN_PEER_URLS="https://10.0.0.170:2380"
ETCD_LISTEN_CLIENT_URLS="https://10.0.0.170:2379,https://127.0.0.1:2379"

#[Clustering]
ETCD_INITIAL_ADVERTISE_PEER_URLS="https://10.0.0.170:2380"
ETCD_ADVERTISE_CLIENT_URLS="https://10.0.0.170:2379"
ETCD_INITIAL_CLUSTER="etcd01=https://10.0.0.170:2380,etcd02=https://10.0.0.113:2380,etcd03=https://10.0.0.56:2380"
ETCD_INITIAL_CLUSTER_TOKEN="etcd-cluster"
ETCD_INITIAL_CLUSTER_STATE="new"

#[Security]
ETCD_CERT_FILE="/k8s/etcd/ssl/server.pem"
ETCD_KEY_FILE="/k8s/etcd/ssl/server-key.pem"
ETCD_TRUSTED_CA_FILE="/k8s/etcd/ssl/ca.pem"
ETCD_CLIENT_CERT_AUTH="true"
ETCD_PEER_CERT_FILE="/k8s/etcd/ssl/server.pem"
ETCD_PEER_KEY_FILE="/k8s/etcd/ssl/server-key.pem"
ETCD_PEER_TRUSTED_CA_FILE="/k8s/etcd/ssl/ca.pem"
ETCD_PEER_CLIENT_CERT_AUTH="true"
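The file above is for etcd01 on the master; etcd02 (10.0.0.113) and etcd03 (10.0.0.56) need the same file with their own name and IPs. A minimal sketch that stamps out the per-node [Member] and [Clustering] sections, mirroring the tee pattern used elsewhere in this post (set NODE_NAME/NODE_IP per the role table; the [Security] section is identical on all nodes and can be appended unchanged):

NODE_NAME=etcd02    #etcd01/10.0.0.170, etcd02/10.0.0.113, etcd03/10.0.0.56
NODE_IP=10.0.0.113
cat << EOF | sudo tee /k8s/etcd/cfg/etcd.conf
#[Member]
ETCD_NAME="${NODE_NAME}"
ETCD_DATA_DIR="/data1/etcd"
ETCD_LISTEN_PEER_URLS="https://${NODE_IP}:2380"
ETCD_LISTEN_CLIENT_URLS="https://${NODE_IP}:2379,https://127.0.0.1:2379"

#[Clustering]
ETCD_INITIAL_ADVERTISE_PEER_URLS="https://${NODE_IP}:2380"
ETCD_ADVERTISE_CLIENT_URLS="https://${NODE_IP}:2379"
ETCD_INITIAL_CLUSTER="etcd01=https://10.0.0.170:2380,etcd02=https://10.0.0.113:2380,etcd03=https://10.0.0.56:2380"
ETCD_INITIAL_CLUSTER_TOKEN="etcd-cluster"
ETCD_INITIAL_CLUSTER_STATE="new"
EOF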
  • Create the etcd systemd unit file
mkdir /data1/etcd
vim /usr/lib/systemd/system/etcd.service
[Unit]
Description=Etcd Server
After=network.target
After=network-online.target
Wants=network-online.target

[Service]
Type=notify
WorkingDirectory=/data1/etcd/
EnvironmentFile=-/k8s/etcd/cfg/etcd.conf
ExecStart=/bin/bash -c "GOMAXPROCS=$(nproc) /k8s/etcd/bin/etcd"
Restart=on-failure
LimitNOFILE=65536

[Install]
WantedBy=multi-user.target
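  • Start the service (the original jumps straight to the health check; these commands mirror the pattern used for the other components)
sudo systemctl daemon-reload
sudo systemctl enable etcd
sudo systemctl start etcd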

5.3 Test the etcd Cluster

[centos@jiliguo etcd]$ sudo /k8s/etcd/bin/etcdctl --ca-file=/k8s/etcd/ssl/ca.pem --cert-file=/k8s/etcd/ssl/server.pem --key-file=/k8s/etcd/ssl/server-key.pem --endpoints="https://10.0.0.170:2379,https://10.0.0.113:2379,https://10.0.0.56:2379" cluster-health
member 5369b6e78dc66d92 is healthy: got healthy result from https://10.0.0.113:2379
member aa55e29e6ec202ab is healthy: got healthy result from https://10.0.0.170:2379
member b5010a06fcd898a8 is healthy: got healthy result from https://10.0.0.56:2379

6. Kubernetes Certificates and Private Keys

  • Kubernetes CA certificate
cat << EOF | tee ca-config.json
{
  "signing": {
    "default": {
      "expiry": "87600h"
    },
    "profiles": {
      "kubernetes": {
         "expiry": "87600h",
         "usages": [
            "signing",
            "key encipherment",
            "server auth",
            "client auth"
        ]
      }
    }
  }
}
EOF

cat << EOF | tee ca-csr.json
{
    "CN": "kubernetes",
    "key": {
        "algo": "rsa",
        "size": 2048
    },
    "names": [
        {
            "C": "CN",
            "L": "Beijing",
            "ST": "Beijing",
            "O": "k8s",
            "OU": "System"
        }
    ]
}
EOF

cfssl gencert -initca ca-csr.json | cfssljson -bare ca -
  • apiserver certificate
cat << EOF | tee server-csr.json
{
    "CN": "kubernetes",
    "hosts": [
      "10.254.0.1",
      "127.0.0.1",
      "10.0.0.170",
      "10.0.0.113",
      "10.0.0.56",
      "kubernetes",
      "kubernetes.default",
      "kubernetes.default.svc",
      "kubernetes.default.svc.cluster",
      "kubernetes.default.svc.cluster.local"
    ],
    "key": {
        "algo": "rsa",
        "size": 2048
    },
    "names": [
        {
            "C": "CN",
            "L": "Beijing",
            "ST": "Beijing",
            "O": "k8s",
            "OU": "System"
        }
    ]
}
EOF

cfssl gencert -ca=ca.pem -ca-key=ca-key.pem -config=ca-config.json -profile=kubernetes server-csr.json | cfssljson -bare server
  • kube-proxy certificate
cat << EOF | tee kube-proxy-csr.json
{
  "CN": "system:kube-proxy",
  "hosts": [],
  "key": {
    "algo": "rsa",
    "size": 2048
  },
  "names": [
    {
      "C": "CN",
      "L": "Beijing",
      "ST": "Beijing",
      "O": "k8s",
      "OU": "System"
    }
  ]
}
EOF

cfssl gencert -ca=ca.pem -ca-key=ca-key.pem -config=ca-config.json -profile=kubernetes kube-proxy-csr.json | cfssljson -bare kube-proxy

Steps 4, 5, and 6 only need to be run on one machine; afterwards, copy the generated files to the other nodes.

7. Deploy the Master Node

tar -zxvf kubernetes-server-linux-amd64.tar.gz
cd kubernetes/server/bin/
sudo mkdir -p /k8s/kubernetes/{bin,cfg,ssl}   #create the target layout first
cp kube-scheduler kube-apiserver kube-controller-manager kubectl /k8s/kubernetes/bin/

7.1 Deploy kube-apiserver

  • Create the TLS bootstrapping token (a combined one-step sketch follows the manual commands below)
[centos@jiliguo bin]$ head -c 16 /dev/urandom | od -An -t x | tr -d ' '
c3daa4aea38e73da480df317c0fd63f9
vim /k8s/kubernetes/cfg/token.csv
c3daa4aea38e73da480df317c0fd63f9,kubelet-bootstrap,10001,"system:kubelet-bootstrap"
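For repeatability, the token generation and file creation can be combined into one small sketch (same token.csv format: token,user,uid,"group"):

BOOTSTRAP_TOKEN=$(head -c 16 /dev/urandom | od -An -t x | tr -d ' ')
cat << EOF | sudo tee /k8s/kubernetes/cfg/token.csv
${BOOTSTRAP_TOKEN},kubelet-bootstrap,10001,"system:kubelet-bootstrap"
EOF
echo "bootstrap token: ${BOOTSTRAP_TOKEN}"    #reused by the kubeconfig script in section 8.1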
  • Create the apiserver configuration file
vim /k8s/kubernetes/cfg/kube-apiserver
KUBE_APISERVER_OPTS="--logtostderr=true \
--v=4 \
--etcd-servers=https://10.0.0.170:2379,https://10.0.0.113:2379,https://10.0.0.56:2379 \
--bind-address=10.0.0.170 \
--secure-port=6443 \
--advertise-address=10.0.0.170 \
--allow-privileged=true \
--service-cluster-ip-range=10.254.0.0/16 \
--enable-admission-plugins=NamespaceLifecycle,LimitRanger,ServiceAccount,ResourceQuota,NodeRestriction \
--authorization-mode=RBAC,Node \
--enable-bootstrap-token-auth \
--token-auth-file=/k8s/kubernetes/cfg/token.csv \
--service-node-port-range=30000-50000 \
--tls-cert-file=/k8s/kubernetes/ssl/server.pem  \
--tls-private-key-file=/k8s/kubernetes/ssl/server-key.pem \
--client-ca-file=/k8s/kubernetes/ssl/ca.pem \
--service-account-key-file=/k8s/kubernetes/ssl/ca-key.pem \
--etcd-cafile=/k8s/etcd/ssl/ca.pem \
--etcd-certfile=/k8s/etcd/ssl/server.pem \
--etcd-keyfile=/k8s/etcd/ssl/server-key.pem"
  • Create the apiserver systemd unit file
vim /usr/lib/systemd/system/kube-apiserver.service 

[Unit]
Description=Kubernetes API Server
Documentation=https://github.com/kubernetes/kubernetes

[Service]
EnvironmentFile=-/k8s/kubernetes/cfg/kube-apiserver
ExecStart=/k8s/kubernetes/bin/kube-apiserver $KUBE_APISERVER_OPTS
Restart=on-failure

[Install]
WantedBy=multi-user.target
  • Start the service (a quick verification follows)
systemctl daemon-reload
systemctl enable kube-apiserver
systemctl start kube-apiserver
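If the service came up cleanly, the secure port should be listening (ss is assumed available here; netstat -lntp works as well):

sudo systemctl status kube-apiserver
sudo ss -lntp | grep 6443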

7.2 Deploy kube-scheduler

  • Create the kube-scheduler configuration file
vim  /k8s/kubernetes/cfg/kube-scheduler 

KUBE_SCHEDULER_OPTS="--logtostderr=true --v=4 --master=127.0.0.1:8080 --leader-elect"
  • Create the kube-scheduler systemd unit file
vim /usr/lib/systemd/system/kube-scheduler.service
[Unit]
Description=Kubernetes Scheduler
Documentation=https://github.com/kubernetes/kubernetes

[Service]
EnvironmentFile=-/k8s/kubernetes/cfg/kube-scheduler
ExecStart=/k8s/kubernetes/bin/kube-scheduler $KUBE_SCHEDULER_OPTS
Restart=on-failure

[Install]
WantedBy=multi-user.target
  • Start the service
systemctl daemon-reload
systemctl enable kube-scheduler.service
systemctl start kube-scheduler.service

7.3 Deploy kube-controller-manager

  • Create the kube-controller-manager configuration file
vim /k8s/kubernetes/cfg/kube-controller-manager

KUBE_CONTROLLER_MANAGER_OPTS="--logtostderr=true \
--v=4 \
--master=127.0.0.1:8080 \
--leader-elect=true \
--address=127.0.0.1 \
--service-cluster-ip-range=10.254.0.0/16 \
--cluster-name=kubernetes \
--cluster-signing-cert-file=/k8s/kubernetes/ssl/ca.pem \
--cluster-signing-key-file=/k8s/kubernetes/ssl/ca-key.pem  \
--root-ca-file=/k8s/kubernetes/ssl/ca.pem \
--service-account-private-key-file=/k8s/kubernetes/ssl/ca-key.pem"
  • Create the kube-controller-manager systemd unit file
vim /usr/lib/systemd/system/kube-controller-manager.service 

[Unit]
Description=Kubernetes Controller Manager
Documentation=https://github.com/kubernetes/kubernetes

[Service]
EnvironmentFile=-/k8s/kubernetes/cfg/kube-controller-manager
ExecStart=/k8s/kubernetes/bin/kube-controller-manager $KUBE_CONTROLLER_MANAGER_OPTS
Restart=on-failure

[Install]
WantedBy=multi-user.target
  • Start the service
systemctl daemon-reload
systemctl enable kube-controller-manager.service
systemctl start kube-controller-manager.service

7.4 Verify the Master Services

vim /etc/profile
export PATH=/k8s/kubernetes/bin:$PATH
source /etc/profile

[centos@jiliguo ~]$ kubectl get cs,nodes
NAME                                 STATUS    MESSAGE             ERROR
componentstatus/controller-manager   Healthy   ok
componentstatus/scheduler            Healthy   ok
componentstatus/etcd-2               Healthy   {"health":"true"}
componentstatus/etcd-0               Healthy   {"health":"true"}
componentstatus/etcd-1               Healthy   {"health":"true"}  

8. Deploy the Nodes

8.1 Deploy kubelet

The following two steps are performed on the master node:

  • Bind the kubelet-bootstrap user to the system cluster role
kubectl create clusterrolebinding kubelet-bootstrap --clusterrole=system:node-bootstrapper --user=kubelet-bootstrap
  • Create the kubelet bootstrap kubeconfig file (save the script below as /k8s/kubernetes/cfg/environment.sh; it is executed at the end)
#!/bin/bash
#Create the kubelet bootstrapping kubeconfig
BOOTSTRAP_TOKEN=c3daa4aea38e73da480df317c0fd63f9
KUBE_APISERVER="https://10.0.0.170:6443"
#Set cluster parameters
kubectl config set-cluster kubernetes \
  --certificate-authority=/k8s/kubernetes/ssl/ca.pem \
  --embed-certs=true \
  --server=${KUBE_APISERVER} \
  --kubeconfig=bootstrap.kubeconfig

#Set client authentication parameters
kubectl config set-credentials kubelet-bootstrap \
  --token=${BOOTSTRAP_TOKEN} \
  --kubeconfig=bootstrap.kubeconfig

# Set context parameters
kubectl config set-context default \
  --cluster=kubernetes \
  --user=kubelet-bootstrap \
  --kubeconfig=bootstrap.kubeconfig

# Set the default context
kubectl config use-context default --kubeconfig=bootstrap.kubeconfig

#----------------------

# Create the kube-proxy kubeconfig

kubectl config set-cluster kubernetes \
  --certificate-authority=/k8s/kubernetes/ssl/ca.pem \
  --embed-certs=true \
  --server=${KUBE_APISERVER} \
  --kubeconfig=kube-proxy.kubeconfig

# A different path is needed here: as a regular user the files under /k8s were unreadable, and with sudo the kubectl command was not found
kubectl config set-credentials kube-proxy \
  --client-certificate=/home/centos/k8s/kubernetes/ssl/kube-proxy.pem \
  --client-key=/home/centos/k8s/kubernetes/ssl/kube-proxy-key.pem \
  --embed-certs=true \
  --kubeconfig=kube-proxy.kubeconfig

kubectl config set-context default \
  --cluster=kubernetes \
  --user=kube-proxy \
  --kubeconfig=kube-proxy.kubeconfig

kubectl config use-context default --kubeconfig=kube-proxy.kubeconfig
sh /k8s/kubernetes/cfg/environment.sh 

Copy the generated kubeconfig files (bootstrap.kubeconfig and kube-proxy.kubeconfig) to /k8s/kubernetes/cfg on each node.
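A hedged example of that copy step, assuming SSH access from the master and that /k8s/kubernetes/cfg is writable by the login user on the nodes:

for NODE in 10.0.0.113 10.0.0.56; do
  scp bootstrap.kubeconfig kube-proxy.kubeconfig ${NODE}:/k8s/kubernetes/cfg/
done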

On the nodes:

  • Install the binaries
tar zxvf kubernetes-node-linux-amd64.tar.gz
cd kubernetes/node/bin/
cp kube-proxy kubelet kubectl /k8s/kubernetes/bin/
  • Create the kubelet parameter configuration template (this example is for node 10.0.0.113; use each node's own IP)
vim /k8s/kubernetes/cfg/kubelet.config
kind: KubeletConfiguration
apiVersion: kubelet.config.k8s.io/v1beta1
address: 10.0.0.113
port: 10250
readOnlyPort: 10255
cgroupDriver: cgroupfs
clusterDNS: ["10.254.0.10"]
clusterDomain: cluster.local.
failSwapOn: false
authentication:
  anonymous:
    enabled: true
  • Create the kubelet configuration file
vim /k8s/kubernetes/cfg/kubelet

KUBELET_OPTS="--logtostderr=true \
--v=4 \
--hostname-override=10.0.0.113 \
--kubeconfig=/k8s/kubernetes/cfg/kubelet.kubeconfig \
--bootstrap-kubeconfig=/k8s/kubernetes/cfg/bootstrap.kubeconfig \
--config=/k8s/kubernetes/cfg/kubelet.config \
--cert-dir=/k8s/kubernetes/ssl \
--pod-infra-container-image=registry.cn-hangzhou.aliyuncs.com/google-containers/pause-amd64:3.0"
  • Create the kubelet systemd unit file
vim /usr/lib/systemd/system/kubelet.service 

[Unit]
Description=Kubernetes Kubelet
After=docker.service
Requires=docker.service

[Service]
EnvironmentFile=/k8s/kubernetes/cfg/kubelet
ExecStart=/k8s/kubernetes/bin/kubelet $KUBELET_OPTS
Restart=on-failure
KillMode=process

[Install]
WantedBy=multi-user.target
  • Start the service
sudo systemctl daemon-reload
sudo systemctl enable kubelet
sudo systemctl start kubelet
sudo systemctl status kubelet
  • On the master node: the master receives the kubelet's CSR. A CSR can be approved manually or automatically; the automatic approach is recommended because, from v1.8 on, the certificates generated after approval can be rotated automatically (see the sketch after the output below). The manual procedure follows. List the CSRs:
[centos@jiliguo k8s]$ kubectl get csr
NAME                                                   AGE   REQUESTOR           CONDITION
node-csr-uOJ3ElpCCtzi4E1JKCGzf6zm8B8769kKX4OHRtYuPVM   67s   kubelet-bootstrap   Pending

[centos@jiliguo k8s]$ kubectl certificate approve node-csr-uOJ3ElpCCtzi4E1JKCGzf6zm8B8769kKX4OHRtYuPVM
certificatesigningrequest.certificates.k8s.io/node-csr-uOJ3ElpCCtzi4E1JKCGzf6zm8B8769kKX4OHRtYuPVM

[centos@jiliguo k8s]$ kubectl get csr
NAME                                                   AGE     REQUESTOR           CONDITION
node-csr-uOJ3ElpCCtzi4E1JKCGzf6zm8B8769kKX4OHRtYuPVM   3m20s   kubelet-bootstrap   Approved,Issued
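For the automatic approach mentioned above, Kubernetes ships cluster roles for CSR approval that can be bound once on the master. A sketch, assuming these built-in role names are present in your version (they ship with v1.8+):

# Auto-approve the initial bootstrap CSRs
kubectl create clusterrolebinding auto-approve-csrs-for-bootstrap \
  --clusterrole=system:certificates.k8s.io:certificatesigningrequests:nodeclient \
  --user=kubelet-bootstrap
# Auto-approve certificate-renewal CSRs from nodes that have already joined
kubectl create clusterrolebinding auto-approve-renewals-for-nodes \
  --clusterrole=system:certificates.k8s.io:certificatesigningrequests:selfnodeclient \
  --group=system:nodes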

8.2 Deploy kube-proxy

kube-proxy runs on every node. It watches the apiserver for changes to Services and Endpoints and creates routing rules to load-balance service traffic.

  • Create the kube-proxy configuration file
vim /k8s/kubernetes/cfg/kube-proxy
KUBE_PROXY_OPTS="--logtostderr=true \
--v=4 \
--hostname-override=10.0.0.113 \
--cluster-cidr=10.254.0.0/16 \
--kubeconfig=/k8s/kubernetes/cfg/kube-proxy.kubeconfig"
  • Create the kube-proxy systemd unit file
vim /usr/lib/systemd/system/kube-proxy.service 

[Unit]
Description=Kubernetes Proxy
After=network.target

[Service]
EnvironmentFile=-/k8s/kubernetes/cfg/kube-proxy
ExecStart=/k8s/kubernetes/bin/kube-proxy $KUBE_PROXY_OPTS
Restart=on-failure

[Install]
WantedBy=multi-user.target
  • Start the service
sudo systemctl daemon-reload
sudo systemctl enable kube-proxy
sudo systemctl start kube-proxy
sudo systemctl status kube-proxy

Repeat the steps above on the other node.
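Once kubelet and kube-proxy are running and the CSRs are approved on both nodes, the master should list them:

kubectl get nodes    #10.0.0.113 and 10.0.0.56 should show STATUS Ready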

9. Install the flannel Network

By default there is no flanneld network, so pods on different nodes cannot communicate with each other; only pods on the same node can. flanneld is installed last here to keep the deployment steps tidy. The flanneld service must start before Docker. At startup it mainly does three things: fetches the network configuration from etcd, allocates a subnet and registers it in etcd, and records the subnet information in /run/flannel/subnet.env.

Perform the following operations on every node that joins the flannel network.

9.1 Register the Network Range in etcd

/k8s/etcd/bin/etcdctl --ca-file=/k8s/etcd/ssl/ca.pem --cert-file=/k8s/etcd/ssl/server.pem --key-file=/k8s/etcd/ssl/server-key.pem --endpoints="https://10.0.0.170:2379,https://10.0.0.113:2379,https://10.0.0.56:2379"  set /k8s/network/config  '{ "Network": "10.254.0.0/16", "Backend": {"Type": "vxlan"}}'
{ "Network": "10.254.0.0/16", "Backend": {"Type": "vxlan"}}
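To confirm the key was written, read it back with the same TLS flags:

/k8s/etcd/bin/etcdctl --ca-file=/k8s/etcd/ssl/ca.pem --cert-file=/k8s/etcd/ssl/server.pem --key-file=/k8s/etcd/ssl/server-key.pem --endpoints="https://10.0.0.170:2379,https://10.0.0.113:2379,https://10.0.0.56:2379" get /k8s/network/config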

9.2 Deploy flannel

  • Extract and install
tar -xvf flannel-v0.10.0-linux-amd64.tar.gz
mv flanneld mk-docker-opts.sh /k8s/kubernetes/bin/
  • Configure flanneld
vim /k8s/kubernetes/cfg/flanneld
FLANNEL_OPTIONS="--etcd-endpoints=https://10.0.0.170:2379,https://10.0.0.113:2379,https://10.0.0.56:2379 -etcd-cafile=/k8s/etcd/ssl/ca.pem -etcd-certfile=/k8s/etcd/ssl/server.pem -etcd-keyfile=/k8s/etcd/ssl/server-key.pem -etcd-prefix=/k8s/network"
  • Create the flanneld systemd unit file
vim /usr/lib/systemd/system/flanneld.service

[Unit]
Description=Flanneld overlay address etcd agent
After=network-online.target network.target
Before=docker.service

[Service]
Type=notify
EnvironmentFile=/k8s/kubernetes/cfg/flanneld
ExecStart=/k8s/kubernetes/bin/flanneld --ip-masq $FLANNEL_OPTIONS
ExecStartPost=/k8s/kubernetes/bin/mk-docker-opts.sh -k DOCKER_NETWORK_OPTIONS -d /run/flannel/subnet.env
Restart=on-failure

[Install]
WantedBy=multi-user.target
  • Configure Docker to start on the flannel subnet. Only two changes are needed in the unit file: EnvironmentFile=/run/flannel/subnet.env and ExecStart=/usr/bin/dockerd $DOCKER_NETWORK_OPTIONS.
vim /usr/lib/systemd/system/docker.service
[Unit]
Description=Docker Application Container Engine
Documentation=https://docs.docker.com
After=network-online.target firewalld.service
Wants=network-online.target

[Service]
Type=notify
EnvironmentFile=/run/flannel/subnet.env
ExecStart=/usr/bin/dockerd $DOCKER_NETWORK_OPTIONS
ExecReload=/bin/kill -s HUP $MAINPID
LimitNOFILE=infinity
LimitNPROC=infinity
LimitCORE=infinity
TimeoutStartSec=0
Delegate=yes
KillMode=process
Restart=on-failure
StartLimitBurst=3
StartLimitInterval=60s

[Install]
WantedBy=multi-user.target
  • Start the services

Note: stop Docker (and the kubelet that depends on it) before starting flannel, so that flannel can take over the docker0 bridge.

sudo systemctl daemon-reload
sudo systemctl stop docker
sudo systemctl start flanneld
sudo systemctl enable flanneld
sudo systemctl start docker

On the master node:

sudo systemctl restart kube-apiserver.service
sudo systemctl restart kube-controller-manager.service
sudo systemctl restart kube-scheduler.service

On the nodes:

sudo systemctl restart kubelet
sudo systemctl restart kube-proxy
  • Verify the services (a cross-node check follows the output below)
[centos@jiliguo k8s1.13]$ cat /run/flannel/subnet.env
DOCKER_OPT_BIP="--bip=10.254.55.1/24"
DOCKER_OPT_IPMASQ="--ip-masq=false"
DOCKER_OPT_MTU="--mtu=1450"
DOCKER_NETWORK_OPTIONS=" --bip=10.254.55.1/24 --ip-masq=false --mtu=1450"

[centos@jiliguo k8s1.13]$ ip a
5: docker0: <NO-CARRIER,BROADCAST,MULTICAST,UP> mtu 1500 qdisc noqueue state DOWN group default
    link/ether 02:42:d4:14:40:81 brd ff:ff:ff:ff:ff:ff
    inet 10.254.55.1/24 brd 10.254.55.255 scope global docker0
       valid_lft forever preferred_lft forever
    inet6 fe80::42:d4ff:fe14:4081/64 scope link
       valid_lft forever preferred_lft forever
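Cross-node networking can be sanity-checked too: every node gets a flannel.1 VXLAN interface in the registered 10.254.0.0/16 range, and each node's docker0 address should answer pings from its peers (substitute the docker0 address shown on the other node):

ip a show flannel.1
ping -c 3 10.254.55.1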

10. Cluster Test

Test the cluster by deploying an nginx service.

  • Create the nginx deployment
kubectl create deployment nginx --image=nginx
  • Create the nginx NodePort service
kubectl create service nodeport nginx --tcp 80:80
  • Check the nginx service
[centos@jiliguo ~]$ kubectl get service
NAME         TYPE        CLUSTER-IP       EXTERNAL-IP   PORT(S)        AGE
kubernetes   ClusterIP   10.254.0.1       <none>        443/TCP        25h
nginx        NodePort    10.254.219.203   <none>        80:31900/TCP   13m
  • Test the service
On a node, run:
curl 127.0.0.1:31900

Or open http://<node-ip>:31900 in a browser.
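It is also worth confirming the pod actually landed on a node and got an address from the flannel subnet:

kubectl get pods -o wide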

Issues

  • With an IP in the 192.168.9.x range, the etcd service would not start, repeatedly logging "cannot assign requested address"; switching to the host's actual ens0 address in the 10.0.0.x range fixed it. (etcd can only bind its listen URLs to addresses that are actually configured on a local interface.)

Reference: https://github.com/minminmsn/k8s1.13/blob/master/kubernetes/kubernetes1.13.1%2Betcd3.3.10%2Bflanneld0.10%E9%9B%86%E7%BE%A4%E9%83%A8%E7%BD%B2.md
