[kube-proxy] http://www.cnblogs.com/xuxinkun/p/5799986.html

[flannel]

  • Install Flannel
  [root@master ~]# cd ~/k8s
  [root@master ~]# wget https://raw.githubusercontent.com/coreos/flannel/master/Documentation/kube-flannel.yml
  [root@master ~]# kubectl apply -f kube-flannel.yml
  clusterrole "flannel" created
  clusterrolebinding "flannel" created
  serviceaccount "flannel" created
  configmap "kube-flannel-cfg" created
  daemonset "kube-flannel-ds" created
  • Specify the network interface
    If a host has multiple network interfaces, use the --iface flag in kube-flannel.yml to name the interface on the cluster's internal network; otherwise DNS resolution may fail. Download kube-flannel.yml locally and add --iface=<interface name> to the flanneld startup arguments.
  ......
  apiVersion: extensions/v1beta1
  kind: DaemonSet
  metadata:
    name: kube-flannel-ds
  ......
      containers:
      - name: kube-flannel
        image: quay.io/coreos/flannel:v0.9.0-amd64
        command: [ "/opt/bin/flanneld", "--ip-masq", "--kube-subnet-mgr", "--iface=eth1" ]
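The --iface edit above can also be applied with sed. A minimal sketch — a temp file stands in for the real kube-flannel.yml, and the pattern assumes the flanneld args sit on a single line, as in the snippet above:

```shell
# Patch --iface=eth1 into the flanneld command line. A temp file stands in
# for the downloaded kube-flannel.yml; point sed at the real file on a host.
f=$(mktemp)
cat <<'EOF' > "$f"
        command: [ "/opt/bin/flanneld", "--ip-masq", "--kube-subnet-mgr" ]
EOF
# Insert the extra argument after --kube-subnet-mgr.
sed -i 's/"--kube-subnet-mgr"/"--kube-subnet-mgr", "--iface=eth1"/' "$f"
# Verify the argument landed (prints 1 if exactly one line matched).
patched=$(grep -c -- '--iface=eth1' "$f")
echo "$patched"
rm -f "$f"
```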

k8s official docs:

  • kubectl, as everyone knows, is the command-line tool for interacting with the k8s API.
  • kubeadm is the command-line tool for standing up a k8s test environment.
  • kubelet is the important one: without kubelet, kubeadm can do nothing. kubelet is roughly the
    analogue of the nova-compute process in Nova (which manages VMs); it manages containers.
    Once kubelet is installed, a kubelet service appears on the system.

What kubeadm init does:

  • Runs system preflight checks
  • Generates a bootstrap token
  • Generates self-signed certificates
  • Generates kubeconfig files for talking to the API server
  • Generates manifests for the control-plane service containers and places them
    under /etc/kubernetes/manifests
  • Configures RBAC, and taints the master node so it only runs control-plane containers
  • Creates add-on services such as kube-proxy and kube-dns

After the install succeeds, you can look at the containers and pods that were created:

 docker ps | grep -v '/pause'

kubectl get pods --all-namespaces
Those familiar with k8s already know what the pause containers are for. When you create a pod
containing a single container, k8s quietly creates an extra container in the pod, the infra-container,
which initializes the pod's network and namespaces; the other containers in the pod then share that
network and those namespaces. Once network initialization is done, the infra-container simply sleeps
forever, until it receives SIGINT or SIGTERM.

kube-dns is shown stuck in Pending here because it cannot start until a pod network add-on is installed.

Revert k8s master and nodes

When I went on to install the network add-on, I found that calico requires extra arguments to kubeadm, so I rolled the kubeadm install back:

kubectl get nodes
kubectl drain lingxian-XXXX-kubeadm --delete-local-data --force --ignore-daemonsets
kubectl delete node lingxian-XXXX-kubeadm
kubeadm reset
kubeadm init --pod-network-cidr=192.168.0.0/16
  • On worker nodes
  kubeadm reset
  ifconfig cni0 down
  ip link delete cni0
  ifconfig flannel.1 down
  ip link delete flannel.1
  rm -rf /var/lib/cni/
[Kubeadm reset]
find /var/lib/kubelet | xargs -n 1 findmnt -n -t tmpfs -o TARGET -T | uniq | xargs -r umount -v;
rm -r -f /etc/kubernetes /var/lib/kubelet /var/lib/etcd;
kubeadm reset

Allow Pods to be scheduled on the Master

I only have one node, so I use the master directly as a worker:

root@lingxian-test-kubeadm:~# kubectl taint nodes --all node-role.kubernetes.io/master-
node "lingxian-test-kubeadm" untainted

Letting the Master node take workloads

In a kubeadm-initialized cluster, pods are not scheduled onto the Master node for security reasons. The following command lets the Master participate in workloads:

  kubectl taint nodes node1 node-role.kubernetes.io/master-
Run
kubectl run mynginx --image=nginx --expose --port 8088
kubectl delete deployment mynginx

kubectl delete svc mynginx

kubectl run unginx --image=nginx --expose --port 80

kubectl run centos --image=cu.eshore.cn/library/java:jdk8 --command -- vi
kubectl scale --replicas=4 deployment/centos

Adding a node to the Kubernetes cluster

  • Look up the token on the master
  kubeadm token list | grep authentication,signing | awk '{print $1}'
  • Look up the discovery-token-ca-cert-hash
  openssl x509 -pubkey -in /etc/kubernetes/pki/ca.crt | openssl rsa -pubin -outform der 2>/dev/null | openssl dgst -sha256 -hex | sed 's/^.* //'
  • Join the node to the cluster
  kubeadm join --token=a20844.654ef6410d60d465 --discovery-token-ca-cert-hash sha256:0c2dbe69a2721870a59171c6b5158bd1c04bc27665535ebf295c918a96de0bb1 master.k8s.samwong.im:6443
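The hash pipeline above can be exercised without a cluster. A sketch, assuming only that openssl is installed; the throwaway cert below stands in for /etc/kubernetes/pki/ca.crt:

```shell
# Generate a throwaway self-signed cert, then run the same pipeline used
# for --discovery-token-ca-cert-hash: extract the public key, DER-encode
# it, and take its sha256 digest.
tmpdir=$(mktemp -d)
openssl req -x509 -newkey rsa:2048 -nodes -days 1 -subj "/CN=kubernetes" \
  -keyout "$tmpdir/ca.key" -out "$tmpdir/ca.crt" 2>/dev/null
hash=$(openssl x509 -pubkey -in "$tmpdir/ca.crt" \
  | openssl rsa -pubin -outform der 2>/dev/null \
  | openssl dgst -sha256 -hex | sed 's/^.* //')
echo "sha256:$hash"
rm -rf "$tmpdir"
```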

The token has expired and been deleted; listing tokens on the Master comes back empty:

  kubeadm token list

  • Fix
    Regenerate the token. By default a token is valid for 24 hours; pass --ttl 0 when creating it to make it permanent.
  [root@master ~]# kubeadm token create --ttl 0
 
 [step by step] https://lingxiankong.github.io/2018-01-20-install-k8s.html
Time sync, hostnames, /etc/hosts, firewall, SELinux, passwordless SSH, and the docker-1.12.6 install were all configured beforehand.

https://anthonychu.ca/post/api-versioning-kubernetes-nginx-ingress/

[dashboard]

https://www.zybuluo.com/ncepuwanghui/note/953929

Grant the Dashboard account cluster-admin rights

Create a ServiceAccount named kubernetes-dashboard-admin, bind it to the cluster admin role, and save the definition as kubernetes-dashboard-admin.rbac.yaml.

kubectl create -f kubernetes-dashboard-admin.rbac.yaml

  • Look up the kubernetes-dashboard-admin token
  kubectl -n kube-system get secret | grep kubernetes-dashboard-admin
  kubectl describe -n kube-system secret/xxxxxxx
  • Look up the Dashboard service port
  kubectl get svc -n kube-system

[heapster]

[heapster] http://www.mamicode.com/info-detail-1715935.html

https://www.slahser.com/2016/11/18/k8s%E5%90%8E%E6%97%A5%E8%B0%88-Heaspter%E7%9B%91%E6%8E%A7/

Install Heapster to add usage statistics and monitoring to the cluster, and dashboards to the Dashboard UI.

  mkdir -p ~/k8s/heapster
  cd ~/k8s/heapster
  wget https://raw.githubusercontent.com/kubernetes/heapster/master/deploy/kube-config/influxdb/grafana.yaml
  wget https://raw.githubusercontent.com/kubernetes/heapster/master/deploy/kube-config/rbac/heapster-rbac.yaml
  wget https://raw.githubusercontent.com/kubernetes/heapster/master/deploy/kube-config/influxdb/heapster.yaml
  wget https://raw.githubusercontent.com/kubernetes/heapster/master/deploy/kube-config/influxdb/influxdb.yaml
  kubectl create -f ./

docker pull mirrorgooglecontainers/heapster-grafana-amd64:v4.4.3

docker tag mirrorgooglecontainers/heapster-grafana-amd64:v4.4.3 k8s.gcr.io/heapster-grafana-amd64:v4.4.3

docker pull mirrorgooglecontainers/heapster-amd64:v1.4.2

docker tag mirrorgooglecontainers/heapster-amd64:v1.4.2 k8s.gcr.io/heapster-amd64:v1.4.2

docker pull mirrorgooglecontainers/heapster-influxdb-amd64:v1.3.3

docker tag mirrorgooglecontainers/heapster-influxdb-amd64:v1.3.3 k8s.gcr.io/heapster-influxdb-amd64:v1.3.3


docker pull ist0ne/kubernetes-dashboard-amd64:v1.8.0

docker tag ist0ne/kubernetes-dashboard-amd64:v1.8.0 k8s.gcr.io/kubernetes-dashboard-amd64:v1.8.3

https://kubernetes.io/docs/setup/independent/create-cluster-kubeadm/

v1.8.4

kubectl replace --force -f

Fixing the "sysctl -p" error

Symptom: after editing the kernel settings with vi /etc/sysctl.conf, running sysctl -p fails with:

error: "net.bridge.bridge-nf-call-ip6tables" is an unknown key
error: "net.bridge.bridge-nf-call-iptables" is an unknown key
error: "net.bridge.bridge-nf-call-arptables" is an unknown key

Fix:

modprobe bridge
lsmod | grep bridge
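Putting the fix together as a sketch — assumptions: a temp file stands in for /etc/sysctl.conf so the sketch runs unprivileged, and the modprobe/sysctl steps that need root are left as comments (on newer kernels the keys are provided by br_netfilter rather than bridge):

```shell
# As root on the real host, load the module that provides the net.bridge
# keys first:
#   modprobe br_netfilter 2>/dev/null || modprobe bridge
# Then make the settings persistent. A temp file stands in for
# /etc/sysctl.conf here.
conf=$(mktemp)
cat <<'EOF' > "$conf"
net.bridge.bridge-nf-call-iptables = 1
net.bridge.bridge-nf-call-ip6tables = 1
net.bridge.bridge-nf-call-arptables = 1
EOF
# As root: sysctl -p "$conf"
keys=$(grep -c '^net\.bridge\.bridge-nf-call' "$conf")
echo "$keys"
rm -f "$conf"
```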

Environment="KUBELET_NETWORK_ARGS=--network-plugin=cni --cni-conf-dir=/etc/cni/net.d --cni-bin-dir=/opt/cni/bin"

https://www.zybuluo.com/ncepuwanghui/note/953929
[gcr.io needs a proxy; pull from a mirror and re-tag]

docker tag registry.cn-hangzhou.aliyuncs.com/google_containers/k8s-dns-dnsmasq-nanny-amd64:1.14.5 gcr.io/google_containers/k8s-dns-dnsmasq-nanny-amd64:1.14.5

Just checked, and the two are indeed different: docker's cgroup driver is cgroupfs. Changed the conf file to cgroupfs to match and restarted; kubelet now runs fine, but then it got stuck again at ...

Skipped (due to dependency problems):
  docker-engine.x86_64 0:1.12.6-1.el7.centos
  docker-engine-selinux.noarch 0:17.05.0.ce-1.el7.centos
  libtool-ltdl.x86_64 0:2.4.2-22.el7_3



images=(kube-apiserver-amd64:v1.8.4 kube-controller-manager-amd64:v1.8.4 kube-scheduler-amd64:v1.8.4 kube-proxy-amd64:v1.8.4 etcd-amd64:3.0.17 pause-amd64:3.0 k8s-dns-sidecar-amd64:1.14.5 k8s-dns-kube-dns-amd64:1.14.5 k8s-dns-dnsmasq-nanny-amd64:1.14.5 flannel:v0.9.1-amd64)

for imageName in ${images[@]} ; do
  docker pull mritd/$imageName
  docker tag mritd/$imageName gcr.io/google_containers/$imageName
  docker rmi mritd/$imageName
done

Problem 2

In a new terminal, /var/log/messages shows the following error:

Aug 12 23:40:10 cu3 kubelet: error: failed to run Kubelet: failed to create kubelet: misconfiguration: kubelet cgroup driver: "systemd" is different from docker cgroup driver: "cgroupfs"

docker and kubelet are using different cgroup drivers, so modify the kubelet config. While at it, change the docker startup arguments for ip-masq too.

[root@cu3 ~]# sed -i 's/KUBELET_CGROUP_ARGS=--cgroup-driver=systemd/KUBELET_CGROUP_ARGS=--cgroup-driver=cgroupfs/' /etc/systemd/system/kubelet.service.d/10-kubeadm.conf
[root@cu3 ~]# sed -i 's#/usr/bin/dockerd.*#/usr/bin/dockerd --ip-masq=false#' /usr/lib/systemd/system/docker.service
[root@cu3 ~]# systemctl daemon-reload; systemctl restart docker kubelet

Note: with --ip-masq=false, docker0 can no longer reach external networks, i.e. standalone docker containers lose outside connectivity!

ExecStart=/usr/bin/dockerd --ip-masq=false

Node firewall rules (these are cloud hosts, so the firewall must be opened):

firewall-cmd --zone=trusted --add-source=192.168.0.0/16 --permanent
firewall-cmd --zone=trusted --add-source=10.0.0.0/8 --permanent
firewall-cmd --complete-reload

[download k8s]

https://github.com/kubernetes/kubernetes/blob/master/CHANGELOG-1.8.md#v181

[k8s metrics]

http://www.cnblogs.com/iiiiher/p/7999761.html

[k8s cookbook]

https://github.com/kubernetes/kubernetes/tree/master/examples/guestbook

[infra esta]

http://blog.csdn.net/qq_32971807/article/details/54693254

[rancher]

https://www.cnrancher.com/rancher-k8s-accelerate-installation-document/

docker run -d --restart always --name rancher-server -p 6080:8080 rancher/server && docker logs -f rancher-server

[k8s 1.6.1 install]

https://www.jianshu.com/p/8ce11f947410

[CNI]

https://segmentfault.com/a/1190000008803805

http://www.cnblogs.com/whtydn/p/4353695.html

(+++)https://www.jianshu.com/p/a2039a8855ec

(!!!!!)https://www.cnblogs.com/liangDream/p/7358847.html

http://ju.outofmemory.cn/entry/231591

https://segmentfault.com/a/1190000007074726

http://www.infoq.com/cn/articles/centos7-practical-kubernetes-deployment

http://www.cnblogs.com/zhenyuyaodidiao/p/6500720.html
(kernel concept)

https://www.jianshu.com/p/8d3204b96cf9

http://blog.csdn.net/hackstoic/article/details/50574886
(mesos)
