(This post contains many errors; do not use it as a reference!)
Reference URL:
https://github.com/projectcalico/calico-docker/blob/master/docs/kubernetes/KubernetesIntegration.md
 
I have 3 hosts: 10.11.151.97, 10.11.151.100, 10.11.151.101. Unfortunately, none of the 3 hosts has internet access. Following the guide, I built the Kubernetes cluster in 'bash command' mode rather than the 'service mode' described in the reference.
10.11.151.97 is the Kubernetes master; the other two are its nodes.
 

1, Run Etcd Cluster

etcd_token=kb3-etcd-cluster
local_name=kbetcd0
local_ip=10.11.151.97
local_peer_port=4010
local_client_port1=4011
local_client_port2=4012
node1_name=kbetcd1
node1_ip=10.11.151.100
node1_port=4010
node2_name=kbetcd2
node2_ip=10.11.151.101
node2_port=4010

./etcd -name $local_name \
-initial-advertise-peer-urls http://$local_ip:$local_peer_port \
-listen-peer-urls http://0.0.0.0:$local_peer_port \
-listen-client-urls http://0.0.0.0:$local_client_port1,http://0.0.0.0:$local_client_port2 \
-advertise-client-urls http://$local_ip:$local_client_port1,http://$local_ip:$local_client_port2 \
-initial-cluster-token $etcd_token \
-initial-cluster $local_name=http://$local_ip:$local_peer_port,$node1_name=http://$node1_ip:$node1_port,$node2_name=http://$node2_ip:$node2_port \
-initial-cluster-state new &

  

Run etcd with this command on each host (changing local_name, local_ip, and the node*_name/node*_ip values accordingly), since etcd must run in cluster mode. If it succeeds, you should see 'published {Name: *} to cluster *' in the output.
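To double-check that the cluster actually formed, here is a minimal sketch using etcdctl (assuming the etcdctl binary shipped with this etcd release and its v2 API; the client port 4011 matches the -listen-client-urls above):

# query cluster health through the local client port; should print "cluster is healthy"
./etcdctl --peers http://127.0.0.1:4011 cluster-health
# list the members; kbetcd0, kbetcd1 and kbetcd2 should all appear
./etcdctl --peers http://127.0.0.1:4011 member list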
 

2, Setup Master

2.1 Start Kubernetes

Run kube-apiserver:
./kube-apiserver --logtostderr=true --v=0 --etcd_servers=http://127.0.0.1:4012 --kubelet_port=10250 --allow_privileged=false --service-cluster-ip-range=172.16.0.0/12 --insecure-bind-address=0.0.0.0 --insecure-port=8080 > apiserver.out 2>&1 &
Run kube-controller-manager:
./kube-controller-manager --logtostderr=true --v=0 --master=http://tc-151-97:8080 --cloud-provider="" > controller.out 2>&1 &

Run kube-scheduler:

./kube-scheduler --logtostderr=true --v=0 --master=http://tc-151-97:8080 > scheduler.out 2>&1 &
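A quick way to confirm the master components came up (a hedged sketch; the insecure port 8080 comes from the --insecure-port flag above, and componentstatuses is a standard kubectl resource):

# the apiserver health endpoint should return "ok"
curl http://127.0.0.1:8080/healthz
# scheduler, controller-manager and etcd should all report Healthy
./kubectl -s http://127.0.0.1:8080 get componentstatuses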

2.2 Install Calico on Master

sudo ETCD_AUTHORITY=127.0.0.1:4011 ./calicoctl node
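If this succeeds, a calico/node container should be left running; a quick check (using only commands that also appear later in this post):

# the calico/node container should show up under the name "calico-node"
sudo docker ps | grep calico-node
# and calicoctl should report the node as running
sudo ETCD_AUTHORITY=127.0.0.1:4011 ./calicoctl status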

  

3, Setup Nodes

3.1 Install calico

Since the nodes have no internet access, I downloaded the calico plugin manually from:
https://github.com/projectcalico/calico-kubernetes/releases/tag/v0.6.0

Move the plugin to the kubernetes plugin directory:

sudo mv calico_kubernetes /usr/libexec/kubernetes/kubelet-plugins/net/exec/calico/calico
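One detail worth checking here (my own assumption, not something from the Calico guide): the kubelet only invokes exec network plugins that are executable, so make sure the binary keeps its execute bit after the move:

# the exec plugin must be executable for the kubelet to use it
sudo chmod +x /usr/libexec/kubernetes/kubelet-plugins/net/exec/calico/calico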

Start the calico:

sudo ETCD_AUTHORITY=127.0.0.1:4011 ./calicoctl node

3.2 Start kubelet with calico network:

Start kube-proxy, then the kubelet with the --network-plugin parameter:

./kube-proxy --logtostderr=true --v=0 --master=http://tc-151-97:8080 --proxy-mode=iptables &

./kubelet --logtostderr=true --v=0 --api_servers=http://tc-151-97:8080 --address=0.0.0.0 --network-plugin=calico --allow_privileged=false --pod-infra-container-image=10.11.150.76:5000/kubernetes/pause:latest &

Here is the kubelet command output:

I1124 15:11:52.226324 28368 server.go:808] Watching apiserver
I1124 15:11:52.393448 28368 plugins.go:56] Registering credential provider: .dockercfg
I1124 15:11:52.398087 28368 server.go:770] Started kubelet
E1124 15:11:52.398190 28368 kubelet.go:756] Image garbage collection failed: unable to find data for container /
I1124 15:11:52.398165 28368 server.go:72] Starting to listen on 0.0.0.0:10250
W1124 15:11:52.401695 28368 kubelet.go:775] Failed to move Kubelet to container "/kubelet": write /sys/fs/cgroup/memory/kubelet/memory.swappiness: invalid argument
I1124 15:11:52.401748 28368 kubelet.go:777] Running in container "/kubelet"
I1124 15:11:52.497377 28368 factory.go:194] System is using systemd
I1124 15:11:52.610946 28368 kubelet.go:885] Node tc-151-100 was previously registered
I1124 15:11:52.734788 28368 factory.go:236] Registering Docker factory
I1124 15:11:52.735851 28368 factory.go:93] Registering Raw factory
I1124 15:11:52.969060 28368 manager.go:1006] Started watching for new ooms in manager
I1124 15:11:52.969114 28368 oomparser.go:199] OOM parser using kernel log file: "/var/log/messages"
I1124 15:11:52.970296 28368 manager.go:250] Starting recovery of all containers
I1124 15:11:53.148967 28368 manager.go:255] Recovery completed
I1124 15:11:53.240408 28368 manager.go:104] Starting to sync pod status with apiserver
I1124 15:11:53.240439 28368 kubelet.go:1953] Starting kubelet main sync loop.

  

I do not know whether the kubelet is running correctly. Can someone tell me how to verify it?
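For what it's worth, here is how I would check it (a hedged sketch; 10248 is the kubelet's default --healthz-port, which these commands assume was not changed):

# on the master: the node should appear and report Ready
./kubectl get nodes
# on the node itself: the kubelet health endpoint should return "ok"
curl http://127.0.0.1:10248/healthz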
 
I repeated the same steps on the other node (10.11.151.101).
 

4, Create Some Pods and Test

The test.yaml file:

apiVersion: v1
kind: ReplicationController
metadata:
  name: test-1
spec:
  replicas: 1
  template:
    metadata:
      labels:
        app: test-1
    spec:
      containers:
      - name: iperf
        image: 10.11.150.76:5000/openxxs/iperf:1.2
      nodeSelector:
        kubernetes.io/hostname: tc-151-100
---
apiVersion: v1
kind: ReplicationController
metadata:
  name: test-2
spec:
  replicas: 1
  template:
    metadata:
      labels:
        app: test-2
    spec:
      containers:
      - name: iperf
        image: 10.11.150.76:5000/openxxs/iperf:1.2
      nodeSelector:
        kubernetes.io/hostname: tc-151-100
---
apiVersion: v1
kind: ReplicationController
metadata:
  name: test-3
spec:
  replicas: 1
  template:
    metadata:
      labels:
        app: test-3
    spec:
      containers:
      - name: iperf
        image: 10.11.150.76:5000/openxxs/iperf:1.2
      nodeSelector:
        kubernetes.io/hostname: tc-151-101
---
apiVersion: v1
kind: ReplicationController
metadata:
  name: test-4
spec:
  replicas: 1
  template:
    metadata:
      labels:
        app: test-4
    spec:
      containers:
      - name: iperf
        image: 10.11.150.76:5000/openxxs/iperf:1.2
      nodeSelector:
        kubernetes.io/hostname: tc-151-101

./kubectl create -f test.yaml

This command creates 4 pods: 2 on 10.11.151.100 and 2 on 10.11.151.101.

[@tc_151_97 /home/domeos/openxxs/bin]# ./kubectl get pods
NAME           READY     STATUS    RESTARTS   AGE
test-1-1ztr2   1/1       Running   0          5m
test-2-8p2sr   1/1       Running   0          5m
test-3-1hkwa   1/1       Running   0          5m
test-4-jbdbq   1/1       Running   0          5m

  

[@tc-151-100 /home/domeos/openxxs/bin]# docker ps
CONTAINER ID IMAGE COMMAND CREATED STATUS PORTS NAMES
6dfc83ec1d12 10.11.150.76:5000/openxxs/iperf:1.2 "/block" 6 minutes ago Up 6 minutes k8s_iperf.a4ede594_test-1-1ztr2_default_f1b54d0b-927c-11e5-a77a-782bcb435e46_ca4496d0
78087a93da00 10.11.150.76:5000/openxxs/iperf:1.2 "/block" 6 minutes ago Up 6 minutes k8s_iperf.a4ede594_test-2-8p2sr_default_f1c2da7d-927c-11e5-a77a-782bcb435e46_330d815c
f80a1474f4c4 10.11.150.76:5000/kubernetes/pause:latest "/pause" 6 minutes ago Up 6 minutes k8s_POD.34f4dfd2_test-2-8p2sr_default_f1c2da7d-927c-11e5-a77a-782bcb435e46_af7199c0
eb14879757e6 10.11.150.76:5000/kubernetes/pause:latest "/pause" 6 minutes ago Up 6 minutes k8s_POD.34f4dfd2_test-1-1ztr2_default_f1b54d0b-927c-11e5-a77a-782bcb435e46_af2cc1c3
8accff535ff9 calico/node:latest "/sbin/start_runit" 27 minutes ago Up 27 minutes calico-node
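Since the pods run an iperf image, a rough cross-node connectivity check could look like this (hedged: the container ID comes from the docker ps output above, the pod IP of test-3 has to be looked up first, and this assumes ping exists inside the iperf image):

# on the master: find the IP that was assigned to a pod on the other node
./kubectl describe pod test-3-1hkwa | grep IP
# on 10.11.151.100: ping that IP from inside the test-1 container
sudo docker exec 6dfc83ec1d12 ping -c 3 <ip-of-test-3>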
On node 10.11.151.100, the calico status is:
[@tc-151-100 ~/baoquanwang/calico-docker-utils]$ sudo ETCD_AUTHORITY=127.0.0.1:4011 ./calicoctl status
calico-node container is running. Status: Up 24 minutes
Running felix version 1.2.0

IPv4 BGP status
+---------------+-------------------+-------+----------+------------------------------------+
| Peer address  |     Peer type     | State |  Since   |                Info                |
+---------------+-------------------+-------+----------+------------------------------------+
| 10.11.151.101 | node-to-node mesh | start | 07:18:44 | Connect Socket: Connection refused |
| 10.11.151.97  | node-to-node mesh | start | 07:07:40 | Active Socket: Connection refused  |
+---------------+-------------------+-------+----------+------------------------------------+

IPv6 BGP status
+--------------+-----------+-------+-------+------+
| Peer address | Peer type | State | Since | Info |
+--------------+-----------+-------+-------+------+
+--------------+-----------+-------+-------+------+
However, on the other node, 10.11.151.101:
[@tc-151-101 ~/baoquanwang/calico-docker-utils]$ sudo ETCD_AUTHORITY=127.0.0.1:4011 ./calicoctl status
calico-node container is running. Status: Up 2 minutes
Running felix version 1.2.0

IPv4 BGP status
Unable to connect to server control socket (/etc/service/bird/bird.ctl): Connection refused

IPv6 BGP status
+--------------+-----------+-------+-------+------+
| Peer address | Peer type | State | Since | Info |
+--------------+-----------+-------+-------+------+
+--------------+-----------+-------+-------+------+

What has happened?

 
Moreover, there is no calico IP route on either node:
[@tc-151-100 ~/baoquanwang/calico-docker-utils]$ ip route
default via 10.11.151.254 dev em1 proto static metric 1024
10.11.151.0/24 dev em1 proto kernel scope link src 10.11.151.100
172.17.0.0/16 dev docker0 proto kernel scope link src 172.17.42.1
[@tc-151-101 ~/baoquanwang/calico-docker-utils]$ ip route
default via 10.11.151.254 dev em1 proto static metric 1024
10.11.151.0/24 dev em1 proto kernel scope link src 10.11.151.101
172.17.0.0/16 dev docker0 proto kernel scope link src 172.17.42.1
There is no log output in /var/log/calico/kubernetes/calico.log.
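The 'Connection refused' peer states and the missing bird control socket suggest that either BIRD never started inside the calico-node container, or BGP traffic between the hosts is blocked. A few checks I would try (my own assumptions; 179 is the standard BGP port used by Calico's node-to-node mesh, and the bird path comes from the error message above):

# is the BGP port reachable between the nodes?
nc -zv 10.11.151.101 179
nc -zv 10.11.151.97 179
# did bird come up inside the calico-node container at all?
sudo docker exec calico-node ls -l /etc/service/bird/
# host firewalls are a common culprit; check whether iptables filters port 179
sudo iptables -L -n | grep 179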
 
 
 
 
 
 
