1. Installation methods

1)  Traditional method: all of the following components run at the system level (installed via yum or rpm packages) as system-level daemons.

2)  kubeadm method: the master and node components all run as pod containers; Kubernetes itself is hosted in pods.

  a)         master/nodes

  Install kubelet, kubeadm, and docker

  b)         master: kubeadm init (initializes the cluster)

  c)         nodes: kubeadm join
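The three steps above amount to the following outline (the IP, token, and hash are placeholders, not values from this cluster):

```shell
# On every machine (master and nodes): install the components
yum install -y docker-ce kubelet kubeadm kubectl
systemctl enable --now docker kubelet

# On the master only: initialize the control plane
kubeadm init

# On each node: join with the token that `kubeadm init` printed
kubeadm join <master-ip>:6443 --token <token> --discovery-token-ca-cert-hash sha256:<hash>
```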

2. Cluster plan

Host         IP

k8smaster    192.168.2.19

k8snode01    192.168.2.21

k8snode02    192.168.2.5
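If DNS is not available, the plan above is typically mirrored into /etc/hosts on every machine (this snippet is an addition, not part of the original steps):

```shell
cat >> /etc/hosts <<'EOF'
192.168.2.19 k8smaster
192.168.2.21 k8snode01
192.168.2.5  k8snode02
EOF
```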

Versions:

docker: 18.06.1.ce-3.el7

OS: CentOS release 7

kubernetes: 1.11.2

kubeadm 1.11.2

kubectl 1.11.2

kubelet 1.11.2

etcdctl: 3.2.15

3. Host setup

3.1. Disable the swap partition

swapoff -a

sed -i 's/.*swap.*/#&/' /etc/fstab
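A quick check that both commands took effect (added here for convenience):

```shell
free -m | grep -i swap     # the Swap line should read all zeroes after swapoff
swapon -s                  # prints nothing when no swap device is active
```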

3.2. Initialize the system

Disable SELinux and the firewall

sed -i '/SELINUX/s/enforcing/disabled/' /etc/selinux/config
setenforce 0
systemctl disable firewalld
systemctl stop firewalld

Allow forwarding through the firewall

iptables -P FORWARD ACCEPT

Install required and commonly used tools

yum install -y epel-release vim vim-enhanced lrzsz unzip ntpdate sysstat dstat wget mlocate mtr lsof iotop bind-utils git net-tools

Other settings

# Time settings
timedatectl set-timezone Asia/Shanghai
ntpdate ntp1.aliyun.com
rm -f /etc/localtime
ln -s /usr/share/zoneinfo/Asia/Shanghai /etc/localtime
# Add timestamps to shell history
systemctl restart rsyslog
echo 'export HISTTIMEFORMAT="%m/%d %T "' >> ~/.bashrc
source ~/.bashrc
# Raise file and process limits
cat >> /etc/security/limits.conf << EOF
* soft nproc 65535
* hard nproc 65535
* soft nofile 65535
* hard nofile 65535
EOF
echo "ulimit -SHn 65535" >> /etc/profile
echo "ulimit -SHn 65535" >> /etc/rc.local
ulimit -SHn 65535
sed -i 's/4096/10240/' /etc/security/limits.d/20-nproc.conf
modprobe ip_conntrack
modprobe br_netfilter
cat >> /etc/rc.d/rc.local <<EOF
modprobe ip_conntrack
modprobe br_netfilter
EOF
chmod +x /etc/rc.d/rc.local
# Kernel parameters
cat <<EOF >> /etc/sysctl.conf
net.bridge.bridge-nf-call-ip6tables = 1
net.bridge.bridge-nf-call-iptables = 1
net.ipv6.conf.all.disable_ipv6 = 1
net.ipv6.conf.default.disable_ipv6 = 1
net.ipv6.conf.lo.disable_ipv6 = 1
vm.swappiness = 0
vm.overcommit_memory=1
vm.panic_on_oom=0
kernel.panic=10
kernel.panic_on_oops=1
kernel.pid_max = 4194303
vm.max_map_count = 262144
fs.aio-max-nr = 1024000
fs.file-max = 6553600
EOF
sysctl -p /etc/sysctl.conf

# vim /etc/sysconfig/kubelet
# Cloud environments have no swap partition by default; disable kubelet's swap check
KUBELET_EXTRA_ARGS="--fail-swap-on=false"
# Set the kube-proxy mode to ipvs
KUBE_PROXY_MODE=ipvs
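Setting KUBE_PROXY_MODE=ipvs only takes effect if the IPVS kernel modules are available; loading them explicitly is a common companion step (an addition, not from the original):

```shell
# Load the IPVS-related kernel modules that kube-proxy's ipvs mode relies on
for mod in ip_vs ip_vs_rr ip_vs_wrr ip_vs_sh nf_conntrack_ipv4; do
    modprobe $mod
done
# Confirm they loaded
lsmod | grep -E 'ip_vs|nf_conntrack_ipv4'
```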

4. Configure the yum repos on the master

cd /etc/yum.repos.d/
wget https://mirrors.aliyun.com/docker-ce/linux/centos/docker-ce.repo

vim kubernetes.repo
[kubernetes]
name=Kubernetes Repo
baseurl=https://mirrors.aliyun.com/kubernetes/yum/repos/kubernetes-el7-x86_64/
gpgkey=https://mirrors.aliyun.com/kubernetes/yum/doc/yum-key.gpg
enabled=1

wget https://mirrors.aliyun.com/kubernetes/yum/doc/yum-key.gpg
wget https://mirrors.aliyun.com/kubernetes/yum/doc/rpm-package-key.gpg
rpm --import yum-key.gpg
rpm --import rpm-package-key.gpg
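The same kubernetes.repo file can be written non-interactively with a heredoc (same content as the vim session above), which is easier to script:

```shell
cat > /etc/yum.repos.d/kubernetes.repo <<'EOF'
[kubernetes]
name=Kubernetes Repo
baseurl=https://mirrors.aliyun.com/kubernetes/yum/repos/kubernetes-el7-x86_64/
gpgkey=https://mirrors.aliyun.com/kubernetes/yum/doc/yum-key.gpg
enabled=1
EOF
```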

4.1. Copy to the nodes

This example is a lab setup. In a production environment with distribution tools such as SaltStack or Ansible available, use those for this step instead.

1) On the master, generate a key pair with the following commands:

node01:
mkdir /root/.ssh -p
node02:
mkdir /root/.ssh -p
master:
ssh-keygen -t rsa
scp .ssh/id_rsa.pub node01:/root/.ssh/authorized_keys
scp .ssh/id_rsa.pub node02:/root/.ssh/authorized_keys

2) Distribute the repo files and GPG keys

scp /etc/yum.repos.d/kubernetes.repo /etc/yum.repos.d/docker-ce.repo node01:/etc/yum.repos.d/
scp /etc/yum.repos.d/kubernetes.repo /etc/yum.repos.d/docker-ce.repo node02:/etc/yum.repos.d/
scp /etc/yum.repos.d/yum-key.gpg /etc/yum.repos.d/rpm-package-key.gpg node01:/root
scp /etc/yum.repos.d/yum-key.gpg /etc/yum.repos.d/rpm-package-key.gpg node02:/root

4.2. Run on the nodes

rpm --import yum-key.gpg

rpm --import rpm-package-key.gpg

5. Install Kubernetes on the master

yum install docker-ce kubelet-1.11.2 kubeadm-1.11.2 kubectl-1.11.2 -y

5.1. Configure image sources

After installation the cluster has to be initialized, which requires downloading the control-plane images.

Because the Google registry (k8s.gcr.io) is unreachable from mainland China, there are two workarounds:

1) Download from another registry, then re-tag the images.

A. Run the following shell script:

#!/bin/bash
images=(kube-proxy-amd64:v1.11.2 kube-scheduler-amd64:v1.11.2 kube-controller-manager-amd64:v1.11.2 kube-apiserver-amd64:v1.11.2
etcd-amd64:3.2.18 coredns:1.1.3 pause-amd64:3.1 kubernetes-dashboard-amd64:v1.8.3 k8s-dns-sidecar-amd64:1.14.10 k8s-dns-kube-dns-amd64:1.14.10
k8s-dns-dnsmasq-nanny-amd64:1.14.10 )
for imageName in ${images[@]} ; do
docker pull registry.cn-hangzhou.aliyuncs.com/k8sth/$imageName
docker tag registry.cn-hangzhou.aliyuncs.com/k8sth/$imageName k8s.gcr.io/$imageName
docker rmi registry.cn-hangzhou.aliyuncs.com/k8sth/$imageName
done
# kubeadm also looks for the pause image under the name k8s.gcr.io/pause:3.1
docker tag k8s.gcr.io/pause-amd64:3.1 k8s.gcr.io/pause:3.1
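Instead of maintaining the image list by hand, kubeadm 1.11 can itself report (and pre-pull) exactly the images it needs, which is a handy way to double-check the tags for a given version:

```shell
kubeadm config images list --kubernetes-version v1.11.2
kubeadm config images pull --kubernetes-version v1.11.2
```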

B. Pull from the mirrorgooglecontainers mirror on Docker Hub

docker pull mirrorgooglecontainers/kube-apiserver-amd64:v1.11.2
docker pull mirrorgooglecontainers/kube-controller-manager-amd64:v1.11.2
docker pull mirrorgooglecontainers/kube-scheduler-amd64:v1.11.2
docker pull mirrorgooglecontainers/kube-proxy-amd64:v1.11.2
docker pull mirrorgooglecontainers/pause:3.1
docker pull mirrorgooglecontainers/etcd-amd64:3.2.18
docker pull coredns/coredns:1.1.3
docker tag docker.io/mirrorgooglecontainers/kube-proxy-amd64:v1.11.2 k8s.gcr.io/kube-proxy-amd64:v1.11.2
docker tag docker.io/mirrorgooglecontainers/kube-scheduler-amd64:v1.11.2 k8s.gcr.io/kube-scheduler-amd64:v1.11.2
docker tag docker.io/mirrorgooglecontainers/kube-apiserver-amd64:v1.11.2 k8s.gcr.io/kube-apiserver-amd64:v1.11.2
docker tag docker.io/mirrorgooglecontainers/kube-controller-manager-amd64:v1.11.2 k8s.gcr.io/kube-controller-manager-amd64:v1.11.2
docker tag docker.io/mirrorgooglecontainers/etcd-amd64:3.2.18 k8s.gcr.io/etcd-amd64:3.2.18
docker tag docker.io/mirrorgooglecontainers/pause:3.1 k8s.gcr.io/pause:3.1
docker tag docker.io/coredns/coredns:1.1.3 k8s.gcr.io/coredns:1.1.3

C. Pull from Aliyun (note: these are v1.10.0 images; adjust the tags to match your target version)

docker pull registry.cn-shenzhen.aliyuncs.com/cp_m/kube-apiserver-amd64:v1.10.0
docker pull registry.cn-shenzhen.aliyuncs.com/cp_m/kube-scheduler-amd64:v1.10.0
docker pull registry.cn-shenzhen.aliyuncs.com/cp_m/kube-controller-manager-amd64:v1.10.0
docker pull registry.cn-shenzhen.aliyuncs.com/cp_m/kube-proxy-amd64:v1.10.0
docker pull registry.cn-shenzhen.aliyuncs.com/cp_m/k8s-dns-kube-dns-amd64:1.14.8
docker pull registry.cn-shenzhen.aliyuncs.com/cp_m/k8s-dns-dnsmasq-nanny-amd64:1.14.8
docker pull registry.cn-shenzhen.aliyuncs.com/cp_m/k8s-dns-sidecar-amd64:1.14.8
docker pull registry.cn-shenzhen.aliyuncs.com/cp_m/etcd-amd64:3.1.12
docker pull registry.cn-shenzhen.aliyuncs.com/cp_m/flannel:v0.10.0-amd64
docker pull registry.cn-shenzhen.aliyuncs.com/cp_m/pause-amd64:3.1
docker tag registry.cn-shenzhen.aliyuncs.com/cp_m/kube-apiserver-amd64:v1.10.0 k8s.gcr.io/kube-apiserver-amd64:v1.10.0
docker tag registry.cn-shenzhen.aliyuncs.com/cp_m/kube-scheduler-amd64:v1.10.0 k8s.gcr.io/kube-scheduler-amd64:v1.10.0
docker tag registry.cn-shenzhen.aliyuncs.com/cp_m/kube-controller-manager-amd64:v1.10.0 k8s.gcr.io/kube-controller-manager-amd64:v1.10.0
docker tag registry.cn-shenzhen.aliyuncs.com/cp_m/kube-proxy-amd64:v1.10.0 k8s.gcr.io/kube-proxy-amd64:v1.10.0
docker tag registry.cn-shenzhen.aliyuncs.com/cp_m/k8s-dns-kube-dns-amd64:1.14.8 k8s.gcr.io/k8s-dns-kube-dns-amd64:1.14.8
docker tag registry.cn-shenzhen.aliyuncs.com/cp_m/k8s-dns-dnsmasq-nanny-amd64:1.14.8 k8s.gcr.io/k8s-dns-dnsmasq-nanny-amd64:1.14.8
docker tag registry.cn-shenzhen.aliyuncs.com/cp_m/k8s-dns-sidecar-amd64:1.14.8 k8s.gcr.io/k8s-dns-sidecar-amd64:1.14.8
docker tag registry.cn-shenzhen.aliyuncs.com/cp_m/etcd-amd64:3.1.12 k8s.gcr.io/etcd-amd64:3.1.12
docker tag registry.cn-shenzhen.aliyuncs.com/cp_m/flannel:v0.10.0-amd64 quay.io/coreos/flannel:v0.10.0-amd64
docker tag registry.cn-shenzhen.aliyuncs.com/cp_m/pause-amd64:3.1 k8s.gcr.io/pause-amd64:3.1

2) Use a proxy server

vim /usr/lib/systemd/system/docker.service
# add under the [Service] section:
Environment="HTTPS_PROXY=$PROXY_SERVER_IP:PORT"
Environment="NO_PROXY=127.0.0.0/8,192.168.0.0/16"

5.2. Configure a registry mirror

Docker itself and many Chinese cloud providers offer registry mirror ("accelerator") services inside China.

If images cannot be pulled through the configured mirror, switch to another mirror address.

Since every major domestic cloud provider offers such a service, it is best to pick the one matching the platform Docker runs on. For the Aliyun mirror and other options see: https://yeasy.gitbooks.io/docker_practice/content/install/mirror.html

The mirror is configured by editing the daemon configuration file /etc/docker/daemon.json:

sudo mkdir -p /etc/docker
sudo tee /etc/docker/daemon.json <<-'EOF'
{
"registry-mirrors": ["https://registry-mirror.qiniu.com","https://zsmigm0p.mirror.aliyuncs.com"]
}
EOF
sudo systemctl daemon-reload
sudo systemctl restart docker
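A malformed daemon.json prevents dockerd from starting, so it is worth validating the file and then confirming the mirror is active (this check is an addition to the original steps):

```shell
python -m json.tool /etc/docker/daemon.json      # fails loudly on a JSON syntax error
docker info 2>/dev/null | grep -A 3 'Registry Mirrors'
```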

5.3. Restart services

systemctl daemon-reload
systemctl restart docker.service

Enable the Kubernetes components at boot

systemctl enable kubelet

systemctl enable docker

5.4. Initialize the master node

kubeadm init --kubernetes-version=v1.11.2 --pod-network-cidr=10.244.0.0/16 --service-cidr=10.96.0.0/12 --ignore-preflight-errors=all

A walkthrough of the output:

[init] using Kubernetes version: v1.11.2
[preflight] running pre-flight checks
I0829 ::00.722283 kernel_validator.go:] Validating kernel version
I0829 ::00.722398 kernel_validator.go:] Validating kernel config
[WARNING SystemVerification]: docker version is greater than the most recently validated version. Docker version: 18.06.1-ce. Max validated version: 17.03
[preflight/images] Pulling images required for setting up a Kubernetes cluster
[preflight/images] This might take a minute or two, depending on the speed of your internet connection
[preflight/images] You can also perform this action in beforehand using 'kubeadm config images pull'
[kubelet] Writing kubelet environment file with flags to file "/var/lib/kubelet/kubeadm-flags.env"
[kubelet] Writing kubelet configuration to file "/var/lib/kubelet/config.yaml"
[preflight] Activating the kubelet service
############ certificates ##############
[certificates] Generated ca certificate and key.
[certificates] Generated apiserver certificate and key.
[certificates] apiserver serving cert is signed for DNS names [master kubernetes kubernetes.default kubernetes.default.svc kubernetes.default.svc.cluster.local] and IPs [10.96.0.1 192.168.2.19]
[certificates] Generated apiserver-kubelet-client certificate and key.
[certificates] Generated sa key and public key.
[certificates] Generated front-proxy-ca certificate and key.
[certificates] Generated front-proxy-client certificate and key.
[certificates] Generated etcd/ca certificate and key.
[certificates] Generated etcd/server certificate and key.
[certificates] etcd/server serving cert is signed for DNS names [master localhost] and IPs [127.0.0.1 ::1]
[certificates] Generated etcd/peer certificate and key.
[certificates] etcd/peer serving cert is signed for DNS names [master localhost] and IPs [192.168.2.19 127.0.0.1 ::1]
[certificates] Generated etcd/healthcheck-client certificate and key.
[certificates] Generated apiserver-etcd-client certificate and key.
[certificates] valid certificates and keys now exist in "/etc/kubernetes/pki"
############ config files ##############
[kubeconfig] Wrote KubeConfig file to disk: "/etc/kubernetes/admin.conf"
[kubeconfig] Wrote KubeConfig file to disk: "/etc/kubernetes/kubelet.conf"
[kubeconfig] Wrote KubeConfig file to disk: "/etc/kubernetes/controller-manager.conf"
[kubeconfig] Wrote KubeConfig file to disk: "/etc/kubernetes/scheduler.conf"
[controlplane] wrote Static Pod manifest for component kube-apiserver to "/etc/kubernetes/manifests/kube-apiserver.yaml"
[controlplane] wrote Static Pod manifest for component kube-controller-manager to "/etc/kubernetes/manifests/kube-controller-manager.yaml"
[controlplane] wrote Static Pod manifest for component kube-scheduler to "/etc/kubernetes/manifests/kube-scheduler.yaml"
[etcd] Wrote Static Pod manifest for a local etcd instance to "/etc/kubernetes/manifests/etcd.yaml"
[init] waiting for the kubelet to boot up the control plane as Static Pods from directory "/etc/kubernetes/manifests"
[init] this might take a minute or longer if the control plane images have to be pulled
[apiclient] All control plane components are healthy after 43.503085 seconds
[uploadconfig] storing the configuration used in ConfigMap "kubeadm-config" in the "kube-system" Namespace
[kubelet] Creating a ConfigMap "kubelet-config-1.11" in namespace kube-system with the configuration for the kubelets in the cluster
[markmaster] Marking the node master as master by adding the label "node-role.kubernetes.io/master=''"
[markmaster] Marking the node master as master by adding the taints [node-role.kubernetes.io/master:NoSchedule]
[patchnode] Uploading the CRI Socket information "/var/run/dockershim.sock" to the Node API object "master" as an annotation
[bootstraptoken] using token: 0tjvur.56emvwc4k4ghwenz
[bootstraptoken] configured RBAC rules to allow Node Bootstrap tokens to post CSRs in order for nodes to get long term certificate credentials
[bootstraptoken] configured RBAC rules to allow the csrapprover controller automatically approve CSRs from a Node Bootstrap Token
[bootstraptoken] configured RBAC rules to allow certificate rotation for all node client certificates in the cluster
[bootstraptoken] creating the "cluster-info" ConfigMap in the "kube-public" namespace
[addons] Applied essential addon: CoreDNS
# CoreDNS got its official name in v1.11
# DNS add-on evolution in earlier releases: SkyDNS ----> KubeDNS ----> CoreDNS
[addons] Applied essential addon: kube-proxy
# Runs as an add-on, self-hosted on Kubernetes; it dynamically generates ipvs (or iptables) rules for Service resources. From 1.11 IPVS is the default, with an automatic fallback to iptables where IPVS is unsupported.

Your Kubernetes master has initialized successfully!

To start using your cluster, you need to run the following as a regular user:

mkdir -p $HOME/.kube                                       # create the .kube directory in the home directory
sudo cp -i /etc/kubernetes/admin.conf $HOME/.kube/config   # admin.conf holds the credentials kubectl uses to authenticate to the cluster
sudo chown $(id -u):$(id -g) $HOME/.kube/config            # fix owner and group

You should now deploy a pod network to the cluster.
Run "kubectl apply -f [podnetwork].yaml" with one of the options listed at:
https://kubernetes.io/docs/concepts/cluster-administration/addons/

You can now join any number of machines by running the following on each node as root:
# Run this on any other node, as root, to join it to the cluster:
kubeadm join 192.168.2.19:6443 --token 0tjvur.56emvwc4k4ghwenz --discovery-token-ca-cert-hash sha256:f768ad7522e7bb5ccf9d0e0590c478b2a7161912949d1be3d5cd3f46ace7f4bf

Note: --discovery-token-ca-cert-hash lets the joining node verify the master's CA certificate against this hash.
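The sha256 value in the join command can be recomputed from the cluster CA at any time; this helper (the function name is ours) follows the pipeline from the kubeadm documentation:

```shell
# Recompute the --discovery-token-ca-cert-hash value from a CA certificate
ca_cert_hash() {
    openssl x509 -pubkey -in "$1" \
      | openssl rsa -pubin -outform der 2>/dev/null \
      | openssl dgst -sha256 -hex \
      | sed 's/^.* //'
}
# On the master:
# ca_cert_hash /etc/kubernetes/pki/ca.crt
```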

Follow the prompt and run:

mkdir -p $HOME/.kube
sudo cp -i /etc/kubernetes/admin.conf $HOME/.kube/config
sudo chown $(id -u):$(id -g) $HOME/.kube/config
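Once the kubeconfig is in place, a couple of read-only commands confirm the control plane is reachable (added as a sanity check):

```shell
kubectl cluster-info
kubectl get componentstatuses   # scheduler, controller-manager and etcd-0 should report Healthy
```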

Deploy the network add-on on the master

  CNI (Container Network Interface) plugin options:
    flannel        # does not support network policies
    calico
    canal
    kube-router

Note: with a proxy server in place the add-on deploys directly at this point; without one, the nodes have to download the images first.

nodes:

docker pull registry.cn-shenzhen.aliyuncs.com/cp_m/flannel:v0.10.0-amd64
docker pull mirrorgooglecontainers/pause:3.1
docker pull mirrorgooglecontainers/kube-proxy-amd64:v1.11.2
docker tag registry.cn-shenzhen.aliyuncs.com/cp_m/flannel:v0.10.0-amd64 quay.io/coreos/flannel:v0.10.0-amd64
docker tag docker.io/mirrorgooglecontainers/pause:3.1 k8s.gcr.io/pause:3.1
docker tag docker.io/mirrorgooglecontainers/kube-proxy-amd64:v1.11.2 k8s.gcr.io/kube-proxy-amd64:v1.11.2

master:

kubectl apply -f https://raw.githubusercontent.com/coreos/flannel/master/Documentation/kube-flannel.yml
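The flannel and pending CoreDNS pods take a moment to come up; their progress can be watched with (an added convenience, not from the original):

```shell
kubectl get pods -n kube-system -o wide   # wait until kube-flannel-* and coredns-* are Running
kubectl get nodes                         # the master turns Ready once the network is up
```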

6. Install Kubernetes on the nodes

Run the following command on every node.

yum install docker-ce kubelet-1.11.2 kubeadm-1.11.2 kubectl-1.11.2 -y

6.1. Start docker and kubelet on the nodes

On the master:

scp /usr/lib/systemd/system/docker.service node01:/usr/lib/systemd/system/docker.service
scp /usr/lib/systemd/system/docker.service node02:/usr/lib/systemd/system/docker.service
scp /etc/sysconfig/kubelet node01:/etc/sysconfig/
scp /etc/sysconfig/kubelet node02:/etc/sysconfig/

On the nodes:

systemctl start docker
systemctl enable docker kubelet

6.2. Join the master

This token was printed at the end of kubeadm init on the master:
kubeadm join 192.168.2.19:6443 --token 0tjvur.56emvwc4k4ghwenz --discovery-token-ca-cert-hash sha256:f768ad7522e7bb5ccf9d0e0590c478b2a7161912949d1be3d5cd3f46ace7f4bf

########### If the token has expired ##############

[root@master ~]# kubeadm token create

0wnvl7.kqxkle57l4adfr53

[root@node02 yum.repos.d]# kubeadm join 192.168.2.19:6443 --token 0wnvl7.kqxkle57l4adfr53 --discovery-token-unsafe-skip-ca-verification
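On kubeadm 1.11 the whole join command, hash included, can also be regenerated in one step, which avoids the unsafe skip-ca-verification flag:

```shell
kubeadm token list                         # inspect existing tokens and their TTLs
kubeadm token create --print-join-command  # prints a complete, ready-to-run join command
```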

7. Verification

Now run the following command on the master; both the master and the nodes should show a Ready status.

# kubectl get nodes
