Reference: https://blog.csdn.net/networken/article/details/84991940

# k8s Tooling Deployment Plan

# 1. Cluster Planning

| **Servers** | **Requirement** |
| ------------ | ---------------------------------------- |
| **Quantity** | >1 (modules are distributed across the servers actually provided) |
| **Specs** | 16 cores / 32 GB memory / 300 GB disk / 50 Mbps bandwidth |
| **Operating system** | CentOS Linux 7.2; the master node needs Internet access |
| **File system** | the 300 GB disk is mounted at /data |
| **Other** | the master node must have Internet access |

| Node | Hostname | IP address | OS |
| -------- | ------------- | ----------- | ---------- |
| master | centos01 | 192.168.0.1 | CentOS 7.2 |
| node1 | centos02 | 192.168.0.2 | CentOS 7.2 |
| node2 | centos03 | 192.168.0.3 | CentOS 7.2 |

# 2. Basic Environment Configuration

## 2.1 Hostname Configuration (optional)

**1) Change the hostnames**

**Run as root on 192.168.0.1:**

hostnamectl set-hostname VM_0_1_centos

**Run as root on 192.168.0.2:**

hostnamectl set-hostname VM_0_2_centos

**Run as root on 192.168.0.3:**

hostnamectl set-hostname VM_0_3_centos

**2) Add host mappings**

**Run as root on the target servers (192.168.0.1, 192.168.0.2, 192.168.0.3):**

vim /etc/hosts

192.168.0.1 VM_0_1_centos

192.168.0.2 VM_0_2_centos

192.168.0.3 VM_0_3_centos

## 2.2 Disable SELinux (optional)

**Run as root on the target servers (192.168.0.1, 192.168.0.2, 192.168.0.3):**

sed -i '/^SELINUX/s/=.*/=disabled/' /etc/selinux/config

setenforce 0

## 2.3 Increase the Maximum Number of Open Files

**Run as root on the target servers (192.168.0.1, 192.168.0.2, 192.168.0.3):**

vim /etc/security/limits.conf

\* soft nofile 65536

\* hard nofile 65536

## 2.4 Disable the Firewall (optional)

**Run as root on the target servers (192.168.0.1, 192.168.0.2, 192.168.0.3)**

systemctl disable firewalld.service

systemctl stop firewalld.service

systemctl status firewalld.service

## 2.5 Software Environment Initialization

**1) Initialize the servers**

groupadd -g 6000 apps
useradd -s /bin/sh -g apps -d /home/app app
passwd app
yum -y install gcc gcc-c++ make openssl-devel supervisor gmp-devel mpfr-devel libmpc-devel libaio numactl autoconf automake libtool libffi-devel

**2) Configure sudo**

**Run as root on the target servers (192.168.0.1, 192.168.0.2, 192.168.0.3)**

vim /etc/sudoers.d/app

app ALL=(ALL) ALL

app ALL=(ALL) NOPASSWD: ALL

Defaults !env_reset
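A syntax error in a sudoers drop-in can break sudo entirely, so it is worth checking the file before logging out; for example:

```bash
# Validate the sudoers drop-in without applying any changes.
visudo -c -f /etc/sudoers.d/app
```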

**3) Configure passwordless SSH login**

**a. Run as the app user on the target servers (192.168.0.1, 192.168.0.2, 192.168.0.3)**

su app

ssh-keygen -t rsa

cat ~/.ssh/id_rsa.pub >> /home/app/.ssh/authorized_keys

chmod 600 ~/.ssh/authorized_keys

**b. Merge the id_rsa.pub files**

**Run as the app user on 192.168.0.1**

scp ~/.ssh/authorized_keys app@192.168.0.2:/home/app/.ssh

Enter the app user's password when prompted

**Run as the app user on 192.168.0.2**

cat ~/.ssh/id_rsa.pub >> /home/app/.ssh/authorized_keys

scp ~/.ssh/authorized_keys app@192.168.0.3:/home/app/.ssh

**Run as the app user on 192.168.0.3**

cat ~/.ssh/id_rsa.pub >> /home/app/.ssh/authorized_keys

scp ~/.ssh/authorized_keys app@192.168.0.1:/home/app/.ssh

scp ~/.ssh/authorized_keys app@192.168.0.2:/home/app/.ssh

**c. Test SSH as the app user on each target server (192.168.0.1, 192.168.0.2, 192.168.0.3); an ssh-copy-id alternative is sketched below**

ssh app@192.168.0.1

ssh app@192.168.0.2

ssh app@192.168.0.3
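As an alternative to merging authorized_keys by hand (steps a and b above), the keys can be distributed with ssh-copy-id; a sketch, run as the app user on each of the three servers after ssh-keygen:

```bash
# Hypothetical alternative: push the local public key to every node,
# entering the app password once per target host.
for host in 192.168.0.1 192.168.0.2 192.168.0.3; do
  ssh-copy-id -i ~/.ssh/id_rsa.pub app@$host
done
```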

## 2.6 sysctl Parameter Configuration

**Run as root on 192.168.0.1, 192.168.0.2, 192.168.0.3**

**vim /etc/sysctl.conf**

net.bridge.bridge-nf-call-iptables=1

net.bridge.bridge-nf-call-ip6tables=1

net.ipv4.ip_forward=1

net.ipv4.tcp_tw_recycle=0

vm.swappiness=0

vm.overcommit_memory=1

vm.panic_on_oom=0

fs.inotify.max_user_watches=89100

fs.file-max=52706963

fs.nr_open=52706963

net.ipv6.conf.all.disable_ipv6=1

net.netfilter.nf_conntrack_max=2310720

**# Apply the settings**

sysctl -p
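If sysctl -p complains that the net.bridge.bridge-nf-call-* keys do not exist, the br_netfilter kernel module is probably not loaded yet; a sketch for loading it now and on every boot (the modules-load.d path assumes a standard CentOS 7 setup):

```bash
# Load the bridge netfilter module and make it persistent across reboots.
modprobe br_netfilter
echo br_netfilter > /etc/modules-load.d/br_netfilter.conf
sysctl -p
```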

## 2.7 ntpd Configuration

**1) Server configuration**

**Run as root on 192.168.0.1**

yum install -y ntp ntpdate

**Edit /etc/ntp.conf**

**Comment out all existing server and restrict lines (a sed sketch follows the block below)**

**Add:**

server 0.cn.pool.ntp.org

server 0.asia.pool.ntp.org

server 3.asia.pool.ntp.org

restrict 0.cn.pool.ntp.org nomodify notrap noquery

restrict 0.asia.pool.ntp.org nomodify notrap noquery

restrict 3.asia.pool.ntp.org nomodify notrap noquery

server 127.127.1.0 # local clock

fudge 127.127.1.0 stratum 10
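One possible way to comment out the stock server and restrict entries before adding the lines above (a sketch against the default CentOS 7 ntp.conf):

```bash
# Back up the original file, then comment out every existing server/restrict line.
cp /etc/ntp.conf /etc/ntp.conf.bak
sed -i -e 's/^server /#server /' -e 's/^restrict /#restrict /' /etc/ntp.conf
```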

systemctl enable ntpd

systemctl disable chronyd

systemctl restart ntpd

**Check the NTP peers on the network**

ntpq -p

**2) Client configuration**

**Run as root on 192.168.0.2 and 192.168.0.3**

yum install -y ntp ntpdate

**Add to /etc/ntp.conf**

server 192.168.0.1 prefer

systemctl enable ntpd

systemctl disable chronyd

systemctl restart ntpd

**Synchronize**

ntpdate -u 192.168.0.1

Run hwclock --systohc to write the system time to the hardware clock (BIOS).


# 3. Configure a Local CentOS Repository

**Run as root on 192.168.0.1 (Internet access required)**

**1) Install plugins**

yum install -y yum-plugin-downloadonly createrepo rsync

**2) Create the directory**

mkdir -p /data/mirrors/centos

**3) Download or upload packages**

yum install nginx -y --downloadonly --downloaddir=/data/mirrors/centos
Alternatively, download the RPM packages yourself and place them in /data/mirrors/centos (a sketch follows).
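For manual downloads, yumdownloader (from yum-utils) can fetch a package together with its dependencies; a sketch, using nginx as an example:

```bash
# Fetch an RPM and its dependencies into the mirror directory without installing it.
yum install -y yum-utils
yumdownloader --resolve --destdir /data/mirrors/centos nginx
```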

**4) Create the repo metadata**

createrepo /data/mirrors/centos

**5) Install nginx**

yum -y install nginx
cd /etc/nginx/conf.d

**vim mirrors.conf**

server {
    listen 88;
    server_name localhost;
    root /data/mirrors/;
    location / {
        autoindex on;
        autoindex_exact_size off;
        autoindex_localtime on;
    }
}
**Start the service**
nginx -t
systemctl enable nginx
systemctl start nginx
nginx -s reload   # reload after later configuration changes

**6) Configure the repo (run as root on 192.168.0.1, 192.168.0.2, 192.168.0.3)**

**vim /etc/yum.repos.d/mirrors.repo**
[yumbase]
name=yum-local-repository
baseurl=http://192.168.0.1:88/centos/
enabled=1
gpgcheck=0
**# Verify**
yum clean all && yum makecache
yum repoinfo yumbase

**7) Verify (on any machine)**

yum -y install <package-name>-<version>

**8) Sync from the Tsinghua University mirror (run as root on 192.168.0.1)**
#!/bin/bash
/usr/bin/rsync -avz rsync://mirrors.tuna.tsinghua.edu.cn/centos/7/centosplus/x86_64/Packages/ /data/mirrors/centos
/usr/bin/rsync -avz rsync://mirrors.tuna.tsinghua.edu.cn/centos/7/extras/x86_64/Packages/ /data/mirrors/centos
/usr/bin/rsync -avz rsync://mirrors.tuna.tsinghua.edu.cn/centos/7/os/x86_64/Packages/ /data/mirrors/centos
/usr/bin/rsync -avz rsync://mirrors.tuna.tsinghua.edu.cn/centos/7/updates/x86_64/Packages/ /data/mirrors/centos
/usr/bin/rsync -avz rsync://mirrors.tuna.tsinghua.edu.cn/epel/7Server/x86_64/Packages/ /data/mirrors/centos
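After each sync the repository metadata must be regenerated; a possible cron setup, assuming the script above is saved as /data/mirrors/sync-centos.sh (a hypothetical path), is:

```bash
# Rebuild the repo metadata after syncing.
createrepo --update /data/mirrors/centos

# Example /etc/crontab entry: run the sync nightly at 02:00 and refresh the metadata.
# 0 2 * * * root /bin/bash /data/mirrors/sync-centos.sh && /usr/bin/createrepo --update /data/mirrors/centos
```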

**9) Sync the Alibaba Cloud Kubernetes packages (run as root on 192.168.0.1)**

These need to be downloaded manually from https://mirrors.aliyun.com/kubernetes/yum/repos/kubernetes-el7-x86_64/Packages and copied to /data/mirrors/centos.
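If downloading by hand is impractical, the Alibaba Cloud Kubernetes repo can also be mirrored with reposync (from yum-utils); a sketch in which the temporary repo id and staging directory are assumptions:

```bash
# Define a disabled repo pointing at the Aliyun Kubernetes mirror, sync it,
# copy the RPMs into the local mirror, and rebuild the metadata.
cat > /etc/yum.repos.d/kubernetes-aliyun.repo <<'EOF'
[kubernetes-aliyun]
name=Kubernetes (Aliyun mirror)
baseurl=https://mirrors.aliyun.com/kubernetes/yum/repos/kubernetes-el7-x86_64/
enabled=0
gpgcheck=0
EOF
reposync -r kubernetes-aliyun -p /tmp/k8s-mirror
find /tmp/k8s-mirror -name '*.rpm' -exec cp {} /data/mirrors/centos/ \;
createrepo --update /data/mirrors/centos
```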

# 4. Install Docker

**Run as root on 192.168.0.1, 192.168.0.2, 192.168.0.3**

wget https://download.docker.com/linux/centos/7/x86_64/stable/Packages/containerd.io-1.2.0-3.el7.x86_64.rpm

wget https://download.docker.com/linux/centos/7/x86_64/stable/Packages/docker-ce-cli-18.09.0-3.el7.x86_64.rpm

wget https://download.docker.com/linux/centos/7/x86_64/stable/Packages/docker-ce-18.09.0-3.el7.x86_64.rpm

wget https://download.docker.com/linux/centos/7/x86_64/stable/Packages/docker-ce-selinux-17.03.3.ce-1.el7.noarch.rpm

rpm -ivh containerd.io-1.2.0-3.el7.x86_64.rpm

rpm -ivh docker-ce-selinux-17.03.3.ce-1.el7.noarch.rpm

rpm -ivh docker-ce-cli-18.09.0-3.el7.x86_64.rpm

rpm -ivh docker-ce-18.09.0-3.el7.x86_64.rpm

systemctl enable docker

usermod -aG docker app

systemctl start docker
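To confirm the installation works (the app user must log in again before the docker group membership takes effect), for example:

```bash
systemctl status docker      # the service should be active (running)
docker info                  # run as root, or as app after re-login
docker run --rm hello-world  # optional smoke test; needs Internet or mirror access
```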

# 5. Configure the Private Registry

**1) Configure the private registry**

**Run as the app user on 192.168.0.1 (Internet access required)**

docker pull registry.cn-beijing.aliyuncs.com/zhoujun/pause:3.1

docker tag registry.cn-beijing.aliyuncs.com/zhoujun/pause:3.1 k8s.gcr.io/pause:3.1

docker pull registry

docker run -d -v /data/registry:/var/lib/registry -p 5000:5000 --restart=always --privileged=true --name registry registry:latest

**Run as root on 192.168.0.1, 192.168.0.2, 192.168.0.3**

vim /etc/docker/daemon.json and add:

{
  "registry-mirrors": ["https://njrds9qc.mirror.aliyuncs.com"],
  "insecure-registries": ["192.168.0.1:5000"]
}

systemctl daemon-reload

systemctl restart docker

docker login 192.168.0.1:5000   # enter the username and password: wb / 123

cat ~/.docker/config.json   # check the stored credentials

**Create the secret**

/data/projects/common/kubernetes/bin/kubectl create secret docker-registry dockercfg-192 --docker-server=192.168.0.1:5000 --docker-username=wb --docker-password=123

**Check the created dockercfg-192 secret**

/data/projects/common/kubernetes/bin/kubectl get secret |grep dockercfg-192
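For pods to actually pull from 192.168.0.1:5000 with these credentials, the secret has to be referenced via imagePullSecrets; one option (an addition, not part of the original steps) is to attach it to the default service account:

```bash
# Pods in the default namespace then use dockercfg-192 automatically when pulling images.
/data/projects/common/kubernetes/bin/kubectl patch serviceaccount default \
  -p '{"imagePullSecrets": [{"name": "dockercfg-192"}]}'
```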

**2) Push images to the private registry**

**Run as the app user on 192.168.0.1**

**a. Retag**

docker tag f32a97de94e1 192.168.0.1:5000/registry:latest

docker tag k8s.gcr.io/pause:3.1 192.168.0.1:5000/k8s.gcr.io/pause:3.1

**b. Push**

docker push 192.168.0.1:5000/registry:latest

docker push 192.168.0.1:5000/k8s.gcr.io/pause:3.1

**c. Pull**

**Run as the app user on 192.168.0.2 and 192.168.0.3**

docker pull 192.168.0.1:5000/registry:latest

docker tag f32a97de94e1 registry:latest

docker run -d -v /data/registry:/var/lib/registry -p 5000:5000 --restart=always --privileged=true --name registry registry:latest

docker pull 192.168.0.1:5000/k8s.gcr.io/pause:3.1

docker tag 192.168.0.1:5000/k8s.gcr.io/pause:3.1 k8s.gcr.io/pause:3.1

# 6. Install the k8s Management Tools

**Install as root on 192.168.0.1, 192.168.0.2, 192.168.0.3**

yum -y install kubelet-1.16.1 kubeadm-1.16.1 kubectl-1.16.1 --disableexcludes=kubernetes

systemctl daemon-reload

systemctl enable kubelet

# 7. Deploy the k8s Components

**1) List the required images (run as root on the master node 192.168.0.1)**

#kubeadm config images list

k8s.gcr.io/kube-apiserver:v1.16.1
k8s.gcr.io/kube-controller-manager:v1.16.1
k8s.gcr.io/kube-scheduler:v1.16.1
k8s.gcr.io/kube-proxy:v1.16.1
k8s.gcr.io/pause:3.1
k8s.gcr.io/etcd:3.3.10
k8s.gcr.io/coredns:1.3.1

**2) Download the images (run as root on the master node 192.168.0.1)**

**cat kubeadm.sh**

#!/bin/bash

set -e

KUBE_VERSION=v1.16.1
KUBE_PAUSE_VERSION=3.1
ETCD_VERSION=3.3.10
CORE_DNS_VERSION=1.3.1

GCR_URL=k8s.gcr.io
ALIYUN_URL=registry.cn-hangzhou.aliyuncs.com/google_containers

images=(kube-proxy:${KUBE_VERSION}
kube-scheduler:${KUBE_VERSION}
kube-controller-manager:${KUBE_VERSION}
kube-apiserver:${KUBE_VERSION}
pause:${KUBE_PAUSE_VERSION}
etcd:${ETCD_VERSION}
coredns:${CORE_DNS_VERSION})

for imageName in ${images[@]} ; do
docker pull $ALIYUN_URL/$imageName
docker tag $ALIYUN_URL/$imageName $GCR_URL/$imageName
docker rmi $ALIYUN_URL/$imageName
done

**Run the script**

bash kubeadm.sh

**3) Initialize (on the master node)**

kubeadm init \
--apiserver-advertise-address 192.168.0.1 \
--kubernetes-version=v1.16.1 \
--image-repository registry.aliyuncs.com/google_containers \
--pod-network-cidr=10.244.0.0/16

If the cluster is re-initialized, nohup /usr/local/bin/tiller & has to be run again (tiller is installed in Section 8).

**If output like the following is returned, the initialization succeeded**

kubeadm join 192.168.0.1:6443 --token yksijn.pggvc1rweyk7ryv3 \
--discovery-token-ca-cert-hash sha256:7aee53faa90a6ef1ed6a72b5ef7352843bdb0b4b93c76db786a04805ef47607b

**# Join the nodes (run on all worker nodes)**

kubeadm join 192.168.0.1:6443 --token yksijn.pggvc1rweyk7ryv3 \
--discovery-token-ca-cert-hash sha256:7aee53faa90a6ef1ed6a72b5ef7352843bdb0b4b93c76db786a04805ef47607b --ignore-preflight-errors=all

**# Remove the master taint so workloads can also be scheduled on the master node**

kubectl taint nodes --all node-role.kubernetes.io/master-

**# Export the kubeconfig (fixes the connection refused error on localhost:8080)**

Add the following to /etc/profile, then run source /etc/profile:

export KUBECONFIG=/etc/kubernetes/admin.conf   # on the master node

export KUBECONFIG=/etc/kubernetes/kubelet.conf   # on worker nodes, which only have kubelet.conf

**4) Install the flannel plugin (all nodes)**

mkdir -p $HOME/.kube
sudo cp -i /etc/kubernetes/admin.conf $HOME/.kube/config
sudo chown $(id -u):$(id -g) $HOME/.kube/config

**# Download the images**

**vim flanneld.sh**

#!/bin/bash

set -e

FLANNEL_VERSION=v0.11.0

QUAY_URL=quay.io/coreos
QINIU_URL=quay-mirror.qiniu.com/coreos

images=(flannel:${FLANNEL_VERSION}-amd64
flannel:${FLANNEL_VERSION}-arm64
flannel:${FLANNEL_VERSION}-arm
flannel:${FLANNEL_VERSION}-ppc64le
flannel:${FLANNEL_VERSION}-s390x)

for imageName in ${images[@]} ; do
docker pull $QINIU_URL/$imageName
docker tag $QINIU_URL/$imageName $QUAY_URL/$imageName
docker rmi $QINIU_URL/$imageName
done

**Run the script**

bash flanneld.sh

**# Create the flannel resources**

git clone https://github.com/coreos/flannel.git

cd flannel/Documentation

kubectl apply -f kube-flannel.yml

**# Verify the node installation status**

kubectl get componentstatus

kubectl get node

**Push the k8s component images to the private registry** (the per-image commands below can also be wrapped in a loop; see the sketch after the master-side push list)

**# On the master node**

docker tag k8s.gcr.io/kube-proxy:v1.16.1 192.168.0.1:5000/k8s.gcr.io/kube-proxy:v1.16.1

docker tag k8s.gcr.io/kube-controller-manager:v1.16.1 192.168.0.1:5000/k8s.gcr.io/kube-controller-manager:v1.16.1

docker tag k8s.gcr.io/kube-apiserver:v1.16.1 192.168.0.1:5000/k8s.gcr.io/kube-apiserver:v1.16.1

docker tag k8s.gcr.io/kube-scheduler:v1.16.1 192.168.0.1:5000/k8s.gcr.io/kube-scheduler:v1.16.1

docker tag k8s.gcr.io/coredns:1.3.1 192.168.0.1:5000/k8s.gcr.io/coredns:1.3.1

docker tag k8s.gcr.io/etcd:3.3.10 192.168.0.1:5000/k8s.gcr.io/etcd:3.3.10

docker tag k8s.gcr.io/pause:3.1 192.168.0.1:5000/k8s.gcr.io/pause:3.1

docker tag quay.io/coreos/flannel:v0.11.0-s390x 192.168.0.1:5000/quay.io/coreos/flannel:v0.11.0-s390x

docker tag quay.io/coreos/flannel:v0.11.0-ppc64le 192.168.0.1:5000/quay.io/coreos/flannel:v0.11.0-ppc64le

docker tag quay.io/coreos/flannel:v0.11.0-arm64 192.168.0.1:5000/quay.io/coreos/flannel:v0.11.0-arm64

docker tag quay.io/coreos/flannel:v0.11.0-arm 192.168.0.1:5000/quay.io/coreos/flannel:v0.11.0-arm

docker tag quay.io/coreos/flannel:v0.11.0-amd64 192.168.0.1:5000/quay.io/coreos/flannel:v0.11.0-amd64

docker push 192.168.0.1:5000/k8s.gcr.io/kube-proxy:v1.16.1

docker push 192.168.0.1:5000/k8s.gcr.io/kube-controller-manager:v1.16.1

docker push 192.168.0.1:5000/k8s.gcr.io/kube-apiserver:v1.16.1

docker push 192.168.0.1:5000/k8s.gcr.io/kube-scheduler:v1.16.1

docker push 192.168.0.1:5000/k8s.gcr.io/coredns:1.3.1

docker push 192.168.0.1:5000/k8s.gcr.io/etcd:3.3.10

docker push 192.168.0.1:5000/k8s.gcr.io/pause:3.1

docker push 192.168.0.1:5000/quay.io/coreos/flannel:v0.11.0-s390x

docker push 192.168.0.1:5000/quay.io/coreos/flannel:v0.11.0-ppc64le

docker push 192.168.0.1:5000/quay.io/coreos/flannel:v0.11.0-arm64

docker push 192.168.0.1:5000/quay.io/coreos/flannel:v0.11.0-arm

docker push 192.168.0.1:5000/quay.io/coreos/flannel:v0.11.0-amd64
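The per-image tag and push commands above are repetitive; a possible loop, assuming the same registry address and image list (only the amd64 flannel image is shown here — add the other architectures if they are needed), is:

```bash
#!/bin/bash
# Hypothetical helper: retag the locally present images and push them
# to the private registry in one pass.
REGISTRY=192.168.0.1:5000
images=(k8s.gcr.io/kube-proxy:v1.16.1
k8s.gcr.io/kube-controller-manager:v1.16.1
k8s.gcr.io/kube-apiserver:v1.16.1
k8s.gcr.io/kube-scheduler:v1.16.1
k8s.gcr.io/coredns:1.3.1
k8s.gcr.io/etcd:3.3.10
k8s.gcr.io/pause:3.1
quay.io/coreos/flannel:v0.11.0-amd64)
for img in "${images[@]}"; do
  docker tag "$img" "$REGISTRY/$img"
  docker push "$REGISTRY/$img"
done
```

The same image list can be reused on the worker nodes by replacing the tag/push pair with pull and tag.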

**On the worker nodes**

docker pull 192.168.0.1:5000/k8s.gcr.io/kube-proxy:v1.16.1

docker pull 192.168.0.1:5000/k8s.gcr.io/kube-controller-manager:v1.16.1

docker pull 192.168.0.1:5000/k8s.gcr.io/kube-apiserver:v1.16.1

docker pull 192.168.0.1:5000/k8s.gcr.io/kube-scheduler:v1.16.1

docker pull 192.168.0.1:5000/k8s.gcr.io/coredns:1.3.1

docker pull 192.168.0.1:5000/k8s.gcr.io/etcd:3.3.10

docker pull 192.168.0.1:5000/k8s.gcr.io/pause:3.1

docker pull 192.168.0.1:5000/quay.io/coreos/flannel:v0.11.0-s390x

docker pull 192.168.0.1:5000/quay.io/coreos/flannel:v0.11.0-ppc64le

docker pull 192.168.0.1:5000/quay.io/coreos/flannel:v0.11.0-arm64

docker pull 192.168.0.1:5000/quay.io/coreos/flannel:v0.11.0-arm

docker pull 192.168.0.1:5000/quay.io/coreos/flannel:v0.11.0-amd64

docker tag 192.168.0.1:5000/k8s.gcr.io/kube-proxy:v1.16.1 k8s.gcr.io/kube-proxy:v1.16.1
docker tag 192.168.0.1:5000/k8s.gcr.io/kube-controller-manager:v1.16.1 k8s.gcr.io/kube-controller-manager:v1.16.1
docker tag 192.168.0.1:5000/k8s.gcr.io/kube-apiserver:v1.16.1 k8s.gcr.io/kube-apiserver:v1.16.1
docker tag 192.168.0.1:5000/k8s.gcr.io/kube-scheduler:v1.16.1 k8s.gcr.io/kube-scheduler:v1.16.1
docker tag 192.168.0.1:5000/k8s.gcr.io/coredns:1.3.1 k8s.gcr.io/coredns:1.3.1
docker tag 192.168.0.1:5000/k8s.gcr.io/etcd:3.3.10 k8s.gcr.io/etcd:3.3.10
docker tag 192.168.0.1:5000/k8s.gcr.io/pause:3.1 k8s.gcr.io/pause:3.1
docker tag 192.168.0.1:5000/quay.io/coreos/flannel:v0.11.0-s390x quay.io/coreos/flannel:v0.11.0-s390x
docker tag 192.168.0.1:5000/quay.io/coreos/flannel:v0.11.0-ppc64le quay.io/coreos/flannel:v0.11.0-ppc64le
docker tag 192.168.0.1:5000/quay.io/coreos/flannel:v0.11.0-arm64 quay.io/coreos/flannel:v0.11.0-arm64
docker tag 192.168.0.1:5000/quay.io/coreos/flannel:v0.11.0-arm quay.io/coreos/flannel:v0.11.0-arm
docker tag 192.168.0.1:5000/quay.io/coreos/flannel:v0.11.0-amd64 quay.io/coreos/flannel:v0.11.0-amd64

# 8. Install Helm

**Install on all nodes**

wget https://get.helm.sh/helm-v2.14.3-linux-amd64.tar.gz

tar xvf helm-v2.14.3-linux-amd64.tar.gz

sudo cp linux-amd64/helm linux-amd64/tiller /usr/local/bin

sudo yum install -y socat

sudo yum install -y *rhsm*

sudo yum -y install bridge*

sudo nohup /usr/local/bin/tiller &

sudo sed -i '$a\export HELM_HOST=localhost:44134' /etc/profile

source /etc/profile

helm version
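If helm version only reports the client side, initializing the local helm home and re-checking against the locally running tiller may help; a sketch for helm v2:

```bash
# Set up ~/.helm without deploying tiller into the cluster
# (tiller is already running locally on port 44134).
helm init --client-only
helm version   # should now print both the Client and Server versions
```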
