Reference: https://blog.csdn.net/networken/article/details/84991940

# k8s Tooling Deployment Plan

# 1. Cluster Planning

| **Server** | **Requirement** |
| ------------ | ---------------------------------------- |
| **Count** | >1 (modules are distributed across the servers actually provided) |
| **Specs** | 16 cores / 32 GB memory / 300 GB disk / 50 Mbps bandwidth |
| **OS** | CentOS Linux 7.2 |
| **Filesystem** | the 300 GB disk is mounted under /data |
| **Other** | the master node must have Internet access |

| Node | Hostname | IP Address | OS |
| -------- | ------------- | ----------- | ---------- |
| master | centos01 | 192.168.0.1 | CentOS 7.2 |
| node1 | centos02 | 192.168.0.2 | CentOS 7.2 |
| node2 | centos03 | 192.168.0.3 | CentOS 7.2 |

# 2. Basic Environment Configuration

## 2.1 Hostname Configuration (optional)

**1) Set the hostname**

**Run as root on 192.168.0.1:**

hostnamectl set-hostname VM_0_1_centos

**Run as root on 192.168.0.2:**

hostnamectl set-hostname VM_0_2_centos

**Run as root on 192.168.0.3:**

hostnamectl set-hostname VM_0_3_centos

**2) Add host mappings**

**Run as root on the target servers (192.168.0.1, 192.168.0.2, 192.168.0.3):**

vim /etc/hosts

192.168.0.1 VM_0_1_centos

192.168.0.2 VM_0_2_centos

192.168.0.3 VM_0_3_centos
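
Equivalently, the mappings can be appended in one step on each server; a minimal sketch (adjust the hostnames if yours differ):

cat >> /etc/hosts <<'EOF'
192.168.0.1 VM_0_1_centos
192.168.0.2 VM_0_2_centos
192.168.0.3 VM_0_3_centos
EOF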

## 2.2 Disable SELinux (optional)

**Run as root on the target servers (192.168.0.1, 192.168.0.2, 192.168.0.3):**

sed -i 's/^SELINUX=.*/SELINUX=disabled/' /etc/selinux/config

setenforce 0

## 2.3 Raise the Maximum Number of Open Files

**Run as root on the target servers (192.168.0.1, 192.168.0.2, 192.168.0.3):**

vim /etc/security/limits.conf

* soft nofile 65536

* hard nofile 65536

## 2.4 Disable the Firewall (optional)

**Run as root on the target servers (192.168.0.1, 192.168.0.2, 192.168.0.3)**

systemctl disable firewalld.service

systemctl stop firewalld.service

systemctl status firewalld.service

## 2.5 Software Environment Initialization

**1) Initialize the servers**

groupadd -g 6000 apps
useradd -s /bin/sh -g apps -d /home/app app
passwd app
yum -y install gcc gcc-c++ make openssl-devel supervisor gmp-devel mpfr-devel libmpc-devel libaio numactl autoconf automake libtool libffi-devel

**2) Configure sudo**

**Run as root on the target servers (192.168.0.1, 192.168.0.2, 192.168.0.3)**

vim /etc/sudoers.d/app

app ALL=(ALL) ALL

app ALL=(ALL) NOPASSWD: ALL

Defaults !env_reset
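
The syntax of the new sudoers fragment can be checked before it is relied on:

visudo -cf /etc/sudoers.d/app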

**3) Configure passwordless SSH login**

**a. Run as the app user on the target servers (192.168.0.1, 192.168.0.2, 192.168.0.3)**

su app

ssh-keygen -t rsa

cat ~/.ssh/id_rsa.pub >> /home/app/.ssh/authorized_keys

chmod 600 ~/.ssh/authorized_keys

**b. Merge the id_rsa.pub files**

**Run as the app user on 192.168.0.1**

scp ~/.ssh/authorized_keys app@192.168.0.2:/home/app/.ssh

Enter the app user's password when prompted.

**Run as the app user on 192.168.0.2**

cat ~/.ssh/id_rsa.pub >> /home/app/.ssh/authorized_keys

scp ~/.ssh/authorized_keys app@192.168.0.3:/home/app/.ssh

**Run as the app user on 192.168.0.3**

cat ~/.ssh/id_rsa.pub >> /home/app/.ssh/authorized_keys

scp ~/.ssh/authorized_keys app@192.168.0.1:/home/app/.ssh

scp ~/.ssh/authorized_keys app@192.168.0.2:/home/app/.ssh

**c. Test SSH as the app user on the target servers (192.168.0.1, 192.168.0.2, 192.168.0.3)**

ssh app@192.168.0.1

ssh app@192.168.0.2

ssh app@192.168.0.3
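
To confirm that every host reaches every other host without a password, a small loop can be run from each server; a minimal sketch assuming the app user and keys above are in place:

for ip in 192.168.0.1 192.168.0.2 192.168.0.3; do
  ssh -o BatchMode=yes app@$ip hostname   # BatchMode=yes fails instead of prompting, so a missing key shows up as an error
done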

## 2.6 sysctl Parameters

**Run as root on 192.168.0.1, 192.168.0.2, and 192.168.0.3**

**vim /etc/sysctl.conf**

net.bridge.bridge-nf-call-iptables=1

net.bridge.bridge-nf-call-ip6tables=1

net.ipv4.ip_forward=1

net.ipv4.tcp_tw_recycle=0

vm.swappiness=0

vm.overcommit_memory=1

vm.panic_on_oom=0

fs.inotify.max_user_watches=89100

fs.file-max=52706963

fs.nr_open=52706963

net.ipv6.conf.all.disable_ipv6=1

net.netfilter.nf_conntrack_max=2310720

**# Apply the settings**

sysctl -p
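
The net.bridge.bridge-nf-call-* keys only exist while the br_netfilter kernel module is loaded, so on a fresh CentOS 7 install sysctl -p may complain about them; a minimal sketch to load the module now and on every boot:

modprobe br_netfilter
echo br_netfilter > /etc/modules-load.d/br_netfilter.conf
sysctl -p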

## 2.7 ntpd Configuration

**1) Server-side configuration**

**Run as root on 192.168.0.1**

yum install -y ntp ntpdate

**Edit /etc/ntp.conf**

**Comment out all existing server and restrict lines**

**Add:**

server 0.cn.pool.ntp.org

server 0.asia.pool.ntp.org

server 3.asia.pool.ntp.org

restrict 0.cn.pool.ntp.org nomodify notrap noquery

restrict 0.asia.pool.ntp.org nomodify notrap noquery

restrict 3.asia.pool.ntp.org nomodify notrap noquery

server 127.127.1.0 # local clock

fudge 127.127.1.0 stratum 10

systemctl enable ntpd

systemctl disable chronyd

systemctl restart ntpd

**Check the NTP peers**

ntpq -p

**2) Client-side configuration**

**Run as root on 192.168.0.2 and 192.168.0.3**

yum install -y ntp ntpdate

**Add to /etc/ntp.conf**

server 192.168.0.1 prefer

systemctl enable ntpd

systemctl disable chronyd

systemctl restart ntpd

**Synchronize**

ntpdate -u 192.168.0.1

Run hwclock --systohc to write the system time to the hardware clock (BIOS).
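
On the clients, synchronization against 192.168.0.1 can be verified once ntpd has been running for a few minutes (ntpstat ships with the ntp package on CentOS 7):

ntpq -p    # the 192.168.0.1 line is prefixed with '*' once it is the selected source
ntpstat    # reports "synchronised to NTP server (192.168.0.1)" when in sync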


# 3. Configure a Local CentOS Repository

**Run as root on 192.168.0.1 (Internet access required)**

**1) Install the plugins**

yum install -y yum-plugin-downloadonly createrepo rsync

**2) Create the directory**

mkdir -p /data/mirrors/centos

**3) Download or upload packages**

yum install nginx -y --downloadonly --downloaddir=/data/mirrors/centos
Alternatively, download the RPM packages yourself and place them in /data/mirrors/centos.

**4) Create the repo metadata**

createrepo /data/mirrors/centos

**5) Install nginx**

yum -y install nginx
cd /etc/nginx/conf.d

**vim mirrors.conf**

server {
    listen 88;
    server_name localhost;
    root /data/mirrors/;
    location / {
        autoindex on;
        autoindex_exact_size off;
        autoindex_localtime on;
    }
}
**Start the service**
nginx
nginx -t
nginx -s reload
systemctl enable nginx
systemctl start nginx
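
A quick check from any node that the mirror is being served on port 88:

curl -s http://192.168.0.1:88/centos/ | head
# an HTML directory listing of the RPMs confirms the autoindex configuration works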

**6) Configure the repo (run as root on 192.168.0.1, 192.168.0.2, and 192.168.0.3)**

**vim /etc/yum.repos.d/mirrors.repo**
[yumbase]
name=yum-local-repository
baseurl=http://192.168.0.1:88/centos/
enabled=1
gpgcheck=0
# Verify
yum clean all && yum makecache
yum repoinfo yumbase

**7) Verify (on any machine)**

yum -y install <package-name>-<version>

**8) Sync from the Tsinghua University mirror (run as root on 192.168.0.1)**
#!/bin/bash
/usr/bin/rsync -avz rsync://mirrors.tuna.tsinghua.edu.cn/centos/7/centosplus/x86_64/Packages/ /data/mirrors/centos
/usr/bin/rsync -avz rsync://mirrors.tuna.tsinghua.edu.cn/centos/7/extras/x86_64/Packages/ /data/mirrors/centos
/usr/bin/rsync -avz rsync://mirrors.tuna.tsinghua.edu.cn/centos/7/os/x86_64/Packages/ /data/mirrors/centos
/usr/bin/rsync -avz rsync://mirrors.tuna.tsinghua.edu.cn/centos/7/updates/x86_64/Packages/ /data/mirrors/centos
/usr/bin/rsync -avz rsync://mirrors.tuna.tsinghua.edu.cn/epel/7Server/x86_64/Packages/ /data/mirrors/centos
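
To keep the mirror current, the script above can be saved (for example as /data/mirrors/sync.sh, a path assumed here) and scheduled with cron; re-running createrepo --update afterwards lets yum see the newly synced packages:

chmod +x /data/mirrors/sync.sh
crontab -l 2>/dev/null | { cat; echo "0 3 * * * /data/mirrors/sync.sh && /usr/bin/createrepo --update /data/mirrors/centos"; } | crontab -
# runs the sync nightly at 03:00 and refreshes the repo metadata afterwards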

**9) Sync the Kubernetes packages from the Aliyun mirror (run as root on 192.168.0.1)**

Download the packages manually from https://mirrors.aliyun.com/kubernetes/yum/repos/kubernetes-el7-x86_64/Packages and copy them into /data/mirrors/centos, then re-run createrepo so the metadata includes them.
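
Note that the install command in section 6 uses --disableexcludes=kubernetes, which assumes a yum repo whose id is kubernetes. If the Kubernetes RPMs are served from the local mirror above, a minimal sketch of such a repo file (configure on all three servers; the id and baseurl are assumptions based on this mirror layout):

vim /etc/yum.repos.d/kubernetes.repo
[kubernetes]
name=kubernetes-local-repository
baseurl=http://192.168.0.1:88/centos/
enabled=1
gpgcheck=0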

# 4. Install Docker

**Run as root on 192.168.0.1, 192.168.0.2, and 192.168.0.3**

wget https://download.docker.com/linux/centos/7/x86_64/stable/Packages/containerd.io-1.2.0-3.el7.x86_64.rpm

wget https://download.docker.com/linux/centos/7/x86_64/stable/Packages/docker-ce-cli-18.09.0-3.el7.x86_64.rpm

wget https://download.docker.com/linux/centos/7/x86_64/stable/Packages/docker-ce-18.09.0-3.el7.x86_64.rpm

wget https://download.docker.com/linux/centos/7/x86_64/stable/Packages/docker-ce-selinux-17.03.3.ce-1.el7.noarch.rpm

rpm -ivh containerd.io-1.2.0-3.el7.x86_64.rpm

rpm -ivh docker-ce-selinux-17.03.3.ce-1.el7.noarch.rpm

rpm -ivh docker-ce-cli-18.09.0-3.el7.x86_64.rpm

rpm -ivh docker-ce-18.09.0-3.el7.x86_64.rpm

systemctl enable docker

usermod -aG docker app

systemctl start docker
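
A quick sanity check that the daemon is up and that the app user can reach it (log in again as app so the new docker group membership takes effect):

su - app
docker info | grep -i 'server version'
docker run --rm hello-world   # optional; pulls a tiny test image, so it needs Internet access (master node)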

# 5. Private Registry Configuration

**1) Configure the private registry**

**Run as the app user on 192.168.0.1 (Internet access required)**

docker pull registry.cn-beijing.aliyuncs.com/zhoujun/pause:3.1

docker tag registry.cn-beijing.aliyuncs.com/zhoujun/pause:3.1 k8s.gcr.io/pause:3.1

docker pull registry

docker run -d -v /data/registry:/var/lib/registry -p 5000:5000 --restart=always --privileged=true --name registry registry:latest

**Run as root on 192.168.0.1, 192.168.0.2, and 192.168.0.3**

Add the following to /etc/docker/daemon.json:

{
  "registry-mirrors": ["https://njrds9qc.mirror.aliyuncs.com"],
  "insecure-registries": ["192.168.0.1:5000"]
}

systemctl daemon-reload

systemctl restart docker

docker login 192.168.0.1:5000   # enter the username and password: wb / 123

cat ~/.docker/config.json   # check the stored credentials
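
The registry can also be checked over its HTTP API; once images have been pushed (step 2 below), they appear in the catalog:

curl http://192.168.0.1:5000/v2/_catalog
# expected output is JSON along the lines of {"repositories":["k8s.gcr.io/pause","registry"]}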

**Create the secret**

/data/projects/common/kubernetes/bin/kubectl create secret docker-registry dockercfg-192 --docker-server=192.168.0.1:5000 --docker-username=wb --docker-password=123

**Check the created dockercfg-192 secret**

/data/projects/common/kubernetes/bin/kubectl get secret |grep dockercfg-192
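
Pods pull from the private registry only when the secret is referenced through imagePullSecrets; a minimal sketch of a test pod manifest (the pod name and file name are illustrative only, and the cluster from section 7 must already be up):

cat > registry-pull-test.yaml <<'EOF'
apiVersion: v1
kind: Pod
metadata:
  name: registry-pull-test
spec:
  imagePullSecrets:
  - name: dockercfg-192
  containers:
  - name: pause
    image: 192.168.0.1:5000/k8s.gcr.io/pause:3.1
EOF
/data/projects/common/kubernetes/bin/kubectl apply -f registry-pull-test.yaml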

**2) Push images to the private registry**

**Run as the app user on 192.168.0.1**

**a. Re-tag**

docker tag f32a97de94e1 192.168.0.1:5000/registry:latest

docker tag k8s.gcr.io/pause:3.1 192.168.0.1:5000/k8s.gcr.io/pause:3.1

**b. Push**

docker push 192.168.0.1:5000/registry:latest

docker push 192.168.0.1:5000/k8s.gcr.io/pause:3.1

**c. Pull**

**Run as the app user on 192.168.0.2 and 192.168.0.3**

docker pull 192.168.0.1:5000/registry:latest

docker tag f32a97de94e1 registry:latest

docker run -d -v /data/registry:/var/lib/registry -p 5000:5000 --restart=always --privileged=true --name registry registry:latest

docker pull 192.168.0.1:5000/k8s.gcr.io/pause:3.1

docker tag 192.168.0.1:5000/k8s.gcr.io/pause:3.1 k8s.gcr.io/pause:3.1

# 6. Install the Kubernetes Management Tools

**Install as root on 192.168.0.1, 192.168.0.2, and 192.168.0.3**

yum -y install kubelet-1.16.1 kubeadm-1.16.1 kubectl-1.16.1 --disableexcludes=kubernetes

systemctl daemon-reload

systemctl enable kubelet

# 7. Deploy the Kubernetes Components

**1) List the required images (run as root on the master node, 192.168.0.1)**

#kubeadm config images list

k8s.gcr.io/kube-apiserver:v1.16.1
k8s.gcr.io/kube-controller-manager:v1.16.1
k8s.gcr.io/kube-scheduler:v1.16.1
k8s.gcr.io/kube-proxy:v1.16.1
k8s.gcr.io/pause:3.1
k8s.gcr.io/etcd:3.3.10
k8s.gcr.io/coredns:1.3.1

**2) Download the images (run as root on the master node, 192.168.0.1)**

**cat kubeadm.sh**

#!/bin/bash

set -e

KUBE_VERSION=v1.16.1
KUBE_PAUSE_VERSION=3.1
ETCD_VERSION=3.3.10
CORE_DNS_VERSION=1.3.1

GCR_URL=k8s.gcr.io
ALIYUN_URL=registry.cn-hangzhou.aliyuncs.com/google_containers

images=(kube-proxy:${KUBE_VERSION}
kube-scheduler:${KUBE_VERSION}
kube-controller-manager:${KUBE_VERSION}
kube-apiserver:${KUBE_VERSION}
pause:${KUBE_PAUSE_VERSION}
etcd:${ETCD_VERSION}
coredns:${CORE_DNS_VERSION})

for imageName in ${images[@]} ; do
docker pull $ALIYUN_URL/$imageName
docker tag $ALIYUN_URL/$imageName $GCR_URL/$imageName
docker rmi $ALIYUN_URL/$imageName
done

**Run the script**

bash kubeadm.sh

**3) Initialize (master node)**

kubeadm init \
--apiserver-advertise-address 192.168.0.1 \
--kubernetes-version=v1.16.1 \
--image-repository registry.aliyuncs.com/google_containers \
--pod-network-cidr=10.244.0.0/16

Note: after a (re-)initialization, nohup /usr/local/bin/tiller & (see section 8) needs to be run again.

**If output like the following is returned, the initialization succeeded**

kubeadm join 192.168.0.1:6443 --token yksijn.pggvc1rweyk7ryv3 \
--discovery-token-ca-cert-hash sha256:7aee53faa90a6ef1ed6a72b5ef7352843bdb0b4b93c76db786a04805ef47607b

**# Join the nodes (run on all worker nodes)**

kubeadm join 192.168.0.1:6443 --token yksijn.pggvc1rweyk7ryv3 \
--discovery-token-ca-cert-hash sha256:7aee53faa90a6ef1ed6a72b5ef7352843bdb0b4b93c76db786a04805ef47607b --ignore-preflight-errors=all
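
The bootstrap token printed by kubeadm init expires after 24 hours; if a node joins later, a fresh join command can be generated on the master:

kubeadm token create --print-join-command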

**# Allow pods to be scheduled on the master node**

kubectl taint nodes --all node-role.kubernetes.io/master-

**# Export the kubeconfig (avoids the localhost:8080 connection error)**

Add the following to /etc/profile, then run source /etc/profile:

export KUBECONFIG=/etc/kubernetes/kubelet.conf   # on the worker nodes

export KUBECONFIG=/etc/kubernetes/admin.conf     # on the master node

**4) Install the flannel plugin (all nodes)**

mkdir -p $HOME/.kube
sudo cp -i /etc/kubernetes/admin.conf $HOME/.kube/config
sudo chown $(id -u):$(id -g) $HOME/.kube/config

**# Download the images**

**vim flanneld.sh**

#!/bin/bash

set -e

FLANNEL_VERSION=v0.11.0

QUAY_URL=quay.io/coreos
QINIU_URL=quay-mirror.qiniu.com/coreos

images=(flannel:${FLANNEL_VERSION}-amd64
flannel:${FLANNEL_VERSION}-arm64
flannel:${FLANNEL_VERSION}-arm
flannel:${FLANNEL_VERSION}-ppc64le
flannel:${FLANNEL_VERSION}-s390x)

for imageName in ${images[@]} ; do
docker pull $QINIU_URL/$imageName
docker tag $QINIU_URL/$imageName $QUAY_URL/$imageName
docker rmi $QINIU_URL/$imageName
done

**Run the script**

bash flanneld.sh

**# Create the flannel resources**

git clone https://github.com/coreos/flannel.git

cd flannel/Documentation

kubectl apply -f kube-flannel.yml

**# Verify the node status**

kubectl get componentstatus

kubectl get node

**Push the Kubernetes component images to the private registry**

**# Run on the master node**

docker tag k8s.gcr.io/kube-proxy:v1.16.1 192.168.0.1:5000/k8s.gcr.io/kube-proxy:v1.16.1

docker tag k8s.gcr.io/kube-controller-manager:v1.16.1 192.168.0.1:5000/k8s.gcr.io/kube-controller-manager:v1.16.1

docker tag k8s.gcr.io/kube-apiserver:v1.16.1 192.168.0.1:5000/k8s.gcr.io/kube-apiserver:v1.16.1

docker tag k8s.gcr.io/kube-scheduler:v1.16.1 192.168.0.1:5000/k8s.gcr.io/kube-scheduler:v1.16.1

docker tag k8s.gcr.io/coredns:1.3.1 192.168.0.1:5000/k8s.gcr.io/coredns:1.3.1

docker tag k8s.gcr.io/etcd:3.3.10 192.168.0.1:5000/k8s.gcr.io/etcd:3.3.10

docker tag k8s.gcr.io/pause:3.1 192.168.0.1:5000/k8s.gcr.io/pause:3.1

docker tag quay.io/coreos/flannel:v0.11.0-s390x 192.168.0.1:5000/quay.io/coreos/flannel:v0.11.0-s390x

docker tag quay.io/coreos/flannel:v0.11.0-ppc64le 192.168.0.1:5000/quay.io/coreos/flannel:v0.11.0-ppc64le

docker tag quay.io/coreos/flannel:v0.11.0-arm64 192.168.0.1:5000/quay.io/coreos/flannel:v0.11.0-arm64

docker tag quay.io/coreos/flannel:v0.11.0-arm 192.168.0.1:5000/quay.io/coreos/flannel:v0.11.0-arm

docker tag quay.io/coreos/flannel:v0.11.0-amd64 192.168.0.1:5000/quay.io/coreos/flannel:v0.11.0-amd64

docker push 192.168.0.1:5000/k8s.gcr.io/kube-proxy:v1.16.1

docker push 192.168.0.1:5000/k8s.gcr.io/kube-controller-manager:v1.16.1

docker push 192.168.0.1:5000/k8s.gcr.io/kube-apiserver:v1.16.1

docker push 192.168.0.1:5000/k8s.gcr.io/kube-scheduler:v1.16.1

docker push 192.168.0.1:5000/k8s.gcr.io/coredns:1.3.1

docker push 192.168.0.1:5000/k8s.gcr.io/etcd:3.3.10

docker push 192.168.0.1:5000/k8s.gcr.io/pause:3.1

docker push 192.168.0.1:5000/quay.io/coreos/flannel:v0.11.0-s390x

docker push 192.168.0.1:5000/quay.io/coreos/flannel:v0.11.0-ppc64le

docker push 192.168.0.1:5000/quay.io/coreos/flannel:v0.11.0-arm64

docker push 192.168.0.1:5000/quay.io/coreos/flannel:v0.11.0-arm

docker push 192.168.0.1:5000/quay.io/coreos/flannel:v0.11.0-amd64
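
The repetitive tag/push commands above can also be expressed as a loop; a minimal sketch over the same image list (the flannel arch variants other than amd64 are omitted for brevity). On the worker nodes below, the same list can be reused with docker pull $REGISTRY/$img followed by docker tag $REGISTRY/$img $img in the loop body:

REGISTRY=192.168.0.1:5000
for img in \
  k8s.gcr.io/kube-proxy:v1.16.1 \
  k8s.gcr.io/kube-controller-manager:v1.16.1 \
  k8s.gcr.io/kube-apiserver:v1.16.1 \
  k8s.gcr.io/kube-scheduler:v1.16.1 \
  k8s.gcr.io/coredns:1.3.1 \
  k8s.gcr.io/etcd:3.3.10 \
  k8s.gcr.io/pause:3.1 \
  quay.io/coreos/flannel:v0.11.0-amd64; do
  docker tag $img $REGISTRY/$img    # re-tag with the private registry prefix
  docker push $REGISTRY/$img        # push to 192.168.0.1:5000
done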

**Run on the worker nodes**

docker pull 192.168.0.1:5000/k8s.gcr.io/kube-proxy:v1.16.1

docker pull 192.168.0.1:5000/k8s.gcr.io/kube-controller-manager:v1.16.1

docker pull 192.168.0.1:5000/k8s.gcr.io/kube-apiserver:v1.16.1

docker pull 192.168.0.1:5000/k8s.gcr.io/kube-scheduler:v1.16.1

docker pull 192.168.0.1:5000/k8s.gcr.io/coredns:1.3.1

docker pull 192.168.0.1:5000/k8s.gcr.io/etcd:3.3.10

docker pull 192.168.0.1:5000/k8s.gcr.io/pause:3.1

docker pull 192.168.0.1:5000/quay.io/coreos/flannel:v0.11.0-s390x

docker pull 192.168.0.1:5000/quay.io/coreos/flannel:v0.11.0-ppc64le

docker pull 192.168.0.1:5000/quay.io/coreos/flannel:v0.11.0-arm64

docker pull 192.168.0.1:5000/quay.io/coreos/flannel:v0.11.0-arm

docker pull 192.168.0.1:5000/quay.io/coreos/flannel:v0.11.0-amd64

docker tag 192.168.0.1:5000/k8s.gcr.io/kube-proxy:v1.16.1 k8s.gcr.io/kube-proxy:v1.16.1
docker tag 192.168.0.1:5000/k8s.gcr.io/kube-controller-manager:v1.16.1 k8s.gcr.io/kube-controller-manager:v1.16.1
docker tag 192.168.0.1:5000/k8s.gcr.io/kube-apiserver:v1.16.1 k8s.gcr.io/kube-apiserver:v1.16.1
docker tag 192.168.0.1:5000/k8s.gcr.io/kube-scheduler:v1.16.1 k8s.gcr.io/kube-scheduler:v1.16.1
docker tag 192.168.0.1:5000/k8s.gcr.io/coredns:1.3.1 k8s.gcr.io/coredns:1.3.1
docker tag 192.168.0.1:5000/k8s.gcr.io/etcd:3.3.10 k8s.gcr.io/etcd:3.3.10
docker tag 192.168.0.1:5000/k8s.gcr.io/pause:3.1 k8s.gcr.io/pause:3.1
docker tag 192.168.0.1:5000/quay.io/coreos/flannel:v0.11.0-s390x quay.io/coreos/flannel:v0.11.0-s390x
docker tag 192.168.0.1:5000/quay.io/coreos/flannel:v0.11.0-ppc64le quay.io/coreos/flannel:v0.11.0-ppc64le
docker tag 192.168.0.1:5000/quay.io/coreos/flannel:v0.11.0-arm64 quay.io/coreos/flannel:v0.11.0-arm64
docker tag 192.168.0.1:5000/quay.io/coreos/flannel:v0.11.0-arm quay.io/coreos/flannel:v0.11.0-arm
docker tag 192.168.0.1:5000/quay.io/coreos/flannel:v0.11.0-amd64 quay.io/coreos/flannel:v0.11.0-amd64

# 8. Install Helm

**Install on all nodes**

wget https://get.helm.sh/helm-v2.14.3-linux-amd64.tar.gz

tar xvf helm-v2.14.3-linux-amd64.tar.gz

sudo cp linux-amd64/helm linux-amd64/tiller /usr/local/bin

sudo yum install -y socat

sudo yum install -y *rhsm*

sudo yum -y install bridge*

sudo nohup /usr/local/bin/tiller &

sudo sed -i '$a\export HELM_HOST=localhost:44134' /etc/profile

source /etc/profile

helm version
