Reference: https://blog.csdn.net/networken/article/details/84991940

# k8s Deployment Plan

# 1. Cluster Planning

| **Server** | **Requirement** |
| ------------ | ---------------------------------------- |
| **Count** | >1 (allocate modules according to the servers actually provided) |
| **Spec** | 16 cores / 32 GB RAM / 300 GB disk / 50 Mbps bandwidth |
| **OS** | CentOS Linux 7.2; the master node needs Internet access |
| **Filesystem** | the 300 GB disk is mounted under /data |
| **Other** | the master node must have Internet access |

| Node | Hostname | IP Address | OS |
| -------- | ------------- | ----------- | ---------- |
| master | centos01 | 192.168.0.1 | CentOS 7.2 |
| node1 | centos02 | 192.168.0.2 | CentOS 7.2 |
| node2 | centos03 | 192.168.0.3 | CentOS 7.2 |

# 2. Base Environment Configuration

## 2.1 Hostname Configuration (optional)

**1) Change the hostnames**

**Run as root on 192.168.0.1:**

hostnamectl set-hostname VM_0_1_centos

**Run as root on 192.168.0.2:**

hostnamectl set-hostname VM_0_2_centos

**Run as root on 192.168.0.3:**

hostnamectl set-hostname VM_0_3_centos

**2) Add host mappings**

**Run as root on the target servers (192.168.0.1, 192.168.0.2, 192.168.0.3):**

vim /etc/hosts

192.168.0.1 VM_0_1_centos

192.168.0.2 VM_0_2_centos

192.168.0.3 VM_0_3_centos

## 2.2 Disable SELinux (optional)

**Run as root on the target servers (192.168.0.1, 192.168.0.2, 192.168.0.3):**

sed -i '/^SELINUX/s/=.*/=disabled/' /etc/selinux/config

setenforce 0

## 2.3 Increase the Linux Open-File Limit

**Run as root on the target servers (192.168.0.1, 192.168.0.2, 192.168.0.3):**

vim /etc/security/limits.conf

* soft nofile 65536

* hard nofile 65536
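
These limits only take effect for new login sessions; after logging in again, a quick check (a sketch, not part of the original write-up) is:

ulimit -n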

## 2.4 Disable the Firewall (optional)

**Run as root on the target servers (192.168.0.1, 192.168.0.2, 192.168.0.3)**

systemctl disable firewalld.service

systemctl stop firewalld.service

systemctl status firewalld.service

## 2.5 Software Environment Initialization

**1) Initialize the servers**

groupadd -g 6000 apps
useradd -s /bin/sh -g apps -d /home/app app
passwd app
yum -y install gcc gcc-c++ make autoconfig openssl-devel supervisor gmp-devel mpfr-devel libmpc-devel libaio numactl autoconf automake libtool libffi-dev

**2) Configure sudo**

**Run as root on the target servers (192.168.0.1, 192.168.0.2, 192.168.0.3)**

vim /etc/sudoers.d/app

app ALL=(ALL) ALL

app ALL=(ALL) NOPASSWD: ALL

Defaults !env_reset

**3) Configure passwordless SSH login**

**a. Run as the app user on the target servers (192.168.0.1, 192.168.0.2, 192.168.0.3)**

su app

ssh-keygen -t rsa

cat ~/.ssh/id_rsa.pub >> /home/app/.ssh/authorized_keys

chmod 600 ~/.ssh/authorized_keys

**b. Merge the id_rsa.pub files**

**Run as the app user on 192.168.0.1**

scp ~/.ssh/authorized_keys app@192.168.0.2:/home/app/.ssh

Enter the app user's password

**Run as the app user on 192.168.0.2**

cat ~/.ssh/id_rsa.pub >> /home/app/.ssh/authorized_keys

scp ~/.ssh/authorized_keys app@192.168.0.3:/home/app/.ssh

**Run as the app user on 192.168.0.3**

cat ~/.ssh/id_rsa.pub >> /home/app/.ssh/authorized_keys

scp ~/.ssh/authorized_keys app@192.168.0.1:/home/app/.ssh

scp ~/.ssh/authorized_keys app@192.168.0.2:/home/app/.ssh

**c. Test SSH as the app user on the target servers (192.168.0.1, 192.168.0.2, 192.168.0.3)**

ssh app@192.168.0.1

ssh app@192.168.0.2

ssh app@192.168.0.3

## 2.6 sysctl Parameters

**Run as root on 192.168.0.1, 192.168.0.2, 192.168.0.3**

**vim /etc/sysctl.conf**

net.bridge.bridge-nf-call-iptables=1

net.bridge.bridge-nf-call-ip6tables=1

net.ipv4.ip_forward=1

net.ipv4.tcp_tw_recycle=0

vm.swappiness=0

vm.overcommit_memory=1

vm.panic_on_oom=0

fs.inotify.max_user_watches=89100

fs.file-max=52706963

fs.nr_open=52706963

net.ipv6.conf.all.disable_ipv6=1

net.netfilter.nf_conntrack_max=2310720

**# Apply the settings**

sysctl -p
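
Note (an assumption based on standard CentOS 7 behavior, not stated in the original): the net.bridge.* keys only exist once the br_netfilter kernel module is loaded, so sysctl -p may fail until it is. A minimal sketch:

modprobe br_netfilter

echo br_netfilter > /etc/modules-load.d/br_netfilter.conf

sysctl -p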

## 2.7 ntpd Configuration

**1) Server-side configuration**

**Run as root on 192.168.0.1**

yum install -y ntp ntpdate

**Edit /etc/ntp.conf**

**Comment out all existing server and restrict lines**

**Add:**

server 0.cn.pool.ntp.org

server 0.asia.pool.ntp.org

server 3.asia.pool.ntp.org

restrict 0.cn.pool.ntp.org nomodify notrap noquery

restrict 0.asia.pool.ntp.org nomodify notrap noquery

restrict 3.asia.pool.ntp.org nomodify notrap noquery

server 127.127.1.0 # local clock

fudge 127.127.1.0 stratum 10

systemctl enable ntpd

systemctl disable chronyd

systemctl restart ntpd

**Check the NTP servers on the network**

ntpq -p

**2) Client-side configuration**

**Run as root on 192.168.0.2 and 192.168.0.3**

yum install -y ntp ntpdate

**Add to /etc/ntp.conf**

server 192.168.0.1 prefer

systemctl enable ntpd

systemctl disable chronyd

systemctl restart ntpd

**Synchronize**

ntpdate -u 192.168.0.1

Run hwclock --systohc to write the system time back to the hardware (BIOS) clock.
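
To confirm that the clients are actually syncing against the master (a sketch, not in the original), run on 192.168.0.2 and 192.168.0.3:

ntpq -p

192.168.0.1 should appear in the peer list, ideally marked with a leading asterisk once it is selected as the sync source.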


# 3. Configure a Local CentOS Repository

**Run as root on 192.168.0.1; Internet access required**

**1) Install the plugins**

yum install -y yum-plugin-downloadonly createrepo rsync

**2) Create the directory**

mkdir -p /data/mirrors/centos

**3) Download or upload packages**

yum install nginx -y --downloadonly --downloaddir=/data/mirrors/centos

Alternatively, download RPM packages yourself and place them in /data/mirrors/centos.

**4) Create the repo metadata**

createrepo /data/mirrors/centos

**5) Install nginx**

yum -y install nginx
cd /etc/nginx/conf.d

**vim mirrors.conf**

server {
    listen 88;
    server_name localhost;
    root /data/mirrors/;
    location / {
        autoindex on;
        autoindex_exact_size off;
        autoindex_localtime on;
    }
}
**Start the service**

nginx -t
systemctl enable nginx
systemctl start nginx

(If nginx is already running, reload the new configuration with nginx -s reload instead.)

**6) Configure the repo (run as root on 192.168.0.1, 192.168.0.2, 192.168.0.3)**

**vim /etc/yum.repos.d/mirrors.repo**
[yumbase]
name=yum-local-repository
baseurl=http://192.168.0.1:88/centos/
enabled=1
gpgcheck=0
# Verify
yum clean all && yum makecache
yum repoinfo yumbase

**7) Verify (on any machine)**

yum -y install <package-name>-<version>

**8) Sync the Tsinghua University mirror (run as root on 192.168.0.1)**
#!/bin/bash
/usr/bin/rsync -avz rsync://mirrors.tuna.tsinghua.edu.cn/centos/7/centosplus/x86_64/Packages/ /data/mirrors/centos
/usr/bin/rsync -avz rsync://mirrors.tuna.tsinghua.edu.cn/centos/7/extras/x86_64/Packages/ /data/mirrors/centos
/usr/bin/rsync -avz rsync://mirrors.tuna.tsinghua.edu.cn/centos/7/os/x86_64/Packages/ /data/mirrors/centos
/usr/bin/rsync -avz rsync://mirrors.tuna.tsinghua.edu.cn/centos/7/updates/x86_64/Packages/ /data/mirrors/centos
/usr/bin/rsync -avz rsync://mirrors.tuna.tsinghua.edu.cn/epel/7Server/x86_64/Packages/ /data/mirrors/centos
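
After each sync, the repository metadata has to be rebuilt so the nodes can see the new packages (a sketch; --update only regenerates metadata for changed packages), followed by yum clean all && yum makecache on the nodes:

createrepo --update /data/mirrors/centos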

**9) Sync the Aliyun Kubernetes packages (run as root on 192.168.0.1)**

These must be downloaded manually from https://mirrors.aliyun.com/kubernetes/yum/repos/kubernetes-el7-x86_64/Packages and copied into /data/mirrors/centos.

# 4. Install Docker

**Run as root on 192.168.0.1, 192.168.0.2, 192.168.0.3**

wget https://download.docker.com/linux/centos/7/x86_64/stable/Packages/containerd.io-1.2.0-3.el7.x86_64.rpm

wget https://download.docker.com/linux/centos/7/x86_64/stable/Packages/docker-ce-cli-18.09.0-3.el7.x86_64.rpm

wget https://download.docker.com/linux/centos/7/x86_64/stable/Packages/docker-ce-18.09.0-3.el7.x86_64.rpm

wget https://download.docker.com/linux/centos/7/x86_64/stable/Packages/docker-ce-selinux-17.03.3.ce-1.el7.noarch.rpm

rpm -ivh containerd.io-1.2.0-3.el7.x86_64.rpm

rpm -ivh docker-ce-selinux-17.03.3.ce-1.el7.noarch.rpm

rpm -ivh docker-ce-cli-18.09.0-3.el7.x86_64.rpm

rpm -ivh docker-ce-18.09.0-3.el7.x86_64.rpm

systemctl enable docker

usermod -aG docker app

systemctl start docker
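
A quick sanity check (a sketch, not in the original; note that the app user must start a new login session before the docker group membership takes effect):

systemctl status docker

su - app -c "docker ps"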

# 5. Private Registry Configuration

**1) Configure the private registry**

**Run as the app user on 192.168.0.1 (Internet access required)**

docker pull registry.cn-beijing.aliyuncs.com/zhoujun/pause:3.1

docker tag registry.cn-beijing.aliyuncs.com/zhoujun/pause:3.1 k8s.gcr.io/pause:3.1

docker pull registry

docker run -d -v /data/registry:/var/lib/registry -p 5000:5000 --restart=always --privileged=true --name registry registry:latest

**Run as root on 192.168.0.1, 192.168.0.2, 192.168.0.3**

Edit /etc/docker/daemon.json and add:

{

    "registry-mirrors": ["https://njrds9qc.mirror.aliyuncs.com"],

    "insecure-registries": ["192.168.0.1:5000"]

}

systemctl daemon-reload

systemctl restart docker

docker login 192.168.0.1:5000 (enter the username and password: wb / 123)

cat ~/.docker/config.json (check the stored credentials)

**Create the secret**

/data/projects/common/kubernetes/bin/kubectl create secret docker-registry dockercfg-192 --docker-server=192.168.0.1:5000 --docker-username=wb --docker-password=123

**Check the created dockercfg-192 secret**

/data/projects/common/kubernetes/bin/kubectl get secret |grep dockercfg-192
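
For pods to actually pull from the private registry with these credentials, the secret can be attached to a service account, for example the default one (a sketch, assuming the default namespace and the same kubectl path as above):

/data/projects/common/kubernetes/bin/kubectl patch serviceaccount default -p '{"imagePullSecrets": [{"name": "dockercfg-192"}]}'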

**2) Push images to the private registry**

**Run as the app user on 192.168.0.1**

**a. Re-tag the images**

docker tag f32a97de94e1 192.168.0.1:5000/registry:latest

docker tag k8s.gcr.io/pause:3.1 192.168.0.1:5000/k8s.gcr.io/pause:3.1

**b. Push**

docker push 192.168.0.1:5000/registry:latest

docker push 192.168.0.1:5000/k8s.gcr.io/pause:3.1

**c. Pull**

**Run as the app user on 192.168.0.2 and 192.168.0.3**

docker pull 192.168.0.1:5000/registry:latest

docker tag f32a97de94e1 registry:latest

docker run -d -v /data/registry:/var/lib/registry -p 5000:5000 --restart=always --privileged=true --name registry registry:latest

docker pull 192.168.0.1:5000/k8s.gcr.io/pause:3.1

docker tag 192.168.0.1:5000/k8s.gcr.io/pause:3.1 k8s.gcr.io/pause:3.1
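
To check what the private registry is actually serving, its v2 HTTP API can be queried directly (a sketch, using the port configured above):

curl http://192.168.0.1:5000/v2/_catalog

curl http://192.168.0.1:5000/v2/k8s.gcr.io/pause/tags/list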

# 6. Install the Kubernetes Management Tools

**Install as root on 192.168.0.1, 192.168.0.2, 192.168.0.3**

yum -y install kubelet-1.16.1 kubeadm-1.16.1 kubectl-1.16.1 --disableexcludes=kubernetes

systemctl daemon-reload

systemctl enable kubelet

# 7. Deploy the Kubernetes Components

**1) List the required images (run as root on the master node 192.168.0.1)**

#kubeadm config images list

k8s.gcr.io/kube-apiserver:v1.16.1
k8s.gcr.io/kube-controller-manager:v1.16.1
k8s.gcr.io/kube-scheduler:v1.16.1
k8s.gcr.io/kube-proxy:v1.16.1
k8s.gcr.io/pause:3.1
k8s.gcr.io/etcd:3.3.10
k8s.gcr.io/coredns:1.3.1

**2) Download the images (run as root on the master node 192.168.0.1)**

**cat kubeadm.sh**

#!/bin/bash

set -e

KUBE_VERSION=v1.16.1
KUBE_PAUSE_VERSION=3.1
ETCD_VERSION=3.3.10
CORE_DNS_VERSION=1.3.1

GCR_URL=k8s.gcr.io
ALIYUN_URL=registry.cn-hangzhou.aliyuncs.com/google_containers

images=(kube-proxy:${KUBE_VERSION}
kube-scheduler:${KUBE_VERSION}
kube-controller-manager:${KUBE_VERSION}
kube-apiserver:${KUBE_VERSION}
pause:${KUBE_PAUSE_VERSION}
etcd:${ETCD_VERSION}
coredns:${CORE_DNS_VERSION})

for imageName in ${images[@]} ; do
docker pull $ALIYUN_URL/$imageName
docker tag $ALIYUN_URL/$imageName $GCR_URL/$imageName
docker rmi $ALIYUN_URL/$imageName
done

**Run the script**

bash kubeadm.sh
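
The pulled and re-tagged images can be confirmed before moving on (a sketch):

docker images | grep k8s.gcr.io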

**3) Initialize (on the master node)**

kubeadm init \
--apiserver-advertise-address 192.168.0.1 \
--kubernetes-version=v1.16.1 \
--image-repository registry.aliyuncs.com/google_containers \
--pod-network-cidr=10.244.0.0/16

If the cluster is initialized again later, tiller has to be restarted with nohup /usr/local/bin/tiller & (tiller is installed in section 8).

**Initialization succeeded if output like the following is returned**

kubeadm join 192.168.0.1:6443 --token yksijn.pggvc1rweyk7ryv3 \
--discovery-token-ca-cert-hash sha256:7aee53faa90a6ef1ed6a72b5ef7352843bdb0b4b93c76db786a04805ef47607b

**# Join the nodes (run on all node machines)**

kubeadm join 192.168.0.1:6443 --token yksijn.pggvc1rweyk7ryv3 \
--discovery-token-ca-cert-hash sha256:7aee53faa90a6ef1ed6a72b5ef7352843bdb0b4b93c76db786a04805ef47607b --ignore-preflight-errors=all

**# Allow pods to be scheduled on the master node (remove the master taint)**

kubectl taint nodes --all node-role.kubernetes.io/master-

**# Export the kubeconfig (fixes "connection to localhost:8080 refused" errors)**

Add the following to /etc/profile, then run source /etc/profile

export KUBECONFIG=/etc/kubernetes/kubelet.conf   # on the worker nodes

export KUBECONFIG=/etc/kubernetes/admin.conf   # on the master node
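
If the token printed by kubeadm init expires (tokens last 24 hours by default) or the join command is lost, a new one can be generated on the master (a sketch; standard kubeadm behavior, not from the original write-up):

kubeadm token create --print-join-command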

**4) Install the flannel plugin (all nodes)**

mkdir -p $HOME/.kube
sudo cp -i /etc/kubernetes/admin.conf $HOME/.kube/config
sudo chown $(id -u):$(id -g) $HOME/.kube/config

**# Download the images**

**vim flanneld.sh**

#!/bin/bash

set -e

FLANNEL_VERSION=v0.11.0

QUAY_URL=quay.io/coreos
QINIU_URL=quay-mirror.qiniu.com/coreos

images=(flannel:${FLANNEL_VERSION}-amd64
flannel:${FLANNEL_VERSION}-arm64
flannel:${FLANNEL_VERSION}-arm
flannel:${FLANNEL_VERSION}-ppc64le
flannel:${FLANNEL_VERSION}-s390x)

for imageName in ${images[@]} ; do
docker pull $QINIU_URL/$imageName
docker tag $QINIU_URL/$imageName $QUAY_URL/$imageName
docker rmi $QINIU_URL/$imageName
done

**Run the script**

bash flanneld.sh

**# Create the flannel resources**

git clone https://github.com/coreos/flannel.git

cd flannel/Documentation

kubectl apply -f kube-flannel.yml

**# Verify the node status**

kubectl get componentstatus

kubectl get node
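
To confirm that the flannel pods themselves came up on every node (a sketch):

kubectl get pods -n kube-system -o wide | grep flannel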

**Mirror the Kubernetes component images into the private registry**

**# On the master node**

docker tag k8s.gcr.io/kube-proxy:v1.16.1 192.168.0.1:5000/k8s.gcr.io/kube-proxy:v1.16.1

docker tag k8s.gcr.io/kube-controller-manager:v1.16.1 192.168.0.1:5000/k8s.gcr.io/kube-controller-manager:v1.16.1

docker tag k8s.gcr.io/kube-apiserver:v1.16.1 192.168.0.1:5000/k8s.gcr.io/kube-apiserver:v1.16.1

docker tag k8s.gcr.io/kube-scheduler:v1.16.1 192.168.0.1:5000/k8s.gcr.io/kube-scheduler:v1.16.1

docker tag k8s.gcr.io/coredns:1.3.1 192.168.0.1:5000/k8s.gcr.io/coredns:1.3.1

docker tag k8s.gcr.io/etcd:3.3.10 192.168.0.1:5000/k8s.gcr.io/etcd:3.3.10

docker tag k8s.gcr.io/pause:3.1 192.168.0.1:5000/k8s.gcr.io/pause:3.1

docker tag quay.io/coreos/flannel:v0.11.0-s390x 192.168.0.1:5000/quay.io/coreos/flannel:v0.11.0-s390x

docker tag quay.io/coreos/flannel:v0.11.0-ppc64le 192.168.0.1:5000/quay.io/coreos/flannel:v0.11.0-ppc64le

docker tag quay.io/coreos/flannel:v0.11.0-arm64 192.168.0.1:5000/quay.io/coreos/flannel:v0.11.0-arm64

docker tag quay.io/coreos/flannel:v0.11.0-arm 192.168.0.1:5000/quay.io/coreos/flannel:v0.11.0-arm

docker tag quay.io/coreos/flannel:v0.11.0-amd64 192.168.0.1:5000/quay.io/coreos/flannel:v0.11.0-amd64

docker push 192.168.0.1:5000/k8s.gcr.io/kube-proxy:v1.16.1

docker push 192.168.0.1:5000/k8s.gcr.io/kube-controller-manager:v1.16.1

docker push 192.168.0.1:5000/k8s.gcr.io/kube-apiserver:v1.16.1

docker push 192.168.0.1:5000/k8s.gcr.io/kube-scheduler:v1.16.1

docker push 192.168.0.1:5000/k8s.gcr.io/coredns:1.3.1

docker push 192.168.0.1:5000/k8s.gcr.io/etcd:3.3.10

docker push 192.168.0.1:5000/k8s.gcr.io/pause:3.1

docker push 192.168.0.1:5000/quay.io/coreos/flannel:v0.11.0-s390x

docker push 192.168.0.1:5000/quay.io/coreos/flannel:v0.11.0-ppc64le

docker push 192.168.0.1:5000/quay.io/coreos/flannel:v0.11.0-arm64

docker push 192.168.0.1:5000/quay.io/coreos/flannel:v0.11.0-arm

docker push 192.168.0.1:5000/quay.io/coreos/flannel:v0.11.0-amd64
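
The per-image tag and push commands above can equivalently be scripted as a loop (a sketch, assuming the same image list; the remaining flannel architecture variants can be appended to the list as needed):

#!/bin/bash

set -e

REGISTRY=192.168.0.1:5000

images=(k8s.gcr.io/kube-proxy:v1.16.1
k8s.gcr.io/kube-controller-manager:v1.16.1
k8s.gcr.io/kube-apiserver:v1.16.1
k8s.gcr.io/kube-scheduler:v1.16.1
k8s.gcr.io/coredns:1.3.1
k8s.gcr.io/etcd:3.3.10
k8s.gcr.io/pause:3.1
quay.io/coreos/flannel:v0.11.0-amd64)

for img in "${images[@]}" ; do
# re-tag with the private registry prefix, then push
docker tag $img $REGISTRY/$img
docker push $REGISTRY/$img
done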

**On the worker nodes**

docker pull 192.168.0.1:5000/k8s.gcr.io/kube-proxy:v1.16.1

docker pull 192.168.0.1:5000/k8s.gcr.io/kube-controller-manager:v1.16.1

docker pull 192.168.0.1:5000/k8s.gcr.io/kube-apiserver:v1.16.1

docker pull 192.168.0.1:5000/k8s.gcr.io/kube-scheduler:v1.16.1

docker pull 192.168.0.1:5000/k8s.gcr.io/coredns:1.3.1

docker pull 192.168.0.1:5000/k8s.gcr.io/etcd:3.3.10

docker pull 192.168.0.1:5000/k8s.gcr.io/pause:3.1

docker pull 192.168.0.1:5000/quay.io/coreos/flannel:v0.11.0-s390x

docker pull 192.168.0.1:5000/quay.io/coreos/flannel:v0.11.0-ppc64le

docker pull 192.168.0.1:5000/quay.io/coreos/flannel:v0.11.0-arm64

docker pull 192.168.0.1:5000/quay.io/coreos/flannel:v0.11.0-arm

docker pull 192.168.0.1:5000/quay.io/coreos/flannel:v0.11.0-amd64

docker tag 192.168.0.1:5000/k8s.gcr.io/kube-proxy:v1.16.1 k8s.gcr.io/kube-proxy:v1.16.1
docker tag 192.168.0.1:5000/k8s.gcr.io/kube-controller-manager:v1.16.1 k8s.gcr.io/kube-controller-manager:v1.16.1
docker tag 192.168.0.1:5000/k8s.gcr.io/kube-apiserver:v1.16.1 k8s.gcr.io/kube-apiserver:v1.16.1
docker tag 192.168.0.1:5000/k8s.gcr.io/kube-scheduler:v1.16.1 k8s.gcr.io/kube-scheduler:v1.16.1
docker tag 192.168.0.1:5000/k8s.gcr.io/coredns:1.3.1 k8s.gcr.io/coredns:1.3.1
docker tag 192.168.0.1:5000/k8s.gcr.io/etcd:3.3.10 k8s.gcr.io/etcd:3.3.10
docker tag 192.168.0.1:5000/k8s.gcr.io/pause:3.1 k8s.gcr.io/pause:3.1
docker tag 192.168.0.1:5000/quay.io/coreos/flannel:v0.11.0-s390x quay.io/coreos/flannel:v0.11.0-s390x
docker tag 192.168.0.1:5000/quay.io/coreos/flannel:v0.11.0-ppc64le quay.io/coreos/flannel:v0.11.0-ppc64le
docker tag 192.168.0.1:5000/quay.io/coreos/flannel:v0.11.0-arm64 quay.io/coreos/flannel:v0.11.0-arm64
docker tag 192.168.0.1:5000/quay.io/coreos/flannel:v0.11.0-arm quay.io/coreos/flannel:v0.11.0-arm
docker tag 192.168.0.1:5000/quay.io/coreos/flannel:v0.11.0-amd64 quay.io/coreos/flannel:v0.11.0-amd64

# 8. Install Helm

**Install on all nodes**

wget https://get.helm.sh/helm-v2.14.3-linux-amd64.tar.gz

tar xvf helm-v2.14.3-linux-amd64.tar.gz

sudo cp linux-amd64/helm linux-amd64/tiller /usr/local/bin

sudo yum install -y socat

sudo yum install -y *rhsm*

sudo yum -y install bridge*

sudo nohup /usr/local/bin/tiller &

sudo sed -i '$a\export HELM_HOST=localhost:44134' /etc/profile

source /etc/profile

helm version
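
When tiller is run locally like this (rather than inside the cluster), the Helm v2 client also needs its local home directory initialized before first use (a sketch, not in the original; --client-only skips deploying tiller into the cluster):

helm init --client-only

helm version --short

Both a Client and a Server version should be reported once tiller is reachable through HELM_HOST.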
