Overlay Network: a virtual network layered on top of the underlying (underlay) network; hosts in the overlay are connected to each other through virtual links.

VXLAN: encapsulates the original packet in UDP, using the underlay network's IP/MAC addresses as the outer header, then transmits it over Ethernet; at the destination, the tunnel endpoint decapsulates the packet and delivers it to the target address.

Flannel: one implementation of an overlay network. It likewise encapsulates the original packet inside another network packet for routing and forwarding, and currently supports UDP, VXLAN, AWS VPC, GCE routes, and other forwarding backends.

Other mainstream solutions for multi-host container networking: tunnel-based schemes (Weave, Open vSwitch) and route-based schemes (Calico).

Deploying the Flannel Network



1. Write the allocated subnet range into etcd for flanneld to use

[root@master ~]# /opt/kubernetes/bin/etcdctl --ca-file=/opt/kubernetes/ssl/ca.pem --cert-file=/opt/kubernetes/ssl/server.pem --key-file=/opt/kubernetes/ssl/server-key.pem --endpoints="https://192.168.238.130:2379,https://192.168.238.129:2379,https://192.168.238.128:2379" set /coreos.com/network/config '{"Network":"172.17.0.0/16","Backend":{"Type":"vxlan"}}'
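The value stored at /coreos.com/network/config must be valid JSON or flanneld will reject it. A quick local sanity check before writing it can save a debugging round trip; this sketch assumes python3 is available and touches nothing on disk:

```shell
# Sanity-check the Flannel network config JSON locally before writing it to
# etcd. python3's json module is used only as a convenient validator here.
CONFIG='{"Network":"172.17.0.0/16","Backend":{"Type":"vxlan"}}'
echo "$CONFIG" | python3 -m json.tool > /dev/null && echo "config JSON is valid"
```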

2. Download the binary package

[root@master ~]# wget https://github.com/coreos/flannel/releases/download/v0.11.0/flannel-v0.11.0-linux-amd64.tar.gz
[root@master ~]# ls
flannel-v0.11.0-linux-amd64.tar.gz
[root@master ~]# tar -zxf flannel-v0.11.0-linux-amd64.tar.gz
[root@master ~]# ls
mk-docker-opts.sh flanneld
[root@master ~]# mv flanneld mk-docker-opts.sh /opt/kubernetes/bin
[root@master ~]# ls /opt/kubernetes/bin/
etcd  etcdctl  flanneld  mk-docker-opts.sh
Repeat the steps above on node01 and node02.

3. Configure flannel

[root@node01 ~]# cat /opt/kubernetes/cfg/flanneld
FLANNEL_OPTIONS="--etcd-endpoints=https://192.168.238.129:2379,https://192.168.238.128:2379,https://192.168.238.130:2379 --etcd-cafile=/opt/kubernetes/ssl/ca.pem --etcd-certfile=/opt/kubernetes/ssl/server.pem --etcd-keyfile=/opt/kubernetes/ssl/server-key.pem"

4. Manage flannel with systemd

[root@node01 ~]# cat /usr/lib/systemd/system/flanneld.service
[Unit]
Description=Flanneld overlay address etcd agent
After=network-online.target network.target
Before=docker.service

[Service]
Type=notify
EnvironmentFile=/opt/kubernetes/cfg/flanneld
ExecStart=/opt/kubernetes/bin/flanneld --ip-masq $FLANNEL_OPTIONS
ExecStartPost=/opt/kubernetes/bin/mk-docker-opts.sh -k DOCKER_NETWORK_OPTIONS -d /run/flannel/subnet.env
Restart=on-failure

[Install]
WantedBy=multi-user.target

5. Configure Docker to use the subnet allocated by flannel (performed below, once flanneld has generated /run/flannel/subnet.env)

6. Start the services

Reload the systemd configuration
[root@node01 ~]# systemctl daemon-reload
[root@node01 ~]# systemctl start flanneld
Job for flanneld.service failed because the control process exited with error code. See "systemctl status flanneld.service" and "journalctl -xe" for details.
Check the system log
[root@node01 ~]# tail -n 20 /var/log/messages
Jul 4 20:15:24 localhost etcd: c858c42725f38881 received MsgVoteResp from c858c42725f38881 at term 16130
Jul 4 20:15:24 localhost etcd: c858c42725f38881 [logterm: 7765, index: 18] sent MsgVote request to a7e9807772a004c5 at term 16130
Jul 4 20:15:24 localhost etcd: c858c42725f38881 [logterm: 7765, index: 18] sent MsgVote request to 203750a5948d27da at term 16130
Jul 4 20:15:25 localhost etcd: c858c42725f38881 is starting a new election at term 16130
Jul 4 20:15:25 localhost etcd: c858c42725f38881 became candidate at term 16131
Jul 4 20:15:25 localhost etcd: c858c42725f38881 received MsgVoteResp from c858c42725f38881 at term 16131
Jul 4 20:15:25 localhost etcd: c858c42725f38881 [logterm: 7765, index: 18] sent MsgVote request to 203750a5948d27da at term 16131
Jul 4 20:15:25 localhost etcd: c858c42725f38881 [logterm: 7765, index: 18] sent MsgVote request to a7e9807772a004c5 at term 16131
Jul 4 20:15:27 localhost etcd: c858c42725f38881 is starting a new election at term 16131
Jul 4 20:15:27 localhost etcd: c858c42725f38881 became candidate at term 16132
Jul 4 20:15:27 localhost etcd: c858c42725f38881 received MsgVoteResp from c858c42725f38881 at term 16132
Jul 4 20:15:27 localhost etcd: c858c42725f38881 [logterm: 7765, index: 18] sent MsgVote request to 203750a5948d27da at term 16132
Jul 4 20:15:27 localhost etcd: c858c42725f38881 [logterm: 7765, index: 18] sent MsgVote request to a7e9807772a004c5 at term 16132
Jul 4 20:15:28 localhost etcd: c858c42725f38881 is starting a new election at term 16132
Jul 4 20:15:28 localhost etcd: c858c42725f38881 became candidate at term 16133
Jul 4 20:15:28 localhost etcd: c858c42725f38881 received MsgVoteResp from c858c42725f38881 at term 16133
Jul 4 20:15:28 localhost etcd: c858c42725f38881 [logterm: 7765, index: 18] sent MsgVote request to 203750a5948d27da at term 16133
Jul 4 20:15:28 localhost etcd: c858c42725f38881 [logterm: 7765, index: 18] sent MsgVote request to a7e9807772a004c5 at term 16133
Jul 4 20:15:28 localhost etcd: health check for peer 203750a5948d27da could not connect: dial tcp 192.168.238.128:2380: getsockopt: no route to host
Jul 4 20:15:28 localhost etcd: health check for peer a7e9807772a004c5 could not connect: dial tcp 192.168.238.130:2380: i/o timeout
Preliminary diagnosis: the firewall is blocking the etcd traffic. Stop it
[root@node01 ~]# systemctl stop firewalld.service
[root@node01 ~]# systemctl start flanneld
Job for flanneld.service failed because a timeout was exceeded. See "systemctl status flanneld.service" and "journalctl -xe" for details.
Look for the cause of the failure
[root@node01 ~]# tail -n 20 /var/log/messages
Jul 6 08:49:15 localhost systemd: flanneld.service failed.
Jul 6 08:49:15 localhost systemd: flanneld.service holdoff time over, scheduling restart.
Jul 6 08:49:15 localhost systemd: Stopped Flanneld overlay address etcd agent.
Jul 6 08:49:15 localhost systemd: Starting Flanneld overlay address etcd agent...
Jul 6 08:49:15 localhost flanneld: I0706 08:49:15.831267 8741 main.go:514] Determining IP address of default interface
Jul 6 08:49:15 localhost flanneld: I0706 08:49:15.831870 8741 main.go:527] Using interface with name eno16777736 and address 192.168.238.129
Jul 6 08:49:15 localhost flanneld: I0706 08:49:15.831905 8741 main.go:544] Defaulting external address to interface address (192.168.238.129)
Jul 6 08:49:15 localhost flanneld: I0706 08:49:15.831987 8741 main.go:244] Created subnet manager: Etcd Local Manager with Previous Subnet: None
Jul 6 08:49:15 localhost flanneld: I0706 08:49:15.831992 8741 main.go:247] Installing signal handlers
Jul 6 08:49:15 localhost flanneld: E0706 08:49:15.834924 8741 main.go:382] Couldn't fetch network config: 100: Key not found (/coreos.com) [16]
Jul 6 08:49:16 localhost flanneld: timed out
Jul 6 08:49:16 localhost flanneld: E0706 08:49:16.837394 8741 main.go:382] Couldn't fetch network config: 100: Key not found (/coreos.com) [16]
Jul 6 08:49:17 localhost flanneld: timed out
Jul 6 08:49:17 localhost flanneld: E0706 08:49:17.840183 8741 main.go:382] Couldn't fetch network config: 100: Key not found (/coreos.com) [16]
Jul 6 08:49:18 localhost flanneld: timed out
Jul 6 08:49:18 localhost flanneld: E0706 08:49:18.842579 8741 main.go:382] Couldn't fetch network config: 100: Key not found (/coreos.com) [16]
Jul 6 08:49:19 localhost flanneld: timed out
Jul 6 08:49:19 localhost flanneld: E0706 08:49:19.845302 8741 main.go:382] Couldn't fetch network config: 100: Key not found (/coreos.com) [16]
Jul 6 08:49:20 localhost flanneld: timed out
Jul 6 08:49:20 localhost flanneld: E0706 08:49:20.848554 8741 main.go:382] Couldn't fetch network config: 100: Key not found (/coreos.com) [16]
Test whether etcd is reachable over the network
[root@node01 ~]# telnet 192.168.238.130 2379
Trying 192.168.238.130...
Connected to 192.168.238.130.
Escape character is '^]'.
quit
Connection closed by foreign host.
Check whether the key exists
[root@master ~]# /opt/kubernetes/bin/etcdctl --ca-file=/opt/kubernetes/ssl/ca.pem --cert-file=/opt/kubernetes/ssl/server.pem --key-file=/opt/kubernetes/ssl/server-key.pem --endpoints="https://192.168.238.130:2379,https://192.168.238.129:2379,https://192.168.238.128:2379" get /coreos.com/network/config
Error: 100: Key not found (/coreos.com) [16]
On the master, re-run step 1 to add the network config key
[root@master ~]# /opt/kubernetes/bin/etcdctl --ca-file=/opt/kubernetes/ssl/ca.pem --cert-file=/opt/kubernetes/ssl/server.pem --key-file=/opt/kubernetes/ssl/server-key.pem --endpoints="https://192.168.238.130:2379,https://192.168.238.129:2379,https://192.168.238.128:2379" set /coreos.com/network/config '{"Network":"172.17.0.0/16","Backend":{"Type":"vxlan"}}'
{"Network":"172.17.0.0/16","Backend":{"Type":"vxlan"}}
Start flanneld again
[root@node01 ~]# systemctl start flanneld
[root@node01 ~]# ip addr
1: lo: <LOOPBACK,UP,LOWER_UP> mtu 65536 qdisc noqueue state UNKNOWN
link/loopback 00:00:00:00:00:00 brd 00:00:00:00:00:00
inet 127.0.0.1/8 scope host lo
valid_lft forever preferred_lft forever
inet6 ::1/128 scope host
valid_lft forever preferred_lft forever
2: eno16777736: <BROADCAST,MULTICAST,UP,LOWER_UP> mtu 1500 qdisc pfifo_fast state UP qlen 1000
link/ether 00:0c:29:29:11:0e brd ff:ff:ff:ff:ff:ff
inet 192.168.238.129/24 brd 192.168.238.255 scope global dynamic eno16777736
valid_lft 1633sec preferred_lft 1633sec
inet6 fe80::20c:29ff:fe29:110e/64 scope link
valid_lft forever preferred_lft forever
3: docker0: <NO-CARRIER,BROADCAST,MULTICAST,UP> mtu 1500 qdisc noqueue state DOWN
link/ether 02:42:aa:0a:b1:a5 brd ff:ff:ff:ff:ff:ff
inet 172.17.0.1/16 brd 172.17.255.255 scope global docker0
valid_lft forever preferred_lft forever
4: flannel.1: <BROADCAST,MULTICAST,UP,LOWER_UP> mtu 1450 qdisc noqueue state UNKNOWN
link/ether 16:22:a1:7a:3a:99 brd ff:ff:ff:ff:ff:ff
inet 172.17.64.0/32 scope global flannel.1
valid_lft forever preferred_lft forever
inet6 fe80::1422:a1ff:fe7a:3a99/64 scope link
valid_lft forever preferred_lft forever
View the subnet information allocated by flannel
[root@node01 ~]# cat /run/flannel/subnet.env
DOCKER_OPT_BIP="--bip=172.17.64.1/24"
DOCKER_OPT_IPMASQ="--ip-masq=false"
DOCKER_OPT_MTU="--mtu=1450"
DOCKER_NETWORK_OPTIONS=" --bip=172.17.64.1/24 --ip-masq=false --mtu=1450"
Configure Docker: comment out any duplicate options and add the following
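The way systemd's EnvironmentFile= feeds these values into dockerd can be reproduced locally. A minimal sketch, using a throwaway copy of subnet.env under /tmp (the path and values here are illustrative, not the live file):

```shell
# Illustrative: write a sample subnet.env and source it the way systemd's
# EnvironmentFile= does, then show the options dockerd would receive.
cat > /tmp/subnet.env <<'EOF'
DOCKER_OPT_BIP="--bip=172.17.64.1/24"
DOCKER_OPT_IPMASQ="--ip-masq=false"
DOCKER_OPT_MTU="--mtu=1450"
DOCKER_NETWORK_OPTIONS=" --bip=172.17.64.1/24 --ip-masq=false --mtu=1450"
EOF
. /tmp/subnet.env
echo "dockerd would start with:$DOCKER_NETWORK_OPTIONS"
```

This is why commenting out a hard-coded --bip in docker.service matters: the flannel-generated value must win.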
[root@node01 ~]# vi /usr/lib/systemd/system/docker.service
EnvironmentFile=/run/flannel/subnet.env
ExecStart=/usr/bin/dockerd $DOCKER_NETWORK_OPTIONS
Restart docker
[root@node01 ~]# systemctl daemon-reload
[root@node01 ~]# systemctl restart docker
[root@node01 ~]# ip addr
1: lo: <LOOPBACK,UP,LOWER_UP> mtu 65536 qdisc noqueue state UNKNOWN
link/loopback 00:00:00:00:00:00 brd 00:00:00:00:00:00
inet 127.0.0.1/8 scope host lo
valid_lft forever preferred_lft forever
inet6 ::1/128 scope host
valid_lft forever preferred_lft forever
2: eno16777736: <BROADCAST,MULTICAST,UP,LOWER_UP> mtu 1500 qdisc pfifo_fast state UP qlen 1000
link/ether 00:0c:29:29:11:0e brd ff:ff:ff:ff:ff:ff
inet 192.168.238.129/24 brd 192.168.238.255 scope global dynamic eno16777736
valid_lft 1400sec preferred_lft 1400sec
inet6 fe80::20c:29ff:fe29:110e/64 scope link
valid_lft forever preferred_lft forever
3: docker0: <NO-CARRIER,BROADCAST,MULTICAST,UP> mtu 1500 qdisc noqueue state DOWN
link/ether 02:42:aa:0a:b1:a5 brd ff:ff:ff:ff:ff:ff
inet 172.17.64.1/24 brd 172.17.64.255 scope global docker0
valid_lft forever preferred_lft forever
4: flannel.1: <BROADCAST,MULTICAST,UP,LOWER_UP> mtu 1450 qdisc noqueue state UNKNOWN
link/ether 16:22:a1:7a:3a:99 brd ff:ff:ff:ff:ff:ff
inet 172.17.64.0/32 scope global flannel.1
valid_lft forever preferred_lft forever
inet6 fe80::1422:a1ff:fe7a:3a99/64 scope link
valid_lft forever preferred_lft forever
docker0 and flannel.1 are now on the same subnet
Repeat the steps above to configure node02
[root@node02 ~]# systemctl start flanneld
[root@node02 ~]# ip addr
1: lo: <LOOPBACK,UP,LOWER_UP> mtu 65536 qdisc noqueue state UNKNOWN
link/loopback 00:00:00:00:00:00 brd 00:00:00:00:00:00
inet 127.0.0.1/8 scope host lo
valid_lft forever preferred_lft forever
inet6 ::1/128 scope host
valid_lft forever preferred_lft forever
2: eno16777736: <BROADCAST,MULTICAST,UP,LOWER_UP> mtu 1500 qdisc pfifo_fast state UP qlen 1000
link/ether 00:0c:29:5a:c2:eb brd ff:ff:ff:ff:ff:ff
inet 192.168.238.128/24 brd 192.168.238.255 scope global dynamic eno16777736
valid_lft 1496sec preferred_lft 1496sec
inet6 fe80::20c:29ff:fe5a:c2eb/64 scope link
valid_lft forever preferred_lft forever
3: docker0: <NO-CARRIER,BROADCAST,MULTICAST,UP> mtu 1500 qdisc noqueue state DOWN
link/ether 02:42:63:4f:0b:45 brd ff:ff:ff:ff:ff:ff
inet 172.17.0.1/16 brd 172.17.255.255 scope global docker0
valid_lft forever preferred_lft forever
4: flannel.1: <BROADCAST,MULTICAST,UP,LOWER_UP> mtu 1450 qdisc noqueue state UNKNOWN
link/ether ea:b7:55:da:3b:a7 brd ff:ff:ff:ff:ff:ff
inet 172.17.89.0/32 scope global flannel.1
valid_lft forever preferred_lft forever
inet6 fe80::e8b7:55ff:feda:3ba7/64 scope link
valid_lft forever preferred_lft forever
Configure docker
[root@node02 ~]# vim /usr/lib/systemd/system/docker.service
EnvironmentFile=/run/flannel/subnet.env
ExecStart=/usr/bin/dockerd $DOCKER_NETWORK_OPTIONS
[root@node02 ~]# systemctl daemon-reload
[root@node02 ~]# systemctl restart docker
[root@node02 ~]# ip addr
1: lo: <LOOPBACK,UP,LOWER_UP> mtu 65536 qdisc noqueue state UNKNOWN
link/loopback 00:00:00:00:00:00 brd 00:00:00:00:00:00
inet 127.0.0.1/8 scope host lo
valid_lft forever preferred_lft forever
inet6 ::1/128 scope host
valid_lft forever preferred_lft forever
2: eno16777736: <BROADCAST,MULTICAST,UP,LOWER_UP> mtu 1500 qdisc pfifo_fast state UP qlen 1000
link/ether 00:0c:29:5a:c2:eb brd ff:ff:ff:ff:ff:ff
inet 192.168.238.128/24 brd 192.168.238.255 scope global dynamic eno16777736
valid_lft 1191sec preferred_lft 1191sec
inet6 fe80::20c:29ff:fe5a:c2eb/64 scope link
valid_lft forever preferred_lft forever
3: docker0: <NO-CARRIER,BROADCAST,MULTICAST,UP> mtu 1500 qdisc noqueue state DOWN
link/ether 02:42:63:4f:0b:45 brd ff:ff:ff:ff:ff:ff
inet 172.17.89.1/24 brd 172.17.89.255 scope global docker0
valid_lft forever preferred_lft forever
4: flannel.1: <BROADCAST,MULTICAST,UP,LOWER_UP> mtu 1450 qdisc noqueue state UNKNOWN
link/ether ea:b7:55:da:3b:a7 brd ff:ff:ff:ff:ff:ff
inet 172.17.89.0/32 scope global flannel.1
valid_lft forever preferred_lft forever
inet6 fe80::e8b7:55ff:feda:3ba7/64 scope link
valid_lft forever preferred_lft forever
Test cross-host connectivity (node02 pings node01's docker0 address)
[root@node02 ~]# ping 172.17.64.1
PING 172.17.64.1 (172.17.64.1) 56(84) bytes of data.
64 bytes from 172.17.64.1: icmp_seq=1 ttl=64 time=0.508 ms
64 bytes from 172.17.64.1: icmp_seq=2 ttl=64 time=0.336 ms
[root@node01 ~]# ping 172.17.64.1
PING 172.17.64.1 (172.17.64.1) 56(84) bytes of data.
64 bytes from 172.17.64.1: icmp_seq=1 ttl=64 time=0.032 ms
64 bytes from 172.17.64.1: icmp_seq=2 ttl=64 time=0.030 ms
If the firewall is to stay enabled, configure a rule instead of disabling it
[root@master ~]# iptables -I INPUT -s 192.168.238.0/24 -j ACCEPT
List the stored keys
[root@master ~]# /opt/kubernetes/bin/etcdctl --ca-file=/opt/kubernetes/ssl/ca.pem --cert-file=/opt/kubernetes/ssl/server.pem --key-file=/opt/kubernetes/ssl/server-key.pem --endpoints="https://192.168.238.130:2379,https://192.168.238.129:2379,https://192.168.238.128:2379" ls /coreos.com/network/
/coreos.com/network/subnets
/coreos.com/network/config
List the allocated subnets
[root@master ~]# /opt/kubernetes/bin/etcdctl --ca-file=/opt/kubernetes/ssl/ca.pem --cert-file=/opt/kubernetes/ssl/server.pem --key-file=/opt/kubernetes/ssl/server-key.pem --endpoints="https://192.168.238.130:2379,https://192.168.238.129:2379,https://192.168.238.128:2379" ls /coreos.com/network/subnets
/coreos.com/network/subnets/172.17.64.0-24
/coreos.com/network/subnets/172.17.89.0-24
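The key names above follow a simple convention: each lease key is the subnet CIDR with the '/' replaced by '-'. A sketch of that mapping in plain POSIX shell:

```shell
# Illustrative: derive the etcd key name flannel uses for a subnet lease
# from its CIDR, by swapping '/' for '-'.
subnet="172.17.64.0/24"
key="/coreos.com/network/subnets/$(echo "$subnet" | tr '/' '-')"
echo "$key"   # -> /coreos.com/network/subnets/172.17.64.0-24
```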
Get a subnet key's value
[root@master ~]# /opt/kubernetes/bin/etcdctl --ca-file=/opt/kubernetes/ssl/ca.pem --cert-file=/opt/kubernetes/ssl/server.pem --key-file=/opt/kubernetes/ssl/server-key.pem --endpoints="https://192.168.238.130:2379,https://192.168.238.129:2379,https://192.168.238.128:2379" get /coreos.com/network/subnets/172.17.64.0-24
{"PublicIP":"192.168.238.129","BackendType":"vxlan","BackendData":{"VtepMAC":"16:22:a1:7a:3a:99"}}
View the routing table
[root@node01 ~]# ip route show
default via 192.168.238.2 dev eno16777736 proto static metric 100
172.17.64.0/24 dev docker0 proto kernel scope link src 172.17.64.1
172.17.89.0/24 via 172.17.89.0 dev flannel.1 onlink
192.168.238.0/24 dev eno16777736 proto kernel scope link src 192.168.238.129 metric 100
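The third route is the one that carries cross-host traffic: packets for node02's subnet are sent via flannel.1 and VXLAN-encapsulated. Filtering a routing table for such routes can be sketched as follows (the table is inlined as sample data rather than read from `ip route`, so the sketch runs anywhere):

```shell
# Illustrative: pick out the routes that traverse the flannel.1 VXLAN device
# from a captured routing table (inlined instead of calling `ip route show`).
routes='default via 192.168.238.2 dev eno16777736 proto static metric 100
172.17.64.0/24 dev docker0 proto kernel scope link src 172.17.64.1
172.17.89.0/24 via 172.17.89.0 dev flannel.1 onlink
192.168.238.0/24 dev eno16777736 proto kernel scope link src 192.168.238.129 metric 100'
echo "$routes" | grep 'dev flannel.1'
```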
