Container Cloud Platform No.2: Building a Highly Available Cluster with kubeadm (v1.19.1)
This is the second post on building a container cloud platform with Kubernetes. The official v1.19 release has just come out, so this post uses the latest version to walk through installing a highly available Kubernetes cluster with kubeadm.
There are plenty of tools out there for installing k8s, but for learning it is still worth doing it step by step, so you understand the components running inside the cluster and later troubleshooting becomes much easier.
Environment used in this post:
Servers: 3
OS: CentOS 7
I won't draw the topology myself; the diagram below is copied straight from the official docs.

Overview
A quick walkthrough of the diagram: the three servers all act as master nodes, keepalived + HAProxy load-balance the apiserver, and worker nodes reach the apiserver through the VIP. As explained in the first post, all cluster state is stored in the etcd cluster.
Alright, let's get to work.
Configure the package repositories
Three repositories are configured here, all switched to mirrors inside China to speed up package downloads.
# Run these from /etc/yum.repos.d so the downloaded repo files take effect
cd /etc/yum.repos.d
# OS base repo
curl -O http://mirrors.aliyun.com/repo/Centos-7.repo
# Docker repo
curl -O https://mirrors.ustc.edu.cn/docker-ce/linux/centos/docker-ce.repo
sed -i 's/download.docker.com/mirrors.ustc.edu.cn\/docker-ce/g' docker-ce.repo
# Kubernetes repo (GPG checks disabled; the mirror's repo metadata signature often fails verification on CentOS 7)
cat <<EOF > /etc/yum.repos.d/kubernetes.repo
[kubernetes]
name=Kubernetes
baseurl=https://mirrors.aliyun.com/kubernetes/yum/repos/kubernetes-el7-x86_64
enabled=1
gpgcheck=0
repo_gpgcheck=0
gpgkey=https://mirrors.aliyun.com/kubernetes/yum/doc/yum-key.gpg https://mirrors.aliyun.com/kubernetes/yum/doc/rpm-package-key.gpg
EOF
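With the repositories in place, it is worth refreshing the yum cache and making sure all three repos show up before installing anything (a generic sanity check, not part of the original post):
yum clean all && yum makecache fast
yum repolist | grep -E 'base|docker-ce|kubernetes'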
Configure system parameters
With the repositories done, a few system parameters need to be set. These are all official recommendations; more tuning will be covered in a later post.
# Temporarily disable SELinux for the current boot (setenforce 0)
# To disable it permanently, edit /etc/sysconfig/selinux
sed -i 's/^SELINUX=.*/SELINUX=disabled/' /etc/sysconfig/selinux
setenforce 0
# Temporarily turn off swap
# To disable it permanently, comment out the swap line in /etc/fstab
swapoff -a
# Enable forwarding
# Docker changed the default firewall rules starting with 1.13
# and sets the iptables filter FORWARD chain policy to DROP,
# which breaks pod-to-pod traffic across nodes in a Kubernetes cluster
iptables -P FORWARD ACCEPT
# Configure forwarding-related kernel parameters; skipping this can cause errors later
cat <<EOF > /etc/sysctl.d/k8s.conf
net.bridge.bridge-nf-call-ip6tables = 1
net.bridge.bridge-nf-call-iptables = 1
vm.swappiness=0
EOF
sysctl --system
# Load the kernel modules required by IPVS
# These are not persistent: after a reboot they must be loaded again (one way to automate this is sketched below)
modprobe ip_vs
modprobe ip_vs_rr
modprobe ip_vs_wrr
modprobe ip_vs_sh
modprobe nf_conntrack_ipv4
lsmod | grep ip_vs
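Since these modules disappear on reboot, one way to load them automatically at boot is systemd-modules-load; a minimal sketch (the file name ipvs.conf is arbitrary):
cat <<EOF > /etc/modules-load.d/ipvs.conf
ip_vs
ip_vs_rr
ip_vs_wrr
ip_vs_sh
nf_conntrack_ipv4
EOF
systemctl restart systemd-modules-load.service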
Install kubeadm and related packages
yum install -y kubelet kubeadm kubectl ipvsadm
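Without a version specified, yum installs whatever is newest in the repo. If you want the packages to match the v1.19.1 control plane built in this post, you can pin them instead; the exact package versions below are an assumption, check yum list --showduplicates kubeadm for what the mirror actually offers:
yum install -y kubelet-1.19.1 kubeadm-1.19.1 kubectl-1.19.1 ipvsadm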
Configure Docker
The goals here are to speed up pulls of public images through registry mirrors and to allow pulls from an insecure private registry.
Replace hub.xxx.com with your own private registry address; if you don't have one, delete the insecure-registries line.
vim /etc/docker/daemon.json
{
  "registry-mirrors": ["https://ci7pm4nx.mirror.aliyuncs.com","https://registry.docker-cn.com","http://hub-mirror.c.163.com"],
  "insecure-registries": ["hub.xxx.com"]
}
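Optionally you can also switch Docker's cgroup driver to systemd here, which silences the IsDockerSystemdCheck warning that shows up later during kubeadm init. The exec-opts line is my addition, not part of the original setup; with Docker as the runtime, kubeadm detects the driver and configures the kubelet to match:
{
  "registry-mirrors": ["https://ci7pm4nx.mirror.aliyuncs.com","https://registry.docker-cn.com","http://hub-mirror.c.163.com"],
  "insecure-registries": ["hub.xxx.com"],
  "exec-opts": ["native.cgroupdriver=systemd"]
}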
After writing the config, restart Docker:
systemctl restart docker
systemctl enable docker.service
Check docker info; the output should include:
Insecure Registries:
hub.xxx.com
127.0.0.0/8
Registry Mirrors:
https://ci7pm4nx.mirror.aliyuncs.com/
https://registry.docker-cn.com/
http://hub-mirror.c.163.com/
Start kubelet
systemctl enable --now kubelet
kubelet will now restart every few seconds, crash-looping while it waits for instructions from kubeadm. This is expected at this point.
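If you want to confirm that this is the expected "waiting for kubeadm" crash loop rather than a real failure, the standard systemd commands are enough (not specific to this setup):
systemctl status kubelet
journalctl -u kubelet -f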
Install and configure HAProxy and keepalived (on all three machines)
Install the packages:
yum install -y haproxy keepalived
Configure HAProxy
Note: create the /var/log/haproxy.log file by hand. HAProxy logs through syslog, so if nothing ends up in that file you may also need an rsyslog rule for the local0 facility.
[root@k8s-master001 ~]# cat /etc/haproxy/haproxy.cfg
# /etc/haproxy/haproxy.cfg
#---------------------------------------------------------------------
# Global settings
#---------------------------------------------------------------------
global
    log /var/log/haproxy.log local0
    daemon
#---------------------------------------------------------------------
# common defaults that all the 'listen' and 'backend' sections will
# use if not designated in their block
#---------------------------------------------------------------------
defaults
    mode http
    log global
    retries 1
    timeout http-request 10s
    timeout queue 20s
    timeout connect 5s
    timeout client 20s
    timeout server 20s
    timeout http-keep-alive 10s
    timeout check 10s
listen admin_stats
    mode http
    bind 0.0.0.0:1080
    log 127.0.0.1 local0 err
    stats refresh 30s
    stats uri /haproxy-status
    stats realm Haproxy\ Statistics
    stats auth admin:admin
    stats hide-version
    stats admin if TRUE
#---------------------------------------------------------------------
# apiserver frontend which proxys to the masters
#---------------------------------------------------------------------
frontend apiserver
    bind *:8443
    mode tcp
    option tcplog
    default_backend apiserver
#---------------------------------------------------------------------
# round robin balancing for apiserver
#---------------------------------------------------------------------
backend apiserver
    option httpchk GET /healthz
    http-check expect status 200
    mode tcp
    option ssl-hello-chk
    balance roundrobin
    server k8s-master001 10.26.25.20:6443 weight 1 maxconn 1000 check inter 2000 rise 2 fall 3
    server k8s-master002 10.26.25.21:6443 weight 1 maxconn 1000 check inter 2000 rise 2 fall 3
    server k8s-master003 10.26.25.22:6443 weight 1 maxconn 1000 check inter 2000 rise 2 fall 3
Start HAProxy
systemctl start haproxy
systemctl enable haproxy
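Before moving on you can validate the configuration and confirm HAProxy is listening on the frontend and stats ports (generic checks, not from the original post):
haproxy -c -f /etc/haproxy/haproxy.cfg
ss -lntp | grep -E ':8443|:1080'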
Configure keepalived
The file below is for master001. On master002 and master003 use the same configuration but set state to BACKUP and give each a lower priority (for example 99 and 98), so that only one node holds the VIP at a time.
[root@k8s-master001 ~]# cat /etc/keepalived/keepalived.conf
! /etc/keepalived/keepalived.conf
! Configuration File for keepalived
global_defs {
    router_id LVS_K8S
}
vrrp_script check_apiserver {
    script "/etc/keepalived/check_apiserver.sh"
    interval 3
    weight -2
    fall 10
    rise 2
}
vrrp_instance VI_1 {
    state MASTER
    interface ens18
    virtual_router_id 51
    priority 100
    authentication {
        auth_type PASS
        auth_pass kubernetes
    }
    virtual_ipaddress {
        10.26.25.23
    }
    track_script {
        check_apiserver
    }
}
Add the keepalived health-check script
[root@k8s-master001 ~]# cat /etc/keepalived/check_apiserver.sh
#!/bin/sh
errorExit() {
    echo "*** $*" 1>&2
    exit 1
}
curl --silent --max-time 2 --insecure https://localhost:8443/ -o /dev/null || errorExit "Error GET https://localhost:8443/"
if ip addr | grep -q 10.26.25.23; then
    curl --silent --max-time 2 --insecure https://10.26.25.23:8443/ -o /dev/null || errorExit "Error GET https://10.26.25.23:8443/"
fi
chmod +x /etc/keepalived/check_apiserver.sh
Start keepalived
systemctl start keepalived
systemctl enable keepalived
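Once keepalived is running, the VIP should be attached to exactly one of the three masters; a quick way to check (interface name as in the config above):
ip addr show ens18 | grep 10.26.25.23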
You can now open the HAProxy management UI at http://<master IP>:1080/haproxy-status; the username and password are set in the config file, admin/admin in this post, and you can change them as you like.
At first the apiserver rows will all be red, meaning the backends are not up yet. The screenshot here was taken later, which is why they are green.

Next, initialize the Kubernetes cluster.
Initialize the first control-plane node, master001
[root@k8s-master001 ~]# kubeadm init --control-plane-endpoint 10.26.25.23:8443 --upload-certs --image-repository registry.aliyuncs.com/google_containers --pod-network-cidr 10.244.0.0/16
W0910 05:09:41.166260 29186 configset.go:348] WARNING: kubeadm cannot validate component configs for API groups [kubelet.config.k8s.io kubeproxy.config.k8s.io]
[init] Using Kubernetes version: v1.19.1
[preflight] Running pre-flight checks
[WARNING IsDockerSystemdCheck]: detected "cgroupfs" as the Docker cgroup driver. The recommended driver is "systemd". Please follow the guide at https://kubernetes.io/docs/setup/cri/
[preflight] Pulling images required for setting up a Kubernetes cluster
[preflight] This might take a minute or two, depending on the speed of your internet connection
[preflight] You can also perform this action in beforehand using 'kubeadm config images pull'
[certs] Using certificateDir folder "/etc/kubernetes/pki"
........ some output omitted
[kubelet-start] Writing kubelet environment file with flags to file "/var/lib/kubelet/kubeadm-flags.env"
[kubelet-start] Writing kubelet configuration to file "/var/lib/kubelet/config.yaml"
[kubelet-start] Starting the kubelet
[control-plane] Using manifest folder "/etc/kubernetes/manifests"
[control-plane] Creating static Pod manifest for "kube-apiserver"
[control-plane] Creating static Pod manifest for "kube-controller-manager"
[control-plane] Creating static Pod manifest for "kube-scheduler"
[etcd] Creating static Pod manifest for local etcd in "/etc/kubernetes/manifests"
............ some output omitted
[addons] Applied essential addon: CoreDNS
[endpoint] WARNING: port specified in controlPlaneEndpoint overrides bindPort in the controlplane address
[addons] Applied essential addon: kube-proxy
Your Kubernetes control-plane has initialized successfully!
To start using your cluster, you need to run the following as a regular user:
mkdir -p $HOME/.kube
sudo cp -i /etc/kubernetes/admin.conf $HOME/.kube/config
sudo chown $(id -u):$(id -g) $HOME/.kube/config
You should now deploy a pod network to the cluster.
Run "kubectl apply -f [podnetwork].yaml" with one of the options listed at:
https://kubernetes.io/docs/concepts/cluster-administration/addons/
You can now join any number of the control-plane node running the following command on each as root:
kubeadm join 10.26.25.23:8443 --token f28iti.c5fgj45u28332ga7 \
--discovery-token-ca-cert-hash sha256:81ec8f1d1db0bb8a31d64ae31091726a92b9294bcfa0e2b4309b9d8c5245db41 \
--control-plane --certificate-key 93f9514164e2ecbd85293a9c671344e06a1aa811faf1069db6f678a1a5e6f38b
Please note that the certificate-key gives access to cluster sensitive data, keep it secret!
As a safeguard, uploaded-certs will be deleted in two hours; If necessary, you can use
"kubeadm init phase upload-certs --upload-certs" to reload certs afterward.
Then you can join any number of worker nodes by running the following on each as root:
kubeadm join 10.26.25.23:8443 --token f28iti.c5fgj45u28332ga7 \
--discovery-token-ca-cert-hash sha256:81ec8f1d1db0bb8a31d64ae31091726a92b9294bcfa0e2b4309b9d8c5245db41
Output like the above means initialization succeeded.
Explanation of the init command (an equivalent config-file form is sketched after this list):
kubeadm init --control-plane-endpoint 10.26.25.23:8443 --upload-certs --image-repository registry.aliyuncs.com/google_containers --pod-network-cidr 10.244.0.0/16
- --control-plane-endpoint 10.26.25.23:8443: 10.26.25.23 is the VIP configured in keepalived
- --image-repository registry.aliyuncs.com/google_containers: overrides the default image registry, because the default k8s.gcr.io is unreachable from mainland China unless you go through a proxy
- --pod-network-cidr 10.244.0.0/16: defines the pod network, which must match the network defined in the flannel manifest; otherwise the flannel pods may keep restarting (more on this when flannel is installed below)
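For reference, the same flags can also be expressed as a kubeadm configuration file and passed with --config. This is only a sketch of the equivalent, not what was actually run in this post:
# kubeadm-config.yaml (equivalent to the flags above)
apiVersion: kubeadm.k8s.io/v1beta2
kind: ClusterConfiguration
kubernetesVersion: v1.19.1
controlPlaneEndpoint: "10.26.25.23:8443"
imageRepository: registry.aliyuncs.com/google_containers
networking:
  podSubnet: 10.244.0.0/16
It would then be started with kubeadm init --config kubeadm-config.yaml --upload-certs.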
What the init process does:
- pulls the required images
- generates the certificates
- writes the YAML manifests for the control-plane components
- starts them as static pods
Once initialization is done, you can configure the kubectl client as the output suggests and start using Kubernetes, even though the cluster has only one master node for now.
Start using the cluster
[root@k8s-master001 ~]# mkdir -p $HOME/.kube
[root@k8s-master001 ~]# sudo cp -i /etc/kubernetes/admin.conf $HOME/.kube/config
[root@k8s-master001 ~]# sudo chown $(id -u):$(id -g) $HOME/.kube/config
[root@k8s-master001 ~]# kubectl get no
NAME STATUS ROLES AGE VERSION
k8s-master001 NotReady master 105s v1.19.0
The cluster currently shows a single node in NotReady state, because no network plugin has been installed yet.
Next, install the Flannel network plugin.
Install Flannel
Download the manifest: wget https://raw.githubusercontent.com/coreos/flannel/master/Documentation/k8s-manifests/kube-flannel.yml
Because this is the latest Kubernetes release, the RBAC API version in the manifest has to be changed to rbac.authorization.k8s.io/v1, the DaemonSet API version to apps/v1, and a selector has to be added. Only part of the modified manifest is shown here.
[root@k8s-master001 ~]# cat kube-flannel.yml
---
apiVersion: apps/v1
kind: DaemonSet
metadata:
  name: kube-flannel-ds
  namespace: kube-system
  labels:
    tier: node
    app: flannel
spec:
  selector:
    matchLabels:
      tier: node
      app: flannel
  template:
    metadata:
      labels:
        tier: node
        app: flannel
Next, install Flannel with kubectl and check whether the flannel pod is running.
kubectl apply -f kube-flannel.yml
[root@k8s-master001 ~]# kubectl get no
NAME STATUS ROLES AGE VERSION
k8s-master001 Ready master 6m35s v1.19.0
[root@k8s-master001 ~]# kubectl get po -n kube-system
NAME READY STATUS RESTARTS AGE
coredns-6d56c8448f-9cr5l 1/1 Running 0 6m51s
coredns-6d56c8448f-wsjwx 1/1 Running 0 6m51s
etcd-k8s-master001 1/1 Running 0 7m
kube-apiserver-k8s-master001 1/1 Running 0 7m
kube-controller-manager-k8s-master001 1/1 Running 0 7m
kube-flannel-ds-nmfwd 1/1 Running 0 4m36s
kube-proxy-pqrnl 1/1 Running 0 6m51s
kube-scheduler-k8s-master001 1/1 Running 0 7m
There is now a pod named kube-flannel-ds-nmfwd in Running state, which means flannel is installed.
Since the cluster has only one node so far there is only one flannel pod; more will appear once the other two nodes join.
Next, add the remaining master nodes.
Add the other control-plane nodes, master002 and master003
With one control-plane node up the cluster already exists, so the remaining machines only need to join it. The join command was printed in the init output shown above.
The output is long, so some of the less important lines are omitted here.
On master002:
[root@k8s-master002 ~]# kubeadm join 10.26.25.23:8443 --token f28iti.c5fgj45u28332ga7 --discovery-token-ca-cert-hash sha256:81ec8f1d1db0bb8a31d64ae31091726a92b9294bcfa0e2b4309b9d8c5245db41 --control-plane --certificate-key 93f9514164e2ecbd85293a9c671344e06a1aa811faf1069db6f678a1a5e6f38b
[preflight] Running pre-flight checks
[WARNING IsDockerSystemdCheck]: detected "cgroupfs" as the Docker cgroup driver. The recommended driver is "systemd". Please follow the guide at https://kubernetes.io/docs/setup/cri/
[preflight] Reading configuration from the cluster...
[preflight] FYI: You can look at this config file with 'kubectl -n kube-system get cm kubeadm-config -oyaml'
[preflight] Running pre-flight checks before initializing the new control plane instance
[preflight] Pulling images required for setting up a Kubernetes cluster
[preflight] This might take a minute or two, depending on the speed of your internet connection
[preflight] You can also perform this action in beforehand using 'kubeadm config images pull'
[download-certs] Downloading the certificates in Secret "kubeadm-certs" in the "kube-system" Namespace
..............
To start administering your cluster from this node, you need to run the following as a regular user:
mkdir -p $HOME/.kube
sudo cp -i /etc/kubernetes/admin.conf $HOME/.kube/config
sudo chown $(id -u):$(id -g) $HOME/.kube/config
Run 'kubectl get nodes' to see this node join the cluster.
Output like this means the node joined successfully.
Now check the cluster nodes:
[root@k8s-master002 ~]# kubectl get no
NAME STATUS ROLES AGE VERSION
k8s-master001 Ready master 21m v1.19.0
k8s-master002 Ready master 6m5s v1.19.0
The output shows two master nodes. Adding master003 works exactly the same way as master002, so it is not repeated here.
After all three nodes have joined, kubectl shows the whole cluster:
[root@k8s-master003 ~]# kubectl get no
NAME STATUS ROLES AGE VERSION
k8s-master001 Ready master 25m v1.19.0
k8s-master002 Ready master 10m v1.19.0
k8s-master003 Ready master 26s v1.19.0
Finally, list all the pods that are now running:
[root@k8s-master003 ~]# kubectl get po -n kube-system
NAME READY STATUS RESTARTS AGE
coredns-6d56c8448f-9cr5l 1/1 Running 0 27m
coredns-6d56c8448f-wsjwx 1/1 Running 0 27m
etcd-k8s-master001 1/1 Running 0 27m
etcd-k8s-master002 1/1 Running 0 8m19s
etcd-k8s-master003 1/1 Running 0 83s
kube-apiserver-k8s-master001 1/1 Running 0 27m
kube-apiserver-k8s-master002 1/1 Running 0 12m
kube-apiserver-k8s-master003 1/1 Running 0 85s
kube-controller-manager-k8s-master001 1/1 Running 1 27m
kube-controller-manager-k8s-master002 1/1 Running 0 12m
kube-controller-manager-k8s-master003 1/1 Running 0 81s
kube-flannel-ds-2lh42 1/1 Running 0 2m31s
kube-flannel-ds-nmfwd 1/1 Running 0 25m
kube-flannel-ds-w276b 1/1 Running 0 11m
kube-proxy-dzpdz 1/1 Running 0 2m39s
kube-proxy-hd5tb 1/1 Running 0 12m
kube-proxy-pqrnl 1/1 Running 0 27m
kube-scheduler-k8s-master001 1/1 Running 1 27m
kube-scheduler-k8s-master002 1/1 Running 0 12m
kube-scheduler-k8s-master003 1/1 Running 0 76s
As you can see, the core Kubernetes services (kube-apiserver, kube-controller-manager and kube-scheduler) each run as three pods now.
With that, the highly available Kubernetes control plane is fully deployed.
You can also open the HAProxy web UI and see that all three masters are up.
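To double-check that the apiserver is really reachable through the load-balanced endpoint (a generic check, not from the original post), hit the health endpoint via the VIP; it should return ok from whichever backend HAProxy picks:
curl -k https://10.26.25.23:8443/healthz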
Troubleshooting
If initializing a master or joining a node fails, you can reset it with kubeadm reset and then install again.
Reset a node
[root@k8s-node003 haproxy]# kubeadm reset
[reset] Reading configuration from the cluster...
[reset] FYI: You can look at this config file with 'kubectl -n kube-system get cm kubeadm-config -oyaml'
W0910 05:31:57.345399 20386 reset.go:99] [reset] Unable to fetch the kubeadm-config ConfigMap from cluster: failed to get node registration: node k8s-node003 doesn't have kubeadm.alpha.kubernetes.io/cri-socket annotation
[reset] WARNING: Changes made to this host by 'kubeadm init' or 'kubeadm join' will be reverted.
[reset] Are you sure you want to proceed? [y/N]: y
[preflight] Running pre-flight checks
W0910 05:31:58.580982 20386 removeetcdmember.go:79] [reset] No kubeadm config, using etcd pod spec to get data directory
[reset] No etcd config found. Assuming external etcd
[reset] Please, manually reset etcd to prevent further issues
[reset] Stopping the kubelet service
[reset] Unmounting mounted directories in "/var/lib/kubelet"
[reset] Deleting contents of config directories: [/etc/kubernetes/manifests /etc/kubernetes/pki]
[reset] Deleting files: [/etc/kubernetes/admin.conf /etc/kubernetes/kubelet.conf /etc/kubernetes/bootstrap-kubelet.conf /etc/kubernetes/controller-manager.conf /etc/kubernetes/scheduler.conf]
[reset] Deleting contents of stateful directories: [/var/lib/kubelet /var/lib/dockershim /var/run/kubernetes /var/lib/cni]
The reset process does not clean CNI configuration. To do so, you must remove /etc/cni/net.d
The reset process does not reset or clean up iptables rules or IPVS tables.
If you wish to reset iptables, you must do so manually by using the "iptables" command.
If your cluster was setup to utilize IPVS, run ipvsadm --clear (or similar)
to reset your system's IPVS tables.
The reset process does not clean your kubeconfig files and you must remove them manually.
Please, check the contents of the $HOME/.kube/config file.
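As the reset output says, a few things are not cleaned up automatically. A minimal cleanup sketch along the lines of what the output suggests, to be run only if you really want to wipe the node before rejoining:
rm -rf /etc/cni/net.d                # leftover CNI configuration
ipvsadm --clear                      # IPVS tables, if kube-proxy used IPVS
iptables -F && iptables -t nat -F && iptables -t mangle -F && iptables -X   # iptables rules
rm -rf $HOME/.kube                   # stale kubeconfig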
That is plenty for one post; the rest will follow in the next article.
Tips: for more articles like this, follow the WeChat official account "菜鸟运维杂谈", where these posts are published first!