Environment:

Host      IP address        Components
ansible   192.168.175.130   ansible
master    192.168.175.140   docker, kubectl, kubeadm, kubelet
node1     192.168.175.141   docker, kubectl, kubeadm, kubelet
node2     192.168.175.142   docker, kubectl, kubeadm, kubelet

Syntax check and debug commands:

$ ansible-playbook -v k8s-time-sync.yaml --syntax-check
$ ansible-playbook -v k8s-*.yaml -C
$ ansible-playbook -v k8s-yum-cfg.yaml -C --start-at-task="Clean origin dir" --step
$ ansible-playbook -v k8s-kernel-cfg.yaml --step

Host inventory file:

/root/ansible/hosts

[k8s_cluster]
master ansible_host=192.168.175.140
node1 ansible_host=192.168.175.141
node2 ansible_host=192.168.175.142

[k8s_cluster:vars]
ansible_port=22
ansible_user=root
ansible_password=hello123

Check the network: k8s-check.yaml

  • Check that each k8s host is reachable over the network;
  • Check that the operating system version on each k8s host meets the requirement.
- name: step01_check
  hosts: k8s_cluster
  gather_facts: no
  tasks:
    - name: check network
      shell:
        cmd: "ping -c 3 -m 2 {{ ansible_host }}"
      delegate_to: localhost

    - name: get system version
      shell: cat /etc/system-release
      register: system_release

    - name: check system version
      vars:
        system_version: "{{ system_release.stdout | regex_search('([7-9].[0-9]+).*?') }}"
        suitable_version: 7.5
      debug:
        msg: "{{ 'The version of the operating system is ' + system_version + ', suitable!' if (system_version | float >= suitable_version) else 'The version of the operating system is unsuitable' }}"

Debug commands:

$ ansible-playbook --ssh-extra-args '-o StrictHostKeyChecking=no' -v -C k8s-check.yaml

$ ANSIBLE_HOST_KEY_CHECKING=False ansible-playbook -v -C k8s-check.yaml

$ ANSIBLE_HOST_KEY_CHECKING=False ansible-playbook -v k8s-check.yaml --start-at-task="get system version"

Connection configuration: k8s-conn-cfg.yaml

  • Add name resolution for the k8s hosts to /etc/hosts on the ansible server
  • Generate a key pair and configure passwordless SSH login from ansible to every k8s host
- name: step02_conn_cfg
  hosts: k8s_cluster
  gather_facts: no
  vars_prompt:
    - name: RSA
      prompt: Generate RSA or not(Yes/No)?
      default: "no"
      private: no
    - name: password
      prompt: input your login password?
      default: "hello123"
  tasks:
    - name: Add DNS of k8s to ansible
      delegate_to: localhost
      lineinfile:
        path: /etc/hosts
        line: "{{ ansible_host }} {{ inventory_hostname }}"
        backup: yes

    - name: Generate RSA
      run_once: true
      delegate_to: localhost
      shell:
        cmd: ssh-keygen -t rsa -f ~/.ssh/id_rsa -N ''
        creates: /root/.ssh/id_rsa
      when: RSA | bool

    - name: Configure password free login
      delegate_to: localhost
      shell: |
        /usr/bin/ssh-keyscan {{ ansible_host }} >> /root/.ssh/known_hosts 2> /dev/null
        /usr/bin/ssh-keyscan {{ inventory_hostname }} >> /root/.ssh/known_hosts 2> /dev/null
        /usr/bin/sshpass -p'{{ password }}' ssh-copy-id root@{{ ansible_host }}
        #/usr/bin/sshpass -p'{{ password }}' ssh-copy-id root@{{ inventory_hostname }}

    - name: Test ssh
      shell: hostname

Run:

$ ansible-playbook k8s-conn-cfg.yaml
Generate RSA or not(Yes/No)? [no]: yes
input your login password? [hello123]:

PLAY [step02_conn_cfg] **********************************************************************************************************

TASK [Add DNS of k8s to ansible] ************************************************************************************************
ok: [master -> localhost]
ok: [node1 -> localhost]
ok: [node2 -> localhost]

TASK [Generate RSA] *************************************************************************************************************
changed: [master -> localhost]

TASK [Configure password free login] ********************************************************************************************
changed: [node1 -> localhost]
changed: [master -> localhost]
changed: [node2 -> localhost]

TASK [Test ssh] *****************************************************************************************************************
changed: [master]
changed: [node1]
changed: [node2]

PLAY RECAP **********************************************************************************************************************
master : ok=4 changed=3 unreachable=0 failed=0 skipped=0 rescued=0 ignored=0
node1 : ok=3 changed=2 unreachable=0 failed=0 skipped=0 rescued=0 ignored=0
node2 : ok=3 changed=2 unreachable=0 failed=0 skipped=0 rescued=0 ignored=0
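
Once the keys have been copied, passwordless login can be verified with a minimal follow-up play; this is a sketch added for illustration and not part of the original article. It only needs the ping module, which requires no arguments:

- name: verify_passwordless_login
  hosts: k8s_cluster
  gather_facts: no
  tasks:
    - name: Ping over SSH with the freshly copied key
      ping:

If the play succeeds without prompting for a password, the key-based login is working.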

Configure DNS resolution inside the k8s cluster: k8s-hosts-cfg.yaml

  • Set the hostname of each node
  • Add entries for every node to each other's /etc/hosts
- name: step03_cfg_host
  hosts: k8s_cluster
  gather_facts: no
  tasks:
    - name: set hostname
      hostname:
        name: "{{ inventory_hostname }}"
        use: systemd

    - name: Add dns to each other
      lineinfile:
        path: /etc/hosts
        backup: yes
        line: "{{ item.value.ansible_host }} {{ item.key }}"
      loop: "{{ hostvars | dict2items }}"
      loop_control:
        label: "{{ item.key }} {{ item.value.ansible_host }}"
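
The `hostvars | dict2items` filter turns the per-host variables of the inventory into a list of {key, value} pairs, so every host writes one /etc/hosts line for every host in the play (including itself). A small debug play, added here purely for illustration, shows what that loop iterates over:

- name: show_hostvars_items
  hosts: k8s_cluster
  gather_facts: no
  run_once: true
  tasks:
    - name: Print one entry per inventory host
      debug:
        msg: "{{ item.key }} -> {{ item.value.ansible_host }}"
      loop: "{{ hostvars | dict2items }}"
      loop_control:
        label: "{{ item.key }}"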

Run:

$ ansible-playbook k8s-hosts-cfg.yaml

PLAY [step03_cfg_host] **********************************************************************************************************

TASK [set hostname] *************************************************************************************************************
ok: [master]
ok: [node1]
ok: [node2]

TASK [Add dns to each other] ****************************************************************************************************
ok: [node2] => (item=node1 192.168.175.141)
ok: [master] => (item=node1 192.168.175.141)
ok: [node1] => (item=node1 192.168.175.141)
ok: [node2] => (item=node2 192.168.175.142)
ok: [master] => (item=node2 192.168.175.142)
ok: [node1] => (item=node2 192.168.175.142)
ok: [node2] => (item=master 192.168.175.140)
ok: [master] => (item=master 192.168.175.140)
ok: [node1] => (item=master 192.168.175.140)

PLAY RECAP **********************************************************************************************************************
master : ok=2 changed=0 unreachable=0 failed=0 skipped=0 rescued=0 ignored=0
node1 : ok=2 changed=0 unreachable=0 failed=0 skipped=0 rescued=0 ignored=0
node2 : ok=2 changed=0 unreachable=0 failed=0 skipped=0 rescued=0 ignored=0

Configure yum repositories: k8s-yum-cfg.yaml

- name: step04_yum_cfg
  hosts: k8s_cluster
  gather_facts: no
  tasks:
    - name: Create back-up directory
      file:
        path: /etc/yum.repos.d/org/
        state: directory

    - name: Back-up old Yum files
      shell:
        cmd: mv -f /etc/yum.repos.d/*.repo /etc/yum.repos.d/org/
        removes: /etc/yum.repos.d/org/

    - name: Add new Yum files
      copy:
        src: ./files_yum/
        dest: /etc/yum.repos.d/

    - name: Check yum.repos.d
      shell:
        cmd: ls /etc/yum.repos.d/*
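
The repo files under ./files_yum/ are not listed in the article. As a sketch only, a Kubernetes repository entry could also be created with the yum_repository module instead of copying files; the Aliyun mirror URL below is an assumption, not taken from the original setup:

- name: add_kubernetes_repo_sketch
  hosts: k8s_cluster
  gather_facts: no
  tasks:
    - name: Add a Kubernetes yum repository (mirror URL is an assumption)
      yum_repository:
        name: kubernetes
        description: Kubernetes packages
        baseurl: https://mirrors.aliyun.com/kubernetes/yum/repos/kubernetes-el7-x86_64/
        gpgcheck: no
        enabled: yes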

Time synchronization: k8s-time-sync.yaml

- name: step05_time_sync
  hosts: k8s_cluster
  gather_facts: no
  tasks:
    - name: Start chronyd.service
      systemd:
        name: chronyd.service
        state: started
        enabled: yes

    - name: Modify time zone & clock
      shell: |
        cp -f /usr/share/zoneinfo/Asia/Shanghai /etc/localtime
        clock -w
        hwclock -w

    - name: Check time now
      command: date
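
To confirm that chronyd is actually synchronizing, a quick check play such as the following can be appended; this is a sketch for illustration, not part of the original playbook:

- name: verify_time_sync
  hosts: k8s_cluster
  gather_facts: no
  tasks:
    - name: Show chrony sources and sync state
      command: chronyc sources -v
      changed_when: false
      register: chrony_out

    - name: Print the result
      debug:
        var: chrony_out.stdout_lines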

Disable the iptables, firewalld and NetworkManager services: k8s-net-service.yaml

- name: step06_net_service
  hosts: k8s_cluster
  gather_facts: no
  tasks:
    - name: Stop some services for net
      systemd:
        name: "{{ item }}"
        state: stopped
        enabled: no
      loop:
        - firewalld
        - iptables
        - NetworkManager

Run:

$ ansible-playbook -v k8s-net-service.yaml
... ...
failed: [master] (item=iptables) => {
"ansible_loop_var": "item",
"changed": false,
"item": "iptables"
}

MSG:

Could not find the requested service iptables: host

... ...

PLAY RECAP **********************************************************************************************************************
master : ok=0 changed=0 unreachable=0 failed=1 skipped=0 rescued=0 ignored=0
node1 : ok=0 changed=0 unreachable=0 failed=1 skipped=0 rescued=0 ignored=0
node2 : ok=0 changed=0 unreachable=0 failed=1 skipped=0 rescued=0 ignored=0
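
The failure above simply means that no iptables service unit is installed on these CentOS 7 hosts. One way to tolerate that (a sketch, not the author's fix) is to gather service facts first and only stop the units that actually exist:

- name: step06_net_service_tolerant
  hosts: k8s_cluster
  gather_facts: no
  tasks:
    - name: Collect the list of installed services
      service_facts:

    - name: Stop network-related services when they are present
      systemd:
        name: "{{ item }}"
        state: stopped
        enabled: no
      loop:
        - firewalld
        - iptables
        - NetworkManager
      when: (item + '.service') in ansible_facts.services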

Disable SELinux and swap: k8s-SE-swap-disable.yaml

- name: step07_SE_swap_disable
  hosts: k8s_cluster
  gather_facts: no
  tasks:
    - name: SElinux disabled
      lineinfile:
        path: /etc/selinux/config
        line: SELINUX=disabled
        regexp: ^SELINUX=
        state: present
        backup: yes

    - name: Swap disabled
      lineinfile:
        path: /etc/fstab
        line: '#\1'
        regexp: '(^/dev/mapper/centos-swap.*$)'
        backrefs: yes
        state: present
        backup: yes
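
The two lineinfile edits only take effect after a reboot. If an immediate effect is wanted as well, something like the following could be appended; this is a sketch, and whether it is acceptable to switch SELinux to permissive and turn swap off on the running hosts is an assumption:

- name: apply_se_swap_now
  hosts: k8s_cluster
  gather_facts: no
  tasks:
    - name: Put SELinux into permissive mode for the running system
      command: setenforce 0
      ignore_errors: yes

    - name: Turn swap off immediately
      command: swapoff -a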

Adjust kernel parameters: k8s-kernel-cfg.yaml

- name: step08_kernel_cfg
  hosts: k8s_cluster
  gather_facts: no
  tasks:
    - name: Create /etc/sysctl.d/kubernetes.conf
      copy:
        content: ''
        dest: /etc/sysctl.d/kubernetes.conf
        force: yes

    - name: Cfg bridge and ip_forward
      lineinfile:
        path: /etc/sysctl.d/kubernetes.conf
        line: "{{ item }}"
        state: present
      loop:
        - 'net.bridge.bridge-nf-call-ip6tables = 1'
        - 'net.bridge.bridge-nf-call-iptables = 1'
        - 'net.ipv4.ip_forward = 1'

    - name: Load cfg
      shell:
        cmd: |
          sysctl -p
          modprobe br_netfilter
        removes: /etc/sysctl.d/kubernetes.conf

    - name: Check cfg
      shell:
        cmd: '[ $(lsmod | grep br_netfilter | wc -l) -ge 2 ] && exit 0 || exit 3'

Run:

$ ansible-playbook -v k8s-kernel-cfg.yaml --step

TASK [Check cfg] ****************************************************************************************************************
changed: [master] => {
"changed": true,
"cmd": "[ $(lsmod | grep br_netfilter | wc -l) -ge 2 ] && exit 0 || exit 3",
"delta": "0:00:00.011574",
"end": "2022-02-27 04:26:01.332896",
"rc": 0,
"start": "2022-02-27 04:26:01.321322"
}
changed: [node2] => {
"changed": true,
"cmd": "[ $(lsmod | grep br_netfilter | wc -l) -ge 2 ] && exit 0 || exit 3",
"delta": "0:00:00.016331",
"end": "2022-02-27 04:26:01.351208",
"rc": 0,
"start": "2022-02-27 04:26:01.334877"
}
changed: [node1] => {
"changed": true,
"cmd": "[ $(lsmod | grep br_netfilter | wc -l) -ge 2 ] && exit 0 || exit 3",
"delta": "0:00:00.016923",
"end": "2022-02-27 04:26:01.355983",
"rc": 0,
"start": "2022-02-27 04:26:01.339060"
}

PLAY RECAP **********************************************************************************************************************
master : ok=4 changed=4 unreachable=0 failed=0 skipped=0 rescued=0 ignored=0
node1 : ok=4 changed=4 unreachable=0 failed=0 skipped=0 rescued=0 ignored=0
node2 : ok=4 changed=4 unreachable=0 failed=0 skipped=0 rescued=0 ignored=0
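
Note that `sysctl -p` with no file argument only reloads /etc/sysctl.conf. The same result can be reached with the modprobe and sysctl modules, which write /etc/sysctl.d/kubernetes.conf and apply the values in one step. This is an alternative sketch, not the playbook used above:

- name: kernel_cfg_with_modules
  hosts: k8s_cluster
  gather_facts: no
  tasks:
    - name: Make sure br_netfilter is loaded
      modprobe:
        name: br_netfilter
        state: present

    - name: Set bridge and ip_forward parameters
      sysctl:
        name: "{{ item }}"
        value: '1'
        sysctl_file: /etc/sysctl.d/kubernetes.conf
        reload: yes
      loop:
        - net.bridge.bridge-nf-call-ip6tables
        - net.bridge.bridge-nf-call-iptables
        - net.ipv4.ip_forward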

Configure IPVS: k8s-ipvs-cfg.yaml

- name: step09_ipvs_cfg
  hosts: k8s_cluster
  gather_facts: no
  tasks:
    - name: Install ipset and ipvsadm
      yum:
        name: "{{ item }}"
        state: present
      loop:
        - ipset
        - ipvsadm

    - name: Load modules
      shell: |
        modprobe -- ip_vs
        modprobe -- ip_vs_rr
        modprobe -- ip_vs_wrr
        modprobe -- ip_vs_sh
        modprobe -- nf_conntrack_ipv4

    - name: Check cfg
      shell:
        cmd: '[ $(lsmod | grep -e ip_vs -e nf_conntrack_ipv4 | wc -l) -ge 2 ] && exit 0 || exit 3'
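These modprobe calls do not survive a reboot. A sketch that additionally persists the modules via /etc/modules-load.d (the file name is an assumption, not from the article):

- name: ipvs_modules_persistent
  hosts: k8s_cluster
  gather_facts: no
  tasks:
    - name: Persist the IPVS kernel modules across reboots
      copy:
        dest: /etc/modules-load.d/ipvs.conf
        content: |
          ip_vs
          ip_vs_rr
          ip_vs_wrr
          ip_vs_sh
          nf_conntrack_ipv4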

Install docker: k8s-docker-install.yaml

- name: step10_docker_install
  hosts: k8s_cluster
  gather_facts: no
  tasks:
    - name: Install docker-ce
      yum:
        name: docker-ce-18.06.3.ce-3.el7
        state: present

    - name: Cfg docker
      copy:
        src: ./files_docker/daemon.json
        dest: /etc/docker/

    - name: Start docker
      systemd:
        name: docker.service
        state: started
        enabled: yes

    - name: Check docker version
      shell:
        cmd: docker --version
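
./files_docker/daemon.json is not shown in the article. A plausible minimal content, written inline instead of copied, might look like the following; the cgroup driver and registry mirror are assumptions, not the author's actual file:

- name: docker_daemon_json_sketch
  hosts: k8s_cluster
  gather_facts: no
  tasks:
    - name: Write /etc/docker/daemon.json (contents are an assumption)
      copy:
        dest: /etc/docker/daemon.json
        content: |
          {
            "exec-opts": ["native.cgroupdriver=systemd"],
            "registry-mirrors": ["https://registry.docker-cn.com"]
          }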

Install the k8s components (kubeadm, kubelet, kubectl): k8s-install-kubepkgs.yaml

- name: step11_k8s_install_kubepkgs
  hosts: k8s_cluster
  gather_facts: no
  tasks:
    - name: Install k8s components
      yum:
        name: "{{ item }}"
        state: present
      loop:
        - kubeadm-1.17.4-0
        - kubelet-1.17.4-0
        - kubectl-1.17.4-0

    - name: Cfg k8s
      copy:
        src: ./files_k8s/kubelet
        dest: /etc/sysconfig/
        force: no
        backup: yes

    - name: Start kubelet
      systemd:
        name: kubelet.service
        state: started
        enabled: yes
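
./files_k8s/kubelet is not included in the article either. For a kubeadm-based setup it usually only sets extra kubelet arguments; the values below are assumptions shown for illustration, not the author's file:

- name: kubelet_sysconfig_sketch
  hosts: k8s_cluster
  gather_facts: no
  tasks:
    - name: Write /etc/sysconfig/kubelet (contents are an assumption)
      copy:
        dest: /etc/sysconfig/kubelet
        content: |
          KUBELET_EXTRA_ARGS="--cgroup-driver=systemd"
          KUBE_PROXY_MODE="ipvs"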

Pull the cluster images: k8s-apps-images.yaml

- name: step12_apps_images
  hosts: k8s_cluster
  gather_facts: no
  vars:
    apps:
      - kube-apiserver:v1.17.4
      - kube-controller-manager:v1.17.4
      - kube-scheduler:v1.17.4
      - kube-proxy:v1.17.4
      - pause:3.1
      - etcd:3.4.3-0
      - coredns:1.6.5
  vars_prompt:
    - name: cfg_python
      prompt: Do you need to install docker pkg for python(Yes/No)?
      default: "no"
      private: no
  tasks:
    - block:
        - name: Install python-pip
          yum:
            name: python-pip
            state: present

        - name: Install docker pkg for python
          shell:
            cmd: |
              pip install docker==4.4.4
              pip install websocket-client==0.32.0
            creates: /usr/lib/python2.7/site-packages/docker/
      when: cfg_python | bool

    - name: Pull images
      community.docker.docker_image:
        name: "registry.cn-hangzhou.aliyuncs.com/google_containers/{{ item }}"
        source: pull
      loop: "{{ apps }}"

    - name: Tag images
      community.docker.docker_image:
        name: "registry.cn-hangzhou.aliyuncs.com/google_containers/{{ item }}"
        repository: "k8s.gcr.io/{{ item }}"
        force_tag: yes
        source: local
      loop: "{{ apps }}"

    - name: Remove images for ali
      community.docker.docker_image:
        name: "registry.cn-hangzhou.aliyuncs.com/google_containers/{{ item }}"
        state: absent
      loop: "{{ apps }}"

Run:

$ ansible-playbook k8s-apps-images.yaml
Do you need to install docker pkg for python(Yes/No)? [no]:

PLAY [step12_apps_images] *******************************************************************************************************

TASK [Install python-pip] *******************************************************************************************************
skipping: [node1]
skipping: [master]
skipping: [node2]

TASK [Install docker pkg for python] ********************************************************************************************
skipping: [master]
skipping: [node1]
skipping: [node2]

TASK [Pull images] **************************************************************************************************************
changed: [node1] => (item=kube-apiserver:v1.17.4)
changed: [node2] => (item=kube-apiserver:v1.17.4)
changed: [master] => (item=kube-apiserver:v1.17.4)
changed: [node1] => (item=kube-controller-manager:v1.17.4)
changed: [master] => (item=kube-controller-manager:v1.17.4)
changed: [node1] => (item=kube-scheduler:v1.17.4)
changed: [master] => (item=kube-scheduler:v1.17.4)
changed: [node1] => (item=kube-proxy:v1.17.4)
changed: [node2] => (item=kube-controller-manager:v1.17.4)
changed: [master] => (item=kube-proxy:v1.17.4)
changed: [node1] => (item=pause:3.1)
changed: [master] => (item=pause:3.1)
changed: [node2] => (item=kube-scheduler:v1.17.4)
changed: [node1] => (item=etcd:3.4.3-0)
changed: [master] => (item=etcd:3.4.3-0)
changed: [node2] => (item=kube-proxy:v1.17.4)
changed: [node1] => (item=coredns:1.6.5)
changed: [master] => (item=coredns:1.6.5)
changed: [node2] => (item=pause:3.1)
changed: [node2] => (item=etcd:3.4.3-0)
changed: [node2] => (item=coredns:1.6.5)

TASK [Tag images] ***************************************************************************************************************
ok: [node1] => (item=kube-apiserver:v1.17.4)
ok: [master] => (item=kube-apiserver:v1.17.4)
ok: [node2] => (item=kube-apiserver:v1.17.4)
ok: [node1] => (item=kube-controller-manager:v1.17.4)
ok: [master] => (item=kube-controller-manager:v1.17.4)
ok: [node2] => (item=kube-controller-manager:v1.17.4)
ok: [master] => (item=kube-scheduler:v1.17.4)
ok: [node1] => (item=kube-scheduler:v1.17.4)
ok: [node2] => (item=kube-scheduler:v1.17.4)
ok: [master] => (item=kube-proxy:v1.17.4)
ok: [node1] => (item=kube-proxy:v1.17.4)
ok: [node2] => (item=kube-proxy:v1.17.4)
ok: [master] => (item=pause:3.1)
ok: [node1] => (item=pause:3.1)
ok: [node2] => (item=pause:3.1)
ok: [master] => (item=etcd:3.4.3-0)
ok: [node1] => (item=etcd:3.4.3-0)
ok: [node2] => (item=etcd:3.4.3-0)
ok: [master] => (item=coredns:1.6.5)
ok: [node1] => (item=coredns:1.6.5)
ok: [node2] => (item=coredns:1.6.5)

TASK [Remove images for ali] ****************************************************************************************************
changed: [master] => (item=kube-apiserver:v1.17.4)
changed: [node2] => (item=kube-apiserver:v1.17.4)
changed: [node1] => (item=kube-apiserver:v1.17.4)
changed: [master] => (item=kube-controller-manager:v1.17.4)
changed: [node1] => (item=kube-controller-manager:v1.17.4)
changed: [node2] => (item=kube-controller-manager:v1.17.4)
changed: [node1] => (item=kube-scheduler:v1.17.4)
changed: [master] => (item=kube-scheduler:v1.17.4)
changed: [node2] => (item=kube-scheduler:v1.17.4)
changed: [master] => (item=kube-proxy:v1.17.4)
changed: [node1] => (item=kube-proxy:v1.17.4)
changed: [node2] => (item=kube-proxy:v1.17.4)
changed: [node1] => (item=pause:3.1)
changed: [master] => (item=pause:3.1)
changed: [node2] => (item=pause:3.1)
changed: [master] => (item=etcd:3.4.3-0)
changed: [node1] => (item=etcd:3.4.3-0)
changed: [node2] => (item=etcd:3.4.3-0)
changed: [master] => (item=coredns:1.6.5)
changed: [node1] => (item=coredns:1.6.5)
changed: [node2] => (item=coredns:1.6.5)

PLAY RECAP **********************************************************************************************************************
master : ok=3 changed=2 unreachable=0 failed=0 skipped=2 rescued=0 ignored=0
node1 : ok=3 changed=2 unreachable=0 failed=0 skipped=2 rescued=0 ignored=0
node2 : ok=3 changed=2 unreachable=0 failed=0 skipped=2 rescued=0 ignored=0

Initialize the k8s cluster: k8s-cluster-init.yaml

- name: step13_cluster_init
  hosts: master
  gather_facts: no
  tasks:
    - block:
        - name: Kubeadm init
          shell:
            cmd:
              kubeadm init
              --apiserver-advertise-address={{ ansible_host }}
              --kubernetes-version=v1.17.4
              --service-cidr=10.96.0.0/12
              --pod-network-cidr=10.244.0.0/16
              --image-repository registry.aliyuncs.com/google_containers

        - name: Create /root/.kube
          file:
            path: /root/.kube/
            state: directory
            owner: root
            group: root

        - name: Copy /root/.kube/config
          copy:
            src: /etc/kubernetes/admin.conf
            dest: /root/.kube/config
            remote_src: yes
            backup: yes
            owner: root
            group: root

        - name: Copy kube-flannel
          copy:
            src: ./files_k8s/kube-flannel.yml
            dest: /root/
            backup: yes

        - name: Apply kube-flannel
          shell:
            cmd: kubectl apply -f /root/kube-flannel.yml

        - name: Get token
          shell:
            cmd: kubeadm token create --print-join-command
          register: join_token

        - name: debug join_token
          debug:
            var: join_token.stdout
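
The article stops after printing the join command on the master. If the worker nodes should be joined in the same playbook run, a follow-up play along these lines could be appended to k8s-cluster-init.yaml; this is a sketch, not the author's playbook, and it relies on join_token still being registered for the master host:

- name: step14_join_nodes
  hosts: "k8s_cluster:!master"
  gather_facts: no
  tasks:
    - name: Join the worker nodes to the cluster
      shell:
        cmd: "{{ hostvars['master'].join_token.stdout }}"
        creates: /etc/kubernetes/kubelet.conf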
