CRD Creation

Group denotes the API group a CRD belongs to; a single group can hold resources of several versions and kinds. Version is the CRD's version, and Kind is the resource type.

kubebuilder create api --group ship --version v1beta1 --kind Demo
kubebuilder create api --group ship --version v1 --kind Test
kubebuilder create api --group ship --version v1beta1 --kind Test2

The generated API directory structure looks like this:

[root@k8s-01 project2]# tree api/
api/
├── v1
│   ├── groupversion_info.go
│   ├── test_types.go
│   └── zz_generated.deepcopy.go
└── v1beta1
    ├── demo_types.go
    ├── groupversion_info.go
    ├── test2_types.go
    └── zz_generated.deepcopy.go

2 directories, 7 files
[root@k8s-01 project2]#
[root@k8s-01 project2]# tree .
.
├── api
│   ├── v1
│   │   ├── groupversion_info.go
│   │   ├── test_types.go
│   │   └── zz_generated.deepcopy.go
│   └── v1beta1
│       ├── demo_types.go
│       ├── groupversion_info.go
│       ├── test2_types.go
│       └── zz_generated.deepcopy.go
├── bin
│   └── controller-gen
├── config
│   ├── crd
│   │   ├── kustomization.yaml
│   │   ├── kustomizeconfig.yaml
│   │   └── patches
│   │       ├── cainjection_in_demoes.yaml
│   │       ├── cainjection_in_test2s.yaml
│   │       ├── cainjection_in_tests.yaml
│   │       ├── webhook_in_demoes.yaml
│   │       ├── webhook_in_test2s.yaml
│   │       └── webhook_in_tests.yaml
│   ├── default
│   │   ├── kustomization.yaml
│   │   ├── manager_auth_proxy_patch.yaml
│   │   └── manager_config_patch.yaml
│   ├── manager
│   │   ├── controller_manager_config.yaml
│   │   ├── kustomization.yaml
│   │   └── manager.yaml
│   ├── prometheus
│   │   ├── kustomization.yaml
│   │   └── monitor.yaml
│   ├── rbac
│   │   ├── auth_proxy_client_clusterrole.yaml
│   │   ├── auth_proxy_role_binding.yaml
│   │   ├── auth_proxy_role.yaml
│   │   ├── auth_proxy_service.yaml
│   │   ├── demo_editor_role.yaml
│   │   ├── demo_viewer_role.yaml
│   │   ├── kustomization.yaml
│   │   ├── leader_election_role_binding.yaml
│   │   ├── leader_election_role.yaml
│   │   ├── role_binding.yaml
│   │   ├── service_account.yaml
│   │   ├── test2_editor_role.yaml
│   │   ├── test2_viewer_role.yaml
│   │   ├── test_editor_role.yaml
│   │   └── test_viewer_role.yaml
│   └── samples
│       ├── ship_v1beta1_demo.yaml
│       ├── ship_v1beta1_test2.yaml
│       └── ship_v1_test.yaml
├── controllers
│   ├── demo_controller.go
│   ├── suite_test.go
│   ├── test2_controller.go
│   └── test_controller.go
├── Dockerfile
├── go.mod
├── go.sum
├── hack
│   └── boilerplate.go.txt
├── main.go
├── Makefile
├── PROJECT
└── README.md

14 directories, 54 files

Each {kind}_controller.go file defines a Reconciler object named {kind}Reconciler. Its main method is Reconcile(): you fill the blank body of this function with the logic that builds the corresponding CRD's behavior. The SetupWithManager method registers the CRD controller with the Manager object, and the Manager's Start method then runs the controller.

//+kubebuilder:rbac:groups=ship.demo.domain,resources=tests,verbs=get;list;watch;create;update;patch;delete
//+kubebuilder:rbac:groups=ship.demo.domain,resources=tests/status,verbs=get;update;patch
//+kubebuilder:rbac:groups=ship.demo.domain,resources=tests/finalizers,verbs=update

// Reconcile is part of the main kubernetes reconciliation loop which aims to
// move the current state of the cluster closer to the desired state.
// TODO(user): Modify the Reconcile function to compare the state specified by
// the Test object against the actual cluster state, and then
// perform operations to make the cluster state reflect the state specified by
// the user.
//
// For more details, check Reconcile and its Result here:
// - https://pkg.go.dev/sigs.k8s.io/controller-runtime@v0.12.1/pkg/reconcile
func (r *TestReconciler) Reconcile(ctx context.Context, req ctrl.Request) (ctrl.Result, error) {
	_ = log.FromContext(ctx)

	// TODO(user): your logic here

	return ctrl.Result{}, nil
}

// SetupWithManager sets up the controller with the Manager.
func (r *TestReconciler) SetupWithManager(mgr ctrl.Manager) error {
	return ctrl.NewControllerManagedBy(mgr).
		For(&shipv1.Test{}).
		Complete(r)
}

Manager Initialization

// See the New function in controller-runtime/pkg/manager/manager.go

Controller Initialization

The SetupWithManager method registers the CRD controller with the Manager object; the Manager's Start method then runs the controller.

// SetupWithManager sets up the controller with the Manager.
func (r *TestReconciler) SetupWithManager(mgr ctrl.Manager) error {
	return ctrl.NewControllerManagedBy(mgr).
		For(&shipv1.Test{}).
		Complete(r)
}

SetupWithManager first uses the controller-runtime package to initialize a Builder object. When the Complete method runs, the builder finishes initializing the CRD's Reconciler: Complete accepts the reconcile.Reconciler interface, so our reconciler must implement its Reconcile method. (Code excerpt:)

// pkg/builder/controller.go

// Complete builds the Application Controller.
func (blder *Builder) Complete(r reconcile.Reconciler) error {
	_, err := blder.Build(r)
	return err
}

// Build builds the Application Controller and returns the Controller it created.
func (blder *Builder) Build(r reconcile.Reconciler) (controller.Controller, error) {
	if r == nil {
		return nil, fmt.Errorf("must provide a non-nil Reconciler")
	}
	if blder.mgr == nil {
		return nil, fmt.Errorf("must provide a non-nil Manager")
	}
	if blder.forInput.err != nil {
		return nil, blder.forInput.err
	}
	// Checking the reconcile type exist or not
	if blder.forInput.object == nil {
		return nil, fmt.Errorf("must provide an object for reconciliation")
	}

	// Set the ControllerManagedBy
	if err := blder.doController(r); err != nil {
		return nil, err
	}

	// Set the Watch
	if err := blder.doWatch(); err != nil {
		return nil, err
	}

	return blder.ctrl, nil
}

The two key steps in Build are doController and doWatch. doController's core job is to construct the Controller object, which drives the CRD watch flow based on the Scheme and the Controller. In the constructed Controller, the Do field is of the Reconciler interface type, so once the Controller object is created it must be handed an implementation of that interface's method. How does the Reconciler object get tied to the Controller? During Controller initialization, the Reconciler carried in the Options parameter is assigned to the Controller's Do field. So a call to SetupWithManager not only initializes the Controller, it also registers the resources the Controller watches and supplies the CRD's required implementation, the Reconcile method.

// Controller implements controller.Controller.
type Controller struct {
	// Name is used to uniquely identify a Controller in tracing, logging and monitoring. Name is required.
	Name string

	// MaxConcurrentReconciles is the maximum number of concurrent Reconciles which can be run. Defaults to 1.
	MaxConcurrentReconciles int

	// Reconciler is a function that can be called at any time with the Name / Namespace of an object and
	// ensures that the state of the system matches the state specified in the object.
	// Defaults to the DefaultReconcileFunc.
	Do reconcile.Reconciler
}

/*
Reconciliation is level-based, meaning action isn't driven off changes in individual Events, but instead is
driven by actual cluster state read from the apiserver or a local cache.
For example if responding to a Pod Delete Event, the Request won't contain that a Pod was deleted,
instead the reconcile function observes this when reading the cluster state and seeing the Pod as missing.
*/
type Reconciler interface {
	// Reconcile performs a full reconciliation for the object referred to by the Request.
	// The Controller will requeue the Request to be processed again if an error is non-nil or
	// Result.Requeue is true, otherwise upon completion it will remove the work from the queue.
	Reconcile(context.Context, Request) (Result, error)
}
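To make the Options-to-Do wiring concrete, here is a dependency-free sketch of the pattern. The names Options, Controller, and Do mirror controller-runtime, but every type below is a simplified toy stand-in, not the real sigs.k8s.io/controller-runtime definition:

```go
package main

import (
	"context"
	"fmt"
)

// Toy stand-ins for controller-runtime types (assumption: simplified
// shapes, not the real definitions).
type Request struct{ Namespace, Name string }
type Result struct{ Requeue bool }

type Reconciler interface {
	Reconcile(ctx context.Context, req Request) (Result, error)
}

// Options plays the role of controller.Options: it carries the user's
// Reconciler into the Controller being built.
type Options struct {
	Reconciler              Reconciler
	MaxConcurrentReconciles int
}

type Controller struct {
	Name                    string
	MaxConcurrentReconciles int
	Do                      Reconciler // the user's Reconciler lands here
}

// newController mimics doController: it validates the options and copies
// Options.Reconciler into the Controller's Do field.
func newController(name string, opts Options) (*Controller, error) {
	if opts.Reconciler == nil {
		return nil, fmt.Errorf("must provide a non-nil Reconciler")
	}
	max := opts.MaxConcurrentReconciles
	if max <= 0 {
		max = 1 // controller-runtime also defaults to 1
	}
	return &Controller{Name: name, MaxConcurrentReconciles: max, Do: opts.Reconciler}, nil
}

type TestReconciler struct{}

func (r *TestReconciler) Reconcile(ctx context.Context, req Request) (Result, error) {
	fmt.Printf("reconciling %s/%s\n", req.Namespace, req.Name)
	return Result{}, nil
}

func main() {
	c, err := newController("test", Options{Reconciler: &TestReconciler{}})
	if err != nil {
		panic(err)
	}
	// The controller's work loop ultimately calls c.Do.Reconcile per request.
	c.Do.Reconcile(context.Background(), Request{Namespace: "default", Name: "test-sample"})
}
```

The point of the sketch is the single assignment into Do: after it, the work queue only ever sees the Reconciler interface, which is why your reconciler must implement Reconcile and nothing else.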

Client Initialization

When implementing a Controller, you inevitably need to create, delete, update, and query various resource types; these operations go through the Client. Reads are actually served from the local Cache, while writes go directly to the APIServer. The Client is initialized as part of the Manager:

manager, err := ctrl.NewManager(ctrl.GetConfigOrDie(), ctrl.Options{})
if err != nil {
	log.Error(err, "could not create manager")
	os.Exit(1)
}
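The read-from-cache, write-to-APIServer split can be sketched with plain maps. This is a toy model only — the real client is built from an informer cache and a REST client — but it shows the one observable consequence of the split, namely that reads can briefly lag writes until the watch stream catches up (here simulated by a manual sync):

```go
package main

import (
	"fmt"
	"sync"
)

// splitClient sketches the split-client pattern. Both "stores" are plain
// maps (assumption: stand-ins for the informer cache and the API server).
type splitClient struct {
	mu        sync.Mutex
	cache     map[string]string // filled by watch events in a real client
	apiServer map[string]string
}

// Get reads only from the local cache, like the Manager's default client.
func (c *splitClient) Get(key string) (string, bool) {
	c.mu.Lock()
	defer c.mu.Unlock()
	v, ok := c.cache[key]
	return v, ok
}

// Update writes straight to the API server; the cache catches up later.
func (c *splitClient) Update(key, value string) {
	c.mu.Lock()
	defer c.mu.Unlock()
	c.apiServer[key] = value
}

// syncCache simulates one watch-event delivery from server to cache.
func (c *splitClient) syncCache() {
	c.mu.Lock()
	defer c.mu.Unlock()
	for k, v := range c.apiServer {
		c.cache[k] = v
	}
}

func main() {
	c := &splitClient{cache: map[string]string{}, apiServer: map[string]string{}}
	c.Update("default/test-sample", "spec-v1")
	if _, ok := c.Get("default/test-sample"); !ok {
		fmt.Println("cache is stale right after a write") // reads lag writes
	}
	c.syncCache()
	v, _ := c.Get("default/test-sample")
	fmt.Println("after sync:", v)
}
```

This lag is why controller code must tolerate reading slightly stale objects from the Client right after it has written them.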

Finalizers

Finalizers is a field that comes into play at the end of every resource's lifecycle. It belongs to the Kubernetes garbage collector and acts as a deletion interceptor, letting a controller run a pre-delete callback before the resource is removed.

Finalizers encode logic that must run before an object is deleted. For example, if you create an external resource for every object of a resource type and want the associated external resource deleted whenever Kubernetes deletes the object, Finalizers is the mechanism to use. While the Finalizers field is non-empty, the resource cannot be force-deleted. Before any object can be fully removed, its Finalizers field must be empty, i.e. all resources associated with the object must already have been cleaned up.

Finalizers lives in the ObjectMeta of every resource object and is declared in Kubernetes as Finalizers []string:

type ObjectMeta struct {
	// Must be empty before the object is deleted from the registry. Each entry
	// is an identifier for the responsible component that will remove the entry
	// from the list. If the deletionTimestamp of the object is non-nil, entries
	// in this list can only be removed.
	// Finalizers may be processed and removed in any order. Order is NOT enforced
	// because it introduces significant risk of stuck finalizers.
	// finalizers is a shared field, any actor with permission can reorder it.
	// If the finalizer list is processed in order, then this can lead to a situation
	// in which the component responsible for the first finalizer in the list is
	// waiting for a signal (field value, external system, or other) produced by a
	// component responsible for a finalizer later in the list, resulting in a deadlock.
	// Without enforced ordering finalizers are free to order amongst themselves and
	// are not vulnerable to ordering changes in the list.
	// +optional
	// +patchStrategy=merge
	Finalizers []string `json:"finalizers,omitempty" patchStrategy:"merge" protobuf:"bytes,14,rep,name=finalizers"`

	// DeletionTimestamp is RFC 3339 date and time at which this resource will be deleted. This
	// field is set by the server when a graceful deletion is requested by the user, and is not
	// directly settable by a client. The resource is expected to be deleted (no longer visible
	// from resource lists, and not reachable by name) after the time in this field, once the
	// finalizers list is empty. As long as the finalizers list contains items, deletion is blocked.
	// Once the deletionTimestamp is set, this value may not be unset or be set further into the
	// future, although it may be shortened or the resource may be deleted prior to this time.
	// For example, a user may request that a pod is deleted in 30 seconds. The Kubelet will react
	// by sending a graceful termination signal to the containers in the pod. After that 30 seconds,
	// the Kubelet will send a hard termination signal (SIGKILL) to the container and after cleanup,
	// remove the pod from the API. In the presence of network partitions, this object may still
	// exist after this timestamp, until an administrator or automated process can determine the
	// resource is fully terminated.
	// If not set, graceful deletion of the object has not been requested.
	//
	// Populated by the system when a graceful deletion is requested.
	// Read-only.
	// More info: https://git.k8s.io/community/contributors/devel/sig-architecture/api-conventions.md#metadata
	// +optional
	DeletionTimestamp *Time `json:"deletionTimestamp,omitempty" protobuf:"bytes,9,opt,name=deletionTimestamp"`
}

When a resource object that carries Finalizers receives its first delete request, the API server sets the metadata.deletionTimestamp field but does not delete the resource. Once that field is set, entries in the Finalizers list can only be removed; no other change to the list is allowed.

When metadata.deletionTimestamp is non-nil, the Controller watching the object runs the handler for each of its finalizers and, once the corresponding actions complete, removes that finalizer from the list. When the Finalizers list becomes empty, every finalizer has run, and Kubernetes finally deletes the resource.

In an Operator Controller the central logic is the Reconcile method, and finalizers are implemented inside Reconcile as well:

//+kubebuilder:rbac:groups=webapp.my.domain,resources=guestbooks,verbs=get;list;watch;create;update;patch;delete
//+kubebuilder:rbac:groups=webapp.my.domain,resources=guestbooks/status,verbs=get;update;patch
//+kubebuilder:rbac:groups=webapp.my.domain,resources=guestbooks/finalizers,verbs=update

// Reconcile is part of the main kubernetes reconciliation loop which aims to
// move the current state of the cluster closer to the desired state.
//
// For more details, check Reconcile and its Result here:
// - https://pkg.go.dev/sigs.k8s.io/controller-runtime@v0.11.0/pkg/reconcile
func (r *GuestbookReconciler) Reconcile(ctx context.Context, req ctrl.Request) (ctrl.Result, error) {
	logger := log.FromContext(ctx)

	var cronJob v1beta1.CronJob
	if err := r.Get(ctx, req.NamespacedName, &cronJob); err != nil {
		logger.Error(err, "unable to fetch CronJob")
		return ctrl.Result{}, client.IgnoreNotFound(err)
	}

	// Declare the finalizer as a string; a custom finalizer identifier
	// consists of a domain name, a forward slash, and a finalizer name.
	myFinalizerName := "storage.finalizers.tutorial.kubebuilder.io"

	// A zero DeletionTimestamp means the object is not being deleted.
	if cronJob.ObjectMeta.DeletionTimestamp.IsZero() {
		// The object is live; if it does not carry our finalizer yet,
		// add it and update the object.
		if !containsString(cronJob.ObjectMeta.Finalizers, myFinalizerName) {
			cronJob.ObjectMeta.Finalizers = append(cronJob.ObjectMeta.Finalizers, myFinalizerName)
			if err := r.Update(ctx, &cronJob); err != nil {
				return ctrl.Result{}, err
			}
		}
	} else {
		// A non-zero DeletionTimestamp means the object is being deleted.
		if containsString(cronJob.ObjectMeta.Finalizers, myFinalizerName) {
			// Our declared finalizer is present, so run the hook logic.
			if err := r.deleteExternalResources(&cronJob); err != nil {
				// If the hook fails, return the error; the Controller
				// will retry automatically.
				return ctrl.Result{}, err
			}

			// The hook succeeded: remove the finalizer so Kubernetes
			// can delete the resource.
			cronJob.ObjectMeta.Finalizers = removeString(cronJob.ObjectMeta.Finalizers, myFinalizerName)
			if err := r.Update(ctx, &cronJob); err != nil {
				return ctrl.Result{}, err
			}
		}
	}

	return ctrl.Result{}, nil
}

func containsString(slice []string, s string) bool {
	for _, item := range slice {
		if item == s {
			return true
		}
	}
	return false
}

func removeString(slice []string, s string) (result []string) {
	for _, item := range slice {
		if item == s {
			continue
		}
		result = append(result, item)
	}
	return
}

// deleteExternalResources deletes any external resources associated with
// the cronJob; it must be idempotent.
func (r *GuestbookReconciler) deleteExternalResources(cronJob *v1beta1.CronJob) error {
	return nil
}

In Kubernetes, as long as an object's ObjectMeta.Finalizers is non-empty, a Delete on that object becomes an Update that sets DeletionTimestamp. Setting DeletionTimestamp tells the Kubernetes garbage collector: from that moment on, delete the object as soon as Finalizers becomes empty.

So the usual pattern is: set Finalizers (any string) when creating the object, then handle the Update whose DeletionTimestamp is non-empty (which is really the Delete). Run all the pre-delete hooks implied by the Finalizers values (at this point the object being deleted can still be read in full from the Cache), and finally set Finalizers to empty.
