Introduction to BERT

BERT is a pre-trained language model that Google released in 2018. It broke the records on many NLP tasks, and its release was a milestone for the field. A pre-trained language model picks up a great deal of syntactic and semantic knowledge about natural language through unsupervised learning, which makes the downstream NLP tasks that follow considerably easier. When BERT is applied to a supervised downstream task, it is like a student who walks into class after thorough preparation: the results come with half the effort. Earlier word-embedding techniques such as word2vec and GloVe also let a model acquire some basic language knowledge through unsupervised training, but neither the capacity of their pre-trained models (think of it as learning ability) nor the difficulty of their unsupervised training objectives comes anywhere close to BERT's.

The Model

BERT uses a stack of 12 (or 24) layers of bidirectional Transformer encoders as its feature extractor, as shown in the figure below. In NLP, the rough ranking of feature-extraction power is Transformer > RNN > CNN; readers unfamiliar with the Transformer can refer to my earlier article on it. And BERT uses 12 of these layers at once, which pushed NLP a real step forward in the direction of deep models.

 
[Figure: bert base]
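
For reference, the two released model sizes differ only in a handful of hyperparameters. The dictionary below is just an illustrative summary of those published numbers, not code from the original post.

# Illustrative summary of the two released BERT configurations (not part of the original tutorial).
BERT_CONFIGS = {
    "bert-base":  {"layers": 12, "hidden_size": 768,  "attention_heads": 12, "params": "~110M"},
    "bert-large": {"layers": 24, "hidden_size": 1024, "attention_heads": 16, "params": "~340M"},
}
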
Pre-training Tasks

To help the model build a solid command of natural language, BERT is pre-trained on the following two tasks:
1. Masked word prediction, as shown in the figure below:
15% of the tokens in the input text are masked at random, the corrupted text is fed to the model, and the model is asked to predict what the masked tokens were. In effect the model is doing a cloze test, and one whose candidate answers span the entire vocabulary. Think of the cloze questions on an exam: even with only four options you cannot always get them right. This task therefore teaches the model a great deal of syntax and even semantics (a minimal sketch of the masking step follows the figure).

 
[Figure: mask word prediction]
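
To make the masking step concrete, here is a minimal sketch of how roughly 15% of the tokens could be selected and replaced with a [MASK] symbol. It only illustrates the corruption step; the real BERT recipe additionally replaces some selected tokens with random words or leaves them unchanged, and this helper is an illustration rather than BERT's actual preprocessing code.

import random

def mask_tokens(tokens, mask_rate=0.15, mask_token="[MASK]"):
    """Randomly mask ~15% of the tokens; return the corrupted sequence and the prediction targets."""
    corrupted, targets = [], {}
    for i, tok in enumerate(tokens):
        if random.random() < mask_rate:
            corrupted.append(mask_token)
            targets[i] = tok      # the model must recover the original token at this position
        else:
            corrupted.append(tok)
    return corrupted, targets

print(mask_tokens(["我", "爱", "荆", "州"]))   # output varies per run, e.g. (['我', '[MASK]', '荆', '州'], {1: '爱'})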

2. Next sentence prediction
The task is illustrated in the figure below: the model is given two sentences A and B and must judge whether B is the sentence that follows A. It is meant to teach the model relationships between sentences and to deepen its understanding of natural language one step further (a sketch of how the sentence pair is packed into a single input follows the figure).

 
[Figure: next sentence prediction]
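
For intuition, the sentence pair is packed into one input sequence with special tokens and segment ids, roughly as sketched below. This is a simplified illustration: real preprocessing also handles WordPiece tokenization, truncation and padding, and the example sentences are made up.

def pack_sentence_pair(tokens_a, tokens_b):
    """Build the token and segment-id sequences BERT sees for next sentence prediction."""
    tokens = ["[CLS]"] + tokens_a + ["[SEP]"] + tokens_b + ["[SEP]"]
    segment_ids = [0] * (len(tokens_a) + 2) + [1] * (len(tokens_b) + 1)
    return tokens, segment_ids

tokens, segment_ids = pack_sentence_pair(list("我爱荆州"), list("这里风景很好"))
# label = 1 if sentence B really follows sentence A in the corpus, otherwise 0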

These two rather demanding pre-training tasks give the model a fairly deep understanding of natural language before it ever sees a downstream task, and that knowledge is enormously helpful for downstream NLP. Of course, getting a model to acquire this knowledge through pre-training costs a huge corpus, a great deal of compute and a great deal of time. But it only has to be trained once and can then be reused forever, and work that pays off this way is well worth doing.

NER with BERT in Practice

Let me first introduce the kashgari framework (see its GitHub repository for details). Its author wants people to be able to call fancy NLP techniques conveniently and run experiments quickly. kashgari wraps a BERT embedding module, an LSTM-CRF named-entity-recognition model, and several classic text-classification networks. Using this framework, I completed a BERT-based NER experiment on my own dataset in about five minutes.

Loading the Data

# Read the raw training file and split it into sentences of (character, label) pairs.
with open("train_data", "rb") as f:
    data = f.read().decode("utf-8")

train_data = data.split("\n\n")                                        # blank lines separate sentences
train_data = [sentence.split("\n") for sentence in train_data]         # one line per character
train_data = [[line.split() for line in sentence] for sentence in train_data]   # line -> [char, label]
train_data.pop()                                                       # drop the trailing empty entry
Data Preprocessing

# Split each sentence into its character sequence (train_x) and its label sequence (train_y).
train_x = [[token[0] for token in sen] for sen in train_data]
train_y = [[token[1] for token in sen] for sen in train_data]

Here both train_x and train_y are lists:
train_x: [char_seq1, char_seq2, char_seq3, ...]
train_y: [label_seq1, label_seq2, label_seq3, ...]
where char_seq1 is ["我","爱","荆","州"]
and the corresponding label_seq1 is ["O","O","B_LOC","I_LOC"].
All the preprocessing you need is to pair each character with one label. Convenient, isn't it? kashgari already wraps the modules that turn text and labels into indices and vectors, so you no longer have to worry about that step. One thing to stress: the Chinese BERT model open-sourced by Google takes character-level input, so the preprocessing step must split the text into individual characters.
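
For reference, the parsing code above implies that the train_data file is plain text with one character and its label per line, separated by whitespace, and a blank line between sentences, roughly like this (shown for the example sentence above; further sentences follow in the same way):

我 O
爱 O
荆 B_LOC
州 I_LOC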

Loading BERT

The three lines below are all it takes to load the BERT model.

from kashgari.embeddings import BERTEmbedding
from kashgari.tasks.seq_labeling import BLSTMCRFModel

# "bert-base-chinese" selects the pre-trained Chinese BERT; 200 is the maximum sequence length.
embedding = BERTEmbedding("bert-base-chinese", 200)

When this runs, the code automatically downloads the pre-trained parameters from where the BERT model is hosted. Google has already pre-trained BERT for us, so all we need to do is use it for the downstream task.

 
[Figure: download_bert_chinese]
Building and Training the Model

Use the LSTM+CRF model wrapped by kashgari, feed it the data, set the batch size, and training is under way. Doesn't the whole process feel like it takes less than five minutes? (Although if your connection is slow, downloading the pre-trained BERT model alone may take longer than that.)

# Build a BiLSTM-CRF sequence-labeling model on top of the BERT embedding and train for one epoch.
model = BLSTMCRFModel(embedding)
model.fit(train_x, train_y, epochs=1, batch_size=100)
__________________________________________________________________________________________________
Layer (type) Output Shape Param # Connected to
==================================================================================================
Input-Token (InputLayer) (None, 200) 0
__________________________________________________________________________________________________
Input-Segment (InputLayer) (None, 200) 0
__________________________________________________________________________________________________
Embedding-Token (TokenEmbedding [(None, 200, 768), ( 16226304 Input-Token[0][0]
__________________________________________________________________________________________________
Embedding-Segment (Embedding) (None, 200, 768) 1536 Input-Segment[0][0]
__________________________________________________________________________________________________
Embedding-Token-Segment (Add) (None, 200, 768) 0 Embedding-Token[0][0]
Embedding-Segment[0][0]
__________________________________________________________________________________________________
Embedding-Position (PositionEmb (None, 200, 768) 153600 Embedding-Token-Segment[0][0]
__________________________________________________________________________________________________
Embedding-Dropout (Dropout) (None, 200, 768) 0 Embedding-Position[0][0]
__________________________________________________________________________________________________
Embedding-Norm (LayerNormalizat (None, 200, 768) 1536 Embedding-Dropout[0][0]
__________________________________________________________________________________________________
Encoder-1-MultiHeadSelfAttentio (None, 200, 768) 2362368 Embedding-Norm[0][0]
__________________________________________________________________________________________________
Encoder-1-MultiHeadSelfAttentio (None, 200, 768) 0 Encoder-1-MultiHeadSelfAttention[
__________________________________________________________________________________________________
Encoder-1-MultiHeadSelfAttentio (None, 200, 768) 0 Embedding-Norm[0][0]
Encoder-1-MultiHeadSelfAttention-
__________________________________________________________________________________________________
Encoder-1-MultiHeadSelfAttentio (None, 200, 768) 1536 Encoder-1-MultiHeadSelfAttention-
__________________________________________________________________________________________________
Encoder-1-FeedForward (FeedForw (None, 200, 768) 4722432 Encoder-1-MultiHeadSelfAttention-
__________________________________________________________________________________________________
Encoder-1-FeedForward-Dropout ( (None, 200, 768) 0 Encoder-1-FeedForward[0][0]
__________________________________________________________________________________________________
Encoder-1-FeedForward-Add (Add) (None, 200, 768) 0 Encoder-1-MultiHeadSelfAttention-
Encoder-1-FeedForward-Dropout[0][
__________________________________________________________________________________________________
Encoder-1-FeedForward-Norm (Lay (None, 200, 768) 1536 Encoder-1-FeedForward-Add[0][0]
__________________________________________________________________________________________________
Encoder-2-MultiHeadSelfAttentio (None, 200, 768) 2362368 Encoder-1-FeedForward-Norm[0][0]
__________________________________________________________________________________________________
Encoder-2-MultiHeadSelfAttentio (None, 200, 768) 0 Encoder-2-MultiHeadSelfAttention[
__________________________________________________________________________________________________
Encoder-2-MultiHeadSelfAttentio (None, 200, 768) 0 Encoder-1-FeedForward-Norm[0][0]
Encoder-2-MultiHeadSelfAttention-
__________________________________________________________________________________________________
Encoder-2-MultiHeadSelfAttentio (None, 200, 768) 1536 Encoder-2-MultiHeadSelfAttention-
__________________________________________________________________________________________________
Encoder-2-FeedForward (FeedForw (None, 200, 768) 4722432 Encoder-2-MultiHeadSelfAttention-
__________________________________________________________________________________________________
Encoder-2-FeedForward-Dropout ( (None, 200, 768) 0 Encoder-2-FeedForward[0][0]
__________________________________________________________________________________________________
Encoder-2-FeedForward-Add (Add) (None, 200, 768) 0 Encoder-2-MultiHeadSelfAttention-
Encoder-2-FeedForward-Dropout[0][
__________________________________________________________________________________________________
Encoder-2-FeedForward-Norm (Lay (None, 200, 768) 1536 Encoder-2-FeedForward-Add[0][0]
__________________________________________________________________________________________________
Encoder-3-MultiHeadSelfAttentio (None, 200, 768) 2362368 Encoder-2-FeedForward-Norm[0][0]
__________________________________________________________________________________________________
Encoder-3-MultiHeadSelfAttentio (None, 200, 768) 0 Encoder-3-MultiHeadSelfAttention[
__________________________________________________________________________________________________
Encoder-3-MultiHeadSelfAttentio (None, 200, 768) 0 Encoder-2-FeedForward-Norm[0][0]
Encoder-3-MultiHeadSelfAttention-
__________________________________________________________________________________________________
Encoder-3-MultiHeadSelfAttentio (None, 200, 768) 1536 Encoder-3-MultiHeadSelfAttention-
__________________________________________________________________________________________________
Encoder-3-FeedForward (FeedForw (None, 200, 768) 4722432 Encoder-3-MultiHeadSelfAttention-
__________________________________________________________________________________________________
Encoder-3-FeedForward-Dropout ( (None, 200, 768) 0 Encoder-3-FeedForward[0][0]
__________________________________________________________________________________________________
Encoder-3-FeedForward-Add (Add) (None, 200, 768) 0 Encoder-3-MultiHeadSelfAttention-
Encoder-3-FeedForward-Dropout[0][
__________________________________________________________________________________________________
Encoder-3-FeedForward-Norm (Lay (None, 200, 768) 1536 Encoder-3-FeedForward-Add[0][0]
__________________________________________________________________________________________________
Encoder-4-MultiHeadSelfAttentio (None, 200, 768) 2362368 Encoder-3-FeedForward-Norm[0][0]
__________________________________________________________________________________________________
Encoder-4-MultiHeadSelfAttentio (None, 200, 768) 0 Encoder-4-MultiHeadSelfAttention[
__________________________________________________________________________________________________
Encoder-4-MultiHeadSelfAttentio (None, 200, 768) 0 Encoder-3-FeedForward-Norm[0][0]
Encoder-4-MultiHeadSelfAttention-
__________________________________________________________________________________________________
Encoder-4-MultiHeadSelfAttentio (None, 200, 768) 1536 Encoder-4-MultiHeadSelfAttention-
__________________________________________________________________________________________________
Encoder-4-FeedForward (FeedForw (None, 200, 768) 4722432 Encoder-4-MultiHeadSelfAttention-
__________________________________________________________________________________________________
Encoder-4-FeedForward-Dropout ( (None, 200, 768) 0 Encoder-4-FeedForward[0][0]
__________________________________________________________________________________________________
Encoder-4-FeedForward-Add (Add) (None, 200, 768) 0 Encoder-4-MultiHeadSelfAttention-
Encoder-4-FeedForward-Dropout[0][
__________________________________________________________________________________________________
Encoder-4-FeedForward-Norm (Lay (None, 200, 768) 1536 Encoder-4-FeedForward-Add[0][0]
__________________________________________________________________________________________________
Encoder-5-MultiHeadSelfAttentio (None, 200, 768) 2362368 Encoder-4-FeedForward-Norm[0][0]
__________________________________________________________________________________________________
Encoder-5-MultiHeadSelfAttentio (None, 200, 768) 0 Encoder-5-MultiHeadSelfAttention[
__________________________________________________________________________________________________
Encoder-5-MultiHeadSelfAttentio (None, 200, 768) 0 Encoder-4-FeedForward-Norm[0][0]
Encoder-5-MultiHeadSelfAttention-
__________________________________________________________________________________________________
Encoder-5-MultiHeadSelfAttentio (None, 200, 768) 1536 Encoder-5-MultiHeadSelfAttention-
__________________________________________________________________________________________________
Encoder-5-FeedForward (FeedForw (None, 200, 768) 4722432 Encoder-5-MultiHeadSelfAttention-
__________________________________________________________________________________________________
Encoder-5-FeedForward-Dropout ( (None, 200, 768) 0 Encoder-5-FeedForward[0][0]
__________________________________________________________________________________________________
Encoder-5-FeedForward-Add (Add) (None, 200, 768) 0 Encoder-5-MultiHeadSelfAttention-
Encoder-5-FeedForward-Dropout[0][
__________________________________________________________________________________________________
Encoder-5-FeedForward-Norm (Lay (None, 200, 768) 1536 Encoder-5-FeedForward-Add[0][0]
__________________________________________________________________________________________________
Encoder-6-MultiHeadSelfAttentio (None, 200, 768) 2362368 Encoder-5-FeedForward-Norm[0][0]
__________________________________________________________________________________________________
Encoder-6-MultiHeadSelfAttentio (None, 200, 768) 0 Encoder-6-MultiHeadSelfAttention[
__________________________________________________________________________________________________
Encoder-6-MultiHeadSelfAttentio (None, 200, 768) 0 Encoder-5-FeedForward-Norm[0][0]
Encoder-6-MultiHeadSelfAttention-
__________________________________________________________________________________________________
Encoder-6-MultiHeadSelfAttentio (None, 200, 768) 1536 Encoder-6-MultiHeadSelfAttention-
__________________________________________________________________________________________________
Encoder-6-FeedForward (FeedForw (None, 200, 768) 4722432 Encoder-6-MultiHeadSelfAttention-
__________________________________________________________________________________________________
Encoder-6-FeedForward-Dropout ( (None, 200, 768) 0 Encoder-6-FeedForward[0][0]
__________________________________________________________________________________________________
Encoder-6-FeedForward-Add (Add) (None, 200, 768) 0 Encoder-6-MultiHeadSelfAttention-
Encoder-6-FeedForward-Dropout[0][
__________________________________________________________________________________________________
Encoder-6-FeedForward-Norm (Lay (None, 200, 768) 1536 Encoder-6-FeedForward-Add[0][0]
__________________________________________________________________________________________________
Encoder-7-MultiHeadSelfAttentio (None, 200, 768) 2362368 Encoder-6-FeedForward-Norm[0][0]
__________________________________________________________________________________________________
Encoder-7-MultiHeadSelfAttentio (None, 200, 768) 0 Encoder-7-MultiHeadSelfAttention[
__________________________________________________________________________________________________
Encoder-7-MultiHeadSelfAttentio (None, 200, 768) 0 Encoder-6-FeedForward-Norm[0][0]
Encoder-7-MultiHeadSelfAttention-
__________________________________________________________________________________________________
Encoder-7-MultiHeadSelfAttentio (None, 200, 768) 1536 Encoder-7-MultiHeadSelfAttention-
__________________________________________________________________________________________________
Encoder-7-FeedForward (FeedForw (None, 200, 768) 4722432 Encoder-7-MultiHeadSelfAttention-
__________________________________________________________________________________________________
Encoder-7-FeedForward-Dropout ( (None, 200, 768) 0 Encoder-7-FeedForward[0][0]
__________________________________________________________________________________________________
Encoder-7-FeedForward-Add (Add) (None, 200, 768) 0 Encoder-7-MultiHeadSelfAttention-
Encoder-7-FeedForward-Dropout[0][
__________________________________________________________________________________________________
Encoder-7-FeedForward-Norm (Lay (None, 200, 768) 1536 Encoder-7-FeedForward-Add[0][0]
__________________________________________________________________________________________________
Encoder-8-MultiHeadSelfAttentio (None, 200, 768) 2362368 Encoder-7-FeedForward-Norm[0][0]
__________________________________________________________________________________________________
Encoder-8-MultiHeadSelfAttentio (None, 200, 768) 0 Encoder-8-MultiHeadSelfAttention[
__________________________________________________________________________________________________
Encoder-8-MultiHeadSelfAttentio (None, 200, 768) 0 Encoder-7-FeedForward-Norm[0][0]
Encoder-8-MultiHeadSelfAttention-
__________________________________________________________________________________________________
Encoder-8-MultiHeadSelfAttentio (None, 200, 768) 1536 Encoder-8-MultiHeadSelfAttention-
__________________________________________________________________________________________________
Encoder-8-FeedForward (FeedForw (None, 200, 768) 4722432 Encoder-8-MultiHeadSelfAttention-
__________________________________________________________________________________________________
Encoder-8-FeedForward-Dropout ( (None, 200, 768) 0 Encoder-8-FeedForward[0][0]
__________________________________________________________________________________________________
Encoder-8-FeedForward-Add (Add) (None, 200, 768) 0 Encoder-8-MultiHeadSelfAttention-
Encoder-8-FeedForward-Dropout[0][
__________________________________________________________________________________________________
Encoder-8-FeedForward-Norm (Lay (None, 200, 768) 1536 Encoder-8-FeedForward-Add[0][0]
__________________________________________________________________________________________________
Encoder-9-MultiHeadSelfAttentio (None, 200, 768) 2362368 Encoder-8-FeedForward-Norm[0][0]
__________________________________________________________________________________________________
Encoder-9-MultiHeadSelfAttentio (None, 200, 768) 0 Encoder-9-MultiHeadSelfAttention[
__________________________________________________________________________________________________
Encoder-9-MultiHeadSelfAttentio (None, 200, 768) 0 Encoder-8-FeedForward-Norm[0][0]
Encoder-9-MultiHeadSelfAttention-
__________________________________________________________________________________________________
Encoder-9-MultiHeadSelfAttentio (None, 200, 768) 1536 Encoder-9-MultiHeadSelfAttention-
__________________________________________________________________________________________________
Encoder-9-FeedForward (FeedForw (None, 200, 768) 4722432 Encoder-9-MultiHeadSelfAttention-
__________________________________________________________________________________________________
Encoder-9-FeedForward-Dropout ( (None, 200, 768) 0 Encoder-9-FeedForward[0][0]
__________________________________________________________________________________________________
Encoder-9-FeedForward-Add (Add) (None, 200, 768) 0 Encoder-9-MultiHeadSelfAttention-
Encoder-9-FeedForward-Dropout[0][
__________________________________________________________________________________________________
Encoder-9-FeedForward-Norm (Lay (None, 200, 768) 1536 Encoder-9-FeedForward-Add[0][0]
__________________________________________________________________________________________________
Encoder-10-MultiHeadSelfAttenti (None, 200, 768) 2362368 Encoder-9-FeedForward-Norm[0][0]
__________________________________________________________________________________________________
Encoder-10-MultiHeadSelfAttenti (None, 200, 768) 0 Encoder-10-MultiHeadSelfAttention
__________________________________________________________________________________________________
Encoder-10-MultiHeadSelfAttenti (None, 200, 768) 0 Encoder-9-FeedForward-Norm[0][0]
Encoder-10-MultiHeadSelfAttention
__________________________________________________________________________________________________
Encoder-10-MultiHeadSelfAttenti (None, 200, 768) 1536 Encoder-10-MultiHeadSelfAttention
__________________________________________________________________________________________________
Encoder-10-FeedForward (FeedFor (None, 200, 768) 4722432 Encoder-10-MultiHeadSelfAttention
__________________________________________________________________________________________________
Encoder-10-FeedForward-Dropout (None, 200, 768) 0 Encoder-10-FeedForward[0][0]
__________________________________________________________________________________________________
Encoder-10-FeedForward-Add (Add (None, 200, 768) 0 Encoder-10-MultiHeadSelfAttention
Encoder-10-FeedForward-Dropout[0]
__________________________________________________________________________________________________
Encoder-10-FeedForward-Norm (La (None, 200, 768) 1536 Encoder-10-FeedForward-Add[0][0]
__________________________________________________________________________________________________
Encoder-11-MultiHeadSelfAttenti (None, 200, 768) 2362368 Encoder-10-FeedForward-Norm[0][0]
__________________________________________________________________________________________________
Encoder-11-MultiHeadSelfAttenti (None, 200, 768) 0 Encoder-11-MultiHeadSelfAttention
__________________________________________________________________________________________________
Encoder-11-MultiHeadSelfAttenti (None, 200, 768) 0 Encoder-10-FeedForward-Norm[0][0]
Encoder-11-MultiHeadSelfAttention
__________________________________________________________________________________________________
Encoder-11-MultiHeadSelfAttenti (None, 200, 768) 1536 Encoder-11-MultiHeadSelfAttention
__________________________________________________________________________________________________
Encoder-11-FeedForward (FeedFor (None, 200, 768) 4722432 Encoder-11-MultiHeadSelfAttention
__________________________________________________________________________________________________
Encoder-11-FeedForward-Dropout (None, 200, 768) 0 Encoder-11-FeedForward[0][0]
__________________________________________________________________________________________________
Encoder-11-FeedForward-Add (Add (None, 200, 768) 0 Encoder-11-MultiHeadSelfAttention
Encoder-11-FeedForward-Dropout[0]
__________________________________________________________________________________________________
Encoder-11-FeedForward-Norm (La (None, 200, 768) 1536 Encoder-11-FeedForward-Add[0][0]
__________________________________________________________________________________________________
Encoder-12-MultiHeadSelfAttenti (None, 200, 768) 2362368 Encoder-11-FeedForward-Norm[0][0]
__________________________________________________________________________________________________
Encoder-12-MultiHeadSelfAttenti (None, 200, 768) 0 Encoder-12-MultiHeadSelfAttention
__________________________________________________________________________________________________
Encoder-12-MultiHeadSelfAttenti (None, 200, 768) 0 Encoder-11-FeedForward-Norm[0][0]
Encoder-12-MultiHeadSelfAttention
__________________________________________________________________________________________________
Encoder-12-MultiHeadSelfAttenti (None, 200, 768) 1536 Encoder-12-MultiHeadSelfAttention
__________________________________________________________________________________________________
Encoder-12-FeedForward (FeedFor (None, 200, 768) 4722432 Encoder-12-MultiHeadSelfAttention
__________________________________________________________________________________________________
Encoder-12-FeedForward-Dropout (None, 200, 768) 0 Encoder-12-FeedForward[0][0]
__________________________________________________________________________________________________
Encoder-12-FeedForward-Add (Add (None, 200, 768) 0 Encoder-12-MultiHeadSelfAttention
Encoder-12-FeedForward-Dropout[0]
__________________________________________________________________________________________________
Encoder-12-FeedForward-Norm (La (None, 200, 768) 1536 Encoder-12-FeedForward-Add[0][0]
__________________________________________________________________________________________________
non_masking_layer_4 (NonMasking (None, 200, 768) 0 Encoder-12-FeedForward-Norm[0][0]
__________________________________________________________________________________________________
bidirectional_3 (Bidirectional) (None, 200, 512) 2099200 non_masking_layer_4[0][0]
__________________________________________________________________________________________________
dense_3 (Dense) (None, 200, 128) 65664 bidirectional_3[0][0]
__________________________________________________________________________________________________
crf_3 (CRF) (None, 200, 10) 1410 dense_3[0][0]
==================================================================================================
Total params: 103,603,714
Trainable params: 2,166,274
Non-trainable params: 101,437,440
__________________________________________________________________________________________________
Epoch 1/1
506/506 [==============================] - 960s 2s/step - loss: 0.0377 - crf_accuracy: 0.9892 - acc: 0.7759

In the model summary you can clearly see BERT's 12-layer Transformer structure and its parameter count. Note that only about 2.17M of the 103.6M parameters are trainable: the BERT weights are frozen here, and just the BiLSTM, dense and CRF layers on top are trained. The author of kashgari has run experiments comparing BERT-based NER with other NER approaches and found that BERT does indeed outperform them. Pre-trained language models really do show astonishing power.
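
Once training finishes, the model can be used to tag new text. The sketch below assumes the kashgari 0.x sequence-labeling API, in which models expose predict() and save(); the method names and signatures may differ in newer versions, so treat this as an illustration rather than guaranteed code.

sentence = list("我爱荆州")                  # character-level input, matching the training data
tags = model.predict([sentence])             # e.g. [['O', 'O', 'B_LOC', 'I_LOC']]
print(list(zip(sentence, tags[0])))

model.save("./bert_blstm_crf_model")         # persist the fine-tuned model for later use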

Conclusion

BERT plays the role that ImageNet plays in computer vision: through difficult pre-training tasks and a powerful network it learns domain knowledge in advance, and then moves on to downstream tasks. Compared with throwing a naive model directly at a deep-learning task, BERT is like the kid in class who got a head start; it is bound to be well ahead of the others. Now that you have a feel for BERT's power, go and try it out.

References

https://jalammar.github.io/illustrated-bert/
https://eliyar.biz/nlp_chinese_bert_ner/

Author: 王鹏你妹
Link: https://www.jianshu.com/p/1d6689851622
Source: 简书 (Jianshu)
Copyright belongs to the author. For reproduction in any form, please contact the author for authorization and cite the source.
