The most popular Chinese analyzer for ElasticSearch is IK. This post briefly walks through installing and testing it.
 

1. The official ElasticSearch analyzer

The built-in Chinese analysis is weak; you can see for yourself:

[zsz@VS-zsz ~]$ curl -XGET 'http://192.168.31.77:9200/_analyze?analyzer=standard' -d '岁月如梭'

{
    "tokens": [
        { "token": "岁", "start_offset": 0, "end_offset": 1, "type": "<IDEOGRAPHIC>", "position": 0 },
        { "token": "月", "start_offset": 1, "end_offset": 2, "type": "<IDEOGRAPHIC>", "position": 1 },
        { "token": "如", "start_offset": 2, "end_offset": 3, "type": "<IDEOGRAPHIC>", "position": 2 },
        { "token": "梭", "start_offset": 3, "end_offset": 4, "type": "<IDEOGRAPHIC>", "position": 3 }
    ]
}
[zsz@VS-zsz ~]$ curl -XGET 'http://192.168.31.77:9200/_analyze?analyzer=standard' -d 'i am an enginner'

{
    "tokens": [
        { "token": "i", "start_offset": 0, "end_offset": 1, "type": "<ALPHANUM>", "position": 0 },
        { "token": "am", "start_offset": 2, "end_offset": 4, "type": "<ALPHANUM>", "position": 1 },
        { "token": "an", "start_offset": 5, "end_offset": 7, "type": "<ALPHANUM>", "position": 2 },
        { "token": "enginner", "start_offset": 8, "end_offset": 16, "type": "<ALPHANUM>", "position": 3 }
    ]
}
As the output shows, ES's built-in Chinese analysis is poor: the standard analyzer splits Chinese text into isolated characters, while English is tokenized normally.
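Tip: appending pretty to the query string makes Elasticsearch format the JSON response, which makes token lists like the ones above much easier to read:

curl -XGET 'http://192.168.31.77:9200/_analyze?analyzer=standard&pretty' -d '岁月如梭'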
 
2. The IK Chinese analyzer
 
2.1 If the IK you downloaded is the source package, you need to build the plugin yourself; first install Maven on Linux:

tar zxvf apache-maven-3.0.5-bin.tar.gz
mv apache-maven-3.0.5 /usr/local/apache-maven-3.0.5
vi /etc/profile
Append:
export MAVEN_HOME=/usr/local/apache-maven-3.0.5
export PATH=$PATH:$MAVEN_HOME/bin

source /etc/profile
mvn -v
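Maven itself runs on Java, so a JDK must also be installed and visible. If mvn -v complains that it cannot find Java, export JAVA_HOME in /etc/profile as well (the JDK path below is just an example; adjust it to your installation):

export JAVA_HOME=/usr/local/jdk1.8.0_102
export PATH=$PATH:$JAVA_HOME/bin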
2.2 Build the source to produce the artifacts under target/
 
mvn clean package 
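If the build fails during the test phase, tests can be skipped with Maven's standard flag:

mvn clean package -DskipTests

Either way, the plugin zip is written to target/releases/.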
 
Deploy the packaged IK plugin into ES (note that unzip extracts into the current working directory, so cd into the plugin directory before unzipping):
[zsz@VS-zsz ~]$ cd /home/zsz/elasticsearch-analysis-ik-1.10.0/target/releases/
[zsz@VS-zsz releases]$ mkdir /usr/local/elasticsearch-2.4.0/plugins/ik/
[zsz@VS-zsz releases]$ cp elasticsearch-analysis-ik-1.10.0.zip /usr/local/elasticsearch-2.4.0/plugins/ik/elasticsearch-analysis-ik-1.10.0.zip
[zsz@VS-zsz releases]$ cd /usr/local/elasticsearch-2.4.0/plugins/ik/
[zsz@VS-zsz ik]$ unzip elasticsearch-analysis-ik-1.10.0.zip
[zsz@VS-zsz ik]$ rm elasticsearch-analysis-ik-1.10.0.zip
[zsz@VS-zsz ik]$ mkdir /usr/local/elasticsearch-2.4.0/config/ik
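At this point the plugin directory should hold the IK jars together with a plugin-descriptor.properties file (ElasticSearch 2.x refuses to load a plugin without that descriptor). A quick sanity check:

ls /usr/local/elasticsearch-2.4.0/plugins/ik/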
 
Copy IK's configuration into the ElasticSearch config directory (cp needs -r to copy a directory tree; copying the directory's contents puts the files directly under config/ik/):
[zsz@VS-zsz ik]$ cp -r /home/zsz/elasticsearch-analysis-ik-1.10.0/config/* /usr/local/elasticsearch-2.4.0/config/ik/
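Among the copied files is IKAnalyzer.cfg.xml, IK's own configuration, which is where custom dictionaries are hooked in via the ext_dict entry. A sketch of registering an extension dictionary (the .dic file name here is hypothetical):

<entry key="ext_dict">custom/my_words.dic</entry>

Each line of the .dic file is one word; the node needs a restart to pick up changes to a local dictionary.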
 
Update the ElasticSearch configuration:
[zsz@VS-zsz ik]$ vi /usr/local/elasticsearch-2.4.0/config/elasticsearch.yml
Append the analyzer configuration at the end:
index.analysis.analyzer.ik.type : "ik"
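IK also ships ik_max_word (finest-grained, emits every plausible word) and ik_smart (coarsest-grained) analyzers; if you want to declare them explicitly, the same convention applies (a sketch following the line above; the plugin registers them on its own as well):

index.analysis.analyzer.ik_max_word.type : "ik_max_word"
index.analysis.analyzer.ik_smart.type : "ik_smart"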
 
Start ElasticSearch:
[zsz@VS-zsz ik]$ cd  /usr/local/elasticsearch-2.4.0/
[zsz@VS-zsz elasticsearch-2.4.0]$ ./bin/elasticsearch -d
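Before testing, it is worth confirming that the node actually loaded the plugin; the _cat API lists installed plugins:

curl -XGET 'http://192.168.31.77:9200/_cat/plugins?v'

The output should list the IK analysis plugin and its version.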
 
Test the IK analyzer:
[zsz@VS-zsz elasticsearch-2.4.0]$ curl -XGET 'http://192.168.31.77:9200/_analyze?analyzer=ik' -d '岁月如梭'
{
    "tokens": [
        { "token": "岁月如梭", "start_offset": 0, "end_offset": 4, "type": "CN_WORD", "position": 0 },
        { "token": "岁月", "start_offset": 0, "end_offset": 2, "type": "CN_WORD", "position": 1 },
        { "token": "如梭", "start_offset": 2, "end_offset": 4, "type": "CN_WORD", "position": 2 },
        { "token": "梭", "start_offset": 3, "end_offset": 4, "type": "CN_WORD", "position": 3 }
    ]
}
[zsz@VS-zsz config]$ curl -XGET 'http://192.168.31.77:9200/_analyze?analyzer=ik' -d 'elasticsearch很受欢迎的的一款拥有活跃社区开源的搜索解决方案'
{
    "tokens": [
        { "token": "elasticsearch", "start_offset": 0, "end_offset": 13, "type": "CN_WORD", "position": 0 },
        { "token": "elastic", "start_offset": 0, "end_offset": 7, "type": "CN_WORD", "position": 1 },
        { "token": "很受", "start_offset": 13, "end_offset": 15, "type": "CN_WORD", "position": 2 },
        { "token": "受欢迎", "start_offset": 14, "end_offset": 17, "type": "CN_WORD", "position": 3 },
        { "token": "欢迎", "start_offset": 15, "end_offset": 17, "type": "CN_WORD", "position": 4 },
        { "token": "一款", "start_offset": 19, "end_offset": 21, "type": "CN_WORD", "position": 5 },
        { "token": "一", "start_offset": 19, "end_offset": 20, "type": "TYPE_CNUM", "position": 6 },
        { "token": "款", "start_offset": 20, "end_offset": 21, "type": "COUNT", "position": 7 },
        { "token": "拥有", "start_offset": 21, "end_offset": 23, "type": "CN_WORD", "position": 8 },
        { "token": "拥", "start_offset": 21, "end_offset": 22, "type": "CN_WORD", "position": 9 },
        { "token": "有", "start_offset": 22, "end_offset": 23, "type": "CN_CHAR", "position": 10 },
        { "token": "活跃", "start_offset": 23, "end_offset": 25, "type": "CN_WORD", "position": 11 },
        { "token": "跃", "start_offset": 24, "end_offset": 25, "type": "CN_WORD", "position": 12 },
        { "token": "社区", "start_offset": 25, "end_offset": 27, "type": "CN_WORD", "position": 13 },
        { "token": "开源", "start_offset": 27, "end_offset": 29, "type": "CN_WORD", "position": 14 },
        { "token": "搜索", "start_offset": 30, "end_offset": 32, "type": "CN_WORD", "position": 15 },
        { "token": "索解", "start_offset": 31, "end_offset": 33, "type": "CN_WORD", "position": 16 },
        { "token": "索", "start_offset": 31, "end_offset": 32, "type": "CN_WORD", "position": 17 },
        { "token": "解决方案", "start_offset": 32, "end_offset": 36, "type": "CN_WORD", "position": 18 },
        { "token": "解决", "start_offset": 32, "end_offset": 34, "type": "CN_WORD", "position": 19 },
        { "token": "方案", "start_offset": 34, "end_offset": 36, "type": "CN_WORD", "position": 20 }
    ]
}
 
As you can see, the Chinese tokenization is now far more sensible: IK emits real words such as 岁月如梭, 社区, and 解决方案 instead of isolated characters.
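To put IK to work on real data, reference the analyzer in an index mapping. A minimal sketch against ES 2.4, assuming a hypothetical index news with an article type:

curl -XPUT 'http://192.168.31.77:9200/news' -d '{
  "mappings": {
    "article": {
      "properties": {
        "title": { "type": "string", "analyzer": "ik", "search_analyzer": "ik" }
      }
    }
  }
}'

Documents indexed into news will then have title tokenized by IK at both index and search time.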
Original article: http://www.cnblogs.com/zhongshengzhen/p/elasticsearch_ik.html
 
