The most widely used Chinese analyzer for Elasticsearch is IK. This post briefly walks through its installation and a quick test.
 

1. The built-in Elasticsearch analyzer

The built-in analyzer handles Chinese poorly; you can try it yourself:

[zsz@VS-zsz ~]$ curl -XGET 'http://192.168.31.77:9200/_analyze?analyzer=standard' -d '岁月如梭'

{
    "tokens": [
        {"token": "岁", "start_offset": 0, "end_offset": 1, "type": "<IDEOGRAPHIC>", "position": 0},
        {"token": "月", "start_offset": 1, "end_offset": 2, "type": "<IDEOGRAPHIC>", "position": 1},
        {"token": "如", "start_offset": 2, "end_offset": 3, "type": "<IDEOGRAPHIC>", "position": 2},
        {"token": "梭", "start_offset": 3, "end_offset": 4, "type": "<IDEOGRAPHIC>", "position": 3}
    ]
}
[zsz@VS-zsz ~]$ curl -XGET 'http://192.168.31.77:9200/_analyze?analyzer=standard' -d 'i am an enginner'

{
    "tokens": [
        {"token": "i", "start_offset": 0, "end_offset": 1, "type": "<ALPHANUM>", "position": 0},
        {"token": "am", "start_offset": 2, "end_offset": 4, "type": "<ALPHANUM>", "position": 1},
        {"token": "an", "start_offset": 5, "end_offset": 7, "type": "<ALPHANUM>", "position": 2},
        {"token": "enginner", "start_offset": 8, "end_offset": 16, "type": "<ALPHANUM>", "position": 3}
    ]
}
As you can see, Elasticsearch's built-in Chinese analysis is poor: it simply splits the text into individual characters.
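The behavior above can be illustrated with a small sketch (plain Python, not Elasticsearch code): the standard analyzer emits one `<IDEOGRAPHIC>` token per CJK character, which is exactly the token stream shown in the response.

```python
def standard_like_tokenize(text):
    """Mimic the standard analyzer's treatment of Chinese text:
    one token per CJK ideograph, with character offsets.
    A simplified illustration only, not the Lucene implementation."""
    tokens = []
    for i, ch in enumerate(text):
        if "\u4e00" <= ch <= "\u9fff":  # CJK Unified Ideographs block
            tokens.append({
                "token": ch,
                "start_offset": i,
                "end_offset": i + 1,
                "type": "<IDEOGRAPHIC>",
                "position": len(tokens),
            })
    return tokens

tokens = standard_like_tokenize("岁月如梭")
print([t["token"] for t in tokens])  # ['岁', '月', '如', '梭']
```

Single-character tokens like these make phrase search and relevance scoring on Chinese text nearly useless, which is why a dictionary-based analyzer such as IK is needed.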
 
2. The IK Chinese analyzer
 
2.1. If you downloaded the IK source package, you need to build the plugin yourself. First install Maven on Linux:

tar zxvf apache-maven-3.0.5-bin.tar.gz
mv apache-maven-3.0.5 /usr/local/apache-maven-3.0.5
vi /etc/profile
Append the following:
export MAVEN_HOME=/usr/local/apache-maven-3.0.5

export PATH=$PATH:$MAVEN_HOME/bin

Then reload the profile and verify the installation:
source /etc/profile
mvn -v
2.2. Build the source to produce the plugin artifacts under target/:
 
mvn clean package 
 
Deploy the packaged IK plugin into Elasticsearch:
[zsz@VS-zsz ~]$ cd /home/zsz/elasticsearch-analysis-ik-1.10.0/target/releases/
[zsz@VS-zsz releases]$ mkdir /usr/local/elasticsearch-2.4.0/plugins/ik/
[zsz@VS-zsz releases]$ cp elasticsearch-analysis-ik-1.10.0.zip /usr/local/elasticsearch-2.4.0/plugins/ik/elasticsearch-analysis-ik-1.10.0.zip
[zsz@VS-zsz releases]$ unzip /usr/local/elasticsearch-2.4.0/plugins/ik/elasticsearch-analysis-ik-1.10.0.zip -d /usr/local/elasticsearch-2.4.0/plugins/ik/
[zsz@VS-zsz releases]$ cd /usr/local/elasticsearch-2.4.0/plugins/ik/
[zsz@VS-zsz ik]$ rm elasticsearch-analysis-ik-1.10.0.zip
[zsz@VS-zsz ik]$ mkdir /usr/local/elasticsearch-2.4.0/config/ik
 
Copy the IK configuration into the Elasticsearch config directory:
[zsz@VS-zsz ik]$ cp -r /home/zsz/elasticsearch-analysis-ik-1.10.0/config /usr/local/elasticsearch-2.4.0/config/ik
 
Edit the Elasticsearch configuration:
[zsz@VS-zsz ik]$ vi /usr/local/elasticsearch-2.4.0/config/elasticsearch.yml
Append the analyzer configuration at the end of the file:
index.analysis.analyzer.ik.type : "ik"
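Once the analyzer is registered, it can also be referenced per field when creating an index. A minimal sketch for ES 2.x follows; the index name `news` and field `title` are hypothetical examples, not part of the original setup:

```
curl -XPUT 'http://192.168.31.77:9200/news' -d '
{
  "mappings": {
    "article": {
      "properties": {
        "title": {"type": "string", "analyzer": "ik"}
      }
    }
  }
}'
```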
 
Start Elasticsearch:
[zsz@VS-zsz ik]$ cd  /usr/local/elasticsearch-2.4.0/
[zsz@VS-zsz elasticsearch-2.4.0]$ ./bin/elasticsearch -d
 
Test the IK analyzer:
[zsz@VS-zsz elasticsearch-2.4.0]$ curl -XGET 'http://192.168.31.77:9200/_analyze?analyzer=ik' -d '岁月如梭'
{
    "tokens": [
        {"token": "岁月如梭", "start_offset": 0, "end_offset": 4, "type": "CN_WORD", "position": 0},
        {"token": "岁月", "start_offset": 0, "end_offset": 2, "type": "CN_WORD", "position": 1},
        {"token": "如梭", "start_offset": 2, "end_offset": 4, "type": "CN_WORD", "position": 2},
        {"token": "梭", "start_offset": 3, "end_offset": 4, "type": "CN_WORD", "position": 3}
    ]
}
[zsz@VS-zsz config]$ curl -XGET 'http://192.168.31.77:9200/_analyze?analyzer=ik' -d 'elasticsearch很受欢迎的的一款拥有活跃社区开源的搜索解决方案'
{
    "tokens": [
        {"token": "elasticsearch", "start_offset": 0, "end_offset": 13, "type": "CN_WORD", "position": 0},
        {"token": "elastic", "start_offset": 0, "end_offset": 7, "type": "CN_WORD", "position": 1},
        {"token": "很受", "start_offset": 13, "end_offset": 15, "type": "CN_WORD", "position": 2},
        {"token": "受欢迎", "start_offset": 14, "end_offset": 17, "type": "CN_WORD", "position": 3},
        {"token": "欢迎", "start_offset": 15, "end_offset": 17, "type": "CN_WORD", "position": 4},
        {"token": "一款", "start_offset": 19, "end_offset": 21, "type": "CN_WORD", "position": 5},
        {"token": "一", "start_offset": 19, "end_offset": 20, "type": "TYPE_CNUM", "position": 6},
        {"token": "款", "start_offset": 20, "end_offset": 21, "type": "COUNT", "position": 7},
        {"token": "拥有", "start_offset": 21, "end_offset": 23, "type": "CN_WORD", "position": 8},
        {"token": "拥", "start_offset": 21, "end_offset": 22, "type": "CN_WORD", "position": 9},
        {"token": "有", "start_offset": 22, "end_offset": 23, "type": "CN_CHAR", "position": 10},
        {"token": "活跃", "start_offset": 23, "end_offset": 25, "type": "CN_WORD", "position": 11},
        {"token": "跃", "start_offset": 24, "end_offset": 25, "type": "CN_WORD", "position": 12},
        {"token": "社区", "start_offset": 25, "end_offset": 27, "type": "CN_WORD", "position": 13},
        {"token": "开源", "start_offset": 27, "end_offset": 29, "type": "CN_WORD", "position": 14},
        {"token": "搜索", "start_offset": 30, "end_offset": 32, "type": "CN_WORD", "position": 15},
        {"token": "索解", "start_offset": 31, "end_offset": 33, "type": "CN_WORD", "position": 16},
        {"token": "索", "start_offset": 31, "end_offset": 32, "type": "CN_WORD", "position": 17},
        {"token": "解决方案", "start_offset": 32, "end_offset": 36, "type": "CN_WORD", "position": 18},
        {"token": "解决", "start_offset": 32, "end_offset": 34, "type": "CN_WORD", "position": 19},
        {"token": "方案", "start_offset": 34, "end_offset": 36, "type": "CN_WORD", "position": 20}
    ]
}
 
As you can see, the Chinese tokenization is now far more reasonable: IK emits dictionary words at multiple granularities rather than isolated characters.
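When driving these checks from a script, the `_analyze` response is plain JSON and easy to consume. A small sketch, using an abbreviated copy of the response shown above (only the first two tokens), might look like:

```python
import json

# Abbreviated _analyze response for '岁月如梭' as returned by the ik analyzer;
# each entry carries the token text, its character offsets, type, and position.
response_body = '''
{"tokens": [
  {"token": "岁月如梭", "start_offset": 0, "end_offset": 4,
   "type": "CN_WORD", "position": 0},
  {"token": "岁月", "start_offset": 0, "end_offset": 2,
   "type": "CN_WORD", "position": 1}
]}
'''

tokens = [t["token"] for t in json.loads(response_body)["tokens"]]
print(tokens)  # ['岁月如梭', '岁月']
```

In practice the body would come from the HTTP response of the `_analyze` call rather than a literal string; extracting the `token` fields this way is a quick sanity check that multi-character words are being produced.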
 Original post: http://www.cnblogs.com/zhongshengzhen/p/elasticsearch_ik.html
 
