http://blog.sina.com.cn/s/blog_61d2047c010195mo.html
 
 
The various query types in Lucene
1. TermQuery 
     The simplest Query type: it matches documents whose given field contains the given term value.
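A minimal sketch (the "city" field, the dir Directory, and the lowercase term value follow the sample class at the end of this post; StandardAnalyzer lowercases tokens at index time, and TermQuery does not analyze its input):
IndexSearcher searcher = new IndexSearcher(dir);
// TermQuery matches the indexed token exactly, so use the lowercased form
Query query = new TermQuery(new Term("city", "venice"));
TopDocs hits = searcher.search(query, 20);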
 
2. TermRangeQuery 
     Because terms are stored in the index in lexicographic (dictionary) order, TermRangeQuery can match every term that falls within a given range.
For example:
Query query = new TermRangeQuery("city", "aa", "am", true, true);
TopDocs hits = searcher.search(query, 20);
 
This matches all terms that sort between "aa" and "am" (aa*, ab*, and so on). The two trailing boolean arguments control whether the bounds "aa" and "am" are themselves included.
 
3. NumericRangeQuery 
Matches a range of numeric values. The target field must have been indexed as a NumericField.
Query query = NumericRangeQuery.newIntRange("intID", from, to, true,true);
TopDocs hits = searcher.search(query, 20);
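Note that the range query only matches if the field was indexed as a NumericField in the first place. A minimal indexing-side sketch, assuming an open IndexWriter named writer and the "intID" field used below:
Document doc = new Document();
// trie-encoded numeric field; a plain text field cannot be found by NumericRangeQuery
NumericField nfield = new NumericField("intID");
nfield.setIntValue(42);
doc.add(nfield);
writer.addDocument(doc);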
 
4. PrefixQuery (prefix query)
      Matches every term that starts with a given prefix.
     For example, with prefix = "bri", both "bridge" and "bright" match.
Term t = new Term(field, prefix);
Query query = new PrefixQuery(t);
TopDocs hits = searcher.search(query, 20);
 
5. BooleanQuery (combining multiple queries)
Term t = new Term("contents", "bri");
Query query1 = new PrefixQuery(t);
Query query2 = NumericRangeQuery.newIntRange("intID", 1, 3, true, true);
 
// create a boolean query
BooleanQuery query = new BooleanQuery();
query.add(query1, BooleanClause.Occur.SHOULD);
query.add(query2, BooleanClause.Occur.MUST);
 
TopDocs hits = searcher.search(query, 20);
Note that BooleanClause.Occur.MUST means AND, BooleanClause.Occur.SHOULD means OR, and BooleanClause.Occur.MUST_NOT means NOT.
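A small sketch of MUST_NOT, using the "contents" field from the sample class below and an already-open searcher:
// documents that contain "bridge" but do not contain "venice"
BooleanQuery notQuery = new BooleanQuery();
notQuery.add(new TermQuery(new Term("contents", "bridge")), BooleanClause.Occur.MUST);
notQuery.add(new TermQuery(new Term("contents", "venice")), BooleanClause.Occur.MUST_NOT);
TopDocs notHits = searcher.search(notQuery, 20);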
 
6. PhraseQuery (phrase query)
     Suppose we want a single query that matches "fox quick", "quick fox", "quick brown fox", or "quick red fox".
     PhraseQuery handles this. It measures how far a document is from the exact phrase using an edit distance: the total number of substitutions, deletions, and insertions needed to turn one term sequence into the other. Each such operation counts as one slop, and setSlop sets the maximum slop allowed.
(Figure omitted: the original post included an illustration of edit distance here.)
 
For example, going from "quick fox" to "quick [xxx] fox" takes 1 slop.
Going from "fox quick" to "quick [xxx] fox" takes 3 slops: replace "fox" with "quick", replace "quick" with "fox", then insert "[xxx]", three operations in total.
     PhraseQuery query = new PhraseQuery();
 
// set max slop to 10
query.setSlop(10);
query.add(new Term("contents", "quick"));
query.add(new Term("contents", "fox"));
TopDocs hits = searcher.search(query, 20);
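To make the slop semantics concrete, here is a sketch (field name "contents" assumed): with the default slop of 0 the terms must be adjacent and in order, while a slop of 1 also tolerates one intervening term, as in "quick brown fox":
PhraseQuery exact = new PhraseQuery(); // default slop is 0: terms must be adjacent and in order
exact.add(new Term("contents", "quick"));
exact.add(new Term("contents", "fox"));

PhraseQuery sloppy = new PhraseQuery();
sloppy.setSlop(1); // accepts "quick brown fox" (1 move), still rejects "fox quick" (3 moves)
sloppy.add(new Term("contents", "quick"));
sloppy.add(new Term("contents", "fox"));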
 
7. WildcardQuery (wildcard query)
     PrefixQuery is a special case of WildcardQuery.
     * matches zero or more characters; ? matches exactly one character.
                // use wildcard "?ridg*"
WildcardQuery query = new WildcardQuery(new Term("contents", "?ridg*"));
TopDocs hits = searcher.search(query, 20);
 
8. FuzzyQuery (fuzzy query)
    Like PhraseQuery, FuzzyQuery is based on edit distance, but FuzzyQuery measures it between the characters inside a term, while PhraseQuery measures it between term positions.
 For example, FuzzyQuery query = new FuzzyQuery(new Term("contents", "Amsteedam")); can find "Amsterdam"; the edit distance between them is 1.
The snippet:
 IndexSearcher searcher = new IndexSearcher(dir);
// "Amsterdam" is similar to "Amsteedam"
FuzzyQuery query = new FuzzyQuery(new Term("contents", "Amsteedam"));
TopDocs hits = searcher.search(query, 20);
showResult(hits, searcher);
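FuzzyQuery also accepts an optional minimum-similarity threshold between 0 and 1; higher values tolerate fewer character edits. A hedged sketch (lowercase term text, since StandardAnalyzer lowercases the indexed tokens):
// only terms at least 70% similar to "amsteedam" are accepted
FuzzyQuery strictQuery = new FuzzyQuery(new Term("contents", "amsteedam"), 0.7f);
TopDocs strictHits = searcher.search(strictQuery, 20);

The complete sample class used by the snippets above follows.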
 
package charpter3;
 
import java.io.File;
import java.io.IOException;
 
import org.apache.lucene.analysis.standard.StandardAnalyzer;
import org.apache.lucene.document.Document;
import org.apache.lucene.document.Field;
import org.apache.lucene.document.Field.TermVector;
import org.apache.lucene.document.NumericField;
import org.apache.lucene.index.CorruptIndexException;
import org.apache.lucene.index.IndexReader;
import org.apache.lucene.index.IndexWriter;
import org.apache.lucene.index.Term;
import org.apache.lucene.queryParser.ParseException;
import org.apache.lucene.queryParser.QueryParser;
import org.apache.lucene.search.BooleanClause;
import org.apache.lucene.search.BooleanQuery;
import org.apache.lucene.search.FuzzyQuery;
import org.apache.lucene.search.IndexSearcher;
import org.apache.lucene.search.NumericRangeQuery;
import org.apache.lucene.search.PhraseQuery;
import org.apache.lucene.search.PrefixQuery;
import org.apache.lucene.search.Query;
import org.apache.lucene.search.ScoreDoc;
import org.apache.lucene.search.TermQuery;
import org.apache.lucene.search.TermRangeQuery;
import org.apache.lucene.search.TopDocs;
import org.apache.lucene.search.WildcardQuery;
import org.apache.lucene.store.Directory;
import org.apache.lucene.store.FSDirectory;
import org.apache.lucene.util.Version;
 
public class Querys {
private IndexWriter writer;
protected String[] ids = { "1", "2", "3" };
protected String[] unindexed = { "Netherlands", "Italy", "China" };
protected String[] unstored = { "Amsterdam has a lot of bridge",
"Venice has lots of canals", "Amsterddam bridges are a lot" };
protected String[] text = { "Amsterdam", "Venice", "Aeijing" };
 
private Directory dir = null;
private IndexReader indexReader = null;
 
public Querys(String indexDir) throws IOException {
dir = FSDirectory.open(new File(indexDir));
this.writer = new IndexWriter(dir, new StandardAnalyzer(
Version.LUCENE_36), true, IndexWriter.MaxFieldLength.UNLIMITED);
this.writer.setInfoStream(System.out);
 
// create an IndexReader instance
indexReader = IndexReader.open(dir);
}
 
 
public void addDocuments() throws CorruptIndexException, IOException {
for (int i = 0; i < ids.length; i++) {
Document doc = new Document();
 
NumericField nfield = new NumericField("intID", 10);
nfield.setIntValue(i);
doc.add(nfield);
 
doc.add(new Field("id", ids[i], Field.Store.YES,
Field.Index.NOT_ANALYZED));
doc.add(new Field("country", unindexed[i], Field.Store.YES,
Field.Index.NO));
doc.add(new Field("contents", unstored[i], Field.Store.YES,
Field.Index.ANALYZED));
doc.add(new Field("city", text[i], Field.Store.YES,
Field.Index.ANALYZED));
writer.addDocument(doc);
 
}
 
System.out.println("docs = " + writer.numDocs());
 
}
 
public void index() throws CorruptIndexException, IOException {
this.addDocuments();
this.commit();
}
 
 
public void expressionQuery() throws CorruptIndexException, IOException,
ParseException {
 
IndexSearcher searcher = new IndexSearcher(this.indexReader);
 
QueryParser parser = new QueryParser(Version.LUCENE_CURRENT,
"contents", new StandardAnalyzer(Version.LUCENE_CURRENT));

// "+bridge -Amsterdam": must contain "bridge", must not contain "Amsterdam"
Query query = parser.parse("+bridge -Amsterdam");
System.out.println("query = " + query.toString());
TopDocs hits = searcher.search(query, 20);
showResult(hits, searcher);
 
}
 
 
public void termQuery(String fieldName, String q)
throws CorruptIndexException, IOException, ParseException {
// IndexSearcher searcher = new IndexSearcher(dir);
 
// build an IndexSearcher on top of the shared IndexReader
IndexSearcher searcher = new IndexSearcher(this.indexReader);
 
Term t = new Term(fieldName, q.toLowerCase());
Query query = new TermQuery(t);
TopDocs hits = searcher.search(query, 20);
showResult(hits, searcher);
}
 
 
public void termRangeQuery(String fieldName, String q)
throws CorruptIndexException, IOException, ParseException {
IndexSearcher searcher = new IndexSearcher(dir);
 
Query query = new TermRangeQuery("city", "aa", "am", true, true);
TopDocs hits = searcher.search(query, 20);
showResult(hits, searcher);
}
 
 
public void numericRangeQuery(int from, int to)
throws CorruptIndexException, IOException, ParseException {
IndexSearcher searcher = new IndexSearcher(dir);
 
Query query = NumericRangeQuery.newIntRange("intID", from, to, true,true);
TopDocs hits = searcher.search(query, 20);
showResult(hits, searcher);
}
 
 
public void prefixQuery(String field, String prefix)
throws CorruptIndexException, IOException, ParseException {
IndexSearcher searcher = new IndexSearcher(dir);
 
Term t = new Term(field, prefix);
Query query = new PrefixQuery(t);
TopDocs hits = searcher.search(query, 20);
showResult(hits, searcher);
}
 
 
public void booleanQuery() throws CorruptIndexException, IOException,
ParseException {
IndexSearcher searcher = new IndexSearcher(dir);
 
Term t = new Term("contents", "bri");
Query query1 = new PrefixQuery(t);
 
Query query2 = NumericRangeQuery.newIntRange("intID", 1, 3, true, true);
 
// create a boolean query
BooleanQuery query = new BooleanQuery();
query.add(query1, BooleanClause.Occur.SHOULD);
query.add(query2, BooleanClause.Occur.MUST);
 
TopDocs hits = searcher.search(query, 20);
 
showResult(hits, searcher);
 
}
 
 
public void phraseQuery() throws CorruptIndexException, IOException,
ParseException {
IndexSearcher searcher = new IndexSearcher(dir);
PhraseQuery query = new PhraseQuery();
 
// set max slop to 10
query.setSlop(10);
query.add(new Term("contents", "lot"));
query.add(new Term("contents", "bridges"));
TopDocs hits = searcher.search(query, 20);
 
showResult(hits, searcher);
 
}
 
 
public void wildCardQuery() throws CorruptIndexException, IOException,
ParseException {
IndexSearcher searcher = new IndexSearcher(dir);
 
// use wildcard "?ridg*"
WildcardQuery query = new WildcardQuery(new Term("contents", "?ridg*"));
TopDocs hits = searcher.search(query, 20);
 
showResult(hits, searcher);
}
 
 
public void fuzzyQuery() throws CorruptIndexException, IOException,
ParseException {
IndexSearcher searcher = new IndexSearcher(dir);
 
// "Amsterdam" is similar to "Amsteedam"
FuzzyQuery query = new FuzzyQuery(new Term("contents", "Amsteedam"));
TopDocs hits = searcher.search(query, 20);
showResult(hits, searcher);
 
}
 
 
public void testReopen() throws ParseException, IOException {
 
IndexSearcher searcher = new IndexSearcher(this.indexReader);
 
QueryParser parser = new QueryParser(Version.LUCENE_CURRENT,
"contents", new StandardAnalyzer(Version.LUCENE_CURRENT));

// "+bridge -Amsterdam": must contain "bridge", must not contain "Amsterdam"
Query query = parser.parse("+bridge -Amsterdam");
System.out.println("query = " + query.toString());
 
TopDocs hits = searcher.search(query, 20);
 
// reopen the index so the reader sees the latest committed changes
IndexReader newReader = indexReader.reopen();
if (indexReader != newReader) {
indexReader = newReader;
 
// if the reader has changed, a new searcher must be constructed on it
searcher.close();
searcher = null;
searcher = new IndexSearcher(this.indexReader);
}
 
hits = searcher.search(query, 20);
 
showResult(hits, searcher);
 
}
 
 
public void testTopDocs() throws CorruptIndexException, IOException {
IndexSearcher searcher = new IndexSearcher(dir);
 
// "Amsterdam" is similar to "Amsteedam"
FuzzyQuery query = new FuzzyQuery(new Term("contents", "Amsteedam"));
TopDocs hits = searcher.search(query, 20);
 
System.out.println("search result:");
 
for (ScoreDoc doc : hits.scoreDocs) {
// fetch the stored document for this hit
Document d = searcher.doc(doc.doc);
System.out.println(d.get("contents"));
}
}
 
public void commit() throws CorruptIndexException, IOException {
this.writer.commit();
}
 
 
public void showResult(TopDocs hits, IndexSearcher searcher) {
 
try {
System.out.println("search result:");
 
for (ScoreDoc doc : hits.scoreDocs) {
// fetch the stored document for this hit
Document d = searcher.doc(doc.doc);
System.out.println(d.get("contents"));
}
} catch (Exception e) {
e.printStackTrace();
}
}
 
 
public static void main(String[] args) throws IOException, ParseException {
Querys ci = new Querys("charpter2-1");
ci.index();
System.out.println("----------termQuery--------------");
ci.termQuery("city", "Venice");
 
System.out.println("----------termRangeQuery--------------");
ci.termRangeQuery(null, null);
 
System.out.println("----------numericRangeQuery--------------");
ci.numericRangeQuery(1, 5);
 
System.out.println("----------prefixQuery--------------");
ci.prefixQuery("contents", "bri");
 
System.out.println("----------booleanQuery--------------");
ci.booleanQuery();
 
System.out.println("----------phraseQuery--------------");
ci.phraseQuery();
 
System.out.println("----------wildCardQuery--------------");
ci.wildCardQuery();
 
System.out.println("----------fuzzyQuery--------------");
ci.fuzzyQuery();
 
System.out.println("----------expressionQuery--------------");
ci.expressionQuery();
 
System.out.println("----------test reopen--------------");
ci.testReopen();
 
}
 
}
