The previous article located the collect(…) method, whose parameter is the ID of a matching document. From the surrounding code, doc is obtained from iterator.nextDoc(), so when is DefaultBulkScorer.iterator assigned? The code is shown below.

public abstract class Weight implements SegmentCacheable {
  protected static class DefaultBulkScorer extends BulkScorer {
    // ...
    public DefaultBulkScorer(Scorer scorer) {
      // ...
      this.scorer = scorer;
      this.iterator = scorer.iterator();
      this.twoPhase = scorer.twoPhaseIterator();
    }
    // ...
  }
}

In the constructor, scorer.iterator() is what produces the matching document IDs, so where does scorer come from? Let's go back to the Weight.bulkScorer(…) method, shown below. As established earlier, the scorer(context) call here is implemented by TermWeight.

public abstract class Weight implements SegmentCacheable {
  public BulkScorer bulkScorer(LeafReaderContext context) throws IOException {
    Scorer scorer = scorer(context);
    // ...
    return new DefaultBulkScorer(scorer);
  }
}

public class TermQuery extends Query {
  final class TermWeight extends Weight {
    @Override
    public Scorer scorer(LeafReaderContext context) throws IOException {
      final TermsEnum termsEnum = getTermsEnum(context);
      if (termsEnum == null) {
        return null;
      }
      PostingsEnum docs = termsEnum.postings(null, needsScores ? PostingsEnum.FREQS : PostingsEnum.NONE);
      assert docs != null;
      return new TermScorer(this, docs, similarity.simScorer(stats, context));
    }
  }
}

final class TermScorer extends Scorer {
  private final PostingsEnum postingsEnum;

  TermScorer(Weight weight, PostingsEnum td, Similarity.SimScorer docScorer) {
    super(weight);
    this.docScorer = docScorer;
    this.postingsEnum = td;
  }

  @Override
  public DocIdSetIterator iterator() {
    return postingsEnum;
  }
}

At this point we can be sure that scorer.iterator() comes from termsEnum.postings(...). The inverted index is starting to show through.
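
To make this concrete, here is a minimal hedged sketch of the loop that DefaultBulkScorer effectively drives over that iterator; postings stands for the PostingsEnum returned by termsEnum.postings(...) and is not a variable from the Lucene source above.

static void walkIterator(PostingsEnum postings) throws IOException {
  int doc;
  while ((doc = postings.nextDoc()) != DocIdSetIterator.NO_MORE_DOCS) {
    // every doc produced here is what ultimately reaches LeafCollector.collect(doc)
    System.out.println("matched doc id: " + doc);
  }
}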

Next, let's focus on the actual type of termsEnum and on its postings(...) method.

As shown above, termsEnum comes from TermQuery.getTermsEnum(...), whose code is as follows.

public class TermQuery extends Query {
  private TermsEnum getTermsEnum(LeafReaderContext context) throws IOException {
    final TermState state = termStates.get(context.ord);
    final TermsEnum termsEnum = context.reader().terms(term.field()).iterator();
    termsEnum.seekExact(term.bytes(), state);
    return termsEnum;
  }
}

public final class LeafReaderContext extends IndexReaderContext {
  private final LeafReader reader;
}

LeafReader is an abstract class and does not itself implement terms(...), so the object returned by context.reader() must be some concrete subclass of LeafReader. We already know that a LeafReaderContext is one of the elements of IndexSearcher.leafContexts, so finding the code that assigns IndexSearcher.leafContexts will tell us the actual type of context.reader().
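
Before tracing that assignment through the source, a quick empirical shortcut: the hedged sketch below (the index path is a made-up placeholder) opens an index the way a typical search application does and prints the runtime class of each leaf reader, which is exactly the type the rest of this article derives from the code.

Directory dir = FSDirectory.open(Paths.get("/path/to/index"));   // placeholder path
DirectoryReader reader = DirectoryReader.open(dir);
IndexSearcher searcher = new IndexSearcher(reader);              // IndexSearcher.leafContexts is filled here
for (LeafReaderContext ctx : reader.leaves()) {
  System.out.println(ctx.reader().getClass().getName());         // the concrete LeafReader subclass
}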

public class IndexSearcher {
  public IndexSearcher(IndexReader r) {
    this(r, null);
  }

  public IndexSearcher(IndexReader r, ExecutorService executor) {
    this(r.getContext(), executor);
  }

  public IndexSearcher(IndexReaderContext context, ExecutorService executor) {
    // ...
    leafContexts = context.leaves();
    // ...
  }
}

From this code, IndexSearcher.leafContexts comes from IndexReader.getContext().leaves(). In the common case, this IndexReader is a StandardDirectoryReader returned by DirectoryReader.open(...). The code is as follows.

public abstract class DirectoryReader extends BaseCompositeReader<LeafReader> {
  public static DirectoryReader open(final Directory directory) throws IOException {
    return StandardDirectoryReader.open(directory, null);
  }
}

So IndexSearcher.leafContexts actually comes from StandardDirectoryReader.getContext().leaves().

public final class StandardDirectoryReader extends DirectoryReader {
  // ...
}

public abstract class DirectoryReader extends BaseCompositeReader<LeafReader> {
  // ...
}

public abstract class BaseCompositeReader<R extends IndexReader> extends CompositeReader {
  // ...
}

public abstract class CompositeReader extends IndexReader {
  @Override
  public final CompositeReaderContext getContext() {
    // ...
    readerContext = CompositeReaderContext.create(this);
    return readerContext;
  }
}

public final class CompositeReaderContext extends IndexReaderContext {
  private final List<LeafReaderContext> leaves;

  @Override
  public List<LeafReaderContext> leaves() throws UnsupportedOperationException {
    return leaves;
  }
}

How exactly does CompositeReaderContext.create(…) build this context?

public final class CompositeReaderContext extends IndexReaderContext {
  static CompositeReaderContext create(CompositeReader reader) {
    return new Builder(reader).build();
  }

  private static final class Builder {
    public Builder(CompositeReader reader) {
      this.reader = reader;
    }

    public CompositeReaderContext build() {
      return (CompositeReaderContext) build(null, reader, 0, 0);
    }

    private IndexReaderContext build(CompositeReaderContext parent, IndexReader reader, int ord, int docBase) {
      if (reader instanceof LeafReader) {
        final LeafReader ar = (LeafReader) reader;
        final LeafReaderContext atomic = new LeafReaderContext(parent, ar, ord, docBase, leaves.size(), leafDocBase);
        leaves.add(atomic);
        leafDocBase += reader.maxDoc();
        return atomic;
      } else {
        final CompositeReader cr = (CompositeReader) reader;
        final List<? extends IndexReader> sequentialSubReaders = cr.getSequentialSubReaders();
        final List<IndexReaderContext> children = Arrays.asList(new IndexReaderContext[sequentialSubReaders.size()]);
        final CompositeReaderContext newParent;
        if (parent == null) {
          newParent = new CompositeReaderContext(cr, children, leaves);
        } else {
          newParent = new CompositeReaderContext(parent, cr, ord, docBase, children);
        }
        int newDocBase = 0;
        for (int i = 0, c = sequentialSubReaders.size(); i < c; i++) {
          final IndexReader r = sequentialSubReaders.get(i);
          children.set(i, build(newParent, r, i, newDocBase));
          newDocBase += r.maxDoc();
        }
        assert newDocBase == cr.maxDoc();
        return newParent;
      }
    }
  }

  private CompositeReaderContext(CompositeReaderContext parent, CompositeReader reader, int ordInParent,
      int docbaseInParent, List<IndexReaderContext> children, List<LeafReaderContext> leaves) {
    this.leaves = leaves == null ? null : Collections.unmodifiableList(leaves);
    // ...
  }
}

On this call to build(...), the reader passed in is the StandardDirectoryReader, which is a CompositeReader, so getSequentialSubReaders() is invoked to obtain all of its sub-readers and build(...) recurses on each of them; every sub-reader is a LeafReader, so a LeafReaderContext wrapping that reader is created and added to leaves.
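
The shape of the resulting tree can be inspected with the hedged helper below; it only uses the public IndexReaderContext API (reader() and children()) and is not part of Lucene itself.

static void dumpContexts(IndexReaderContext ctx, int depth) {
  StringBuilder indent = new StringBuilder();
  for (int i = 0; i < depth; i++) {
    indent.append("  ");
  }
  System.out.println(indent + ctx.reader().getClass().getSimpleName());
  if (ctx instanceof CompositeReaderContext) {
    for (IndexReaderContext child : ctx.children()) {
      dumpContexts(child, depth + 1);
    }
  }
}

Calling dumpContexts(reader.getContext(), 0) on an index opened with DirectoryReader.open(...) typically prints a StandardDirectoryReader root with one leaf per segment underneath.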

So the reader held by each LeafReaderContext in IndexSearcher.leafContexts is one of the sub-readers returned by StandardDirectoryReader's getSequentialSubReaders().

public final class StandardDirectoryReader extends DirectoryReader {
  static DirectoryReader open(final Directory directory, final IndexCommit commit) throws IOException {
    return new SegmentInfos.FindSegmentsFile<DirectoryReader>(directory) {
      @Override
      protected DirectoryReader doBody(String segmentFileName) throws IOException {
        SegmentInfos sis = SegmentInfos.readCommit(directory, segmentFileName);
        final SegmentReader[] readers = new SegmentReader[sis.size()];
        boolean success = false;
        try {
          for (int i = sis.size() - 1; i >= 0; i--) {
            readers[i] = new SegmentReader(sis.info(i), sis.getIndexCreatedVersionMajor(), IOContext.READ);
          }
          DirectoryReader reader = new StandardDirectoryReader(directory, readers, null, sis, false, false);
          success = true;
          return reader;
        }
        // ...
      }
    }.run(commit);
  }

  StandardDirectoryReader(Directory directory, LeafReader[] readers, IndexWriter writer, SegmentInfos sis,
      boolean applyAllDeletes, boolean writeAllDeletes) throws IOException {
    super(directory, readers);
    this.writer = writer;
    this.segmentInfos = sis;
    this.applyAllDeletes = applyAllDeletes;
    this.writeAllDeletes = writeAllDeletes;
  }
}

public abstract class DirectoryReader extends BaseCompositeReader<LeafReader> {
  protected DirectoryReader(Directory directory, LeafReader[] segmentReaders) throws IOException {
    super(segmentReaders);
    this.directory = directory;
  }
}

public abstract class BaseCompositeReader<R extends IndexReader> extends CompositeReader {
  protected BaseCompositeReader(R[] subReaders) throws IOException {
    this.subReaders = subReaders;
    // ...
  }
}

From this we can tell that the type of each such reader is SegmentReader, and that class (strictly speaking, its parent class CodecReader) does have a terms(…) method. The code is as follows.

public final class SegmentReader extends CodecReader {
  // ...
  final SegmentCoreReaders core;

  @Override
  public FieldsProducer getPostingsReader() {
    return core.fields;
  }
}

public abstract class CodecReader extends LeafReader implements Accountable {
  @Override
  public final Terms terms(String field) throws IOException {
    return getPostingsReader().terms(field);
  }
}

final class SegmentCoreReaders {
  final FieldsProducer fields;

  SegmentCoreReaders(Directory dir, SegmentCommitInfo si, IOContext context) throws IOException {
    // ...
    final Codec codec = si.info.getCodec();
    final PostingsFormat format = codec.postingsFormat();
    fields = format.fieldsProducer(segmentReadState);
    // ...
  }
}

In lucene-7.3.0 the default codec is Lucene70Codec and the default postings format is Lucene50PostingsFormat; how those defaults are chosen is left to a separate article on Lucene segments (to be written).
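
As a hedged sanity check (it only reflects whatever lucene-core is on the classpath, here assumed to be 7.3.0), these defaults can be printed directly, since codecs and postings formats are looked up by name through Lucene's SPI:

Codec codec = Codec.getDefault();
System.out.println(codec.getName());                               // "Lucene70" on lucene-7.3.0
System.out.println(PostingsFormat.forName("Lucene50").getClass()); // class ...Lucene50PostingsFormat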

So SegmentReader.terms(…) actually calls terms(…) on the FieldsProducer returned by Lucene50PostingsFormat.fieldsProducer(…).

public final class Lucene50PostingsFormat extends PostingsFormat {
  @Override
  public FieldsProducer fieldsProducer(SegmentReadState state) throws IOException {
    PostingsReaderBase postingsReader = new Lucene50PostingsReader(state);
    FieldsProducer ret = new BlockTreeTermsReader(postingsReader, state);
    return ret;
  }
}

That means SegmentReader.terms(…) ultimately ends up in BlockTreeTermsReader.terms(…).

public final class BlockTreeTermsReader extends FieldsProducer {
  private final TreeMap<String,FieldReader> fields = new TreeMap<>();

  @Override
  public Terms terms(String field) throws IOException {
    return fields.get(field);
  }

  public BlockTreeTermsReader(PostingsReaderBase postingsReader, SegmentReadState state) throws IOException {
    this.postingsReader = postingsReader;
    fields.put(fieldInfo.name, new FieldReader(...));
  }
}

So what BlockTreeTermsReader.terms(…) actually returns is a FieldReader.

Let's revisit the core code from earlier.

public class TermQuery extends Query {
  final class TermWeight extends Weight {
    @Override
    public Scorer scorer(LeafReaderContext context) throws IOException {
      final TermsEnum termsEnum = getTermsEnum(context);
      if (termsEnum == null) {
        return null;
      }
      PostingsEnum docs = termsEnum.postings(null, needsScores ? PostingsEnum.FREQS : PostingsEnum.NONE);
      assert docs != null;
      return new TermScorer(this, docs, similarity.simScorer(stats, context));
    }
  }

  private TermsEnum getTermsEnum(LeafReaderContext context) throws IOException {
    final TermState state = termStates.get(context.ord);
    final TermsEnum termsEnum = context.reader().terms(term.field()).iterator();
    termsEnum.seekExact(term.bytes(), state);
    return termsEnum;
  }
}

So termsEnum is the result of FieldReader.iterator(), that is, a SegmentTermsEnum.

public final class FieldReader extends Terms implements Accountable {
  @Override
  public TermsEnum iterator() throws IOException {
    return new SegmentTermsEnum(this);
  }
}
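
These runtime types can be verified with the small hedged helper below (the field name "content" is a placeholder) from any code that has a LeafReaderContext in hand:

static void printTermsTypes(LeafReaderContext context) throws IOException {
  Terms terms = context.reader().terms("content");   // a FieldReader, per the analysis above
  if (terms == null) {
    return;                                          // field not present in this segment
  }
  TermsEnum termsEnum = terms.iterator();            // a SegmentTermsEnum
  System.out.println(terms.getClass().getSimpleName() + " / " + termsEnum.getClass().getSimpleName());
}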

So termsEnum.postings(…) is SegmentTermsEnum.postings(…).

final class SegmentTermsEnum extends TermsEnum {
  final FieldReader fr;

  @Override
  public PostingsEnum postings(PostingsEnum reuse, int flags) throws IOException {
    currentFrame.decodeMetaData();
    return fr.parent.postingsReader.postings(fr.fieldInfo, currentFrame.state, reuse, flags);
  }
}

public final class FieldReader extends Terms implements Accountable {
  final BlockTreeTermsReader parent;
}

public final class BlockTreeTermsReader extends FieldsProducer {
  final PostingsReaderBase postingsReader;
}

fr is assigned in the SegmentTermsEnum constructor.

final class SegmentTermsEnum extends TermsEnum {
  public SegmentTermsEnum(FieldReader fr) throws IOException {
    this.fr = fr;
  }
}

And that FieldReader is constructed inside the BlockTreeTermsReader constructor.

public final class BlockTreeTermsReader extends FieldsProducer {
  public BlockTreeTermsReader(PostingsReaderBase postingsReader, SegmentReadState state) throws IOException {
    // ...
    fields.put(fieldInfo.name, new FieldReader(this, ...));
  }
}

public final class FieldReader extends Terms implements Accountable {
  FieldReader(BlockTreeTermsReader parent, ...) throws IOException {
    this.parent = parent;
  }
}

So fr.parent is the BlockTreeTermsReader, and fr.parent.postingsReader is a Lucene50PostingsReader, the core class behind the inverted index.
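
Putting the whole chain together, here is a hedged end-to-end sketch (the index path, the field "content", and the term "lucene" are placeholders) that walks exactly the path traced in this article, from the per-segment reader through terms(...), iterator() and postings(...) down to the document IDs and frequencies stored in the inverted index:

import org.apache.lucene.index.*;
import org.apache.lucene.search.DocIdSetIterator;
import org.apache.lucene.store.FSDirectory;
import org.apache.lucene.util.BytesRef;
import java.nio.file.Paths;

public class PostingsWalk {
  public static void main(String[] args) throws Exception {
    try (DirectoryReader reader = DirectoryReader.open(FSDirectory.open(Paths.get("/path/to/index")))) {
      for (LeafReaderContext ctx : reader.leaves()) {            // ctx.reader() is the SegmentReader
        Terms terms = ctx.reader().terms("content");             // the FieldReader
        if (terms == null) {
          continue;                                              // field absent in this segment
        }
        TermsEnum termsEnum = terms.iterator();                  // the SegmentTermsEnum
        if (!termsEnum.seekExact(new BytesRef("lucene"))) {
          continue;                                              // term absent in this segment
        }
        PostingsEnum postings = termsEnum.postings(null, PostingsEnum.FREQS);
        int doc;
        while ((doc = postings.nextDoc()) != DocIdSetIterator.NO_MORE_DOCS) {
          System.out.println("doc=" + doc + " freq=" + postings.freq());
        }
      }
    }
  }
}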
