Solr: a custom Search RequestHandler
As you know, I've been playing with Solr lately, trying to see how feasible it would be to customize it for our needs. We have been a Lucene shop for a while, and we've built our own search framework around it, which has served us well so far. The rationale for moving to Solr is driven primarily by the need to expose our search tier as a service for our internal applications. While it would have been relatively simple (probably simpler) to slap on an HTTP interface over our current search tier, we also want to use the other Solr features such as incremental indexing and replication.
One of our challenges in moving to Solr is that the way we do search is quite different from the way Solr does search. A query string passed to the default Solr search handler is parsed into a Lucene query and a single search call is made on the underlying index. In our case, the query string is passed to our taxonomy, and depending on the type of query (as identified by the taxonomy), it is sent through one or more sub-handlers. Each sub-handler converts the query into a (different) Lucene query and executes the search against the underlying index. The results from each sub-handler are then layered together to present the final search result.
Conceptually, the customization is quite simple: create a custom subclass of RequestHandlerBase (as advised on this wiki page) and override its handleRequestBody(SolrQueryRequest, SolrQueryResponse) method. In reality, I had quite a tough time doing this, admittedly caused (at least in part) by my ignorance of Solr internals. However, I did succeed, so in this post I outline my solution, along with some advice I feel would be useful to others embarking on a similar route.
Configuration and Code
The handler is configured in solrconfig.xml to trigger in response to a /solr/mysearch request, and I used the "invariants" block to pass configuration parameters into the handler.
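A minimal sketch of such a declaration (the invariant parameter names below are illustrative, not the exact ones I use):

<!-- hypothetical declaration; adjust parameter names and values to your handler -->
<requestHandler name="/mysearch" class="org.apache.solr.handler.ext.MyRequestHandler">
  <lst name="invariants">
    <str name="prop1">some-value</str>
    <str name="taxonomyUrl">http://localhost:8080/taxonomy/lookup</str>
  </lst>
</requestHandler>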
And here is the (also rewritten for readability) code for the custom handler. I used the SearchHandler and MoreLikeThisHandler as my templates, but diverged from them in several ways in order to accommodate my requirements. I will describe those divergences below.
package org.apache.solr.handler.ext;
// imports omitted

public class MyRequestHandler extends RequestHandlerBase {

  private String prop1;

  // ... remaining fields and methods: init(), handleRequestBody(), buildFilter(), append() ...
}
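The full listing is long, so here is a minimal sketch of how the handleRequestBody() flow described above can hang together (taxonomyClient, QueryType, SubHandler and subHandlersFor() are hypothetical stand-ins for my internal classes, not Solr APIs):

@Override
public void handleRequestBody(SolrQueryRequest req, SolrQueryResponse rsp) throws Exception {
  SolrParams params = req.getParams();
  String q = params.get(CommonParams.Q);
  int rows = params.getInt(CommonParams.ROWS, 10);
  SolrIndexSearcher searcher = req.getSearcher();
  Filter filter = buildFilter(params.getParams(CommonParams.FQ), req);

  // ask the taxonomy service how this query should be handled (hypothetical client)
  QueryType queryType = taxonomyClient.classify(q);

  // each sub-handler contributes one layer of results
  SolrDocumentList results = new SolrDocumentList();
  Set<Integer> seenDocIds = new HashSet<Integer>();
  for (SubHandler subHandler : subHandlersFor(queryType)) {
    append(results, seenDocIds, subHandler, q, filter, searcher, rows);
  }
  results.setNumFound(results.size());
  rsp.add("response", results);
}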
Configuration Parameters - I started out baking most of my "configuration" parameters as constants within the handler code, but later moved them into the invariants block in the XML declaration. Not ideal, since we still need to touch the solrconfig.xml file (which is regarded as application code in our environment) to change behavior. The ideal solution, given the circumstances, would probably be to use JNDI to hold the configuration parameters and have the handler connect to the JNDI to pull the properties it needs.
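For completeness, a minimal sketch of how the handler can pick up those invariants in init() (using the illustrative prop1 parameter from the configuration sketch above):

@Override
public void init(NamedList args) {
  super.init(args);
  // "invariants" is one of the standard sections RequestHandlerBase recognizes
  NamedList invariantsConf = (NamedList) args.get("invariants");
  if (invariantsConf != null) {
    SolrParams params = SolrParams.toSolrParams(invariantsConf);
    prop1 = params.get("prop1");
  }
}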
Using Filter - The MoreLikeThis handler converts the fq (filter query) parameter into a List of Query objects, because that is what searcher.getDocList() expects. In my case, I couldn't use DocListAndSet because DocList is unmodifiable (i.e., DocList.add() throws an UnsupportedOperationException). So I fell back to the pattern I am used to, which is getting the ScoreDoc[] array from a standard searcher.search(Query, Filter, numDocs) call. That is why buildFilter() returns a Filter and not a List<Query>.
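A minimal sketch of a buildFilter() along these lines, assuming Solr 1.4-era APIs (the exact signature here is mine, not the original):

private Filter buildFilter(String[] fqs, SolrQueryRequest req) throws ParseException {
  if (fqs == null || fqs.length == 0) {
    return null;
  }
  // AND the individual fq clauses together and wrap them as a Lucene Filter
  BooleanQuery bq = new BooleanQuery();
  for (String fq : fqs) {
    if (fq != null && fq.trim().length() > 0) {
      QParser parser = QParser.getParser(fq, null, req);
      bq.add(parser.getQuery(), BooleanClause.Occur.MUST);
    }
  }
  return new CachingWrapperFilter(new QueryWrapperFilter(bq));
}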
Connect to external services - My handler needs to connect to our taxonomy service. The taxonomy exposes an RMI service with a very rich, fine-grained API. I tried to use this at first, but ran into problems because the RMI client needs access to configuration files on the local filesystem, and Jetty couldn't see those files since they were outside its context. I ended up solving this by exposing a coarse-grained JSON-over-HTTP service on the taxonomy side; the handler calls it once per query and gets back all the information it needs in a single call. Probably not ideal, since the logic is now spread across two places - I will probably revisit the RMI client integration in the future.
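A bare-bones sketch of that once-per-query HTTP call (the taxonomyUrl endpoint is hypothetical, and JSON parsing is left out):

// hypothetical endpoint; in practice the URL comes from the handler configuration
URL url = new URL(taxonomyUrl + "?q=" + URLEncoder.encode(q, "UTF-8"));
BufferedReader reader = new BufferedReader(new InputStreamReader(url.openStream(), "UTF-8"));
StringBuilder json = new StringBuilder();
String line;
while ((line = reader.readLine()) != null) {
  json.append(line);
}
reader.close();
// parse the JSON payload into query type, sub-queries, etc. (parsing omitted)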
Layer multiple resultsets - This is the main reason for writing the custom handler. Most of the work happens in the append() method. Each sub-handler calls SolrIndexSearcher.search(Query, Filter, numDocs) and copies its resulting ScoreDoc[] array into a shared List<SolrDocument>. Since earlier sub-handlers may already have returned a given document, each subsequent sub-handler checks its hits against a Set of docIds that have already been added, so a document only appears in the first layer that finds it.
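A minimal sketch of what such an append() step can look like (SubHandler is a hypothetical stand-in for my internal sub-handler interface, and the copied field names are illustrative):

private void append(SolrDocumentList results, Set<Integer> seenDocIds,
    SubHandler subHandler, String q, Filter filter,
    SolrIndexSearcher searcher, int rows) throws IOException {
  // the sub-handler turns the raw query string into its own Lucene query
  Query query = subHandler.buildQuery(q);
  TopDocs topDocs = searcher.search(query, filter, rows);
  for (ScoreDoc scoreDoc : topDocs.scoreDocs) {
    // skip documents already contributed by an earlier layer
    if (!seenDocIds.add(scoreDoc.doc)) {
      continue;
    }
    Document luceneDoc = searcher.doc(scoreDoc.doc);
    SolrDocument doc = new SolrDocument();
    // copy whichever stored fields the response needs (field names illustrative)
    doc.addField("id", luceneDoc.get("id"));
    doc.addField("title", luceneDoc.get("title"));
    doc.addField("score", scoreDoc.score);
    results.add(doc);
  }
}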
Add a pseudo-field to the Document - There are currently two competing initiatives in Solr (SOLR-1566 and SOLR-1298) on how to handle this situation. Since I was populating SolrDocument objects (this was one of the reasons I started using SolrDocumentList), it was relatively simple for me to pass in a Map of extra fields which are just tacked on to the end of the SolrDocument.
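With SolrDocument objects in hand, the tack-on step itself is trivial; a sketch, assuming the sub-handler hands back a hypothetical Map<String, Object> of extra fields:

// extraFields is a hypothetical Map<String, Object> of pseudo-fields computed by the sub-handler
for (Map.Entry<String, Object> entry : extraFields.entrySet()) {
  doc.addField(entry.getKey(), entry.getValue());
}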
Some Miscellaneous Advice
Here are some tips and advice I wish someone had given me before I started out on this.
For your own sanity, standardize on a Solr release. I chose 1.4.1, which was the latest at the time of writing. Prior to that, I was developing against the Solr trunk. One day (after about 60-70% of my code was working), I decided to do an svn update, and all of a sudden there was a huge bunch of compile failures (in my code as well as in the Solr code). Some of them were probably caused by missing or out-of-date JARs in my .classpath. But the point is that Solr is being actively developed, there is quite a bit of code churn, and if you really want to work on the trunk (or a pre-release branch), you should be ready to deal with these situations.
Solr is well designed (so the flow is fairly intuitive) and reasonably well documented, but there are places where you will probably need to step through the code in a debugger to figure out what's going on. I am still using the Jetty container in the example subdirectory. This page on Lucid Imagination outlines the steps needed to run Solr within Eclipse using the Jetty plugin, but thanks to the information on this StackOverflow page, all I did was add the standard JDWP debug parameters to the java call, like so:
sujit@cyclone:example$ java -Dsolr.solr.home=my_schema \
    -Xdebug -Xrunjdwp:transport=dt_socket,server=y,suspend=n,address=8883 \
    -jar start.jar
and then set up an external debug configuration for localhost:8883 in Eclipse, and I could step through the code just fine.
Solr caches very aggressively (which is great for a production environment), but for development you need to disable the caching. I did this by commenting out the filterCache, queryResultCache and documentCache sections and changing the httpCaching element to use never304="true"; all of these live in solrconfig.xml.
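For reference, the httpCaching change is a one-line tweak (a sketch; this element lives inside the requestDispatcher section of solrconfig.xml):

<!-- tell Solr to never send 304 Not Modified responses while developing -->
<httpCaching never304="true" />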
Conclusion
The approach I described here is not as performant as the "standard" flow. Because I do multiple searches in a single request, there is more I/O; I burn more CPU cycles deduplicating documents across layers; and I use more memory per request because I populate SolrDocument objects inline rather than just passing a DocListAndSet to the ResponseBuilder. I don't see a way around it, though, given the nature of my requirements.
If you are a Solr expert, or someone who is familiar with the internals, I would appreciate hearing your thoughts about this approach - criticisms and suggestions are welcome.
http://sujitpal.blogspot.com/2011/02/solr-custom-search-requesthandler.html
七,服务器端Nagios图形监控显示和管理 前面搭建的Nagios服务虽然能显示信息,能报警.但是在企业工作中还会需要一个历史趋势图,跟踪每一个业务的长期趋势,并且能以图形的方式展示,例如:根据磁盘的 ...