A full-text search plugin for Cassandra
https://github.com/Stratio/cassandra-lucene-index
Stratio’s Cassandra Lucene Index
Stratio’s Cassandra Lucene Index, derived from Stratio Cassandra, is a plugin for Apache Cassandra that extends its index functionality to provide near real-time search, similar to Elasticsearch or Solr, including full-text search capabilities and free multivariable, geospatial and bitemporal search. This is achieved through an Apache Lucene based implementation of Cassandra secondary indexes, where each node of the cluster indexes its own data. Stratio’s Cassandra indexes are one of the core modules on which Stratio’s BigData platform is based.
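For illustration, here is a minimal sketch of how such a Lucene-backed secondary index is typically declared through CQL, executed here with the DataStax Python driver. The keyspace, table and field names are hypothetical; the index class and the refresh_seconds/schema options follow the plugin’s documented Cassandra 3.x syntax.

```python
# Minimal sketch, assuming a running Cassandra 3.x node with the plugin installed
# and a hypothetical demo.tweets table (id int PRIMARY KEY, user text, body text, time timestamp).
from cassandra.cluster import Cluster  # pip install cassandra-driver

session = Cluster(['127.0.0.1']).connect('demo')

# Declare a Lucene index over the columns we want to search on.
session.execute("""
CREATE CUSTOM INDEX IF NOT EXISTS tweets_index ON tweets ()
USING 'com.stratio.cassandra.lucene.Index'
WITH OPTIONS = {
    'refresh_seconds': '1',
    'schema': '{
        fields: {
            user: {type: "string"},
            body: {type: "text", analyzer: "english"},
            time: {type: "date", pattern: "yyyy/MM/dd"}
        }
    }'
}
""")
```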

Index relevance searches allow you to retrieve the n most relevant results satisfying a search. The coordinator node sends the search to each node in the cluster, each node returns its n best results, and then the coordinator combines these partial results and gives you the n best of them, avoiding a full scan. You can also base the sorting on a combination of fields.
Any cell in the tables can be indexed, including those in the primary key as well as collections. Wide rows are also supported. You can scan token/key ranges, apply additional CQL3 clauses and page on the filtered results.
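A hedged sketch of what a relevance search with sorting and paging could look like against the index declared above; the JSON follows the plugin’s query/sort search syntax, and all names remain hypothetical.

```python
# Relevance search sketch: top 100 tweets whose body matches the phrase,
# sorted by time, fetched through normal CQL paging.
from cassandra.query import SimpleStatement

search = '''{
    query: {type: "phrase", field: "body", value: "big data"},
    sort:  {field: "time", reverse: true}
}'''

stmt = SimpleStatement(
    "SELECT user, body FROM tweets WHERE expr(tweets_index, %s) LIMIT 100",
    fetch_size=50)  # CQL paging also works on sorted searches

for row in session.execute(stmt, [search]):
    print(row.user, row.body)
```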
Index filtered searches are a powerful help when analyzing the data stored in Cassandra with MapReduce frameworks such as Apache Hadoop or, even better, Apache Spark. Adding Lucene filters to the job input can dramatically reduce the amount of data to be processed, avoiding a full scan.
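As a rough illustration of the idea, the sketch below shows the kind of token-range read, with a Lucene filter attached, that a Spark task could issue over its input split; the token bounds and the match filter are purely illustrative, and the names continue the hypothetical schema above.

```python
# Filter-only search (no scoring): the same JSON a Spark job would push down
# so that each task only reads the matching rows of its token range.
filter_json = '{filter: {type: "match", field: "user", value: "jsmith"}}'

rows = session.execute(
    "SELECT user, body FROM tweets "
    "WHERE token(id) > %s AND token(id) <= %s "
    "AND expr(tweets_index, %s)",
    [-9223372036854775808, 0, filter_json])
```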

Benchmark results can give you an idea of the expected performance when combining Lucene indexes with Spark. We run successive queries requesting from 1% to 100% of the stored data. The index performs very well for queries requesting strongly filtered data, but performance decays for less restrictive queries. As the number of records returned by a query increases, we reach a point where the index becomes slower than a full scan, so the decision to use indexes in your Spark jobs depends on query selectivity. The trade-off between the two approaches depends on the particular use case. Generally, combining Lucene indexes with Spark is recommended for jobs retrieving no more than 25% of the stored data.

This project is not intended to replace Apache Cassandra denormalized tables, inverted indexes, and/or secondary indexes. It is just a tool for performing certain kinds of queries that are really hard to address with Apache Cassandra's out-of-the-box features, filling the gap between real-time and analytics workloads.

More detailed information is available at Stratio’s Cassandra Lucene Index documentation.
Features
Lucene search technology integration into Cassandra provides:
- Full text search (language-aware analysis, wildcard, fuzzy, regexp)
- Boolean search (and, or, not)
- Sorting by relevance, column value, and distance
- Geospatial indexing (points, lines, polygons and their multiparts)
- Geospatial transformations (bounding box, buffer, centroid, convex hull, union, difference, intersection)
- Geospatial operations (intersects, contains, is within)
- Bitemporal search (valid and transaction time durations)
- CQL complex types (list, set, map, tuple and UDT)
- CQL user defined functions (UDF)
- CQL paging, even with sorted searches
- Columns with TTL
- Third-party CQL-based drivers compatibility
- Spark and Hadoop compatibility
Not yet supported:
- Thrift API
- Legacy compact storage option
- Indexing counter columns
- Indexing static columns
- Other partitioners than Murmur3
Requirements
- Cassandra (identified by the first three numbers of the plugin version)
- Java >= 1.8 (OpenJDK and Sun have been tested)
- Maven >= 3.0