Installing Riak on Linux (CentOS)
Required packages:
gcc
gcc-c++
glibc-devel
make
pam-devel
Install them with yum:
sudo yum install gcc gcc-c++ glibc-devel make git pam-devel
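Building Riak from source also needs an Erlang/OTP runtime on the build machine (Riak 2.0 is typically built against Basho's patched R16B02). A quick check, assuming erl is already installed and on the PATH:
erl -noshell -eval 'io:format("~s~n", [erlang:system_info(otp_release)]), halt().'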
Download and build:
wget http://s3.amazonaws.com/downloads.basho.com/riak/2.0/2.0.0/riak-2.0.0.tar.gz
tar zxvf riak-2.0.0.tar.gz
cd riak-2.0.0
make rel
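make rel produces a self-contained release under rel/riak. Before moving on to the system-wide configuration, a minimal smoke test of the freshly built node (assuming the build finished without errors):
cd rel/riak
bin/riak start
bin/riak ping    # a healthy node answers "pong"
bin/riak stop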
Edit the global configuration:
vi /etc/riak/riak.conf
%% -*- mode: erlang;erlang-indent-level: 4;indent-tabs-mode: nil -*-
%% ex: ft=erlang ts=4 sw=4 et
[
%% Riak Client APIs config
{riak_api, [
%% pb_backlog is the maximum length to which the queue of pending
%% connections may grow. If set, it must be an integer >= 0.
%% By default the value is 5. If you anticipate a huge number of
%% connections being initialised *simultaneously*, set this number
%% higher.
%% {pb_backlog, 64},
%% pb is a list of IP addresses and TCP ports that the Riak
%% Protocol Buffers interface will bind.
{pb, [ {"127.0.0.1", 8087 } ]}
]},
%% Riak Core config
{riak_core, [
%% Default location of ringstate
{ring_state_dir, "/var/lib/riak/ring"},
%% Default ring creation size. Make sure it is a power of 2,
%% e.g. 16, 32, 64, 128, 256, 512 etc
%{ring_creation_size, 64},
%% http is a list of IP addresses and TCP ports that the Riak
%% HTTP interface will bind.
{http, [ {"127.0.0.1", 8098 } ]},
%% https is a list of IP addresses and TCP ports that the Riak
%% HTTPS interface will bind.
%{https, [{ "127.0.0.1", 8098 }]},
%% Default cert and key locations for https can be overridden
%% with the ssl config variable, for example:
%{ssl, [
% {certfile, "/etc/riak/cert.pem"},
% {keyfile, "/etc/riak/key.pem"}
% ]},
%% riak_handoff_port is the TCP port that Riak uses for
%% intra-cluster data handoff.
{handoff_port, 8099},
%% To encrypt riak_core intra-cluster data handoff traffic,
%% uncomment the following line and edit its path to an
%% appropriate certfile and keyfile. (This example uses a
%% single file with both items concatenated together.)
%{handoff_ssl_options, [{certfile, "/tmp/erlserver.pem"}]},
%% DTrace support
%% Do not enable 'dtrace_support' unless your Erlang/OTP
%% runtime is compiled to support DTrace. DTrace is
%% available in R15B01 (supported by the Erlang/OTP
%% official source package) and in R14B04 via a custom
%% source repository & branch.
{dtrace_support, false},
%% Health Checks
%% If disabled, health checks registered by an application will
%% be ignored. NOTE: this option cannot be changed at runtime.
%% To re-enable, the setting must be changed and the node restarted.
%% NOTE: As of Riak 1.3.2, health checks are deprecated as they
%% may interfere with the new overload protection mechanisms.
%% If there is a good reason to re-enable them, you must uncomment
%% this line and also add an entry in the riak_kv section:
%% {riak_kv, [ ..., {enable_health_checks, true}, ...]}
%% {enable_health_checks, true},
%% Platform-specific installation paths (substituted by rebar)
{platform_bin_dir, "/usr/sbin"},
{platform_data_dir, "/var/lib/riak"},
{platform_etc_dir, "/etc/riak"},
{platform_lib_dir, "/usr/lib64/riak/lib"},
{platform_log_dir, "/var/log/riak"}
]},
%% Riak KV config
{riak_kv, [
%% Storage_backend specifies the Erlang module defining the storage
%% mechanism that will be used on this node.
{storage_backend, riak_kv_bitcask_backend},
%% raw_name is the first part of all URLS used by the Riak raw HTTP
%% interface. See riak_web.erl and raw_http_resource.erl for
%% details.
%{raw_name, "riak"},
%% Enable active anti-entropy subsystem + optional debug messages:
%% {anti_entropy, {on|off, []}},
%% {anti_entropy, {on|off, [debug]}},
{anti_entropy, {on, []}},
%% Restrict how fast AAE can build hash trees. Building the tree
%% for a given partition requires a full scan over that partition's
%% data. Once built, trees stay built until they are expired.
%% Config is of the form:
%% {num-builds, per-timespan-in-milliseconds}
%% Default is 1 build per hour.
{anti_entropy_build_limit, {1, 3600000}},
%% Determine how often hash trees are expired after being built.
%% Periodically expiring a hash tree ensures the on-disk hash tree
%% data stays consistent with the actual k/v backend data. It also
%% helps Riak identify silent disk failures and bit rot. However,
%% expiration is not needed for normal AAE operation and should be
%% infrequent for performance reasons. The time is specified in
%% milliseconds. The default is 1 week.
{anti_entropy_expire, 604800000},
%% Limit how many AAE exchanges/builds can happen concurrently.
{anti_entropy_concurrency, 2},
%% The tick determines how often the AAE manager looks for work
%% to do (building/expiring trees, triggering exchanges, etc).
%% The default is every 15 seconds. Lowering this value will
%% speed up the rate at which all replicas are synced across the cluster.
%% Increasing the value is not recommended.
{anti_entropy_tick, 15000},
%% The directory where AAE hash trees are stored.
{anti_entropy_data_dir, "/var/lib/riak/anti_entropy"},
%% The LevelDB options used by AAE to generate the LevelDB-backed
%% on-disk hashtrees.
{anti_entropy_leveldb_opts, [{write_buffer_size, 4194304},
{max_open_files, 20}]},
%% mapred_name is the URL used to submit map/reduce requests to Riak.
{mapred_name, "mapred"},
%% mapred_2i_pipe indicates whether secondary-index
%% MapReduce inputs are queued in parallel via their own
%% pipe ('true'), or serially via a helper process
%% ('false' or undefined). Set to 'false' or leave
%% undefined during a rolling upgrade from 1.0.
{mapred_2i_pipe, true},
%% Each of the following entries control how many Javascript
%% virtual machines are available for executing map, reduce,
%% pre- and post-commit hook functions.
{map_js_vm_count, 8},
{reduce_js_vm_count, 6},
{hook_js_vm_count, 2},
%% js_max_vm_mem is the maximum amount of memory, in megabytes,
%% allocated to the Javascript VMs. If unset, the default is
%% 8MB.
{js_max_vm_mem, 8},
%% js_thread_stack is the maximum amount of thread stack, in megabytes,
%% allocated to the Javascript VMs. If unset, the default is 16MB.
%% NOTE: This is not the same as the C thread stack.
{js_thread_stack, 16},
%% js_source_dir should point to a directory containing Javascript
%% source files which will be loaded by Riak when it initializes
%% Javascript VMs.
%{js_source_dir, "/tmp/js_source"},
%% http_url_encoding determines how Riak treats URL encoded
%% buckets, keys, and links over the REST API. When set to 'on'
%% Riak always decodes encoded values sent as URLs and Headers.
%% Otherwise, Riak defaults to compatibility mode where links
%% are decoded, but buckets and keys are not. The compatibility
%% mode will be removed in a future release.
{http_url_encoding, on},
%% Switch to vnode-based vclocks rather than client ids. This
%% significantly reduces the number of vclock entries.
%% Only set true if *all* nodes in the cluster are upgraded to 1.0
{vnode_vclocks, true},
%% This option toggles compatibility of keylisting with 1.0
%% and earlier versions. Once a rolling upgrade to a version
%% > 1.0 is completed for a cluster, this should be set to
%% true for better control of memory usage during key listing
%% operations
{listkeys_backpressure, true},
%% This option specifies how many of each type of fsm may exist
%% concurrently. This is for overload protection and is a new
%% mechanism that obsoletes 1.3's health checks. Note that this number
%% represents two potential processes, so +P in vm.args should be at
%% least 3X the fsm_limit.
{fsm_limit, 50000},
%% Uncomment to make non-paginated results be sorted the
%% same way paginated results are: by term, then key.
%% In earlier Riak releases, all results were sorted this way
%% by default, which can adversely affect performance in some cases.
%% Setting this to true emulates that behavior.
%% {secondary_index_sort_default, true},
%% object_format controls which binary representation of a riak_object
%% is stored on disk.
%% Current options are: v0, v1.
%% v0: Original erlang:term_to_binary format. Higher space overhead.
%% v1: New format for more compact storage of small values.
{object_format, v1}
]},
%% Riak Search Config
{riak_search, [
%% To enable Search functionality set this 'true'.
{enabled, false}
]},
%% Merge Index Config
{merge_index, [
%% The root dir to store search merge_index data
{data_root, "/var/lib/riak/merge_index"},
%% Size, in bytes, of the in-memory buffer. When this
%% threshold has been reached the data is transformed
%% into a segment file which resides on disk.
{buffer_rollover_size, 1048576},
%% Over time the segment files need to be compacted.
%% This is the maximum number of segments that will be
%% compacted at once. A lower value will lead to
%% quicker but more frequent compactions.
{max_compact_segments, 20}
]},
%% Bitcask Config
{bitcask, [
%% Configure how Bitcask writes data to disk.
%% erlang: Erlang's built-in file API
%% nif: Direct calls to the POSIX C API
%%
%% The NIF mode provides higher throughput for certain
%% workloads, but has the potential to negatively impact
%% the Erlang VM, leading to higher worst-case latencies
%% and possible throughput collapse.
{io_mode, erlang},
{data_root, "/var/lib/riak/bitcask"}
]},
%% eLevelDB Config
{eleveldb, [
{data_root, "/var/lib/riak/leveldb"}
]},
%% Lager Config
{lager, [
%% What handlers to install with what arguments
%% The defaults for the logfiles are to rotate the files when
%% they reach 10Mb or at midnight, whichever comes first, and keep
%% the last 5 rotations. See the lager README for a description of
%% the time rotation format:
%% https://github.com/basho/lager/blob/master/README.org
%%
%% If you wish to disable rotation, you can either set the size to 0
%% and the rotation time to "", or instead specify a 2-tuple that only
%% consists of {Logfile, Level}.
%%
%% If you wish to have riak log messages to syslog, you can use a handler
%% like this:
%% {lager_syslog_backend, ["riak", daemon, info]},
%%
{handlers, [
{lager_file_backend, [
{"/var/log/riak/error.log", error, 10485760, "$D0", 5},
{"/var/log/riak/console.log", info, 10485760, "$D0", 5}
]}
] },
%% Whether to write a crash log, and where.
%% Commented/omitted/undefined means no crash logger.
{crash_log, "/var/log/riak/crash.log"},
%% Maximum size in bytes of events in the crash log - defaults to 65536
{crash_log_msg_size, 65536},
%% Maximum size of the crash log in bytes, before its rotated, set
%% to 0 to disable rotation - default is 0
{crash_log_size, 10485760},
%% What time to rotate the crash log - default is no time
%% rotation. See the lager README for a description of this format:
%% https://github.com/basho/lager/blob/master/README.org
{crash_log_date, "$D0"},
%% Number of rotated crash logs to keep, 0 means keep only the
%% current one - default is 0
{crash_log_count, 5},
%% Whether to redirect error_logger messages into lager - defaults to true
{error_logger_redirect, true},
%% maximum number of error_logger messages to handle in a second
%% lager shipped with a limit of 50, which is a little low for riak's startup
{error_logger_hwm, 100}
]},
%% riak_sysmon config
{riak_sysmon, [
%% To disable forwarding events of a particular type, use a
%% limit of 0.
{process_limit, 30},
{port_limit, 2},
%% Finding reasonable limits for a given workload is a matter
%% of experimentation.
%% NOTE: Enabling the 'gc_ms_limit' monitor (by setting non-zero)
%% can cause performance problems on multi-CPU systems.
{gc_ms_limit, 0},
{heap_word_limit, 40111000},
%% Configure the following items to 'false' to disable logging
%% of that event type.
{busy_port, true},
{busy_dist_port, true}
]},
%% SASL config
{sasl, [
{sasl_error_logger, false}
]},
%% riak_control config
{riak_control, [
%% Set to false to disable the admin panel.
{enabled, false},
%% Authentication style used for access to the admin
%% panel. Valid styles are 'userlist' <TODO>.
{auth, userlist},
%% If auth is set to 'userlist' then this is the
%% list of usernames and passwords for access to the
%% admin panel.
{userlist, [{"user", "pass"}
]},
%% The admin panel is broken up into multiple
%% components, each of which is enabled or disabled
%% by one of these settings.
{admin, true}
]}
].
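Note that with the loopback addresses shown above, the HTTP and Protocol Buffers listeners are only reachable from the node itself. To reach them from another machine (as in the URL further below), bind them to the node's own address or to 0.0.0.0, for example {http, [ {"0.0.0.0", 8098} ]}.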
Restart Riak so the new configuration takes effect:
/etc/init.d/riak restart
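Once the node is back up, confirm that it responds, for example (assuming the riak and riak-admin scripts are on the PATH):
riak ping                   # should answer "pong"
riak-admin member-status    # lists this node as "valid" in a single-node ring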
Access the HTTP interface from a browser:
http://192.168.1.111:8098/riak/test
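The URL above addresses the bucket test through the /riak HTTP API. A quick round trip with curl, assuming the node is reachable at 192.168.1.111:8098 and the HTTP listener is bound to a non-loopback address as noted above (the key name hello and its value are arbitrary):
curl -X PUT -H "Content-Type: text/plain" -d "hello riak" http://192.168.1.111:8098/riak/test/hello
curl http://192.168.1.111:8098/riak/test/hello    # returns "hello riak"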
