Installing GlusterFS - a Quick Start Guide
Purpose of this document
This document is intended to give you a step-by-step guide to setting up GlusterFS for the first time. For this tutorial, we will assume you are using Fedora 21 (or later) virtual machines (other distributions and methods can be found in the New User Guide, below). We also do not explain the steps in detail here; this guide is just to help you get GlusterFS up and running as quickly as possible. After you deploy GlusterFS by following these steps, we recommend that you read the GlusterFS Admin Guide to learn how to administer GlusterFS and how to select a volume type that fits your needs. Read the GlusterFS New User Guide for a more detailed explanation of the steps we take here. We want you to be successful in as short a time as possible.
If you would like a more detailed walkthrough, with instructions for installing using different methods (local virtual machines, EC2 and bare metal) and different distributions, then have a look at the Install Guide.
Automatically deploying GlusterFS with Puppet-Gluster+Vagrant
If you'd like to deploy GlusterFS automatically using Puppet-Gluster+Vagrant, have a look at this article.
Step 1 - Have at least two nodes
- Fedora 21 (or later) on two nodes named "server1" and "server2"
- A working network connection
- At least two virtual disks, one for the OS installation, and one to be used to serve GlusterFS storage (sdb). This will emulate a real world deployment, where you would want to separate GlusterFS storage from the OS install.
- Note: GlusterFS stores its dynamically generated configuration files at /var/lib/glusterd. If at any point in time GlusterFS is unable to write to these files (for example, when the backing filesystem is full), it will at minimum cause erratic behavior for your system; or worse, take your system offline completely. It is advisable to create separate partitions for directories such as /var/log to ensure this does not happen.
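As a quick, optional sanity check related to the note above (not part of the official steps), you can confirm how much free space the filesystems backing /var (where /var/lib/glusterd will live) and /var/log have:

df -h /var /var/log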
Step 2 - Format and mount the bricks
(on both nodes) Note: these examples assume the brick will reside on /dev/sdb1.
mkfs.xfs -i size=512 /dev/sdb1
mkdir -p /data/brick1
echo '/dev/sdb1 /data/brick1 xfs defaults 1 2' >> /etc/fstab
mount -a && mount
You should now see sdb1 mounted at /data/brick1
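If you want to double-check the result, the following optional commands (a small sketch, not part of the original steps) show the filesystem type and the fstab entry you just added:

df -Th /data/brick1     # should report an xfs filesystem on /dev/sdb1
grep brick1 /etc/fstab  # should show the entry appended above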
Step 3 - Installing GlusterFS
(on both servers) Install the software
yum install glusterfs-server
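Note: Fedora 22 and later use dnf rather than yum as the default package manager. If yum is not present on your systems, the equivalent command should be:

dnf install glusterfs-server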
Start the GlusterFS management daemon:
service glusterd start
service glusterd status
glusterd.service - LSB: glusterfs server
Loaded: loaded (/etc/rc.d/init.d/glusterd)
Active: active (running) since Mon, 13 Aug 2012 13:02:11 -0700; 2s ago
Process: 19254 ExecStart=/etc/rc.d/init.d/glusterd start (code=exited, status=0/SUCCESS)
CGroup: name=systemd:/system/glusterd.service
├ 19260 /usr/sbin/glusterd -p /run/glusterd.pid
├ 19304 /usr/sbin/glusterfsd --xlator-option georep-server.listen-port=24009 -s localhost...
└ 19309 /usr/sbin/glusterfs -f /var/lib/glusterd/nfs/nfs-server.vol -p /var/lib/glusterd/...
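On systemd-based distributions such as Fedora 21 you can also manage glusterd with systemctl, and enabling the unit is an optional convenience so the daemon starts automatically after a reboot:

systemctl start glusterd
systemctl enable glusterd   # start glusterd automatically at boot
systemctl status glusterd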
Step 4 - Configure the trusted pool
From "server1"
gluster peer probe server2
Note: When using hostnames, the first server needs to be probed from one other server to set its hostname.
From "server2"
gluster peer probe server1
Note: Once this pool has been established, only trusted members may probe new servers into the pool. A new server cannot probe the pool, it must be probed from the pool.
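To verify that the peers can see each other, run the following on either server. The output should look roughly like the sketch below (the UUID shown here is a placeholder; yours will differ):

gluster peer status

Number of Peers: 1

Hostname: server2
Uuid: <peer-uuid>
State: Peer in Cluster (Connected)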
Step 5 - Set up a GlusterFS volume
On both server1 and server2:
mkdir /data/brick1/gv0
From any single server:
gluster volume create gv0 replica 2 server1:/data/brick1/gv0 server2:/data/brick1/gv0
gluster volume start gv0
Confirm that the volume shows "Started":
gluster volume info
Note: If the volume is not started, clues as to what went wrong will be in log files under /var/log/glusterfs on one or both of the servers - usually in etc-glusterfs-glusterd.vol.log
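For a healthy two-brick replicated volume, the gluster volume info output should look roughly like the following sketch (exact fields vary between GlusterFS versions):

Volume Name: gv0
Type: Replicate
Status: Started
Number of Bricks: 1 x 2 = 2
Transport-type: tcp
Bricks:
Brick1: server1:/data/brick1/gv0
Brick2: server2:/data/brick1/gv0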
Step 6 - Testing the GlusterFS volume
For this step, we will use one of the servers to mount the volume. Typically, you would do this from an external machine, known as a "client". Since doing that would require additional packages to be installed on the client machine, we will use one of the servers as a simple place to test first, as if it were a client.
mount -t glusterfs server1:/gv0 /mnt
for i in `seq -w 1 100`; do cp -rp /var/log/messages /mnt/copy-test-$i; done
First, check the mount point:
ls -lA /mnt/copy-test-* | wc -l
You should see 100 files returned. Next, check the GlusterFS brick directories on each server:
ls -lA /data/brick1/gv0
Because we created a replicated volume, you should see 100 files on each server. With a distribute-only volume (not detailed here), you would instead see about 50 files on each one.
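As an extra, optional check that replication really wrote identical data to both bricks, you could compare checksums of one of the test files on the client mount and on each brick (a sketch assuming the copy loop above created copy-test-001):

# on server1, which also has the volume mounted at /mnt
md5sum /mnt/copy-test-001 /data/brick1/gv0/copy-test-001
# on server2
md5sum /data/brick1/gv0/copy-test-001

All three checksums should match.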
Terminology you should be familiar with:
- Brick: a directory on a server that is used as a building block of a volume (for example, server1:/data/brick1/gv0).
- Volume: a logical collection of bricks; this is what clients mount (gv0 in this guide).
- Trusted storage pool: the set of servers (peers) that have been probed into the cluster with gluster peer probe.
- glusterd: the GlusterFS management daemon, which must be running on every server in the pool.