Installing GlusterFS - a Quick Start Guide
Purpose of this document
This document is intended to give you a step-by-step guide to setting up GlusterFS for the first time. For this tutorial, we will assume you are using Fedora 21 (or later) virtual machines (other distributions and methods can be found in the New User Guide, below). We also do not explain the steps in detail here; this guide is just to help you get it up and running as soon as possible. After you deploy GlusterFS by following these steps, we recommend that you read the GlusterFS Admin Guide to learn how to administer GlusterFS and how to select a volume type that fits your needs. Read the GlusterFS New User Guide for a more detailed explanation of the steps we took here. We want you to be successful in as short a time as possible.
If you would like a more detailed walkthrough with instructions for installing using different methods (on local virtual machines, EC2, and bare metal) and different distributions, then have a look at the Install Guide.
Automatically deploying GlusterFS with Puppet-Gluster+Vagrant
If you'd like to deploy GlusterFS automatically using Puppet-Gluster+Vagrant, have a look at this article.
Step 1 - Have at least two nodes
- Fedora 21 (or later) on two nodes named "server1" and "server2"
- A working network connection
- At least two virtual disks, one for the OS installation, and one to be used to serve GlusterFS storage (sdb). This will emulate a real world deployment, where you would want to separate GlusterFS storage from the OS install.
- Note: GlusterFS stores its dynamically generated configuration files at /var/lib/glusterd. If at any point in time GlusterFS is unable to write to these files (for example, when the backing filesystem is full), it will at minimum cause erratic behavior for your system; or worse, take your system offline completely. It is advisable to create separate partitions for directories such as /var/log to ensure this does not happen.
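As a quick sanity check (not part of the original steps), you can confirm that the filesystems backing these directories have free space before you proceed, assuming a standard filesystem layout:
df -h /var/lib /var/log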
Step 2 - Format and mount the bricks
(on both nodes): Note: These examples assume the brick will reside on /dev/sdb1.
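If you are not sure of the device name on your machines, you can list the block devices first (an optional check, assuming a standard Fedora install where lsblk is available):
lsblk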
mkfs.xfs -i size=512 /dev/sdb1
mkdir -p /data/brick1
echo '/dev/sdb1 /data/brick1 xfs defaults 1 2' >> /etc/fstab
mount -a && mount
You should now see sdb1 mounted at /data/brick1
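If you want to confirm the mount and filesystem type explicitly (an optional check, not in the original steps):
df -hT /data/brick1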
Step 3 - Installing GlusterFS
(on both servers) Install the software
yum install glusterfs-server
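Note: on Fedora 22 and later, dnf replaces yum as the default package manager, so the equivalent command (assuming the package name is unchanged) would be:
dnf install glusterfs-server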
Start the GlusterFS management daemon:
service glusterd start
service glusterd status
glusterd.service - LSB: glusterfs server
Loaded: loaded (/etc/rc.d/init.d/glusterd)
Active: active (running) since Mon, 13 Aug 2012 13:02:11 -0700; 2s ago
Process: 19254 ExecStart=/etc/rc.d/init.d/glusterd start (code=exited, status=0/SUCCESS)
CGroup: name=systemd:/system/glusterd.service
├ 19260 /usr/sbin/glusterd -p /run/glusterd.pid
├ 19304 /usr/sbin/glusterfsd --xlator-option georep-server.listen-port=24009 -s localhost...
└ 19309 /usr/sbin/glusterfs -f /var/lib/glusterd/nfs/nfs-server.vol -p /var/lib/glusterd/...
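Your output may differ. On releases where glusterd ships with a native systemd unit, the equivalent commands (assuming systemd is in use) would be:
systemctl start glusterd
systemctl enable glusterd
systemctl status glusterd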
Step 4 - Configure the trusted pool
From "server1"
gluster peer probe server2
Note: When using hostnames, the first server needs to be probed from one other server to set its hostname.
From "server2"
gluster peer probe server1
Note: Once this pool has been established, only trusted members may probe new servers into the pool. A new server cannot probe the pool, it must be probed from the pool.
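To verify that the two nodes now see each other, you can run the following from either server (an extra check, not shown in the original steps) and look for the peer in the "Peer in Cluster" state:
gluster peer status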
Step 5 - Set up a GlusterFS volume
On both server1 and server2:
mkdir /data/brick1/gv0
From any single server:
gluster volume create gv0 replica 2 server1:/data/brick1/gv0 server2:/data/brick1/gv0
gluster volume start gv0
Confirm that the volume shows "Started":
gluster volume info
Note: If the volume is not started, clues as to what went wrong will be in log files under /var/log/glusterfs on one or both of the servers - usually in etc-glusterfs-glusterd.vol.log
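As an additional check (not part of the original steps), the brick processes and ports for the volume can be inspected with:
gluster volume status gv0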
Step 6 - Testing the GlusterFS volume
For this step, we will use one of the servers to mount the volume. Typically, you would do this from an external machine, known as a "client". Since doing so would require additional packages to be installed on the client machine, we will use one of the servers as a simple place to test first, as if it were that "client".
mount -t glusterfs server1:/gv0 /mnt
for i in `seq -w 1 100`; do cp -rp /var/log/messages /mnt/copy-test-$i; done
First, check the mount point:
ls -lA /mnt/copy* | wc -l
You should see 100 files returned. Next, check the GlusterFS mount points on each server:
ls -lA /data/brick1/gv0
You should see 100 files on each server using the method we listed here. Without replication, in a distribute only volume (not detailed here), you should see about 50 files on each one.
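A quick way to compare the counts on the bricks (an optional check, run on each server):
ls /data/brick1/gv0 | wc -l
With the replica 2 volume created above, both servers should report 100; a distribute-only volume would show the files split between the two bricks.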