Environment:

System                   IP address        Hostname (login user)              Roles
CentOS 7.4 64Bit 1611    10.199.100.170    dlp (yzyu), ceph-client (root)     admin-node, ceph-client
CentOS 7.4 64Bit 1611    10.199.100.171    node1 (yzyu), one extra data disk  mon-node, osd0-node, mds-node
CentOS 7.4 64Bit 1611    10.199.100.172    node2 (yzyu), one extra data disk  mon-node, osd1-node

  • Configure the basic environment

[root@dlp ~]# useradd yzyu
[root@dlp ~]# echo "dhhy" |passwd --stdin dhhy
[root@dlp ~]# cat <<END >>/etc/hosts
10.199.100.170 dlp
10.199.100.171 node1
10.199.100.172 node2
END
[root@dlp ~]# echo "yzyu ALL = (root) NOPASSWD:ALL" >> /etc/sudoers.d/yzyu
[root@dlp ~]# chmod 0440 /etc/sudoers.d/yzyu
[root@node1 ~]# useradd yzyu
[root@node1 ~]# echo "yzyu" |passwd --stdin yzyu
[root@node1 ~]# cat <<END >>/etc/hosts
10.199.100.170 dlp
10.199.100.171 node1
10.199.100.172 node2
END
[root@node1 ~]# echo "yzyu ALL = (root) NOPASSWD:ALL" >> /etc/sudoers.d/yzyu
[root@node1 ~]# chmod 0440 /etc/sudoers.d/yzyu

[root@node2 ~]# useradd yzyu
[root@node2 ~]# echo "yzyu" |passwd --stdin yzyu
[root@node2 ~]# cat <<END >>/etc/hosts
10.199.100.170 dlp
10.199.100.171 node1
10.199.100.172 node2
END
[root@node2 ~]# echo "yzyu ALL = (root) NOPASSWD:ALL" >> /etc/sudoers.d/yzyu
[root@node2 ~]# chmod 0440 /etc/sudoers.d/yzyu
  • Configure the NTP time service

[root@dlp ~]# yum -y install ntp ntpdate
[root@dlp ~]# sed -i '/^server/s/^/#/g' /etc/ntp.conf
[root@dlp ~]# sed -i '25aserver 127.127.1.0\nfudge 127.127.1.0 stratum 8' /etc/ntp.conf
[root@dlp ~]# systemctl start ntpd
[root@dlp ~]# systemctl enable ntpd
[root@dlp ~]# netstat -lntup
[root@node1 ~]# yum -y install ntpdate
[root@node1 ~]# /usr/sbin/ntpdate 10.199.100.170
[root@node1 ~]# echo "/usr/sbin/ntpdate 10.199.100.170" >>/etc/rc.local
[root@node1 ~]# chmod +x /etc/rc.local
[root@node2 ~]# yum -y install ntpdate
[root@node2 ~]# /usr/sbin/ntpdate 10.199.100.170
[root@node2 ~]# echo "/usr/sbin/ntpdate 10.199.100.170" >>/etc/rc.local
[root@node2 ~]# chmod +x /etc/rc.local
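
To confirm that the node hosts really follow the dlp clock, a quick check along these lines can be run (ntpq comes with the ntp package installed above; ntpdate -q only queries the server and does not step the clock):

[root@dlp ~]# ntpq -p                                 ## the local clock source should appear and be selected
[root@node1 ~]# /usr/sbin/ntpdate -q 10.199.100.170   ## query only; the reported offset should be close to zero
[root@node2 ~]# /usr/sbin/ntpdate -q 10.199.100.170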
  • Install Ceph on the dlp, node1 and node2 nodes

[root@dlp ~]# yum -y install yum-utils
[root@dlp ~]# yum-config-manager --add-repo https://dl.fedoraproject.org/pub/epel/7/x86_64/
[root@dlp ~]# yum -y install epel-release --nogpgcheck
[root@dlp ~]# cat <<END >>/etc/yum.repos.d/ceph.repo
[Ceph]
name=Ceph packages for \$basearch
baseurl=http://mirrors.163.com/ceph/rpm-jewel/el7/\$basearch
enabled=1
gpgcheck=1
type=rpm-md
gpgkey=https://mirrors.163.com/ceph/keys/release.asc
priority=1

[Ceph-noarch]
name=Ceph noarch packages
baseurl=http://mirrors.163.com/ceph/rpm-jewel/el7/noarch
enabled=1
gpgcheck=1
type=rpm-md
gpgkey=https://mirrors.163.com/ceph/keys/release.asc
priority=1

[ceph-source]
name=Ceph source packages
baseurl=http://mirrors.163.com/ceph/rpm-jewel/el7/SRPMS
enabled=1
gpgcheck=1
type=rpm-md
gpgkey=https://mirrors.163.com/ceph/keys/release.asc
priority=1
END

[root@dlp ~]# ls /etc/yum.repos.d/  ## the default CentOS repos must still be present; installation needs them together with the epel repo and the 163 ceph repo

bak                    CentOS-fasttrack.repo  ceph.repo

CentOS-Base.repo       CentOS-Media.repo      dl.fedoraproject.org_pub_epel_7_x86_64_.repo

CentOS-CR.repo         CentOS-Sources.repo    epel.repo

CentOS-Debuginfo.repo  CentOS-Vault.repo      epel-testing.repo

[root@dlp ~]# su - yzyu
[yzyu@dlp ~]$ mkdir ceph-cluster ## create the ceph working directory
[yzyu@dlp ~]$ cd ceph-cluster
[yzyu@dlp ceph-cluster]$ sudo yum -y install ceph-deploy ## install the ceph deployment tool
[yzyu@dlp ceph-cluster]$ sudo yum -y install ceph --nogpgcheck ## install the ceph packages
[root@node1 ~]# yum -y install yum-utils
[root@node1 ~]# yum-config-manager --add-repo https://dl.fedoraproject.org/pub/epel/7/x86_64/
[root@node1 ~]# yum -y install epel-release --nogpgcheck
[root@node1 ~]# cat <<END >>/etc/yum.repos.d/ceph.repo
[Ceph]
name=Ceph packages for \$basearch
baseurl=http://mirrors.163.com/ceph/rpm-jewel/el7/\$basearch
enabled=1
gpgcheck=1
type=rpm-md
gpgkey=https://mirrors.163.com/ceph/keys/release.asc
priority=1

[Ceph-noarch]
name=Ceph noarch packages
baseurl=http://mirrors.163.com/ceph/rpm-jewel/el7/noarch
enabled=1
gpgcheck=1
type=rpm-md
gpgkey=https://mirrors.163.com/ceph/keys/release.asc
priority=1

[ceph-source]
name=Ceph source packages
baseurl=http://mirrors.163.com/ceph/rpm-jewel/el7/SRPMS
enabled=1
gpgcheck=1
type=rpm-md
gpgkey=https://mirrors.163.com/ceph/keys/release.asc
priority=1
END

[root@node1 ~]# su - yzyu
[yzyu@node1 ~]$ mkdir ceph-cluster
[yzyu@node1 ~]$ cd ceph-cluster
[yzyu@node1 ceph-cluster]$ sudo yum -y install ceph-deploy
[yzyu@node1 ceph-cluster]$ sudo yum -y install ceph --nogpgcheck
[yzyu@node1 ceph-cluster]$ sudo yum -y install deltarpm

[root@node2 ~]# yum -y install yum-utils
[root@node2 ~]# yum-config-manager --add-repo https://dl.fedoraproject.org/pub/epel/7/x86_64/
[root@node2 ~]# yum -y install epel-release --nogpgcheck
[root@node2 ~]# cat <<END >>/etc/yum.repos.d/ceph.repo
[Ceph]
name=Ceph packages for \$basearch
baseurl=http://mirrors.163.com/ceph/rpm-jewel/el7/\$basearch
enabled=1
gpgcheck=1
type=rpm-md
gpgkey=https://mirrors.163.com/ceph/keys/release.asc
priority=1

[Ceph-noarch]
name=Ceph noarch packages
baseurl=http://mirrors.163.com/ceph/rpm-jewel/el7/noarch
enabled=1
gpgcheck=1
type=rpm-md
gpgkey=https://mirrors.163.com/ceph/keys/release.asc
priority=1

[ceph-source]
name=Ceph source packages
baseurl=http://mirrors.163.com/ceph/rpm-jewel/el7/SRPMS
enabled=1
gpgcheck=1
type=rpm-md
gpgkey=https://mirrors.163.com/ceph/keys/release.asc
priority=1
END

[root@node2 ~]# su - yzyu
[yzyu@node2 ~]$ mkdir ceph-cluster
[yzyu@node2 ~]$ cd ceph-cluster
[yzyu@node2 ceph-cluster]$ sudo yum -y install ceph-deploy
[yzyu@node2 ceph-cluster]$ sudo yum -y install ceph --nogpgcheck
[yzyu@node2 ceph-cluster]$ sudo yum -y install deltarpm
  • Manage the node storage hosts from the dlp admin node: distribute keys and register the nodes

[yzyu@dlp ceph-cluster]$ pwd   ## must be run from the ceph-cluster working directory
/home/yzyu/ceph-cluster
[yzyu@dlp ceph-cluster]$ ssh-keygen -t rsa   ## the admin node manages the mon nodes over ssh, so create a key pair and copy the public key to every node
[yzyu@dlp ceph-cluster]$ ssh-copy-id yzyu@dlp
[yzyu@dlp ceph-cluster]$ ssh-copy-id yzyu@node1
[yzyu@dlp ceph-cluster]$ ssh-copy-id yzyu@node2
[yzyu@dlp ceph-cluster]$ ssh-copy-id root@ceph-client
[yzyu@dlp ceph-cluster]$ cat <<END >>/home/yzyu/.ssh/config
Host dlp
Hostname dlp
User yzyu
Host node1
Hostname node1
User yzyu
Host node2
Hostname node2
User yzyu
END
[yzyu@dlp ceph-cluster]$ chmod 600 /home/yzyu/.ssh/config
[yzyu@dlp ceph-cluster]$ ceph-deploy new node1 node2   ## initialize the cluster with node1 and node2 as the initial monitors

[yzyu@dlp ceph-cluster]$ cat <<END >>/home/yzyu/ceph-cluster/ceph.conf
osd pool default size = 2
END
[yzyu@dlp ceph-cluster]$ ceph-deploy install node1 node2 ## install ceph on the nodes

  • Configure the Ceph mon (monitor) daemons

[yzyu@dlp ceph-cluster]$ ceph-deploy mon create-initial   ## initialize the mon nodes

Note: each node keeps its configuration under /etc/ceph/; it is synchronized automatically from the dlp admin node.
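
After create-initial completes it gathers the cluster keyrings into the working directory on dlp; a listing there should look roughly like the following (these are the standard jewel file names, shown as a reference rather than captured output):

[yzyu@dlp ceph-cluster]$ ls /home/yzyu/ceph-cluster/
ceph.conf  ceph.mon.keyring  ceph.client.admin.keyring  ceph.bootstrap-mds.keyring
ceph.bootstrap-osd.keyring  ceph.bootstrap-rgw.keyring  ceph-deploy-ceph.log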

 

  • Configure the Ceph osd storage

Configure the osd1 storage device on node1:

[yzyu@node1 ~]$ sudo fdisk /dev/sdc   ## partition each data disk (/dev/sdc through /dev/sdz), creating one partition per disk
[yzyu@node1 ~]$ sudo pvcreate /dev/sdc1 /dev/sdd1 /dev/sde1 /dev/sdf1 /dev/sdg1 /dev/sdh1 /dev/sdi1 /dev/sdj1 /dev/sdk1 /dev/sdl1 /dev/sdm1 /dev/sdn1 /dev/sdo1 /dev/sdp1 /dev/sdq1 /dev/sdr1 /dev/sds1 /dev/sdt1 /dev/sdu1 /dev/sdv1 /dev/sdw1 /dev/sdx1 /dev/sdy1 /dev/sdz1 ## create the physical volumes
[yzyu@node1 ~]$ sudo vgcreate vg1 /dev/sdc1 /dev/sdd1 /dev/sde1 /dev/sdf1 /dev/sdg1 /dev/sdh1 /dev/sdi1 /dev/sdj1 /dev/sdk1 /dev/sdl1 /dev/sdm1 /dev/sdn1 /dev/sdo1 /dev/sdp1 /dev/sdq1 /dev/sdr1 /dev/sds1 /dev/sdt1 /dev/sdu1 /dev/sdv1 /dev/sdw1 /dev/sdx1 /dev/sdy1 /dev/sdz1 ## create the volume group
[yzyu@node1 ~]$ sudo lvcreate -L 130T -n lv1 vg1 ## carve out the logical volume
[yzyu@node1 ~]$ sudo mkfs.xfs /dev/vg1/lv1 ## format it as xfs
[yzyu@node1 ~]$ sudo mkdir /var/local/osd1
[yzyu@node1 ~]$ sudo vi /etc/fstab
/dev/vg1/lv1 /var/local/osd1 xfs defaults 0 0
:wq
[yzyu@node1 ~]$ sudo mount -a
[yzyu@node1 ~]$ sudo chmod 777 /var/local/osd1
[yzyu@node1 ~]$ sudo chown ceph:ceph /var/local/osd1/
[yzyu@node1 ~]$ ls -ld /var/local/osd1/
[yzyu@node1 ~]$ df -hT
[yzyu@node1 ~]$ exit

Configure the osd2 storage device on node2:

[yzyu@node2 ~]$ sudo fdisk /dev/sdc   ## partition each data disk (/dev/sdc through /dev/sdz), creating one partition per disk
[yzyu@node2 ~]$ sudo pvcreate /dev/sdc1 /dev/sdd1 /dev/sde1 /dev/sdf1 /dev/sdg1 /dev/sdh1 /dev/sdi1 /dev/sdj1 /dev/sdk1 /dev/sdl1 /dev/sdm1 /dev/sdn1 /dev/sdo1 /dev/sdp1 /dev/sdq1 /dev/sdr1 /dev/sds1 /dev/sdt1 /dev/sdu1 /dev/sdv1 /dev/sdw1 /dev/sdx1 /dev/sdy1 /dev/sdz1
[yzyu@node2 ~]$ sudo vgcreate vg2 /dev/sdc1 /dev/sdd1 /dev/sde1 /dev/sdf1 /dev/sdg1 /dev/sdh1 /dev/sdi1 /dev/sdj1 /dev/sdk1 /dev/sdl1 /dev/sdm1 /dev/sdn1 /dev/sdo1 /dev/sdp1 /dev/sdq1 /dev/sdr1 /dev/sds1 /dev/sdt1 /dev/sdu1 /dev/sdv1 /dev/sdw1 /dev/sdx1 /dev/sdy1 /dev/sdz1
[yzyu@node2 ~]$ sudo lvcreate -L 130T -n lv2 vg2
[yzyu@node2 ~]$ sudo mkfs.xfs /dev/vg2/lv2
[yzyu@node2 ~]$ sudo mkdir /var/local/osd2
[yzyu@node2 ~]$ sudo vi /etc/fstab
/dev/vg2/lv2 /var/local/osd2 xfs defaults 0 0
:wq
[yzyu@node2 ~]$ sudo mount -a
[yzyu@node2 ~]$ sudo chmod 777 /var/local/osd2
[yzyu@node2 ~]$ sudo chown ceph:ceph /var/local/osd2/
[yzyu@node2 ~]$ ls -ld /var/local/osd2/
[yzyu@node2 ~]$ df -hT
[yzyu@node2 ~]$ exit

Register the node OSDs from the dlp admin node:

[yzyu@dlp ceph-cluster]$ ceph-deploy osd prepare node1:/var/local/osd1 node2:/var/local/osd2   ## prepare the osd nodes, pointing at each node's storage directory

[yzyu@dlp ceph-cluster]$ chmod +r /home/yzyu/ceph-cluster/ceph.client.admin.keyring
[yzyu@dlp ceph-cluster]$ ceph-deploy osd activate node1:/var/local/osd1 node2:/var/local/osd2   ## activate the osd nodes

[yzyu@dlp ceph-cluster]$ ceph-deploy admin node1 node2   ## push the admin keyring to the node hosts

[yzyu@dlp ceph-cluster]$ sudo cp /home/yzyu/ceph-cluster/ceph.client.admin.keyring /etc/ceph/
[yzyu@dlp ceph-cluster]$ sudo cp /home/yzyu/ceph-cluster/ceph.conf /etc/ceph/
[yzyu@dlp ceph-cluster]$ ls /etc/ceph/
ceph.client.admin.keyring ceph.conf rbdmap
[yzyu@dlp ceph-cluster]$ ceph quorum_status --format json-pretty   ## show detailed cluster quorum information
  • Verify the Ceph cluster status

[yzyu@dlp ceph-cluster]$ ceph health

HEALTH_OK

[yzyu@dlp ceph-cluster]$ ceph -s   ## check the Ceph cluster status

cluster 24fb6518-8539-4058-9c8e-d64e43b8f2e2

health HEALTH_OK

monmap e1: 2 mons at {node1=10.199.100.171:6789/0,node2=10.199.100.172:6789/0}

election epoch 6, quorum 0,1 node1,node2

osdmap e10: 2 osds: 2 up, 2 in

flags sortbitwise,require_jewel_osds

pgmap v20: 64 pgs, 1 pools, 0 bytes data, 0 objects

10305 MB used, 30632 MB / 40938 MB avail   ## used / free / total capacity

64 active+clean

[yzyu@dlp ceph-cluster]$ ceph osd tree

ID WEIGHT  TYPE NAME      UP/DOWN REWEIGHT PRIMARY-AFFINITY

-1 0.03897 root default

-2 0.01949     host node1

0 0.01949         osd.0       up  1.00000          1.00000

-3 0.01949     host node2

1 0.01949         osd.1       up  1.00000          1.00000

[yzyu@dlp ceph-cluster]$ ssh yzyu@node1   ## check node1's listening ports, configuration files and disk usage

[yzyu@node1 ~]$ df -hT |grep lv1

/dev/vg1/lv1                   xfs        20G  5.1G   15G   26% /var/local/osd1

[yzyu@node1 ~]$ du -sh /var/local/osd1/

5.1G /var/local/osd1/

[yzyu@node1 ~]$ ls /var/local/osd1/

activate.monmap  active  ceph_fsid  current  fsid  journal  keyring  magic  ready  store_version  superblock  systemd  type  whoami

[yzyu@node1 ~]$ ls /etc/ceph/

ceph.client.admin.keyring  ceph.conf  rbdmap  tmppVBe_2

[yzyu@node1 ~]$ cat /etc/ceph/ceph.conf

[global]

fsid = 0fcdfa46-c8b7-43fc-8105-1733bce3bfeb

mon_initial_members = node1, node2

mon_host = 10.199.100.171,10.199.100.172

auth_cluster_required = cephx

auth_service_required = cephx

auth_client_required = cephx

osd pool default size = 2

[yzyu@dlp ceph-cluster]$ ssh yzyu@node2   ## check node2's listening ports, configuration files and disk usage

[yzyu@node2 ~]$ df -hT |grep lv2

/dev/vg2/lv2                   xfs        20G  5.1G   15G   26% /var/local/osd2

[yzyu@node2 ~]$ du -sh /var/local/osd2/

5.1G /var/local/osd2/

[yzyu@node2 ~]$ ls /var/local/osd2/

activate.monmap  active  ceph_fsid  current  fsid  journal  keyring  magic  ready  store_version  superblock  systemd  type  whoami

[yzyu@node2 ~]$ ls /etc/ceph/

ceph.client.admin.keyring  ceph.conf  rbdmap  tmpmB_BTa

[yzyu@node2 ~]$ cat /etc/ceph/ceph.conf

[global]

fsid = 0fcdfa46-c8b7-43fc-8105-1733bce3bfeb

mon_initial_members = node1, node2

mon_host = 10.199.100.171,10.199.100.172

auth_cluster_required = cephx

auth_service_required = cephx

auth_client_required = cephx

osd pool default size = 2

  • Configure the Ceph mds (metadata) daemon

[yzyu@dlp ceph-cluster]$ ceph-deploy mds create node1
[yzyu@dlp ceph-cluster]$ ssh yzyu@node1
[yzyu@node1 ~]$ netstat -utpln |grep 68   ## the mon listens on 6789, the osd and mds daemons on 6800 and up; run as root to also see process names
(No info could be read for "-p": geteuid()= but you should be root.)
(output: several TCP LISTEN sockets belonging to the ceph mon, osd and mds daemons on node1)
[yzyu@node1 ~]$ exit
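
A simpler cross-check than netstat is to ask the cluster itself about the metadata server (run from any host holding the admin keyring); note that the mds stays in standby until the file system is created in the next step:

[yzyu@dlp ceph-cluster]$ ceph mds stat   ## lists the mds on node1; it reports up:active once the cephfs file system below exists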
  • Configure the Ceph client

[yzyu@dlp ceph-cluster]$ ceph-deploy install ceph-client   ## you will be prompted for the client's password

[yzyu@dlp ceph-cluster]$ ceph-deploy admin ceph-client

[yzyu@dlp ceph-cluster]$ su -
[root@dlp ~]# chmod +r /etc/ceph/ceph.client.admin.keyring
[root@dlp ~]# exit
[yzyu@dlp ceph-cluster]$ ceph osd pool create cephfs_data 128    ## data pool (128 placement groups; adjust to the number of OSDs)
pool 'cephfs_data' created
[yzyu@dlp ceph-cluster]$ ceph osd pool create cephfs_metadata 128   ## metadata pool
pool 'cephfs_metadata' created
[yzyu@dlp ceph-cluster]$ ceph fs new cephfs cephfs_metadata cephfs_data   ## create the file system (metadata pool first, then data pool)
new fs with metadata pool and data pool
[yzyu@dlp ceph-cluster]$ ceph fs ls   ## list file systems
name: cephfs, metadata pool: cephfs_metadata, data pools: [cephfs_data ]
[yzyu@dlp ceph-cluster]$ ceph -s
cluster 24fb6518-8539-4058-9c8e-d64e43b8f2e2
health HEALTH_WARN
clock skew detected on mon.node2
too many PGs per OSD (... > max ...)
Monitor clock skew detected
monmap e1: 2 mons at {node1=10.199.100.171:6789/0,node2=10.199.100.172:6789/0}
election epoch ..., quorum 0,1 node1,node2
fsmap e5: 1/1/1 up {0=node1=up:active}
osdmap e17: 2 osds: 2 up, 2 in
flags sortbitwise,require_jewel_osds
pgmap v54: ... pgs, ... pools, ... bytes data, ... objects
... MB used, ... MB / ... MB avail
... active+clean
  • Test storage from the Ceph client

[root@ceph-client ~]# mkdir /mnt/ceph

[root@ceph-client ~]# grep key /etc/ceph/ceph.client.admin.keyring |awk '{print $3}' >>/etc/ceph/admin.secret

[root@ceph-client ~]# cat /etc/ceph/admin.secret

AQCd/x9bsMqKFBAAZRNXpU5QstsPlfe1/FvPtQ==

[root@ceph-client ~]# mount -t ceph 10.199.100.171:6789:/  /mnt/ceph/ -o name=admin,secretfile=/etc/ceph/admin.secret

[root@ceph-client ~]# df -hT |grep ceph

10.199.100.171:6789:/      ceph       40G   11G   30G   26% /mnt/ceph

[root@ceph-client ~]# dd if=/dev/zero of=/mnt/ceph/1.file bs=1G count=1

1+0 records in

1+0 records out

1073741824 bytes (1.1 GB) copied, 14.2938 s, 75.1 MB/s

[root@ceph-client ~]# ls /mnt/ceph/

1.file
[root@ceph-client ~]# df -hT |grep ceph

10.199.100.171:6789:/      ceph       40G   13G   28G   33% /mnt/ceph

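To make the client mount persist across reboots, an /etc/fstab entry along the following lines can be added; the _netdev option is assumed here so the mount is only attempted once the network is up, and the paths match the ones used above:

[root@ceph-client ~]# echo "10.199.100.171:6789:/ /mnt/ceph ceph name=admin,secretfile=/etc/ceph/admin.secret,_netdev,noatime 0 0" >>/etc/fstab
[root@ceph-client ~]# mount -a   ## verify that the entry mounts cleanly
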
  • Troubleshooting notes

1. If something goes wrong during configuration and the cluster has to be rebuilt or ceph reinstalled, wipe the existing cluster data first with the following commands:

[yzyu@dlp ceph-cluster]$ ceph-deploy purge node1 node2

[yzyu@dlp ceph-cluster]$ ceph-deploy purgedata node1 node2

[yzyu@dlp ceph-cluster]$ ceph-deploy forgetkeys && rm ceph.*

2. When the dlp node installs ceph on the node hosts and the client, yum may time out; this is usually a network problem, and re-running the installation command a few times normally gets past it.

3. When ceph-deploy is run on the dlp node to manage the node configuration, the current directory must be /home/yzyu/ceph-cluster/, otherwise it complains that it cannot find the ceph.conf configuration file.

4. On the osd nodes, the data directory /var/local/osd*/ must have mode 777 and be owned by the ceph user and group.

5. The following problem can appear while installing ceph from the dlp admin node:

Workarounds:

1. Reinstall the epel-release package on node1 or node2 with yum;

2. If that still fails, download the package and install it locally, for example with the commands below;
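
A minimal sketch of such a local install, assuming the package is pulled from the official EPEL mirror (the exact file name may differ for the current point release):

[root@node1 ~]# curl -O https://dl.fedoraproject.org/pub/epel/epel-release-latest-7.noarch.rpm
[root@node1 ~]# rpm -ivh epel-release-latest-7.noarch.rpm   ## or: yum -y localinstall epel-release-latest-7.noarch.rpm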

6. If the main configuration file /home/yzyu/ceph-cluster/ceph.conf changes on the dlp admin node, the updated file must be pushed to the node hosts (a sketch of both steps follows below):

After the nodes have received the new configuration file, their ceph daemons need to be restarted:
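
A sketch of both steps, assuming ceph-deploy's config push subcommand and the systemd targets that the jewel packages install (adjust the target list to the daemons each node actually runs):

[yzyu@dlp ceph-cluster]$ ceph-deploy --overwrite-conf config push node1 node2   ## push the updated ceph.conf to the nodes
[yzyu@node1 ~]$ sudo systemctl restart ceph-mon.target ceph-osd.target ceph-mds.target
[yzyu@node2 ~]$ sudo systemctl restart ceph-mon.target ceph-osd.target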

7. If the cluster status checked from the dlp admin node shows a clock-skew warning like the one above, the node clocks have drifted apart.

Restart the ntpd service on the dlp node and re-synchronize the time on the node hosts, as shown below:
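
For example, reusing the ntp setup from the beginning of this document:

[root@dlp ~]# systemctl restart ntpd
[root@node1 ~]# /usr/sbin/ntpdate 10.199.100.170
[root@node2 ~]# /usr/sbin/ntpdate 10.199.100.170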

8. When managing the node hosts from the dlp admin node, always work from /home/yzyu/ceph-cluster/, otherwise ceph-deploy complains that it cannot find the ceph.conf main configuration file.
