1. Access interfaces to the Ceph storage cluster
 
1.1 Ceph block device interface (RBD)
A Ceph block device, also known as a RADOS block device (RBD for short), is a thin-provisioned, resizable storage facility that stripes data across the RADOS storage cluster; clients interact with the OSDs through the librbd library. RBD provides a high-performance, virtually unlimited storage backend for virtualization technologies such as KVM and for cloud operating systems such as OpenStack and CloudStack, which integrate with RBD through libvirt and the QEMU utilities.
 
A client can use the RADOS storage cluster as block storage through the librbd library alone; however, the pool intended for rbd use must first have the rbd application enabled and then be initialized. For example, the following commands create a pool named volumes, enable the rbd application on it, and initialize it:
# ceph osd pool create volumes 128
# ceph osd pool application enable volumes rbd
# rbd pool init -p volumes
 
An rbd pool cannot be used as a block device directly, though; images must first be created in it as needed, and it is the image that serves as the block device. The rbd command creates, lists, and deletes the images backing block devices, and also handles management operations such as cloning images, creating snapshots, rolling an image back to a snapshot, and listing snapshots. For example, the following creates an image named img1:
rbd create img1 --size 1024 --pool volumes
 
Details about an image can be retrieved with the "rbd info" command:
rbd --image img1 --pool volumes info
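If an image is to be consumed directly by a Linux client rather than through QEMU/librbd, it can be attached via the kernel RBD driver. A minimal sketch, assuming the client already has /etc/ceph/ceph.conf and the admin keyring in place (the filesystem type and mount point are arbitrary choices for illustration):
# rbd map volumes/img1 --name client.admin     (the image appears as a block device, e.g. /dev/rbd0)
# mkfs.xfs /dev/rbd0
# mount /dev/rbd0 /mnt
# rbd unmap /dev/rbd0                          (detach the device when finished)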
 
 
1.2 Filesystem (CephFS) interface
CephFS requires at least one running metadata server (MDS) daemon (ceph-mds); this daemon manages the metadata of the files stored on CephFS and coordinates access to the Ceph storage cluster. To use the CephFS interface, therefore, at least one MDS instance must be deployed in the cluster. The "ceph-deploy mds create {ceph-node}" command does exactly that; for example, to enable an MDS on ceph-host-02:
[root@ceph-host-01 ceph-cluster]# ceph-deploy mds create ceph-host-02
 
Checking the MDS status shows that the newly added MDS is in standby mode:
[root@ceph-host-01 ceph-cluster]# ceph mds stat
1 up:standby
 
Before CephFS can be used, a filesystem has to be created in the cluster, with separate pools assigned for its metadata and its data. The following creates a filesystem named cephfs for testing, using cephfs-metadata as the metadata pool and cephfs-data as the data pool:
 
[root@ceph-host-01 ceph-cluster]# ceph osd pool create cephfs-metadata 64
[root@ceph-host-01 ceph-cluster]# ceph osd pool create cephfs-data 64
[root@ceph-host-01 ceph-cluster]# ceph fs new cephfs cephfs-metadata cephfs-data
 
The state of the filesystem can then be inspected with "ceph fs status <fsname>", for example:
# ceph fs status cephfs
 
At this point, the MDS status has changed:
# ceph mds stat
cephfs:1 {0=ceph-host-02=up:active}
 
Clients can then mount CephFS through the kernel's cephfs filesystem driver, or interact with the filesystem through the FUSE interface (see the ceph-fuse sketch after the keyring note below):
 
[root@node5 ~]# mkdir /data/ceph-storage/ -p
[root@node5 ~]# chown -R ceph.ceph /data/ceph-storage
[root@node5 ~]# mount -t ceph 10.30.1.221:6789:/ /data/ceph-storage/ -o name=admin,secret=AQA8HzdeFQuPHxAAUfjHnOMSfFu7hHIoGv/x1A==
[root@node5 ~]# mount | grep ceph
10.30.1.221:6789:/ on /data/ceph-storage type ceph (rw,relatime,name=admin,secret=<hidden>,acl,wsize=16777216)
 
Note: how to look up the secret value
# cat /etc/ceph/ceph.client.admin.keyring
[client.admin]
    key = AQA8HzdeFQuPHxAAUfjHnOMSfFu7hHIoGv/x1A==
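The FUSE path mentioned above works much the same way. A minimal sketch, assuming the ceph-fuse package is installed on the client and that /etc/ceph holds ceph.conf plus the admin keyring (monitor address and mount point reused from the kernel-mount example):
# yum install -y ceph-fuse
# ceph-fuse -n client.admin -m 10.30.1.221:6789 /data/ceph-storage/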
 
 
To delete the filesystem and its pools:
ceph fs fail cephfs
ceph fs rm cephfs --yes-i-really-mean-it
ceph osd pool rm cephfs-metadata cephfs-metadata --yes-i-really-really-mean-it
ceph osd pool rm cephfs-data cephfs-data --yes-i-really-really-mean-it
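Note that the monitors refuse pool deletion by default (mon_allow_pool_delete is false). If the two "ceph osd pool rm" commands above are rejected, the guard can be lifted temporarily; a sketch using the centralized config database available on Nautilus:
# ceph config set mon mon_allow_pool_delete true
(run the two pool deletions, then restore the guard)
# ceph config set mon mon_allow_pool_delete false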
 
 
---------------------------------------------------------------------------------------------------------------------------
 
2. Storage space usage
Command: ceph df
The output has two sections: RAW STORAGE and POOLS
   RAW STORAGE: an overview of the cluster's raw storage
   POOLS: per-pool usage
RAW STORAGE section
    SIZE: the overall storage capacity of the cluster
    AVAIL: the amount of free space available in the cluster
    RAW USED: the amount of raw storage consumed
    %RAW USED: the percentage of raw storage consumed. Use this figure together with the full ratio and nearfull ratio to make sure you do not run the cluster out of capacity.
[root@ceph-host-01]# ceph df
RAW STORAGE:
    CLASS     SIZE        AVAIL       USED        RAW USED     %RAW USED
    hdd       1.2 TiB     1.1 TiB     6.1 GiB       21 GiB          1.78
    TOTAL     1.2 TiB     1.1 TiB     6.1 GiB       21 GiB          1.78
POOLS:
    POOL              ID     STORED      OBJECTS     USED        %USED     MAX AVAIL
    nova-metadata      6     4.5 MiB          24      20 MiB         0       276 GiB
    nova-data          7     1.3 GiB         391     5.4 GiB      0.48       276 GiB
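When %RAW USED starts to approach the nearfull ratio, a per-OSD breakdown of the same figures helps locate the most loaded devices; the tree variant groups the output by the CRUSH hierarchy:
# ceph osd df
# ceph osd df tree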
 
 
3. Checking the status of the OSDs and MONs
Check the OSDs with the following commands to make sure they are up and running:
[root@ceph-host-03 ~]# ceph osd stat
15 osds: 15 up (since 22m), 15 in (since 116m); epoch: e417
 
[root@ceph-host-03 ~]# ceph osd dump
epoch 417
fsid 272905d2-fd66-4ef6-a772-9cd73a274683
created 2020-02-03 03:13:00.528959
modified 2020-02-04 19:29:43.906336
flags sortbitwise,recovery_deletes,purged_snapdirs,pglog_hardlimit
crush_version 33
full_ratio 0.95
backfillfull_ratio 0.9
nearfull_ratio 0.85
require_min_compat_client jewel
min_compat_client jewel
require_osd_release nautilus
pool 6 'nova-metadata' replicated size 4 min_size 2 crush_rule 0 object_hash rjenkins pg_num 64 pgp_num 64 autoscale_mode warn last_change 152 flags hashpspool stripe_width 0 pg_autoscale_bias 4 pg_num_min 16 recovery_priority 5 application cephfs
pool 7 'nova-data' replicated size 4 min_size 2 crush_rule 0 object_hash rjenkins pg_num 64 pgp_num 64 autoscale_mode warn last_change 152 flags hashpspool stripe_width 0 application cephfs
max_osd 15
osd.0 up   in  weight 1 up_from 379 up_thru 413 down_at 360 last_clean_interval [328,378) [v2:10.30.1.221:6802/4544,v1:10.30.1.221:6803/4544] [v2:192.168.9.211:6808/2004544,v1:192.168.9.211:6809/2004544] exists,up 5903a2c7-ca1f-4eb8-baff-2583e0db38c8
osd.1 up   in  weight 1 up_from 278 up_thru 413 down_at 277 last_clean_interval [247,268) [v2:10.30.1.222:6802/3716,v1:10.30.1.222:6803/3716] [v2:192.168.9.212:6800/3716,v1:192.168.9.212:6801/3716] exists,up bd1f8700-c318-4a35-a0ac-16b16e9c1179
osd.2 up   in  weight 1 up_from 413 up_thru 413 down_at 406 last_clean_interval [272,411) [v2:10.30.1.223:6802/3882,v1:10.30.1.223:6803/3882] [v2:192.168.9.213:6806/1003882,v1:192.168.9.213:6807/1003882] exists,up 1d4e71da-1956-48bb-bf93-af6c4eae0799
osd.3 up   in  weight 1 up_from 355 up_thru 413 down_at 351 last_clean_interval [275,352) [v2:10.30.1.224:6802/3856,v1:10.30.1.224:6803/3856] [v2:192.168.9.214:6802/3856,v1:192.168.9.214:6803/3856] exists,up ecd3b813-c1d7-4612-8448-a9834af18d8f
osd.4 up   in  weight 1 up_from 400 up_thru 413 down_at 392 last_clean_interval [273,389) [v2:10.30.1.221:6800/6694,v1:10.30.1.221:6801/6694] [v2:192.168.9.211:6800/6694,v1:192.168.9.211:6801/6694] exists,up 28488ddd-240a-4a21-a245-351472a7deaa
osd.5 up   in  weight 1 up_from 398 up_thru 413 down_at 390 last_clean_interval [279,389) [v2:10.30.1.222:6805/4521,v1:10.30.1.222:6807/4521] [v2:192.168.9.212:6803/4521,v1:192.168.9.212:6804/4521] exists,up cc8742ff-9d93-46b7-9fdb-60405ac09b6f
osd.6 up   in  weight 1 up_from 412 up_thru 412 down_at 410 last_clean_interval [273,411) [v2:10.30.1.223:6800/3884,v1:10.30.1.223:6801/3884] [v2:192.168.9.213:6808/2003884,v1:192.168.9.213:6810/2003884] exists,up 27910039-7ee6-4bf9-8d6b-06a0b8c3491a
osd.7 up   in  weight 1 up_from 353 up_thru 413 down_at 351 last_clean_interval [271,352) [v2:10.30.1.224:6800/3858,v1:10.30.1.224:6801/3858] [v2:192.168.9.214:6800/3858,v1:192.168.9.214:6801/3858] exists,up ef7c51dd-b9ee-44ef-872a-2861c3ad2f5a
osd.8 up   in  weight 1 up_from 380 up_thru 415 down_at 366 last_clean_interval [346,379) [v2:10.30.1.221:6814/4681,v1:10.30.1.221:6815/4681] [v2:192.168.9.211:6806/1004681,v1:192.168.9.211:6807/1004681] exists,up 4e8582b0-e06e-497d-8058-43e6d882ba6b
osd.9 up   in  weight 1 up_from 382 up_thru 413 down_at 377 last_clean_interval [280,375) [v2:10.30.1.222:6810/4374,v1:10.30.1.222:6811/4374] [v2:192.168.9.212:6808/4374,v1:192.168.9.212:6809/4374] exists,up baef9f86-2d3d-4f1a-8d1b-777034371968
osd.10 up   in  weight 1 up_from 412 up_thru 416 down_at 403 last_clean_interval [272,407) [v2:10.30.1.223:6808/3880,v1:10.30.1.223:6810/3880] [v2:192.168.9.213:6800/1003880,v1:192.168.9.213:6805/1003880] exists,up b6cd0b80-9ef1-42ad-b0c8-2f5b8d07da98
osd.11 up   in  weight 1 up_from 354 up_thru 413 down_at 351 last_clean_interval [278,352) [v2:10.30.1.224:6808/3859,v1:10.30.1.224:6809/3859] [v2:192.168.9.214:6808/3859,v1:192.168.9.214:6809/3859] exists,up 788897e9-1b8b-456d-b379-1c1c376e5bf0
osd.12 up   in  weight 1 up_from 395 up_thru 413 down_at 390 last_clean_interval [383,393) [v2:10.30.1.221:6810/6453,v1:10.30.1.221:6811/6453] [v2:192.168.9.211:6804/1006453,v1:192.168.9.211:6805/1006453] exists,up bf5765f0-cb28-4ef8-a92d-f7fe1b5f2a09
osd.13 up   in  weight 1 up_from 413 up_thru 413 down_at 403 last_clean_interval [274,411) [v2:10.30.1.223:6806/3878,v1:10.30.1.223:6807/3878] [v2:192.168.9.213:6801/1003878,v1:192.168.9.213:6802/1003878] exists,up 54a3b38f-e772-4e6f-bb6a-afadaf766a4e
osd.14 up   in  weight 1 up_from 353 up_thru 413 down_at 351 last_clean_interval [273,352) [v2:10.30.1.224:6812/3860,v1:10.30.1.224:6813/3860] [v2:192.168.9.214:6812/3860,v1:192.168.9.214:6813/3860] exists,up 2652556d-b2a9-4bce-a4a2-3039a80f3c29
blacklist 10.30.1.222:6826/1493024757 expires 2020-02-04 20:47:36.530951
blacklist 10.30.1.222:6827/1493024757 expires 2020-02-04 20:47:36.530951
blacklist 10.30.1.221:6829/1662704623 expires 2020-02-05 05:33:02.570724
blacklist 10.30.1.221:6828/1662704623 expires 2020-02-05 05:33:02.570724
blacklist 10.30.1.222:6800/1416583747 expires 2020-02-05 17:54:46.478629
blacklist 10.30.1.222:6801/1416583747 expires 2020-02-05 17:54:46.478629
blacklist 10.30.1.222:6800/3620735873 expires 2020-02-05 19:03:42.652746
blacklist 10.30.1.222:6801/3620735873 expires 2020-02-05 19:03:42.652746
 
OSDs can also be viewed according to their position in the CRUSH map:
[root@ceph-host-01]# ceph osd tree
ID CLASS WEIGHT  TYPE NAME             STATUS REWEIGHT PRI-AFF
-1       1.15631 root default                                  
-3       0.30835     host ceph-host-01                         
0   hdd 0.07709         osd.0             up  1.00000 1.00000
4   hdd 0.07709         osd.4             up  1.00000 1.00000
8   hdd 0.07709         osd.8             up  1.00000 1.00000
12   hdd 0.07709         osd.12            up  1.00000 1.00000
-5       0.23126     host ceph-host-02                         
1   hdd 0.07709         osd.1             up  1.00000 1.00000
5   hdd 0.07709         osd.5             up  1.00000 1.00000
9   hdd 0.07709         osd.9             up  1.00000 1.00000
-7       0.30835     host ceph-host-03                         
2   hdd 0.07709         osd.2             up  1.00000 1.00000
6   hdd 0.07709         osd.6             up  1.00000 1.00000
10   hdd 0.07709         osd.10            up  1.00000 1.00000
13   hdd 0.07709         osd.13            up  1.00000 1.00000
-9       0.30835     host ceph-host-04                         
3   hdd 0.07709         osd.3             up  1.00000 1.00000
7   hdd 0.07709         osd.7             up  1.00000 1.00000
11   hdd 0.07709         osd.11            up  1.00000 1.00000
14   hdd 0.07709         osd.14            up  1.00000 1.00000
Note: Ceph prints the CRUSH tree with each host, its OSDs, whether each OSD is up, and its weight.
 
When the cluster has multiple MON hosts, their quorum state should be checked after the cluster starts and before any data is read or written; in practice, administrators should also check the quorum periodically.
To display the monitor map: ceph mon stat or ceph mon dump
[root@ceph-host-03 ~]# ceph mon dump
dumped monmap epoch 3
epoch 3
fsid 272905d2-fd66-4ef6-a772-9cd73a274683
last_changed 2020-02-04 19:01:35.801920
created 2020-02-03 03:12:45.424079
min_mon_release 14 (nautilus)
0: [v2:10.30.1.221:3300/0,v1:10.30.1.221:6789/0] mon.ceph-host-01
1: [v2:10.30.1.222:3300/0,v1:10.30.1.222:6789/0] mon.ceph-host-02
2: [v2:10.30.1.223:3300/0,v1:10.30.1.223:6789/0] mon.ceph-host-03
To display the quorum status: ceph quorum_status
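The quorum status is returned as JSON; asking for the pretty-printed form, or filtering it with jq (assumed to be installed on the admin host), makes the leader and quorum members easier to pick out:
# ceph quorum_status --format json-pretty
# ceph quorum_status | jq '.quorum_leader_name, .quorum_names'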
 
 
 
4. Using the admin socket
Ceph's admin socket interface is commonly used to query a daemon directly
    the sockets are kept under /var/run/ceph by default
    this interface cannot be used remotely; it must be run on the node hosting the daemon
Command format:
    ceph --admin-daemon /var/run/ceph/socket-name command
    To get help:
        ceph --admin-daemon /var/run/ceph/socket-name help
Usage examples:
[root@ceph-host-04 ~]# ceph --admin-daemon /var/run/ceph/ceph-osd.7.asok version
{"version":"14.2.7","release":"nautilus","release_type":"stable"}
[root@ceph-host-04 ~]# ceph --admin-daemon /var/run/ceph/ceph-osd.7.asok status
{
    "cluster_fsid": "272905d2-fd66-4ef6-a772-9cd73a274683",
    "osd_fsid": "ef7c51dd-b9ee-44ef-872a-2861c3ad2f5a",
    "whoami": 7,
    "state": "active",
    "oldest_map": 1,
    "newest_map": 417,
    "num_pgs": 28
}
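When run on the node that hosts the daemon, the same queries can also be issued with the shorter "ceph daemon" form, which resolves the socket path from the daemon name and is equivalent to the --admin-daemon calls above:
# ceph daemon osd.7 status
# ceph daemon mon.ceph-host-02 quorum_status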
[root@ceph-host-02 ceph-cluster]# ceph --admin-daemon /var/run/ceph/ceph-mon.ceph-host-02.asok help
{
    "add_bootstrap_peer_hint": "add peer address as potential bootstrap peer for cluster bringup",
    "add_bootstrap_peer_hintv": "add peer address vector as potential bootstrap peer for cluster bringup",
    "config diff": "dump diff of current config and default config",
    "config diff get": "dump diff get <field>: dump diff of current and default config setting <field>",
    "config get": "config get <field>: get the config value",
    "config help": "get config setting schema and descriptions",
    "config set": "config set <field> <val> [<val> ...]: set a config variable",
    "config show": "dump current config settings",
    "config unset": "config unset <field>: unset a config variable",
    "dump_historic_ops": "show recent ops",
    "dump_historic_ops_by_duration": "show recent ops, sorted by duration",
    "dump_historic_slow_ops": "show recent slow ops",
    "dump_mempools": "get mempool stats",
    "get_command_descriptions": "list available commands",
    "git_version": "get git sha1",
    "help": "list available commands",
    "log dump": "dump recent log entries to log file",
    "log flush": "flush log entries to log file",
    "log reopen": "reopen log file",
    "mon_status": "show current monitor status",
    "ops": "show the ops currently in flight",
    "perf dump": "dump perfcounters value",
    "perf histogram dump": "dump perf histogram values",
    "perf histogram schema": "dump perf histogram schema",
    "perf reset": "perf reset <name>: perf reset all or one perfcounter name",
    "perf schema": "dump perfcounters schema",
    "quorum enter": "force monitor back into quorum",
    "quorum exit": "force monitor out of the quorum",
    "quorum_status": "show current quorum status",
    "sessions": "list existing sessions",
    "sync_force": "force sync of and clear monitor store",
    "version": "get ceph version"
}
 
[root@ceph-host-04 ~]# ceph --admin-daemon /var/run/ceph/ceph-osd.7.asok help
{
    "bluestore allocator dump block": "dump allocator free regions",
    "bluestore allocator dump bluefs-db": "dump allocator free regions",
    "bluestore allocator score block": "give score on allocator fragmentation (0-no fragmentation, 1-absolute fragmentation)",
    "bluestore allocator score bluefs-db": "give score on allocator fragmentation (0-no fragmentation, 1-absolute fragmentation)",
    "bluestore bluefs available": "Report available space for bluefs. If alloc_size set, make simulation.",
    "calc_objectstore_db_histogram": "Generate key value histogram of kvdb(rocksdb) which used by bluestore",
    "compact": "Commpact object store's omap. WARNING: Compaction probably slows your requests",
    "config diff": "dump diff of current config and default config",
    "config diff get": "dump diff get <field>: dump diff of current and default config setting <field>",
    "config get": "config get <field>: get the config value",
    "config help": "get config setting schema and descriptions",
    "config set": "config set <field> <val> [<val> ...]: set a config variable",
    "config show": "dump current config settings",
    "config unset": "config unset <field>: unset a config variable",
    "dump_blacklist": "dump blacklisted clients and times",
    "dump_blocked_ops": "show the blocked ops currently in flight",
    "dump_historic_ops": "show recent ops",
    "dump_historic_ops_by_duration": "show slowest recent ops, sorted by duration",
    "dump_historic_slow_ops": "show slowest recent ops",
    "dump_mempools": "get mempool stats",
    "dump_objectstore_kv_stats": "print statistics of kvdb which used by bluestore",
    "dump_op_pq_state": "dump op priority queue state",
    "dump_ops_in_flight": "show the ops currently in flight",
    "dump_osd_network": "Dump osd heartbeat network ping times",
    "dump_pgstate_history": "show recent state history",
    "dump_recovery_reservations": "show recovery reservations",
    "dump_scrub_reservations": "show scrub reservations",
    "dump_scrubs": "print scheduled scrubs",
    "dump_watchers": "show clients which have active watches, and on which objects",
    "flush_journal": "flush the journal to permanent store",
    "flush_store_cache": "Flush bluestore internal cache",
    "get_command_descriptions": "list available commands",
    "get_heap_property": "get malloc extension heap property",
    "get_latest_osdmap": "force osd to update the latest map from the mon",
    "get_mapped_pools": "dump pools whose PG(s) are mapped to this OSD.",
    "getomap": "output entire object map",
    "git_version": "get git sha1",
    "heap": "show heap usage info (available only if compiled with tcmalloc)",
    "help": "list available commands",
    "injectdataerr": "inject data error to an object",
    "injectfull": "Inject a full disk (optional count times)",
    "injectmdataerr": "inject metadata error to an object",
    "list_devices": "list OSD devices.",
    "log dump": "dump recent log entries to log file",
    "log flush": "flush log entries to log file",
    "log reopen": "reopen log file",
    "objecter_requests": "show in-progress osd requests",
    "ops": "show the ops currently in flight",
    "perf dump": "dump perfcounters value",
    "perf histogram dump": "dump perf histogram values",
    "perf histogram schema": "dump perf histogram schema",
    "perf reset": "perf reset <name>: perf reset all or one perfcounter name",
    "perf schema": "dump perfcounters schema",
    "rmomapkey": "remove omap key",
    "send_beacon": "send OSD beacon to mon immediately",
    "set_heap_property": "update malloc extension heap property",
    "set_recovery_delay": "Delay osd recovery by specified seconds",
    "setomapheader": "set omap header",
    "setomapval": "set omap key",
    "smart": "probe OSD devices for SMART data.",
    "status": "high-level status of OSD",
    "trigger_deep_scrub": "Trigger a scheduled deep scrub ",
    "trigger_scrub": "Trigger a scheduled scrub ",
    "truncobj": "truncate object to length",
    "version": "get ceph version"
}
 
 
 
5. Stopping or restarting the Ceph cluster
Stopping
 1. Tell the Ceph cluster not to mark OSDs out
   Command: ceph osd set noout
 2. Stop the daemons and nodes in the following order (see the systemctl sketch after this list)
   storage clients
   gateways, e.g. NFS Ganesha or the Object Gateway
   metadata servers
   Ceph OSDs
   Ceph Managers
   Ceph Monitors
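On a systemd-managed deployment, the Ceph daemons on each node are stopped through their systemd targets; a minimal sketch of the per-node order, using the targets shipped with the Ceph packages:
# systemctl stop ceph-mds.target
# systemctl stop ceph-osd.target
# systemctl stop ceph-mgr.target
# systemctl stop ceph-mon.target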
 
Starting
 1. Start the nodes in the reverse of the stop order (again, see the sketch below)
   Ceph Monitors
   Ceph Managers
   Ceph OSDs
   metadata servers
   gateways, e.g. NFS Ganesha or the Object Gateway
   storage clients
 2. Remove the noout flag
   Command: ceph osd unset noout
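To bring the daemons on a node back up, start the same targets in reverse order, or everything at once through the umbrella target (again a sketch for a systemd-managed node):
# systemctl start ceph-mon.target
# systemctl start ceph-mgr.target
# systemctl start ceph-osd.target
# systemctl start ceph-mds.target
(or simply: systemctl start ceph.target)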
