Azure already offers shared storage based on the SMB protocol (Azure File Service).

However, Azure does not yet allow a Disk to be attached to multiple VMs as a shared disk. In real-world deployments, a shared disk is one of the key building blocks of a cluster, for example as a quorum disk or a shared data disk.

This article describes how to combine an SMB-based file share with the Linux target, iscsid, and multipath tools to provide a shared Disk with HA.

The overall architecture is as follows:

Two CentOS 7.2 VMs both mount the same Azure File share. A file named disk.img is created on the share, and both VMs publish this disk.img as an iSCSI disk using the target software (iSCSI server). A CentOS 7.2 server running iscsid logs in to both of these iSCSI disks, and the multipath software then merges the two disks into one.

With this architecture, the iSCSI client obtains a single iSCSI disk, and that disk is a network disk with HA.

The implementation is as follows:

I. Create the File Service

1. Create the File Service in the Azure Portal

In Storage Accounts, click Add:

Fill in the required information and click Create.

Once it has been created, select the new account in the Storage Accounts list:

Select Files:

Click +File share, enter a name for the file share, and click Create.

After the share is created, the portal shows the command hint for mounting it in Linux:

Copy the key from Access keys.

2. Mount the file share on both iSCSI servers

First check the OS version:

[root@hwis01 ~]# cat /etc/redhat-release
CentOS Linux release 7.2.1511 (Core)

Both servers run CentOS 7.0 or later, so SMB 3.0 is supported.

Using the File Service information from above, run the following commands:

[root@hwis01 ~]# mkdir /file
[root@hwis01 ~]# sudo mount -t cifs //hwiscsi.file.core.chinacloudapi.cn/hwfile /file -o vers=3.0,username=hwiscsi,password=xxxxxxxx==,dir_mode=0777,file_mode=0777
[root@hwis01 ~]# df -h
Filesystem Size Used Avail Use% Mounted on
/dev/sda1 30G .1G 29G % /
devtmpfs 829M 829M % /dev
tmpfs 839M 839M % /dev/shm
tmpfs 839M 8.3M 831M % /run
tmpfs 839M 839M % /sys/fs/cgroup
/dev/sdb1 69G 53M 66G % /mnt/resource
tmpfs 168M 168M % /run/user/
//hwiscsi.file.core.chinacloudapi.cn/hwfile 5.0T 0 5.0T 0% /file
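
To make the mount survive a reboot, an entry can be added to /etc/fstab. The following is a minimal sketch, assuming the credentials are kept in a hypothetical /etc/smb-credentials file (containing username= and password= lines, readable only by root):

//hwiscsi.file.core.chinacloudapi.cn/hwfile /file cifs vers=3.0,credentials=/etc/smb-credentials,dir_mode=0777,file_mode=0777 0 0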

II. Create the iSCSI disk on the iSCSI servers

1. Create disk.img in the shared directory

[root@hwis01 ~]# dd if=/dev/zero of=/file/disk.img bs=1M count=1024
1024+0 records in
1024+0 records out
1073741824 bytes (1.1 GB) copied, 20.8512 s, 51.5 MB/s

Check it locally:

[root@hwis01 ~]# cd /file
[root@hwis01 file]# ll
total 1048576
-rwxrwxrwx. 1 root root 1073741824 Nov  disk.img

Check it on the other server:

[root@hwis02 ~]# cd /file
[root@hwis02 file]# ll
total 1048576
-rwxrwxrwx. 1 root root 1073741824 Nov  disk.img

2. Install the required software

Install targetcli on the iSCSI servers:

[root@hwis01 file]# yum install -y targetcli

Install iscsi-initiator-utils on the iSCSI client:

[root@hwic01 ~]# yum install iscsi-initiator-utils -y

After installation, check the IQN on the iSCSI client machine:

[root@hwic01 /]# cd /etc/iscsi/
[root@hwic01 iscsi]# more initiatorname.iscsi
InitiatorName=iqn.2016-10.hw.ic01:client
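
If the generated initiator name does not match the ACL that will be created on the targets, it can be set explicitly before proceeding. A sketch, using the IQN from this article:

[root@hwic01 iscsi]# echo "InitiatorName=iqn.2016-10.hw.ic01:client" > /etc/iscsi/initiatorname.iscsi
[root@hwic01 iscsi]# systemctl restart iscsid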

3. Create the iSCSI disk with targetcli

[root@hwis01 file]# targetcli
Warning: Could not load preferences file /root/.targetcli/prefs.bin.
targetcli shell version 2.1.fb41
Copyright 2011-2013 by Datera, Inc and others.
For help on commands, type 'help'.
/> ls
o- / ....................................................................................................................... [...]
o- backstores .............................................................................................................. [...]
| o- block ................................................................................................. [Storage Objects: 0]
| o- fileio ................................................................................................ [Storage Objects: 0]
| o- pscsi ................................................................................................. [Storage Objects: 0]
| o- ramdisk ............................................................................................... [Storage Objects: 0]
o- iscsi ........................................................................................................... [Targets: 0]
o- loopback ........................................................................................................ [Targets: 0]
/> cd backstores/
/backstores> cd fileio
/backstores/fileio> create disk01 /file/disk.img 1G
/file/disk.img exists, using its size (1073741824 bytes) instead
Created fileio disk01 with size 1073741824
/backstores/fileio> cd /iscsi/
/iscsi> create iqn.2016-10.hw.is01:disk01.lun0
Created target iqn.2016-10.hw.is01:disk01.lun0.
Created TPG 1.
Global pref auto_add_default_portal=true
Created default portal listening on all IPs (0.0.0.0), port 3260.
/iscsi> cd iqn.2016-10.hw.is01:disk01.lun0/tpg1/luns/
/iscsi/iqn....un0/tpg1/luns> create /backstores/fileio/disk01
Created LUN 0.
/iscsi/iqn....un0/tpg1/luns> cd ../acls/
/iscsi/iqn....un0/tpg1/acls> create iqn.2016-10.hw.ic01:client
Created Node ACL for iqn.2016-10.hw.ic01:client
Created mapped LUN 0.
/iscsi/iqn....un0/tpg1/acls> ls
o- acls ............................................................................................................... [ACLs: 1]
  o- iqn.2016-10.hw.ic01:client ............................................................................... [Mapped LUNs: 1]
    o- mapped_lun0 ......................................................................................... [lun0 fileio/disk01 (rw)]
/iscsi/iqn....un0/tpg1/acls> cd /
/> ls
o- / ....................................................................................................................... [...]
o- backstores .............................................................................................................. [...]
| o- block ................................................................................................. [Storage Objects: 0]
| o- fileio ................................................................................................ [Storage Objects: 1]
| | o- disk01 .................................................................... [/file/disk.img (1.0GiB) write-back activated]
| o- pscsi ................................................................................................. [Storage Objects: 0]
| o- ramdisk ............................................................................................... [Storage Objects: 0]
o- iscsi ........................................................................................................... [Targets: 1]
| o- iqn.2016-10.hw.is01:disk01.lun0 .................................................................................. [TPGs: 1]
|   o- tpg1 ................................................................................................. [no-gen-acls, no-auth]
|     o- acls ............................................................................................................. [ACLs: 1]
|     | o- iqn.2016-10.hw.ic01:client ............................................................................. [Mapped LUNs: 1]
|     |   o- mapped_lun0 ..................................................................................... [lun0 fileio/disk01 (rw)]
|     o- luns ............................................................................................................. [LUNs: 1]
|     | o- lun0 ..................................................................................... [fileio/disk01 (/file/disk.img)]
|     o- portals ....................................................................................................... [Portals: 1]
|       o- 0.0.0.0:3260 ....................................................................................................... [OK]
o- loopback ........................................................................................................ [Targets: 0]
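
The same configuration has to be created on the second server with its own IQN. Since targetcli also accepts one-shot commands, a non-interactive sketch for hwis02 (same backing file, same client ACL) could look like this:

targetcli /backstores/fileio create disk01 /file/disk.img 1G
targetcli /iscsi create iqn.2016-10.hw.is02:disk01.lun0
targetcli /iscsi/iqn.2016-10.hw.is02:disk01.lun0/tpg1/luns create /backstores/fileio/disk01
targetcli /iscsi/iqn.2016-10.hw.is02:disk01.lun0/tpg1/acls create iqn.2016-10.hw.ic01:client
targetcli saveconfig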

Check the WWNs in the configuration file:

[root@hwis01 file]# cd /etc/target
[root@hwis01 target]# ls
backup saveconfig.json
[root@hwis01 target]# vim saveconfig.json
[root@hwis01 target]# grep wwn saveconfig.json
"wwn": "acadb3f7-9a2d-44f4-8caf-de627ea98e27"
"node_wwn": "iqn.2016-10.hw.ic01:client"
"wwn": "iqn.2016-10.hw.is01:disk01.lun0"

Record the first wwn, the one belonging to the disk backstore: "wwn": "acadb3f7-9a2d-44f4-8caf-de627ea98e27",

and copy it into the configuration on iSCSI server 2. Both targets must present the same SCSI WWN for the backing file, because multipath uses this ID to recognize the two paths as the same disk.
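
One way to do this is to edit /etc/target/saveconfig.json on server 2 directly and restart the target service. A sketch, where OLD_WWN stands for the wwn that was originally generated on server 2:

[root@hwis02 target]# sed -i 's/OLD_WWN/acadb3f7-9a2d-44f4-8caf-de627ea98e27/' saveconfig.json
[root@hwis02 target]# systemctl restart target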

Check the result on server 2:

[root@hwis02 target]# grep wwn saveconfig.json
"wwn": "acadb3f7-9a2d-44f4-8caf-de627ea98e27"
"node_wwn": "iqn.2016-10.hw.ic01:client"
"wwn": "iqn.2016-10.hw.is02:disk01.lun0"

4. Enable and start the target service

[root@hwis01 target]# systemctl enable target
Created symlink from /etc/systemd/system/multi-user.target.wants/target.service to /usr/lib/systemd/system/target.service.
[root@hwis01 target]# systemctl start target
[root@hwis01 target]# systemctl status target
target.service - Restore LIO kernel target configuration
Loaded: loaded (/usr/lib/systemd/system/target.service; enabled; vendor preset: disabled)
Active: active (exited); 7s ago
Process: ExecStart=/usr/bin/targetctl restore (code=exited, status=0/SUCCESS)
Main PID: (code=exited, status=0/SUCCESS)
Nov hwis01 systemd[1]: Starting Restore LIO kernel target configuration...
Nov hwis01 systemd[1]: Started Restore LIO kernel target configuration.

5. Allow TCP 3260

Configure the firewall and/or the NSG to allow access on TCP 3260.

This is not covered in detail here.
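
As an example, with firewalld on the iSCSI servers (the NSG rule is added in the Azure portal):

[root@hwis01 target]# firewall-cmd --permanent --add-port=3260/tcp
[root@hwis01 target]# firewall-cmd --reload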

III. Configure iSCSI on the iSCSI client

1. Discover and log in to the iSCSI disks

Discovery:

[root@hwic01 iscsi]# iscsiadm -m discovery -t sendtargets -p 10.1.1.5
10.1.1.5:3260,1 iqn.2016-10.hw.is02:disk01.lun0

Login:

[root@hwic01 iscsi]# iscsiadm --mode node --targetname iqn.2016-10.hw.is02:disk01.lun0 --portal 10.1.1.5 --login
Logging in to [iface: default, target: iqn.2016-10.hw.is02:disk01.lun0, portal: 10.1.1.5,3260] (multiple)
Login to [iface: default, target: iqn.2016-10.hw.is02:disk01.lun0, portal: 10.1.1.5,3260] successful.

Repeat the discovery and login for the other iSCSI server. After that, the following directory contains node records for both IQNs:

[root@hwic01 /]# ls /var/lib/iscsi/nodes
iqn.2016-10.hw.is01:disk01.lun0  iqn.2016-10.hw.is02:disk01.lun0

At this point two newly added disks, sdc and sdd, exist under /dev.
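
The sessions and the new block devices can be verified with, for example:

[root@hwic01 dev]# iscsiadm -m session
[root@hwic01 dev]# lsblk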

2. Install the multipath software

[root@hwic01 dev]# yum install device-mapper-multipath -y

Copy the sample configuration file:

cd /etc
cp /usr/share/doc/device-mapper-multipath-0.4.9/multipath.conf .

Edit the configuration file:

vim multipath.conf

blacklist {
    devnode "^sda$"
    devnode "^sdb$"
}
defaults {
    find_multipaths yes
    user_friendly_names yes
    path_grouping_policy multibus
    failback immediate
    no_path_retry fail
}
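
A note on these settings: sda (the OS disk) and sdb (the local resource disk mounted at /mnt/resource) are blacklisted so that multipath only manages the two iSCSI paths; path_grouping_policy multibus puts both paths into one active group so I/O can flow over either of them; failback immediate switches back as soon as a failed path recovers; and no_path_retry fail makes I/O fail fast instead of queueing when every path is down.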

Enable and start the service:

[root@hwic01 etc]# systemctl enable multipathd
[root@hwic01 etc]# systemctl start multipathd
[root@hwic01 etc]# systemctl status multipathd
multipathd.service - Device-Mapper Multipath Device Controller
Loaded: loaded (/usr/lib/systemd/system/multipathd.service; enabled; vendor preset: enabled)
Active: active (running); 11s ago
Process: ExecStart=/sbin/multipathd (code=exited, status=0/SUCCESS)
Process: ExecStartPre=/sbin/multipath -A (code=exited, status=0/SUCCESS)
Process: ExecStartPre=/sbin/modprobe dm-multipath (code=exited, status=0/SUCCESS)
Main PID: (multipathd)
CGroup: /system.slice/multipathd.service
        └─ /sbin/multipathd

Nov hwic01 systemd[1]: Starting Device-Mapper Multipath Device Controller...
Nov hwic01 systemd[1]: PID file /run/multipathd/multipathd.pid not readable (yet?) after start.
Nov hwic01 systemd[1]: Started Device-Mapper Multipath Device Controller.
Nov hwic01 multipathd: mpatha: load table
Nov hwic01 multipathd: mpatha: event checker started
Nov hwic01 multipathd: path checkers start up

Flush the multipath maps:

[root@hwic01 etc]# multipath -F

Check:

[root@hwic01 etc]# multipath -l
mpatha (36001405acadb3f79a2d44f48cafde627) dm-0 LIO-ORG ,disk01
size=1.0G features='0' hwhandler='0' wp=rw
`-+- policy='service-time 0' prio=0 status=active
  |- 3:0:0:0 sdc 8:32 active undef running
  `- 4:0:0:0 sdd 8:48 active undef running

As you can see, the two iSCSI disks have been merged into a single disk. Note that the multipath WWID (36001405acadb3f7...) embeds the backstore wwn that was synchronized between the two servers, which is exactly why both paths are treated as the same disk.

[root@hwic01 /]# cd /dev/mapper/
[root@hwic01 mapper]# ll
total 0
crw-------. 1 root root 10, 236 Nov  control
lrwxrwxrwx. 1 root root       7 Nov  mpatha -> ../dm-0

Partition dm-0:

[root@hwic01 dev]# fdisk /dev/mapper/mpatha
Welcome to fdisk (util-linux 2.23.2).
Changes will remain in memory only, until you decide to write them.
Be careful before using the write command.
Device does not contain a recognized partition table
Building a new DOS disklabel with disk identifier 0x44f032cb.
Command (m for help): n
Partition type:
   p   primary (0 primary, 0 extended, 4 free)
   e   extended
Select (default p): p
Partition number (1-4, default 1):
First sector (2048-2097151, default 2048):
Using default value 2048
Last sector, +sectors or +size{K,M,G} (2048-2097151, default 2097151):
Using default value 2097151
Partition 1 of type Linux and of size 1023 MiB is set

Command (m for help): p

Disk /dev/dm-0: 1073 MB, 1073741824 bytes, 2097152 sectors
Units = sectors of 1 * 512 = 512 bytes
Sector size (logical/physical): 512 bytes / 512 bytes
I/O size (minimum/optimal): 512 bytes / 512 bytes
Disk label type: dos
Disk identifier: 0x44f032cb

     Device Boot      Start         End      Blocks   Id  System
mpatha1                2048     2097151     1047552   83  Linux
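
Write the partition table with w when done. If /dev/mapper/mpatha1 does not appear automatically afterwards, the partition mappings can be re-read; a sketch:

[root@hwic01 dev]# kpartx -a /dev/mapper/mpatha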

Format it:

[root@hwic01 mapper]# mkfs.ext4 /dev/mapper/mpatha1

Mount and check:

[root@hwic01 mapper]# mkdir /iscsi
[root@hwic01 mapper]# mount /dev/mapper/mpatha1 /iscsi
[root@hwic01 mapper]# df -h
Filesystem Size Used Avail Use% Mounted on
/dev/sda1 30G .2G 29G % /
devtmpfs 829M 829M % /dev
tmpfs 839M 839M % /dev/shm
tmpfs 839M 8.3M 831M % /run
tmpfs 839M 839M % /sys/fs/cgroup
/dev/sdb1 69G 53M 66G % /mnt/resource
tmpfs 168M 168M % /run/user/
/dev/mapper/mpatha1 985M 2.5M 915M % /iscsi
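
To mount the filesystem automatically at boot, a hedged /etc/fstab sketch using the _netdev option, so mounting is deferred until the network and the iSCSI stack are up:

/dev/mapper/mpatha1 /iscsi ext4 _netdev 0 0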

IV. Verify the HA and other properties of the iSCSI disk

1. HA test

Stop the target service on iSCSI server 1:

[root@hwis01 target]# systemctl stop target
[root@hwis01 target]# systemctl status target
target.service - Restore LIO kernel target configuration
Loaded: loaded (/usr/lib/systemd/system/target.service; enabled; vendor preset: disabled)
Active: inactive (dead); 7s ago
Process: ExecStop=/usr/bin/targetctl clear (code=exited, status=0/SUCCESS)
Process: ExecStart=/usr/bin/targetctl restore (code=exited, status=0/SUCCESS)
Main PID: (code=exited, status=0/SUCCESS)

Nov hwis01 systemd[1]: Starting Restore LIO kernel target configuration...
Nov hwis01 systemd[1]: Started Restore LIO kernel target configuration.
Nov hwis01 systemd[1]: Stopping Restore LIO kernel target configuration...
Nov hwis01 systemd[1]: Stopped Restore LIO kernel target configuration.

Check on the iSCSI client:

[root@hwic01 iscsi]# multipath -l
mpatha (36001405acadb3f79a2d44f48cafde627) dm-0 LIO-ORG ,disk01
size=1.0G features='0' hwhandler='0' wp=rw
`-+- policy='service-time 0' prio=0 status=active
  |- 3:0:0:0 sdc 8:32 failed faulty running
  `- 4:0:0:0 sdd 8:48 active undef running

One path has failed, but the disk keeps working normally:

[root@hwic01 iscsi]# ll
total 16
-rw-r--r--. 1 root root     0 Nov  a
drwx------. 2 root root 16384 Nov  lost+found
[root@hwic01 iscsi]# touch b
[root@hwic01 iscsi]# ll
total 16
-rw-r--r--. 1 root root     0 Nov  a
-rw-r--r--. 1 root root     0 Nov  b
drwx------. 2 root root 16384 Nov  lost+found

Then restore the service:

[root@hwis01 target]# systemctl start target
[root@hwis01 target]# systemctl status target
target.service - Restore LIO kernel target configuration
Loaded: loaded (/usr/lib/systemd/system/target.service; enabled; vendor preset: disabled)
Active: active (exited); 3s ago
Process: ExecStop=/usr/bin/targetctl clear (code=exited, status=0/SUCCESS)
Process: ExecStart=/usr/bin/targetctl restore (code=exited, status=0/SUCCESS)
Main PID: (code=exited, status=0/SUCCESS)

Nov hwis01 systemd[1]: Starting Restore LIO kernel target configuration...
Nov hwis01 systemd[1]: Started Restore LIO kernel target configuration.

multipath on the client resumes working normally:

[root@hwic01 iscsi]# multipath -l
mpatha (36001405acadb3f79a2d44f48cafde627) dm-0 LIO-ORG ,disk01
size=1.0G features='0' hwhandler='0' wp=rw
`-+- policy='service-time 0' prio=0 status=active
  |- 3:0:0:0 sdc 8:32 active undef running
  `- 4:0:0:0 sdd 8:48 active undef running

2. IOPS

Run an IOPS test using the method from the earlier article on IOPS testing:

[root@hwic01 ~]# ./iops.py /dev/dm-0
/dev/dm-0, 1.07 GB, sectorsize=512B, #threads=32, pattern=random:
 512 B blocks: 383.1 IO/s, 196.1 kB/s (  1.6 Mbit/s)
   1 kB blocks: 548.5 IO/s, 561.6 kB/s (  4.5 Mbit/s)
   2 kB blocks: 495.8 IO/s,   1.0 MB/s (  8.1 Mbit/s)
   4 kB blocks: 414.1 IO/s,   1.7 MB/s ( 13.6 Mbit/s)
   8 kB blocks: 376.2 IO/s,   3.1 MB/s ( 24.7 Mbit/s)
  16 kB blocks: 357.5 IO/s,   5.9 MB/s ( 46.9 Mbit/s)
  32 kB blocks: 271.0 IO/s,   8.9 MB/s ( 71.0 Mbit/s)
  65 kB blocks: 223.0 IO/s,  14.6 MB/s (116.9 Mbit/s)
 131 kB blocks: 181.2 IO/s,  23.7 MB/s (190.0 Mbit/s)
 262 kB blocks: 137.7 IO/s,  36.1 MB/s (288.9 Mbit/s)
 524 kB blocks:  95.0 IO/s,  49.8 MB/s (398.6 Mbit/s)
   1 MB blocks:  55.4 IO/s,  58.1 MB/s (465.0 Mbit/s)
   2 MB blocks:  37.5 IO/s,  78.7 MB/s (629.9 Mbit/s)
   4 MB blocks:  24.8 IO/s, 103.8 MB/s (830.6 Mbit/s)
   8 MB blocks:  16.6 IO/s, 139.2 MB/s (  1.1 Gbit/s)
  16 MB blocks:  11.2 IO/s, 188.7 MB/s (  1.5 Gbit/s)
  33 MB blocks:   5.7 IO/s, 190.0 MB/s (  1.5 Gbit/s)

As you can see, this disk delivers roughly 500 IOPS at small block sizes, and its bandwidth tops out at about 1.5 Gbit/s.

Summary:

With Azure File Service, a file on the share can be published as an iSCSI disk for iSCSI clients to mount. When multiple iSCSI servers publish the same file, the multipath software turns the paths into a single HA iSCSI disk.

This solution fits scenarios where a cluster needs a shared disk or a quorum disk.
