Building an iSCSI-based Shared Disk with Azure File Service

Azure already offers shared storage based on the SMB protocol: the Azure File Service.

However, Azure does not yet allow a Disk to be attached to multiple VMs as a shared disk, and in real-world deployments a shared disk is one of the key building blocks of a cluster, for example a quorum disk or a shared data disk.

This article describes how to provide an HA shared disk by combining an SMB-based file share with the Linux target, iscsid, and multipath tools.

The figure above shows the overall architecture:

Two CentOS 7.2 VMs both mount the same Azure File share, on which a single file, disk.img, is created. Running the iSCSI server software target (LIO), both VMs publish this same disk.img as an iSCSI disk. A third CentOS 7.2 server running iscsid logs in to both iSCSI disks, and the multipath software then merges the two into a single disk.

With this architecture, the iSCSI client ends up with a single iSCSI disk, and that disk is a highly available network disk: either iSCSI server can fail without taking it offline.

The implementation steps are as follows:

I. Create the File Service

1. Create the File Service in the Azure Portal

Under Storage Accounts, click Add:

Fill in the required information and click Create.

Once created, select the new Storage Account under Storage Accounts:

Select File:

Click +File Share, enter a name for the File Share, and click Create.

After it is created, the portal shows the mount command to use on Linux:

Copy the key from Access Keys.

2. Mount the File share on both iSCSI servers

First check the OS version:

[root@hwis01 ~]# cat /etc/redhat-release
CentOS Linux release 7.2.1511 (Core)

Both servers are CentOS 7.0 or later and therefore support mounting with SMB 3.0.

Using the File Service information from above, run the following commands:

[root@hwis01 ~]# mkdir /file
[root@hwis01 ~]# sudo mount -t cifs //hwiscsi.file.core.chinacloudapi.cn/hwfile /file -o vers=3.0,username=hwiscsi,password=xxxxxxxx==,dir_mode=0777,file_mode=0777
[root@hwis01 ~]# df -h
Filesystem                                   Size  Used Avail Use% Mounted on
/dev/sda1                                     30G  1.1G   29G   4% /
/dev/sdb1                                     69G   53M   66G   1% /mnt/resource
//hwiscsi.file.core.chinacloudapi.cn/hwfile  5.0T     0  5.0T   0% /file
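A mount made this way does not survive a reboot. One way to make it persistent is an /etc/fstab entry; the sketch below reuses this example's share and account names, keeps the key in a credentials file rather than in fstab, and uses `_netdev` to defer mounting until the network is up (the credentials file path is an assumption):

```shell
# Keep the storage account key out of /etc/fstab (hypothetical path)
sudo mkdir -p /etc/smbcredentials
printf 'username=hwiscsi\npassword=xxxxxxxx==\n' | sudo tee /etc/smbcredentials/hwiscsi.cred >/dev/null
sudo chmod 600 /etc/smbcredentials/hwiscsi.cred

# Add the fstab entry, then verify it mounts cleanly
echo '//hwiscsi.file.core.chinacloudapi.cn/hwfile /file cifs vers=3.0,credentials=/etc/smbcredentials/hwiscsi.cred,dir_mode=0777,file_mode=0777,_netdev 0 0' | sudo tee -a /etc/fstab >/dev/null
sudo mount -a
```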

II. Create the iSCSI disk on the iSCSI servers

1. Create disk.img in the shared directory

[root@hwis01 ~]# dd if=/dev/zero of=/file/disk.img bs=1M count=1024
1024+0 records in
1024+0 records out
1073741824 bytes (1.1 GB) copied, 20.8512 s, 51.5 MB/s

Check it on this server:

[root@hwis01 ~]# cd /file
[root@hwis01 file]# ll
total 1048576
-rwxrwxrwx. 1 root root 1073741824 Nov  disk.img

Check it on the other server:

[root@hwis02 ~]# cd /file
[root@hwis02 file]# ll
total 1048576
-rwxrwxrwx. 1 root root 1073741824 Nov  disk.img

2. Install the required software

On the iSCSI servers, install targetcli:

[root@hwis01 file]# yum install -y targetcli

On the iSCSI client, install iscsi-initiator-utils:

[root@hwic01 ~]# yum install iscsi-initiator-utils -y

After installation, look up the IQN on the iSCSI client machine:

[root@hwic01 /]# cd /etc/iscsi/
[root@hwic01 iscsi]# more initiatorname.iscsi
InitiatorName=iqn.2016-10.hw.ic01:client
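If you prefer a predictable name over the generated one, the IQN can be set by editing this file; it must match the ACL that will be created on the targets. A sketch using this example's name:

```shell
# Set the client IQN to match the target-side ACL, then restart iscsid
echo 'InitiatorName=iqn.2016-10.hw.ic01:client' | sudo tee /etc/iscsi/initiatorname.iscsi
sudo systemctl restart iscsid
```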

3. Create the iSCSI disk with targetcli

[root@hwis01 file]# targetcli
Warning: Could not load preferences file /root/.targetcli/prefs.bin.
targetcli shell version 2.1.fb41
Copyright 2011-2013 by Datera, Inc and others.
For help on commands, type 'help'.
/> ls
o- / ............................................................. [...]
  o- backstores .................................................. [...]
  | o- block ...................................... [Storage Objects: 0]
  | o- fileio ..................................... [Storage Objects: 0]
  | o- pscsi ...................................... [Storage Objects: 0]
  | o- ramdisk .................................... [Storage Objects: 0]
  o- iscsi ................................................ [Targets: 0]
  o- loopback ............................................. [Targets: 0]
/> cd backstores/
/backstores> cd fileio
/backstores/fileio> create disk01 /file/disk.img 1G
/file/disk.img exists, using its size (1073741824 bytes) instead
Created fileio disk01 with size 1073741824
/backstores/fileio> cd /iscsi/
/iscsi> create iqn.2016-10.hw.is01:disk01.lun0
Created target iqn.2016-10.hw.is01:disk01.lun0.
Created TPG 1.
Global pref auto_add_default_portal=true
Created default portal listening on all IPs (0.0.0.0), port 3260.
/iscsi> cd iqn.2016-10.hw.is01:disk01.lun0/tpg1/luns/
/iscsi/iqn....un0/tpg1/luns> create /backstores/fileio/disk01
Created LUN 0.
/iscsi/iqn....un0/tpg1/luns> cd ../acls/
/iscsi/iqn....un0/tpg1/acls> create iqn.2016-10.hw.ic01:client
Created Node ACL for iqn.2016-10.hw.ic01:client
Created mapped LUN 0.
/iscsi/iqn....un0/tpg1/acls> ls
o- acls ................................................... [ACLs: 1]
  o- iqn.2016-10.hw.ic01:client ...................... [Mapped LUNs: 1]
    o- mapped_lun0 .......................... [lun0 fileio/disk01 (rw)]
/iscsi/iqn....un0/tpg1/acls> cd /
/> ls
o- / ............................................................. [...]
  o- backstores .................................................. [...]
  | o- block ...................................... [Storage Objects: 0]
  | o- fileio ..................................... [Storage Objects: 1]
  | | o- disk01 ......... [/file/disk.img (1.0GiB) write-back activated]
  | o- pscsi ...................................... [Storage Objects: 0]
  | o- ramdisk .................................... [Storage Objects: 0]
  o- iscsi ................................................ [Targets: 1]
  | o- iqn.2016-10.hw.is01:disk01.lun0 ....................... [TPGs: 1]
  |   o- tpg1 .................................. [no-gen-acls, no-auth]
  |     o- acls ............................................. [ACLs: 1]
  |     | o- iqn.2016-10.hw.ic01:client ........... [Mapped LUNs: 1]
  |     |   o- mapped_lun0 ................ [lun0 fileio/disk01 (rw)]
  |     o- luns ............................................. [LUNs: 1]
  |     | o- lun0 .................. [fileio/disk01 (/file/disk.img)]
  |     o- portals ....................................... [Portals: 1]
  |       o- 0.0.0.0:3260 ....................................... [OK]
  o- loopback ............................................. [Targets: 0]
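The interactive session above can also be scripted: targetcli accepts one command per invocation, so the same layout can be built non-interactively. A sketch for server 1 (run as root; on server 2, replace is01 with is02):

```shell
# Publish /file/disk.img as an iSCSI LUN, restricted to the client's IQN
targetcli /backstores/fileio create disk01 /file/disk.img 1G
targetcli /iscsi create iqn.2016-10.hw.is01:disk01.lun0
targetcli /iscsi/iqn.2016-10.hw.is01:disk01.lun0/tpg1/luns create /backstores/fileio/disk01
targetcli /iscsi/iqn.2016-10.hw.is01:disk01.lun0/tpg1/acls create iqn.2016-10.hw.ic01:client
targetcli saveconfig
```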

Check the wwn values in the saved configuration:

[root@hwis01 file]# cd /etc/target
[root@hwis01 target]# ls
backup saveconfig.json
[root@hwis01 target]# vim saveconfig.json
[root@hwis01 target]# grep wwn saveconfig.json
"wwn": "acadb3f7-9a2d-44f4-8caf-de627ea98e27"
"node_wwn": "iqn.2016-10.hw.ic01:client"
"wwn": "iqn.2016-10.hw.is01:disk01.lun0"

Record the first disk's "wwn": "acadb3f7-9a2d-44f4-8caf-de627ea98e27",

and copy it into the corresponding entry in iSCSI server 2's configuration. With the same backstore wwn, both targets report the same SCSI serial number, which is what later allows multipath to treat the two paths as one disk.

Check the result on server 2:

[root@hwis02 target]# grep wwn saveconfig.json
"wwn": "acadb3f7-9a2d-44f4-8caf-de627ea98e27"
"node_wwn": "iqn.2016-10.hw.ic01:client"
"wwn": "iqn.2016-10.hw.is02:disk01.lun0"

4. Enable and start the target service

[root@hwis01 target]# systemctl enable target
Created symlink from /etc/systemd/system/multi-user.target.wants/target.service to /usr/lib/systemd/system/target.service.
[root@hwis01 target]# systemctl start target
[root@hwis01 target]# systemctl status target
target.service - Restore LIO kernel target configuration
   Loaded: loaded (/usr/lib/systemd/system/target.service; enabled; vendor preset: disabled)
   Active: active (exited); 7s ago
  Process: ExecStart=/usr/bin/targetctl restore (code=exited, status=0/SUCCESS)

Nov hwis01 systemd[1]: Starting Restore LIO kernel target configuration...
Nov hwis01 systemd[1]: Started Restore LIO kernel target configuration.

5. Allow TCP 3260

Configure the firewall and/or the NSG to allow access on TCP port 3260, the standard iSCSI port.

The detailed steps are not expanded here.
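For reference, on a CentOS 7 server running firewalld the OS side is brief (a sketch; the Azure NSG additionally needs an inbound rule allowing TCP 3260 from the client's subnet):

```shell
# Open the standard iSCSI port on each target server
sudo firewall-cmd --permanent --add-port=3260/tcp
sudo firewall-cmd --reload
```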

III. Configure iSCSI on the client

1. Discover and log in to the iSCSI disks

Discovery:

[root@hwic01 iscsi]# iscsiadm -m discovery -t sendtargets -p 10.1.1.5
10.1.1.5:3260,1 iqn.2016-10.hw.is02:disk01.lun0

Log in:

[root@hwic01 iscsi]# iscsiadm --mode node --targetname iqn.2016-10.hw.is02:disk01.lun0 --portal 10.1.1.5 --login
Logging in to [iface: default, target: iqn.2016-10.hw.is02:disk01.lun0, portal: 10.1.1.5,3260] (multiple)
Login to [iface: default, target: iqn.2016-10.hw.is02:disk01.lun0, portal: 10.1.1.5,3260] successful.

After repeating the discovery and login against the other server's portal as well, the node records for both targets appear in this directory:

[root@hwic01 /]# ls /var/lib/iscsi/nodes
iqn.2016-10.hw.is01:disk01.lun0  iqn.2016-10.hw.is02:disk01.lun0

At this point two newly added disks, sdc and sdd, exist under /dev.
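Discovery and login have to be performed once per target portal. Assuming the two iSCSI servers are at 10.1.1.4 and 10.1.1.5 (only 10.1.1.5 appears above; the first address is an assumption for illustration), the client-side sequence can be scripted:

```shell
# Discover and log in to both targets; CentOS 7 ships with
# node.startup=automatic, so iscsid restores the sessions after a reboot
for PORTAL in 10.1.1.4 10.1.1.5; do
    iscsiadm -m discovery -t sendtargets -p "$PORTAL"
    iscsiadm -m node -p "$PORTAL" --login
done
iscsiadm -m session   # verify both sessions are up
```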

2. Install the multipath software

[root@hwic01 dev]# yum install device-mapper-multipath -y

Copy the sample configuration file:

cd /etc
cp /usr/share/doc/device-mapper-multipath-0.4.9/multipath.conf .

Edit the configuration file, blacklisting the two local disks (sda and sdb) so that only the iSCSI paths are managed:

vim multipath.conf

blacklist {
    devnode "^sda$"
    devnode "^sdb$"
}
defaults {
    find_multipaths yes
    user_friendly_names yes
    path_grouping_policy multibus
    failback immediate
    no_path_retry fail
}

Enable and start the service:

[root@hwic01 etc]# systemctl enable multipathd
[root@hwic01 etc]# systemctl start multipathd
[root@hwic01 etc]# systemctl status multipathd
multipathd.service - Device-Mapper Multipath Device Controller
   Loaded: loaded (/usr/lib/systemd/system/multipathd.service; enabled; vendor preset: enabled)
   Active: active (running); 11s ago
  Process: ExecStart=/sbin/multipathd (code=exited, status=0/SUCCESS)
  Process: ExecStartPre=/sbin/multipath -A (code=exited, status=0/SUCCESS)
  Process: ExecStartPre=/sbin/modprobe dm-multipath (code=exited, status=0/SUCCESS)
   CGroup: /system.slice/multipathd.service
           └─/sbin/multipathd

Nov hwic01 systemd[1]: Starting Device-Mapper Multipath Device Controller...
Nov hwic01 systemd[1]: PID file /run/multipathd/multipathd.pid not readable (yet?) after start.
Nov hwic01 systemd[1]: Started Device-Mapper Multipath Device Controller.
Nov hwic01 multipathd: mpatha: load table [...]
Nov hwic01 multipathd: mpatha: event checker started
Nov hwic01 multipathd: path checkers start up

Flush (refresh) the multipath maps:

[root@hwic01 etc]# multipath -F

Check:

[root@hwic01 etc]# multipath -l
mpatha (36001405acadb3f79a2d44f48cafde627) dm-0 LIO-ORG ,disk01
size=1.0G features='0' hwhandler='0' wp=rw
`-+- policy='service-time 0' prio=0 status=active
  |- 2:0:0:0 sdc 8:32 active undef running
  `- 3:0:0:0 sdd 8:48 active undef running

As you can see, the two iSCSI disks have been merged into a single disk.

[root@hwic01 /]# cd /dev/mapper/
[root@hwic01 mapper]# ll
total 0
crw-------. 1 root root 10, 236 Nov  control
lrwxrwxrwx. 1 root root       7 Nov  mpatha -> ../dm-0

Partition dm-0:

[root@hwic01 dev]# fdisk /dev/mapper/mpatha
Welcome to fdisk (util-linux 2.23.2).
Changes will remain in memory only, until you decide to write them.
Be careful before using the write command.
Device does not contain a recognized partition table
Building a new DOS disklabel with disk identifier 0x44f032cb.
Command (m for help): n
Partition type:
   p   primary (0 primary, 0 extended, 4 free)
   e   extended
Select (default p): p
Partition number (1-4, default 1):
First sector (2048-2097151, default 2048):
Using default value 2048
Last sector, +sectors or +size{K,M,G} (2048-2097151, default 2097151):
Using default value 2097151
Partition 1 of type Linux and of size 1023 MiB is set

Command (m for help): p

Disk /dev/dm-0: 1073 MB, 1073741824 bytes, 2097152 sectors
Units = sectors of 1 * 512 = 512 bytes
Sector size (logical/physical): 512 bytes / 512 bytes
I/O size (minimum/optimal): 512 bytes / 512 bytes
Disk label type: dos
Disk identifier: 0x44f032cb

     Device Boot      Start         End      Blocks   Id  System
mpatha1                2048     2097151     1047552   83  Linux

Command (m for help): w
The partition table has been altered!

Format it:

[root@hwic01 mapper]# mkfs.ext4 /dev/mapper/mpatha1

Create a mount point, mount it, and verify:

[root@hwic01 mapper]# mkdir /iscsi
[root@hwic01 mapper]# mount /dev/mapper/mpatha1 /iscsi
[root@hwic01 mapper]# df -h
Filesystem           Size  Used Avail Use% Mounted on
/dev/sda1             30G  1.2G   29G   5% /
/dev/sdb1             69G   53M   66G   1% /mnt/resource
/dev/mapper/mpatha1  985M  2.5M  915M   1% /iscsi
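This mount is also lost on reboot. A sketch of a persistent entry; `_netdev` makes systemd wait for the network (and hence iscsid and multipathd) before attempting the mount:

```shell
echo '/dev/mapper/mpatha1 /iscsi ext4 _netdev 0 0' | sudo tee -a /etc/fstab >/dev/null
sudo umount /iscsi && sudo mount -a   # confirm the entry works
```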

IV. Check the HA behavior and other properties of the iSCSI disk

1. HA test

Stop the target service on iSCSI server 1:

[root@hwis01 target]# systemctl stop target
[root@hwis01 target]# systemctl status target
target.service - Restore LIO kernel target configuration
   Loaded: loaded (/usr/lib/systemd/system/target.service; enabled; vendor preset: disabled)
   Active: inactive (dead); 7s ago
  Process: ExecStop=/usr/bin/targetctl clear (code=exited, status=0/SUCCESS)
  Process: ExecStart=/usr/bin/targetctl restore (code=exited, status=0/SUCCESS)

Nov hwis01 systemd[1]: Starting Restore LIO kernel target configuration...
Nov hwis01 systemd[1]: Started Restore LIO kernel target configuration.
Nov hwis01 systemd[1]: Stopping Restore LIO kernel target configuration...
Nov hwis01 systemd[1]: Stopped Restore LIO kernel target configuration.

Check on the iSCSI client:

[root@hwic01 iscsi]# multipath -l
mpatha (36001405acadb3f79a2d44f48cafde627) dm-0 LIO-ORG ,disk01
size=1.0G features='0' hwhandler='0' wp=rw
`-+- policy='service-time 0' prio=0 status=active
  |- 2:0:0:0 sdc 8:32 failed faulty running
  `- 3:0:0:0 sdd 8:48 active undef running

One of the two paths has failed, but the disk keeps working normally:

[root@hwic01 iscsi]# ll
total 16
-rw-r--r--. 1 root root     0 Nov  a
drwx------. 2 root root 16384 Nov  lost+found
[root@hwic01 iscsi]# touch b
[root@hwic01 iscsi]# ll
total 16
-rw-r--r--. 1 root root     0 Nov  a
-rw-r--r--. 1 root root     0 Nov  b
drwx------. 2 root root 16384 Nov  lost+found

Then restore the service:

[root@hwis01 target]# systemctl start target
[root@hwis01 target]# systemctl status target
target.service - Restore LIO kernel target configuration
   Loaded: loaded (/usr/lib/systemd/system/target.service; enabled; vendor preset: disabled)
   Active: active (exited); 3s ago
  Process: ExecStop=/usr/bin/targetctl clear (code=exited, status=0/SUCCESS)
  Process: ExecStart=/usr/bin/targetctl restore (code=exited, status=0/SUCCESS)

Nov hwis01 systemd[1]: Starting Restore LIO kernel target configuration...
Nov hwis01 systemd[1]: Started Restore LIO kernel target configuration.

On the client, multipath returns to normal:

[root@hwic01 iscsi]# multipath -l
mpatha (36001405acadb3f79a2d44f48cafde627) dm-0 LIO-ORG ,disk01
size=1.0G features='0' hwhandler='0' wp=rw
`-+- policy='service-time 0' prio=0 status=active
  |- 2:0:0:0 sdc 8:32 active undef running
  `- 3:0:0:0 sdd 8:48 active undef running

2. IOPS

Run an IOPS test using the method from the earlier article on IOPS testing:

[root@hwic01 ~]# ./iops.py /dev/dm-0
/dev/dm-0, 1.07 G, sectorsize=512B, #threads=, pattern=random:
 512  B blocks: 383.1 IO/s, 196.1 kB/s (  1.6 Mbit/s)
   1 kB blocks: 548.5 IO/s, 561.6 kB/s (  4.5 Mbit/s)
   2 kB blocks: 495.8 IO/s,   1.0 MB/s (  8.1 Mbit/s)
   4 kB blocks: 414.1 IO/s,   1.7 MB/s ( 13.6 Mbit/s)
   8 kB blocks: 376.2 IO/s,   3.1 MB/s ( 24.7 Mbit/s)
  16 kB blocks: 357.5 IO/s,   5.9 MB/s ( 46.9 Mbit/s)
  32 kB blocks: 271.0 IO/s,   8.9 MB/s ( 71.0 Mbit/s)
  64 kB blocks: 223.0 IO/s,  14.6 MB/s (116.9 Mbit/s)
 128 kB blocks: 181.2 IO/s,  23.7 MB/s (190.0 Mbit/s)
 256 kB blocks: 137.7 IO/s,  36.1 MB/s (288.9 Mbit/s)
 512 kB blocks:  95.0 IO/s,  49.8 MB/s (398.6 Mbit/s)
   1 MB blocks:  55.4 IO/s,  58.1 MB/s (465.0 Mbit/s)
   2 MB blocks:  37.5 IO/s,  78.7 MB/s (629.9 Mbit/s)
   4 MB blocks:  24.8 IO/s, 103.8 MB/s (830.6 Mbit/s)
   8 MB blocks:  16.6 IO/s, 139.2 MB/s (  1.1 Gbit/s)
  16 MB blocks:  11.2 IO/s, 188.7 MB/s (  1.5 Gbit/s)
  32 MB blocks:   5.7 IO/s, 190.0 MB/s (  1.5 Gbit/s)

This disk delivers roughly 500 IOPS at small block sizes and peaks at about 1.5 Gbps of bandwidth.
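If iops.py is not at hand, fio gives comparable numbers (`yum install -y fio`). A sketch of a 30-second 4 kB random-read job against the multipath device; the results will vary with VM size and load:

```shell
fio --name=randread --filename=/dev/mapper/mpatha --readonly \
    --rw=randread --bs=4k --direct=1 --ioengine=libaio \
    --iodepth=32 --runtime=30 --time_based
```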

Summary:

With Azure File Service, a file on the share can be published as an iSCSI disk for iSCSI clients to mount. With multiple iSCSI servers publishing the same file for redundancy, the multipath software turns the paths into a single highly available iSCSI disk.

This approach is a good fit for clusters that need shared data disks or quorum disks.
