Understanding the Space Used by ZFS (reposted)
By Brian Leonard on Sep 28, 2010
Until recently, I was confused and frustrated by the zfs list output whenever I tried to clear up space on my hard drive.
Take this example using a 1 GB zpool:
bleonard@os200906:~# mkfile 1G /dev/dsk/disk1
bleonard@os200906:~# zpool create tank disk1
bleonard@os200906:~# zpool list tank
NAME    SIZE   USED  AVAIL  CAP  HEALTH  ALTROOT
tank   1016M    73K  1016M   0%  ONLINE  -
Now let's create some files and snapshots:
bleonard@os200906:~# mkfile 100M /tank/file1
bleonard@os200906:~# zfs snapshot tank@snap1
bleonard@os200906:~# mkfile 100M /tank/file2
bleonard@os200906:~# zfs snapshot tank@snap2
bleonard@os200906:~# zfs list -t all -r tank
NAME         USED  AVAIL  REFER  MOUNTPOINT
tank         200M   784M   200M  /tank
tank@snap1    17K      -   100M  -
tank@snap2      0      -   200M  -
The output here looks as I'd expect. I have used 200 MB of disk space, essentially none of which is charged to the snapshots. snap1 refers to 100 MB of data (file1) and snap2 refers to 200 MB of data (file1 and file2).
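As an aside, the same accounting is available through individual ZFS properties, which is handy when you want a single number rather than a whole table. A minimal sketch using the documented used, referenced and usedbysnapshots properties (the figures will vary with your pool):

zfs get -r used,referenced tank    # per-dataset and per-snapshot accounting
zfs get usedbysnapshots tank       # space consumed solely by tank's snapshots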
Now let's delete file1 and look at our zfs list output again:
bleonard@os200906:~# rm /tank/file1
bleonard@os200906:~# zpool scrub tank
bleonard@os200906:~# zfs list -t all -r tank
NAME         USED  AVAIL  REFER  MOUNTPOINT
tank         200M   784M   100M  /tank
tank@snap1    17K      -   100M  -
tank@snap2    17K      -   200M  -
Only one thing has changed: tank now refers to just 100 MB of data. file1 has been deleted and is referenced only by the snapshots. So why don't the snapshots reflect this in their USED column? You might expect snap1 to show 100 MB used; however, that would be misleading, because deleting snap1 has no effect on the space consumed by the tank file system. Deleting snap1 would free up only 17K of disk space, since file1 is still held by snap2. We'll come back to this test case in a moment.
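Later ZFS releases let you verify this kind of claim before pulling the trigger, via a dry-run mode on zfs destroy. A sketch, assuming your zfs supports the -n and -v flags:

zfs destroy -nv tank@snap1
# -n performs a dry run (nothing is destroyed); -v reports what would be
# destroyed and an estimate of the space reclaimed - here roughly the 17K
# shown in the USED column.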
There is an option to get more detail on the space consumed by the snapshots. Although you can easily deduce from the example above that the snapshots are using 100 MB, the -o space option to zfs list saves you from doing the math:
bleonard@os200906:~# zfs list -t all -o space -r tank
NAME        AVAIL  USED  USEDSNAP  USEDDS  USEDREFRESERV  USEDCHILD
tank         784M  200M      100M    100M              0       154K
tank@snap1      -   17K         -       -              -          -
tank@snap2      -   17K         -       -              -          -
Here we can clearly see that of the 200 MB used by our file system, 100 MB is used by snapshots (file1) and 100 MB is used by the dataset itself (file2). Of course, there are other factors that can affect the total amount used - see the zfs man page for details.
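If you script around this accounting, the listing can be made machine-friendly. A sketch, assuming your zfs list supports -H (no header, tab-separated output) and -p (exact byte values):

zfs list -Hp -o name,usedbysnapshots,usedbydataset -r tank
# One tab-separated line per dataset with exact byte counts, so a script
# can decide when snapshot space is worth chasing.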
Now, if we were to delete snap1 (we know this is safe, because it's not using any space):
bleonard@os200906:~# zfs destroy tank@snap1
bleonard@os200906:~# zfs list -t all -r tank
NAME         USED  AVAIL  REFER  MOUNTPOINT
tank         200M   784M   100M  /tank
tank@snap2   100M      -   200M  -
We can see that snap2 now shows 100 MB used. If I were to delete snap2, I would be deleting 100 MB of data (and reclaiming 100 MB of space).
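Newer ZFS releases approach the same question from the other direction with a written property (and a written@<snapshot> variant), which reports how much new data was written between snapshots. A sketch, assuming your zfs build has the property:

zfs get -r written tank
# For each snapshot, 'written' shows the data created between the previous
# snapshot and this one - another way to spot which snapshot pins the space.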
Now let's look at a more realistic example - my home directory where I have Time Slider running:
bleonard@opensolaris:~$ zfs list -t all -r -o space rpool/export/home
NAME                                                        AVAIL   USED  USEDSNAP  USEDDS  USEDREFRESERV  USEDCHILD
rpool/export/home                                           25.4G  35.2G     17.9G   17.3G              0          0
rpool/export/home@zfs-auto-snap:monthly-2010-08-03-09:30        -   166M         -       -              -          -
rpool/export/home@zfs-backup-2010-08-12-15:30                   -  5.06M         -       -              -          -
rpool/export/home@zfs-backup-2010-08-12-15:56                   -  5.15M         -       -              -          -
rpool/export/home@zfs-backup-2010-08-31-14:12                   -  54.6M         -       -              -          -
rpool/export/home@zfs-auto-snap:monthly-2010-09-01-00:00        -  53.8M         -       -              -          -
rpool/export/home@zfs-auto-snap:weekly-2010-09-08-00:00         -  95.8M         -       -              -          -
rpool/export/home@zfs-backup-2010-09-09-09:04                   -  53.9M         -       -              -          -
rpool/export/home@zfs-auto-snap:weekly-2010-09-15-00:00         -  2.06G         -       -              -          -
rpool/export/home@zfs-auto-snap:weekly-2010-09-22-00:00         -  89.7M         -       -              -          -
rpool/export/home@zfs-auto-snap:frequent-2010-09-28-11:00       -  18.3M         -       -              -          -
rpool/export/home@zfs-auto-snap:frequent-2010-09-28-11:15       -   293K         -       -              -          -
rpool/export/home@zfs-auto-snap:frequent-2010-09-28-11:30       -   293K         -       -              -          -
rpool/export/home@zfs-auto-snap:frequent-2010-09-28-11:45       -  1.18M         -       -              -          -
My snapshots are consuming almost 18 GB of space. However, summing the USED column suggests I could reclaim only about 2.5 GB by deleting snapshots. In reality, the remaining 15.5 GB is referenced by two or more snapshots and is therefore charged to none of them individually; deleting all of the snapshots would free the full 17.9 GB.
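Before destroying snapshots in bulk it's worth estimating the combined effect, since shared space is only freed once every snapshot referencing it is gone. Newer ZFS releases accept a percent-delimited snapshot range together with the dry-run flags; a sketch, assuming your zfs supports both:

zfs destroy -nv rpool/export/home@zfs-auto-snap:monthly-2010-08-03-09:30%zfs-auto-snap:weekly-2010-09-22-00:00
# Dry run over the whole range: lists every snapshot that would be destroyed
# and the total space reclaimed, correctly accounting for shared blocks.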
I can get a better idea of which snapshots might reclaim the most space by removing the space option so I get the REFER field in the output:
bleonard@opensolaris:~$ zfs list -t all -r rpool/export/home
NAME                                                        USED  AVAIL  REFER  MOUNTPOINT
rpool/export/home                                          35.2G  25.4G  17.3G  /export/home
rpool/export/home@zfs-auto-snap:monthly-2010-08-03-09:30    166M      -  15.5G  -
rpool/export/home@zfs-backup-2010-08-12-15:30              5.06M      -  28.5G  -
rpool/export/home@zfs-backup-2010-08-12-15:56              5.15M      -  28.5G  -
rpool/export/home@zfs-backup-2010-08-31-14:12              54.6M      -  15.5G  -
rpool/export/home@zfs-auto-snap:monthly-2010-09-01-00:00   53.8M      -  15.5G  -
rpool/export/home@zfs-auto-snap:weekly-2010-09-08-00:00    95.8M      -  15.5G  -
rpool/export/home@zfs-backup-2010-09-09-09:04              53.9M      -  17.4G  -
rpool/export/home@zfs-auto-snap:weekly-2010-09-15-00:00    2.06G      -  19.4G  -
rpool/export/home@zfs-auto-snap:weekly-2010-09-22-00:00    89.7M      -  15.5G  -
rpool/export/home@zfs-auto-snap:frequent-2010-09-28-11:15   293K      -  17.3G  -
rpool/export/home@zfs-auto-snap:frequent-2010-09-28-11:30   293K      -  17.3G  -
rpool/export/home@zfs-auto-snap:frequent-2010-09-28-11:45  1.18M      -  17.3G  -
rpool/export/home@zfs-auto-snap:hourly-2010-09-28-12:00        0      -  17.3G  -
rpool/export/home@zfs-auto-snap:frequent-2010-09-28-12:00      0      -  17.3G  -
In the above output, I can see that two snapshots, taken 26 minutes apart, each refer to 28.5 GB of disk space. If I were to delete one of those snapshots and check the zfs list output again:
bleonard@opensolaris:~$ pfexec zfs destroy rpool/export/home@zfs-backup-2010-08-12-15:30
bleonard@opensolaris:~$ zfs list -t all -r rpool/export/home
NAME                                                        USED  AVAIL  REFER  MOUNTPOINT
rpool/export/home                                          35.2G  25.4G  17.3G  /export/home
rpool/export/home@zfs-auto-snap:monthly-2010-08-03-09:30    166M      -  15.5G  -
rpool/export/home@zfs-backup-2010-08-12-15:56              12.5G      -  28.5G  -
rpool/export/home@zfs-backup-2010-08-31-14:12              54.6M      -  15.5G  -
rpool/export/home@zfs-auto-snap:monthly-2010-09-01-00:00   53.8M      -  15.5G  -
rpool/export/home@zfs-auto-snap:weekly-2010-09-08-00:00    95.8M      -  15.5G  -
rpool/export/home@zfs-backup-2010-09-09-09:04              53.9M      -  17.4G  -
rpool/export/home@zfs-auto-snap:weekly-2010-09-15-00:00    2.06G      -  19.4G  -
rpool/export/home@zfs-auto-snap:weekly-2010-09-22-00:00    89.7M      -  15.5G  -
rpool/export/home@zfs-auto-snap:frequent-2010-09-28-11:15   293K      -  17.3G  -
rpool/export/home@zfs-auto-snap:frequent-2010-09-28-11:30   293K      -  17.3G  -
rpool/export/home@zfs-auto-snap:frequent-2010-09-28-11:45  1.18M      -  17.3G  -
rpool/export/home@zfs-auto-snap:frequent-2010-09-28-12:00   537K      -  17.3G  -
I can now clearly see that the remaining snapshot is using 12.5 GB of space, and deleting it would reclaim much-needed space on my laptop:
bleonard@opensolaris:~$ zpool list rpool
NAME    SIZE  USED  AVAIL  CAP  HEALTH  ALTROOT
rpool   149G  120G  28.5G  80%  ONLINE  -
bleonard@opensolaris:~$ pfexec zfs destroy rpool/export/home@zfs-backup-2010-08-12-15:56
bleonard@opensolaris:~$ zpool list rpool
NAME    SIZE  USED  AVAIL  CAP  HEALTH  ALTROOT
rpool   149G  108G  41.0G  72%  ONLINE  -
And that should be enough to keep Time Slider humming along smoothly and prevent the warning dialog from appearing (lucky you if you haven't seen that yet).
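If you'd rather head that dialog off entirely, the pool's fill level is easy to poll. A minimal sketch (the pool name and the 80% threshold are arbitrary choices for illustration, not anything Time Slider mandates):

#!/bin/sh
# Warn when a ZFS pool crosses a capacity threshold.
POOL=rpool
THRESHOLD=80
CAP=$(zpool list -H -o capacity "$POOL" | tr -d '%')  # e.g. "72"
if [ "$CAP" -ge "$THRESHOLD" ]; then
    echo "warning: $POOL is ${CAP}% full - consider pruning snapshots" >&2
fi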