Understanding the Space Used by ZFS

By Brian Leonard on Sep 28, 2010

Until recently, I've been confused and frustrated by the zfs list output as I've tried to free up space on my hard drive.

Take this example using a 1 GB zpool. (The backing file is created under /dev/dsk so that zpool create can find it by the short name disk1.)

bleonard@os200906:~# mkfile 1G /dev/dsk/disk1
bleonard@os200906:~# zpool create tank disk1
bleonard@os200906:~# zpool list tank
NAME   SIZE  USED  AVAIL  CAP  HEALTH  ALTROOT
tank  1016M   73K  1016M   0%  ONLINE  -

Now let's create some files and snapshots:

bleonard@os200906:~# mkfile 100M /tank/file1
bleonard@os200906:~# zfs snapshot tank@snap1
bleonard@os200906:~# mkfile 100M /tank/file2
bleonard@os200906:~# zfs snapshot tank@snap2
bleonard@os200906:~# zfs list -t all -r tank
NAME        USED  AVAIL  REFER  MOUNTPOINT
tank        200M   784M   200M  /tank
tank@snap1   17K      -   100M  -
tank@snap2     0      -   200M  -

The output here looks as I'd expect. I've used 200 MB of disk space, none of which is charged to the snapshots. snap1 refers to 100 MB of data (file1) and snap2 refers to 200 MB of data (file1 and file2).

Now let's delete file1 and look at our zfs list output again:

bleonard@os200906:~# rm /tank/file1
bleonard@os200906:~# zpool scrub tank
bleonard@os200906:~# zfs list -t all -r tank
NAME        USED  AVAIL  REFER  MOUNTPOINT
tank        200M   784M   100M  /tank
tank@snap1   17K      -   100M  -
tank@snap2   17K      -   200M  -

Only one thing has changed: tank now refers to just 100 MB of data. file1 has been deleted and is now referenced only by the snapshots. So why don't the snapshots reflect this in their USED column? You might expect snap1 to show 100 MB used; however, that would be misleading, because deleting snap1 would have no effect on the space consumed by the tank file system. file1 is still referenced by snap2, so destroying snap1 would only free about 17 KB of disk space. We'll come back to this test case in a moment.
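On later ZFS releases, you can ask zfs destroy itself how much space a deletion would free, before committing to it. This is a sketch using the -n (dry-run) and -v (verbose) flags; check your zfs(1M) man page first, as older OpenSolaris builds may not support them:

```shell
# Dry run: report the space that destroying snap1 would reclaim,
# without actually destroying anything.
zfs destroy -nv tank@snap1
```

On this test pool the reported reclaim would be expected to be on the order of the 17K shown in snap1's USED column, confirming that the snapshot is cheap both to keep and to delete.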

There is an option to get more detail on the space consumed by snapshots. Although you can fairly easily deduce from the example above that the snapshots are using 100 MB, the -o space option saves you from doing the math:

bleonard@os200906:~# zfs list -t all -o space -r tank
NAME        AVAIL  USED  USEDSNAP  USEDDS  USEDREFRESERV  USEDCHILD
tank         784M  200M      100M    100M              0       154K
tank@snap1      -   17K         -       -              -          -
tank@snap2      -   17K         -       -              -          -

Here we can clearly see that of the 200 MB used by our file system, 100 MB is used by snapshots (file1) and 100 MB is used by the dataset itself (file2). Of course, there are other factors that can affect the total amount used - see the zfs man page for details.
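The -o space columns are also exposed as per-dataset properties, so you can query individual values with zfs get rather than reading them out of the listing; for example:

```shell
# usedbysnapshots and usedbydataset correspond to the USEDSNAP and
# USEDDS columns of zfs list -o space.
zfs get usedbysnapshots,usedbydataset tank
```

Adding -Hp prints exact byte counts without headers, which is handy for scripting.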

Now, if we were to delete snap1 (we know this is safe, because destroying it would free almost no space):

bleonard@os200906:~# zfs destroy tank@snap1
bleonard@os200906:~# zfs list -t all -r tank
NAME        USED  AVAIL  REFER  MOUNTPOINT
tank        200M   784M   100M  /tank
tank@snap2  100M      -   200M  -

We can see that snap2 now shows 100 MB used. If I were to delete snap2, I would be deleting 100 MB of data, reclaiming 100 MB of space.

Now let's look at a more realistic example - my home directory where I have Time Slider running:

bleonard@opensolaris:~$ zfs list -t all -r -o space rpool/export/home
NAME                                                       AVAIL   USED  USEDSNAP  USEDDS  USEDREFRESERV  USEDCHILD
rpool/export/home                                          25.4G  35.2G     17.9G   17.3G              0          0
rpool/export/home@zfs-auto-snap:monthly-2010-08-03-09:30       -   166M         -       -              -          -
rpool/export/home@zfs-backup-2010-08-12-15:30                  -  5.06M         -       -              -          -
rpool/export/home@zfs-backup-2010-08-12-15:56                  -  5.15M         -       -              -          -
rpool/export/home@zfs-backup-2010-08-31-14:12                  -  54.6M         -       -              -          -
rpool/export/home@zfs-auto-snap:monthly-2010-09-01-00:00       -  53.8M         -       -              -          -
rpool/export/home@zfs-auto-snap:weekly-2010-09-08-00:00        -  95.8M         -       -              -          -
rpool/export/home@zfs-backup-2010-09-09-09:04                  -  53.9M         -       -              -          -
rpool/export/home@zfs-auto-snap:weekly-2010-09-15-00:00        -  2.06G         -       -              -          -
rpool/export/home@zfs-auto-snap:weekly-2010-09-22-00:00        -  89.7M         -       -              -          -
rpool/export/home@zfs-auto-snap:frequent-2010-09-28-11:00      -  18.3M         -       -              -          -
rpool/export/home@zfs-auto-snap:frequent-2010-09-28-11:15      -   293K         -       -              -          -
rpool/export/home@zfs-auto-snap:frequent-2010-09-28-11:30      -   293K         -       -              -          -
rpool/export/home@zfs-auto-snap:frequent-2010-09-28-11:45      -  1.18M         -       -              -          -

My snapshots are consuming almost 18 GB of space. However, summing the USED column suggests I could reclaim only about 2.5 GB by deleting snapshots one at a time. In reality, the remaining 15.5 GB is referenced by two or more snapshots, so it isn't charged to any single snapshot's USED value.
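Newer ZFS releases can also estimate the combined reclaim for a whole range of snapshots in one dry run, using the oldest%newest range syntax with zfs destroy. This is a sketch; the % range form and the -n flag are not available on all builds, so verify against your zfs(1M) man page before relying on it:

```shell
# Dry run over every snapshot from the oldest monthly through the
# 11:45 frequent snapshot; -nv prints the total space that would be
# reclaimed if all of them were destroyed together.
zfs destroy -nv \
    rpool/export/home@zfs-auto-snap:monthly-2010-08-03-09:30%zfs-auto-snap:frequent-2010-09-28-11:45
```

Because the dry run accounts for blocks shared between snapshots, its total can be far larger than the sum of the individual USED values.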

I can get a better idea of which snapshots might reclaim the most space by dropping the -o space option so that the REFER column appears in the output:

bleonard@opensolaris:~$ zfs list -t all -r rpool/export/home
NAME                                                        USED  AVAIL  REFER  MOUNTPOINT
rpool/export/home                                          35.2G  25.4G  17.3G  /export/home
rpool/export/home@zfs-auto-snap:monthly-2010-08-03-09:30    166M      -  15.5G  -
rpool/export/home@zfs-backup-2010-08-12-15:30              5.06M      -  28.5G  -
rpool/export/home@zfs-backup-2010-08-12-15:56              5.15M      -  28.5G  -
rpool/export/home@zfs-backup-2010-08-31-14:12              54.6M      -  15.5G  -
rpool/export/home@zfs-auto-snap:monthly-2010-09-01-00:00   53.8M      -  15.5G  -
rpool/export/home@zfs-auto-snap:weekly-2010-09-08-00:00    95.8M      -  15.5G  -
rpool/export/home@zfs-backup-2010-09-09-09:04              53.9M      -  17.4G  -
rpool/export/home@zfs-auto-snap:weekly-2010-09-15-00:00    2.06G      -  19.4G  -
rpool/export/home@zfs-auto-snap:weekly-2010-09-22-00:00    89.7M      -  15.5G  -
rpool/export/home@zfs-auto-snap:frequent-2010-09-28-11:15   293K      -  17.3G  -
rpool/export/home@zfs-auto-snap:frequent-2010-09-28-11:30   293K      -  17.3G  -
rpool/export/home@zfs-auto-snap:frequent-2010-09-28-11:45  1.18M      -  17.3G  -
rpool/export/home@zfs-auto-snap:hourly-2010-09-28-12:00        0      -  17.3G  -
rpool/export/home@zfs-auto-snap:frequent-2010-09-28-12:00      0      -  17.3G  -

In the above output, I can see that two snapshots, taken 26 minutes apart, each refer to 28.5 GB of disk space. If I were to delete one of those snapshots and check the zfs list output again:

bleonard@opensolaris:~$ pfexec zfs destroy rpool/export/home@zfs-backup-2010-08-12-15:30
bleonard@opensolaris:~$ zfs list -t all -r rpool/export/home
NAME                                                        USED  AVAIL  REFER  MOUNTPOINT
rpool/export/home                                          35.2G  25.4G  17.3G  /export/home
rpool/export/home@zfs-auto-snap:monthly-2010-08-03-09:30    166M      -  15.5G  -
rpool/export/home@zfs-backup-2010-08-12-15:56              12.5G      -  28.5G  -
rpool/export/home@zfs-backup-2010-08-31-14:12              54.6M      -  15.5G  -
rpool/export/home@zfs-auto-snap:monthly-2010-09-01-00:00   53.8M      -  15.5G  -
rpool/export/home@zfs-auto-snap:weekly-2010-09-08-00:00    95.8M      -  15.5G  -
rpool/export/home@zfs-backup-2010-09-09-09:04              53.9M      -  17.4G  -
rpool/export/home@zfs-auto-snap:weekly-2010-09-15-00:00    2.06G      -  19.4G  -
rpool/export/home@zfs-auto-snap:weekly-2010-09-22-00:00    89.7M      -  15.5G  -
rpool/export/home@zfs-auto-snap:frequent-2010-09-28-11:15   293K      -  17.3G  -
rpool/export/home@zfs-auto-snap:frequent-2010-09-28-11:30   293K      -  17.3G  -
rpool/export/home@zfs-auto-snap:frequent-2010-09-28-11:45  1.18M      -  17.3G  -
rpool/export/home@zfs-auto-snap:frequent-2010-09-28-12:00   537K      -  17.3G  -

I can now clearly see that the remaining snapshot is using 12.5 GB of space, and deleting it would reclaim much-needed room on my laptop:

bleonard@opensolaris:~$ zpool list rpool
NAME   SIZE  USED  AVAIL  CAP  HEALTH  ALTROOT
rpool  149G  120G  28.5G  80%  ONLINE  -
bleonard@opensolaris:~$ pfexec zfs destroy rpool/export/home@zfs-backup-2010-08-12-15:56
bleonard@opensolaris:~$ zpool list rpool
NAME   SIZE  USED  AVAIL  CAP  HEALTH  ALTROOT
rpool  149G  108G  41.0G  72%  ONLINE  -

And that should be enough to keep Time Slider humming along smoothly and prevent the warning dialog from appearing (lucky you if you haven't seen that yet).

 
 
https://blogs.oracle.com/observatory/entry/understanding_the_space_used_by
