UNDERSTANDING POSTGRESQL.CONF: CHECKPOINT_SEGMENTS, CHECKPOINT_TIMEOUT, CHECKPOINT_WARNING
While there are some docs on it, I decided to write about it in perhaps more accessible language – not as a developer, but as a PostgreSQL user.
Some parts (quite large parts) were described in one of my earlier posts, but I'll try to concentrate on WAL itself, and show a bit more here.
Before I go into how it's written, and how to check various things, let's first think about a simple question: why does such a thing even exist? What does it do?
Let's imagine you have a 1GB file, and you want to change 100kB of it, starting at some defined offset.
Theoretically it's simple – open the file for writing (without truncating it), fseek() to the appropriate location, and write the new 100kB. Then close the file, and open a beer – job done.
But is it? What happens if the power goes down in the middle of the write? And you have no UPS?
It's actually a pretty bad situation – let's assume it happened in the middle of the process. Your program got killed (well, the machine got turned off), and the data on disk contains half old data and half new data!
Of course you could argue that this is why we have UPSes, but reality is a bit more complicated – various disks or storage controllers have write caches, and can lie to the operating system (which then lies to the application) that the data has been written while it's still in cache, and then the problem can strike again.
So, a solution has to be found that makes sure we will have either the new data, or the old data, but not a mix of them.
The solution here is relatively simple. Aside from this 1GB data file, store an additional file, which never gets overwritten, only appended to. And change your process to:
- open the log file in append mode
- write to the log file the information "Will write this data (here goes the data) to this file (path) at offset (offset)"
- close the log file
- make sure that the log file actually got written to disk – call fsync() and hope that the disks do it properly
- change the data file normally
- mark the operation in the log file as done
The last part can be done simply by storing somewhere the location of the last applied change from the log file.
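The steps above can be sketched in a few lines of Python. This is purely an illustration of the write-ahead idea – the file layout, the JSON record format, and the function name are all hypothetical, not anything PostgreSQL actually does:

```python
import json
import os

def apply_change(data_path, log_path, offset, new_data):
    # 1. Append an intent record to the log and force it to disk.
    record = {"file": data_path, "offset": offset,
              "data": new_data.hex(), "done": False}
    with open(log_path, "a") as log:
        log.write(json.dumps(record) + "\n")
        log.flush()
        os.fsync(log.fileno())      # hope the disks do it properly

    # 2. Only now touch the data file itself.
    with open(data_path, "r+b") as f:
        f.seek(offset)
        f.write(new_data)
        f.flush()
        os.fsync(f.fileno())

    # 3. Mark the operation as done (here: a completion record; a real
    #    implementation would store the last-applied log location).
    with open(log_path, "a") as log:
        log.write(json.dumps({"done": True}) + "\n")
        log.flush()
        os.fsync(log.fileno())
```

If the crash happens during step 1, recovery ignores the half-written record; during step 2 or 3, recovery replays the intent record – either way the data file ends up consistent.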
Now, let's think about power failure. What happens if it strikes while writing to the log file? Nothing – the data in the real file didn't get damaged, and your program just has to be smart enough to ignore not-fully-written log entries. And what happens if the power outage strikes while changing data in your real file? That's simple – on the next start of your program, you check the log for any changes that should be applied but aren't, and you apply them. When the program starts, the content of the data file is broken, but it gets fixed fast.
And if the power breaks while marking the operation as done? No problem – on the next start the operation will simply be redone.
Thanks to this we are (reasonably) safe from such failures. It has other benefits too, but those will come later.
So, now that we know what the purpose of WAL is (in case it wasn't clear: to protect your data from hardware/power failures), let's think about the how.
PostgreSQL uses so-called "pages". All things aside, a page is simply 8kB of data. That's why table/index files always have sizes divisible by 8192 (8kB) – you can't have a table that is 10.5 pages in size; it's 11 pages. Not all of the pages are full. Not all are even guaranteed to be used (they could contain data that got removed).
All I/O operations use pages. I.e. to get an INT4 value from a table, PostgreSQL reads 8kB of data (at least).
So, what happens when you INSERT a new row? First PostgreSQL finds which page to put it in. It might be a newly created page if all pages of the table are full, or it could be some existing page if it has free space.
After the page has been chosen, PostgreSQL loads it into memory (shared_buffers), and makes the changes there. Information about all changes gets logged to WAL, but this is done by a simple write() (without a call to fsync, yet), so it's very fast.
Now, when you issue COMMIT; (which could happen automatically if it's an auto-commit connection), PostgreSQL actually does fsync() on the WAL. And that's all. The data is not written to your table file.
At this moment, the changes you wanted (new row) are in 2 places:
- modified copy of table page in shared_buffers
- record in WAL with information about the change
Much of the flushing of WAL is done not by individual backends, but by a specialized process – the WAL writer, which you can see for example here:
=$ ps uxf
USER PID %CPU %MEM VSZ RSS TTY STAT START TIME COMMAND
pgdba 0.0 0.0 ? S : : sshd: pgdba@pts/
pgdba 0.0 0.0 pts/ Ss : : \_ -bash
pgdba 0.0 0.0 pts/ R+ : : \_ ps uxf
pgdba 0.0 0.0 ? S Jul13 : /home/pgdba/work/bin/postgres
pgdba 0.0 0.0 ? Ss Jul13 : \_ postgres: logger process
pgdba 0.5 0.2 ? Ss Jul13 : \_ postgres: writer process
pgdba 0.0 0.0 ? Ss Jul13 : \_ postgres: wal writer process
pgdba 0.0 0.0 ? Ss Jul13 : \_ postgres: autovacuum launcher process
pgdba 0.0 0.0 ? Ss Jul13 : \_ postgres: archiver process last was
pgdba 0.0 0.0 ? Ss Jul13 : \_ postgres: stats collector process
Thanks to this, writing to WAL is cheap – simple, plain appending of data, with no random I/O.
Now, if we'd continue the process for long time, we would have lots of modified pages in memory, and lots of records in WAL.
So, when is data written to actual disk pages of tables?
Two situations:
- page swap
- checkpoint
Page swap is a very simple process – let's assume we had shared_buffers set to 10 pages, all of these buffers are taken by 10 different pages, and all are modified. And now, due to user activity, PostgreSQL has to load another page to get data from it. What happens? Simple – one of the pages gets evicted from memory, and the new page is loaded. If the page that got evicted was "dirty" (which means it contained changes that weren't yet saved to the table file), it is first written to the table file.
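The eviction logic can be sketched as a toy buffer pool. This is a deliberately simplified illustration – the class name is made up, and PostgreSQL's real buffer manager uses a clock-sweep algorithm rather than strict LRU:

```python
from collections import OrderedDict

PAGE_SIZE = 8192  # 8kB, as described above

class BufferPool:
    def __init__(self, capacity, storage):
        self.capacity = capacity
        self.storage = storage        # page_no -> bytes; stands in for "disk"
        self.buffers = OrderedDict()  # page_no -> (data, dirty flag)

    def get_page(self, page_no):
        if page_no in self.buffers:
            self.buffers.move_to_end(page_no)   # recently used
            return self.buffers[page_no][0]
        if len(self.buffers) >= self.capacity:
            # Evict the least recently used page; write it back first
            # if it is dirty -- this is the "page swap" from the text.
            old_no, (old_data, dirty) = self.buffers.popitem(last=False)
            if dirty:
                self.storage[old_no] = old_data
        data = self.storage.get(page_no, b"\x00" * PAGE_SIZE)
        self.buffers[page_no] = (data, False)
        return data

    def modify_page(self, page_no, data):
        self.get_page(page_no)                  # load (may evict)
        self.buffers[page_no] = (data, True)    # mark dirty
```

With capacity 2, modifying pages 0 and 1 and then reading page 2 forces dirty page 0 back to storage before the new page is loaded.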
Checkpoint is much more interesting. Before we go into what it is, let's think about a theoretical scenario. You have a database that is 1GB in size, and your server has 10GB of RAM. Clearly you can keep all pages of the database in memory, so page swap never happens.
What would happen if you let the database run – with writes, changes, additions – for a long time? Theoretically all would be OK – all changes get logged to WAL, memory pages get modified, all good. Now imagine that after 24 hours of work, the system gets killed – again, power failure.
On the next start PostgreSQL would have to read, and apply, all changes from all WAL segments written in the last 24 hours! That's a lot of work, and it would make the startup of PostgreSQL take a looooong time.
To solve this problem, we have checkpoints. These usually happen automatically, but you can force one at will by issuing the CHECKPOINT command.
So, what is a checkpoint? A checkpoint does a very simple thing: it writes all dirty pages from memory to disk, marks them as "clean" in shared_buffers, and records that all of WAL up to now has been applied. This happens without any locking, of course. So, the immediate conclusion is that the amount of work a newly started PostgreSQL has to do is related to how much time passed between the last checkpoint and the moment PostgreSQL got stopped.
This brings us back to – when does it happen? Manual checkpoints are not common; usually one doesn't even think about them – it all happens in the background. How does PostgreSQL know when to checkpoint, then? Simple – thanks to two configuration parameters:
- checkpoint_segments
- checkpoint_timeout
And here, we have to learn a bit about segments.
As I wrote earlier – WAL is (in theory) an infinite file that only ever gets new data appended, and is never overwritten.
While this is nice in theory, practice is a bit more complex. For example – there is no real use for WAL data that was logged before the last checkpoint. And files of infinite size are (at least for now) not possible.
So the PostgreSQL developers decided to split this infinite WAL into segments. Each segment has its consecutive number, and is 16MB in size. When one segment is full, PostgreSQL simply switches to the next.
Now that we know what segments are, we can understand what checkpoint_segments is about. It is a number (default: 3) which means: if that many segments have been filled since the last checkpoint, issue a new checkpoint.
With defaults, it means that if you insert data that takes (in PostgreSQL format) 6144 pages (a 16MB segment is 2048 pages, so 3 segments are 6144 pages) – it will automatically trigger a checkpoint.
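The arithmetic above is easy to reproduce (a rough equivalence, since WAL records carry some overhead beyond the raw page data):

```python
# Pages per WAL segment, and how many pages' worth of changes fit
# in the default 3 segments before an automatic checkpoint.
PAGE_SIZE = 8192                   # PostgreSQL page: 8kB
SEGMENT_SIZE = 16 * 1024 * 1024    # WAL segment: 16MB

pages_per_segment = SEGMENT_SIZE // PAGE_SIZE
print(pages_per_segment)           # 2048
print(3 * pages_per_segment)       # 6144 -- default checkpoint_segments worth
```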
The second parameter – checkpoint_timeout – is a time interval (default: 5 minutes); if this much time passes since the last checkpoint, a new checkpoint is issued. It has to be understood that, generally, the more often you checkpoint, the less invasive each checkpoint is.
This comes from a simple fact – generally, over time, more and more different pages get dirty. If you checkpointed every minute – only pages dirtied within that minute would have to be written to disk. 5 minutes – more pages. 1 hour – even more.
While checkpointing doesn't lock anything, it has to be understood that a checkpoint of (for example) 30 segments will cause a high-intensity write of 480MB of data to disk. And this might cause noticeable slowdowns for concurrent reads.
So far, I hope, it's pretty clear.
Now, next part of the jigsaw – wal segments.
These files (wal segments) reside in pg_xlog/ directory of PostgreSQL PGDATA:
=$ ls -l data/pg_xlog/
total
-rw------- pgdba pgdba -- :
-rw------- pgdba pgdba -- :
-rw------- pgdba pgdba -- : 00000001000000080000005A
-rw------- pgdba pgdba -- : 00000001000000080000005B
-rw------- pgdba pgdba -- : 00000001000000080000005C
-rw------- pgdba pgdba -- : 00000001000000080000005D
-rw------- pgdba pgdba -- : 00000001000000080000005E
-rw------- pgdba pgdba -- : 00000001000000080000005F
drwx------ pgdba pgdba -- : archive_status/
Each segment name contains 3 blocks of 8 hex digits. For example: 00000001000000080000005C means:
- 00000001 – timeline 1
- 00000008 – 8th block
- 0000005C – hex(5C) segment within block
Last part goes only from 00000000 to 000000FE (not FF!).
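The naming scheme, including the FE wraparound, can be sketched as a small helper function. The function is mine, purely for illustration – it mirrors the segment numbering described above for this era of PostgreSQL (where the last part skips FF):

```python
def next_wal_segment(name):
    """Given a 24-hex-digit WAL segment name (timeline, block, segment),
    return the name of the next segment. The segment part goes from
    00000000 to 000000FE, then wraps into the next block (FF is skipped)."""
    timeline = int(name[0:8], 16)
    block = int(name[8:16], 16)
    seg = int(name[16:24], 16)
    if seg == 0xFE:              # last usable segment within this block
        block, seg = block + 1, 0
    else:
        seg += 1
    return "%08X%08X%08X" % (timeline, block, seg)
```

For example, the segment after 00000001000000080000005C is 00000001000000080000005D, and the segment after 0000000100000008000000FE is 000000010000000900000000.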
The 2nd part of the filename, plus the 2 characters at the end of the 3rd part, give us the location in this theoretical infinite WAL file.
Within PostgreSQL we can always check what is current WAL location:
$ select pg_current_xlog_location();
pg_current_xlog_location
--------------------------
8/584A62E0
(1 row)
This means that we are now using file 000000010000000800000058, and PostgreSQL is writing at offset 4A62E0 in it – which is 4874976 in decimal; since the WAL segment is 16MB, that means the segment is now ~29% full.
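Picking a location like this apart can be done mechanically. Here's a small helper of mine (hypothetical, for illustration – it assumes 16MB segments and the naming scheme described above, with the timeline supplied separately):

```python
SEGMENT_SIZE = 16 * 1024 * 1024   # 16MB WAL segment

def xlog_location_details(location, timeline=1):
    """Split a location like '8/584A62E0' into the WAL segment file name,
    the byte offset within that segment, and the fill percentage."""
    block_hex, rec_hex = location.split("/")
    block = int(block_hex, 16)
    rec = int(rec_hex, 16)
    seg = rec // SEGMENT_SIZE          # top byte of the offset part
    offset = rec % SEGMENT_SIZE        # position within the segment
    filename = "%08X%08X%08X" % (timeline, block, seg)
    return filename, offset, 100.0 * offset / SEGMENT_SIZE
```

For '8/584A62E0' this yields file 000000010000000800000058, offset 4874976 (hex 4A62E0), about 29% of the segment.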
The most mysterious thing is the timeline. The timeline starts at 1, and increments (by one) every time you make a WAL slave of a server and this slave is promoted to standalone. Generally – within a given working server this value doesn't change.
All of this information can also be obtained using the pg_controldata program:
=$ pg_controldata data
pg_control version number:
Catalog version number:
Database system identifier:
Database cluster state: in production
pg_control last modified: Thu Jul :: AM CEST
Latest checkpoint location: 8/584A6318
Prior checkpoint location: 8/584A6288
Latest checkpoint's REDO location: 8/584A62E0
Latest checkpoint's TimeLineID: 1
Latest checkpoint's NextXID: 0/33611
Latest checkpoint's NextOID: 28047
Latest checkpoint's NextMultiXactId: 1
Latest checkpoint's NextMultiOffset: 0
Latest checkpoint's oldestXID: 727
Latest checkpoint's oldestXID's DB:
Latest checkpoint's oldestActiveXID: 33611
Time of latest checkpoint: Thu Jul :: AM CEST
Minimum recovery ending location: /
Backup start location: /
Current wal_level setting: hot_standby
Current max_connections setting:
Current max_prepared_xacts setting:
Current max_locks_per_xact setting:
Maximum data alignment:
Database block size:
Blocks per segment of large relation:
WAL block size:
Bytes per WAL segment:
Maximum length of identifiers:
Maximum columns in an index:
Maximum size of a TOAST chunk:
Date/time type storage: -bit integers
Float4 argument passing: by value
Float8 argument passing: by value
This has some interesting information – for example the location (in the WAL infinite-file) of the latest checkpoint, the prior checkpoint, and the REDO location.
The REDO location is very important – this is the place in WAL from which PostgreSQL will have to read if it got killed and restarted.
The values above don't differ much, because this is my test system which has no traffic right now, but on another machine we can see:
=> pg_controldata data/
...
Latest checkpoint location: 623C/E07AC698
Prior checkpoint location: 623C/DDD73588
Latest checkpoint's REDO location: 623C/DE0915B0
...
The last important thing is to understand what happens with obsolete WAL segments, and how "new" WAL segments are created.
Let me show you one thing again:
=$ ls -l data/pg_xlog/
total
-rw------- pgdba pgdba -- :
-rw------- pgdba pgdba -- :
-rw------- pgdba pgdba -- : 00000001000000080000005A
-rw------- pgdba pgdba -- : 00000001000000080000005B
-rw------- pgdba pgdba -- : 00000001000000080000005C
-rw------- pgdba pgdba -- : 00000001000000080000005D
-rw------- pgdba pgdba -- : 00000001000000080000005E
-rw------- pgdba pgdba -- : 00000001000000080000005F
drwx------ pgdba pgdba -- : archive_status/
This was on a system with no writes, and a REDO location of 8/584A62E0.
Since on start PostgreSQL will need to read from this location, all WAL segments before 000000010000000800000058 (i.e. 000000010000000800000057, 000000010000000800000056 and so on) are obsolete.
On the other hand – please note that we have seven files ready for future use.
PostgreSQL works this way: whenever a WAL segment becomes obsolete (i.e. the REDO location is later in WAL than this segment), the file is renamed. That's right – it's not removed, it's renamed. Renamed to what? To a future file in WAL. So when I do some writes, and then there is a checkpoint at some 8/59* location, file 000000010000000800000058 will get renamed to 000000010000000800000060.
This is one of the reasons why your checkpoint_segments shouldn't be too low.
Let's think for a while about what would happen if we had a very long checkpoint_timeout, and we filled all checkpoint_segments. To record a new write, PostgreSQL would do a checkpoint (which it will), but at the same time – it wouldn't have any ready segments left to use. So it would have to create a new file. A new file – 16MB of data (probably \x00) – has to be written to disk before PostgreSQL can write anything the user requested. Which means that if you ever exhaust checkpoint_segments, concurrent user activity will be slowed down, because PostgreSQL will have to create new files to accommodate the writes requested by users.
Usually it's not a problem – you just set checkpoint_segments to some relatively high number, and you're done.
Anyway. When looking at the pg_xlog/ directory, the current WAL segment (the one that gets the writes) is usually somewhere in the middle. This might cause some confusion, because the mtime of the files does not change in the same direction as the numbers in the filenames. Like here:
$ ls -l
total
-rw------- postgres postgres Jul : 000000010000002B0000002A
-rw------- postgres postgres Jul : 000000010000002B0000002B
-rw------- postgres postgres Jul : 000000010000002B0000002C
-rw------- postgres postgres Jul : 000000010000002B0000002D
-rw------- postgres postgres Jul : 000000010000002B0000002E
-rw------- postgres postgres Jul : 000000010000002B0000002F
-rw------- postgres postgres Jul : 000000010000002B00000030
-rw------- postgres postgres Jul : 000000010000002B00000031
-rw------- postgres postgres Jul : 000000010000002B00000032
-rw------- postgres postgres Jul : 000000010000002B00000033
-rw------- postgres postgres Jul : 000000010000002B00000034
-rw------- postgres postgres Jul : 000000010000002B00000035
-rw------- postgres postgres Jul : 000000010000002B00000036
-rw------- postgres postgres Jul : 000000010000002B00000037
-rw------- postgres postgres Jul : 000000010000002B00000038
-rw------- postgres postgres Jul : 000000010000002B00000039
-rw------- postgres postgres Jul : 000000010000002B0000003A
-rw------- postgres postgres Jul : 000000010000002B0000003B
-rw------- postgres postgres Jul : 000000010000002B0000003C
-rw------- postgres postgres Jul : 000000010000002B0000003D
-rw------- postgres postgres Jul : 000000010000002B0000003E
-rw------- postgres postgres Jul : 000000010000002B0000003F
-rw------- postgres postgres Jul : 000000010000002B00000040
-rw------- postgres postgres Jul : 000000010000002B00000041
-rw------- postgres postgres Jul : 000000010000002B00000042
-rw------- postgres postgres Jul : 000000010000002B00000043
-rw------- postgres postgres Jul : 000000010000002B00000044
-rw------- postgres postgres Jul : 000000010000002B00000045
-rw------- postgres postgres Jul : 000000010000002B00000046
-rw------- postgres postgres Jul : 000000010000002B00000047
-rw------- postgres postgres Jul : 000000010000002B00000048
-rw------- postgres postgres Jul : 000000010000002B00000049
-rw------- postgres postgres Jul : 000000010000002B0000004A
-rw------- postgres postgres Jul : 000000010000002B0000004B
-rw------- postgres postgres Jul : 000000010000002B0000004C
-rw------- postgres postgres Jul : 000000010000002B0000004D
-rw------- postgres postgres Jul : 000000010000002B0000004E
-rw------- postgres postgres Jul : 000000010000002B0000004F
-rw------- postgres postgres Jul : 000000010000002B00000050
-rw------- postgres postgres Jul : 000000010000002B00000051
-rw------- postgres postgres Jul : 000000010000002B00000052
-rw------- postgres postgres Jul : 000000010000002B00000053
-rw------- postgres postgres Jul : 000000010000002B00000054
drwx------ postgres postgres Jun : archive_status
Please note that the newest file – 000000010000002B00000033 – is neither the first nor the last. And the oldest file is quite close after the newest – 000000010000002B00000036.
This is all natural. All files before the current one are the ones that are still needed, and their mtimes go in the same direction as the WAL segment numbering.
The last file (based on filenames) – *54 – has an mtime just before *2A – which tells us that it previously was *29, but got renamed when the REDO location moved somewhere into file *2A.
I hope it's clear from the above explanation; if not – please state your questions/concerns in the comments.
So, to wrap it up. WAL exists to save your bacon in case of emergency. Thanks to WAL it is very hard to lose data – I would even say impossible, but it's still possible in case your hardware misbehaves – like lying about actual disk writes.
WAL is stored in a number of files in the pg_xlog/ directory, and the files get reused, so the directory shouldn't grow. The number of these files is usually 2 * checkpoint_segments + 1.
Whoa? Why 2* checkpoint_segments?
The reason is very simple. Let's assume you have checkpoint_segments set to 5. You filled them all, and a checkpoint is called. The checkpoint is called in WAL segment #x. In #x+5 we will have another checkpoint. But PostgreSQL always keeps (at least) checkpoint_segments segments ahead of the current location, to avoid the need to create new segments for data from user queries. So, at any given moment, you might have:
- current segment
- checkpoint_segments segments, since REDO location
- checkpoint_segments “buffer" in front of current location
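The bookkeeping above can be written out as a trivial sum (an approximation of the steady state, as the text notes – temporary spikes can exceed it):

```python
# Usual steady-state number of WAL segment files in pg_xlog/:
# 2 * checkpoint_segments + 1, decomposed as in the list above.
def expected_wal_files(checkpoint_segments):
    current = 1                           # the segment being written
    since_redo = checkpoint_segments      # segments kept since REDO location
    ready_ahead = checkpoint_segments     # recycled segments ahead of us
    return current + since_redo + ready_ahead

print(expected_wal_files(3))   # 7, with the default checkpoint_segments
```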
Sometimes you have more writes than checkpoint_segments allows for, in which case PostgreSQL will create new segments (as I described above). This will inflate the number of files in pg_xlog/. But it will get back to normal after some time – some obsolete segments will simply not get renamed, but removed instead.
Finally, the last thing: the GUC checkpoint_warning. It is also (like checkpoint_timeout) an interval, usually much shorter – 30 seconds by default. It is used to log (to the normal log, not WAL) information when automated checkpoints happen too often.
Since checkpoint_timeout is supposed to be larger than checkpoint_warning, this usually means it alerts when you fill more than checkpoint_segments worth of log within the checkpoint_timeout window.
Such information looks like this:
-- ::22.160 CEST @ LOG: checkpoint starting: xlog
-- ::26.175 CEST @ LOG: checkpoint complete: wrote buffers (40.7%); transaction log file(s) added, removed, recycled; write=3.225 s, sync=0.720 s, total=4.014 s; sync files=, longest=0.292 s, average=0.144 s
-- ::34.904 CEST @ LOG: checkpoints are occurring too frequently ( seconds apart)
-- ::34.904 CEST @ HINT: Consider increasing the configuration parameter "checkpoint_segments".
-- ::34.904 CEST @ LOG: checkpoint starting: xlog
-- ::39.239 CEST @ LOG: checkpoint complete: wrote buffers (41.2%); transaction log file(s) added, removed, recycled; write=3.425 s, sync=0.839 s, total=4.334 s; sync files=, longest=0.267 s, average=0.167 s
-- ::48.077 CEST @ LOG: checkpoints are occurring too frequently ( seconds apart)
-- ::48.077 CEST @ HINT: Consider increasing the configuration parameter "checkpoint_segments".
-- ::48.077 CEST @ LOG: checkpoint starting: xlog
Please note the “HINT" lines.
These are hints only (that is, not warnings or fatals) because a too-low checkpoint_segments doesn't cause any risk to your data – it just might slow down interaction with clients, if a user sends a modification query that has to wait for a new WAL segment to be created (i.e. 16MB written to disk).
As a last note – if you have some kind of monitoring of your PostgreSQL (like cacti, ganglia, munin, or something commercial like circonus), you might want to add a graph showing your WAL progress over time.
To do it, you'd need to convert the current xlog location to some normal decimal number, and then draw differences. For example like this:
=$ psql -qAtX -c "select pg_current_xlog_location()" | \
awk -F/ 'BEGIN{print "ibase=16"} {printf "%s%08s\n", $1, $2}' | \
bc
108018127223360
Or, if the numbers get too big, just decimal “number" of the file:
=$ (
echo "ibase=16"
psql -qAtX -c "select pg_xlogfile_name(pg_current_xlog_location())" | \
cut -b 9-16,23-24
) | bc
6438394
Drawing increments of the 2nd value (6438394) in 5-minute intervals will tell you what the optimal checkpoint_segments is (although always remember to make it a bit larger than actually needed, just in case of a sudden spike in traffic).
Reference:
http://www.depesz.com/2011/07/14/write-ahead-log-understanding-postgresql-conf-checkpoint_segments-checkpoint_timeout-checkpoint_warning/
Notes:
1. "Information of all changes gets logged to WAL, but this is done by simple write() (without call to fsync, yet)" – so modified content is written to the log even before COMMIT; on COMMIT, fsync() is then called on the log. The difference between these two writes: write() only hands the data to the OS page cache, while fsync() forces it out to durable storage.
2. A checkpoint writes all dirty pages to disk, regardless of whether the changes have been committed.