Installing and Configuring Redis on Ubuntu 16.04

May 22, 2018, 10:40:35 · Hello_刘 · 29,146 views
 

1) Installation

1: Download the redis-4.0.9.tar.gz archive from the terminal

wget http://download.redis.io/releases/redis-4.0.9.tar.gz

2: Extract it

tar xzf redis-4.0.9.tar.gz

3: Move it into the /usr/local directory

sudo mv ./redis-4.0.9 /usr/local/redis/

4: Enter the redis directory

cd /usr/local/redis/

5: Compile

sudo make

6: Run the test suite (this step takes a while)

sudo make test

7: Install; this places the redis commands in the /usr/local/bin/ directory

sudo make install

8: After installation, list the /usr/local/bin directory to check the result

cd /usr/local/bin
ls -all

  • redis-server — the Redis server
  • redis-cli — the Redis command-line client
  • redis-benchmark — Redis performance benchmarking tool
  • redis-check-aof — AOF file repair tool
  • redis-check-rdb — RDB file check tool

9: Copy the configuration file to the /etc/redis directory
The file ships at /usr/local/redis/redis.conf

Create the redis directory under /etc/, then copy the file into it:

sudo mkdir /etc/redis
sudo cp /usr/local/redis/redis.conf /etc/redis/

2) Configuration
1: Open the file
The Redis configuration now lives in /etc/redis/redis.conf

sudo vim /etc/redis/redis.conf

2: Core settings

Bind IP: to allow remote access, comment this line out or bind a real IP
bind 127.0.0.1

Port, 6379 by default
port 6379

Whether to run as a daemon

  • As a daemon, Redis does not block the command line; it behaves like a service
  • As a non-daemon process, it blocks the current terminal
  • yes means run as a daemon, no means run in the foreground
  • yes is recommended

daemonize yes

Data file
dbfilename dump.rdb

Directory where the data file is stored
dir /var/lib/redis

Log file
logfile "/var/log/redis/redis-server.log"

Databases, 16 by default
databases 16

Master-slave replication, similar to hot standby.

slaveof <masterip> <masterport>
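Collected in one place, the core settings above amount to the fragment below. This is only a sketch: the paths and the bind address are the ones used in this article and should be adapted to your own host.

```conf
# /etc/redis/redis.conf – core settings discussed above
bind 127.0.0.1
port 6379
daemonize yes
dbfilename dump.rdb
dir /var/lib/redis
logfile "/var/log/redis/redis-server.log"
databases 16
```

After editing, start the server against this file with `sudo redis-server /etc/redis/redis.conf`.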

Detailed configuration parameters (annotated redis.conf excerpt):

# Redis configuration file example

# Note on units: when memory size is needed, it is possible to specify
# it in the usual form of 1k 5GB 4M and so forth:
# (memory-size notation; the conversions are listed below)
#
# 1k => 1000 bytes
# 1kb => 1024 bytes
# 1m => 1000000 bytes
# 1mb => 1024*1024 bytes
# 1g => 1000000000 bytes
# 1gb => 1024*1024*1024 bytes
#
# units are case insensitive so 1GB 1Gb 1gB are all the same.
# (the unit suffixes are case-insensitive)
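The unit table above can be checked with plain shell arithmetic: suffixes without a trailing b are decimal, those with a b are binary.

```shell
# decimal vs. binary unit suffixes, as in the table above
echo $((1000))        # 1k
echo $((1024))        # 1kb
echo $((1000 * 1000)) # 1m
echo $((1024 * 1024)) # 1mb
```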
################################## INCLUDES ###################################

# Include one or more other config files here. This is useful if you
# have a standard template that goes to all Redis server but also need
# to customize a few per-server settings. Include files can include
# other files, so use this wisely.
#
# Notice option "include" won't be rewritten by command "CONFIG REWRITE"
# from admin or Redis Sentinel. Since Redis always uses the last processed
# line as value of a configuration directive, you'd better put includes
# at the beginning of this file to avoid overwriting config change at runtime.
#
# If instead you are interested in using includes to override configuration
# options, it is better to use include as the last line.
#
# include /path/to/local.conf
# include /path/to/other.conf
# When running several Redis instances, most of the configuration is usually
# identical and only a few settings differ per instance, so the shared part
# can live in a common template file.
# Each concrete instance then sets its few custom options and pulls the
# template in via include.
#
# Note that for any given directive Redis only honours the last line read,
# so to keep the template from overriding local changes, put include on the
# very first line; conversely, if the template should take precedence, put
# include on the last line.
################################ GENERAL #####################################

# By default Redis does not run as a daemon. Use 'yes' if you need it.
# Note that Redis will write a pid file in /var/run/redis.pid when daemonized.
# yes runs Redis as a daemon; its process ID is then written to the file named by pidfile
daemonize yes

# When running daemonized, Redis writes a pid file in /var/run/redis.pid by
# default. You can specify a custom pid file location here.
# When Redis starts as a daemon, its process ID is written to this file
pidfile /var/run/redis.pid

# Accept connections on the specified port, default is 6379.
# If port 0 is specified Redis will not listen on a TCP socket.
# The port Redis listens on (Redis is, of course, the server side)
port 6379

# TCP listen() backlog.
#
# In high requests-per-second environments you need an high backlog in order
# to avoid slow clients connections issues. Note that the Linux kernel
# will silently truncate it to the value of /proc/sys/net/core/somaxconn so
# make sure to raise both the value of somaxconn and tcp_max_syn_backlog
# in order to get the desired effect.
# Size of the pending-connection queue, i.e. connection requests not yet accepted
# (with a test value of 1, multiple connections could still be made)
# The value has the same meaning as the backlog argument of listen(); in the
# source it is passed straight to listen().
# It is also capped by /proc/sys/net/core/somaxconn and by tcp_max_syn_backlog
# (configured in /etc/sysctl.conf); tcp_max_syn_backlog limits half-open connections.
tcp-backlog 511

# By default Redis listens for connections from all the network interfaces
# available on the server. It is possible to listen to just one or multiple
# interfaces using the "bind" configuration directive, followed by one or
# more IP addresses.
# Bind IPs; only the listed addresses can be used to reach Redis
#
# Examples:
#
# bind 192.168.1.100 10.0.0.1
# bind 127.0.0.1

# Specify the path for the Unix socket that will be used to listen for
# incoming connections. There is no default, so Redis will not listen
# on a unix socket when not specified.
#
# A Unix-domain socket, created as a file
# unixsocket /tmp/redis.sock
# unixsocketperm 755

# Close the connection after a client is idle for N seconds (0 to disable)
# Idle-timeout: a connection with no activity for N seconds is closed;
# 0 disables the mechanism
timeout 0

# TCP keepalive.
#
# If non-zero, use SO_KEEPALIVE to send TCP ACKs to clients in absence
# of communication. This is useful for two reasons:
#
# 1) Detect dead peers.
# 2) Take the connection alive from the point of view of network
# equipment in the middle.
#
# On Linux, the specified value (in seconds) is the period used to send ACKs.
# Note that to close the connection the double of the time is needed.
# On other kernels the period depends on the kernel configuration.
# Works like a heartbeat: detects dead links and keeps healthy connections alive
# 60 is the recommended value
#
# A reasonable value for this option is 60 seconds.
tcp-keepalive 0

# Specify the server verbosity level.
# This can be one of:
# debug (a lot of information, useful for development/testing)
# verbose (many rarely useful info, but not a mess like the debug level)
# notice (moderately verbose, what you want in production probably)
# warning (only very important / critical messages are logged)
# Log level
loglevel notice

# Specify the log file name. Also the empty string can be used to force
# Redis to log on the standard output. Note that if you use standard
# output for logging but daemonize, logs will be sent to /dev/null
# Log file path; the default (empty string) logs to standard output, but a
# daemonized Redis then sends its logs to /dev/null (the legendary Linux black hole)
logfile ""

# To enable logging to the system logger, just set 'syslog-enabled' to yes,
# and optionally update the other syslog parameters to suit your needs.
# yes additionally writes the log to the system logger
# syslog-enabled no

# Specify the syslog identity.
# When syslog-enabled is yes, tags syslog entries with the identity "redis"
# syslog-ident redis

# Specify the syslog facility. Must be USER or between LOCAL0-LOCAL7.
# Syslog facility to use
# syslog-facility local0

# Set the number of databases. The default database is DB 0, you can select
# a different one on a per-connection basis using SELECT <dbid> where
# dbid is a number between 0 and 'databases'-1
# Number of Redis databases: 16 by default (0-15); DB 0 is used unless another is selected
databases 16
################################ SNAPSHOTTING ################################
#
# Save the DB on disk:
#
# save <seconds> <changes>
#
# Will save the DB if both the given number of seconds and the given
# number of write operations against the DB occurred.
# Snapshotting writes the dataset to disk: if at least <changes> write
# operations hit the database within <seconds> seconds, the data is dumped once.
# Commenting out all save lines means the data is never written to disk;
# save "" removes all previously configured save points.
#
# In the example below the behaviour will be to save:
# after 900 sec (15 min) if at least 1 key changed
# after 300 sec (5 min) if at least 10 keys changed
# after 60 sec if at least 10000 keys changed
#
# Note: you can disable saving at all commenting all the "save" lines.
#
# It is also possible to remove all the previously configured save
# points by adding a save directive with a single empty string argument
# like in the following example:
#
# save ""

save 900 1
save 300 10
save 60 10000

# By default Redis will stop accepting writes if RDB snapshots are enabled
# (at least one save point) and the latest background save failed.
# This will make the user aware (in a hard way) that data is not persisting
# on disk properly, otherwise chances are that no one will notice and some
# disaster will happen.
#
# If the background saving process will start working again Redis will
# automatically allow writes again.
#
# However if you have setup your proper monitoring of the Redis server
# and persistence, you may want to disable this feature so that Redis will
# continue to work as usual even if there are problems with disk,
# permissions, and so forth.
# When a snapshot fails, Redis stops accepting writes so that the problem is
# noticed immediately; setting this to no lets writes continue even after a
# failed snapshot.
stop-writes-on-bgsave-error yes

# Compress string objects using LZF when dump .rdb databases?
# For default that's set to 'yes' as it's almost always a win.
# If you want to save some CPU in the saving child set it to 'no' but
# the dataset will likely be bigger if you have compressible values or keys.
# Whether to LZF-compress snapshots: compression costs some CPU, but without
# it the snapshot files get much larger
rdbcompression yes

# Since version 5 of RDB a CRC64 checksum is placed at the end of the file.
# This makes the format more resistant to corruption but there is a performance
# hit to pay (around 10%) when saving and loading RDB files, so you can disable it
# for maximum performances.
#
# RDB files created with checksum disabled have a checksum of zero that will
# tell the loading code to skip the check.
# Data integrity: a checksum is stored at the end of the snapshot to guarantee
# correctness; verification costs roughly 10% performance and is on by default
rdbchecksum yes

# The filename where to dump the DB
# Name of the snapshot file
dbfilename dump.rdb

# The working directory.
#
# The DB will be written inside this directory, with the filename specified
# above using the 'dbfilename' configuration directive.
#
# The Append Only File will also be created inside this directory.
#
# Note that you must specify a directory here, not a file name.
#
# Directory where snapshots are stored.
# In a test as root on Linux, the process created a dump.rdb in the current
# directory, but the snapshot actually landed under the root directory /, and
# on restart no data was restored from it; after copying /dump.rdb into the
# current directory and starting again, the data was restored, and subsequent
# snapshots were then written to the dump.rdb in the current directory.
#
# Note also that snapshotting is asynchronous: if data is modified after the
# last snapshot and Redis crashes before the next one, those changes are not
# in the dump.rdb snapshot and are lost.
# The remedy is the synchronous Append Only Mode (configured further below),
# which writes every operation to an append-only file stored in this same directory.
# An absolute path is strongly recommended!
#
dir ./
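As a concrete example of the directives above, a snapshot stanza using an absolute data directory (as the comments recommend) might look like the following. The values are illustrative, and /var/lib/redis must exist and be writable by the redis process.

```conf
save 900 1
save 300 10
save 60 10000
stop-writes-on-bgsave-error yes
rdbcompression yes
rdbchecksum yes
dbfilename dump.rdb
dir /var/lib/redis
```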
################################# REPLICATION #################################

# Master-Slave replication. Use slaveof to make a Redis instance a copy of
# another Redis server. Note that the configuration is local to the slave
# so for example it is possible to configure the slave to save the DB with a
# different interval, or to listen to another port, and so on.
#
# Master-slave replication, similar to hot standby.
# Specify the master's IP and port.
# slaveof <masterip> <masterport>

# If the master is password protected (using the "requirepass" configuration
# directive below) it is possible to tell the slave to authenticate before
# starting the replication synchronization process, otherwise the master will
# refuse the slave request.
#
# If the master requires a password, give it here
# (the password itself is set in the SECURITY section below)
# masterauth <master-password>

# When a slave loses its connection with the master, or when the replication
# is still in progress, the slave can act in two different ways:
#
# 1) if slave-serve-stale-data is set to 'yes' (the default) the slave will
# still reply to client requests, possibly with out of date data, or the
# data set may just be empty if this is the first synchronization.
#
# 2) if slave-serve-stale-data is set to 'no' the slave will reply with
# an error "SYNC with master in progress" to all the kind of commands
# but to INFO and SLAVEOF.
#
# When the slave loses its link with the master, i.e. synchronization breaks,
# it can behave in one of two ways:
# yes: keep answering client requests, possibly with stale data (out-of-date
#      values, or an empty data set on the first sync)
# no: answer every command except INFO and SLAVEOF with
#     "SYNC with master in progress"
slave-serve-stale-data yes

# You can configure a slave instance to accept writes or not. Writing against
# a slave instance may be useful to store some ephemeral data (because data
# written on a slave will be easily deleted after resync with the master) but
# may also cause problems if clients are writing to it because of a
# misconfiguration.
#
# Since Redis 2.6 by default slaves are read-only.
#
# Note: read only slaves are not designed to be exposed to untrusted clients
# on the internet. It's just a protection layer against misuse of the instance.
# Still a read only slave exports by default all the administrative commands
# such as CONFIG, DEBUG, and so forth. To a limited extent you can improve
# security of read only slaves using 'rename-command' to shadow all the
# administrative / dangerous commands.
# Read-only flag: slaves are read-only by default.
# Even if a slave is made writable, data written on it is deleted on the next
# resync with the master.
# [To be tested]: what happens if a master and a slave are each other's slave?
# [Three expected outcomes]: 1. an error 2. uncontrollable data on both sides 3. everything works
slave-read-only yes

# Slaves send PINGs to server in a predefined interval. It's possible to change
# this interval with the repl_ping_slave_period option. The default value is 10
# seconds.
#
# Interval at which slaves send heartbeat pings to the master, every 10 seconds by default
# repl-ping-slave-period 10

# The following option sets the replication timeout for:
#
# 1) Bulk transfer I/O during SYNC, from the point of view of slave.
# 2) Master timeout from the point of view of slaves (data, pings).
# 3) Slave timeout from the point of view of masters (REPLCONF ACK pings).
#
# It is important to make sure that this value is greater than the value
# specified for repl-ping-slave-period otherwise a timeout will be detected
# every time there is low traffic between the master and the slave.
#
# Replication timeout (bulk transfers, pings); must be larger than repl-ping-slave-period
# repl-timeout 60

# Disable TCP_NODELAY on the slave socket after SYNC?
#
# If you select "yes" Redis will use a smaller number of TCP packets and
# less bandwidth to send data to slaves. But this can add a delay for
# the data to appear on the slave side, up to 40 milliseconds with
# Linux kernels using a default configuration.
#
# If you select "no" the delay for data to appear on the slave side will
# be reduced but more bandwidth will be used for replication.
#
# By default we optimize for low latency, but in very high traffic conditions
# or when the master and slaves are many hops away, turning this to "yes" may
# be a good idea.
# Whether master-to-slave sync is delayed:
# yes: delayed up to about 40 ms (default Linux kernel config), using fewer
#      packets and less bandwidth
# no: lower latency, but more bandwidth used for replication
repl-disable-tcp-nodelay no

# Set the replication backlog size. The backlog is a buffer that accumulates
# slave data when slaves are disconnected for some time, so that when a slave
# wants to reconnect again, often a full resync is not needed, but a partial
# resync is enough, just passing the portion of data the slave missed while
# disconnected.
#
# The biggest the replication backlog, the longer the time the slave can be
# disconnected and later be able to perform a partial resynchronization.
#
# The backlog is only allocated once there is at least a slave connected.
#
# By default a reconnecting slave performs a full resync, although a partial
# resync is often enough; this option sizes the buffer used for partial resyncs.
# The larger the value, the longer a slave can stay disconnected and still
# resync partially.
# repl-backlog-size 1mb

# After a master has no longer connected slaves for some time, the backlog
# will be freed. The following option configures the amount of seconds that
# need to elapse, starting from the time the last slave disconnected, for
# the backlog buffer to be freed.
#
# A value of 0 means to never release the backlog.
#
# How long (in seconds) the master keeps the replication backlog after the
# last slave disconnects; 0 means it is never freed
# repl-backlog-ttl 3600

# The slave priority is an integer number published by Redis in the INFO output.
# It is used by Redis Sentinel in order to select a slave to promote into a
# master if the master is no longer working correctly.
#
# A slave with a low priority number is considered better for promotion, so
# for instance if there are three slaves with priority 10, 100, 25 Sentinel will
# pick the one with priority 10, that is the lowest.
#
# However a special priority of 0 marks the slave as not able to perform the
# role of master, so a slave with priority of 0 will never be selected by
# Redis Sentinel for promotion.
#
# By default the priority is 100.
# When the master crashes, one of the slaves is promoted; the smaller the
# number, the higher the priority. 0 means never promote this slave; the default is 100.
slave-priority 100

# It is possible for a master to stop accepting writes if there are less than
# N slaves connected, having a lag less or equal than M seconds.
#
# The N slaves need to be in "online" state.
#
# The lag in seconds, that must be <= the specified value, is calculated from
# the last ping received from the slave, that is usually sent every second.
#
# This option does not GUARANTEES that N replicas will accept the write, but
# will limit the window of exposure for lost writes in case not enough slaves
# are available, to the specified number of seconds.
#
# For example to require at least 3 slaves with a lag <= 10 seconds use:
#
# The master stops accepting writes when fewer than min-slaves-to-write slaves
# with a lag of at most min-slaves-max-lag seconds are connected.
# Setting either value to 0 disables the feature; the defaults are
# min-slaves-to-write 0 (disabled) and min-slaves-max-lag 10.
# min-slaves-to-write 3
# min-slaves-max-lag 10
#
# Setting one or the other to 0 disables the feature.
#
# By default min-slaves-to-write is set to 0 (feature disabled) and
# min-slaves-max-lag is set to 10.
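Putting the replication directives together, a minimal slave-side fragment could look like this. The master address is purely illustrative, and masterauth is only needed if the master sets requirepass.

```conf
slaveof 192.168.1.100 6379
# masterauth <master-password>
slave-read-only yes
slave-serve-stale-data yes
slave-priority 100
```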
################################## SECURITY ###################################

# Require clients to issue AUTH <PASSWORD> before processing any other
# commands. This might be useful in environments in which you do not trust
# others with access to the host running redis-server.
#
# This should stay commented out for backward compatibility and because most
# people do not need auth (e.g. they run their own servers).
#
# Warning: since Redis is pretty fast an outside user can try up to
# 150k passwords per second against a good box. This means that you should
# use a very strong password otherwise it will be very easy to break.
#
# Redis password; unset by default, i.e. no password.
# If you do set one, make it a high-complexity password: Redis is fast enough
# that an attacker can try up to 150k passwords per second against a good
# box, so a weak password falls quickly to brute force.
# Open question: why is there no per-host limit on attempts, e.g. ban a host
# from connecting for an hour after every 10 failures?
# requirepass foobared

# Command renaming.
#
# It is possible to change the name of dangerous commands in a shared
# environment. For instance the CONFIG command may be renamed into something
# hard to guess so that it will still be available for internal-use tools
# but not available for general clients.
#
# Command renaming: map a command to a different string identifier.
# Renaming a command to the empty string ("") disables it completely.
# Renamed commands can cause problems with the AOF (append-only file) and with slaves.
# Example:
#
# rename-command CONFIG b840fc02d524045429941cc15f59e41cb7be6c52
#
# It is also possible to completely kill a command by renaming it into
# an empty string:
#
# rename-command CONFIG ""
#
# Please note that changing the name of commands that are logged into the
# AOF file or transmitted to slaves may cause problems.
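For instance, to both password-protect the instance and hide the dangerous commands mentioned above, you could combine the two directives as below. The password and the renamed string are placeholders, not values from this article.

```conf
requirepass use-a-very-long-random-string-here
rename-command CONFIG b840fc02d524045429941cc15f59e41cb7be6c52
rename-command FLUSHALL ""
```

Renaming FLUSHALL to the empty string disables the command entirely, as the comments above explain.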
################################### LIMITS ####################################

# Set the max number of connected clients at the same time. By default
# this limit is set to 10000 clients, however if the Redis server is not
# able to configure the process file limit to allow for the specified limit
# the max number of allowed clients is set to the current file limit
# minus 32 (as Redis reserves a few file descriptors for internal uses).
#
# Once the limit is reached Redis will close all the new connections sending
# an error 'max number of clients reached'.
#
# Maximum number of simultaneous client connections, 10000 by default.
# Beyond the limit, new connections are closed with the reply
# "max number of clients reached".
# maxclients 10000

# Don't use more memory than the specified amount of bytes.
# When the memory limit is reached Redis will try to remove keys
# according to the eviction policy selected (see maxmemory-policy).
#
# If Redis can't remove keys according to the policy, or if the policy is
# set to 'noeviction', Redis will start to reply with errors to commands
# that would use more memory, like SET, LPUSH, and so on, and will continue
# to reply to read-only commands like GET.
#
# This option is usually useful when using Redis as an LRU cache, or to set
# a hard memory limit for an instance (using the 'noeviction' policy).
#
# WARNING: If you have slaves attached to an instance with maxmemory on,
# the size of the output buffers needed to feed the slaves are subtracted
# from the used memory count, so that network problems / resyncs will
# not trigger a loop where keys are evicted, and in turn the output
# buffer of slaves is full with DELs of keys evicted triggering the deletion
# of more keys, and so forth until the database is completely emptied.
#
# In short... if you have slaves attached it is suggested that you set a lower
# limit for maxmemory so that there is some free RAM on the system for slave
# output buffers (but this is not needed if the policy is 'noeviction').
#
# Maximum memory Redis may use; once the limit is reached, keys are evicted
# according to maxmemory-policy below.
# If no more keys can be evicted, or the policy is noeviction, any command
# that needs additional memory returns an error.
# Example: 1gb = 1024*1024*1024 bytes
maxmemory 1gb

# MAXMEMORY POLICY: how Redis will select what to remove when maxmemory
# is reached. You can select among five behaviors:
#
# volatile-lru -> remove the key with an expire set using an LRU algorithm
# allkeys-lru -> remove any key accordingly to the LRU algorithm
# volatile-random -> remove a random key with an expire set
# allkeys-random -> remove a random key, any key
# volatile-ttl -> remove the key with the nearest expire time (minor TTL)
# noeviction -> don't expire at all, just return an error on write operations
#
# Note: with any of the above policies, Redis will return an error on write
# operations, when there are not suitable keys for eviction.
#
# At the date of writing this commands are: set setnx setex append
# incr decr rpush lpush rpushx lpushx linsert lset rpoplpush sadd
# sinter sinterstore sunion sunionstore sdiff sdiffstore zadd zincrby
# zunionstore zinterstore hset hsetnx hmset hincrby incrby decrby
# getset mset msetnx exec sort
#
# The default is:
#
# Eviction policy; volatile-lru evicts keys that have an expire set, using the LRU algorithm
maxmemory-policy volatile-lru

# LRU and minimal TTL algorithms are not precise algorithms but approximated
# algorithms (in order to save memory), so you can select as well the sample
# size to check. For instance for default Redis will check three keys and
# pick the one that was used less recently, you can change the sample size
# using the following configuration directive.
#
# LRU and minimal-TTL are approximations, not exact algorithms.
# Accuracy can be tuned through the sample size: with 3 samples, Redis picks
# three keys and evicts the one least recently used.
# maxmemory-samples 3
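A typical LRU-cache style limits block, using the directives above, might read as follows; the 1gb ceiling is just an example value.

```conf
maxclients 10000
maxmemory 1gb
maxmemory-policy volatile-lru
maxmemory-samples 3
```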
  524.  
    ############################## APPEND ONLY MODE ###############################
  525.  
     
  526.  
    # By default Redis asynchronously dumps the dataset on disk. This mode is
  527.  
    # good enough in many applications, but an issue with the Redis process or
  528.  
    # a power outage may result into a few minutes of writes lost (depending on
  529.  
    # the configured save points).
  530.  
    #
  531.  
    # The Append Only File is an alternative persistence mode that provides
  532.  
    # much better durability. For instance using the default data fsync policy
  533.  
    # (see later in the config file) Redis can lose just one second of writes in a
  534.  
    # dramatic event like a server power outage, or a single write if something
  535.  
    # wrong with the Redis process itself happens, but the operating system is
  536.  
    # still running correctly.
  537.  
    #
  538.  
    # AOF and RDB persistence can be enabled at the same time without problems.
  539.  
    # If the AOF is enabled on startup Redis will load the AOF, that is the file
  540.  
    # with the better durability guarantees.
  541.  
    #
  542.  
    # Please check http://redis.io/topics/persistence for more information.
  543.  
    # 将对redis所有的操作都保存到AOF文件中
  544.  
    # 因为dump.rdb是异步的,在下次快照到达之前,如果出现crash等问题,会造成数据丢失
  545.  
    # 而AOF文件时同步记录的,所以会完整的恢复数据
  546.  
     
  547.  
    appendonly no
  548.  
     
  549.  
    # The name of the append only file (default: "appendonly.aof")
  550.  
    # AOF文件的名字
  551.  
     
  552.  
    appendfilename "appendonly.aof"
  553.  
     
  554.  
    # The fsync() call tells the Operating System to actually write data on disk
  555.  
    # instead to wait for more data in the output buffer. Some OS will really flush
  556.  
    # data on disk, some other OS will just try to do it ASAP.
  557.  
    #
  558.  
    # Redis supports three different modes:
  559.  
    #
  560.  
    # no: don't fsync, just let the OS flush the data when it wants. Faster.
  561.  
    # always: fsync after every write to the append only log . Slow, Safest.
  562.  
    # everysec: fsync only one time every second. Compromise.
  563.  
    #
  564.  
    # The default is "everysec", as that's usually the right compromise between
  565.  
    # speed and data safety. It's up to you to understand if you can relax this to
  566.  
    # "no" that will let the operating system flush the output buffer when
  567.  
    # it wants, for better performances (but if you can live with the idea of
  568.  
    # some data loss consider the default persistence mode that's snapshotting),
  569.  
    # or on the contrary, use "always" that's very slow but a bit safer than
  570.  
    # everysec.
  571.  
    #
  572.  
    # More details please check the following article:
  573.  
    # http://antirez.com/post/redis-persistence-demystified.html
  574.  
    #
  575.  
    # If unsure, use "everysec".
  576.  
    # redis的数据同步方式,三种
  577.  
    # no,redis本身不做同步,由OS来做。redis的速度会很快
  578.  
    # always,在每次写操作之后,redis都进行同步,即写入AOF文件。redis会变慢,但是数据更安全
  579.  
    # everysec,折衷考虑,每秒同步一次数据。【默认】
  580.  
     
  581.  
    # appendfsync always
  582.  
    appendfsync everysec
  583.  
    # appendfsync no
  584.  
     
  585.  
# When the AOF fsync policy is set to always or everysec, and a background
# saving process (a background save or AOF log background rewriting) is
# performing a lot of I/O against the disk, in some Linux configurations
# Redis may block too long on the fsync() call. Note that there is no fix for
# this currently, as even performing fsync in a different thread will block
# our synchronous write(2) call.
#
# In order to mitigate this problem it's possible to use the following option
# that will prevent fsync() from being called in the main process while a
# BGSAVE or BGREWRITEAOF is in progress.
#
# This means that while another child is saving, the durability of Redis is
# the same as "appendfsync none". In practical terms, this means that it is
# possible to lose up to 30 seconds of log in the worst scenario (with the
# default Linux settings).
#
# If you have latency problems turn this to "yes". Otherwise leave it as
# "no" that is the safest pick from the point of view of durability.
# Under the always and everysec policies, snapshotting and AOF writes can
# generate heavy disk I/O, and on some Linux configurations Redis can block
# for a long time on fsync(); Redis itself has no good fix for this.
# To mitigate the problem, Redis provides the no-appendfsync-on-rewrite option:
# while another process is performing a save, Redis falls back to the "no"
# sync policy. In the worst case this means up to 30 seconds of sync delay.
# If you have latency problems, set this option to yes; otherwise keep the
# default no (the safest choice for durability).

no-appendfsync-on-rewrite no
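The effect of the option can be modeled as a substitution of the fsync policy while a child is saving (a hypothetical sketch, not Redis source code):

```python
def effective_policy(configured: str, rewrite_in_progress: bool,
                     no_appendfsync_on_rewrite: bool) -> str:
    """Which fsync policy is effectively in force at a given moment."""
    # With the option enabled, fsync is suppressed while a BGSAVE or
    # BGREWRITEAOF child is running, so durability degrades to "no".
    if rewrite_in_progress and no_appendfsync_on_rewrite:
        return "no"
    return configured
```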
# Automatic rewrite of the append only file.
# Redis is able to automatically rewrite the log file implicitly calling
# BGREWRITEAOF when the AOF log size grows by the specified percentage.
#
# This is how it works: Redis remembers the size of the AOF file after the
# latest rewrite (if no rewrite has happened since the restart, the size of
# the AOF at startup is used).
#
# This base size is compared to the current size. If the current size is
# bigger than the specified percentage, the rewrite is triggered. Also
# you need to specify a minimal size for the AOF file to be rewritten, this
# is useful to avoid rewriting the AOF file even if the percentage increase
# is reached but it is still pretty small.
#
# Specify a percentage of zero in order to disable the automatic AOF
# rewrite feature.
# Automatic rewrite of the AOF file:
# when the AOF log grows by the given percentage, Redis implicitly calls
# BGREWRITEAOF to rewrite it.
# Redis remembers the AOF size after the last rewrite; if the current file
# has grown by auto-aof-rewrite-percentage, a rewrite is triggered.
# A file smaller than auto-aof-rewrite-min-size never triggers a rewrite,
# and setting auto-aof-rewrite-percentage to 0 disables the feature.

auto-aof-rewrite-percentage 100
auto-aof-rewrite-min-size 64mb
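The trigger rule above can be sketched as a small predicate (a hypothetical helper mirroring the documented behavior, not the actual Redis implementation):

```python
def should_rewrite_aof(current_size: int, base_size: int,
                       percentage: int = 100,
                       min_size: int = 64 * 1024 * 1024) -> bool:
    """True when the documented auto-rewrite condition is met."""
    if percentage == 0:           # 0 disables automatic rewriting
        return False
    if current_size < min_size:   # too small to be worth rewriting
        return False
    growth = (current_size - base_size) * 100 // max(base_size, 1)
    return growth >= percentage
```

With the defaults above, a 64 MB AOF that doubles to 128 MB (100% growth) triggers a rewrite, while a 32 MB file never does regardless of growth.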
################################ LUA SCRIPTING ###############################

# Max execution time of a Lua script in milliseconds.
#
# If the maximum execution time is reached Redis will log that a script is
# still in execution after the maximum allowed time and will start to
# reply to queries with an error.
#
# When a long running script exceed the maximum execution time only the
# SCRIPT KILL and SHUTDOWN NOSAVE commands are available. The first can be
# used to stop a script that did not yet called write commands. The second
# is the only way to shut down the server in the case a write commands was
# already issue by the script but the user don't want to wait for the natural
# termination of the script.
#
# Set it to 0 or a negative value for unlimited execution without warnings.
# Maximum execution time of a Lua script, in milliseconds (default 5000, i.e. 5 seconds).
# If a script runs past this limit, only SCRIPT KILL and SHUTDOWN NOSAVE are available:
# SCRIPT KILL stops a script that has not yet performed any writes;
# SHUTDOWN NOSAVE shuts the server down, preventing the script's writes from persisting.
# A value of 0 or a negative number means no time limit.
lua-time-limit 5000
################################## SLOW LOG ###################################

# The Redis Slow Log is a system to log queries that exceeded a specified
# execution time. The execution time does not include the I/O operations
# like talking with the client, sending the reply and so forth,
# but just the time needed to actually execute the command (this is the only
# stage of command execution where the thread is blocked and can not serve
# other requests in the meantime).
#
# You can configure the slow log with two parameters: one tells Redis
# what is the execution time, in microseconds, to exceed in order for the
# command to get logged, and the other parameter is the length of the
# slow log. When a new command is logged the oldest one is removed from the
# queue of logged commands.
# Records commands that are slow to execute.
# "Slow" refers only to a command's execution time, excluding client
# connection and reply time.
# slowlog-log-slower-than sets the threshold in microseconds (1000000 = 1 second);
# 0 logs every command, and a negative value disables logging.
# slowlog-max-len is the number of slow commands kept; past the limit,
# the oldest entries are removed.
# The log length is unbounded in principle but consumes memory,
# which can be reclaimed with SLOWLOG RESET.

# The following time is expressed in microseconds, so 1000000 is equivalent
# to one second. Note that a negative number disables the slow log, while
# a value of zero forces the logging of every command.
slowlog-log-slower-than 10000

# There is no limit to this length. Just be aware that it will consume memory.
# You can reclaim memory used by the slow log with SLOWLOG RESET.
slowlog-max-len 128
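The two parameters describe a bounded queue filtered by a time threshold. A toy model (illustration only, not how Redis stores its slow log internally):

```python
from collections import deque

class SlowLog:
    """Toy slow log: keeps the last max_len commands that exceeded
    the threshold (in microseconds)."""

    def __init__(self, slower_than_us: int = 10000, max_len: int = 128):
        self.slower_than_us = slower_than_us
        self.entries = deque(maxlen=max_len)   # oldest dropped first

    def record(self, command: str, exec_time_us: int) -> None:
        if self.slower_than_us < 0:            # negative disables logging
            return
        if exec_time_us >= self.slower_than_us:  # 0 logs every command
            self.entries.append((command, exec_time_us))
```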
################################ LATENCY MONITOR ##############################

# The Redis latency monitoring subsystem samples different operations
# at runtime in order to collect data related to possible sources of
# latency of a Redis instance.
#
# Via the LATENCY command this information is available to the user that can
# print graphs and obtain reports.
#
# The system only logs operations that were performed in a time equal or
# greater than the amount of milliseconds specified via the
# latency-monitor-threshold configuration directive. When its value is set
# to zero, the latency monitor is turned off.
#
# By default latency monitoring is disabled since it is mostly not needed
# if you don't have latency issues, and collecting data has a performance
# impact, that while very small, can be measured under big load. Latency
# monitoring can easily be enabled at runtime using the command
# "CONFIG SET latency-monitor-threshold <milliseconds>" if needed.
# Latency monitor:
# at runtime, the latency monitoring subsystem samples operations that may
# be sources of latency.
# The LATENCY command prints related information and reports (from the source comments):
# LATENCY SAMPLES: return time-latency samples for the specified event.
# LATENCY LATEST: return the latest latency for all the events classes.
# LATENCY DOCTOR: returns an human readable analysis of instance latency.
# LATENCY GRAPH: provide an ASCII graph of the latency of the specified event.
#
# Only operations exceeding the configured value (in milliseconds) are logged;
# 0 disables the feature.
# The value can be changed without restarting Redis via
# "CONFIG SET latency-monitor-threshold <milliseconds>".

latency-monitor-threshold 0
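The sampling rule is a simple threshold filter with 0 as the off switch (hypothetical sketch of the documented behavior):

```python
def should_sample(event_ms: int, threshold_ms: int) -> bool:
    """True when the latency monitor would record this event."""
    if threshold_ms == 0:          # 0 turns the latency monitor off
        return False
    return event_ms >= threshold_ms  # "equal or greater" per the docs
```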
############################# Event notification ##############################

# Redis can notify Pub/Sub clients about events happening in the key space.
# This feature is documented at http://redis.io/topics/keyspace-events
#
# For instance if keyspace events notification is enabled, and a client
# performs a DEL operation on key "foo" stored in the Database 0, two
# messages will be published via Pub/Sub:
#
# PUBLISH __keyspace@0__:foo del
# PUBLISH __keyevent@0__:del foo
#
# It is possible to select the events that Redis will notify among a set
# of classes. Every class is identified by a single character:
#
# K Keyspace events, published with __keyspace@<db>__ prefix.
# E Keyevent events, published with __keyevent@<db>__ prefix.
# g Generic commands (non-type specific) like DEL, EXPIRE, RENAME, ...
# $ String commands
# l List commands
# s Set commands
# h Hash commands
# z Sorted set commands
# x Expired events (events generated every time a key expires)
# e Evicted events (events generated when a key is evicted for maxmemory)
# A Alias for g$lshzxe, so that the "AKE" string means all the events.
#
# The "notify-keyspace-events" takes as argument a string that is composed
# by zero or multiple characters. The empty string means that notifications
# are disabled at all.
#
# Example: to enable list and generic events, from the point of view of the
# event name, use:
#
# notify-keyspace-events Elg
#
# Example 2: to get the stream of the expired keys subscribing to channel
# name __keyevent@0__:expired use:
#
# notify-keyspace-events Ex
#
# By default all notifications are disabled because most users don't need
# this feature and the feature has some overhead. Note that if you don't
# specify at least one of K or E, no events will be delivered.
# Event notification: when an event occurs, Redis can notify Pub/Sub clients.
# An empty string disables event notification.
# Note: at least one of K and E must be specified, or no events are delivered.
notify-keyspace-events ""
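The flag string is just a set of single-character classes, with "A" as an alias and the K/E rule acting as a gate. A hypothetical parser following the table above:

```python
CLASSES = set("KEg$lshzxeA")

def parse_event_flags(flags: str) -> set:
    """Expand a notify-keyspace-events flag string into event classes."""
    events = set()
    for ch in flags:
        if ch not in CLASSES:
            raise ValueError(f"unknown event class: {ch}")
        # "A" is an alias for all the data-type classes g$lshzxe
        events.update("g$lshzxe" if ch == "A" else ch)
    # Without at least one of K or E, no events are delivered at all.
    if not events & {"K", "E"}:
        return set()
    return events
```

For example, "Elg" yields keyevent notifications for list and generic commands, while "g$" alone yields nothing because neither K nor E was given.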
############################### ADVANCED CONFIG ###############################

# Hashes are encoded using a memory efficient data structure when they have a
# small number of entries, and the biggest entry does not exceed a given
# threshold. These thresholds can be configured using the following directives.
# When a hash has few entries and its largest element stays under the given
# threshold, it is stored in a memory-efficient structure:
# the ziplist (a compressed doubly linked list). See:
# http://blog.csdn.net/benbendy1984/article/details/7796956
hash-max-ziplist-entries 512
hash-max-ziplist-value 64
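The encoding decision depends on both limits at once: entry count and the longest field or value. A hypothetical predicate illustrating the rule (Redis makes this decision internally in C):

```python
def hash_uses_ziplist(fields: dict, max_entries: int = 512,
                      max_value_len: int = 64) -> bool:
    """True while a hash would stay in the compact ziplist encoding."""
    if len(fields) > max_entries:
        return False
    # The longest field name or value must also stay under the limit.
    longest = max((len(s) for kv in fields.items() for s in kv), default=0)
    return longest <= max_value_len
```

Once either limit is crossed, the hash is converted to the ordinary hash-table encoding and never converted back.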
# Similarly to hashes, small lists are also encoded in a special way in order
# to save a lot of space. The special representation is only used when
# you are under the following limits:
# Lists are configured the same way as hashes.
list-max-ziplist-entries 512
list-max-ziplist-value 64

# Sets have a special encoding in just one case: when a set is composed
# of just strings that happens to be integers in radix 10 in the range
# of 64 bit signed integers.
# The following configuration setting sets the limit in the size of the
# set in order to use this special memory saving encoding.
# If a set consists entirely of base-10 integer strings that fit in a signed
# 64-bit range, it uses a special encoding. The element-count limit for that
# encoding is configured here:
set-max-intset-entries 512

# Similarly to hashes and lists, sorted sets are also specially encoded in
# order to save a lot of space. This encoding is only used when the length and
# elements of a sorted set are below the following limits:
# Sorted sets follow the same scheme as hashes and lists.
zset-max-ziplist-entries 128
zset-max-ziplist-value 64

# HyperLogLog sparse representation bytes limit. The limit includes the
# 16 bytes header. When an HyperLogLog using the sparse representation crosses
# this limit, it is converted into the dense representation.
#
# A value greater than 16000 is totally useless, since at that point the
# dense representation is more memory efficient.
#
# The suggested value is ~ 3000 in order to have the benefits of
# the space efficient encoding without slowing down too much PFADD,
# which is O(N) with the sparse encoding. The value can be raised to
# ~ 10000 when CPU is not a concern, but space is, and the data set is
# composed of many HyperLogLogs with cardinality in the 0 - 15000 range.
# HyperLogLog introduction: http://www.redis.io/topics/data-types-intro#hyperloglogs
# Byte limit for the HyperLogLog sparse representation. Above roughly 16000
# there is no point keeping it sparse, since the dense representation uses
# memory more efficiently at that size.
# The suggested value is 3000.
hll-sparse-max-bytes 3000
# Active rehashing uses 1 millisecond every 100 milliseconds of CPU time in
# order to help rehashing the main Redis hash table (the one mapping top-level
# keys to values). The hash table implementation Redis uses (see dict.c)
# performs a lazy rehashing: the more operation you run into a hash table
# that is rehashing, the more rehashing "steps" are performed, so if the
# server is idle the rehashing is never complete and some more memory is used
# by the hash table.
#
# The default is to use this millisecond 10 times every second in order to
# active rehashing the main dictionaries, freeing memory when possible.
#
# If unsure:
# use "activerehashing no" if you have hard latency requirements and it is
# not a good thing in your environment that Redis can reply from time to time
# to queries with 2 milliseconds delay.
#
# use "activerehashing yes" if you don't have such hard requirements but
# want to free memory asap when possible.
# Every 100 milliseconds, Redis spends 1 millisecond rehashing the main hash table.
# Rehashing is lazy: the more you operate on a hash table being rehashed,
# the more rehash steps are performed; a table that is never touched is never rehashed.
# By default the main dictionaries are actively rehashed 10 times per second,
# freeing memory where possible.
# Rehashing adds latency; with strict latency requirements, set this to no
# to disable active rehashing, at the cost of potentially wasting memory.
activerehashing yes
# The client output buffer limits can be used to force disconnection of clients
# that are not reading data from the server fast enough for some reason (a
# common reason is that a Pub/Sub client can't consume messages as fast as the
# publisher can produce them).
#
# The limit can be set differently for the three different classes of clients:
#
# normal -> normal clients including MONITOR clients
# slave -> slave clients
# pubsub -> clients subscribed to at least one pubsub channel or pattern
#
# The syntax of every client-output-buffer-limit directive is the following:
#
# Client output buffer limits: a client that does not read from the server
# fast enough is forcibly disconnected.
# There are three client classes (normal, slave, pubsub), with this syntax:
# client-output-buffer-limit <class> <hard limit> <soft limit> <soft seconds>
#
# A client is immediately disconnected once the hard limit is reached, or if
# the soft limit is reached and remains reached for the specified number of
# seconds (continuously).
# So for instance if the hard limit is 32 megabytes and the soft limit is
# 16 megabytes / 10 seconds, the client will get disconnected immediately
# if the size of the output buffers reach 32 megabytes, but will also get
# disconnected if the client reaches 16 megabytes and continuously overcomes
# the limit for 10 seconds.
#
# By default normal clients are not limited because they don't receive data
# without asking (in a push way), but just after a request, so only
# asynchronous clients may create a scenario where data is requested faster
# than it can read.
#
# Instead there is a default limit for pubsub and slave clients, since
# subscribers and slaves receive data in a push fashion.
#
# Both the hard or the soft limit can be disabled by setting them to zero.
# A client is disconnected immediately when the hard limit is reached, or when
# the soft limit has been exceeded continuously for the given number of seconds.
# A limit of 0 disables the check; normal clients are unlimited by default.
client-output-buffer-limit normal 0 0 0
client-output-buffer-limit slave 256mb 64mb 60
client-output-buffer-limit pubsub 32mb 8mb 60
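The hard/soft rule can be captured in a small model (a hypothetical illustration of the documented semantics, not Redis code):

```python
def should_disconnect(buf_bytes: int, over_soft_for_s: float,
                      hard: int, soft: int, soft_seconds: int) -> bool:
    """True when the output-buffer limits call for disconnecting a client."""
    if hard and buf_bytes >= hard:
        return True                  # hard limit: immediate disconnect
    if soft and buf_bytes >= soft and over_soft_for_s >= soft_seconds:
        return True                  # soft limit held continuously too long
    return False                     # limits of 0 are disabled
```

With the slave defaults above (256mb hard, 64mb soft, 60 seconds), a replica client is dropped at once past 256 MB, or after sitting above 64 MB for a full minute.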
# Redis calls an internal function to perform many background tasks, like
# closing connections of clients in timeout, purging expired keys that are
# never requested, and so forth.
#
# Not all tasks are performed with the same frequency, but Redis checks for
# tasks to perform accordingly to the specified "hz" value.
#
# By default "hz" is set to 10. Raising the value will use more CPU when
# Redis is idle, but at the same time will make Redis more responsive when
# there are many keys expiring at the same time, and timeouts may be
# handled with more precision.
#
# The range is between 1 and 500, however a value over 100 is usually not
# a good idea. Most users should use the default of 10 and raise this up to
# 100 only in environments where very low latency is required.
# How often Redis calls its internal function to run background tasks,
# such as closing timed-out client connections and purging expired keys.
# Default 10, range 1 to 500.
# Raise it towards 100 only in environments that require very low latency.
hz 10

# When a child rewrites the AOF file, if the following option is enabled
# the file will be fsync-ed every 32 MB of data generated. This is useful
# in order to commit the file to the disk more incrementally and avoid
# big latency spikes.
# When a child process rewrites the AOF file and this option is yes,
# the file is fsynced for every 32 MB of data generated, committing it to
# disk incrementally and avoiding large latency spikes.
aof-rewrite-incremental-fsync yes
