PostgreSQL High Availability, Part 1: Managing three PostgreSQL nodes with Patroni
Environment: CentOS Linux release 7.6.1810 (Core), kernel 3.10.0-957.10.1.el7.x86_64
node1:192.168.216.130
node2:192.168.216.132
node3:192.168.216.134
PostgreSQL kernel parameter tuning guide: https://github.com/digoal/blog/blob/master/201611/20161121_01.md?spm=a2c4e.10696291.0.0.660a19a4sIk1Ok&file=20161121_01.md
1. Install PostgreSQL
yum install https://download.postgresql.org/pub/repos/yum/11/redhat/rhel-7-x86_64/pgdg-centos11-11-2.noarch.rpm
yum install postgresql11
yum install postgresql11-server
yum install postgresql11-libs
yum install postgresql11-contrib
yum install postgresql11-devel
Reference: https://www.jianshu.com/p/b4a759c2208f
After installation, you can check which packages were installed with rpm -qa|grep postgres:
postgresql11-libs-11.5-1PGDG.rhel7.x86_64
postgresql10-libs-10.10-1PGDG.rhel7.x86_64
postgresql11-11.5-1PGDG.rhel7.x86_64
postgresql11-contrib-11.5-1PGDG.rhel7.x86_64
postgresql11-server-11.5-1PGDG.rhel7.x86_64
postgresql11-devel-11.5-1PGDG.rhel7.x86_64
There is no need to run initdb after installation; Patroni will initialize the cluster for you. If the database has already been initialized and you do not want Patroni to initialize it, point the following parameters in the Patroni configuration file at the existing data directory and installation directory:
data_dir: /var/lib/pgsql/11/data
bin_dir: /usr/pgsql-11/bin
config_dir: /var/lib/pgsql/11/data
stats_temp_directory: /var/lib/pgsql_stats_tmp
chown -Rf postgres:postgres /var/lib/pgsql/11/data
chmod -Rf 700 /var/lib/pgsql/11/data
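If the statistics temp directory does not exist yet (it is not created by the PostgreSQL packages, so this is an assumption about your layout), create it before changing its ownership:
mkdir -p /var/lib/pgsql_stats_tmp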
chown -Rf postgres:postgres /var/lib/pgsql_stats_tmp
chmod -Rf 700 /var/lib/pgsql_stats_tmp
2. Install Patroni. It is recommended to switch the pip index to a mirror inside China first, otherwise the installation may run into a lot of timeouts; a sample mirror configuration is shown below.
Reference: https://www.cnblogs.com/caidingyu/p/11566690.html
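For example, a minimal pip mirror configuration can be written to /root/.pip/pip.conf before running the commands below (the Aliyun mirror is just one common choice; any domestic PyPI mirror works):
mkdir -p /root/.pip
cat > /root/.pip/pip.conf <<EOF
[global]
index-url = https://mirrors.aliyun.com/pypi/simple/
[install]
trusted-host = mirrors.aliyun.com
EOF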
yum install gcc
yum install python-devel.x86_64
curl https://bootstrap.pypa.io/get-pip.py -o get-pip.py
python get-pip.py
pip install psycopg2-binary
pip install patroni[etcd,consul]
3. Install the etcd service
Reference: https://www.cnblogs.com/caidingyu/p/11408389.html
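The reference above covers the full etcd installation. Purely as a sketch of what the node1 member configuration might look like (the member names etcd1/etcd2/etcd3 and the data directory are assumptions; adjust to your own setup, and change ETCD_NAME and the local IP on node2 and node3 accordingly):
# /etc/etcd/etcd.conf on node1 (192.168.216.130); member names are illustrative
ETCD_NAME="etcd1"
ETCD_DATA_DIR="/var/lib/etcd/default.etcd"
ETCD_LISTEN_PEER_URLS="http://192.168.216.130:2380"
ETCD_LISTEN_CLIENT_URLS="http://192.168.216.130:2379,http://127.0.0.1:2379"
ETCD_INITIAL_ADVERTISE_PEER_URLS="http://192.168.216.130:2380"
ETCD_ADVERTISE_CLIENT_URLS="http://192.168.216.130:2379"
ETCD_INITIAL_CLUSTER="etcd1=http://192.168.216.130:2380,etcd2=http://192.168.216.132:2380,etcd3=http://192.168.216.134:2380"
ETCD_INITIAL_CLUSTER_TOKEN="etcd-cluster"
ETCD_INITIAL_CLUSTER_STATE="new"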
4. Create the Patroni configuration files
node1: the Patroni configuration file is as follows
[root@localhost tmp]# cat /etc/patroni/patroni.yml
scope: postgres-cluster
name: pgnode01
namespace: /service/

restapi:
  listen: 192.168.216.130:8008
  connect_address: 192.168.216.130:8008
#  certfile: /etc/ssl/certs/ssl-cert-snakeoil.pem
#  keyfile: /etc/ssl/private/ssl-cert-snakeoil.key
#  authentication:
#    username: username
#    password: password

etcd:
  hosts: 192.168.216.130:2379,192.168.216.132:2379,192.168.216.134:2379

bootstrap:
  # this section will be written into Etcd:/<namespace>/<scope>/config after initializing new cluster
  # and all other cluster members will use it as a `global configuration`
  dcs:
    ttl: 30
    loop_wait: 10
    retry_timeout: 10
    maximum_lag_on_failover: 1048576
    master_start_timeout: 300
    synchronous_mode: false
    synchronous_mode_strict: false
#    standby_cluster:
#      host: 127.0.0.1
#      port: 1111
#      primary_slot_name: patroni
    postgresql:
      use_pg_rewind: true
      use_slots: true
      parameters:
        max_connections: 100
        superuser_reserved_connections: 5
        max_locks_per_transaction: 64
        max_prepared_transactions: 0
        huge_pages: try
        shared_buffers: 512MB
        work_mem: 128MB
        maintenance_work_mem: 256MB
        effective_cache_size: 4GB
        checkpoint_timeout: 15min
        checkpoint_completion_target: 0.9
        min_wal_size: 2GB
        max_wal_size: 4GB
        wal_buffers: 32MB
        default_statistics_target: 1000
        seq_page_cost: 1
        random_page_cost: 4
        effective_io_concurrency: 2
        synchronous_commit: on
        autovacuum: on
        autovacuum_max_workers: 5
        autovacuum_vacuum_scale_factor: 0.01
        autovacuum_analyze_scale_factor: 0.02
        autovacuum_vacuum_cost_limit: 200
        autovacuum_vacuum_cost_delay: 20
        autovacuum_naptime: 1s
        max_files_per_process: 4096
        archive_mode: on
        archive_timeout: 1800s
        archive_command: cd .
        wal_level: replica
        wal_keep_segments: 130
        max_wal_senders: 10
        max_replication_slots: 10
        hot_standby: on
        wal_log_hints: on
        shared_preload_libraries: pg_stat_statements,auto_explain
        pg_stat_statements.max: 10000
        pg_stat_statements.track: all
        pg_stat_statements.save: off
        auto_explain.log_min_duration: 10s
        auto_explain.log_analyze: true
        auto_explain.log_buffers: true
        auto_explain.log_timing: false
        auto_explain.log_triggers: true
        auto_explain.log_verbose: true
        auto_explain.log_nested_statements: true
        track_io_timing: on
        log_lock_waits: on
        log_temp_files: 0
        track_activities: on
        track_counts: on
        track_functions: all
        log_checkpoints: on
        logging_collector: on
        log_truncate_on_rotation: on
        log_rotation_age: 1d
        log_rotation_size: 0
        log_line_prefix: '%t [%p-%l] %r %q%u@%d '
        log_filename: 'postgresql-%a.log'
        log_directory: /var/log/postgresql
#      recovery_conf:
#        restore_command: cp ../wal_archive/%f %p

  # some desired options for 'initdb'
  initdb:  # Note: It needs to be a list (some options need values, others are switches)
    - encoding: UTF8
    - locale: en_US.UTF-8
    - data-checksums

  pg_hba:  # Add following lines to pg_hba.conf after running 'initdb'
    - host replication replicator 0.0.0.0/0 md5
    - host all all 0.0.0.0/0 md5

  # Additional script to be launched after initial cluster creation (will be passed the connection URL as parameter)
#  post_init: /usr/local/bin/setup_cluster.sh

  # Some additional users which needs to be created after initializing new cluster
#  users:
#    admin:
#      password: admin-pass
#      options:
#        - createrole
#        - createdb

postgresql:
  listen: 192.168.216.130,127.0.0.1:5432
  connect_address: 192.168.216.130:5432
  use_unix_socket: true
  data_dir: /var/lib/pgsql/11/data
  bin_dir: /usr/pgsql-11/bin
  config_dir: /var/lib/pgsql/11/data
  pgpass: /var/lib/pgsql/.pgpass
  authentication:
    replication:
      username: replicator
      password: replicator-pass
    superuser:
      username: postgres
      password: postgres-pass
#    rewind:  # Has no effect on postgres 10 and lower
#      username: rewind_user
#      password: rewind_password
  parameters:
    unix_socket_directories: /var/run/postgresql
    stats_temp_directory: /var/lib/pgsql_stats_tmp
#  callbacks:
#    on_start:
#    on_stop:
#    on_restart:
#    on_reload:
#    on_role_change:
  create_replica_methods:
#    - pgbackrest
#    - wal_e
    - basebackup
#  pgbackrest:
#    command: /usr/bin/pgbackrest --stanza=<Stanza_Name> --delta restore
#    keep_data: True
#    no_params: True
#  wal_e:
#    command: patroni_wale_restore
#    no_master: 1
#    envdir: /etc/wal_e/envdir
#    use_iam: 1
  basebackup:
    max-rate: '100M'

#watchdog:
#  mode: automatic  # Allowed values: off, automatic, required
#  device: /dev/watchdog
#  safety_margin: 5

tags:
  nofailover: false
  noloadbalance: false
  clonefrom: false
  nosync: false
  # specify a node to replicate from. This can be used to implement a cascading replication.
  # replicatefrom: (node name)
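Note that the configuration above writes server logs to /var/log/postgresql, which is outside the data directory, so that directory has to exist and be writable by postgres on every node before Patroni starts the cluster; something along these lines (paths taken from the config above):
mkdir -p /var/log/postgresql
chown -Rf postgres:postgres /var/log/postgresql
chmod -Rf 700 /var/log/postgresql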
node2: the Patroni configuration file is as follows
[root@localhost postgresql]# cat /etc/patroni/patroni.yml
scope: postgres-cluster
name: pgnode02
namespace: /service/

restapi:
  listen: 192.168.216.132:8008
  connect_address: 192.168.216.132:8008
#  certfile: /etc/ssl/certs/ssl-cert-snakeoil.pem
#  keyfile: /etc/ssl/private/ssl-cert-snakeoil.key
#  authentication:
#    username: username
#    password: password

etcd:
  hosts: 192.168.216.130:2379,192.168.216.132:2379,192.168.216.134:2379

bootstrap:
  # this section will be written into Etcd:/<namespace>/<scope>/config after initializing new cluster
  # and all other cluster members will use it as a `global configuration`
  dcs:
    ttl: 30
    loop_wait: 10
    retry_timeout: 10
    maximum_lag_on_failover: 1048576
    master_start_timeout: 300
    synchronous_mode: false
    synchronous_mode_strict: false
#    standby_cluster:
#      host: 127.0.0.1
#      port: 1111
#      primary_slot_name: patroni
    postgresql:
      use_pg_rewind: true
      use_slots: true
      parameters:
        max_connections: 100
        superuser_reserved_connections: 5
        max_locks_per_transaction: 64
        max_prepared_transactions: 0
        huge_pages: try
        shared_buffers: 512MB
        work_mem: 128MB
        maintenance_work_mem: 256MB
        effective_cache_size: 4GB
        checkpoint_timeout: 15min
        checkpoint_completion_target: 0.9
        min_wal_size: 2GB
        max_wal_size: 4GB
        wal_buffers: 32MB
        default_statistics_target: 1000
        seq_page_cost: 1
        random_page_cost: 4
        effective_io_concurrency: 2
        synchronous_commit: on
        autovacuum: on
        autovacuum_max_workers: 5
        autovacuum_vacuum_scale_factor: 0.01
        autovacuum_analyze_scale_factor: 0.02
        autovacuum_vacuum_cost_limit: 200
        autovacuum_vacuum_cost_delay: 20
        autovacuum_naptime: 1s
        max_files_per_process: 4096
        archive_mode: on
        archive_timeout: 1800s
        archive_command: cd .
        wal_level: replica
        wal_keep_segments: 130
        max_wal_senders: 10
        max_replication_slots: 10
        hot_standby: on
        wal_log_hints: on
        shared_preload_libraries: pg_stat_statements,auto_explain
        pg_stat_statements.max: 10000
        pg_stat_statements.track: all
        pg_stat_statements.save: off
        auto_explain.log_min_duration: 10s
        auto_explain.log_analyze: true
        auto_explain.log_buffers: true
        auto_explain.log_timing: false
        auto_explain.log_triggers: true
        auto_explain.log_verbose: true
        auto_explain.log_nested_statements: true
        track_io_timing: on
        log_lock_waits: on
        log_temp_files: 0
        track_activities: on
        track_counts: on
        track_functions: all
        log_checkpoints: on
        logging_collector: on
        log_truncate_on_rotation: on
        log_rotation_age: 1d
        log_rotation_size: 0
        log_line_prefix: '%t [%p-%l] %r %q%u@%d '
        log_filename: 'postgresql-%a.log'
        log_directory: /var/log/postgresql
#      recovery_conf:
#        restore_command: cp ../wal_archive/%f %p

  # some desired options for 'initdb'
  initdb:  # Note: It needs to be a list (some options need values, others are switches)
    - encoding: UTF8
    - locale: en_US.UTF-8
    - data-checksums

  pg_hba:  # Add following lines to pg_hba.conf after running 'initdb'
    - host replication replicator 0.0.0.0/0 md5
    - host all all 0.0.0.0/0 md5

  # Additional script to be launched after initial cluster creation (will be passed the connection URL as parameter)
#  post_init: /usr/local/bin/setup_cluster.sh

  # Some additional users which needs to be created after initializing new cluster
#  users:
#    admin:
#      password: admin-pass
#      options:
#        - createrole
#        - createdb

postgresql:
  listen: 192.168.216.132,127.0.0.1:5432
  connect_address: 192.168.216.132:5432
  use_unix_socket: true
  data_dir: /var/lib/pgsql/11/data
  bin_dir: /usr/pgsql-11/bin
  config_dir: /var/lib/pgsql/11/data
  pgpass: /var/lib/pgsql/.pgpass
  authentication:
    replication:
      username: replicator
      password: replicator-pass
    superuser:
      username: postgres
      password: postgres-pass
#    rewind:  # Has no effect on postgres 10 and lower
#      username: rewind_user
#      password: rewind_password
  parameters:
    unix_socket_directories: /var/run/postgresql
    stats_temp_directory: /var/lib/pgsql_stats_tmp
#  callbacks:
#    on_start:
#    on_stop:
#    on_restart:
#    on_reload:
#    on_role_change:
  create_replica_methods:
#    - pgbackrest
#    - wal_e
    - basebackup
#  pgbackrest:
#    command: /usr/bin/pgbackrest --stanza=<Stanza_Name> --delta restore
#    keep_data: True
#    no_params: True
#  wal_e:
#    command: patroni_wale_restore
#    no_master: 1
#    envdir: /etc/wal_e/envdir
#    use_iam: 1
  basebackup:
    max-rate: '100M'

#watchdog:
#  mode: automatic  # Allowed values: off, automatic, required
#  device: /dev/watchdog
#  safety_margin: 5

tags:
  nofailover: false
  noloadbalance: false
  clonefrom: false
  nosync: false
  # specify a node to replicate from. This can be used to implement a cascading replication.
  # replicatefrom: (node name)
node3: the Patroni configuration file is as follows
[root@localhost tmp]# cat /etc/patroni/patroni.yml
scope: postgres-cluster
name: pgnode03
namespace: /service/

restapi:
  listen: 192.168.216.134:8008
  connect_address: 192.168.216.134:8008
#  certfile: /etc/ssl/certs/ssl-cert-snakeoil.pem
#  keyfile: /etc/ssl/private/ssl-cert-snakeoil.key
#  authentication:
#    username: username
#    password: password

etcd:
  hosts: 192.168.216.130:2379,192.168.216.132:2379,192.168.216.134:2379

bootstrap:
  # this section will be written into Etcd:/<namespace>/<scope>/config after initializing new cluster
  # and all other cluster members will use it as a `global configuration`
  dcs:
    ttl: 30
    loop_wait: 10
    retry_timeout: 10
    maximum_lag_on_failover: 1048576
    master_start_timeout: 300
    synchronous_mode: false
    synchronous_mode_strict: false
#    standby_cluster:
#      host: 127.0.0.1
#      port: 1111
#      primary_slot_name: patroni
    postgresql:
      use_pg_rewind: true
      use_slots: true
      parameters:
        max_connections: 100
        superuser_reserved_connections: 5
        max_locks_per_transaction: 64
        max_prepared_transactions: 0
        huge_pages: try
        shared_buffers: 512MB
        work_mem: 128MB
        maintenance_work_mem: 256MB
        effective_cache_size: 4GB
        checkpoint_timeout: 15min
        checkpoint_completion_target: 0.9
        min_wal_size: 2GB
        max_wal_size: 4GB
        wal_buffers: 32MB
        default_statistics_target: 1000
        seq_page_cost: 1
        random_page_cost: 4
        effective_io_concurrency: 2
        synchronous_commit: on
        autovacuum: on
        autovacuum_max_workers: 5
        autovacuum_vacuum_scale_factor: 0.01
        autovacuum_analyze_scale_factor: 0.02
        autovacuum_vacuum_cost_limit: 200
        autovacuum_vacuum_cost_delay: 20
        autovacuum_naptime: 1s
        max_files_per_process: 4096
        archive_mode: on
        archive_timeout: 1800s
        archive_command: cd .
        wal_level: replica
        wal_keep_segments: 130
        max_wal_senders: 10
        max_replication_slots: 10
        hot_standby: on
        wal_log_hints: on
        shared_preload_libraries: pg_stat_statements,auto_explain
        pg_stat_statements.max: 10000
        pg_stat_statements.track: all
        pg_stat_statements.save: off
        auto_explain.log_min_duration: 10s
        auto_explain.log_analyze: true
        auto_explain.log_buffers: true
        auto_explain.log_timing: false
        auto_explain.log_triggers: true
        auto_explain.log_verbose: true
        auto_explain.log_nested_statements: true
        track_io_timing: on
        log_lock_waits: on
        log_temp_files: 0
        track_activities: on
        track_counts: on
        track_functions: all
        log_checkpoints: on
        logging_collector: on
        log_truncate_on_rotation: on
        log_rotation_age: 1d
        log_rotation_size: 0
        log_line_prefix: '%t [%p-%l] %r %q%u@%d '
        log_filename: 'postgresql-%a.log'
        log_directory: /var/log/postgresql
#      recovery_conf:
#        restore_command: cp ../wal_archive/%f %p

  # some desired options for 'initdb'
  initdb:  # Note: It needs to be a list (some options need values, others are switches)
    - encoding: UTF8
    - locale: en_US.UTF-8
    - data-checksums

  pg_hba:  # Add following lines to pg_hba.conf after running 'initdb'
    - host replication replicator 0.0.0.0/0 md5
    - host all all 0.0.0.0/0 md5

  # Additional script to be launched after initial cluster creation (will be passed the connection URL as parameter)
#  post_init: /usr/local/bin/setup_cluster.sh

  # Some additional users which needs to be created after initializing new cluster
#  users:
#    admin:
#      password: admin-pass
#      options:
#        - createrole
#        - createdb

postgresql:
  listen: 192.168.216.134,127.0.0.1:5432
  connect_address: 192.168.216.134:5432
  use_unix_socket: true
  data_dir: /var/lib/pgsql/11/data
  bin_dir: /usr/pgsql-11/bin
  config_dir: /var/lib/pgsql/11/data
  pgpass: /var/lib/pgsql/.pgpass
  authentication:
    replication:
      username: replicator
      password: replicator-pass
    superuser:
      username: postgres
      password: postgres-pass
#    rewind:  # Has no effect on postgres 10 and lower
#      username: rewind_user
#      password: rewind_password
  parameters:
    unix_socket_directories: /var/run/postgresql
    stats_temp_directory: /var/lib/pgsql_stats_tmp
#  callbacks:
#    on_start:
#    on_stop:
#    on_restart:
#    on_reload:
#    on_role_change:
  create_replica_methods:
#    - pgbackrest
#    - wal_e
    - basebackup
#  pgbackrest:
#    command: /usr/bin/pgbackrest --stanza=<Stanza_Name> --delta restore
#    keep_data: True
#    no_params: True
#  wal_e:
#    command: patroni_wale_restore
#    no_master: 1
#    envdir: /etc/wal_e/envdir
#    use_iam: 1
  basebackup:
    max-rate: '100M'

#watchdog:
#  mode: automatic  # Allowed values: off, automatic, required
#  device: /dev/watchdog
#  safety_margin: 5

tags:
  nofailover: false
  noloadbalance: false
  clonefrom: false
  nosync: false
  # specify a node to replicate from. This can be used to implement a cascading replication.
  # replicatefrom: (node name)
5. On each of the three nodes, create /etc/systemd/system/patroni.service so that the patroni service can be managed with systemctl
First, confirm where patroni was installed:
which patroni
If the installation path differs from the one referenced in patroni.service, you can create symbolic links.
1. Create the symlinks (or simply edit the patroni path in patroni.service to the actual path, i.e. ExecStart=<actual path of patroni>)
ln -s /usr/bin/patronictl /usr/local/bin/patronictl
ln -s /usr/bin/patroni /usr/local/bin/patroni
2. Create /etc/systemd/system/patroni.service on node1
cat /etc/systemd/system/patroni.service
[Unit]
Description=Runners to orchestrate a high-availability PostgreSQL - patroni
After=syslog.target network.target

[Service]
Type=simple
User=postgres
Group=postgres
# Read in configuration file if it exists, otherwise proceed
EnvironmentFile=-/etc/patroni_env.conf
WorkingDirectory=~
# Where to send early-startup messages from the server
# This is normally controlled by the global default set by systemd
# StandardOutput=syslog
# Pre-commands to start watchdog device
# Uncomment if watchdog is part of your patroni setup
#ExecStartPre=-/usr/bin/sudo /sbin/modprobe softdog
#ExecStartPre=-/usr/bin/sudo /bin/chown postgres /dev/watchdog
# Start the patroni process
ExecStart=/usr/local/bin/patroni /etc/patroni/patroni.yml
# Send HUP to reload from patroni.yml
ExecReload=/bin/kill -s HUP $MAINPID
# Only kill the patroni process, not its children, so it will gracefully stop postgres
KillMode=process
# Give a reasonable amount of time for the server to start up/shut down
TimeoutSec=60
# Do not restart the service if it crashes, we want to manually inspect database on failure
Restart=no

[Install]
WantedBy=multi-user.target
3. Create /etc/systemd/system/patroni.service on node2
cat /etc/systemd/system/patroni.service
[Unit]
Description=Runners to orchestrate a high-availability PostgreSQL - patroni
After=syslog.target network.target

[Service]
Type=simple
User=postgres
Group=postgres
# Read in configuration file if it exists, otherwise proceed
EnvironmentFile=-/etc/patroni_env.conf
WorkingDirectory=~
# Where to send early-startup messages from the server
# This is normally controlled by the global default set by systemd
# StandardOutput=syslog
# Pre-commands to start watchdog device
# Uncomment if watchdog is part of your patroni setup
#ExecStartPre=-/usr/bin/sudo /sbin/modprobe softdog
#ExecStartPre=-/usr/bin/sudo /bin/chown postgres /dev/watchdog
# Start the patroni process
ExecStart=/usr/local/bin/patroni /etc/patroni/patroni.yml
# Send HUP to reload from patroni.yml
ExecReload=/bin/kill -s HUP $MAINPID
# Only kill the patroni process, not its children, so it will gracefully stop postgres
KillMode=process
# Give a reasonable amount of time for the server to start up/shut down
TimeoutSec=60
# Do not restart the service if it crashes, we want to manually inspect database on failure
Restart=no

[Install]
WantedBy=multi-user.target
4. Do the same on node3; the unit file can be copied over as-is. Once it is in place on all three nodes, the service can be started and the cluster checked as shown below.
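Roughly, on each node (starting with node1 so that it bootstraps the cluster and becomes the leader):
systemctl daemon-reload
systemctl enable patroni
systemctl start patroni
systemctl status patroni

# On any node, list the cluster members and their roles
patronictl -c /etc/patroni/patroni.yml list

# The Patroni REST API can also be queried directly
curl http://192.168.216.130:8008/patroni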