Steps to Remove a Node from Deepgreen/Greenplum

Neither Greenplum nor Deepgreen officially documents or recommends a way to remove a node, but in practice it can be done. Because of the uncertainty involved, removing a node can easily cause other problems, so take a backup first and proceed with caution. The steps are as follows:

1. Check the current state of the database (12 segment instances)

[gpadmin@sdw1 ~]$ gpstate
20170816:12:53:25:097578 gpstate:sdw1:gpadmin-[INFO]:-Starting gpstate with args:
20170816:12:53:25:097578 gpstate:sdw1:gpadmin-[INFO]:-local Greenplum Version: 'postgres (Greenplum Database) 4.3.99.00 build Deepgreen DB'
20170816:12:53:25:097578 gpstate:sdw1:gpadmin-[INFO]:-master Greenplum Version: 'PostgreSQL 8.2.15 (Greenplum Database 4.3.99.00 build Deepgreen DB) on x86_64-unknown-linux-gnu, compiled by GCC gcc (GCC) 4.9.2 20150212 (Red Hat 4.9.2-6) compiled on Jul 6 2017 03:04:10'
20170816:12:53:25:097578 gpstate:sdw1:gpadmin-[INFO]:-Obtaining Segment details from master...
20170816:12:53:25:097578 gpstate:sdw1:gpadmin-[INFO]:-Gathering data from segments...
..
20170816:12:53:27:097578 gpstate:sdw1:gpadmin-[INFO]:-Greenplum instance status summary
20170816:12:53:27:097578 gpstate:sdw1:gpadmin-[INFO]:-----------------------------------------------------
20170816:12:53:27:097578 gpstate:sdw1:gpadmin-[INFO]:- Master instance = Active
20170816:12:53:27:097578 gpstate:sdw1:gpadmin-[INFO]:- Master standby = No master standby configured
20170816:12:53:27:097578 gpstate:sdw1:gpadmin-[INFO]:- Total segment instance count from metadata = 12
20170816:12:53:27:097578 gpstate:sdw1:gpadmin-[INFO]:-----------------------------------------------------
20170816:12:53:27:097578 gpstate:sdw1:gpadmin-[INFO]:- Primary Segment Status
20170816:12:53:27:097578 gpstate:sdw1:gpadmin-[INFO]:-----------------------------------------------------
20170816:12:53:27:097578 gpstate:sdw1:gpadmin-[INFO]:- Total primary segments = 12
20170816:12:53:27:097578 gpstate:sdw1:gpadmin-[INFO]:- Total primary segment valid (at master) = 12
20170816:12:53:27:097578 gpstate:sdw1:gpadmin-[INFO]:- Total primary segment failures (at master) = 0
20170816:12:53:27:097578 gpstate:sdw1:gpadmin-[INFO]:- Total number of postmaster.pid files missing = 0
20170816:12:53:27:097578 gpstate:sdw1:gpadmin-[INFO]:- Total number of postmaster.pid files found = 12
20170816:12:53:27:097578 gpstate:sdw1:gpadmin-[INFO]:- Total number of postmaster.pid PIDs missing = 0
20170816:12:53:27:097578 gpstate:sdw1:gpadmin-[INFO]:- Total number of postmaster.pid PIDs found = 12
20170816:12:53:27:097578 gpstate:sdw1:gpadmin-[INFO]:- Total number of /tmp lock files missing = 0
20170816:12:53:27:097578 gpstate:sdw1:gpadmin-[INFO]:- Total number of /tmp lock files found = 12
20170816:12:53:27:097578 gpstate:sdw1:gpadmin-[INFO]:- Total number postmaster processes missing = 0
20170816:12:53:27:097578 gpstate:sdw1:gpadmin-[INFO]:- Total number postmaster processes found = 12
20170816:12:53:27:097578 gpstate:sdw1:gpadmin-[INFO]:-----------------------------------------------------
20170816:12:53:27:097578 gpstate:sdw1:gpadmin-[INFO]:- Mirror Segment Status
20170816:12:53:27:097578 gpstate:sdw1:gpadmin-[INFO]:-----------------------------------------------------
20170816:12:53:27:097578 gpstate:sdw1:gpadmin-[INFO]:- Mirrors not configured on this array
20170816:12:53:27:097578 gpstate:sdw1:gpadmin-[INFO]:-----------------------------------------------------

2. Back up the database in parallel

Use the gpcrondump utility to back up the database. The details are not repeated here; consult the documentation if anything is unclear.
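For reference, a typical invocation might look like the sketch below. The database name `mydb` and the backup directory `/backup` are placeholders, and the flags should be double-checked against your gpcrondump version before use.

```shell
# Full parallel backup of database "mydb" (placeholder name), written to
# /backup on each host instead of the default location under the data directories.
# -x: database to dump   -a: do not prompt for confirmation   -u: alternate backup directory
gpcrondump -x mydb -a -u /backup
```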

3. Stop the database

[gpadmin@sdw1 ~]$ gpstop -M fast
20170816:12:54:10:097793 gpstop:sdw1:gpadmin-[INFO]:-Starting gpstop with args: -M fast
20170816:12:54:10:097793 gpstop:sdw1:gpadmin-[INFO]:-Gathering information and validating the environment...
20170816:12:54:10:097793 gpstop:sdw1:gpadmin-[INFO]:-Obtaining Greenplum Master catalog information
20170816:12:54:10:097793 gpstop:sdw1:gpadmin-[INFO]:-Obtaining Segment details from master...
20170816:12:54:11:097793 gpstop:sdw1:gpadmin-[INFO]:-Greenplum Version: 'postgres (Greenplum Database) 4.3.99.00 build Deepgreen DB'
20170816:12:54:11:097793 gpstop:sdw1:gpadmin-[INFO]:---------------------------------------------
20170816:12:54:11:097793 gpstop:sdw1:gpadmin-[INFO]:-Master instance parameters
20170816:12:54:11:097793 gpstop:sdw1:gpadmin-[INFO]:---------------------------------------------
20170816:12:54:11:097793 gpstop:sdw1:gpadmin-[INFO]:- Master Greenplum instance process active PID = 31250
20170816:12:54:11:097793 gpstop:sdw1:gpadmin-[INFO]:- Database = template1
20170816:12:54:11:097793 gpstop:sdw1:gpadmin-[INFO]:- Master port = 5432
20170816:12:54:11:097793 gpstop:sdw1:gpadmin-[INFO]:- Master directory = /hgdata/master/hgdwseg-1
20170816:12:54:11:097793 gpstop:sdw1:gpadmin-[INFO]:- Shutdown mode = fast
20170816:12:54:11:097793 gpstop:sdw1:gpadmin-[INFO]:- Timeout = 120
20170816:12:54:11:097793 gpstop:sdw1:gpadmin-[INFO]:- Shutdown Master standby host = Off
20170816:12:54:11:097793 gpstop:sdw1:gpadmin-[INFO]:---------------------------------------------
20170816:12:54:11:097793 gpstop:sdw1:gpadmin-[INFO]:-Segment instances that will be shutdown:
20170816:12:54:11:097793 gpstop:sdw1:gpadmin-[INFO]:---------------------------------------------
20170816:12:54:11:097793 gpstop:sdw1:gpadmin-[INFO]:- Host Datadir Port Status
20170816:12:54:11:097793 gpstop:sdw1:gpadmin-[INFO]:- sdw1 /hgdata/primary/hgdwseg0 25432 u
20170816:12:54:11:097793 gpstop:sdw1:gpadmin-[INFO]:- sdw1 /hgdata/primary/hgdwseg1 25433 u
20170816:12:54:11:097793 gpstop:sdw1:gpadmin-[INFO]:- sdw1 /hgdata/primary/hgdwseg2 25434 u
20170816:12:54:11:097793 gpstop:sdw1:gpadmin-[INFO]:- sdw1 /hgdata/primary/hgdwseg3 25435 u
20170816:12:54:11:097793 gpstop:sdw1:gpadmin-[INFO]:- sdw1 /hgdata/primary/hgdwseg4 25436 u
20170816:12:54:11:097793 gpstop:sdw1:gpadmin-[INFO]:- sdw1 /hgdata/primary/hgdwseg5 25437 u
20170816:12:54:11:097793 gpstop:sdw1:gpadmin-[INFO]:- sdw1 /hgdata/primary/hgdwseg6 25438 u
20170816:12:54:11:097793 gpstop:sdw1:gpadmin-[INFO]:- sdw1 /hgdata/primary/hgdwseg7 25439 u
20170816:12:54:11:097793 gpstop:sdw1:gpadmin-[INFO]:- sdw1 /hgdata/primary/hgdwseg8 25440 u
20170816:12:54:11:097793 gpstop:sdw1:gpadmin-[INFO]:- sdw1 /hgdata/primary/hgdwseg9 25441 u
20170816:12:54:11:097793 gpstop:sdw1:gpadmin-[INFO]:- sdw1 /hgdata/primary/hgdwseg10 25442 u
20170816:12:54:11:097793 gpstop:sdw1:gpadmin-[INFO]:- sdw1 /hgdata/primary/hgdwseg11 25443 u Continue with Greenplum instance shutdown Yy|Nn (default=N):
> y
20170816:12:54:12:097793 gpstop:sdw1:gpadmin-[INFO]:-There are 0 connections to the database
20170816:12:54:12:097793 gpstop:sdw1:gpadmin-[INFO]:-Commencing Master instance shutdown with mode='fast'
20170816:12:54:12:097793 gpstop:sdw1:gpadmin-[INFO]:-Master host=sdw1
20170816:12:54:12:097793 gpstop:sdw1:gpadmin-[INFO]:-Detected 0 connections to database
20170816:12:54:12:097793 gpstop:sdw1:gpadmin-[INFO]:-Using standard WAIT mode of 120 seconds
20170816:12:54:12:097793 gpstop:sdw1:gpadmin-[INFO]:-Commencing Master instance shutdown with mode=fast
20170816:12:54:12:097793 gpstop:sdw1:gpadmin-[INFO]:-Master segment instance directory=/hgdata/master/hgdwseg-1
20170816:12:54:13:097793 gpstop:sdw1:gpadmin-[INFO]:-Attempting forceful termination of any leftover master process
20170816:12:54:13:097793 gpstop:sdw1:gpadmin-[INFO]:-Terminating processes for segment /hgdata/master/hgdwseg-1
20170816:12:54:13:097793 gpstop:sdw1:gpadmin-[INFO]:-No standby master host configured
20170816:12:54:13:097793 gpstop:sdw1:gpadmin-[INFO]:-Commencing parallel segment instance shutdown, please wait...
20170816:12:54:13:097793 gpstop:sdw1:gpadmin-[INFO]:-0.00% of jobs completed
20170816:12:54:23:097793 gpstop:sdw1:gpadmin-[INFO]:-100.00% of jobs completed
20170816:12:54:23:097793 gpstop:sdw1:gpadmin-[INFO]:-----------------------------------------------------
20170816:12:54:23:097793 gpstop:sdw1:gpadmin-[INFO]:- Segments stopped successfully = 12
20170816:12:54:23:097793 gpstop:sdw1:gpadmin-[INFO]:- Segments with errors during stop = 0
20170816:12:54:23:097793 gpstop:sdw1:gpadmin-[INFO]:-----------------------------------------------------
20170816:12:54:23:097793 gpstop:sdw1:gpadmin-[INFO]:-Successfully shutdown 12 of 12 segment instances
20170816:12:54:23:097793 gpstop:sdw1:gpadmin-[INFO]:-Database successfully shutdown with no errors reported
20170816:12:54:23:097793 gpstop:sdw1:gpadmin-[INFO]:-Cleaning up leftover gpmmon process
20170816:12:54:23:097793 gpstop:sdw1:gpadmin-[INFO]:-No leftover gpmmon process found
20170816:12:54:23:097793 gpstop:sdw1:gpadmin-[INFO]:-Cleaning up leftover gpsmon processes
20170816:12:54:23:097793 gpstop:sdw1:gpadmin-[INFO]:-No leftover gpsmon processes on some hosts. not attempting forceful termination on these hosts
20170816:12:54:23:097793 gpstop:sdw1:gpadmin-[INFO]:-Cleaning up leftover shared memory

4. Start the database in admin (master-only) mode

[gpadmin@sdw1 ~]$ gpstart -m
20170816:12:54:40:098061 gpstart:sdw1:gpadmin-[INFO]:-Starting gpstart with args: -m
20170816:12:54:40:098061 gpstart:sdw1:gpadmin-[INFO]:-Gathering information and validating the environment...
20170816:12:54:40:098061 gpstart:sdw1:gpadmin-[INFO]:-Greenplum Binary Version: 'postgres (Greenplum Database) 4.3.99.00 build Deepgreen DB'
20170816:12:54:40:098061 gpstart:sdw1:gpadmin-[INFO]:-Greenplum Catalog Version: '201310150'
20170816:12:54:40:098061 gpstart:sdw1:gpadmin-[INFO]:-Master-only start requested in configuration without a standby master. Continue with master-only startup Yy|Nn (default=N):
> y
20170816:12:54:41:098061 gpstart:sdw1:gpadmin-[INFO]:-Starting Master instance in admin mode
20170816:12:54:42:098061 gpstart:sdw1:gpadmin-[INFO]:-Obtaining Greenplum Master catalog information
20170816:12:54:42:098061 gpstart:sdw1:gpadmin-[INFO]:-Obtaining Segment details from master...
20170816:12:54:42:098061 gpstart:sdw1:gpadmin-[INFO]:-Setting new master era
20170816:12:54:42:098061 gpstart:sdw1:gpadmin-[INFO]:-Master Started...

5. Connect to the master in utility mode

[gpadmin@sdw1 ~]$ PGOPTIONS="-c gp_session_role=utility" psql -d postgres
psql (8.2.15)
Type "help" for help.

6. Delete the segment from gp_segment_configuration

postgres=# select * from gp_segment_configuration;
dbid | content | role | preferred_role | mode | status | port | hostname | address | replication_port | san_mounts
------+---------+------+----------------+------+--------+-------+----------+---------+------------------+------------
1 | -1 | p | p | s | u | 5432 | sdw1 | sdw1 | |
2 | 0 | p | p | s | u | 25432 | sdw1 | sdw1 | |
3 | 1 | p | p | s | u | 25433 | sdw1 | sdw1 | |
4 | 2 | p | p | s | u | 25434 | sdw1 | sdw1 | |
5 | 3 | p | p | s | u | 25435 | sdw1 | sdw1 | |
6 | 4 | p | p | s | u | 25436 | sdw1 | sdw1 | |
7 | 5 | p | p | s | u | 25437 | sdw1 | sdw1 | |
8 | 6 | p | p | s | u | 25438 | sdw1 | sdw1 | |
9 | 7 | p | p | s | u | 25439 | sdw1 | sdw1 | |
10 | 8 | p | p | s | u | 25440 | sdw1 | sdw1 | |
11 | 9 | p | p | s | u | 25441 | sdw1 | sdw1 | |
12 | 10 | p | p | s | u | 25442 | sdw1 | sdw1 | |
13 | 11 | p | p | s | u | 25443 | sdw1 | sdw1 | |
(13 rows)
postgres=# set allow_system_table_mods='dml';
SET
postgres=# delete from gp_segment_configuration where dbid=13;
DELETE 1
postgres=# select * from gp_segment_configuration;
dbid | content | role | preferred_role | mode | status | port | hostname | address | replication_port | san_mounts
------+---------+------+----------------+------+--------+-------+----------+---------+------------------+------------
1 | -1 | p | p | s | u | 5432 | sdw1 | sdw1 | |
2 | 0 | p | p | s | u | 25432 | sdw1 | sdw1 | |
3 | 1 | p | p | s | u | 25433 | sdw1 | sdw1 | |
4 | 2 | p | p | s | u | 25434 | sdw1 | sdw1 | |
5 | 3 | p | p | s | u | 25435 | sdw1 | sdw1 | |
6 | 4 | p | p | s | u | 25436 | sdw1 | sdw1 | |
7 | 5 | p | p | s | u | 25437 | sdw1 | sdw1 | |
8 | 6 | p | p | s | u | 25438 | sdw1 | sdw1 | |
9 | 7 | p | p | s | u | 25439 | sdw1 | sdw1 | |
10 | 8 | p | p | s | u | 25440 | sdw1 | sdw1 | |
11 | 9 | p | p | s | u | 25441 | sdw1 | sdw1 | |
12 | 10 | p | p | s | u | 25442 | sdw1 | sdw1 | |
(12 rows)

7. Delete the segment's filespace entry

postgres=# select * from pg_filespace_entry;
fsefsoid | fsedbid | fselocation
----------+---------+---------------------------
3052 | 1 | /hgdata/master/hgdwseg-1
3052 | 2 | /hgdata/primary/hgdwseg0
3052 | 3 | /hgdata/primary/hgdwseg1
3052 | 4 | /hgdata/primary/hgdwseg2
3052 | 5 | /hgdata/primary/hgdwseg3
3052 | 6 | /hgdata/primary/hgdwseg4
3052 | 7 | /hgdata/primary/hgdwseg5
3052 | 8 | /hgdata/primary/hgdwseg6
3052 | 9 | /hgdata/primary/hgdwseg7
3052 | 10 | /hgdata/primary/hgdwseg8
3052 | 11 | /hgdata/primary/hgdwseg9
3052 | 12 | /hgdata/primary/hgdwseg10
3052 | 13 | /hgdata/primary/hgdwseg11
(13 rows)
postgres=#  delete from pg_filespace_entry where fsedbid=13;
DELETE 1
postgres=# select * from pg_filespace_entry;
fsefsoid | fsedbid | fselocation
----------+---------+---------------------------
3052 | 1 | /hgdata/master/hgdwseg-1
3052 | 2 | /hgdata/primary/hgdwseg0
3052 | 3 | /hgdata/primary/hgdwseg1
3052 | 4 | /hgdata/primary/hgdwseg2
3052 | 5 | /hgdata/primary/hgdwseg3
3052 | 6 | /hgdata/primary/hgdwseg4
3052 | 7 | /hgdata/primary/hgdwseg5
3052 | 8 | /hgdata/primary/hgdwseg6
3052 | 9 | /hgdata/primary/hgdwseg7
3052 | 10 | /hgdata/primary/hgdwseg8
3052 | 11 | /hgdata/primary/hgdwseg9
3052 | 12 | /hgdata/primary/hgdwseg10
(12 rows)
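Steps 6 and 7 can also be combined into a single utility-mode session. This is a sketch, assuming the master is running in admin mode and that dbid 13 is the segment being removed; verify the dbid against gp_segment_configuration before running it, since deleting the wrong row corrupts the catalog.

```shell
# Remove segment dbid 13 (assumption: the highest dbid is the one being dropped)
# from both system catalogs in one utility-mode psql session.
DBID_TO_REMOVE=13

PGOPTIONS="-c gp_session_role=utility" psql -d postgres <<SQL
set allow_system_table_mods='dml';
delete from gp_segment_configuration where dbid = ${DBID_TO_REMOVE};
delete from pg_filespace_entry where fsedbid = ${DBID_TO_REMOVE};
SQL
```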

8. Exit admin mode and restart the database normally

[gpadmin@sdw1 ~]$ gpstop -m
20170816:12:56:52:098095 gpstop:sdw1:gpadmin-[INFO]:-Starting gpstop with args: -m
20170816:12:56:52:098095 gpstop:sdw1:gpadmin-[INFO]:-Gathering information and validating the environment...
20170816:12:56:52:098095 gpstop:sdw1:gpadmin-[INFO]:-Obtaining Greenplum Master catalog information
20170816:12:56:52:098095 gpstop:sdw1:gpadmin-[INFO]:-Obtaining Segment details from master...
20170816:12:56:52:098095 gpstop:sdw1:gpadmin-[INFO]:-Greenplum Version: 'postgres (Greenplum Database) 4.3.99.00 build Deepgreen DB'
20170816:12:56:52:098095 gpstop:sdw1:gpadmin-[INFO]:-There are 0 connections to the database
20170816:12:56:52:098095 gpstop:sdw1:gpadmin-[INFO]:-Commencing Master instance shutdown with mode='smart'
20170816:12:56:52:098095 gpstop:sdw1:gpadmin-[INFO]:-Master host=sdw1
20170816:12:56:52:098095 gpstop:sdw1:gpadmin-[INFO]:-Commencing Master instance shutdown with mode=smart
20170816:12:56:52:098095 gpstop:sdw1:gpadmin-[INFO]:-Master segment instance directory=/hgdata/master/hgdwseg-1
20170816:12:56:53:098095 gpstop:sdw1:gpadmin-[INFO]:-Attempting forceful termination of any leftover master process
20170816:12:56:53:098095 gpstop:sdw1:gpadmin-[INFO]:-Terminating processes for segment /hgdata/master/hgdwseg-1
[gpadmin@sdw1 ~]$ gpstart
20170816:12:57:02:098112 gpstart:sdw1:gpadmin-[INFO]:-Starting gpstart with args:
20170816:12:57:02:098112 gpstart:sdw1:gpadmin-[INFO]:-Gathering information and validating the environment...
20170816:12:57:02:098112 gpstart:sdw1:gpadmin-[INFO]:-Greenplum Binary Version: 'postgres (Greenplum Database) 4.3.99.00 build Deepgreen DB'
20170816:12:57:02:098112 gpstart:sdw1:gpadmin-[INFO]:-Greenplum Catalog Version: '201310150'
20170816:12:57:02:098112 gpstart:sdw1:gpadmin-[INFO]:-Starting Master instance in admin mode
20170816:12:57:03:098112 gpstart:sdw1:gpadmin-[INFO]:-Obtaining Greenplum Master catalog information
20170816:12:57:03:098112 gpstart:sdw1:gpadmin-[INFO]:-Obtaining Segment details from master...
20170816:12:57:03:098112 gpstart:sdw1:gpadmin-[INFO]:-Setting new master era
20170816:12:57:03:098112 gpstart:sdw1:gpadmin-[INFO]:-Master Started...
20170816:12:57:03:098112 gpstart:sdw1:gpadmin-[INFO]:-Shutting down master
20170816:12:57:05:098112 gpstart:sdw1:gpadmin-[INFO]:---------------------------
20170816:12:57:05:098112 gpstart:sdw1:gpadmin-[INFO]:-Master instance parameters
20170816:12:57:05:098112 gpstart:sdw1:gpadmin-[INFO]:---------------------------
20170816:12:57:05:098112 gpstart:sdw1:gpadmin-[INFO]:-Database = template1
20170816:12:57:05:098112 gpstart:sdw1:gpadmin-[INFO]:-Master Port = 5432
20170816:12:57:05:098112 gpstart:sdw1:gpadmin-[INFO]:-Master directory = /hgdata/master/hgdwseg-1
20170816:12:57:05:098112 gpstart:sdw1:gpadmin-[INFO]:-Timeout = 600 seconds
20170816:12:57:05:098112 gpstart:sdw1:gpadmin-[INFO]:-Master standby = Off
20170816:12:57:05:098112 gpstart:sdw1:gpadmin-[INFO]:---------------------------------------
20170816:12:57:05:098112 gpstart:sdw1:gpadmin-[INFO]:-Segment instances that will be started
20170816:12:57:05:098112 gpstart:sdw1:gpadmin-[INFO]:---------------------------------------
20170816:12:57:05:098112 gpstart:sdw1:gpadmin-[INFO]:- Host Datadir Port
20170816:12:57:05:098112 gpstart:sdw1:gpadmin-[INFO]:- sdw1 /hgdata/primary/hgdwseg0 25432
20170816:12:57:05:098112 gpstart:sdw1:gpadmin-[INFO]:- sdw1 /hgdata/primary/hgdwseg1 25433
20170816:12:57:05:098112 gpstart:sdw1:gpadmin-[INFO]:- sdw1 /hgdata/primary/hgdwseg2 25434
20170816:12:57:05:098112 gpstart:sdw1:gpadmin-[INFO]:- sdw1 /hgdata/primary/hgdwseg3 25435
20170816:12:57:05:098112 gpstart:sdw1:gpadmin-[INFO]:- sdw1 /hgdata/primary/hgdwseg4 25436
20170816:12:57:05:098112 gpstart:sdw1:gpadmin-[INFO]:- sdw1 /hgdata/primary/hgdwseg5 25437
20170816:12:57:05:098112 gpstart:sdw1:gpadmin-[INFO]:- sdw1 /hgdata/primary/hgdwseg6 25438
20170816:12:57:05:098112 gpstart:sdw1:gpadmin-[INFO]:- sdw1 /hgdata/primary/hgdwseg7 25439
20170816:12:57:05:098112 gpstart:sdw1:gpadmin-[INFO]:- sdw1 /hgdata/primary/hgdwseg8 25440
20170816:12:57:05:098112 gpstart:sdw1:gpadmin-[INFO]:- sdw1 /hgdata/primary/hgdwseg9 25441
20170816:12:57:05:098112 gpstart:sdw1:gpadmin-[INFO]:- sdw1 /hgdata/primary/hgdwseg10 25442 Continue with Greenplum instance startup Yy|Nn (default=N):
> y
20170816:12:57:07:098112 gpstart:sdw1:gpadmin-[INFO]:-Commencing parallel segment instance startup, please wait...
.......
20170816:12:57:14:098112 gpstart:sdw1:gpadmin-[INFO]:-Process results...
20170816:12:57:14:098112 gpstart:sdw1:gpadmin-[INFO]:-----------------------------------------------------
20170816:12:57:14:098112 gpstart:sdw1:gpadmin-[INFO]:- Successful segment starts = 11
20170816:12:57:14:098112 gpstart:sdw1:gpadmin-[INFO]:- Failed segment starts = 0
20170816:12:57:14:098112 gpstart:sdw1:gpadmin-[INFO]:- Skipped segment starts (segments are marked down in configuration) = 0
20170816:12:57:14:098112 gpstart:sdw1:gpadmin-[INFO]:-----------------------------------------------------
20170816:12:57:14:098112 gpstart:sdw1:gpadmin-[INFO]:-
20170816:12:57:14:098112 gpstart:sdw1:gpadmin-[INFO]:-Successfully started 11 of 11 segment instances
20170816:12:57:14:098112 gpstart:sdw1:gpadmin-[INFO]:-----------------------------------------------------
20170816:12:57:14:098112 gpstart:sdw1:gpadmin-[INFO]:-Starting Master instance sdw1 directory /hgdata/master/hgdwseg-1
20170816:12:57:15:098112 gpstart:sdw1:gpadmin-[INFO]:-Command pg_ctl reports Master sdw1 instance active
20170816:12:57:15:098112 gpstart:sdw1:gpadmin-[INFO]:-No standby master configured. skipping...
20170816:12:57:15:098112 gpstart:sdw1:gpadmin-[INFO]:-Database successfully started

9. Restore the removed node's backup into the current database with psql

psql -d postgres -f xxxx.sql  # the restore process itself is not detailed here
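If the backup was taken with gpcrondump, gpdbrestore is the matching restore utility; alternatively a plain SQL dump can be replayed with psql, as above. A sketch, with the timestamp key and file name as placeholders:

```shell
# Option 1: restore a gpcrondump backup by its timestamp key (placeholder value),
# reading from the alternate backup directory used at dump time; -a skips the prompt.
gpdbrestore -t 20170816120000 -u /backup -a

# Option 2: replay a plain SQL dump (placeholder file name).
psql -d postgres -f backup_of_removed_node.sql
```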

Notes:

1) This article restores only the data that resided on the removed node.

2) Running this procedure in reverse will add the removed node back, but restoring its data is time-consuming, roughly on par with rebuilding the database from a full restore.

Reposted from: https://www.sypopo.com/post/M95Rm39Or7/
