[Repost] Disabling HAIP: the cluster cannot start when cluster_interconnects is misconfigured
Introduction:
Before Oracle 11.2.0.2, private-network redundancy was generally implemented with OS-level NIC bonding (Bond and the like). Starting with 11.2.0.2, Oracle introduced HAIP (Highly Available IP) to replace OS-level bonding, with stronger functionality and better compatibility. HAIP uses dedicated IP addresses in the 169.254.* link-local range to provide high availability and load balancing for the cluster interconnect. Therefore, when installing RAC on 11.2.0.2 or later, make sure no 169.254.* addresses are already in use. With HAIP there is no longer any need for third-party redundancy technology for the private NICs.
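A quick way to check that precondition is to scan the interface list for 169.254.* addresses before installing GI. A minimal sketch, run here against sample ifconfig-style text (on a real node you would pipe in the output of `ifconfig -a` or `ip -o -4 addr` instead):

```shell
# Hypothetical pre-install check: warn if any 169.254.* address is already
# configured, since HAIP needs that link-local range for itself.
sample_output='eth0 inet 10.0.0.5
eth1 inet 192.168.1.5
usb0 inet 169.254.95.120'   # e.g. an IPMI/usb0 interface squatting on the range

if printf '%s\n' "$sample_output" | grep -q 'inet 169\.254\.'; then
    echo "169.254.* already in use - fix before installing GI"
else
    echo "169.254.* range is free"
fi
```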
The resource ora.cluster_interconnect.haip starts one to four local HAIP addresses attached to the private network adapters; Oracle RAC and ASM internal communication then runs over HAIP. If a private NIC fails physically, the HAIP address on that NIC fails over to another available private NIC. Multiple private NICs can be defined during installation, or after the GI configuration is complete via the $GRID_HOME/bin/oifcfg setif tool (for example: oifcfg setif -global eth2/192.168.1.0:cluster_interconnect).
The number of HAIP addresses depends on how many private NICs GI has activated. With one private NIC, GI creates one HAIP; with two, it creates two; with more than two it creates four. GI supports at most four private NICs, and the number of HAIP addresses the cluster actually uses is determined by the number of private NICs active on the node that starts first. If more than four NICs are selected as private networks, the extras beyond four cannot be activated.
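The rule above can be sketched as a tiny helper function (illustrative only; the mapping is the one described in the paragraph, not an Oracle API):

```shell
# HAIP-count rule: 1 NIC -> 1 HAIP, 2 NICs -> 2 HAIPs,
# 3 or more NICs -> 4 HAIPs (GI uses at most 4 private NICs).
haip_count() {
    n=$1
    if [ "$n" -le 0 ]; then echo 0
    elif [ "$n" -le 2 ]; then echo "$n"
    else echo 4
    fi
}

for n in 1 2 3 4 5; do
    echo "$n private NIC(s) -> $(haip_count $n) HAIP(s)"
done
```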
The resource ora.cluster_interconnect.haip is managed by the ohasd.bin process; its logs are $GRID_HOME/log/<nodename>/ohasd/ohasd.log and $GRID_HOME/log/<nodename>/agent/ohasd/orarootagent_root/orarootagent_root.log. Once the HAIP resource is online, ifconfig -a shows an additional virtual interface such as eth0:1 carrying a 169.254.x.x HAIP address. The HAIP addresses can also be seen at the database level through the GV$CLUSTER_INTERCONNECTS view. HAIP addresses are assigned automatically by the system and cannot be set manually.
Oracle database and ASM instances use HAIP for high availability and load balancing of private-network communication. Interconnect traffic is load-balanced across the private NICs; if one NIC fails, its HAIP moves to another available private NIC, so interconnect communication is not interrupted. HAIP is currently not supported on Windows.
In some customer environments the private network is carved out of a VLAN, and network-management policy requires the VLAN IP address to be bound to a specific NIC with a fixed private IP (even though, by Oracle RAC installation requirements, the private network should be an isolated network). In such cases HAIP may fail to be allocated, and the ASM resource that depends on it cannot start. HAIP also has quite a few bugs; if you are unlucky enough to hit one, you can disable the HAIP function. If you use OS-level bonding, or no bonding at all, you can override HAIP by setting the CLUSTER_INTERCONNECTS parameter to the private IP address(es) in both the RDBMS and ASM parameter files (separate multiple addresses with colons). HAIP itself still exists afterwards, but the ASM and RDBMS instances will no longer use it.
The steps below do exactly that: override HAIP by setting CLUSTER_INTERCONNECTS to the private IP address(es) in the RDBMS and ASM parameter files (colon-separated if there are several).
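For example, with two (made-up) private addresses, the colon-separated parameter value would be built like this:

```shell
# Join a list of private IPs into the colon-separated value that
# CLUSTER_INTERCONNECTS expects. The addresses are illustrative.
ips="192.168.1.10 192.168.2.10"
value=$(printf '%s' "$ips" | tr ' ' ':')
echo "alter system set cluster_interconnects='$value' sid='+ASM1' scope=spfile;"
```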
#######sample 0
cluster_interconnects
SQL> select * from v$cluster_interconnects;
NAME            IP_ADDRESS       IS_PUBLIC SOURCE
--------------- ---------------- --------- --------------------------------
en8             169.254.251.241  NO
SQL> show parameter spfile
NAME        TYPE      VALUE
----------- --------- ------------------------------------------------------
spfile      string    +db_OCR/db-cluster/asmparameterfile/registry.253.1006541
[grid@pdbdb02:/home/grid]$ asmcmd find --type ASMPARAMETERFILE +db_OCR "*"
+db_OCR/db-cluster/ASMPARAMETERFILE/REGISTRY.253.1006541113
[grid@pdbdb02:/home/grid]$ asmcmd lsdg
State Type Rebal Sector Block AU Total_MB Free_MB Req_mir_free_MB Usable_file_MB Offline_disks Voting_files Name
MOUNTED HIGH N 512 4096 1048576 5125 3869 2050 606 0 Y db_OCR/
sqlplus / as sysasm
create pfile='/tmp/dba/pfile_asm.ora' from spfile='+db_OCR/db-cluster/asmparameterfile/registry.253.1006541113';
Edit pfile_asm.ora:
processes=170
sessions=180
shared_pool_size = 5G
large_pool_size = 1G
db_cache_size = 1G
sga_max_size=8192M
sqlplus / as sysasm
shutdown abort
startup pfile='/tmp/dba/pfile_asm.ora';
create spfile='+NEW_DATA/spfileASM.ora' from pfile='/tmp/dba/pfile_asm.ora';
##We now see the new ASM spfile alias (spfileASM.ora) and the registry file it points to (REGISTRY.253.1006958153)
asmcmd ls -l +NEW_DATA/spfileASM.ora
Type Redund Striped Time Sys Name
N spfileASM.ora => +NEW_DATA/db-cluster/ASMPARAMETERFILE/REGISTRY.253.1006958153
/db/db/grid/11.2.0/bin/crsctl stop crs
/db/db/grid/11.2.0/bin/crsctl start crs
alter system set cluster_interconnects='190.0.1.201' sid='+ASM1' scope=spfile;
alter system set cluster_interconnects='190.0.1.202' sid='+ASM2' scope=spfile;
/db/db/grid/11.2.0/bin/crsctl stop crs
/db/db/grid/11.2.0/bin/crsctl start crs
SQL> select * from v$cluster_interconnects;
NAME            IP_ADDRESS       IS_PUBLIC SOURCE
--------------- ---------------- --------- --------------------------------
en8             190.0.1.201      NO        cluster_interconnects parameter
###sample1
ASM spfile discovery
So, how can the ASM instance read the spfile on startup, if the spfile is in a disk group that is not mounted yet? Not only that - the ASM doesn't really know which disk group has the spfile, or even if the spfile is in a disk group. And what is the value of the ASM discovery string?
The ASM Administration guide says this on the topic:
When an Oracle ASM instance searches for an initialization parameter file, the search order is:
- The location of the initialization parameter file specified in the Grid Plug and Play (GPnP) profile.
- If the location has not been set in the GPnP profile, then the search order changes to:
- SPFILE in the Oracle ASM instance home (e.g. $ORACLE_HOME/dbs/spfile+ASM.ora)
- PFILE in the Oracle ASM instance home
This does not tell us anything about the ASM discovery string, but at least it tells us about the spfile and the GPnP profile. It turns out the ASM discovery string is also in the GPnP profile. Here are the values from an Exadata environment:
$ gpnptool getpval -p=profile.xml -asm_dis -o-
o/*/*
$ gpnptool getpval -p=profile.xml -asm_spf -o-
+DBFS_DG/spfileASM.ora
There is no GPnP profile in a single-instance setup, so this information is in the ASM resource (ora.asm), stored in the Oracle Local Registry (OLR). Here are the values from a single-instance environment:
$ crsctl stat res ora.asm -p | egrep "ASM_DISKSTRING|SPFILE"
ASM_DISKSTRING=
SPFILE=+DATA/ASM/ASMPARAMETERFILE/registry.253.822856169
So far so good. Now the ASM knows where to look for ASM disks and where the spfile is. But the disk group is not mounted yet, as the ASM instance still hasn't started up, so how can ASM read the spfile?
The trick is in the ASM disk headers. To support the ASM spfile in a disk group, two new fields were added to the ASM disk header:
- kfdhdb.spfile - Allocation unit number of the ASM spfile.
- kfdhdb.spfflg - ASM spfile flag. If this value is 1, the ASM spfile is on this disk in allocation unit kfdhdb.spfile.
As part of the disk discovery process, the ASM instance reads the disk headers and looks for the spfile information. Once it finds the disks that have the spfile, it can read the actual initialization parameters.
Let's have a look at my disk group DATA. First check the disk group state and redundancy:

SQL> select inst_id, state, type from gv$asm_diskgroup where name='DATA';

   INST_ID STATE       TYPE
---------- ----------- ------
         1 MOUNTED     NORMAL
The disk group is mounted and the redundancy is normal. This means the ASM spfile will be mirrored, so we should see two disks with kfdhdb.spfile and kfdhdb.spfflg values set. Let's have a look:
$ for disk in /dev/sdc1 /dev/sdd1 /dev/sde1
> do
> echo $disk
> kfed read $disk | grep spf
> done
/dev/sdc1
kfdhdb.spfile: 46 ; 0x0f4: 0x0000002e
kfdhdb.spfflg: 1 ; 0x0f8: 0x00000001
/dev/sdd1
kfdhdb.spfile: 2212 ; 0x0f4: 0x000008a4
kfdhdb.spfflg: 1 ; 0x0f8: 0x00000001
/dev/sde1
kfdhdb.spfile: 0 ; 0x0f4: 0x00000000
kfdhdb.spfflg: 0 ; 0x0f8: 0x00000000
As we can see, two disks have the ASM spfile.
Let's check the contents of the Allocation Unit 46 on disk /dev/sdc1:
+ASM.__oracle_base='/u01/app/grid'#ORACLE_BASE set from in memory value
+ASM.asm_diskgroups='RECO','ACFS'#Manual Mount
*.asm_power_limit=1
*.large_pool_size=12M
*.remote_login_passwordfile='EXCLUSIVE'
1+0 records in
1+0 records out
1048576 bytes (1.0 MB) copied, 0.0352732 s, 29.7 MB/s
The AU 46 on disk /dev/sdc1 indeed contains the ASM spfile.
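The 1+0 records in/out statistics above suggest the AU was dumped with a single 1 MB dd read and scanned for printable text (e.g. with strings). A self-contained sketch of that approach, using a scratch file in place of /dev/sdc1 (the file name and the planted parameter text are made up for the demo):

```shell
# Stand-in for the real command:
#   dd if=/dev/sdc1 bs=1M skip=46 count=1 | strings
disk=/tmp/fake_asm_disk
au_size=$((1024*1024))   # 1 MB allocation units, as in the example disk group
au=46

truncate -s $((au_size*48)) "$disk"               # empty "disk" of 48 AUs
printf '*.asm_power_limit=1' | \
    dd of="$disk" bs=$au_size seek=$au conv=notrunc 2>/dev/null

# Read back AU 46 and keep the printable text (tr stands in for strings here)
found=$(dd if="$disk" bs=$au_size skip=$au count=1 2>/dev/null | tr -d '\0')
echo "$found"
rm -f "$disk"
```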
ASM spfile alias block
In addition to the new ASM disk header fields, there is a new ASM metadata block type - KFBTYP_ASMSPFALS - that describes the ASM spfile alias. The ASM spfile alias block will be the last block in the ASM spfile.
Let's have a look at the last block of the Allocation Unit 46:
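A note on how such a dump is obtained: with 1 MB allocation units and 4 KB ASM metadata blocks, the last block of an AU is block 255, which matches kfbh.block.blk in the dump below. kfed would then be pointed at that block (the device and AU number are the ones from the example above):

```shell
# Compute the last metadata block of an allocation unit.
au_size=$((1024*1024))   # 1 MB allocation unit
blk_size=4096            # ASM metadata block size
last_blk=$((au_size / blk_size - 1))
echo "last metadata block in the AU: $last_blk"   # -> 255

# The real dump would be something like:
#   kfed read /dev/sdc1 aun=46 blkn=$last_blk
```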
kfbh.endian: 1 ; 0x000: 0x01
kfbh.hard: 130 ; 0x001: 0x82
kfbh.type: 27 ; 0x002: KFBTYP_ASMSPFALS
kfbh.datfmt: 1 ; 0x003: 0x01
kfbh.block.blk: 255 ; 0x004: blk=255
kfbh.block.obj: 253 ; 0x008: file=253
kfbh.check: 806373865 ; 0x00c: 0x301049e9
kfbh.fcn.base: 0 ; 0x010: 0x00000000
kfbh.fcn.wrap: 0 ; 0x014: 0x00000000
kfbh.spare1: 0 ; 0x018: 0x00000000
kfbh.spare2: 0 ; 0x01c: 0x00000000
kfspbals.incarn: 822856169 ; 0x000: 0x310bc9e9
kfspbals.blksz: 512 ; 0x004: 0x00000200
kfspbals.size: 3 ; 0x008: 0x0003
kfspbals.path.len: 0 ; 0x00a: 0x0000
kfspbals.path.buf: ; 0x00c: length=0
There is not much in this metadata block. Most of the entries are block header info (fields kfbh.*). The actual ASM spfile alias data (fields kfspbals.*) has only a few entries. The spfile incarnation (822856169) is part of the file name (REGISTRY.253.822856169), the block size is 512 bytes and the file size is 3 blocks. The path info is empty, meaning I don't actually have an ASM spfile alias.
Let's create one. I will first create a pfile from the existing spfile and then create the spfile alias from that pfile.
SQL> create pfile='/tmp/pfile+ASM.ora' from spfile;
File created.
SQL> shutdown abort;
ASM instance shutdown
SQL> startup pfile='/tmp/pfile+ASM.ora';
ASM instance started
Total System Global Area 1135747072 bytes
Fixed Size 2297344 bytes
Variable Size 1108283904 bytes
ASM Cache 25165824 bytes
ASM diskgroups mounted
SQL> create spfile='+DATA/spfileASM.ora' from pfile='/tmp/pfile+ASM.ora';
File created.
SQL> exit
Looking for the ASM spfile again shows two entries:
+DATA/ASM/ASMPARAMETERFILE/REGISTRY.253.843597139
+DATA/spfileASM.ora
We now see the ASM spfile itself (REGISTRY.253.843597139) and its alias (spfileASM.ora). Having a closer look at spfileASM.ora confirms this is indeed the alias for the registry file:
Type Redund Striped Time Sys Name
ASMPARAMETERFILE MIRROR COARSE MAR 30 20:00:00 N spfileASM.ora => +DATA/ASM/ASMPARAMETERFILE/REGISTRY.253.843597139
Check the ASM spfile alias block now:
kfbh.endian: 1 ; 0x000: 0x01
kfbh.hard: 130 ; 0x001: 0x82
kfbh.type: 27 ; 0x002: KFBTYP_ASMSPFALS
kfbh.datfmt: 1 ; 0x003: 0x01
kfbh.block.blk: 255 ; 0x004: blk=255
kfbh.block.obj: 253 ; 0x008: file=253
kfbh.check: 2065104480 ; 0x00c: 0x7b16fe60
kfbh.fcn.base: 0 ; 0x010: 0x00000000
kfbh.fcn.wrap: 0 ; 0x014: 0x00000000
kfbh.spare1: 0 ; 0x018: 0x00000000
kfbh.spare2: 0 ; 0x01c: 0x00000000
kfspbals.incarn: 843597139 ; 0x000: 0x32484553
kfspbals.blksz: 512 ; 0x004: 0x00000200
kfspbals.size: 3 ; 0x008: 0x0003
kfspbals.path.len: 13 ; 0x00a: 0x000d
kfspbals.path.buf: spfileASM.ora ; 0x00c: length=13
Now we see that the alias file name appears in the ASM spfile alias block. Note the new incarnation number, as this is a new ASM spfile, created from the pfile.
Conclusion
Starting with ASM version 11.2, the ASM spfile can be stored in an ASM disk group. To support this feature, we now have new ASMCMD commands and new ASM metadata structures.
########
http://blog.itpub.net/7590112/viewspace-1410355/
Someone who couldn't tell the VIP from the private IP tossed over this snippet:
-------------------------begin-------------------
Disabling HAIP: change the 169.254.x.x addresses to the two nodes' private IPs, i.e. set the cluster_interconnects parameter of the ASM instance and every database instance on each node to that node's private IP:
SQL> alter system set cluster_interconnects='83.16.193.38' sid='+ASM1' scope=spfile;
SQL> alter system set cluster_interconnects='83.16.193.40' sid='+ASM2' scope=spfile;
SQL> alter system set cluster_interconnects='83.16.193.38' sid='orcl1' scope=spfile;
SQL> alter system set cluster_interconnects='83.16.193.40' sid='orcl2' scope=spfile;
Starting the cluster then fails; the errors complain that the private IPs 83.16.193.138 and 83.16.193.140 cannot be found. After rebooting the machine, ifconfig -a shows the 169.254.70.237 address again:
eth1:1 Link encap:Ethernet HWaddr 18:C5:8A:1A:3A:70
inet addr:169.254.70.237 Bcast:169.254.255.255 Mask:255.255.0.0
UP BROADCAST RUNNING MULTICAST MTU:1500 Metric:1
Interrupt:44
Errors on cluster startup:
CRS-2672: Attempting to start 'ora.asm' on 'aas20150114l2'
CRS-5017: The resource action "ora.asm start" encountered the following error:
ORA-27504: IPC error creating OSD context
ORA-27300: OS system dependent operation:if_not_found failed with status: 0
ORA-27301: OS failure message: Error 0
ORA-27302: failure occurred at: skgxpvaddr9
ORA-27303: additional information: requested interface 83.16.193.138 not found. Check output from ifconfig command
. For details refer to "(:CLSN00107:)" in "/oracle/app/11.2.0/grid/log/aas20150114l1/agent/ohasd/oraagent_grid/oraagent_grid.log".
CRS-2674: Start of 'ora.asm' on 'aas20150114l1' failed
-------------------------end of the tossed-over snippet-----------------------
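The ORA-27303 above means the address given in cluster_interconnects does not exist on any local interface (here .138 was configured while the NIC actually carries .38). A minimal sketch of a pre-flight check, parsing sample `ip -o -4 addr`-style output (the addresses are the ones from this case):

```shell
# Check that the IP intended for cluster_interconnects is actually
# configured on a local interface. Sample text stands in for the
# output of `ip -o -4 addr show` on the real node.
sample='1: lo    inet 127.0.0.1/8
2: eth0  inet 83.16.193.10/24
3: eth1  inet 83.16.193.38/24'

check_ip() {
    printf '%s\n' "$sample" | grep -qF "inet $1/" \
        && echo "$1: found" \
        || echo "$1: NOT FOUND - ASM will fail with ORA-27303"
}

check_ip 83.16.193.38    # what the NIC really has
check_ip 83.16.193.138   # what was mistakenly configured
```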
Troubleshooting:
grid@AAS20150114L1:~> sqlplus '/as sysasm'
SQL*Plus: Release 11.2.0.3.0 Production on Tue Mar 3 14:09:46 2015
Copyright (c) 1982, 2011, Oracle. All rights reserved.
Connected to an idle instance.
SQL> startup
ORA-27504: IPC error creating OSD context
ORA-27300: OS system dependent operation:if_not_found failed with status: 0
ORA-27301: OS failure message: Error 0
ORA-27302: failure occurred at: skgxpvaddr9
ORA-27303: additional information: requested interface 83.16.193.138 not found. Check output from ifconfig command
SQL> !
ASM will not start. Append the correct cluster_interconnects entries to the pfile:
+ASM1.asm_diskgroups='DATDG'#Manual Mount
+ASM2.asm_diskgroups='DATDG'#Manual Mount
*.asm_diskstring='/dev/asmdisk*'
*.asm_power_limit=1
*.diagnostic_dest='/oracle/app/grid'
*.instance_type='asm'
*.large_pool_size=12M
*.memory_max_target=1572864000
*.memory_target=0
*.pga_aggregate_target=524288000
*.remote_login_passwordfile='EXCLUSIVE'
*.sga_target=1048576000
+ASM1.cluster_interconnects='83.5.227.200'
+ASM2.cluster_interconnects='83.5.227.201'
Start with the pfile:
sqlplus '/as sysasm'
SQL> startup pfile='/oracle/app/11.2.0/grid/dbs/init+ASM1.ora';
ASM instance started
Total System Global Area 1603411968 bytes
Fixed Size 2228784 bytes
Variable Size 1567628752 bytes
ASM Cache 33554432 bytes
ASM diskgroups mounted
The instance then came up automatically.
SQL> select name,ip_address from v$cluster_interconnects;
NAME IP_ADDRESS
--------------- ----------------
eth1:1 169.254.204.52
SQL> alter system set cluster_interconnects='83.5.227.200' sid='orcl1' scope=spfile;
System altered.
SQL> shutdown immediate;
Database closed.
Database dismounted.
ORACLE instance shut down.
SQL> startup
ORACLE instance started.
Total System Global Area 2.2556E+11 bytes
Fixed Size 2241744 bytes
Variable Size 1.1489E+11 bytes
Database Buffers 1.1006E+11 bytes
Redo Buffers 604123136 bytes
Database mounted.
Database opened.
Then back in GI (the ASM instance):
SQL> create spfile from pfile;
create spfile from pfile
*
ERROR at line 1:
ORA-17502: ksfdcre:4 Failed to create file
+DGOCR/aas-cluster/asmparameterfile/registry.253.873210539
ORA-15177: cannot operate on system aliases
The directory is probably missing; create one:
ASMCMD> cd +DGOCR/aas-cluster
ASMCMD> mkdir asmparameterfile
sqlplus '/as sysasm'
SQL> create spfile from pfile;
File created.
SQL> shutdown immediate;
ORA-15097: cannot SHUTDOWN ASM instance with connected client (process 17079)
Restarting the whole cluster afterwards works fine:
AAS20150114L1:/ # /oracle/app/11.2.0/grid/bin/crsctl stop cluster -all
CRS-2673: Attempting to stop 'ora.crsd' on 'aas20150114l1'
CRS-2673: Attempting to stop 'ora.crsd' on 'aas20150114l2'
CRS-2790: Starting shutdown of Cluster Ready Services-managed resources on 'aas20150114l1'
CRS-2673: Attempting to stop 'ora.LISTENER.lsnr' on 'aas20150114l1'
CRS-2673: Attempting to stop 'ora.oc4j' on 'aas20150114l1'
CRS-2673: Attempting to stop 'ora.LISTENER_SCAN1.lsnr' on 'aas20150114l1'
CRS-2673: Attempting to stop 'ora.cvu' on 'aas20150114l1'
...
AAS20150114L1:/ # /oracle/app/11.2.0/grid/bin/crsctl start cluster -all
...
CRS-2672: Attempting to start 'ora.crsd' on 'aas20150114l2'
CRS-2676: Start of 'ora.crsd' on 'aas20150114l1' succeeded
CRS-2676: Start of 'ora.crsd' on 'aas20150114l2' succeeded
###sample debug method:
/os/db1/grid/11.2.0/log/sdb1db01/ohasd/ohasd.log
/os/db1/grid/11.2.0/log/sdb1db01/cssd/ocssd.log
/os/db1/app/grid/diag/asm/+asm/+ASM1/trace
/os/db1/grid/11.2.0/log/sdb1db01/agent/ohasd/orarootagent_root/orarootagent_root.log
/os/db1/grid/11.2.0/bin/oifcfg getif
/os/db1/grid/11.2.0/bin/crsctl stop crs -f
/os/db1/grid/11.2.0/bin/crsctl start crs
/os/db1/grid/11.2.0/bin/crsctl stat res -t -init
/os/db1/app/grid/diag/asm/+asm/+ASM2/trace
alter system set cluster_interconnects='190.0.2.31' sid='+ASM1' scope=spfile;
alter system set cluster_interconnects='190.0.2.32' sid='+ASM2' scope=spfile;
/os/db1/grid/11.2.0/log/sdb1db02/
vi /os/db1/grid/11.2.0/log/sdb1db02/agent/ohasd/oraagent_grid/oraagent_grid.log
/os/db1/app/grid/diag/asm/+asm/+ASM2/trace
kfod disks=all
kfod disks=all status=true asm_diskstring='/dev/*_disk*'
/os/db1/grid/11.2.0/log/sdb1db02/agent/ohasd/oraagent_grid/oraagent_grid.log
kfed read /dev/ocr_disk1
kfed read '/dev/vote_disk1'